\section*{Acknowledgment} \label{section:ack} \vspace{-0.1cm} Mazuera-Rozo and Bavota gratefully acknowledge the financial support of the Swiss National Science Foundation for the CCQR project (SNF Project No. 175513). Escobar-Vel\'asquez and Linares-V\'asquez were partially funded by a Google Latin America Research Award 2018-2021. Escobar-Vel\'asquez was supported by an ESKAS scholarship, No. 2020.0820. Trubiani was partially supported by the MIUR PRIN project 2017TWRCNB SEDUCE. \section{Conclusions} \label{sec:conclusion} \vspace{-0.15cm} We presented the first available taxonomy of security weaknesses in Android apps that covers both Java- and Kotlin-related code. Our taxonomy features 80 types of software security weaknesses and is the result of both a mining-based study in which we manually inspected 681\xspace commits fixing security weaknesses (contributing 74 types of security weaknesses) and a survey performed with 43\xspace developers (contributing six additional types). \textRevision{The discussion of our results led to the identification of several lessons learned for both practitioners (see the \faCodeFork~icon in Section 3) and researchers (\faFlask~icon).} Our future work will be mostly driven by the findings discussed in \secref{sec:results}. In particular, we plan to focus on the definition of techniques able to detect (and possibly automatically fix) security weaknesses that are (i) not currently supported by existing detection tools, (ii) frequent in real Android apps, and (iii) relevant for software developers. In addition, we are interested in investigating the portability of methodologies and tools detecting Java-based weaknesses to Kotlin-based code, to understand which changes are needed to enable interoperability between the two languages. Our study provides the foundations for such a research agenda.
\section{Study Design} \label{sec:design} \vspace{-0.2cm} The {\em goal} of the study is to investigate software security weaknesses affecting Java and Kotlin Android apps. The {\em context} consists of (i) 681\xspace commits performed by developers of Android apps to fix software security weaknesses, and (ii) the answers to a survey conducted with 43\xspace Android developers to investigate the software security weaknesses they face and how they deal with their identification and fixing. \noindent Our study addresses the following research question: \vspace{-0.1cm} \begin{quote} \textbf{RQ$_1$:}\emph{ \rqone}\smallskip \end{quote} \vspace{-0.3cm} To answer RQ$_1$, we combine two orthogonal analyses. We start by manually analyzing a set of 681\xspace commits fixing security weaknesses performed in 315\xspace Java and Kotlin open-source Android apps, with the goal of defining a taxonomy of the software security weaknesses faced by Android developers. We analyze apps written both in Java and in Kotlin, presenting the differences (if any) in the distribution of security issues across the two languages. Then, we run a survey with 43\xspace Android developers. The survey has a dual goal. First, we ``validate'' the taxonomy defined in the first step by asking developers which security weaknesses they address most often. This allows us to assess the comprehensiveness of our taxonomy and to complement it with new categories of security weaknesses if needed. Second, we collect additional data on how developers perceive security weaknesses in Android apps. \vspace{-0.2cm} \subsection{Manual Analysis of Commits} \label{sub:manualDesign} We present the procedure used to collect the data needed for our study (\emph{i.e.,}\xspace the manually validated commits fixing security weaknesses) and the process followed to derive our taxonomy.
\vspace{-0.2cm} \subsubsection{Data Collection} As previously explained, Java has historically been the official programming language for creating Android apps. However, in 2019, Google announced Kotlin as its official and preferred language for native Android apps.\footnote{\url{https://tcrn.ch/363AyBv}} Thus, when selecting the mobile apps to study, we made sure to have a mix of Java and Kotlin apps by (i) merging different datasets available in the literature, and (ii) mining a dataset we created for this study. Note that we must have access to the repositories of all considered apps, since we later mine their commits. With these considerations in mind, we adopted the following three datasets. \smallskip \textbf{Geiger \emph{et~al.}\xspace\cite{pascarella2018osprojects}.} This dataset is composed of 8,431 real-world open-source Android apps. It combines source code and commit history information from GitHub with metadata from the Google Play store. We processed the dataset to exclude apps that are no longer available on GitHub, leading to 7,862 currently usable apps (all available both on GitHub and on the Google Play store). \textbf{Coppola \emph{et~al.}\xspace\cite{coppola2019migrationkotlin}.} The authors of this dataset mined all projects hosted on F-Droid\footnote{\url{https://f-droid.org}}, a repository for free and open-source Android apps. This dataset is interesting because Coppola \emph{et~al.}\xspace reported that 19\% of the 1,232 mined apps feature Kotlin code. We excluded apps that are no longer available on GitHub and, for consistency with the previous dataset, also those not published in the Google Play store. This resulted in 472 projects.
\textbf{GitHub Archive.} Since the two previous datasets exhibit a prevalence of Java apps (also because they were built before Google's announcement pushing Android apps towards Kotlin), we ran a query on GH Archive\footnote{\url{https://www.gharchive.org}} using Google BigQuery, with the goal of identifying repositories having Kotlin as their primary language. The query is available in our online appendix \cite{replication}. We ran the query on March 1st, 2020, obtaining a list of 3,967 repositories. We sorted these projects by number of stars (in descending order) and selected the top 5\% (\emph{i.e.,}\xspace 200 repositories) for manual analysis. In particular, we checked whether the 200 repositories were real-world Android apps available on the Google Play store. From this screening, we obtained a list of 22 Kotlin apps to consider in our dataset. \smallskip We aggregated these three datasets and removed duplicates, obtaining a final list of 8,157 open-source Android apps. The list is available in our replication package \cite{replication}. We cloned all 8,157 repositories and ran on them a customized version of {\tt git-vuln-finder} \cite{gitvulnfinder}, a Python application aimed at finding commits likely to fix a security weakness. The search is based on a set of regular expressions applied to the commit message \cite{zhou2017commits}. While most of the used regular expressions are applicable in the context of mobile apps, the work by Zhou and Sharma \cite{zhou2017commits} focuses on web applications. Thus, we modified their tool by complementing the list of regular expressions with others we defined by looking at the security weaknesses relevant to mobile apps present in the Common Weakness Enumeration (CWE\footnote{\url{https://cwe.mitre.org}}) version 4.0, a community-developed list of common software and hardware security weaknesses.
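The commit-selection step boils down to matching commit messages against security-related regular expressions, extended with mobile/CWE-oriented patterns. A minimal sketch of this filtering, with purely illustrative patterns (the actual expressions used in the study are in the replication package):

```python
import re

# Illustrative subset of security-fix patterns; the real list combines the
# expressions by Zhou and Sharma with mobile-oriented additions.
SECURITY_PATTERNS = [
    re.compile(r"\b(vulnerab\w+|security\s+(fix|issue|bug|flaw))\b", re.IGNORECASE),
    re.compile(r"\bfix(es|ed)?\b.*\b(xss|sql\s*injection|buffer\s*overflow)\b", re.IGNORECASE),
    # Explicit mention of a CWE identifier also makes a commit relevant:
    re.compile(r"\bCWE-\d+\b", re.IGNORECASE),
]

def is_candidate(commit_message: str) -> bool:
    """Return True if the commit message matches any security-related pattern."""
    return any(p.search(commit_message) for p in SECURITY_PATTERNS)
```

A commit flagged by this filter is only a \emph{candidate}: as the study shows, a substantial fraction of matches are false positives that the manual analysis later discards.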
Also, we considered a commit as relevant for our study if it explicitly mentions the name or ID of any weakness present in the CWE dictionary. The adopted regular expressions are publicly available \cite{replication}. After running {\tt git-vuln-finder} on the 8,157 projects, we identified a set of candidate commits, from which we removed duplicates due to (i) commits mined from both the master branch and other branches merged into master, and (ii) forked repositories. Also, we decided to keep in our dataset only commits in which the developers modified a single Java or Kotlin file (as identified by its extension). The rationale behind this decision is two-fold. First, if a developer mentions in the commit note that she is fixing a security weakness and only one file is modified in the commit, we can be sure that the fix happened in that file. Second, since we aim at classifying the type of security weakness involved in each commit, understanding a fix spanning many files can be quite challenging and may lead to misclassifications. This cleaning process resulted in a final list of 4,781\xspace candidate commits. \vspace{-0.2cm} \subsubsection{Open Coding} Given the 4,781\xspace commits collected in the previous step, we manually analyzed 681\xspace of them with the goal of describing, using a label, the type of security weakness fixed in each commit. The number of inspected commits ensures a margin of error of $\pm5\%$ with a confidence level of 99\%. We did not use random sampling to select the commits to manually inspect. Indeed, among the 4,781\xspace candidate commits, 4,391 impact a Java file and 390 modify a Kotlin file. Since we aim at comparing the types of security weaknesses affecting these two main languages used to develop native Android apps, we decided to analyze the same number of Java- and Kotlin-related commits.
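The reported confidence interval can be double-checked with the standard formula for estimating a proportion in a finite population (worst-case $p=0.5$, $z\approx2.576$ for 99\% confidence); a small sketch:

```python
import math

def margin_of_error(n, N, z=2.576, p=0.5):
    """Margin of error for a sample of n items drawn from a finite
    population of N, at the confidence level implied by z
    (z = 2.576 -> 99%), using the worst-case proportion p = 0.5."""
    fpc = (N - n) / (N - 1)  # finite population correction
    return z * math.sqrt(p * (1 - p) / n * fpc)

# 681 inspected commits out of 4,781 candidates:
e = margin_of_error(681, 4781)  # ~0.0457, i.e. within the stated +/-5%
```

With 681 of 4,781 commits, the margin comes out at roughly $\pm4.6\%$, consistent with the $\pm5\%$ bound stated above.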
We targeted the inclusion of 200 valid commits per language (\emph{i.e.,}\xspace excluding commits labeled as false positives, since they are not related to the fixing of a security weakness). The choice of 200 was tailored to the number of commits available for Kotlin, since we expected to find a substantial number of false positives as a result of the regular expressions used to select the commits. By applying the process described in the following, we analyzed 360 Java-related commits (200 valid + 160 false positives) and 321 Kotlin-related commits (200 valid + 121 false positives). Five authors took part in the labeling process, which was supported by a web application. Each author independently labeled the commits randomly assigned to her/him by the web application, defining a ``label'' describing the security weakness fixed in each commit. To define such a label, the authors manually inspected the diff of the commit and the message accompanying it. As a guideline for the label definition, the authors used the CWE 4.0 list, reusing as much as possible the security weaknesses in CWE and defining new labels only when needed. Moreover, the web application showed the list of labels created so far, allowing the author to select one of the already defined labels. Since the number of possible labels (\emph{i.e.,}\xspace types of security weaknesses) is extremely high, such a choice helps keep the naming consistent without introducing a substantial bias. In case a commit was not related to a security weakness fix, a \emph{false positive} label was assigned, discarding the commit from the study. Each commit was assigned to two authors and, in cases of disagreement between them, the commit was assigned to a third author. Conflicts arose for 344\xspace commits ($\sim$50\% of 681\xspace).
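The labeling workload implied by this two-annotator scheme with third-rater conflict resolution can be tallied with a quick back-of-the-envelope check (counts taken from the study):

```python
# Label tally for the two-annotator scheme with a third rater on conflicts.
valid_commits = 200 + 200     # 200 Java + 200 Kotlin valid commits
false_positives = 160 + 121   # discarded Java + Kotlin false positives
conflicts = 344               # commits requiring a third, tie-breaking label

inspected = valid_commits + false_positives  # commits manually analyzed: 681
total_labels = 2 * inspected + conflicts     # two labels each + tie-breaks
```

The tally confirms that 681 commits were inspected and 1,706 labels were assigned in total.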
While such a number may look high, note that we also considered as conflicts cases in which the authors used two slightly different labels to express the same concept (\emph{e.g.,}\xspace CWE-703: improper check or handling of exceptional conditions \emph{vs} CWE-754: improper check for unusual or exceptional conditions). A total of 1,706\xspace labels was required to reach our target of assessing and characterizing 200 valid commits per programming language: two labels for each of the 400 valid commits (800), two labels for each of the 281 false positives we discarded (562), and one more label for each of the 344\xspace solved conflicts (344\xspace). As an outcome, we present a taxonomy of the software security weaknesses identified in the manual analysis, and we complement our discussion with qualitative examples. \begin{table}[ht] \scriptsize \centering \caption{Structure of the survey used in our study\vspace{-0.3cm}} \label{tab:survey} \resizebox{\linewidth}{!}{ \rowcolors{2}{gray!15}{white} \begin{tabular}{p{6.5cm}} \toprule \textbf{BACKGROUND QUESTIONS} \\\midrule $Q_1$: In which country do you live?\\ $Q_2$: What is your current job position?\\ $Q_3$: How many years of programming experience do you have?\\ $Q_4$: How many years of programming experience do you have concerning native Android apps? Please specify overall/Java/Kotlin/Dart.\\ $Q_5$: How many years of programming experience do you have concerning the testing of native Android apps?\\\midrule \textbf{EXPERIENCE WITH SOFTWARE SECURITY WEAKNESSES AND THEIR PERCEPTION} \\\midrule $Q_6$: Which factors do you consider to estimate the likelihood of a security weakness to be exploited?\\ $Q_7$: Which factors do you consider to estimate the negative impact of a security weakness in case it is exploited?\\ $Q_8$: Which are the most common security weaknesses that you found?\\ $Q_9$: Which security weaknesses do you consider as the most dangerous?\\ $Q_{10}$: How do you detect security weaknesses?
Do you use any specific tool for this task?\\\bottomrule \end{tabular} } \end{table} \vspace{-0.2cm} \subsection{Survey with Developers} \label{sub:surveyDesign} We designed a survey aimed at investigating the types of security weaknesses that developers find in their apps and their perception of specific aspects of security weaknesses. The survey was designed to last at most 15 minutes, to maximize the completion rate. The survey structure is reported in \tabref{tab:survey}; note that we rephrased some of the questions to shorten them. First, we collected background information about participants ($Q_1$-$Q_5$). If a participant answered ``zero'' to the part of $Q_4$ related to the overall programming experience with native Android apps\footnote{With \emph{native Android apps}, we refer to mobile apps written in one of the official programming languages of Android (\emph{i.e.,}\xspace Java and Kotlin).}, the survey ended, and the participant was excluded from the study. This happened in two cases. Then, $Q_6$-$Q_7$ aimed at collecting information about the developers' perception of security weaknesses. For these questions, we provided a predefined list of possible factors to check, with the possibility of specifying additional factors. For $Q_6$, the predefined list included: \emph{Skill level required to exploit it}, \emph{Motivation to exploit it}, \emph{Chances of a successful exploit}, \emph{Number of agents needed for the exploit\footnote{\textRevision{With ``agents needed for the exploit'' we refer to the number of attackers needed to exploit a security weakness. Indeed, not all security issues can be exposed by a single attacker.}}}, \emph{Ease of discovery}, \emph{Technical difficulty of the exploit}, \emph{How well-known is the weakness}, and \emph{How likely is the exploit to be detected}.
Concerning $Q_7$, the list included: \emph{Confidentiality}, \emph{Integrity}, \emph{Availability}, \emph{Accountability}, \emph{Brand reputation}, \emph{Business profits}, and \emph{Privacy violation}. $Q_8$ and $Q_9$ aimed at validating/complementing the taxonomy defined as the output of the manual study, with $Q_8$ focusing on the most frequent and $Q_9$ on the most dangerous security weaknesses experienced by developers. Both questions required an open answer. Two authors read each answer and assigned the CWE-ID(s) needed to describe the security weaknesses it mentioned. A third author merged these tags and solved the conflicts that arose for 15 answers (18\%). Since a respondent might have given the same answer to $Q_8$ and $Q_9$, duplicates among these answers were removed to avoid counting twice the same security weakness mentioned by the same developer. Finally, $Q_{10}$ asked developers how they detect security weaknesses and whether they are supported by any tool. We used convenience sampling to invite developers from companies we know to participate in our survey. Also, the link to the survey was shared on social media. We collected answers for ten days, with a total of 43\xspace participants from nine countries (\emph{i.e.,}\xspace Argentina, Canada, Colombia, Germany, Hungary, Italy, Macedonia, Poland, and the USA) completing our survey. On average, the participants had $\sim$6 years of overall programming experience and approximately 3 years of Android development experience (see \figref{fig:demographics}); their average testing experience is close to two years. Regarding their job position, 21\% of the participants are B.Sc. students, 7\% M.Sc. students, 4.6\% Ph.D. students, and 67.4\% professional Android developers holding different positions in industry (\emph{e.g.,}\xspace Senior Android developer, Technical leader, Project Management Engineer, Director).
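The per-respondent de-duplication between $Q_8$ and $Q_9$ can be sketched as follows: a weakness mentioned by the same developer in both questions is counted only once. The respondent IDs and CWE IDs below are illustrative, not taken from the actual survey data:

```python
def count_mentions(answers):
    """answers: list of (respondent_id, question_id, set_of_cwe_ids).
    Returns, per CWE, the number of distinct respondents mentioning it,
    counting each respondent at most once across Q8 and Q9."""
    per_respondent = {}
    for respondent, _question, cwe_ids in answers:
        per_respondent.setdefault(respondent, set()).update(cwe_ids)
    counts = {}
    for cwe_ids in per_respondent.values():
        for cwe in cwe_ids:
            counts[cwe] = counts.get(cwe, 0) + 1
    return counts

counts = count_mentions([
    ("dev1", "Q8", {"CWE-89"}),
    ("dev1", "Q9", {"CWE-89", "CWE-311"}),  # CWE-89 counted once for dev1
    ("dev2", "Q8", {"CWE-89"}),
])
```

Here CWE-89 is counted twice (two distinct respondents), not three times, despite three mentions.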
\noindent \begin{figure}[ht] \vspace{-0.4cm} \centering \hspace{-0.7cm}\includegraphics[width=1.07\linewidth]{img/full} \vspace{-0.9cm} \caption{Experience in years of the 43\xspace surveyed participants.} \label{fig:demographics} \vspace{-0.7cm} \end{figure} \subsection{Testing the Generalizability of Our Taxonomy} Once we obtained the final taxonomy, including both the categories defined through the mining-based study and those added through the developers' survey, we assessed its generalizability. We used all 64 Kotlin-related commits we did not manually analyze while building our taxonomy and a sample of 186 Java-related commits (again, among those we did not analyze). Then, we asked two M.Sc. students, both experienced in Android development, not involved in the taxonomy definition, and unaware of its structure, to perform the same manual analysis previously described. Each of them independently evaluated all instances. Conflicts, which arose in 68\% of the cases, were solved through an open discussion between the two students and the first two authors of this work. The final output is a taxonomy of security weaknesses affecting Android apps that we can compare with the taxonomy we defined, to assess its stability. While in principle more Kotlin-related commits would be needed, we labeled all those we found by mining several datasets of Android apps. \vspace{-0.2cm} \subsection{Data Analysis} \label{sub:dataAnalysis} We start by presenting the taxonomy of software security weakness types that is the output of our mining-based study. Then, we discuss how the developers' survey helped in validating/complementing the obtained taxonomy. Finally, we report the results of the generalizability study. The data used in our study are publicly available \cite{replication}.
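The comparison between the students' taxonomy and the original one is described above without a specific metric. As a purely illustrative measure (not the one used in the study), the overlap of two category sets could be quantified with the Jaccard index; the category labels below are hypothetical:

```python
def jaccard(a, b):
    """Jaccard index between two collections of taxonomy categories:
    |intersection| / |union|, in [0, 1]."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical category sets for illustration only:
original = {"CWE-89", "CWE-311", "CWE-798"}
students = {"CWE-89", "CWE-311", "CWE-276"}
similarity = jaccard(original, students)  # 2 shared / 4 total = 0.5
```

A value close to 1 would indicate that the independently derived taxonomy largely reproduces the original categories.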
\section{Introduction} \label{sec:introduction} \vspace{-0.1cm} Mobile apps and devices are nowadays omnipresent in daily life, supporting many crucial tasks (\emph{e.g.,}\xspace banking, social networking, etc.\xspace) involving the manipulation and storage of sensitive and private data. The usage of mobile operating systems has already exceeded that of desktop/laptop operating systems \cite{statcounter2020a,statcounter2020b,google2019report}. As a consequence, mobile apps and devices have become a very attractive target for malicious attacks aimed at stealing private and sensitive information from apps/devices and at exploiting on-device capabilities such as processing, data collection via sensors, and networking. Also, according to the CVE Details portal\footnote{\url{https://www.cvedetails.com/product/19997/Google-Android.html}}, the number of vulnerabilities in the Android operating system has seen a steep growth in recent years, with a total of 2,563 reports in 10 years (2009-2019). As a natural reaction to such a rise in vulnerabilities in the mobile ecosystem, original equipment manufacturers (OEMs), operating system designers (\emph{e.g.,}\xspace Google), researchers, and companies have devoted efforts to improving the security of mobile OSs, devices, and apps. A paramount example is the volume of research focused on detecting vulnerabilities in Android apps (see \emph{e.g.,}\xspace~\cite{arzt2014flowdroid,li2015iccta,sadeghi2017taxonomy,lee2017sealant,singleton2019firebugs,you2016reference, bello2019opia,ren2015hijacking,novak2015covertchannels,gadient2019securitysmells}). The Android OS and devices have also been investigated in previous studies aimed at categorizing their security weaknesses and exploits~(\emph{e.g.,}\xspace \cite{huang2015servershutdown, thomas2015securitymetrics, cao2015inputvalidation,wang2016systemserver, jimenez2016profiling, bagheri2018androidpermissions, meng2018survey, mazuera2019android}).
Even datasets of malicious apps have been built~\cite{allix2016androzoo, Zhou2012Genome}. Still, to the best of our knowledge, there is no comprehensive taxonomy of the security weaknesses exhibited by Android apps. With security weaknesses, we refer to flaws or gaps in a software system that could be exploited to violate its security policy, eventually causing a disruption of the confidentiality, integrity, or availability of the system in question. \textRevision{As compared to desktop applications, Android apps may suffer from specific vulnerability types since they (i) run on a mobile device, thus usually collecting a larger amount of information about the user (e.g., location, video/audio, as well as biometric information); (ii) are built on top of a specific framework and programming model that, as we will show, requires carefully handling specific types of resources and components (e.g., Activities, Intents, Broadcast Receivers, etc.); and (iii) although the Android OS is built on top of the Linux kernel, several modifications have been made to the kernel, and a set of OS-specific layers built on top of it makes programming Android apps different from programming web and desktop apps; even the programming model differs from the iOS one. In this paper, we focus on Android apps written in Java and in Kotlin, the two main programming languages officially supported for the development of Android apps\footnote{\url{https://developer.android.com/kotlin/first}}.} Despite previous individual efforts to analyze, detect, and fix specific sets of security weaknesses, the research community still lacks a body of knowledge characterizing the types of weaknesses affecting Android apps. Also, some of the empirical investigations performed in the past may have become outdated due to the frenetic evolution of the Android ecosystem.
Indeed, the programming models now include the possibility of creating native, hybrid, cross-platform, and mobile web apps for the Android platform. Previous studies on specific security vulnerabilities have focused on analyzing Java Android apps, because of the availability of code bases and APKs in this language. Given the rising interest in Kotlin apps and its status of official Android language, investigating security weaknesses in Kotlin becomes a required avenue for research. While Dart\footnote{\textRevision{Dart is a programming language developed by Google and designed to support the implementation of applications, including mobile apps. \url{https://dart.dev/}}}/Flutter\footnote{\textRevision{Flutter is a software development kit created by Google that is built on top of Dart and can be used to develop cross-platform applications. \url{https://flutter.dev/}}} also represents an interesting target for research, its diffusion is still limited, with $\sim$18k GitHub repositories as compared to $\sim$75k Kotlin repositories (May 2020). In this paper, we present the first empirical study characterizing software security weaknesses in Java and Kotlin Android apps. To this end, we build a taxonomy of security weaknesses by (i) manually analyzing 681\xspace commits in open-source Java/Kotlin Android apps (\emph{i.e.,}\xspace \emph{mining-based study}), and (ii) surveying 43\xspace Android developers to collect their experience with security weaknesses, and in particular with the types they frequently face (\emph{i.e.,}\xspace \emph{survey-based study}). The output of the mining-based study is a multi-level taxonomy featuring a total of 74 categories of security weaknesses. As a result of the developers' survey, we identified 28 types of security weaknesses, of which 22 were already covered in our taxonomy and six more were added. We use the defined taxonomy to discuss interesting directions for future research in the area and lessons learned for practitioners.
Note that, while catalogues of security weaknesses in mobile apps have been previously defined \cite{cwemobile,OWASP}, they are not based on the empirical observation of weaknesses affecting real mobile apps and, as a result, they are less comprehensive than the taxonomy we derive in this work. \section{Data Availability} \vspace{-0.2cm} The data used in our study are publicly available at \cite{replication}. \vspace{-0.5cm} \input{ack} \balance \bibliographystyle{elsarticle-num}
In particular, we plan to focus on the definition of techniques able to detect (and possibly automatically fix) security weaknesses that are (i) not currently supported by existing detection tools, (ii) frequently spread in real Android apps, and (iii) relevant for software developers. Besides, we are interested to investigate the portability of methodologies and tools detecting Java-based weaknesses in Kotlin-based code, to understand which changes are needed to enable interoperability between the two languages. Our study provides the foundations for such a research agenda. \section{Study Design} \label{sec:design} \vspace{-0.2cm} The {\em goal} of the study is to investigate software security weaknesses affecting Java and Kotlin Android apps. The {\em context} consists of (i) 681\xspace commits performed by software developers of Android apps to fix software security weaknesses, and (ii) answers to a survey conducted with 43\xspace Android developers to investigate the software security weaknesses they face and how they deal with their identification and fixing. \noindent Our study addresses the following research question: \vspace{-0.1cm} \begin{quote} \textbf{RQ$_1$:}\emph{ \rqone}\smallskip \end{quote} \vspace{-0.3cm} To answer RQ$_1$, we combine two orthogonal analyses. We start by manually analyzing a set of 681\xspace commits fixing security weaknesses performed in 315\xspace Java and Kotlin open source Android apps with the goal of defining a taxonomy of software security weaknesses faced by Android developers. We analyze both apps written in Java and in Kotlin, by presenting the differences (if any) in the distribution of security issues across the two languages. Then, we run a survey with 43\xspace Android developers. The survey has a dual goal. First, we ``validate'' the taxonomy defined in the first step, by asking developers which security weaknesses they address more often. 
This allows to assess the comprehensiveness of our taxonomy and to complement it with new categories of security weaknesses if needed. Second, we collect additional data reporting how developers perceive security weaknesses in Android apps. \vspace{-0.2cm} \subsection{Manual Analysis of Commits} \label{sub:manualDesign} We present the procedure to collect the data needed for our study (\emph{i.e.,}\xspace commits fixing security weaknesses we manually validated) and the process performed to derive our taxonomy. \vspace{-0.2cm} \subsubsection{Data Collection} As previously explained, Java has been historically the official programming language for creating Android apps. However, in 2019, Google announced that Kotlin is its official and preferred language for native Android apps.\footnote{\url{https://tcrn.ch/363AyBv}} Thus, when selecting the mobile apps to study, we made sure to have a mix of Java and Kotlin apps by (i) merging different datasets available in the literature, and (ii) mining a dataset we created for this study. Keep in mind that for all considered apps we must have access to their repositories, since we later mine their commits. Having in mind previously mentioned considerations, we adopted the three following datasets. \smallskip \textbf{Geiger \emph{et~al.}\xspace\cite{pascarella2018osprojects}} This dataset is composed of 8,431 real-world open-source Android apps. It combines source and commit history information from GitHub with metadata from Google Play store. We processed the dataset to exclude apps that are no longer available on GitHub, leading to 7,862 apps currently usable from this dataset (all available both on GitHub and on the Google Play store). \textbf{Coppola \emph{et~al.}\xspace\cite{coppola2019migrationkotlin}} The authors of this dataset mined all projects hosted on F-Droid \footnote{\url{https://f-droid.org}}, a repository for free and open source Android apps. 
This dataset is interesting because Coppola \emph{et~al.}\xspace reported the presence of 19\% of apps featuring Kotlin code among the 1,232 mined apps. We excluded apps that are no longer available on GitHub and, for consistency with the previous dataset, also those not published in the Google Play store. This resulted in 472 projects. \textbf{GitHub Archive.} Since in the two previous datasets there is a prevalence of Java apps (also due to the fact that they were built before the announcement by Google pushing Android apps towards Kotlin), we ran a query on GH Archive \footnote{\url{https://www.gharchive.org}} using Google BigQuery, with the goal of identifying repositories having Kotlin as the primary language. The query is available in our online appendix \cite{replication}. The aforementioned query was run on March 1st, 2020, obtaining a list of 3,967 repositories as a result. We sorted these projects by number of stars (in descending order) and selected the top 5\% (\emph{i.e.,}\xspace 200 repositories) for manual analysis. In particular, we checked that the 200 repositories were real-world Android apps available on the Play Store. From this screening, we obtained a list of 22 Kotlin apps to consider in our dataset. \smallskip We aggregated these three datasets and removed duplicates, obtaining a final list of 8,157 open-source Android apps. The list is available in our replication package \cite{replication}. We cloned all 8,157 repositories and ran on them a customized version of {\tt git-vuln-finder} \cite{gitvulnfinder}, a Python application aimed at finding commits likely to fix a security weakness. The search is based on a set of regular expressions applied on the commit message \cite{zhou2017commits}. While most of the used regular expressions are applicable in the context of mobile apps, the work by Zhou and Sharma \cite{zhou2017commits} focuses on web applications. 
Thus, we modified their tool by complementing the list of regular expressions with others we defined by looking at the list of security weaknesses relevant to mobile apps and present in the Common Weakness Enumeration (CWE\footnote{\url{https://cwe.mitre.org}}) version 4.0, a community-developed list of common software and hardware security weaknesses. Also, we considered a commit as relevant for our study if it explicitly mentions the name or ID of any weakness present in the CWE dictionary. The adopted regular expressions are publicly available \cite{replication}. After running {\tt git-vuln-finder} on the 8,157 projects, we identified a set of candidate commits from which we removed duplicates due to: (i) commits mined from both the master branch and other branches merged into the master; (ii) forked repositories. Also, we decided to keep in our dataset only commits in which the developers modify a single Java or Kotlin file (as identified by their extension). The rationale behind this decision is two-fold. First, if a developer mentions in the commit note that she is fixing a security weakness and only one file is modified in the commit, we can be sure that the fix happened in that file. Second, since we aim at classifying the type of security weakness involved in each commit, understanding a fix spanning many files can be quite challenging and could lead to misclassifications. This cleaning process resulted in a final list of 4,781\xspace candidate commits. \vspace{-0.2cm} \subsubsection{Open Coding} Given the 4,781\xspace commits collected in the previous step, we manually analyzed 681\xspace of them with the goal of describing, using a label, the type of security weakness fixed in the commit. The number of inspected commits ensures a significance interval (margin of error) of $\pm5\%$ with a confidence level of 99\%. We did not use random sampling for the selection of the commits to manually inspect.
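The margin of error reported above can be verified with the standard sample-size formula for a finite population; the following is a sanity check of our own (the formula choice, with $z=2.576$ for a 99\% confidence level and the conservative proportion $p=0.5$, is our assumption and not part of the study tooling):

```java
public class SampleSizeCheck {
    // Minimum sample size to estimate a proportion within +-e,
    // with finite-population correction for a population of size N.
    static int minimumSampleSize(double population, double z, double p, double e) {
        double n = (population * z * z * p * (1 - p))
                 / (e * e * (population - 1) + z * z * p * (1 - p));
        return (int) Math.ceil(n);
    }

    public static void main(String[] args) {
        // 4,781 candidate commits; 99% confidence (z = 2.576); +-5% margin.
        System.out.println(minimumSampleSize(4781, 2.576, 0.5, 0.05));
    }
}
```

The computed minimum is 583 commits, so inspecting 681\xspace commits satisfies the stated $\pm5\%$ bound at a 99\% confidence level.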
Indeed, in the set of 4,781\xspace candidate commits, there are 4,391 commits impacting a Java file, and 390 modifying a Kotlin file. Since we aim at comparing the types of security weaknesses affecting these two main languages used to develop native Android apps, we decided to target the analysis of the same number of Java- and Kotlin-related commits. We targeted the inclusion of 200 valid commits per language (\emph{i.e.,}\xspace excluding commits labeled as false positives, since they are not related to the fixing of security weaknesses). The choice of 200 was tailored to the number of commits available for Kotlin, since we expected to find a substantial number of false positives as a result of the regular expressions used to select the commits. By applying the process described in the following, we analyzed 360 Java-related commits (200 valid + 160 false positives) and 321 Kotlin-related commits (200 valid + 121 false positives). Five authors took part in the labeling process, which was supported by a web application. Each author independently labeled the commits randomly assigned to her/him by the web application, defining a ``label'' describing the security weakness fixed in each commit. To define such a label, the authors manually inspected the diff of the commit and the message accompanying it. As a guideline for the label definition, the authors used the CWE 4.0 list. The authors reused as much as possible the list of security weaknesses in CWE, defining new labels only when needed. Moreover, the web application also showed the list of labels created so far, allowing the author to select one of the already defined labels. Since the number of possible labels (\emph{i.e.,}\xspace types of security weaknesses) is extremely high, such a choice helps to keep the naming consistent while not introducing a substantial bias. In case the commit was not related to a security weakness fix, a \emph{false positive} label was assigned, discarding the commit from the study.
Each commit was assigned to two authors and, in cases where there was no agreement between them, the commit was assigned to a third author. Conflicts arose for 344\xspace commits ($\sim$50\% of 681\xspace). While such a number may look high, note that we also considered as conflicts the cases in which the authors used two slightly different labels to express the same concept (\emph{e.g.,}\xspace CWE-703: improper check or handling of exceptional conditions \emph{vs} CWE-754: improper check for unusual or exceptional conditions). A total of 1,706\xspace labels were required to reach our target of assessing and characterizing 200 valid commits per programming language: two labels for each of the 400 valid commits (800), two labels for each of the 281 false positives we discarded (562), and one more label for each of the 344\xspace solved conflicts (344\xspace). As an outcome, we present a taxonomy of the software security weaknesses identified in the manual analysis, and we complement our discussion with qualitative examples. \begin{table}[ht] \scriptsize \centering \caption{Structure of the survey used in our study\vspace{-0.3cm}} \label{tab:survey} \resizebox{\linewidth}{!}{ \rowcolors{2}{gray!15}{white} \begin{tabular}{p{6.5cm}} \toprule \textbf{BACKGROUND QUESTIONS} \\\midrule $Q_1$: In which country do you live?\\ $Q_2$: What is your current job position?\\ $Q_3$: How many years of programming experience do you have?\\ $Q_4$: How many years of programming experience do you have concerning native Android apps?
Please specify overall/Java/Kotlin/Dart.\\ $Q_5$: How many years of programming experience do you have concerning the testing of native Android apps?\\\midrule \textbf{EXPERIENCE WITH SOFTWARE SECURITY WEAKNESSES AND THEIR PERCEPTION} \\\midrule $Q_6$: Which factors do you consider to estimate the likelihood of a security weakness being exploited?\\ $Q_7$: Which factors do you consider to estimate the negative impact of a security weakness in case it is exploited?\\ $Q_8$: Which are the most common security weaknesses that you found?\\ $Q_9$: Which security weaknesses do you consider as the most dangerous?\\ $Q_{10}$: How do you detect security weaknesses? Do you use any specific tool for this task?\\\bottomrule \end{tabular} } \end{table} \vspace{-0.2cm} \subsection{Survey with Developers} \label{sub:surveyDesign} We designed a survey aimed at investigating the types of security weaknesses that are found by developers in their apps and their perception of specific aspects of security weaknesses. The survey was designed to last at most 15 minutes, to maximize the survey completion rate. The survey structure is reported in \tabref{tab:survey}. Note that we rephrased some of the questions to shorten them. First, we collected background information about participants ($Q_1$-$Q_5$). If a participant answered ``zero'' to the part of $Q_4$ related to the overall programming experience with native Android apps\footnote{With \emph{native Android apps}, we refer to mobile apps written in one of the official programming languages of Android (\emph{i.e.,}\xspace Java and Kotlin).}, the survey ended, and the participant was excluded from the study. This happened in 2 cases. Then, $Q_6$-$Q_7$ aimed to collect information about the developers' perception of security weaknesses. For these questions, we provided a predefined list of possible factors to check, with the possibility of specifying additional factors.
For $Q_6$, the predefined list included: \emph{Skill level required to exploit it}, \emph{Motivation to exploit it}, \emph{Chances of a successful exploit}, \emph{Number of agents needed for the exploit\footnote{\textRevision{With “agents needed for the exploit” we refer to the number of attackers that are needed to exploit a security weakness. Indeed, not all security issues can be exposed by a single attacker.}}}, \emph{Ease of discovery}, \emph{Technical difficulty of the exploit}, \emph{How well-known is the weakness}, and \emph{How likely is the exploit to be detected}. Concerning $Q_7$, the list included: \emph{Confidentiality}, \emph{Integrity}, \emph{Availability}, \emph{Accountability}, \emph{Brand reputation}, \emph{Business profits}, and \emph{Privacy violation}. $Q_8$ and $Q_9$ aimed to validate/complement the taxonomy defined as the output of the manual study, with $Q_8$ focusing on the most frequent and $Q_9$ on the most dangerous security weaknesses experienced by developers. Both these questions required an open answer. Two authors read each answer and assigned the CWE-ID(s) needed to describe the security weaknesses mentioned in each answer. A third author merged these tags and solved the conflicts that arose for 15 answers (18\%). Since a respondent might have given the same answer to $Q_8$ and $Q_9$, duplicates among these answers were removed to avoid counting the same security weakness mentioned by the same developer twice. Finally, $Q_{10}$ asked developers how they detect security weaknesses and whether they are supported by any tool. We used convenience sampling to invite developers from companies we know to participate in our survey. Also, the link to the survey was shared on social media. We collected answers for ten days, with a total of 43\xspace participants from nine countries (\emph{i.e.,}\xspace Argentina, Canada, Colombia, Germany, Hungary, Italy, Macedonia, Poland, and USA) completing our survey.
On average, the participants had $\sim$6 years of overall programming experience and approximately 3 years of Android development experience (see \figref{fig:demographics}). The average testing experience is close to two years. Regarding their job position, 21\% of participants are B.Sc. students, 7\% M.Sc. students, 4.6\% Ph.D. students, and 67.4\% professional Android developers having different job positions in the industry (\emph{e.g.,}\xspace Senior Android developer, Technical leader, Project Management Engineer, Director). \noindent \begin{figure}[ht] \vspace{-0.4cm} \centering \hspace{-0.7cm}\includegraphics[width=1.07\linewidth]{img/full} \vspace{-0.9cm} \caption{Experience in years of the 43\xspace surveyed participants.} \label{fig:demographics} \vspace{-0.7cm} \end{figure} \subsection{Testing the Generalizability of Our Taxonomy} Once we obtained the final taxonomy, including both the categories defined through the mining-based study and those added as a result of the developers' survey, we assessed its generalizability. We used all 64 Kotlin-related commits we did not manually analyze while building our taxonomy and a sample of 186 Java-related commits (again, among those we did not analyze). Then, we asked two Master's students, both with experience in Android development, not involved in the taxonomy definition, and unaware of its structure, to perform the same manual analysis previously described. Each of them independently evaluated all instances. Conflicts, which arose in 68\% of the cases, were solved through an open discussion between the two students and the first two authors of this work. The final output is a taxonomy of security weaknesses affecting Android apps, which we can compare with the taxonomy we defined to assess its stability. While in principle more Kotlin-related commits would be needed, we labeled all those we found by mining several datasets of Android apps.
\vspace{-0.2cm} \subsection{Data Analysis} \label{sub:dataAnalysis} We start by presenting the taxonomy of software security weakness types resulting from our mining-based study. Then, we discuss how the developers' survey helped in validating/complementing the obtained taxonomy. Finally, we report the results of the generalizability study. The data used in our study are publicly available \cite{replication}. \section{Introduction} \label{sec:introduction} \vspace{-0.1cm} Mobile apps and devices are nowadays omnipresent in daily life activities, supporting many crucial tasks (\emph{e.g.,}\xspace banking, social networking, etc.\xspace) involving the manipulation and storage of sensitive and private data. The usage of mobile operating systems has already exceeded the usage of desktop/laptop operating systems \cite{statcounter2020a,statcounter2020b,google2019report}. As a consequence, mobile apps and devices have become a very attractive target for malicious attacks aimed at stealing private and sensitive information from apps/devices and at exploiting on-device capabilities such as processing, data collection via sensors, and networking. Also, according to the CVE Details portal,\footnote{\url{https://www.cvedetails.com/product/19997/Google-Android.html}} the number of vulnerabilities in the Android operating system has seen a steep growth in recent years, with a total of 2,563 reports in 10 years (2009-2019). As a natural reaction to such a rise in vulnerabilities in the mobile ecosystem, original equipment manufacturers (OEMs), operating system designers (\emph{e.g.,}\xspace Google), researchers, and companies have devoted efforts to improve the security of mobile OSs, devices, and apps.
A paramount example is the volume of research focused on detecting vulnerabilities in Android apps (see \emph{e.g.,}\xspace~\cite{arzt2014flowdroid,li2015iccta,sadeghi2017taxonomy,lee2017sealant,singleton2019firebugs,you2016reference, bello2019opia,ren2015hijacking,novak2015covertchannels,gadient2019securitysmells}). The Android OS and devices have also been investigated in the context of previous studies aimed at categorizing their security weaknesses and exploits~(\emph{e.g.,}\xspace \cite{huang2015servershutdown, thomas2015securitymetrics, cao2015inputvalidation,wang2016systemserver, jimenez2016profiling, bagheri2018androidpermissions, meng2018survey, mazuera2019android}). Even datasets with malicious apps have been built~\cite{allix2016androzoo, Zhou2012Genome}. Still, to the best of our knowledge, there is no comprehensive taxonomy of security weaknesses exhibited in Android apps. With security weaknesses, we refer to flaws or gaps in a software system that could be exploited to violate its security policy, thus eventually causing a disruption of the confidentiality, integrity, or availability of the system in question. \textRevision{Compared to desktop applications, Android apps may suffer from specific vulnerability types since they (i) run on a mobile device, thus usually collecting a larger amount of information about the user (e.g., location, video/audio, as well as biometric information); (ii) are built on top of a specific framework and programming model that, as we will show, requires carefully handling specific types of resources and components (e.g., Activities, Intents, Broadcast Receivers, etc.); (iii) although the Android OS is built on top of the Linux kernel, several modifications have been made to the kernel, and a set of specific OS layers built on top of it makes Android app programming different from web and desktop app programming; even the programming model differs from the iOS one.
In this paper, we focus on Android apps written in Java and in Kotlin, the two main programming languages officially supported for the development of Android apps\footnote{\url{https://developer.android.com/kotlin/first}}.} Despite previous individual efforts to analyze, detect, and fix specific sets of security weaknesses, the research community still lacks a body of knowledge characterizing the types of weaknesses affecting Android apps. Also, some of the empirical investigations performed in the past could become outdated due to the frenetic evolution of the Android ecosystem. Indeed, the programming models now include the possibility of creating native, hybrid, cross-platform, and mobile web apps for the Android platform. Previous studies on specific security vulnerabilities have focused on analyzing Android Java apps, because of the availability of code bases and APKs in this language. Given the rising interest in Kotlin and its status as an official Android language, investigating security weaknesses in Kotlin becomes a required avenue for research. While Dart\footnote{\textRevision{Dart is a programming language developed by Google and designed to support the implementation of applications, including mobile apps (\url{https://dart.dev/}).}}/Flutter\footnote{\textRevision{Flutter is a software development kit created by Google that is built on top of Dart and can be used to develop cross-platform applications (\url{https://flutter.dev/}).}} also represent interesting targets for research, their diffusion is still limited, with $\sim$18k GitHub repositories as compared to the $\sim$75k Kotlin repositories (May 2020). In this paper, we present the first empirical study characterizing software security weaknesses in Android Java and Kotlin apps.
To this end, we build a taxonomy of security weaknesses by (i) manually analyzing 681\xspace commits in open source Android Java/Kotlin apps (\emph{i.e.,}\xspace \emph{mining-based study}), and (ii) surveying 43\xspace Android developers to collect their experience with security weaknesses, and in particular with the types they frequently faced (\emph{i.e.,}\xspace \emph{survey-based study}). The output of the mining-based study is a multi-level taxonomy featuring a total of 74 categories of security weaknesses. As a result of the developers' survey, we identified 28 types of security weaknesses, of which 22 were already covered in our taxonomy, and six more were added. We use the defined taxonomy to discuss interesting directions for future research in the area, and lessons learned for practitioners. Note that, while catalogues of security weaknesses in mobile apps have been previously defined \cite{cwemobile,OWASP}, they are not based on the empirical observation of weaknesses affecting real mobile apps and, as a result, they are less comprehensive than the taxonomy we derive in this work. \section{Data Availability} \vspace{-0.2cm} The data used in our study are publicly available at \cite{replication}. \vspace{-0.5cm} \input{ack} \balance \section*{References} \bibliographystyle{elsarticle-num} \section{Related Work} \label{sec:related} Several techniques have been proposed to detect, and in some cases fix, vulnerabilities in mobile apps (\emph{e.g.,}\xspace~\cite{arzt2014flowdroid,li2015iccta,sadeghi2017taxonomy,lee2017sealant,singleton2019firebugs,you2016reference, bello2019opia}). We focus on studies investigating security-related aspects in \emph{Android apps}, since these are the most related to our work.
\textRevision{\tabref{tab:relatedWorks} provides an overview of the discussed studies, reporting, for each \emph{reference}, the \emph{year of publication}, a \emph{brief summary} of its contribution, the \emph{size} of the dataset, expressed as the number of analyzed apps (\#a) or commits (\#c), along with the number of \emph{security weakness types} and \emph{categories} that have been outlined.} \begin{table*}[tb] \vspace{0.4cm} \centering \caption{\textRevision{Empirical studies on security weaknesses in Android apps}} \label{tab:relatedWorks} \rowcolors{2}{gray!15}{white} \begin{tabular}{l|l|l|c|c|c} \toprule \textbf{Ref.} & \textbf{Year} & \textbf{Brief summary} & \textbf{\textRevision{Size}} & \textbf{\textRevision{Types}} & \textbf{\textRevision{Categories}} \\ \midrule \multicolumn{1}{c|}{\cite{felt2011android}} & 2011 & Detection of overprivileges in Android apps & \texttt{\#a:} 940 & 10 & 1 \\ \multicolumn{1}{c|}{\cite{enck2011study}} & 2011 & Identification of vulnerabilities' root causes & \texttt{\#a:} 1,100 & 8 & 1\\ \multicolumn{1}{c|}{\cite{egele2013empirical}} & 2013 & Cryptographic misuse in Android apps & \texttt{\#a:} 11k+ & 6 & 1 \\ \multicolumn{1}{c|}{\cite{zuo2015automatically}} & 2015 & Detection of SSL error-handling vulnerabilities & \texttt{\#a:} 13,820 & 1 & 1 \\ \multicolumn{1}{c|}{\cite{bagheri2015covert}} & 2015 & Analysis of inter-app security vulnerabilities & \texttt{\#a:} 500 & 2 & 1 \\ \multicolumn{1}{c|}{\cite{ahmad2016inter}} & 2016 & Developers challenges for inter-app communication & \texttt{\#a:} 52k & 3 & 1 \\ \multicolumn{1}{c|}{\cite{weir2020needs}} & 2020 & Survey on developer practices for app security & \texttt{\#a:} 454 & 3 & 1 \\ \multicolumn{1}{c|}{\cite{gao2021understanding}} & \textRevision{2021} & \textRevision{Temporal evolution of vulnerabilities in Android apps} & \texttt{\#a:} 465,037 & 10 & 4 \\ \hline \multicolumn{2}{c|}{This paper} & \textRevision{Taxonomy of Security
Weaknesses} & \texttt{\#a:} 8,157 \texttt{\#c:} 4,781 & 80 & 5 \\ \bottomrule \end{tabular} \end{table*} Felt \emph{et~al.}\xspace~\cite{felt2011android} identified over-privileges in the permissions (\emph{e.g.,}\xspace Bluetooth, read contacts) of one-third of the 940 Android apps they analyzed. \textRevision{The 10 most common unnecessary permissions are identified, and the percentage of overprivileged applications varies from 5\% to 16\%.} The authors point out that this is mainly due to developers not correctly interpreting the API documentation. The results of our work, and especially of our survey, support the relevance of permissions for the vulnerabilities affecting Android apps. Enck \emph{et~al.}\xspace~\cite{enck2011study} investigated the root causes of vulnerabilities in 1,100 free Android apps. The authors found misuse of sensitive information (\emph{i.e.,}\xspace phone identifiers and geographic location) among the root causes. \textRevision{Android-specific vulnerabilities all relate to the sensitivity of data, and 8 different types are identified, \emph{e.g.,}\xspace leaking information to logs, unprotected broadcast receivers, etc.} The security of Android APIs was also considered insufficient, but no vulnerability enabling malicious control of the apps was found. The mishandling of sensitive information is also a prevalent aspect in our taxonomy. Egele \emph{et~al.}\xspace~\cite{egele2013empirical} used a static analysis technique to capture cryptographic misuses in 11k+ apps. They showed that 88\% of the analyzed apps do not correctly use cryptographic APIs. This is mainly due to the lack of inter-procedural analysis that correlates multiple functionalities (\emph{e.g.,}\xspace encryption and decryption) within a method instantiation.
\textRevision{The focus here is on cryptography, and 6 different types of violations (\emph{e.g.,}\xspace constant encryption keys) have been highlighted.} Instances of issues related to cryptography are found both in Java and Kotlin in our taxonomy. Sufatrio \emph{et~al.}\xspace~\cite{tan2015securing} presented a secondary study reviewing the literature about existing security solutions for Android apps. The taxonomy is relevant for five deployment stages, \emph{i.e.,}\xspace development, availability on markets, installation on a device, execution, and security settings modification. \textRevision{It surveys existing work: it does not rely on a specific dataset of analyzed apps/commits, but elaborates on the literature to derive a taxonomy including 5 categories and 18 types of security vulnerabilities that should be prevented.} Zuo \emph{et~al.}\xspace~\cite{zuo2015automatically} exploited static and dynamic analysis to detect apps opening {\tt https} web pages with illegal certificates. \textRevision{This work targets a specific category of vulnerabilities, \emph{i.e.,}\xspace the privacy of communications. The developed framework detects a specific type of violation, \emph{i.e.,}\xspace ignoring the illegal certificate error and proceeding with the sending of sensitive information over an insecure communication channel.} Bagheri \emph{et~al.}\xspace~\cite{bagheri2015covert} analyzed inter-app and inter-component security vulnerabilities in 500 apps. Specifically, a formal model expressing security properties of apps/components is extracted and a model checker verifies the safety of simultaneously running two apps that may interact while holding certain permissions. \textRevision{This research focuses on identifying a specific category of vulnerability, \emph{i.e.,}\xspace privilege escalation (an application with fewer permissions may not be restricted from accessing components of a more privileged application).
Two types of detection strategies are adopted: (i) entities that can be inferred from a method; (ii) vulnerable paths of communication between entities.} Ahmad \emph{et~al.}\xspace~\cite{ahmad2016inter} also analyzed inter-app communication (IAC) in 52k apps\textRevision{, where the focus is on different types of actors involved in IAC (Library, Caller, and Callee), which are recognized as potentially vulnerable types of entities. Overall, these works \cite{zuo2015automatically,bagheri2015covert,ahmad2016inter} focus on a specific category of security vulnerabilities that, also due to the nature of our investigation (intentionally meant to be more general)}, we did not identify in our study. Android devices and the operating system have been also investigated. Meng \emph{et~al.}\xspace~\cite{meng2018survey} presented a taxonomy of \textRevision{63} device exploits (\emph{i.e.,}\xspace vulnerabilities leading to privilege escalation) \textRevision{grouped in 3 main categories related to different perspectives: societal, practical, and technical. It is shown} that the diffusion of exploits is decreasing due to Android systems and Linux kernels strengthening their security mechanisms. Our study does not limit its focus to exploits, but looks at security weaknesses from a more general perspective. Jimenez \emph{et~al.}\xspace~\cite{jimenez2016profiling} presented a taxonomy of 43 issues related to Android OS vulnerabilities by leveraging the CVE-NVD (Common Vulnerabilities and Exposures - National Vulnerability Database) database, \textRevision{whose size is left unspecified}. The authors found that Android vulnerabilities \textRevision{related to the code mainly belong to 9 categories (\emph{e.g.,}\xspace resource management, data handling, etc.\xspace). They} are mainly located in components dealing with browsing, cryptography, access control, or networking.
The fixing of vulnerabilities is also investigated by looking at the distribution of code changes, with most of them related to the addition of condition(s), authorization, functions, etc.\xspace Mazuera-Rozo \emph{et~al.}\xspace~\cite{mazuera2019android} also performed empirical studies on the Android OS to categorize the types of vulnerabilities (\emph{e.g.,}\xspace denial of service, improper authorization), their evolution over time, and their survivability. \textRevision{Security weaknesses are grouped into 14 categories, where 154 types (\emph{e.g.,}\xspace credentials management, improper authorization, transmission of sensitive information, etc.\xspace) have been identified.} Besides, vulnerability patches (\emph{e.g.,}\xspace check for exceptional conditions, proper handling of certificates, appropriate initialization values for variables) are analyzed to investigate the most used fixes. Our work, while related, focuses on security weaknesses affecting Android apps rather than the Android OS. Weir \emph{et~al.}\xspace~\cite{weir2020needs} conducted a survey on the effect of requirements and developer practices on apps' security. For app development, security is perceived as relevant by the participants, even if assurance techniques are poorly used. \textRevision{The survey refers to a set of 454 apps, and 335 developers were using tools suitable to check the following 3 types of weaknesses: SSL security, cryptographic API misuse, and privacy leaks. As a result, a portion of the participants were classified as security specialists, and they advocated the usage of cryptography to enforce security.} \textRevision{Gao \emph{et~al.}\xspace~\cite{gao2021understanding} investigated the temporal evolution of vulnerabilities in Android apps.
Vulnerable code is detected in terms of which locations (\emph{e.g.,}\xspace library code) are more common than others, the types of code changes (\emph{e.g.,}\xspace the addition of new files) that may entail security-related issues, and whether there is a correlation with malware. The list of considered vulnerabilities consists of 4 categories (\emph{i.e.,}\xspace security features, permissions, injection flaws and data/communication handling) and 10 types, each associated with a detection tool providing evidence of the corresponding vulnerability.} To the best of our knowledge, our work represents the first and most comprehensive taxonomy of security weaknesses in Android apps, including both Java and Kotlin app-related code. Besides, our taxonomy is the result of a two-phase study, involving the inspection of software-related artifacts (\emph{i.e.,}\xspace security weakness-fixing commits) and a survey with software developers. The derived taxonomy is more comprehensive and extensive, covering 18 of the 20 issues analyzed in previous papers by Enck \emph{et~al.}\xspace\cite{enck2011study}, Egele \emph{et~al.}\xspace\cite{egele2013empirical}, Zuo \emph{et~al.}\xspace\cite{zuo2015automatically}, Bagheri \emph{et~al.}\xspace\cite{bagheri2015covert}, Jimenez \emph{et~al.}\xspace\cite{jimenez2016profiling}, Weir \emph{et~al.}\xspace\cite{weir2020needs}\textRevision{, and Gao \emph{et~al.}\xspace~\cite{gao2021understanding}}. Finally, we focus on both Java and Kotlin code, \textRevision{as recently suggested in~\cite{coppola2019migrationkotlin}}, while only Java-related security weaknesses are analyzed in the previously mentioned works. \section{Results} \label{sec:results} \vspace{-0.2cm} \figref{fig:taxonomy} depicts the taxonomy presenting the types of security weaknesses we found. Each sub-hierarchy uses a different color, with the black boxes representing the root categories.
The taxonomy is derived from a total of 400 commits (200 for Java and 200 for Kotlin) that we manually validated. However, 14 commits were grouped in the \emph{Unclear} category since, in these cases, while the intent of fixing a security flaw was clear, we were unable to derive the type of security weakness being fixed. Each category in \figref{fig:taxonomy} is accompanied by one, two, or three numbers. The two numbers with a white background represent the number of instances of the corresponding security weakness type we found in Java (top number) and Kotlin (bottom). The one with a gray background represents the number of developers who mentioned the type of security weakness in our survey. Categories added to our taxonomy as the result of the survey (\emph{e.g.,}\xspace \emph{CWE-625: Permissive Regular Expression}) only have a gray-background number. \textRevision{It is worth noting that some categories have only been found in a few commits or have only been mentioned by developers (but not found in the mining-based study). Concerning the first case (\emph{i.e.,}\xspace low number of commits related to the category), we preferred to still include those categories since, thanks to the numbers attached to them, it is easy for the reader to assess their relevance. In other words, it is clear from our taxonomy that, for example, the prevalence of \emph{CWE-691} vulnerabilities (78 overall instances) is much higher than that of \emph{CWE-779} (3 overall instances). Concerning the latter case (\emph{i.e.,}\xspace categories only mentioned by developers), they increase the comprehensiveness of our taxonomy; the fact that we did not find them in the analyzed sample of commits does not make them less relevant for our study.
Indeed, while we analyzed a substantial set of commits (400), it is reasonable to expect that we did not encounter specific types of vulnerabilities in our study (as we will also show in \secref{sec:stability}).} In addition, it is worth mentioning the hierarchical organization of the categories, moving from the most general categories (\emph{i.e.,}\xspace the root nodes, such as \emph{CWE-710}), to more specialized ones (\emph{e.g.,}\xspace \emph{CWE-1164}), down to the leaves (\emph{e.g.,}\xspace \emph{CWE-1069}). The sum of the instances of all child categories of a given node is lower than or equal to the number of instances reported in its parent node. For example, \emph{CWE-1069} and \emph{CWE-561} are the two child categories of \emph{CWE-1164} (top-left corner of \figref{fig:taxonomy}). \emph{CWE-1069} and \emph{CWE-561} have 1 and 11 Java-related instances, respectively, while their parent category \emph{CWE-1164} has 18 Java-related instances. This is due to the labeling process, since we assigned to each commit the most specific security weakness type we could derive from the manual inspection. Thus, for 12 of the 18 commits belonging to \emph{CWE-1164} we managed to provide a more specific categorization, resulting in the two child categories, while for 6 of them \emph{CWE-1164} was the most detailed label we could assign. Finally, some categories are not linked to any CWE-ID. These categories are either (i) aggregating some sub-categories for better visualization, or (ii) created by the authors since they did not find a proper type within the CWE dictionary to classify an instance (see \tabref{tab:newVulns}).
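The parent/child counting rule just described can be made concrete with a toy sketch (our own illustration using the CWE-1164 numbers from the text; the class and field names are hypothetical, not part of our tooling):

```java
import java.util.ArrayList;
import java.util.List;

public class TaxonomyCount {
    static class Node {
        final String id;
        final int own; // commits whose most specific label is this node
        final List<Node> children = new ArrayList<>();
        Node(String id, int own) { this.id = id; this.own = own; }

        // A node's reported count covers commits labeled with the node
        // itself or any of its descendants, so the children can never
        // sum to more than their parent.
        int reported() {
            int total = own;
            for (Node c : children) total += c.reported();
            return total;
        }
    }

    public static void main(String[] args) {
        Node cwe1164 = new Node("CWE-1164", 6); // 6 commits kept the generic label
        cwe1164.children.add(new Node("CWE-1069", 1));
        cwe1164.children.add(new Node("CWE-561", 11));
        System.out.println(cwe1164.reported()); // 18 Java-related instances
    }
}
```

Running the sketch reproduces the 18 Java-related instances of \emph{CWE-1164}: 12 commits refined into the two children plus 6 kept at the parent.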
\begin{table*}[tb] \centering \caption{Definition of the categories created by the authors and not existing in the CWE dictionary} \label{tab:newVulns} \rowcolors{2}{gray!15}{white} \begin{tabular}{p{4.9cm}|p{12.3cm}} \toprule \textbf{Categories} & \textbf{Common consequences} \\ \midrule Missing Code Obfuscation & \textRevision{Non-obfuscated code is susceptible to reverse engineering, allowing an attacker to retrieve sensitive information from a system.} \\ Double Serialization & \textRevision{Data mishandling can lead to a degradation of its integrity.}\\ Exposure of Sensitive Information Through User Interface & \textRevision{Sensitive information could be exposed within the GUI to an actor that is not explicitly authorized to access it.} \\ File URI Exposed & \textRevision{A file can be made unsafely accessible to other apps, providing unintended actors with inappropriate access to the resource.} \\ Component Hijacking & \textRevision{A vulnerable component within an app can be seized by an actor to gain privileges and conduct originally prohibited operations.} \\ \bottomrule \end{tabular} \vspace{-0.3cm} \end{table*} We start by discussing the categories resulting from the manual analysis (\secref{sec:mining}); we then present the main differences between Java- and Kotlin-related security weaknesses (\secref{sec:comparison}) and discuss how the developers' survey validated/complemented our taxonomy (\secref{sec:survey}). Finally, we present the results of the further manual validation performed by two Master students to test the generalizability of our taxonomy. We use icons to highlight parts related to implications for researchers (\faFlask) and practitioners (\faCodeFork).
\subsection{Mining-based Study} \label{sec:mining} \emph{1) Improper Control of a Resource Through its Lifetime (145 instances - 36.25\%).} This category includes security weaknesses related to not maintaining, or incorrectly maintaining, control over a resource throughout its lifetime of creation, use, and release, leading to potentially exploitable states. A strongly represented type in this category is \emph{CWE-557: Concurrency Issues}, prominent in both Java (29 instances) and Kotlin (40). \figref{fig:concurrency-557-3571} depicts an example of a concurrency issue in Kotlin code in which the developer changes the type of the collection being used. \begin{figure}[ht] \vspace{-0.2cm} \begin{center} \includegraphics[width=1\linewidth]{img/concurrency-557-3571} \caption{Usage of Thread-Safe Collection.} \label{fig:concurrency-557-3571} \end{center} \vspace{-0.3cm} \end{figure} The collection type {\tt mutable\-Map\-Of} is replaced with a {\tt Concurrent\-Hash\-Map}, preventing concurrency issues. The automatic detection and fixing of the type of code issues reported in the example can be targeted through approaches supporting \emph{Change Variable Type} refactoring. \faFlask~A customization of these techniques is needed to embed a set of ``change type'' rules that are relevant for security weaknesses (\emph{e.g.,}\xspace replace {\tt mutable\-Map\-Of} with {\tt Concurrent\-Hash\-Map} if the class extends {\tt Thread}). Another common weakness related to the improper control of resources is \emph{CWE-668: Exposure of Resource to Wrong Sphere}, with 11 instances found in Java and 8 in Kotlin. CWE-668 arises when a resource is inadvertently exposed due to insecure permissions or unexpected execution scenarios. \figref{fig:exposure-359-6381} shows Kotlin code in which the developer sets the {\tt FLAG\_SECURE} flag on a window showing a password in the app.
\begin{figure}[ht] \vspace{-0.2cm} \begin{center} \includegraphics[width=1\linewidth]{img/exposure-359-6381} \caption{Exposing private information in the interface.} \label{fig:exposure-359-6381} \end{center} \vspace{-0.3cm} \end{figure} The added flag asks the window manager to disable screen recording/capturing when the {\tt show\-Password} method is executed. The usage of this flag in windows containing sensitive information is recommended in the official Android documentation. Also in this case, \faFlask~techniques can be developed by researchers to automatically identify features in code that (i) deal with sensitive information, detectable through simple keyword matching mechanisms (\emph{e.g.,}\xspace looking for words like ``password''), and (ii) are in charge of displaying windows. Then, the simple automatic addition of proper flags can remove potential points of attack. Such a security issue is also documented on Stack Overflow\footnote{\url{https://stackoverflow.com/questions/9822076}}. \faCodeFork~This suggests the potential usefulness for developers of recommender systems able to point them to relevant Stack Overflow discussions while writing code (\emph{e.g.,}\xspace Prompter~\cite{Ponzanelli:emse2016}). Making the developer aware of such issues at coding time can avoid the introduction of the security flaw in the first place. Other types of security weaknesses that are less diffused but still relevant in the context of controlling resources are \emph{CWE-178: Improper Handling of Case Sensitivity} (12 cases) and \emph{CWE-665: Improper Initialization} (13). The complete dataset of labeled weaknesses is available in the replication package \cite{replication}. \emph{2) Improper Adherence to Coding Standards (98 instances - 24.50\%).} This category frames security weaknesses introduced by ignoring development best practices.
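One such frequently ignored best practice, which we also discuss later in this category, concerns Java serialization: a class implementing {\tt Serializable} should declare an explicit {\tt serial\-Version\-UID}, otherwise the JVM derives one from the class shape and any benign refactoring can silently break deserialization. The following is a minimal, hypothetical sketch (class and field names are ours, not taken from the studied apps):

```java
import java.io.*;

public class SerialDemo {
    // Hypothetical serializable class: the explicit serialVersionUID keeps
    // serialized data compatible across benign class changes. Omitting it
    // lets the JVM derive one from the class shape, so refactorings can
    // break deserialization of previously stored objects.
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        User(String name) { this.name = name; }
    }

    // Serialize and deserialize an object, returning the round-tripped copy.
    static User roundTrip(User u) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(u);
        }
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (User) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        User copy = roundTrip(new User("alice"));
        System.out.println(copy.name); // prints "alice"
    }
}
```

Without the explicit identifier, deserializing data written by an older version of the class would fail with an {\tt InvalidClassException} after any structural change.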
The most represented sub-category for both programming languages is \emph{CWE-1164: Irrelevant Code}, with 18 Java and 19 Kotlin instances. This category is related, for example, to the presence of dead code in the apps (\emph{i.e.,}\xspace code that is not executed by any of the app's features). Such code, while not executed during normal app usage, can still be unintentionally invoked/tested by software developers, or even exploited and executed by an attacker. The execution of dead code can be particularly dangerous since it is often not maintained with the latest security-related updates. Moreover, notice that for both investigated languages dead code is not removed from the APK (\emph{i.e.,}\xspace the compiled app) after compilation. \textRevision{Besides being possibly exploited, dead code can ``come back to life'' by mistake, thus leading to unexpected consequences. For example, the implementation of a new feature may mistakenly invoke an old (dead) implementation of a method accessing the database, leading to a loss of information when the app is deployed. In addition, dead code ``might indirectly make it easier to introduce security-relevant weaknesses or make them more difficult to detect''\footnote{\url{https://cwe.mitre.org/data/definitions/561.html}}.} When dead code is identified, two strategies are usually adopted by developers to remove it \cite{Romano:tse2020}: (i) adding an explanatory comment before the dead fragment in question, in which the developer mentions that the fragment is or could be dead; and (ii) commenting out the code, leaving it available for future usage. The latter strategy is the one applied in one of the fixing commits we inspected, where the developer comments out dead code that seems to be related to the management of contacts in the database.
Two days before this commit, the same developer had added a comment on top of the dead code saying {\tt //TODO: what is this for again?} (see changes to file {\tt MVP\_\-Activity\_\-Contacts} in commit {\tt f0801d88}). \faFlask~The prevalence of \emph{CWE-561: Dead Code} weaknesses in our taxonomy confirms the importance for researchers to investigate approaches able to automatically identify code components that can be removed without ripple effects on the code functionalities. \faCodeFork~To the best of our knowledge, only a few tools are available for this task, such as the one by Romano \emph{et~al.}\xspace~\cite{Romano:tse2020}, the Android Lint tool~\cite{ALINT}, and the Kotlin DCE plugin~\cite{DCE}. Other prevalent security weaknesses related to coding standards are those grouped in the \emph{Serialization issues} category. \faCodeFork~A simple yet frequent issue we identified is the lack of a unique {\tt serial\-Version\-UID} in serializable classes, something expected in Java. Indeed, this identifier is stored with the serialized object and verified when deserializing it, thus avoiding data integrity issues. All other first-level subcategories in the ``coding standard'' tree have less than ten total instances and, in several cases, are only related to one of the investigated languages (see \figref{fig:taxonomy}). The categories added as a result of our survey will be discussed in \secref{sec:survey}. \vspace{0.1cm} \emph{3) Improper Check or Handling of Exceptional Conditions (84 instances - 21\%).} This category includes weaknesses that can lead to unpredictable behavior due to the improper or missing handling of exceptional conditions rarely occurring during the normal operation of the app.
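As a minimal illustration of this category, consider the handling of a numeric configuration value coming from user input. The sketch below (hypothetical Java; method and value names are ours) contrasts a version that ignores the exceptional condition of malformed input with one that checks and handles it:

```java
public class InputCheckDemo {
    // Weak version: assumes the input is always a well-formed number. A
    // malformed value -- an exceptional condition rarely hit during normal
    // usage -- crashes the app with an uncaught NumberFormatException.
    static int parsePortUnchecked(String s) {
        return Integer.parseInt(s);
    }

    // Hardened version: the exceptional condition is checked and handled by
    // rejecting out-of-range values and falling back to a safe default.
    static int parsePortChecked(String s, int fallback) {
        try {
            int port = Integer.parseInt(s.trim());
            return (port >= 1 && port <= 65535) ? port : fallback;
        } catch (NumberFormatException e) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(parsePortChecked("8080", 443)); // prints 8080
        System.out.println(parsePortChecked("80x0", 443)); // prints 443
        System.out.println(parsePortChecked("-1", 443));   // prints 443
    }
}
```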
Within this category, the most represented type of security weakness is \emph{CWE-707: Improper Neutralization}, which happens when messages and/or data are not properly checked to be well-formed, valid, or benign (\emph{i.e.,}\xspace the exceptional condition of malformed messages/data is not properly handled). This category is mostly composed of cases related to \emph{CWE-20: Improper Input Validation} (\emph{e.g.,}\xspace issues related to the improper validation of the password in a login form, such as commit {\tt 4875515b} in the ccomeaux/boardgamegeek4android app, which could lead to a future credential management error). This type of issue can be addressed by relying on dynamic analysis and, in particular, on fuzz testing, which aims at feeding the app with unexpected input data that may generate crashes, exploit security weaknesses, or induce unexpected states. \faCodeFork~Several tools for this scope exist nowadays \cite{arzt2014flowdroid, monkey, DroidFuzzer, Huang2019fuzzing, EVOTAINT, IVDROID, DifFuzz}, thus giving practitioners a vast repertoire of options that can be adopted for their testing activities. \textRevision{However, \faFlask~these tools work on Java and, to the best of our knowledge, there are no fuzzers working at the source-code level for either Kotlin or Dart/Flutter. In the case of Kotlin, fuzzers working at the Java bytecode level could be used; however, this is not an option for Dart/Flutter apps, since the Dart language is not JVM-based.} Therefore, we encourage the research community to devise fuzzers and benchmarks (such as FuzzBench~\cite{FuzzBench}) for Kotlin and Dart. Another well-represented subcategory is \emph{CWE-248: Uncaught Exception}, which may cause the program to crash and/or expose sensitive information.
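At its core, the fuzzing idea mentioned above, which is also one way to surface such uncaught exceptions, boils down to feeding randomly generated inputs to an entry point and flagging any unexpected crash. The following toy Java sketch illustrates the loop (the method under test and all names are ours; real fuzzers are coverage-guided and far more sophisticated):

```java
import java.util.Random;

public class MiniFuzz {
    // Toy system under test (ours): should never throw for any input string.
    static boolean looksLikeEmail(String s) {
        int at = s.indexOf('@');
        return at > 0 && s.indexOf('.', at) > at + 1;
    }

    // Core fuzzing loop: feed random byte strings to the entry point and
    // count unexpected crashes (potential CWE-248 instances).
    static int fuzz(int iterations, long seed) {
        Random rnd = new Random(seed);
        int crashes = 0;
        for (int i = 0; i < iterations; i++) {
            byte[] bytes = new byte[rnd.nextInt(32)];
            rnd.nextBytes(bytes);
            try {
                looksLikeEmail(new String(bytes));
            } catch (RuntimeException e) {
                crashes++; // an uncaught exception escaped the entry point
            }
        }
        return crashes;
    }

    public static void main(String[] args) {
        // prints 0: this robust implementation never throws
        System.out.println(fuzz(10_000, 42L));
    }
}
```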
Uncaught exceptions are a well-known issue in Android apps, especially when apps strongly rely on Android abstractions (\emph{e.g.,}\xspace activities, asynctasks, etc.\xspace) \cite{Oliveira:jss2018}. The prevalence of this type of weakness in our taxonomy \faFlask~supports previous findings reported in the literature \faCodeFork~and highlights the potential usefulness for developers of tools developed in academia to automatically test Android apps using systematic input generation (see \emph{e.g.,}\xspace \cite{linan2018rip,li2017droidbot}). \vspace{0.1cm} \emph{4) Protection Mechanism Failure (59 instances - 14.75\%).} These security weaknesses are related to the incorrect restriction of access to a resource from an unauthorized actor. Thus, an attacker can compromise the security of the app by gaining privileges, accessing sensitive information, etc.\xspace Most of the weaknesses in this category are related to \emph{CWE-287: Improper Authentication}. \figref{fig:impr-auth-287} shows an example of this type of security weakness, in which the developer fixes a security bug due to a missing authentication step in a feature requiring the user to have a valid authorization. \begin{figure}[ht] \vspace{-0.3cm} \centering \includegraphics[width=0.75\linewidth]{img/impr-auth-287} \caption{Unauthorized user must not be able to resume an Activity.} \label{fig:impr-auth-287} \end{figure} While most of the cases in this category are simple programming mistakes (\emph{e.g.,}\xspace a wrong/missing {\tt if} statement), these bugs are difficult to catch, and automated testing tools are of little help here, since they mostly focus on identifying app crashes. \faFlask~The development of approaches relying on machine learning (ML) to automatically discriminate apps' features that can be accessed with/without authentication could help.
\newpage This would require the existence of a large training set of app components (\emph{e.g.,}\xspace GUIs) labeled with their need for authentication (\emph{e.g.,}\xspace a simple boolean). Assuming the feasibility of this ML-based approach, exploratory testing could then be used in combination with it to identify app features that should not be accessible without authentication but that, instead, can be reached by randomly exploring the app without a valid authentication. In this category we also found cases related to \emph{CWE-798: Use of Hard-coded Credentials}, such as commit {\tt f92221f} from the UserLAnd app.\footnote{\url{https://github.com/CypherpunkArmory/UserLAnd/commit/f92221f}} \faCodeFork~These cases are mostly due to credentials hard-coded, most likely, for testing purposes. However, using these credentials and having them available in repositories and/or URLs could lead to attacks. \begin{figure}[ht] \begin{center} \includegraphics[width=0.8\linewidth]{img/capture-and-replay-294-3770} \caption{Exposing token within GET request.} \label{fig:capture-and-replay-294-3770} \end{center} \vspace{-0.3cm} \end{figure} Finally, also representative of the \emph{Improper Access Control} category is the commit in \figref{fig:capture-and-replay-294-3770}. \textRevision{Before the fix, the app was sending a private token as a parameter within a GET request, which makes the token visible to anybody able to intercept the URL, thus exposing it and potentially enabling a Man-in-the-Middle attack in which the attacker impersonates the user to whom the token belongs.} The commit fixes this issue by removing the problematic code.
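The insecure pattern behind this commit, and the conventional mitigation of moving the secret from the query string into a request header, can be sketched as follows (hypothetical Java 11+ code; the endpoint and token are made up for illustration):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class TokenDemo {
    // Vulnerable pattern: the secret travels inside the URL, where proxies,
    // server logs, browser history, and anybody intercepting the request
    // can read (and replay) it.
    static HttpRequest leakyRequest(String token) {
        return HttpRequest.newBuilder(
                URI.create("https://api.example.com/user?token=" + token)).build();
    }

    // Conventional mitigation: send the secret in an Authorization header
    // over HTTPS, keeping the URL free of credentials.
    static HttpRequest saferRequest(String token) {
        return HttpRequest.newBuilder(URI.create("https://api.example.com/user"))
                .header("Authorization", "Bearer " + token)
                .build();
    }

    public static void main(String[] args) {
        System.out.println(leakyRequest("s3cr3t").uri()); // token visible in URL
        System.out.println(saferRequest("s3cr3t").uri()); // clean URL
    }
}
```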
Tokens of this type are used to let the server know that the client sending the HTTP message is a valid/certified/authorized one; hence, if authentication tokens are visible when sending an HTTP message, the user owning the token can be impersonated. The identification of leaks of security-related information in mobile apps is an active research area~\cite{bello2019opia}, with approaches extracting data from the apps' local databases and shared preferences to identify sensitive information that is not properly encrypted and/or anonymized. \faFlask~Identifying security-related information passed through query strings in URLs is a needed complement to these approaches. \vspace{-0.2cm} \subsection{Java vs. Kotlin} \label{sec:comparison} This section compares the distribution of security weaknesses we observed in Java and Kotlin code. We focus on second-level categories (\emph{i.e.,}\xspace the direct child nodes of the root categories). We do not consider in this discussion categories with less than ten overall instances when summing up the weaknesses for Java and Kotlin; indeed, any observation made for such categories might be due to the low number of instances. Also, it is worth noting that our goal is simply to highlight the differences we found in our taxonomy: explaining the reasons for the observed differences without best-guessing is not possible with the available empirical data, and a different experimental design would be needed to properly target this question. We found a balanced distribution of Kotlin/Java instances among most of the subcategories. In particular, no major differences are observed in the subtree related to \emph{CWE-710: Improper Adherence to Coding Standards}. Instead, when moving to the \emph{CWE-664: Improper Control of a Resource Through its Lifetime} subtree, we observe a slight prevalence of Kotlin-related security weaknesses.
This is mostly due to more issues related to improper thread synchronization and handling of case sensitivity (\emph{i.e.,}\xspace the code does not properly handle differences in case sensitivity, possibly leading to inconsistent results). Concerning the \emph{CWE-703: Improper Check or Handling of Exceptional Conditions} tree, the main category exhibiting differences is the one related to uncaught exceptions, with a prevalence of Java-related security weaknesses (15~\emph{vs}~7). Finally, no major differences have been observed for \emph{CWE-693: Protection Mechanism Failure}. Summarizing, the distribution of types of security weaknesses across Java and Kotlin seems to be quite similar. \faFlask~This suggests that previous findings reported in empirical studies about security weaknesses in Java Android apps are likely to generalize to Kotlin apps as well, at least for what concerns the diffusion of security weaknesses. \subsection{Survey with Developers} \label{sec:survey} Our taxonomy has been validated/complemented through the survey we performed with software developers. In the developers' answers to $Q_8$ and $Q_9$ (see \tabref{tab:survey}), we found mentions of 87 software security weaknesses, which can be classified into the 28 types labeled with a gray number (\emph{i.e.,}\xspace the number of developers who mentioned that security weakness type) in \figref{fig:taxonomy}. Out of these, 22 were already part of our taxonomy as an output of the \emph{mining-based} study, while six were added: \emph{CWE-269: Improper Privilege Management}, \emph{CWE-325: Missing Required Cryptographic Step}, \emph{CWE-625: Permissive Regular Expression}, \emph{CWE-1104: Use of Unmaintained Third Party Components}, \emph{Hijacking}, and \emph{Missing Code Obfuscation}. The fact that 78\% of the security weakness types mentioned by developers (22/28) were already part of our taxonomy provides a good level of confidence about its comprehensiveness.
The most common security weaknesses ($Q_8$) mentioned by the surveyed developers can be easily identified in \figref{fig:taxonomy}, with those belonging to the \emph{CWE-693: Protection Mechanism Failure} and \emph{CWE-664: Improper Control of a Resource Through its Lifetime} trees representing 81\% of the mentioned security weaknesses (71/87). We found a common thread when analyzing the answers provided to $Q_9$, \emph{i.e.,}\xspace the weaknesses perceived as most dangerous by developers: all of them are mostly worried about unauthorized access to sensitive, private data stored in the app or sent/received through it. Some of the (shortened) answers: ``\emph{vulnerabilities related to confidentiality, since they can expose user information}'', ``\emph{wrong/missing encryption of data being stored within the app}'', ``\emph{the leak of user personal information}''. \faFlask~Answers to $Q_9$ confirm the importance of research studying security weaknesses related to data stored/manipulated by the apps~\cite{arzt2014flowdroid,bello2019opia,Blackdroid}. An orthogonal view of the harmfulness of security weaknesses as perceived by developers is given by the answers to $Q_6$ (\emph{i.e.,}\xspace the factors impacting the likelihood of a security weakness being exploited) and $Q_7$ (\emph{i.e.,}\xspace the factors impacting the harmfulness of the security weakness if exploited). Developers pointed to technical aspects when answering $Q_6$, indicating the difficulty of exploiting a security weakness as more important than the motivation to exploit it (\emph{i.e.,}\xspace the actual gain an attacker gets). Indeed, the difficulty of exploitation has been mentioned by 79\% of the surveyed developers, as compared to the $\sim$56\% mentioning the potential gain.
Answers to $Q_7$ stress again the importance for developers of protecting sensitive information, with most (88.3\%) of the respondents reporting confidentiality and privacy violations as the main factors impacting the dangerousness of a security weakness. Finally, we analyzed the answers provided to $Q_{10}$, related to the tools used by developers to detect security weaknesses. None of the surveyed developers mentioned tools developed in academia. Clearly, this does not mean that the adopted tools do not use any idea proposed in the literature. Among the mentioned ones (available in our replication package) there are AppScan from IBM \cite{appScan}, Infer from Facebook \cite{infer}, SonarQube \cite{sonarqube}, and the pre-launch reports provided by Google Play when uploading an app to the market. Then, we looked into the relevant literature for tools that can be used by developers to detect the types of security weaknesses they more often face or perceive as more dangerous (\emph{i.e.,}\xspace the previously analyzed answers to $Q_8$ and $Q_9$). \tabref{tab:toolsSurveyess} reports categories of security weaknesses with the corresponding references presenting approaches for their detection. Some categories are merged in a single row since their security weaknesses are quite similar, and approaches designed for one category should work for the other as well. \faCodeFork~For 12 of the 28 types of security weaknesses mentioned by developers we found at least one approach supporting their automatic detection. \faFlask~On the one side, this shows that the research community is working on security weaknesses that are relevant for developers. On the other side, the developed approaches are unknown (at least) to our small pool of surveyed developers. This may also be due to the unavailability of industry-strength products implementing them.
\subsection{Stability of the Taxonomy} \label{sec:stability} Among the 250 commits analyzed by the two Master students (see \secref{sec:design} for details), 73 were classified as false positives for Java and 24 for Kotlin. This left us with 153 valid instances that have been used for the construction of the validation taxonomy (see \figref{fig:taxonomyValidation}). It can be seen that 85\% of the identified categories are already covered in our taxonomy, and only 8 new categories were identified (\emph{i.e.,}\xspace \emph{CWE-22: Improper Limitation of a Pathname to a Restricted Directory}, \emph{CWE-372: Incomplete Internal State Distinction}, \emph{CWE-392: Missing Report of Error Condition}, \emph{CWE-400: Uncontrolled Resource Consumption}, \emph{CWE-446: UI Discrepancy for Security Feature}, \emph{CWE-474: Use of Function with Inconsistent Implementations}, \emph{CWE-544: Missing Standardized Error Handling Mechanism}, and \emph{CWE-766: Critical Data Element Declared Public}). Also, all these categories are children of one of our root categories. This indicates a good generalizability of our taxonomy. Additionally, although the proportion of Kotlin artifacts is considerably lower than that of Java ones, it is worth noting that in the two taxonomies the distribution of types of security weaknesses across Java and Kotlin is similar.
\begin{figure*} \begin{center} \includegraphics[width=\linewidth, angle =90, scale=1.25]{img/taxonomy_vertical_modified_validation_msc.pdf} \caption{Validation taxonomy of types of security weaknesses found in Java and Kotlin Android apps.} \label{fig:taxonomyValidation} \end{center} \end{figure*} \begin{table*}[tb] \centering \caption{Security weaknesses mentioned by developers: Available tools\vspace{-0.2cm}} \label{tab:toolsSurveyess} \rowcolors{2}{gray!15}{white} \begin{tabular}{p{8.2cm}|p{9.2cm}} \toprule \textbf{Security weaknesses} & \textbf{Tools} \\ \midrule CWE-20: Improper Input Validation & DifFuzz \cite{DifFuzz}, DroidFuzzer \cite{DroidFuzzer}, EvoTaint \cite{EVOTAINT}, Flowdroid \cite{arzt2014flowdroid}, Huang \emph{et~al.}\xspace \cite{Huang2019fuzzing}, IVDroid \cite{IVDROID}, Monkey \cite{monkey}\\ CWE-89: SQL Injection & OPIA \cite{bello2019opia}, Kul \emph{et~al.}\xspace \cite{KUL}\\ CWE-200: Exposure of Sensitive Information to an Unauthorized Actor & AppFence \cite{AppFence}, AppIntent \cite{AppIntent}, AutoPatchDroid \cite{AutoPatchDroid}, Blackdroid \cite{Blackdroid}, CoChecker \cite{CoChecker}, ComDroid \cite{ComDroid}, ContentScope \cite{ContentScope}, Covert \cite{bagheri2015covert}, CredMiner \cite{CredMiner}, Flowdroid \cite{arzt2014flowdroid}, IccTA \cite{li2015iccta}, Kul \emph{et~al.}\xspace \cite{KUL}, Matsumoto \emph{et~al.}\xspace \cite{Matsumoto2013}, MITHYS \cite{MITHYS}, M-Perm \cite{MPERM}, OAUTHLINT \cite{OAUTHLINT}, Onwuzurike \emph{et~al.}\xspace \cite{Onwuzurike2015} \\ CWE-269: Improper Privilege Management & AppProfiler \cite{AppProfiler}, AppGuard \cite{AppGuard}, AutoPatchDroid \cite{AutoPatchDroid}, AWiDe \cite{AWiDe}, Bartsch \emph{et~al.}\xspace \cite{Bartsch2013}, CoChecker \cite{CoChecker}, Covert \cite{bagheri2015covert}, DroidChecker \cite{DroidChecker}, Droidtector \cite{Droidtector}, Lintent \cite{Lintent}, M-Perm \cite{MPERM}, PaddyFrog \cite{PaddyFrog} \\ CWE-284: Improper Access Control & ContentScope
\cite{ContentScope}\\ CWE-311: Missing Encryption of Sensitive Data & DroidSearch \cite{DroidSearch}, OPIA \cite{bello2019opia}\\ CWE-325: Missing Required Cryptographic Step & CryptoLint \cite{egele2013empirical}\\ \makecell[tl]{CWE-312: Cleartext Storage of Sensitive Information \\ CWE-922: Insecure Storage of Sensitive Information} & Blackdroid \cite{Blackdroid}, CredMiner \cite{CredMiner}, Flowdroid \cite{arzt2014flowdroid}\\ \makecell[tl]{CWE-359: Exposure of Private Personal Information \\ to an Unauthorized Actor \\ CWE-798: Use of Hard-coded Credentials} & Flowdroid \cite{arzt2014flowdroid}, Kul \emph{et~al.}\xspace \cite{KUL}, M-Perm \cite{MPERM}, CredMiner \cite{CredMiner}\\ Component Hijacking & ActivityHijacker \cite{ActivityHijacker}, AppSealer \cite{AppSealer}, CHEX \cite{CHEX}, ComDroid~\cite{ComDroid}, Ren \emph{et~al.}\xspace \cite{ren2015hijacking}, You \emph{et~al.}\xspace \cite{you2016reference}\\ \bottomrule \end{tabular} \vspace{-0.3cm} \end{table*} \section{Threats to Validity} \label{sec:threats} \vspace{-0.2cm} \textbf{Construct validity.} We identified through manual analysis the types of security weaknesses fixed by developers. To mitigate subjectivity bias, two authors were assigned to each commit and, in case of conflict, the commit was assigned to a third evaluator. Also, when the type of security flaw being fixed was not clear, we assigned the ``unclear'' tag rather than best-guessing the classification. Despite these mitigation strategies, imprecisions are still possible. Concerning the survey, we tried not to bias the participants' answers, especially in the context of questions asking for the most common/dangerous security weaknesses they faced in their apps. For this reason, we did not provide multiple-choice answers but used open answers. \textbf{Internal validity.} In the survey, we collected information about the background of the participants, and excluded developers having no experience with native Android apps.
For the manual study, we acknowledge that we only analyzed one specific source of information (\emph{i.e.,}\xspace security weakness-fixing commits) and this may have an impact on the derived taxonomy. Similarly, we only included in the manual analysis commits that impacted a single file, to make sure that the ``security weakness'' mentioned in the commit message was located in that file. Again, this could have affected the resulting taxonomy. \textbf{External validity.} We manually analyzed a total of 681\xspace security weakness-fixing commits coming from 315\xspace apps. However, due to the removal of false positives and ``unclear'' instances, our taxonomy is based on 386 actual instances. Also, we asked two Master students to analyze an additional set of 250 instances to test the generalizability of our taxonomy. Analyzing additional instances and other orthogonal sources of information (\emph{e.g.,}\xspace \url{cvedetails.com}) could complement our taxonomy. As for the survey, we collected a total of 43\xspace complete answers. While this number is limited, it is in line with many previously published survey studies in software engineering (see \emph{e.g.,}\xspace \cite{DagenaisOBRV10,CanforaPOP12,Romano:tse2020}). \section{Related Work} \label{sec:related} Several techniques have been proposed to detect, and in some cases fix, vulnerabilities in mobile apps (\emph{e.g.,}\xspace~\cite{arzt2014flowdroid,li2015iccta,sadeghi2017taxonomy,lee2017sealant,singleton2019firebugs,you2016reference, bello2019opia}). We focus on studies investigating security-related aspects in \emph{Android apps}, since these are the most related to our work. 
\textRevision{\tabref{tab:relatedWorks} provides an overview of the discussed studies, reporting for each \emph{reference} the \emph{year of publication}, a \emph{brief summary} of its contribution, the \emph{size} of the dataset in terms of number of analyzed apps (\texttt{\#a}) or commits (\texttt{\#c}), along with the number of \emph{security weakness types} and \emph{categories} that have been outlined.} \begin{table*}[tb] \vspace{0.4cm} \centering \caption{\textRevision{Empirical studies on security weaknesses in Android apps}} \label{tab:relatedWorks} \rowcolors{2}{gray!15}{white} \begin{tabular}{l|l|l|c|c|c} \midrule \textbf{Ref.} & \textbf{Year} & \textbf{Brief summary} & \textbf{\textRevision{Size}} & \textbf{\textRevision{Types}} & \textbf{\textRevision{Categories}} \\ \midrule \multicolumn{1}{c|}{\cite{felt2011android}} & 2011 & Detection of overprivileges in Android apps & \texttt{\#a:} 940 & 10 & 1 \\ \multicolumn{1}{c|}{\cite{enck2011study}} & 2011 & Identification of vulnerabilities' root causes & \texttt{\#a:} 1,100 & 8 & 1\\ \multicolumn{1}{c|}{\cite{egele2013empirical}} & 2013 & Cryptographic misuse in Android apps & \texttt{\#a:} 11k+ & 6 & 1 \\ \multicolumn{1}{c|}{\cite{zuo2015automatically}} & 2015 & Detection of SSL error-handling vulnerabilities & \texttt{\#a:} 13,820 & 1 & 1 \\ \multicolumn{1}{c|}{\cite{bagheri2015covert}} & 2015 & Analysis of inter-app security vulnerabilities & \texttt{\#a:} 500 & 2 & 1 \\ \multicolumn{1}{c|}{\cite{ahmad2016inter}} & 2016 & Developers' challenges for inter-app communication & \texttt{\#a:} 52k & 3 & 1 \\ \multicolumn{1}{c|}{\cite{weir2020needs}} & 2020 & Survey on developer practices for app security & \texttt{\#a:} 454 & 3 & 1 \\ \multicolumn{1}{c|}{\cite{gao2021understanding}} & \textRevision{2021} & \textRevision{Temporal evolution of vulnerabilities in Android apps} & \texttt{\#a:} 465,037 & 10 & 4 \\ \hline \multicolumn{2}{c|}{This paper} & \textRevision{Taxonomy of Security
Weaknesses} & \texttt{\#a:} 8,157 \texttt{\#c:} 4,781 & 80 & 5 \\ \bottomrule \end{tabular} \end{table*} Felt \emph{et~al.}\xspace~\cite{felt2011android} identified over-privileges in the permissions (\emph{e.g.,}\xspace bluetooth, read contacts) of one-third of the 940 Android apps they analyzed. \textRevision{The 10 most common unnecessary permissions are identified, and the percentage of overprivileged applications varies from 5\% to 16\%.} The authors point out that this is mainly due to developers not correctly interpreting the API documentation. The results of our work, and especially of our survey, support the relevance of permissions for the vulnerabilities affecting Android apps. Enck \emph{et~al.}\xspace~\cite{enck2011study} investigated the root causes of vulnerabilities in 1,100 free Android apps. The authors find misuse of sensitive information (\emph{i.e.,}\xspace phone identifiers and geographic location) among the root causes. \textRevision{The identified Android-specific vulnerabilities all relate to the sensitivity of data, and 8 different types are identified, \emph{e.g.,}\xspace leaking information to logs, unprotected broadcast receivers, etc.} The security of Android APIs was also considered insufficient, but no vulnerability was found able to maliciously control the apps. The mishandling of sensitive information is also a prevalent aspect in our taxonomy. Egele \emph{et~al.}\xspace~\cite{egele2013empirical} used a static analysis technique to capture cryptographic misuses in 11k+ apps. They showed that 88\% of the analyzed apps do not correctly use cryptographic APIs, mainly due to the lack of inter-procedural analysis correlating multiple functionalities (\emph{e.g.,}\xspace encryption and decryption) within a method instantiation.
\textRevision{The focus here is on cryptography, and 6 different types of violations (\emph{e.g.,}\xspace constant encryption keys) have been highlighted.} Instances of issues related to cryptography are found both in Java and Kotlin in our taxonomy. Sufatrio \emph{et~al.}\xspace~\cite{tan2015securing} presented a secondary study reviewing the literature about existing security solutions for Android apps. The taxonomy covers five deployment stages, \emph{i.e.,}\xspace development, availability on markets, installation on a device, execution, and security settings modification. \textRevision{Being a survey of existing work, it does not rely on a specific dataset of analyzed apps/commits, but elaborates on the literature to derive a taxonomy including 5 categories and 18 types of security vulnerabilities that should be prevented.} Zuo \emph{et~al.}\xspace~\cite{zuo2015automatically} exploited static and dynamic analysis to detect apps opening {\tt https} web pages with illegal certificates. \textRevision{This work targets a specific category of vulnerabilities, \emph{i.e.,}\xspace the privacy of the communications. The developed framework detects a specific type of violation, \emph{i.e.,}\xspace ignoring the illegal certificate error and proceeding to send sensitive information over an insecure communication channel.} Bagheri \emph{et~al.}\xspace~\cite{bagheri2015covert} analyzed inter-app and inter-component security vulnerabilities in 500 apps. Specifically, a formal model expressing security properties of apps/components is extracted, and a model checker verifies the safety of simultaneously running two apps that may interact while holding certain permissions. \textRevision{This research focuses on identifying a specific category of vulnerability, \emph{i.e.,}\xspace privilege escalation (an application with fewer permissions may not be prevented from accessing components of a more privileged application).
Two detection strategies are adopted: (i) inferring the involved entities from a method; (ii) identifying vulnerable paths of communication between entities.} Also Ahmad \emph{et~al.}\xspace~\cite{ahmad2016inter} analyzed inter-app communication (IAC) in 52k apps\textRevision{, focusing on the different types of actors involved in IAC (Library, Caller, and Callee), which are recognized as potentially vulnerable entities. Overall, these works \cite{zuo2015automatically,bagheri2015covert,ahmad2016inter} focus on specific categories of security vulnerabilities that, also due to the nature of our investigation (intentionally meant to be more general)}, we did not identify in our study. Android devices and the operating system have also been investigated. Meng \emph{et~al.}\xspace~\cite{meng2018survey} presented a taxonomy of \textRevision{63} device exploits (\emph{i.e.,}\xspace vulnerabilities leading to privilege escalation) \textRevision{grouped into 3 main categories corresponding to the societal, practical, and technical perspectives. It is shown} that the diffusion of exploits is decreasing due to Android systems and Linux kernels strengthening their security mechanisms. Our study does not limit its focus to exploits, but looks at security weaknesses from a more general perspective. Jimenez \emph{et~al.}\xspace~\cite{jimenez2016profiling} presented a taxonomy of 43 issues related to Android OS vulnerabilities by leveraging the CVE-NVD (Common Vulnerabilities and Exposures - National Vulnerability Database) database\textRevision{, whose size is left unspecified}. The authors found that Android vulnerabilities \textRevision{related to the code mainly belong to 9 categories (\emph{e.g.,}\xspace resource management, data handling, etc.\xspace). They} are mainly located in components dealing with browsing, cryptography, access control or networking.
The fixing of vulnerabilities is also investigated by looking at the distribution of code changes; most of them relate to the addition of condition(s), authorization, functions, etc.\xspace Mazuera-Rozo \emph{et~al.}\xspace~\cite{mazuera2019android} also performed empirical studies on the Android OS to categorize the types of vulnerabilities (\emph{e.g.,}\xspace denial of service, improper authorization), their evolution over time, and their survivability. \textRevision{Security weaknesses are grouped into 14 categories, within which 154 types (\emph{e.g.,}\xspace credentials management, improper authorization, transmission of sensitive information, etc.\xspace) have been identified.} Besides, vulnerability patches (\emph{e.g.,}\xspace checks for exceptional conditions, proper handling of certificates, appropriate initialization values for variables) are analyzed to investigate the most used fixes. Our work, while related, focuses on security weaknesses affecting Android apps rather than the Android OS. Weir \emph{et~al.}\xspace~\cite{weir2020needs} conducted a survey on the effect of requirements and developer practices on apps' security. For app development, security is perceived as relevant by the participants, even if assurance techniques are poorly used. \textRevision{The survey refers to a set of 454 apps, and 335 developers were using tools suitable to check the following 3 types of weaknesses: SSL security, cryptographic API misuse, and privacy leaks. As a result, a portion of the participants have been classified as security specialists, and they advocated the usage of cryptography to enforce security.} \textRevision{Gao \emph{et~al.}\xspace~\cite{gao2021understanding} investigated the temporal evolution of vulnerabilities in Android apps.
Vulnerable code is detected in terms of which locations (\emph{e.g.,}\xspace library code) are more common than others, which types of code change (\emph{e.g.,}\xspace the addition of new files) may entail security-related issues, and whether there is a correlation with malware. The list of considered vulnerabilities consists of 4 categories (\emph{i.e.,}\xspace security features, permissions, injection flaws and data/communication handling) and 10 types, each associated with a detection tool providing evidence of the corresponding vulnerability.} To the best of our knowledge, our work represents the first and most comprehensive taxonomy of security weaknesses in Android apps, including both Java and Kotlin app-related code. Besides, our taxonomy is the result of a two-phase study, involving the inspection of software-related artifacts (\emph{i.e.,}\xspace security weakness-fixing commits) and a survey with software developers. The derived taxonomy is more comprehensive and extensive, covering 18 of the 20 issues analyzed in previous papers by Enck \emph{et~al.}\xspace~\cite{enck2011study}, Egele \emph{et~al.}\xspace~\cite{egele2013empirical}, Zuo \emph{et~al.}\xspace~\cite{zuo2015automatically}, Bagheri \emph{et~al.}\xspace~\cite{bagheri2015covert}, Jimenez \emph{et~al.}\xspace~\cite{jimenez2016profiling}, Weir \emph{et~al.}\xspace~\cite{weir2020needs}\textRevision{, and Gao \emph{et~al.}\xspace~\cite{gao2021understanding}}. Finally, we focus on both Java and Kotlin code, \textRevision{a direction recently suggested in~\cite{coppola2019migrationkotlin}}, while only Java-related security weaknesses are analyzed in the previously mentioned works. \section{Results} \label{sec:results} \vspace{-0.2cm} \figref{fig:taxonomy} depicts the taxonomy presenting the types of security weaknesses we found. Each sub-hierarchy uses a different color, with the black boxes representing the root categories.
The taxonomy is derived from a total of 400 commits (200 for Java and 200 for Kotlin) that we manually validated. However, 14 commits were grouped in the \emph{Unclear} category since, while the intent of fixing a security flaw was clear, we were unable to derive the type of the fixed security weakness. Each category in \figref{fig:taxonomy} is accompanied by one, two, or three numbers. The two numbers with white background represent the number of instances of the corresponding security weakness type we found in Java (top number) and Kotlin (bottom). The one with gray background represents the number of developers who mentioned the type of security weakness in our survey. Categories added to our taxonomy as a result of the survey (\emph{e.g.,}\xspace \emph{CWE-625: Permissive Regular Expression}) only have a gray-background number. \textRevision{It is worth noting that some categories have only been found in a few commits or have only been mentioned by developers (but not found in the mining-based study). Concerning the first case (\emph{i.e.,}\xspace a low number of commits related to the category), we preferred to still include those categories since, thanks to the numbers attached to them, it is easy for the reader to assess their relevance. In other words, it is clear from our taxonomy that, for example, the prevalence of \emph{CWE-691} vulnerabilities (78 overall instances) is much higher than that of \emph{CWE-779} (3 overall instances). Concerning the latter case (\emph{i.e.,}\xspace categories only mentioned by developers), they increase the comprehensiveness of our taxonomy; the fact that we did not find them in the analyzed sample of commits does not make them less relevant for our study.
Indeed, while we analyzed a substantial set of commits (400), it is reasonable to expect that we did not encounter specific types of vulnerabilities in our study (as we will also show in \secref{sec:stability}).} In addition, it is worth mentioning the hierarchical organization of the categories, moving from the most general ones (\emph{i.e.,}\xspace the root nodes, such as \emph{CWE-710}) to more specialized ones (\emph{e.g.,}\xspace \emph{CWE-1164}), down to the leaves (\emph{e.g.,}\xspace \emph{CWE-1069}). The sum of instances for all child categories of a given node is lower than or equal to the number of instances reported in its parent node. For example, \emph{CWE-1069} and \emph{CWE-561} are the two child categories of \emph{CWE-1164} (top-left corner of \figref{fig:taxonomy}). \emph{CWE-1069} and \emph{CWE-561} have 1 and 11 Java-related instances, respectively, while their parent category \emph{CWE-1164} has 18 Java-related instances. This is due to the labeling process, since we assigned to each commit the most specific security weakness type we could derive from the manual inspection. Thus, for 12 of the 18 commits belonging to \emph{CWE-1164} we managed to provide a more specific categorization, resulting in the two child categories, while for 6 of them \emph{CWE-1164} was the most detailed label we could assign. Finally, some categories are not linked to any CWE-ID. These categories are either (i) aggregating some sub-categories for better visualization, or (ii) created by the authors since they did not find a proper type within the CWE dictionary to classify an instance (see \tabref{tab:newVulns}).
\begin{table*}[tb] \centering \caption{Definition of the categories created by the authors, not existing in the CWE dictionary} \label{tab:newVulns} \rowcolors{2}{gray!15}{white} \begin{tabular}{p{4.9cm}|p{12.3cm}} \toprule \textbf{Categories} & \textbf{Common consequences} \\ \midrule Missing Code Obfuscation & \textRevision{Non-obfuscated code is susceptible to reverse engineering, allowing an attacker to retrieve sensitive information from a system.} \\ Double Serialization & \textRevision{Data mishandling can lead to a degradation of its integrity.}\\ Exposure of Sensitive Information Through User Interface & \textRevision{Sensitive information could be exposed within the GUI to an actor that is not explicitly authorized to have access to that information.} \\ File URI Exposed & \textRevision{A file can be made unsafely accessible from other apps, providing unintended actors with inappropriate access to the resource.} \\ Component Hijacking & \textRevision{A vulnerable component within an app can be seized by an actor to gain privileges and conduct operations originally prohibited.} \\ \bottomrule \end{tabular} \vspace{-0.3cm} \end{table*} We start by discussing the categories resulting from the manual analysis (\secref{sec:mining}), then present the main differences between Java- and Kotlin-related security weaknesses (\secref{sec:comparison}), and discuss how the developers' survey validated/complemented our taxonomy (\secref{sec:survey}). Finally, we present the results of the further manual validation performed by two Master students to test the generalizability of our taxonomy. We use icons to highlight parts related to implications for researchers (\faFlask) and practitioners (\faCodeFork).
\subsection{Mining-based Study} \label{sec:mining} \emph{1) Improper Control of a Resource Through its Lifetime (145 instances - 36.25\%).} It includes security weaknesses related to not maintaining, or incorrectly maintaining, control over a resource throughout its lifetime of creation, use, and release, leading to potentially exploitable states. A strongly represented type in this category is \emph{CWE-557: Concurrency Issues}, being prominent in both Java (29 instances) and Kotlin (40). \figref{fig:concurrency-557-3571} depicts an example of a concurrency issue in Kotlin code, in which the developer modifies the nature of the collection being used. \begin{figure}[ht] \vspace{-0.2cm} \begin{center} \includegraphics[width=1\linewidth]{img/concurrency-557-3571} \caption{Usage of Thread-Safe Collection.} \label{fig:concurrency-557-3571} \end{center} \vspace{-0.3cm} \end{figure} The collection type {\tt mutable\-Map\-Of} is replaced with a {\tt Concurrent\-Hash\-Map}, preventing concurrency issues. The automatic detection and fixing of the type of code issues reported in the example can be easily targeted through approaches supporting \emph{Change Variable Type} refactoring. \faFlask~A customization of these techniques is needed to embed a set of ``change type'' rules that are relevant for security weaknesses (\emph{e.g.,}\xspace replace {\tt mutable\-Map\-Of} with {\tt Concurrent\-Hash\-Map} if the class extends {\tt Thread}). Another common weakness related to the improper control of resources is \emph{CWE-668: Exposure of Resource to Wrong Sphere}, with 11 instances found in Java and 8 in Kotlin. CWE-668 arises when a resource is inadvertently exposed due to insecure permissions or unexpected execution scenarios. \figref{fig:exposure-359-6381} shows Kotlin code in which the developer sets the {\tt FLAG\_SECURE} flag on a window showing a password in the app.
\begin{figure}[ht] \vspace{-0.2cm} \begin{center} \includegraphics[width=1\linewidth]{img/exposure-359-6381} \caption{Exposing private information in the interface.} \label{fig:exposure-359-6381} \end{center} \vspace{-0.3cm} \end{figure} The added flag asks the window manager to disable screen recording/capturing when the {\tt show\-Password} method is executed. The usage of this flag in windows containing sensitive information is recommended in the official Android documentation. Also in this case, \faFlask~techniques can be developed by researchers to automatically identify features in code that (i) deal with sensitive information that can be detected through simple keyword matching mechanisms (\emph{e.g.,}\xspace looking for words like ``password''), and (ii) are in charge of displaying windows. Then, a simple automatic addition of proper flags can avoid potential points of attack. Such a security issue is also documented in Stack Overflow\footnote{\url{https://stackoverflow.com/questions/9822076}}. \faCodeFork~This suggests the potential usefulness for developers of recommender systems able to point them to relevant Stack Overflow discussions while writing code (\emph{e.g.,}\xspace Prompter~\cite{Ponzanelli:emse2016}). Making the developer aware of such issues at coding time can avoid the introduction of the security flaw in the first place. Other types of security weaknesses that are less diffused, but still relevant in the context of controlling resources, are \emph{CWE-178: Improper Handling of Case Sensitivity} (12 cases) and \emph{CWE-665: Improper Initialization} (13). The complete dataset of labeled weaknesses is available in the replication package~\cite{replication}. \emph{2) Improper Adherence to Coding Standards (98 instances - 24.50\%).} This category frames security weaknesses introduced by ignoring development best practices.
The most represented sub-category for both programming languages is \emph{CWE-1164: Irrelevant Code}, with 18 Java and 19 Kotlin instances. This category is related, for example, to the presence of dead code in the apps (\emph{i.e.,}\xspace code that is not executed in any of the app's features). Such code, while not executed during normal app usage, can still be unintentionally invoked/tested by software developers, or even exploited and executed by an attacker. The execution of dead code can be particularly dangerous since it is often not maintained with the latest security-related updates. Moreover, notice that for both investigated languages, dead code is not removed from the APK (\emph{i.e.,}\xspace the compiled app) after compilation. \textRevision{Besides being possibly exploited, dead code can ``come back to life'' by mistake, thus leading to unexpected consequences. For example, the implementation of a new feature can mistakenly invoke an old (dead) implementation of a method accessing the database, leading to a loss of information when the app is deployed. In addition, dead code ``might indirectly make it easier to introduce security-relevant weaknesses or make them more difficult to detect.''\footnote{\url{https://cwe.mitre.org/data/definitions/561.html}}} When dead code is identified, two strategies are usually adopted by developers to deal with it \cite{Romano:tse2020}: (i) adding an explanatory comment before the dead fragment in question, in which the developer mentions that the fragment is or could be dead; and (ii) commenting out the code, leaving it available for future usage. The latter strategy is the one applied in one of the fixing commits we inspected, where the developer comments out dead code that seems to be related to the management of contacts in the database.
Two days before this commit, the same developer had added a comment on top of the dead code saying {\tt //TODO: what is this for again?} (see changes to file {\tt MVP\_\-Activity\_\-Contacts} in commit {\tt f0801d88}). \faFlask~The prevalence of \emph{CWE-561: Dead Code} weaknesses in our taxonomy confirms the importance for researchers of investigating approaches able to automatically identify code components that can be removed without ripple effects on the code functionalities. \faCodeFork~To the best of our knowledge, very few tools are available for this task, such as the one by Romano \emph{et~al.}\xspace~\cite{Romano:tse2020}, the Android Lint tool~\cite{ALINT}, and the Kotlin DCE plugin~\cite{DCE}. Other prevalent security weaknesses related to coding standards are the ones grouped in the \emph{Serialization issues} category. \faCodeFork~A simple yet frequent issue we identified is the lack of a unique {\tt serial\-Version\-UID} in serializable classes, which is expected in Java. Indeed, this identifier is stored with the serialized object and is verified when deserializing it, thus avoiding data integrity issues. All other first-level subcategories in the ``coding standard'' tree have fewer than ten total instances and, in several cases, are only related to one of the investigated languages (see \figref{fig:taxonomy}). The categories added as a result of our survey will be discussed in \secref{sec:survey}. \vspace{0.1cm} \emph{3) Improper Check or Handling of Exceptional Conditions (84 instances - 21\%).} This category includes weaknesses that can lead to unpredictable behavior due to the improper or missing handling of exceptional conditions rarely occurring during the normal operation of the app.
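To make the kind of weakness grouped in this category concrete, consider the following minimal Java sketch (hypothetical code, not taken from the analyzed commits): a login handler that forwards user-provided credentials without checking that they are well-formed, next to a hardened variant that handles the exceptional condition of malformed input up front.

```java
// Hypothetical sketch: improper vs. proper validation of login input.
// Class and method names are made up for illustration purposes.
class LoginValidator {

    // Vulnerable: no check at all. A null or empty password silently
    // reaches the credential-handling logic and may later surface as a
    // credential management error.
    static boolean loginUnchecked(String user, String password) {
        return store(user, password);
    }

    // Hardened: malformed credentials are rejected before any further
    // processing takes place.
    static boolean loginChecked(String user, String password) {
        if (user == null || user.isEmpty()
                || password == null || password.length() < 8) {
            return false; // reject ill-formed input early
        }
        return store(user, password);
    }

    // Stand-in for the real credential-handling logic.
    private static boolean store(String user, String password) {
        return user != null && password != null;
    }
}
```

Note how the unchecked variant happily accepts an empty password; the exceptional condition is simply never considered.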
Within this category, the most represented type of security weakness is \emph{CWE-707: Improper Neutralization}, occurring when messages and/or data are not properly checked to be well-formed, valid, or benign (\emph{i.e.,}\xspace the exceptional condition of malformed messages/data is not properly handled). This category is mostly composed of cases related to \emph{CWE-20: Improper Input Validation} (\emph{e.g.,}\xspace issues related to the improper validation of the password in a login form, such as commit {\tt 4875515b} in the ccomeaux/boardgamegeek4android app, which could lead to a future credential management error). This type of issue can be addressed by relying on dynamic analysis, and in particular on fuzz testing, which aims at feeding unexpected input data that may generate crashes, exploit security weaknesses, or induce unexpected states in the~app. \faCodeFork~Several tools for this scope exist nowadays \cite{arzt2014flowdroid, monkey, DroidFuzzer, Huang2019fuzzing, EVOTAINT, IVDROID, DifFuzz}, thus giving practitioners a vast repertoire of available options that can be adopted for their testing activities. \textRevision{However, \faFlask~these tools work on Java and, to the best of our knowledge, there are no fuzzers working at the source-code level for either Kotlin or Dart/Flutter. In the case of Kotlin, fuzzers working at the Java bytecode level could be used; however, this is not the case for Dart/Flutter apps, because the Dart language is not JVM-based.} Therefore, we encourage the research community to devise fuzzers and benchmarks for Kotlin and Dart (\emph{e.g.,}\xspace in the spirit of FuzzBench~\cite{FuzzBench}). Another well-represented subcategory is \emph{CWE-248: Uncaught Exception}, which may cause the program to crash and/or expose sensitive information.
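A minimal Java sketch of this weakness (hypothetical code, not from the analyzed commits) contrasts a parser that lets a runtime exception propagate, potentially crashing the app and leaking a stack trace, with a defensive variant that catches it and degrades gracefully:

```java
// Hypothetical CWE-248 sketch: parsing untrusted input into a port number.
class PortParser {

    // Vulnerable: NumberFormatException propagates to the caller and,
    // if never caught along the way, crashes the app.
    static int parsePortUnsafe(String input) {
        return Integer.parseInt(input);
    }

    // Defensive: the exception is caught, the value is range-checked,
    // and a sensible default is returned instead of crashing.
    static int parsePortSafe(String input) {
        try {
            int port = Integer.parseInt(input.trim());
            return (port >= 0 && port <= 65535) ? port : 8080;
        } catch (NumberFormatException e) {
            return 8080; // fall back to a default port
        }
    }
}
```

Fuzz-testing tools naturally exercise the first variant by feeding it malformed inputs that trigger the uncaught exception.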
Uncaught exceptions are a well-known issue in Android apps, especially when apps strongly rely on Android abstractions (\emph{e.g.,}\xspace activities, AsyncTasks, etc.\xspace) \cite{Oliveira:jss2018}. The prevalence of this type of weakness in our taxonomy \faFlask~supports previous findings reported in the literature \faCodeFork~and highlights the potential usefulness for developers of tools developed in academia to automatically test Android apps using systematic input generation (see \emph{e.g.,}\xspace \cite{linan2018rip,li2017droidbot}). \vspace{0.1cm} \emph{4) Protection Mechanism Failure (59 instances - 14.75\%).} These security weaknesses are related to the incorrect restriction of access to a resource from an unauthorized actor. Thus, an attacker can compromise the security of the app by gaining privileges, accessing sensitive information, etc.\xspace Most of the weaknesses in this category are related to \emph{CWE-287: Improper Authentication}. \figref{fig:impr-auth-287} shows an example of this type of security weakness, in which the developer fixes a security bug due to a missing authentication step in a feature requiring the user to have a valid authorization. \begin{figure}[ht] \vspace{-0.3cm} \centering \includegraphics[width=0.75\linewidth]{img/impr-auth-287} \caption{An unauthorized user must not be able to resume an Activity.} \label{fig:impr-auth-287} \end{figure} While most of the cases in this category are simple programming mistakes (\emph{e.g.,}\xspace a wrong/missing {\tt if} statement), these bugs are difficult to catch, and automated testing tools are of little help here, since they are mostly focused on identifying app crashes. \faFlask~The development of approaches relying on machine learning (ML) to automatically discriminate apps' features that can be accessed with/without authentication could help.
\newpage This would require the existence of a large training set of app components (\emph{e.g.,}\xspace GUIs) labeled with their need for authentication (\emph{e.g.,}\xspace a simple boolean). Assuming the feasibility of this ML-based approach, exploratory testing could then be used in combination with it to identify app features that should not be accessible without authentication but that, instead, can be reached by randomly exploring the app without a valid authentication. In this category we also found cases related to \emph{CWE-798: Use of Hard-coded Credentials}, such as commit {\tt f92221f} from the UserLAnd app.\footnote{\url{https://github.com/CypherpunkArmory/UserLAnd/commit/f92221f}} \faCodeFork~These cases are mostly due to credentials hard-coded, most likely, for testing purposes. However, having these credentials available in repositories and/or URLs could lead to attacks. \begin{figure}[ht] \begin{center} \includegraphics[width=0.8\linewidth]{img/capture-and-replay-294-3770} \caption{Exposing token within GET request.} \label{fig:capture-and-replay-294-3770} \end{center} \vspace{-0.3cm} \end{figure} Finally, also representative of the \emph{Improper Access Control} category is the commit in \figref{fig:capture-and-replay-294-3770}. \textRevision{Before the fix, the app was sending a private token as a parameter within a GET request, which makes the token visible to anybody able to intercept the URL, thus exposing it and potentially enabling a Man-in-the-Middle attack; under such circumstances, an attacker could impersonate the user to whom the token belongs.} The commit fixes this issue by removing the problematic code.
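To make the risk concrete, the following minimal Java sketch (hypothetical endpoint and token, not taken from the analyzed commit, which simply removed the offending code) contrasts a request URL that embeds the token in the query string with a token-free URL where the token would instead travel in a request header:

```java
// Hypothetical sketch: leaking an authentication token in a GET URL vs.
// carrying it in a header. Endpoint and token values are made up.
class TokenRequest {

    static final String ENDPOINT = "https://api.example.com/user/profile";

    // Vulnerable: the private token becomes part of the URL, which ends
    // up in server logs, proxy logs, and browser history.
    static String urlWithTokenInQuery(String token) {
        return ENDPOINT + "?auth_token=" + token;
    }

    // Safer: the URL stays token-free; the token would be sent as a
    // header over HTTPS, e.g., "Authorization: Bearer <token>".
    static String urlWithoutToken() {
        return ENDPOINT;
    }

    static String authorizationHeader(String token) {
        return "Bearer " + token;
    }
}
```

Query strings are routinely persisted by intermediaries even over HTTPS, whereas headers are encrypted in transit and are not part of the URL.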
The identification of leaks of security-related information in mobile apps is an active research area~\cite{bello2019opia}, with approaches extracting data from the apps' local databases and shared preferences to identify sensitive information that is not properly encrypted and/or anonymized. \faFlask~Identifying security-related information passed through query strings in URLs is a needed complement to these approaches. \vspace{-0.2cm} \subsection{Java vs Kotlin} \label{sec:comparison} This section compares the distribution of security weaknesses we observed in Java and Kotlin code. We focus on second-level categories (\emph{i.e.,}\xspace the direct child nodes of the root categories). We do not consider in this discussion categories having fewer than ten overall instances when summing up the weaknesses for Java and Kotlin. Indeed, any observation made for these categories may be due to the low number of instances in the category. Also, it is worth noting that our goal is simply to highlight the differences we found in our taxonomy. Indeed, explaining the reasons for the observed differences without best-guessing is not possible with the available empirical data; a different experimental design would be needed to properly investigate them. We found a balanced distribution of Kotlin/Java instances among most of the subcategories. In particular, no major differences are observed in the subtree related to \emph{CWE-710: Improper Adherence to Coding Standards}. Instead, when moving to the \emph{CWE-664: Improper Control of a Resource Through its Lifetime} subtree, we observe a slight prevalence of Kotlin-related security weaknesses.
This is mostly due to more issues related to improper thread synchronization and handling of case sensitivity (\emph{i.e.,}\xspace the code does not properly handle differences in case sensitivity, possibly leading to inconsistent results). Concerning the \emph{CWE-703: Improper Check or Handling of Exceptional Conditions} tree, the main category exhibiting differences is the one related to uncaught exceptions, with a prevalence of Java-related security weaknesses (15~\emph{vs}~7). Finally, no major differences have been observed for what concerns \emph{CWE-693: Protection Mechanism Failure}. Summarizing, the distribution of types of security weaknesses across Java and Kotlin seems to be quite similar. \faFlask~This suggests that previous findings reported in empirical studies about security weaknesses in Java Android apps are likely to generalize to Kotlin apps as well, at least for what concerns the diffusion of security weaknesses. \subsection{Survey with Developers} \label{sec:survey} Our taxonomy has been validated/complemented through the survey we performed with software developers. In the developers' answers to $Q_8$ and $Q_9$ (see \tabref{tab:survey}), we found mentions of 87 software security weaknesses, which can be classified into the 28 types labeled with a gray number (\emph{i.e.,}\xspace the number of developers who mentioned that security weakness type) in \figref{fig:taxonomy}. Out of these, 22 were already part of our taxonomy as output of the \emph{mining-based} study, while six were added: \emph{CWE-269: Improper Privilege Management}, \emph{CWE-325: Missing Required Cryptographic Step}, \emph{CWE-625: Permissive Regular Expression}, \emph{CWE-1104: Use of Unmaintained Third Party Components}, \emph{Hijacking}, and \emph{Missing Code Obfuscation}. The fact that 78\% of the security weakness types mentioned by developers (22/28) were already part of our taxonomy provides a good level of confidence about its comprehensiveness.
The most common security weaknesses ($Q_8$) mentioned by the surveyed developers can be easily seen in \figref{fig:taxonomy}, with those belonging to the \emph{CWE-693: Protection Mechanism Failure} and \emph{CWE-664: Improper Control of a Resource Through its Lifetime} trees representing 81\% of the mentioned security weaknesses (71/87). We found a common thread when analyzing the answers provided to $Q_9$, \emph{i.e.,}\xspace the weaknesses perceived as most dangerous by developers. Developers are mostly worried about unauthorized access to sensitive, private data stored in the app or sent/received through/by it. Some of the (shortened) answers: ``\emph{vulnerabilities related to confidentiality, since they can expose user information}'', ``\emph{wrong/missing encryption of data being stored within the app}'', ``\emph{the leak of user personal information}''. \faFlask~Answers to $Q_9$ confirm the importance of research studying security weaknesses related to data stored/manipulated by the apps~\cite{arzt2014flowdroid,bello2019opia,Blackdroid}. An orthogonal view on the harmfulness of security weaknesses as perceived by developers is given by the answers to $Q_6$ (\emph{i.e.,}\xspace the factors impacting the likelihood of a security weakness being exploited) and $Q_7$ (\emph{i.e.,}\xspace the factors impacting the harmfulness of the security weakness if exploited). Developers pointed to technical aspects when answering $Q_6$, indicating the difficulty of exploiting a security weakness as more important than the motivation to exploit it (\emph{i.e.,}\xspace the actual gain an attacker gets). Indeed, the difficulty of exploitation has been mentioned by 79\% of the surveyed developers, as compared to the $\sim$56\% mentioning the potential gain.
Answers to $Q_7$ stress again the importance for developers of protecting sensitive information, with most (88.3\%) of the respondents reporting confidentiality and privacy violations as the main factors impacting the dangerousness of a security weakness. Finally, we analyze the answers provided for $Q_{10}$, related to the tools used by developers to detect security weaknesses. None of the surveyed developers mentioned tools developed in academia. Clearly, this does not mean that the adopted tools do not use any idea proposed in the literature. Among the mentioned ones (available in our replication package) there are AppScan from IBM \cite{appScan}, Infer from Facebook \cite{infer}, Sonarqube \cite{sonarqube}, and the pre-launch reports provided by Google Play when uploading an app to the market. Then, we looked into the relevant literature for tools that can be used by developers to detect the types of security weaknesses they more often face or perceive as more dangerous (\emph{i.e.,}\xspace the previously analyzed answers to $Q_8$ and $Q_9$). \tabref{tab:toolsSurveyess} reports categories of security weaknesses with corresponding references presenting approaches for their detection. Some categories are merged in a single row since their security weaknesses are quite similar, and approaches proposed for one category should work for the other as well. \faCodeFork~For 12 of the 28 types of security weaknesses mentioned by developers, we found at least one approach supporting their automatic detection. \faFlask~On the one hand, this shows that the research community is working on security weaknesses that are relevant for developers. On the other hand, the developed approaches are unknown (at least) to our small pool of surveyed developers. This may also be due to the unavailability of industry-strength products implementing these approaches.
\subsection{Stability of the Taxonomy} \label{sec:stability} Among the 250 commits analyzed by the two Master students (see \secref{sec:design} for details), 73 were classified as false positives for Java and 24 for Kotlin. This left us with 153 valid instances that have been used for the construction of the validation taxonomy (See \figref{fig:taxonomyValidation}). Looking at it, it can be seen that 85\% of the identified categories are already covered in our taxonomy and only 8 new categories were identified (\emph{i.e.,}\xspace \emph{CWE-22: Improper Limitation of a Pathname to a Restricted Directory}, \emph{CWE-372: Incomplete Internal State Distinction}, \emph{CWE-392: Missing Report of Error Condition}, \emph{CWE-400: Uncontrolled Resource Consumption}, \emph{CWE-446: UI Discrepancy for Security Feature}, \emph{CWE-474: Use of Function with Inconsistent Implementations}, \emph{CWE-544: Missing Standardized Error Handling Mechanism}, and \emph{CWE-766: Critical Data Element Declared Public}). Also, all these categories are child of one of our root categories. This indicates a good generalizability of our taxonomy. Additionally, although the proportion of Kotlin artifacts is considerably lower than the amount of Java ones, it is worth noting that in the two taxonomies the distribution of types of security weaknesses across Java and Kotlin is similar. 
\begin{figure*} \begin{center} \includegraphics[width=\linewidth, angle =90, scale=1.25]{img/taxonomy_vertical_modified_validation_msc.pdf} \caption{Validation taxonomy of types of security weaknesses found in Java and Kotlin Android apps.} \label{fig:taxonomyValidation} \end{center} \end{figure*} \begin{table*}[tb] \centering \caption{Security weaknesses mentioned by developers: Available tools\vspace{-0.2cm}} \label{tab:toolsSurveyess} \rowcolors{2}{gray!15}{white} \begin{tabular}{p{8.2cm}|p{9.2cm}} \toprule \textbf{Security weaknesses} & \textbf{Tools} \\ \midrule CWE-20: Improper Input Validation & DifFuzz \cite{DifFuzz}, DroidFuzzer \cite{DroidFuzzer}, EvoTaint \cite{EVOTAINT}, Flowdroid \cite{arzt2014flowdroid}, Huang \emph{et~al.}\xspace \cite{Huang2019fuzzing}, IVDroid \cite{IVDROID}, Monkey \cite{monkey}\\ CWE-89: SQL Injection & OPIA \cite{bello2019opia}, Kul \emph{et~al.}\xspace \cite{KUL}\\ CWE-200: Exposure of Sensitive Information to an Unauthorized Actor & AppFence \cite{AppFence}, AppIntent \cite{AppIntent}, AutoPatchDroid \cite{AutoPatchDroid}, Blackdroid \cite{Blackdroid}, CoChecker \cite{CoChecker}, ComDroid \cite{ComDroid}, ContentScope \cite{ContentScope}, Covert \cite{bagheri2015covert}, CredMiner \cite{CredMiner},Flowdroid \cite{arzt2014flowdroid}, IccTA \cite{li2015iccta}, Kul \emph{et~al.}\xspace \cite{KUL}, Matsumoto2013 \emph{et~al.}\xspace \cite{Matsumoto2013}, MITHYS \cite{MITHYS}, M-Perm \cite{MPERM}, OAUTHLINT \cite{OAUTHLINT}, Onwuzurike \emph{et~al.}\xspace \cite{Onwuzurike2015} \\ CWE-269: Improper Privilege Management & AppProfiler \cite{AppProfiler}, AppGuard \cite{AppGuard}, AutoPatchDroid \cite{AutoPatchDroid}, AWiDe \cite{AWiDe}, Bartsch \emph{et~al.}\xspace \cite{Bartsch2013}, CoChecker \cite{CoChecker}, Covert \cite{bagheri2015covert}, DroidChecker \cite{DroidChecker}, Droidtector \cite{Droidtector}, Lintent \cite{Lintent}, M-Perm \cite{MPERM}, PaddyFrog \cite{PaddyFrog} \\ CWE-284: Improper Access Control & ContentScope 
\cite{ContentScope}\\ CWE-311: Missing Encryption of Sensitive Data & DroidSearch \cite{DroidSearch}, OPIA \cite{bello2019opia}\\ CWE-325: Missing Required Cryptographic Step & CrypLint \cite{egele2013empirical}\\ \makecell[tl]{CWE-312: Cleartext Storage of Sensitive Information \\ CWE-922: Insecure Storage of Sensitive Information} & Blackdroid \cite{Blackdroid}, CredMiner \cite{CredMiner}, Flowdroid \cite{arzt2014flowdroid}\\ \makecell[tl]{CWE-359: Exposure of Private Personal Information \\ to an Unauthorized Actor \\ CWE-798: Use of Hard-coded Credentials} & Flowdroid \cite{arzt2014flowdroid}, Kul \emph{et~al.}\xspace \cite{KUL}, M-Perm \cite{MPERM}, CredMiner \cite{CredMiner}\\ Component Hijacking & ActivityHijacker \cite{ActivityHijacker}, AppSealer \cite{AppSealer}, CHEX \cite{CHEX}, ComDroid~\cite{ComDroid}, Ren \emph{et~al.}\xspace \cite{ren2015hijacking}, You \emph{et~al.}\xspace \cite{you2016reference}\\ \bottomrule \end{tabular} \vspace{-0.3cm} \end{table*} \section{Threats to Validity} \label{sec:threats} \vspace{-0.2cm} \textbf{Construct validity.} We identified through manual analysis the types of security weaknesses fixed by developers. To mitigate subjectivity bias, two authors have been assigned to each commit and, in case of conflict, the commit was assigned to a third evaluator. Also, when the type of security flaw being fixed was not clear, we assigned the ``unclear'' tag rather than best-guessing the classification. Despite this mitigation strategies, imprecisions are still possible. Concerning the survey, we tried to not bias the participants' answers especially in the context of questions asking for the most common/dangerous security weaknesses they faced in their apps. For this reason, we did not provide a multiple choice answer but we used an open answer. \textbf{Internal validity.} In the survey, we collected information about the background of the participants, and excluded developers having no experience with native Android apps. 
For the manual study, we acknowledge that we only analyzed one specific source of information (\emph{i.e.,}\xspace security weakness-fixing commits) and this may have an impact on the derived taxonomy. Similarly, we only included in the manual analysis commits that impacted a single file, to make sure that the ``security weakness'' mentioned in the commit message was located in that file. Again, this could have affected the resulting taxonomy. \textbf{External validity.} We manually analyzed a total of 681\xspace security weakness-fixing commits coming from 315\xspace apps. However, due to the removal of false positives and ``unclear'' instances, our taxonomy is based on 386 actual instances. Also, we asked two Master students to analyze an additional set of 250 instances to test the generalizability of our taxonomy. Analyzing additional instances and other orthogonal sources of information (\emph{e.g.,}\xspace \url{cvedetails.com}) could complement our taxonomy. As for the survey, we collected a total of 43\xspace complete answers. While this number is limited, it is in line with many previously published survey studies in software engineering (see \emph{e.g.,}\xspace \cite{DagenaisOBRV10,CanforaPOP12,Romano:tse2020}).
1,314,259,995,164
arxiv
\section{Introduction} \label{introduction} Precision measurements in the top quark sector, and searches for the Higgs boson and physics beyond the Standard Model, critically depend on the good identification (``tagging") of jets produced by b quarks. Tagging techniques exploit specific properties of B-hadrons to differentiate them from the large background of jets produced by light quarks and gluons. The long lifetime of B-hadrons results in displaced vertices formed by tracks from their decays. Physical observables associated to these vertices constitute the input for secondary vertex tagging. Also, tracks from B- and D-hadron decays typically have large impact parameters, which are frequently used to construct discriminating variables. In a different approach, soft-lepton tagging searches for low transverse momentum leptons inside jets, originating from semileptonic decays of B- and D-hadrons. The tagging performance is substantially improved when individual taggers are combined to give a single jet classifier. In high energy physics, the feed-forward neural network is one of the most popular methods of combining several discriminating variables into one classifier and have been extensively applied to b-tagging. In this paper, the capability of an alternative classification technique, the boosted decision trees, for tagging b-jets is evaluated. Using a sample of $WH \to l\nu q\bar{q}$ Monte Carlo events, the performance of boosted decision trees and feed-forward neural networks is compared. Boosted decision trees is a learning technique recently introduced in high energy physics for data analysis in the MiniBooNE experiment \cite{roe}. It was found that particle identification with boosted decision trees has better performance than that with neural networks in a Monte Carlo simulation of MiniBooNE data. This insight motivated the studies reported here, which indicate that boosted decision trees is also a promising technique for tagging b-jets. 
In the next section, a brief description of the boosted decision trees algorithm is given. The Monte Carlo simulation used in this analysis is explained in Section \ref{monte_carlo_simulation}. Section \ref{discriminant_variables} describes the discriminant variables which feed the tagging algorithms. The tagging performances of boosted decision trees and neural networks are compared in Section \ref{results}. Finally, conclusions are given in Section~\ref{conclusions}. \section{Boosted decision trees} \label{boosted_decision_trees} The boosted decision trees algorithm implemented in this analysis starts with a parent node containing a training set of b-jet and u-jet patterns. All jets in the first tree iteration are given the same weight $w^{(0)}$, such that the sum of weights equals 1. Then, the algorithm loops over all binary splits in order to find the discriminating variable and corresponding separation value that optimizes a given figure of merit. For instance, in \Fig{BDT} the optimal figure of merit is obtained when the jets are divided between those that have a secondary vertex mass greater than 1 GeV/c$^2$ and those that do not. This procedure is then repeated for the new daughter nodes until a stopping criterion is satisfied. \begin{figure}[htb] \centering \epsfig{file=BDT.eps, width=8cm} \caption{Example of a decision tree.} \label{BDT} \end{figure} A node is called ``signal node" if the sum of the weights of b-jets is greater than the sum of the weights of u-jets. Otherwise, it is called ``background node". A b-jet (u-jet) is correctly classified if it lands on a signal (background) node. If $p$ designates the fraction of correctly classified jets in a node, its Gini index is defined to be $Q(p) = -2p(1-p)$. 
The optimal discriminating variable and separation value are the ones which maximize the figure of merit \beq{qsplit} Q_{split} = \frac{w_LQ(p_L) + w_RQ(p_R)}{w_L+w_R}\,, \eeq where $w_L$ and $w_R$ are the sum of the jet weights in the left and right daughter nodes, respectively, and $Q(p_L)$ and $Q(p_R)$ are the Gini indices of the left and right daughter nodes. A node is not split if the optimal $Q_{split}$ is smaller than its own $Q(p)$, or, alternatively, if it contains less events than a prespecified limit. Unsplit nodes are called ``leafs", which are depicted as rectangles in \Fig{BDT}. After the $k$th tree is built, the jet weights are updated. There are several methods to accomplish this. Here, we will consider the AdaBoost algorithm~\cite{adaboost}. First, the total misclassification error $\varepsilon_k$ of the tree is calculated: \beq{epsilon} \varepsilon_k = \frac{\sum_{i=1}^{N_{jets}}w_i^{(k)} I_i^{(k)}}{\sum_{i=1}^{N_{jets}}w_i^{(k)}}\,, \eeq where $i$ loops over all jets in the training sample and $I_i^{(k)}$ is an indicator function which is equal to 1 if the $i$th jet was misclassified or equal to 0 if the $i$th jet was correctly classified. Then, the weights of misclassified jets are increased ({\it boosted}) \beq{w1} w_i^{(k+1)} = \frac{w_i^{(k)}}{2\varepsilon_k}\,, \eeq while the weights of correctly classified jets are decreased \beq{w2} w_i^{(k+1)} = \frac{w_i^{(k)}}{2(1-\varepsilon_k)}\,. \eeq Finally, the tree $k+1$ is constructed using the new weights $w^{(k+1)}$. After $M$ trees are trained their performance can be evaluated with a testing sample of jets. The final score of jet $i$ is a weighted sum of the scores over the individual trees \beq{score} F_i = \sum_{k=1}^{M}\beta_kf_i^{(k)}\,, \eeq where \beq{beta} \beta_k = \log\left(\frac{1 - \varepsilon_k}{\varepsilon_k}\right)\,, \eeq and $f_i^{(k)} = 1 (-1)$ if the $k$th tree makes the jet land on a signal (background) leaf. 
Therefore, b-jets will have large positive scores, while u-jets will have large negative scores. Trees with lower misclassification errors $\varepsilon_k$ are given more weight when the jet score is calculated. Further details of the AdaBoost algorithm can be found in \cite{spr}. \section{Monte Carlo simulation} \label{monte_carlo_simulation} The studies described in this paper were done with events generated with PYTHIA 6.319 \cite{pythia}. We considered the environment of the LHC collider, in which $pp$ interactions with a center-of-mass energy of 14 TeV are produced. One of the benchmark channels for b tagging studies at the LHC is the associated $WH$ production. We generated $WH$ events with $m_H$ = 120 GeV/c$^2$, the $W$ boson decaying semileptonically $W\to l\nu$ and the Higgs boson decaying to quark pairs $H\to q\bar{q}$. Initial and final state radiation and multiple interactions were included in the simulation. Tracks are parametrized by the following set of 5 parameters: $d_0$, $z_0$, $\phi$, $\cot\theta$ and $1/p_T$. The transverse impact parameter $d_0$ is the distance of closest approach of the track to the primary vertex in the plane perpendicular to the beam-line. The longitudinal impact parameter $z_0$ is the component along the beam-line of the distance of closest approach. The parameters $\phi$ and $\theta$ are the azimuthal and polar angles of the track, respectively, and $1/p_T$ is the inverse of the particle transverse momentum. In order to simulate measurement errors, these parameters were smeared with Gaussian resolution functions. The transverse and longitudinal impact parameters were smeared with standard deviations $\sigma_{d_0} = 10$~$\mu$m and $\sigma_{z_0} = 100$~$\mu$m, the angle $\phi$ with $\sigma_{\phi} = 0.10$~mrad, $\cot\theta$ with $\sigma_{\cot\theta} = 0.001$ and the inverse of the transverse momentum with $\sigma_{1/p_T}= 0.001$ GeV$^{-1}$. 
The primary vertex positions were smeared with Gaussian resolution functions with $\sigma_x = \sigma_y = 50$~$\mu$m and $\sigma_z = 100$~$\mu$m. A jet is formed by all stable particles inside a cone $\Delta R = \sqrt{(\Delta\phi)^2 + (\Delta\eta)^2} < 0.4$ around its axis, where $\eta = -\log\left(\tan(\theta / 2)\right)$ is the track pseudorapidity. \section{Discriminant variables} \label{discriminant_variables} The physical observables used for discrimination between b-jets and light jets are taken from well known ``spatial" b-tagging algorithms. Physical observables from tagging techniques based on soft leptons are not considered in this analysis. Only jets with $p_T > 15$ GeV/c and $|\eta| < 2.5$ are considered taggable. \subsection{Impact parameter tag} \label{impact_parameter_tag} Due to the long decay distances traveled by B-hadrons, tracks from b-jets have on average larger impact parameters than tracks from light jets, since sizeable impact parameters in light jets are exclusively due to measurement errors. Therefore, the impact parameter of jet tracks can be used to build a useful variable for discrimination between b-jets and light jets. \Fig{ipsig} shows the distributions of (a) signed transverse impact parameter significances $S_{d_0} = d_0 / \sigma_{d_0}$ and (b) signed longitudinal impact parameter significances $S_{z_0} = z_0 / \sigma_{z_0}$ of tracks in b-jets (solid line) and u-jets (dashed line). A positive (negative) sign is assigned to the impact parameter if the track intersects the jet axis in front (behind) of the primary vertex. These distributions give likelihood functions $b(S)$ and $u(S)$ for a track to belong to a b-jet or a u-jet, respectively. A jet weight is defined as the sum of the log-likelihood ratio over all tracks in the jet: \beq{weight} w_{jet} = \sum_{i\in jet} \ln \left(\frac{b(S_i)}{u(S_i)}\right)\,. 
\eeq \begin{figure}[htb] \centering \epsfig{file=d0sig.eps, width=8cm} \\ \epsfig{file=z0sig.eps, width=8cm} \caption{(a) transverse and (b) longitudinal impact parameter significances for tracks in b-jets (solid line) and u-jets (dashed line).} \label{ipsig} \end{figure} In \Fig{weights} it is shown the distribution of jet weights for u and b quarks. Because the transverse impact parameter has better resolution, it yields greater discrimination power. A given efficiency for selecting b-jets is obtained by selecting jets with weights above some threshold level. Obviously, for moderate or high selection efficiencies there will always be some contamination with light jets. \begin{figure}[htb] \centering \epsfig{file=d0weight.eps, width=8cm} \\ \epsfig{file=z0weight.eps, width=8cm} \caption{Jet weight distributions given by the transverse impact parameter (a) and longitudinal impact parameter (b). The solid (dashed) line corresponds to b-jets (u-jets).} \label{weights} \end{figure} \subsection{Secondary vertex tag} \label{secondary_vertex_tag} An alternative approach for building b tagging discriminating variables consists in reconstructing displaced secondary vertex from B- and D-hadron decays inside the jet. Secondary vertices were reconstructed with Billoir and Qian's fast vertex fitting algorithm \cite{billoir_qian}. For purposes of secondary vertex b-tagging the exact topology of the secondary vertex is irrelevant and, therefore, an inclusive vertex search is performed. All jet tracks with large transverse impact parameter significance participate in the vertex fit and vertices compatible with $V^0$ decays are rejected. \Fig{vertex}(a) shows the decay distance significance for b-jets and u-jets for good quality vertices. 
Besides the decay distance significance, other variables associated to the secondary vertex may have discrimination power, such as the vertex mass (\Fig{vertex}(b)) and the ratio between the absolute momentum sum of tracks in the secondary vertex and that of all tracks in the jet (\Fig{vertex}(c)). \begin{figure} \centering \epsfig{file=ddsign.eps, width=7cm} \\ \epsfig{file=vertex_mass.eps, width=7cm} \\ \epsfig{file=mom_ratio.eps, width=7cm} \caption{(a) Decay distance significance of the secondary vertex. (b) Invariant mass of tracks associated to the secondary vertex. (c) Fraction of jet momentum in the secondary vertex. The solid (dashed) line corresponds to b-jets (u-jets).} \label{vertex} \end{figure} \subsection{One-prong tag} \label{one_prong_tag} For one-prong decays of B- and D-hadrons the secondary vertex fit fails. In this situation, though, some information can still be extracted from tracks in the jet. For instance, the maximal transverse and longitudinal impact parameters of jet tracks clearly have discrimination power, as can be observed in \Fig{maxip}. \begin{figure}[htb] \centering \epsfig{file=d0_lead.eps, width=8cm} \\ \epsfig{file=z0_lead.eps, width=8cm} \caption{Maximal (a) transverse and (b) longitudinal impact parameter significances in jets. The solid (dashed) line corresponds to b-jets (u-jets).} \label{maxip} \end{figure} \section{Results} \label{results} \begin{figure}[htb] \centering \epsfig{file=btdscore.eps, width=8cm} \\ \epsfig{file=nnscore.eps, width=8cm} \caption{Jet scores given by (a) boosted decision trees and (b) a neural network, for b-jets (solid line) and u-jets (dashed line).} \label{scores} \end{figure} Boosted decision trees were implemented using the StatPatternRecognition package \cite{spr}. The trees were fed with the 7 discriminant variables mentioned in the previous section and were trained with 50000 b-jet patterns and 50000 u-jet patterns. 
An unbiased evaluation of the boosted decision trees performance is obtained using a distinct sample of b-jets and u-jets patterns (test sample). The best results were obtained with a minimum number of jets per leaf of about 7000. The performance becomes better with increasing number of trees, but no significant improvement was observed after several hundreds of tree iterations. \Fig{scores}(a) shows the jet scores, normalized to be within the interval $\left[0,1\right]$, for the test sample of b-jets (solid line) and u-jets (dashed line). In order to compare the performance of boosted decision trees with the neural network approach, a feed-forward neural network was implemented using the Multi-Layer Perceptron class \cite{mlproot} provided by the data analysis framework ROOT \cite{root}. The architecture of the network consisted of 7 nodes in the input layer (corresponding to the 7 discriminant variables mentioned above), 8 nodes in a single hidden layer and 1 node in the output layer. The network was trained with the Broyden-Fletcher-Goldfarb-Shanno learning method with a learning rate parameter $\eta = 0.1$. The training set consisted of 100000 jet patterns, of which 50000 were b-jets and 50000 were u-jets. Since the magnitude of the discriminant variables differ considerably, which may affect the performance of the neural network, all input variables were normalized. The number of epochs (training cycles) was 1000. Care was taken to prevent overtraining the network by monitoring the evolution of the learning curve. \Fig{scores}(b) shows the jet scores given by the neural network for a test sample of b-jet (solid line) and u-jet (dashed line) patterns. 
\begin{figure}[htb] \centering \epsfig{file=rejections.eps, width=10cm} \caption{Light jet rejection as a function of b-jet efficiency given by boosted decision trees (black circles) and a feed-forward neural network (gray squares).} \label{comparison} \end{figure} Jets with a score above some specified threshold value are tagged as b-jets. The threshold value is contingent on the desired efficiency for tagging b-jets $\varepsilon_b = N_b^{tag} / N_b$, where $N_b$ is the number of b-jets in the data and $N_b^{tag}$ is the number of tagged b-jets, or, alternatively, on the tolerated level of contamination by light jets. \Fig{comparison} shows the light jet rejection, $R_u = \varepsilon_u^{-1}$ as a function of the b-tagging efficiency $\varepsilon_b$, for the test sample given by boosted decision trees (black circles) and the feed-forward neural network (gray squares). For high b-tagging efficiencies there is no significant improvement of the performance of boosted decision trees relative to the neural network. However, for moderate b-tagging efficiencies, boosted decision trees clearly outperform neural networks. For a b-tagging efficiency of 60\%, the light jet rejection given by boosted decision trees is about 35\% higher than that given by neural networks. \section{Conclusions} \label{conclusions} The studies presented in this paper indicate that boosted decision trees outperform neural networks for tagging b-jets, using a Monte Carlo simulation of $WH \to l\nu q\bar{q}$ events, and sensible physical observables as discriminating variables. For a b-tagging efficiency of 60\%, the light jet rejection given by boosted decision trees is about 35\% higher than that given by the neural network approach. Although encouraging, these results should be complemented with studies performed with a full simulation in which detector inefficiencies are considered. 
Also, the performance of both techniques may differ if other physics channels are considered, since it may be affected by jet overlaps and gluon splitting into $b\bar{b}$ pairs. \section*{Acknowledgments} I would like to thank J. Carvalho and A. Onofre for many valuable remarks. This work was supported by grant SFRH/BPD/20616/2004 of Funda\c c\~ao para a Ci\^encia e Tecnologia.
1,314,259,995,165
arxiv
\section{Introduction} The notion of vertex weighted graph, i.e. a graph whose vertices are assigned a non negative integer (the weight), arises naturally in algebraic geometry, as every Deligne-Mumford stable curve has an associated weighted ``dual" graph, and the moduli space of stable curves, $\ov{M_g}$, has a stratification with nice properties given by the loci of curves having a certain weighted graph as dual graph; see \cite{gac}. On the other hand, and more recently, vertex weighted graphs have appeared in tropical geometry in the study of degenerations of tropical curves obtained by letting the lengths of some edges go to zero. To describe the limits of such families, with the above algebro-geometric picture in mind, one is led to consider metric graphs with a weight function on the vertices keeping track of the cycles that have vanished in the degeneration. Such metric weighted graphs are called weighted tropical curves; they admit a moduli space, ${M_g^{\rm trop}}$, whose topological properties have strong similarities with those of $\ov{M_g}$; see \cite{BMV} and \cite{CHBK}. The connections between the algebraic and the tropical theory of curves have been the subject of much attention in latest times, and the topic presents a variety of interesting open problems. Moreover, the combinatorial skeleton of the theory, its graph-theoretic side, has been studied in the weightless case independently of the tropical structure; also in this setting the analogies with the classical theory of algebraic curves are quite compelling; see \cite{BN} and \cite{BN2}. In this paper we are interested in divisor theory. For graphs and tropical curves with no weights the theory has been founded so that there are good notions of linear equivalence, canonical divisor, and rank of a divisor. 
One of the most important facts, as in algebraic geometry, is the Riemann-Roch theorem for the rank, which has been proved in \cite{BN} for loopless, weightless graphs, and in \cite{GK} and \cite{MZ} for weightless tropical curves. The combinatorial theory is linked to the algebro-geometric theory not only by the formal analogies. Indeed, a remarkable fact that connects the two theories is Baker's Specialization Lemma, of \cit {bakersp}. This result has been applied in \cite{CDPR} to obtain a new proof of the famous Brill-Noether theorem for algebraic curves, in \cite {bakersp} to prove the Existence theorem (i.e., the non-emptyness of the Brill-Noether loci when the Brill-Noether number is non-negative) for weightless tropical curves, and in \cite{CBNgraph}, strengthened by generalizing to graphs admitting loops (corresponding to the situation where the irreducible components of the special fiber could have nodal singularities), to prove the Existence theorem for weightless graphs. A Specialization Lemma valid also for weighted graphs could be applied to relate the Brill-Noether loci of $\ov{M_g}$ with those of ${M_g^{\rm trop}}$, or to characterize singular stable curves that lie in the Brill-Noether loci (a well known open problem). The main goal of this paper is to set up the divisor theory for weighted graphs and tropical curves, and to extend the above results. We hope in this way to prompt future developments in tropical Brill-Noether theory; see \cite{len}, for example. We begin by giving a geometric interpretation of the weight structure; namely, we associate to every weighted graph a certain weightless graph, and to every weighted tropical curve what we call a ``pseudo-metric" graph. 
In both cases, the weight of a vertex is given a geometric interpretation using certain ``virtual" cycles attached to that vertex; in the tropical case such cycles have length zero, so that weighted tropical curves bijectively correspond to pseudo-metric graphs; see Proposition~\ref{pseudo}. Intuitively, from the algebro-geometric point of view where a graph is viewed as the dual graph of an algebraic curve, the operation of adding virtual loops at a vertex corresponds to degenerating the irreducible component corresponding to that vertex to a rational curve with a certain number (equal to the weight of the vertex) of nodes, while breaking a loop by inserting a new vertex translates, as in the weightless case, into ``blowing up" the node corresponding to the loop. With these definitions we prove that the Riemann-Roch theorem holds; see Theorem~\ref{wRR} for graphs, and Theorem~\ref{RRwc} for tropical curves. Furthermore, we prove, in Theorem~\ref{spe}, that the Specialization Lemma holds in a more general form taking into account the weighted structure. We note that this is a stronger fact than the specialization lemma for weightless graphs~\cite{BN, CBNgraph}. For example, in the simplest case of a weighted graph consisting of a unique vertex without any edge, the inequalities of~\cite{BN, CBNgraph} become trivial, while the weighted specialization theorem we prove in this paper is equivalent to Clifford's inequality for irreducible curves. Moreover, one easily sees that the operation of adding loops can only result in decreasing the rank of a given divisor, so our weighted specialization lemma gives stronger inequalities and more information on degeneration of line bundles. In fact, the proof of our result is not a simple consequence of the weightless case, and the argument requires some non-trivial algebro-geometric steps. 
\ We wish to express our gratitude to Matt Baker for many stimulating conversations about the contents of this paper, and to Yoav Len for pointing out a gap in an earlier proof of Theorem~\ref{spe}. We also thank the referee for a very accurate report. \section{Preliminaries} \subsection{Divisor theory on graphs} \label{graphprel} Graphs are assumed to be connected, unless otherwise specified. We here extend the set-up of \cite{BN} and \cite{bakersp} to graphs with loops. Our notation is non-sensitive to the presence or non-presence of loops. Let $G$ be a graph and $V(G)$ the set of its vertices. The group of divisors of $G$, denoted by $\operatorname{Div} (G)$, is the free abelian group generated by $V(G)$: $$ \operatorname{Div} (G):=\{\sum _{v\in V(G)}n_vv,\ n_v\in \mathbb{Z} \}. $$ For $D\in \operatorname{Div} (G)$ we write $D=\sum _{v\in V(G)}D(v)v $ where $D(v)\in \mathbb{Z}$. For example, if $D=v_0$ for some $v_0\in V(G)$, we have \begin{displaymath} v_0(v)=\left\{ \begin{array}{ll} 1 \text{ if } v= v_0\\ 0 \text{ otherwise.}\\ \end{array}\right. \end{displaymath} The degree of a divisor $D$ is $\deg D:=\sum_{v\in V(G)} D(v)$. We say that $D$ is {\it effective}, and write $D\geq 0$, if $D(v)\geq 0$ for all $v\in V(G)$. We denote by $\operatorname{Div}_+(G)$ the set of effective divisors, and by $\operatorname{Div} ^d(G)$ (respectively $\operatorname{Div}_+ ^d(G)$) the set of divisors (resp. effective divisors) of degree $d$. Let $G$ be a graph and $\iota:H\hookrightarrow G$ a subgraph, so that we have $V(H)\subset V(G)$. For any $D\in \operatorname{Div}(G)$ we denote by $D_{H}\in \operatorname{Div}(H)$ the restriction of $D$ to $H$. We have a natural injective homomorphism \begin{equation} \label{notres} \iota_*: \operatorname{Div}(H)\longrightarrow \operatorname{Div} (G); \ \ D\mapsto \iota_*D \end{equation} such that $\iota_*D(v)=D(v)$ for every $v\in V(H)$ and $\iota_*D(u)=0$ for every $v\in V(G)\smallsetminus V(H)$. 
\ \noindent {\it Principal divisors.} We shall now define principal divisors and linear equivalence. We set \begin{displaymath} (v\cdot w )=\left\{ \begin{array}{ll} \text{number of edges joining }v \text{ and } w&\text{ if } v\neq w\\ - \operatorname{val}(v) +2 \operatorname{loop}(v)&\text{ if } v= w\\ \end{array}\right. \end{displaymath} where $\operatorname{val}(v)$ is the valency of $v$, and $\operatorname{loop}(v)$ is the number of loops based at $v$. This extends linearly to a symmetric, bilinear ``intersection" product $$ \operatorname{Div}(G) \times \operatorname{Div}(G) \longrightarrow \mathbb{Z}. $$ Clearly, this product does not change if some loops are removed from $G$. \medskip For a vertex $v$ of $G$ we denote by $T_v\in \operatorname{Div}(G)$ the following divisor $$ T_v :=\sum_{w\in V(G)}(v\cdot w )w. $$ Observe that $\deg T_v=0$. \medskip The group $\operatorname{Prin} (G)$ of {\it principal} divisors of $G$ is the subgroup of $\operatorname{Div}(G)$ generated by all the $T_v$: $$ \operatorname{Prin}(G)=<T_v,\ \forall v\in V(G)>. $$ We refer to the divisors $T_v$ as the {\it generators} of $\operatorname{Prin} (G)$. \medskip For any subset $Z\subset V(G)$ we denote by $T_Z\in \operatorname{Prin} (G)$ the divisor \begin{equation} \label{defT} T_Z:=\sum_{v\in Z}T_v. \end{equation} \begin{remark} \label{oneless} For any subset $U\subset V(G)$ such that $|U|=|V(G)|-1$ the set $\{T_v,\ v\in U\}$ freely generates $\operatorname{Prin} (G)$. \end{remark} Let us show that the above definition of principal divisors coincides with the one given in \cite{BN}. Consider the set $ k(G):=\{f:V(G) \to \mathbb{Z} \} $ of integer valued functions on $V(G)$. Then the divisor associated to $f$ is defined in \cite{BN} as $$ {\rm div} (f):= \sum_{v\in V(G)}\sum_{e=vw\in E(G)}(f(v)-f(w))v, $$ and these are defined as the principal divisors in \cite{BN}. 
Now, we have \begin{eqnarray*} {\rm div} (f) \,\, =& \sum_{v\in V(G)}\bigl(\sum_{w\in V(G)\smallsetminus v}(f(v)-f(w))(v\cdot w )\bigr)v\\ \\ =&\sum_{v\in V(G)}\Bigl[\Bigl(\sum_{w\in V(G)\smallsetminus v}(-f(w) (v\cdot w ))\Bigr) -f(v)(v\cdot v)\Bigr]v\\ \\ =&-\sum_{v\in V(G)}\Bigl(\sum_{w\in V(G)}f(w) (v\cdot w )\Bigr)v. \end{eqnarray*} \noindent Fix any $v\in V(G)$ and consider the function $f_{v}:V(G)\to \mathbb{Z}$ such that $f_{v}(v)=1$ and $f_{v}(w)=0$ for all $w\in V(G)\smallsetminus v$. Using the above expression for ${\rm div} (f)$ one checks that $ T_{v}=-{\rm div} (f_{v}). $ As the functions $f_v$ generate $k(G)$, and the divisors $T_v$ generate $\operatorname{Prin} (G)$, the two definitions of principal divisors coincide. \medskip We say that $D,D'\in \operatorname{Div} (G)$ are {\it linearly equivalent}, and write $D\sim D'$, if $D-D'\in \operatorname{Prin} (G)$. We denote by $ \operatorname{Jac} ^d(G)=\operatorname{Div}^d(G)/\sim$ the set of linear equivalence classes of divisors of degree $d$; we set $$ \operatorname{Jac}(G)= \operatorname{Div} (G)/\operatorname{Prin} (G). $$ \begin{remark} \label{connect} If $d=0$ then $ \operatorname{Jac} ^0(G)$ is a finite group, usually called the {\it Jacobian group} of $G$. This group has several other incarnations, most notably in combinatorics and algebraic geometry. We need to explain the connection with \cite{cner}. If $X_0$ is a nodal curve with dual graph $G$ (see section~\ref{secspe}), the elements of $\operatorname{Prin} (G)$ correspond to the multidegrees of some distinguished divisors on $X_0$, called {\it twisters}. This explains why we denote by a decorated ``$T$" the elements of $\operatorname{Prin} (G)$. See Remark~\ref{connect2} for more details. The Jacobian group $ \operatorname{Jac} ^0(G)$ is the same as the {\it degree class group} $\Delta_X$ of \cite{cner}; similarly, we have $ \operatorname{Jac} ^d(G)=\Delta^d_X.
$ \end{remark} Let $D\in \operatorname{Div}(G)$; in analogy with algebraic geometry, one denotes by $$ |D|:=\{E\in \operatorname{Div} _+(G): E\sim D\} $$ the set of effective divisors equivalent to $D$. Next, the {\it rank}, $r_{G}(D)$, of $D\in \operatorname{Div}(G)$ is defined as follows. If $ |D|=\emptyset $ we set $r_{G}(D)=-1$. Otherwise we define \begin{equation} \label{rank} r_{G}(D):=\max\{k\geq 0: \ \forall E\in \operatorname{Div}^k_+(G) \ \ |D-E| \neq \emptyset \}. \end{equation} \begin{remark} The following facts follow directly from the definition. \noindent If $D\sim D'$, then $r_{G}(D)=r_{G}(D')$. \noindent If $\deg D<0$, then $r_{G}(D)=-1$. Let $\deg D=0$; then $r_{G}(D)\leq 0$ with equality if and only if $D\in \operatorname{Prin} (G)$. \end{remark} \ \noindent {\it Refinements of graphs.} Let $\widetilde{G}$ be a graph obtained by adding a finite set of vertices in the interior of some of the edges of $G$. We say that $\widetilde{G}$ is a {\it refinement} of $G$. We have a natural inclusion $V(G)\subset V(\widetilde{G})$; denote by $U:=V(\widetilde{G})\smallsetminus V(G)$ the {\it new} vertices of $\widetilde{G}$. We have a natural map \begin{equation} \label{notref} \sigma^*:\operatorname{Div}(G)\longrightarrow \operatorname{Div} (\widetilde{G});\ \ D\mapsto \sigma^*D \end{equation} such that $\sigma^*D(v)=D(v)$ for every $v\in V(G)$ and $\sigma^*D(u)=0$ for every $u\in U$. It is clear that $\sigma^*$ induces an isomorphism of $ \operatorname{Div}(G)$ with the subgroup of divisors on $\widetilde{G}$ that vanish on $U$. The notation $\sigma^*$ is motivated in Remark~\ref{pushrk}. A particular case that we shall use a few times is that of a refinement of $G$ obtained by adding the same number, $n$, of vertices in the interior of every edge; we denote by $G^{(n)}$ this graph, and refer to it as the {\it $n$-subdivision} of $G$. \begin{remark} \label{pushrk} Let $G$ be a graph and $e\in E(G)$ a fixed edge.
Let $\widetilde{G}$ be the refinement obtained by inserting only one vertex, $\widetilde{v}$, in the interior of $e$. Let $v_1,v_2\in V(G)$ be the end-points of $e$, so that they are also vertices of $\widetilde{G}$. Note that $\widetilde{G}$ has a unique edge $\widetilde{e}_1$ joining $v_1$ to $\widetilde{v}$, and a unique edge $\widetilde{e}_2$ joining $v_2$ to $\widetilde{v}$. Then the contraction of, say, $\widetilde{e}_1$ is a morphism of graphs $$ \sigma :\widetilde{G} \longrightarrow G. $$ There is a natural pull-back map $ \sigma^*:\operatorname{Div}(G)\to \operatorname{Div}(\widetilde{G})$ associated to $\sigma$, which maps $D\in \operatorname{Div}(G)$ to $\sigma^*D\in \operatorname{Div}(\widetilde{G})$ such that $\sigma^*D(\widetilde{v})=0$, and $\sigma^*D$ is equal to $D$ on the remaining vertices of $\widetilde{G}$, which are of course identified with the vertices of $G$. \noindent By iterating, this construction generalizes to any refinement of $G$. From this description, we see that the map $\sigma^*$ coincides with the map we defined in (\ref{notref}), and also that it does not change if we define it by choosing as $\sigma$ the map contracting $\widetilde{e}_2$ instead of $\widetilde{e}_1$. In the sequel, we shall sometimes simplify the notation and omit the map $\sigma^*$, viewing (\ref{notref}) as an inclusion. \end{remark} \subsection{Cut vertices} \label{cutsubs} Let $G$ be a graph with a cut vertex, $v$. Then we can write $G = H_1\vee H_2$ where $H_1$ and $H_2$ are connected subgraphs of $G$ such that $V(H_1)\cap V(H_2)= \{ v\}$ and $E(H_1)\cap E(H_2)= \emptyset$. We say that $G = H_1\vee H_2$ is a decomposition associated to $v$. Pick $D_j\in \operatorname{Div}(H_j)$ for $j=1,2$; then we define $D_1+D_2\in\operatorname{Div} (G)$ as follows: \begin{displaymath} (D_1+D_2)(u)=\left\{ \begin{array}{ll} D_1(v)+D_2(v) &\text{ if } u=v\\ D_1(u) &\text{ if } u\in V(H_1)-\{v\}\\ D_2(u) &\text{ if } u\in V(H_2)-\{v\}.\\ \end{array}\right.
\end{displaymath} \begin{lemma} \label{sepvertex} Let $G$ be a graph with a cut vertex and let $G = H_1\vee H_2$ be a corresponding decomposition (as described above). Let $j=1,2$. \begin{enumerate} \item \label{sepvertex1} The map below is a surjective homomorphism with kernel isomorphic to $\mathbb{Z}$ \begin{equation} \label{summap} \operatorname{Div} (H_1)\oplus \operatorname{Div} (H_2) \longrightarrow \operatorname{Div} (G); \quad \quad (D_1,D_2)\mapsto D_1+D_2 \end{equation} and it induces an isomorphism $ \operatorname{Prin}(H_1)\oplus \operatorname{Prin}(H_2)\cong \operatorname{Prin} (G) $ and an exact sequence $$ 0\longrightarrow \mathbb{Z}\longrightarrow \operatorname{Jac} (H_1)\oplus \operatorname{Jac} (H_2)\longrightarrow \operatorname{Jac} (G)\longrightarrow 0. $$ \item \label{sepvertex2} We have a commutative diagram with injective vertical arrows $$ \xymatrix{ 0 \ar[r] & \operatorname{Prin}(G) \ar[r] & \operatorname{Div}(G) \ar[r] & \operatorname{Jac}(G) \ar[r] & 0 \\ 0 \ar[r] & \operatorname{Prin}(H_j) \ar[r] \ar@{^{(}->}[u]& \operatorname{Div}(H_j) \ar[r]\ar@{^{(}->}[u] & \operatorname{Jac}(H_j) \ar[r] \ar@{^{(}->}[u] & 0 \\ } $$ \item \label{sepvertex3} For every $D_1,D_2$ with $D_j\in \operatorname{Div} (H_j)$, we have $$ r_{G}(D_1+D_2)\geq \min \{r_{H_1}(D_1),r_{H_2}(D_2)\}. $$ \item \label{sepvertex4} For every $D_j\in \operatorname{Div}(H_j)$, we have $r_{H_j}(D_j)\geq r_G(D_j)$. \end{enumerate} \end{lemma} \begin{proof} Denote $V(H_j)=\{u_1^j,\ldots, u_{n_j}^j, v\}$ and $V(G)=\{u_1^1,\ldots, u_{n_1}^1, v,u_1^2,\ldots, u_{n_2}^2\}$. \medskip (\ref{sepvertex1}). An equivalent way of defining the divisor $D_1+D_2$ is to use the two maps $\iota_*^j:\operatorname{Div} (H_j)\to \operatorname{Div} (G)$ defined in (\ref{notres}). Then we have $D_1+D_2=\iota_*^1D_1+\iota_*^2D_2$. With this description, it is clear that the map in part (\ref{sepvertex1}) is a surjective homomorphism. 
In addition, the kernel of this map has generator $(v , - v) \in \operatorname{Div}(H_1)\oplus \operatorname{Div}(H_2)$ and is thus isomorphic to $\mathbb Z$. To distinguish the generators of $\operatorname{Prin}(H_j)$ from those of $\operatorname{Prin} (G)$ we denote by $T^j_w\in \operatorname{Prin}(H_j)$ the generator corresponding to $w\in V(H_j)$. We clearly have $$ \iota_*^jT^j_{u_h^j}=T_{u_h^j} $$ for $j=1,2$ and $h=1,\ldots,n_j$. As $\operatorname{Prin}(H_j)$ is freely generated by $T^j_{u_1 ^j},\ldots, T^j_{u_{n_j}^j}$ and $\operatorname{Prin} (G)$ is freely generated by $T_{u_1 ^1},\ldots, T_{u_{n_1}^1},T_{u_1 ^2},\ldots, T_{u_{n_2}^2}$, the first part is proved. Part (\ref{sepvertex2}) also follows from the previous argument. (\ref{sepvertex3}). Set $r_j=r_{H_j}(D_j)$ and assume $r_1\leq r_2$. Set $D=D_1+D_2$; to prove that $r_{G}(D)\geq r_1$ we must show that for every $E\in \operatorname{Div}_+^{r_1}(G)$ there exists $T\in \operatorname{Prin} (G)$ such that $D-E+T\geq 0$. Pick such an $E$; let $E_1=E_{H_1}$ and $E_2=E-E_1$, so that $E_2\in \operatorname{Div} H_2$. Since $\deg E_j\leq r_1\leq r_j$ we have that there exists $T_j\in \operatorname{Prin}(H_j)$ such that $D_j-E_j+T_j\geq 0$ in $H_j$. By the previous part $T=T_1+T_2\in \operatorname{Prin} (G)$; let us conclude by showing that $D-E+T\geq 0$. In fact $$ D-E+T=D_1+D_2-E_1-E_2+T_1+T_2=(D_1-E_1+T_1)+(D_2-E_2+T_2)\geq 0. $$ (\ref{sepvertex4}). Assume $j=1$ and set $r=r_G(D_1)$. By (\ref{sepvertex2}) we are free to view $\operatorname{Div}(H_1)$ as a subset of $\operatorname{Div}(G)$. Pick $E\in \operatorname{Div}^r_+(H_1)$; then there exists $T\in \operatorname{Prin}(G)$ such that in $G$ we have $D_1-E+T\geq 0$.
By \eqref{sepvertex1} we know that $T=T_1+T_2$ with $T_i\in \operatorname{Prin}(H_i)$; since $D_1(u_h^2)=E(u_h^2)=0$ for all $h=1,\ldots, n_2$ we have that $T_2=0$, hence $D_1-E+T_1\geq 0$ in $H_1$. \end{proof} Now let $G=H_1\vee H_2$ as above and let $m,n$ be two nonnegative integers; we denote by $G^{(m,n)}$ the graph obtained by inserting $m$ vertices in the interior of every edge of $H_1$ and $n$ vertices in the interior of every edge of $H_2$. Hence we can write $G^{(m,n)} := H_1^{(m)} \vee H_2^{(n)}$ (recall that $H^{(m)}$ denotes the $m$-subdivision of a graph $H$). We denote by $\sigma^*_{m,n}:\operatorname{Div}(G) \to \operatorname{Div}(G^{(m,n)})$ the natural map. \begin{prop} \label{sepref} Let $G$ be a graph with a cut vertex and $G = H_1\vee H_2$ a corresponding decomposition. Let $m,n$ be non-negative integers and $G^{(m,n)}=H_1^{(m)} \vee H_2^{(n)}$ the corresponding refinement. Then \begin{enumerate} \item \label{sepref1} $\sigma^*_{m,n}(\operatorname{Prin}(G))\subset \operatorname{Prin}(G^{(m,n)})$. \item \label{sepref2} Assume that $G$ has no loops. Then for every $D\in \operatorname{Div}(G)$, we have $$r_G(D) = r_{G^{(m,n)}}(\sigma_{m,n}^*D).$$ \end{enumerate} \end{prop} \begin{proof} It is clear that it suffices to prove part (\ref{sepref1}) for $(m,0)$ and $(0,n)$ separately; by symmetry, it suffices to prove it for $(0,m)$. Consider the map (for simplicity we write $\sigma^*=\sigma_{0,m}^*$) $$ \sigma^*:\operatorname{Div}(G)=\operatorname{Div} (H_1 \vee H_2)\to \operatorname{Div} (H_1 \vee H_2^{(m)})=\operatorname{Div}(G^{(0,m)}). $$ The group $\operatorname{Prin}(G)$ is generated by $\{T_u,\ \forall u\in V(G)\smallsetminus \{v\}\}$ (see Remark~\ref{oneless}). Hence it is enough to prove that $\sigma^*(T_u)$ is principal for all $u\in V(G)\smallsetminus \{v\}$. We denote by $\widehat{u}\in V(G^{(0,m)})$ the vertex corresponding to $u\in V(G)$ via the inclusion $V(G)\subset V(G^{(0,m)})$.
If $u\in V(H_1)\smallsetminus \{v\}$ we clearly have $\sigma^*(T_u)= T_{\widehat{u}}$, hence $\sigma^*(T_u)\in \operatorname{Prin}(G^{(0,m)})$. Let $u\in V(H_2)\smallsetminus \{v\}$. Denote by $E_u(G)$ the set of edges of $G$ adjacent to $u$ and pick $e\in E_u(G)$; as $G^{(0,m)}$ is given by adding $m$ vertices in every edge of $H_2$, we will denote the vertices added in the interior of $e$ by $$ \{w_1^{e},\ldots, w_m^{e}\}\subset V(G^{(0,m)}), $$ ordering $w_1^{e},\ldots, w_m^{e}$ according to the orientation of $e$ which has $u$ as target, so that in $G^{(0,m)}$ we have $(w_m^{e}\cdot \widehat{u})=1$ and $(w_i^{e}\cdot \widehat{u})=0$ if $i<m$ (and $(w_i^e\cdot w_{i+1}^e)=1$ for all $i$). One then easily checks that $$ \sigma^*(T_u)= (m+1)T_{\widehat{u}}+\sum_{e\in E_u(G)} \sum_{i=1}^m iT_{w_i^e}; $$ hence $\sigma^*(T_u)\in \operatorname{Prin}(G^{(0,m)})$, and part (\ref{sepref1}) is proved. Part (\ref{sepref2}). First we note that the statement holds in the case $m=n$. Indeed, in this case $G^{(n,n)}=G^{(n)}$ and hence our statement is \cite[Cor. 22]{HKN}; see also \cite[Thm 1.3]{luo}. Using this fact, we claim that it will be enough to show only the inequality \begin{equation} \label{ineq*} r_G(D) \leq r_{G^{(m,n)}}(\sigma_{m,n}^*D). \end{equation} Indeed, suppose this inequality holds for every divisor $D$ on every graph of the form $G=H_1\vee H_2$ and for all pairs of integers $(m,n)$. Pick a divisor $D\in \operatorname{Div}(G)$; omitting the maps $\sigma_{\ldots}^*$ for simplicity (which creates no ambiguity, as the subscript of $r$ already indicates in which graph we are computing the rank), we get $$ r_G(D)\leq r_{G^{(m,n)}} (D) \leq r_{(G^{(m,n)})^{(n,m)}} (D)= r_{G^{(l,l)}} (D)= r_G(D) $$ where $l=m+n+mn$. (We used the trivial fact that for any graph $H$ and positive integers $h,k$ we have $(H^{(h)})^{(k)}= H^{(h+k+hk)}$.) Hence all the inequalities above must be equalities and the result follows. Thus, we are left to prove Inequality~\eqref{ineq*}.
Let $r=r_{G}(D)$. We have to show that for any effective divisor $E^*$ on $G^{(m,n)}$ of degree $r$ we have $$ r_{G^{(m,n)}}( \sigma_{m,n}^*D - E^*)\geq 0. $$ By \cite[Thm. 1.5]{luo} (or \cite{HKN}), $V(G)$ is a rank-determining set in $G^{(m,n)}$. Therefore it will be enough to show the above claim for divisors of the form $E^*=\sigma^*_{m,n}E$ for any effective divisor $E$ of degree $r$ on $G$. Summarizing, we need to show that for every $E\in \operatorname{Div} ^r_+(G)$ there exists $T\in \operatorname{Prin} (G^{(m,n)})$ such that \begin{equation} \label{summ} T + \sigma_{m,n}^*D - \sigma_{m,n}^*E \geq 0. \end{equation} Now, since $r=r_G(D)$, there exists a principal divisor $\widetilde{T}\in \operatorname{Prin}(G)$ such that $$ \widetilde{T} + D-E \geq 0. $$ By the previous part, $\sigma_{m,n}^*\widetilde{T}$ is a principal divisor of $G^{(m,n)}$; set $T:=\sigma_{m,n}^*\widetilde{T}$. Then we have $$ 0\leq \sigma_{m,n}^*(\widetilde{T} + D-E)=T+ \sigma_{m,n}^*D - \sigma_{m,n}^*E. $$ Therefore (\ref{summ}) holds, and we are done. \end{proof} \section{Riemann-Roch for weighted graphs} \subsection{Divisor theory for graphs with loops} Our goal here is to set up a divisor theory for graphs with loops, so that the Riemann-Roch theorem holds. The Riemann-Roch theorem has been proved for loopless graphs in \cite{BN}; to generalize it we shall give a more subtle definition for the rank and for the canonical divisor. \begin{defi} \label{loopless} Let $G$ be a graph and let $\{ e_1,\ldots, e_c\} \subset E(G)$ be the set of its loop-edges. We denote by $\widehat{G}$ the graph obtained by inserting one vertex in the interior of the loop-edge $e_j$, for all $j=1,\ldots,c$. Since $V(G)\subset V(\widehat{G})$, we have a canonical injective morphism \begin{equation} \label{sigman} \sigma^*:\operatorname{Div}(G) \stackrel{}{\longrightarrow} \operatorname{Div}(\widehat{G}) . 
\end{equation} We set \begin{equation} \label{defr} r^{\#}_{G}(D):=r_{\widehat{G}}(\sigma^*D), \end{equation} and refer to $r^{\#}_{G}(D)$ as the {\it rank} of $D$. \end{defi} The superscript ``$\#$" is used to avoid confusion with the definition of the rank given in \eqref{rank}, which disregards the loops. We often abuse notation and write just $r_{\widehat{G}}(D)$ omitting $\sigma^*$. Observe that $\widehat{G}$ is free from loops and has the same genus as $G$. (Recall that the genus of a connected graph $G = (V,E)$ is by definition equal to $|E|-|V|+1$.) With the above notation, let $u_j\in V(\widehat{G})$ be the vertex added in the interior of $e_j$ for all $j=1,\ldots,c$. It is clear that the map \eqref{sigman} induces an isomorphism of $\operatorname{Div}(G)$ with the subgroup of divisors $\widehat{D}$ on $\widehat{G}$ such that $\widehat{D}(u_j)=0$ for all $j=1,\ldots,c$. \begin{example} Here is an example in the case $c=1$. \begin{figure}[h] \begin{equation*} \xymatrix@=.5pc{ &&&&&&&&&&&&&&\\ G = &&\ar@{-}@(ul,dl)*{\bullet}\ar @{-}[rrr]\ar @{-} @/_.9pc/[rrr]_(.1)v_(.9)w \ar@{-} @/^.9pc/[rrr] &&& *{\bullet} &&&&&&& {\widehat{G}} = &&*{\bullet} \ar @{-} @/_.9pc/[rrr]_(.1){u_1} \ar@{-} @/^.9pc/[rrr] &&& *{\bullet}\ar @{-} @/_.9pc/[rrr]_(.1)v_(.9)w \ar@{-} @/^.9pc/[rrr]\ar @{-}[rrr] &&& *{\bullet} &&&\\ &&&&&&&&&&&&&&\\ } \end{equation*} \end{figure} \end{example} \begin{remark} We have \begin{equation} \label{BI} r_G(D)\geq r^{\#}_G(D). \end{equation} Indeed, let $G_0$ be the graph obtained from $G$ by removing all its loop-edges; then, by definition, $r_G(D)=r_{G_0}(D)$. On the other hand, by Lemma~\ref{sepvertex} \eqref{sepvertex4}, writing $\widehat{G}=G_0\vee H$ for some graph $H$, we have $r_{G_0}(D) \geq r_{\widehat{G}}(D)=r^{\#}_G(D)$, hence \eqref{BI} follows. \end{remark} Definition~\ref{loopless} may seem a bit arbitrary, as it rests on the choice of the refinement $\widehat{G}$.
In fact, it is natural to ask whether adding some (positive) number of vertices, different from one, in the interior of the loop-edges of $G$ can result in a different rank. This turns out not to be the case, as we now show. \begin{prop}\label{thm:refinement} Let $G$ be a graph and let $e_1,\ldots, e_c$ be its loop-edges. For every $\underline{n}=(n_1,\ldots, n_c)\in \mathbb{Z}_{>0}^c$ let $G^{(\underline{n})}$ be the refinement of $G$ obtained by inserting $n_i$ vertices in the interior of $e_i$. Then for every $D\in \operatorname{Div} G$ we have $$ r^{\#}_G (D) = r_{G^{(\underline{n})}}(\sigma^*D) $$ where $\sigma^*:\operatorname{Div}(G)\hookrightarrow \operatorname{Div}(G^{(\underline{n})})$ is the natural map. \end{prop} \begin{proof} It will be enough to prove the proposition for $c=1$, since the general statement can be obtained easily by induction on the number of loop-edges of $G$. Let $H_1$ be the graph obtained from $G$ by removing its loop-edge, $e$, and let $v$ be the vertex of $G$ adjacent to $e$. We can thus decompose $G$ with respect to $v$: $$ G=H_1\vee C_1 $$ where, for $m\geq 1$, we denote by $C_m$ the ``$m$-cycle", i.e., the 2-regular graph of genus 1, having $m$ vertices and $m$ edges. Observe that for every $h\geq 1$ we have (recall that $C_m^{(h)}$ denotes the $h$-subdivision of $C_m$) \begin{equation} \label{Cmh} C_m^{(h)}=C_{m(h+1)}. \end{equation} Therefore, with the notation of Proposition~\ref{sepref}, we have, for every $n\geq 0$, \begin{equation} \label{G0n} G^{(0,n)} = H_1^{(0)}\vee C_{1}^{(n)} =H_1\vee C_{n+1}. \end{equation} For any divisor $D$ on $G$, by definition, we have $$ r^{\#}_{G}(D) =r_{G^{(0,1)}}(\sigma_{0,1}^* D). $$ So we need to prove that for any $n\geq 1$, \begin{equation} \label{end} r_{G^{(0,1)}}(\sigma_{0,1}^*D) = r_{G^{(0,n)}}(\sigma_{0,n}^*D). \end{equation} This is now a simple consequence of Proposition~\ref{sepref} (\ref{sepref2}).
Indeed, by applying it to the loopless graph $G^{(0,1)}=H_1\vee C_2$ and the $n$-subdivision of $C_2$, we get, simplifying the notation by omitting the pull-back maps $\sigma^*_{\dots}$, $$ r^{\;}_{G^{(0,1)}}(D)= r^{\;}_{(G^{(0,1)})^{(0,n)}}(D)=r_{H_1\vee C_2^{(n)}}(D)=r^{\;}_{H_1\vee C_{2n+2}^{\;}}(D) $$ by (\ref{Cmh}). On the other hand, applying the proposition a second time to $G^{(0,n)}=H_1\vee C_{n+1}$ and the $1$-subdivision of $C_{n+1}$, we get $$ r^{\;}_{G^{(0,n)}}(D)= r^{\;}_{(G^{(0,n)})^{(0,1)}}(D)= r_{H_1\vee C_{n+1}^{(1)}}(D)=r^{\;}_{H_1\vee C_{2n+2}^{\;}}(D). $$ The last two equalities prove (\ref{end}), hence the result is proved. \end{proof} \begin{remark} \label{linrk} The definition of linear equivalence for divisors on a graph with loops can be taken to be the same as in Subsection~\ref{graphprel}. Indeed, let $D,D'\in \operatorname{Div} (G)$; then $D$ and $D'$ can be viewed as divisors on the graph $G_0$ obtained from $G$ by removing all the loop-edges, or as divisors on the graph $\widehat{G}$. By Lemma~\ref{sepvertex} we have that $D$ and $D'$ are linearly equivalent on $G_0$ if and only if they are linearly equivalent on $\widehat{G}$. \noindent It is thus obvious that if $D\sim D'$ for divisors in $\operatorname{Div}(G)$, then $r^{\#}_G(D)=r^{\#}_G(D')$. \end{remark} The canonical divisor $K^{\#}_{\G}\in \operatorname{Div}(G)$ of $G$ is defined as follows \begin{equation} \label{defK} K^{\#}_{\G}:=\sum_{v\in V(G)} (\operatorname{val} (v) -2)v. \end{equation} \begin{thm} \label{RRloop} Let $G$ be a graph with $c$ loops, and let $D\in \operatorname{Div}(G)$. \begin{enumerate} \item \label{RR} \emph{(}Riemann-Roch theorem\emph{)} $$r^{\#}_{G}(D)-r^{\#}_{G}(K^{\#}_{\G}-D)=\deg D -g +1. $$ In particular, we have $r^{\#}_{G}(K^{\#}_{\G})=g-1$ and $\deg K^{\#}_{\G}=2g-2$.
\item \label{Riemann} \emph{(}Riemann theorem\emph{)} If $\deg D\geq 2g-1$ then $$r^{\#}_{G}(D)=\deg D-g.$$ \end{enumerate} \end{thm} \begin{proof} Let $U=\{u_1,\ldots, u_c\}\subset V(\widehat{G})$ be the set of vertices added to $G$ to define $\widehat{G}$. The canonical divisor $K_{\widehat{G}}$ of $\widehat{G}$ is $$ K_{\widehat{G}}=\sum_{\widehat{v}\in V(\widehat{G})} (\operatorname{val} (\widehat{v}) -2)\widehat{v} =\sum_{\widehat{v}\in V(\widehat{G})\smallsetminus U} (\operatorname{val} (\widehat{v}) -2)\widehat{v} $$ because the vertices in $U$ are all 2-valent. On the other hand we have an identification $V(G)=V(\widehat{G})\smallsetminus U$ and it is clear that this identification preserves the valencies. Therefore, by definition (\ref{defK}) we have $$ \sigma^* K^{\#}_{\G}= K_{\widehat{G}}. $$ Hence, since the map (\ref{sigman}) is a degree-preserving homomorphism, $$ r^{\#}_{G}(D)-r^{\#}_{G}(K^{\#}_{\G}-D)=r_{\widehat{G}}(\sigma^*D)-r_{\widehat{G}}(K_{\widehat{G}} -\sigma^*D)=\deg D-g+1 $$ where, in the last equality, we applied the Riemann-Roch formula for loopless graphs (proved by Baker-Norine in \cite{BN}), together with the fact that $G$ and $\widehat{G}$ have the same genus. Part (\ref{Riemann}) follows from the Riemann-Roch formula we just proved, noticing that, if $\deg D\geq 2g-1$, then $\deg (K^{\#}_{\G}-D)<0$ and hence $r^{\#}_{G}(K^{\#}_{\G}-D)=-1$. \end{proof} The next Lemma, which we will use later, computes the rank of a divisor on the so-called ``rose with $g$ petals", or ``bouquet of $g$ loops", $R_g$. \begin{lemma} \label{rose} Set $g\geq 1$ and $d\leq 2g$. Let $R_g$ be the connected graph of genus $g$ having only one vertex (and hence $g$ loop-edges). For the unique divisor $D\in \operatorname{Div} ^d(R_g)$ we have $$ r^{\#}_{R_g}(D) =\left\lfloor{\frac{d}{2}}\right\rfloor. $$ \end{lemma} \begin{proof} Let $v$ be the unique vertex of $G=R_g$, hence $D=dv$.
To compute $r^{\#}_{R_g}(D)$ we must use the refinement $\widehat{G}$ of $R_g$ defined above. In this case $\widehat{G}$ is the 1-subdivision of $R_g$. So $V(\widehat{G})=\{\widehat{v}, u_1,\ldots, u_g\}$ with each $u_i$ of valency $2$, and $\widehat{v}$ of valency $2g$. We have $u_i\cdot \widehat{v}=2$ for all $i=1,\ldots, g$, and $u_i\cdot u_j=0$ for all $i\neq j$. Let $\widehat{D}=d\widehat{v}$ be the pull-back of $D$ to $\widehat{G}$. Set $r:=\left\lfloor{\frac{d}{2}}\right\rfloor$. We will first prove that $r_{\widehat{G}}(\widehat{D})\geq r$. Let $E$ be a degree $r$ effective divisor on $\widehat{G}$; then for some $I\subset \{1,\ldots,g\}$ we have $$E=e_0\widehat{v}+\sum_{i\in I} e_i u_i $$ with $e_0\geq 0$, $e_i> 0$ for $i\in I$, and $e_0+\sum_{i\in I} e_i=r$. Notice that $|I|\leq r$. Now, $$ \widehat{D}-E\sim d\widehat{v}-e_0\widehat{v}-\sum_{i\in I} e_i u_i-\sum_{i\in I}\left\lceil{\frac{e_i}{2}}\right\rceil T_{u_i}=:F. $$ Let us prove that $F\geq 0$. Recall that $T_{u_i}(\widehat{v})=2$, hence $$ F(\widehat{v})=d-e_0-2\sum_{i\in I} \left\lceil{\frac{e_i}{2}}\right\rceil \geq d-e_0-\sum_{i\in I}(e_i+1)\geq 2r-r - |I| = r-|I|\geq 0 $$ as, of course, $|I|\leq r$. Next, since $T_{u_i}(u_i)=-2$ and $T_{u_i}(u_j)=0$ if $i\neq j$, we have for all $i\in I$, $$ F(u_i)=-e_i+2\left\lceil{\frac{e_i}{2}}\right\rceil\geq 0, $$ and $F(u_j)=0$ for all $j\not\in I$. Therefore $r_{\widehat{G}}(\widehat{D})\geq r$. Finally, since $d\leq 2g$, we can apply Clifford's theorem \cite[Cor. 3.5]{BN}, which gives $r_{\widehat{G}}(\widehat{D})\leq d/2$; therefore equality must hold. \end{proof} \subsection{Divisors on weighted graphs.} \label{wgsec} Let $(G, \omega )$ be a {\it weighted graph}, by which we mean that $G$ is an ordinary graph and $\omega :V(G)\to \mathbb{Z}_{\geq 0}$ a {\it weight} function on the vertices. The genus, $g(G, \omega )$, of $(G,\omega)$ is \begin{equation} \label{gw} g(G, \omega )=b_1(G)+\sum_{v\in V(G)}\omega (v).
\end{equation} We associate to $(G, \omega )$ a weightless graph $G^{\omega} $ as follows: $G^{\omega} $ is obtained by attaching at every vertex $v$ of $G$,\ $\omega (v)$ loops (or ``1-cycles"), denoted by $C_v^1,\ldots ,C^{\omega (v)}_v.$ We call $G^{\omega} $ the {\it virtual} (weightless) graph of $(G,\omega )$, and we say that the $C^i_v$ are the virtual loops. The initial graph $G$ is a subgraph of $G^{\omega} $ and we have an identification \begin{equation} \label{vertid} V(G)=V(G^\omega ). \end{equation} It is easy to check that \begin{equation} \label{wg} g (G, \omega )=g(G^\omega ). \end{equation} For the group of divisors of the weighted graph $(G, \omega )$, we have \begin{equation} \label{wdiv} \operatorname{Div} (G, \omega )=\operatorname{Div} (G^\omega )=\operatorname{Div}(G). \end{equation} The canonical divisor of $(G, \omega )$ is defined as the canonical divisor of $G^{\omega} $, introduced in the previous section, namely, \begin{equation} \label{wcan} K_{(G, \omega )}:= K^{\#}_{G^{\omega} }=\sum_{v\in V(G^\omega )} (\operatorname{val}_{G^{\omega} }(v) -2)v. \end{equation} Note that $K_{(G, \omega )}\in \operatorname{Div}(G,\omega )$. By (\ref{wg}) and Theorem~\ref{RRloop} we have $$ \deg K_{(G, \omega )} =2g (G, \omega )-2. $$ For any $D\in \operatorname{Div}(G, \omega )$ we define (cf. Definition~\ref{loopless}) \begin{equation} \label{defrw} r_{(G, \omega )}(D):=r^{\#}_{G^{\omega} }(D)=r_{\widehat{G^{\omega}}}(D). \end{equation} \begin{thm} \label{wRR} Let $(G,\omega)$ be a weighted graph. \begin{enumerate} \item For every $D\in \operatorname{Div}(G, \omega)$ we have $$ r_{(G, \omega )}(D)-r_{(G, \omega )}(K_{(G, \omega )}-D)=\deg D-g+1. $$ \item \label{wRR2} For every $D,D'\in \operatorname{Div}(G)$ such that $D\sim D'$, we have $r_{(G, \omega )}(D)=r_{(G, \omega )}(D')$. \end{enumerate} \end{thm} \begin{proof} The first part is an immediate consequence of Theorem~\ref{RRloop}. 
For \eqref{wRR2}, recall Remark~\ref{linrk}; we have that $D\sim D'$ on $G$ if and only if $D$ and $D'$ are equivalent on the graph $G_0$ obtained by removing all loop-edges from $G$. Now, $G_0$ is a subgraph of $\widehat{G^{\omega}}$, moreover $\widehat{G^{\omega}}$ is obtained from $G_0$ by attaching a finite set of 2-cycles at some vertices of $G_0$. Therefore, by iterated applications of Lemma~\ref{sepvertex}, we have that $D$ is linearly equivalent to $D'$ on $\widehat{G^{\omega}}$. Hence the statement follows from the fact that $r_{\widehat{G^{\omega}}}$ is constant on linear equivalence classes of $\widehat{G^{\omega}}$. \end{proof} \section{Specialization Lemma for weighted graphs} \label{secspe} In this section we fix an algebraically closed field and assume that all schemes are of finite type over it. By ``point" we mean closed point. By {\it nodal curve} we mean a connected, reduced, projective, one-dimensional scheme, having at most nodes (ordinary double points) as singularities. All curves we shall consider in this section are nodal. Let $X$ be a nodal curve; its {\it dual graph}, denoted by $G_X$, is such that $V(G_X)$ is identified with the set of irreducible components of $X$, $E(G_X)$ is identified with the set of nodes of $X$, and there is an edge joining two vertices for every node lying at the intersection of the two corresponding components. In particular, the loop-edges of $G_X$ correspond to the nodes of the irreducible components of $X$. The {\it weighted dual graph} of $X$, denoted by $(G_X,\omega_X)$, has $G_X$ as defined above, and the weight function $\omega_X$ is such that $\omega_X(v)$ is the geometric genus of the component of $X$ corresponding to $v$. In particular, let $g_X$ be the (arithmetic) genus of $X$, then $$ g_X=b_1(G_X)+\sum_{v\in V(G_X)}\omega_X(v). 
$$ \subsection{Specialization of families of line bundles on curves} Let $\phi:\mathcal X\to B$ be a family of curves, and denote by $\pi: \operatorname{Pic}_{\phi}\to B$ its Picard scheme (often denoted by $\operatorname{Pic}_{\mathcal X/B}$). The set of sections of $\pi$ is denoted as follows: $$ \operatorname{Pic}_{\phi} (B):=\{\mathcal L:B\to \operatorname{Pic}_{\phi}:\quad \pi\circ\mathcal L={\rm id}_B \}. $$ (The notation $\mathcal L$ indicates that $\mathcal L(b)$ is a line bundle on $X_b=\phi^{-1}(b)$ for every $b\in B$.) Let $b_0\in B$ be a closed point and set $X_0=\phi^{-1}(b_0)$; denote by $(G, \omega )$ the weighted dual graph of $X_0$. We identify $\operatorname{Div}(G)=\mathbb{Z}^{V(G)}$, so that we have a map \begin{equation} \label{mdeg} \operatorname{Pic} (X_0) \longrightarrow \operatorname{Div}(G)=\mathbb{Z}^{V(G)};\quad \quad L\mapsto \underline{\operatorname{deg}} \ L \end{equation} where $\underline{\operatorname{deg}}$ denotes the multidegree, i.e., for $v\in V(G)$ the $v$-coordinate of $ \underline{\operatorname{deg}} \ L$ is the degree of $L$ restricted to $v$ (recall that $V(G)$ is identified with the set of irreducible components of $X_0$). Finally, we have a {\it specialization} map $\tau$ \begin{equation} \label{tau} \operatorname{Pic}_{\phi}(B) \stackrel{\tau}{\longrightarrow} \operatorname{Div}(G) ;\quad \quad \mathcal L \mapsto \underline{\operatorname{deg}} \ \mathcal L(b_0). \end{equation} \begin{defi} Let $X_0$ be a nodal curve. A projective morphism $\phi:\mathcal X\to B$ of schemes is a {\it regular one-parameter smoothing of} $X_0$ if: \begin{enumerate} \item $B$ is smooth, quasi-projective, $\dim B=1;$ \item $\mathcal X$ is a regular surface; \item there is a closed point $b_0\in B$ such that $X_0\cong\phi^{-1}(b_0)$. (We shall usually identify $X_0=\phi^{-1}(b_0)$.)
\end{enumerate}\end{defi} \begin{remark} \label{connect2} As we mentioned in Remark~\ref{connect}, there is a connection between the divisor theory of $X_0$ and that of its dual graph $G$. We already observed in \eqref{mdeg} that to every divisor, or line bundle, on $X_0$ there is an associated divisor on $G$. Now we need to identify $\operatorname{Prin} (G)$. As we already said, the elements of $\operatorname{Prin} (G)$ are the multidegrees of certain divisors on $X_0$, called twisters. More precisely, fix $\phi:\mathcal X\to B$ a regular one-parameter smoothing of $X_0$; we have the following subgroup of $\operatorname{Pic} X_0$: $$ \operatorname{Tw}_{\phi} (X_0):=\{L\in \operatorname{Pic} X_0: \ L\cong \mathcal O_{\mathcal X}(D)_{|X_0} \text{ for some } D\in \operatorname{Div} \mathcal X: \ \operatorname{Supp} D\subset X_0 \}. $$ The set of twisters, $\operatorname{Tw}(X_0)$, is defined as the union of the $\operatorname{Tw}_{\phi} (X_0)$ for all one-parameter smoothings $\phi$ of $X_0$. The group $\operatorname{Tw}_{\phi} (X_0)$ depends on $\phi$, but its image under the multidegree map \eqref{mdeg} does not, so that $\underline{\operatorname{deg}}(\operatorname{Tw}_{\phi} (X_0))=\underline{\operatorname{deg}}(\operatorname{Tw} (X_0))$. Moreover, the multidegree map induces an identification between the multidegrees of all twisters and $\operatorname{Prin} (G)$: $$ \underline{\operatorname{deg}} (\operatorname{Tw} (X_0))=\operatorname{Prin} (G)\subset \mathbb{Z}^{V(G)}. $$ See \cite{cner}, \cite [Lemma 2.1]{bakersp} or \cite{CBNgraph} for details. \end{remark} \begin{defi} \label{phieqdef} Let $\phi$ be a regular one-parameter smoothing of $X_0$ and let $\mathcal L,\mathcal L'\in \operatorname{Pic}_{\phi}(B)$. 
We define $\mathcal L$ and $\mathcal L'$ to be $\phi$-equivalent, writing $ \mathcal L \sim _{\phi}\mathcal L'$, as follows \begin{equation} \label{phieq} \mathcal L \sim _{\phi}\mathcal L'\ \ \ \text{ if } \ \ \mathcal L(b)\cong \mathcal L'(b),\ \ \forall b\neq b_0. \end{equation} \end{defi} \begin{example} \label{L(C)} Let $\phi$ be as in the definition, let $\mathcal L\in \operatorname{Pic}_{\phi}(B)$, and let $C\subset X_0$ be an irreducible component. Denote by $\mathcal L'=\mathcal L(C)\in \operatorname{Pic}_{\phi}(B)$ the section of $\operatorname{Pic}_{\phi} \to B$ defined as follows: $\mathcal L'(b)=\mathcal L(b)$ if $b\neq b_0$ and $\mathcal L'(b_0)=\mathcal L\otimes \mathcal O_{\mathcal X}(C)\otimes \mathcal O_{X_0}$. Then $\mathcal L(C)\sim _{\phi}\mathcal L$. The same holds replacing $C$ with any $\mathbb{Z}$-linear combination of the components of $X_0$. \end{example} \begin{lemma} \label{philm} Let $\phi$ be a regular one-parameter smoothing of $X_0$ and let $\mathcal L,\mathcal L'\in \operatorname{Pic}_{\phi}(B)$ such that $\mathcal L\sim _{\phi}\mathcal L'$. Then the following hold. \begin{enumerate} \item \label{philm1} $\tau(\mathcal L)\sim \tau (\mathcal L')$. \item \label{philm2} If $h^0(X_b,\mathcal L(b))\geq r+1$ for every $b\in B\smallsetminus \{b_0\}$, then $h^0(X_b,\mathcal L'(b))\geq r+1$ for every $b\in B$. \end{enumerate} \end{lemma} \begin{proof} To prove both parts we can replace $\phi$ by a finite \'etale base change (see \cite[Claim 4.6]{CBNgraph}). Hence we can assume that $\mathcal L$ and $\mathcal L'$ are given by line bundles on $\mathcal X$, denoted again by $\mathcal L$ and $\mathcal L'$. \eqref{philm1}. Since $\mathcal L$ and $\mathcal L'$ coincide on every fiber but the special one, there exists a divisor $D\in \operatorname{Div} \mathcal X$ such that $ \operatorname{Supp} D\subset X_0$ for which $$ \mathcal L\cong\mathcal L'\otimes \mathcal O_{\mathcal X}(D).
$$ Using Remark~\ref{connect2} we have $\mathcal O_{\mathcal X}(D)_{|X_0}\in \operatorname{Tw} (X_0)$ and $$ \tau (\mathcal O_{\mathcal X}(D))=\underline{\operatorname{deg}} \ \mathcal O_{\mathcal X}(D)_{|X_0}\in \operatorname{Prin} (G) $$ so we are done. \eqref{philm2}. This is a straightforward consequence of the upper-semicontinuity of $h^0$. \end{proof} By the Lemma, we have a commutative diagram: \begin{equation}\label{diag1} \xymatrix{ \operatorname{Pic}_{\phi}(B)\ar@{^{}->}[r]^{\tau} \ar@{->>}[d]_{} & \operatorname{Div}(G)\ar@{->>}[d] \\ \operatorname{Pic}_{\phi}(B)/_{\sim_{\phi}}\ar@{->}[r] &\operatorname{Jac}(G) } \end{equation} and, by Remark~\ref{connect2}, the image of $\tau$ contains $\operatorname{Prin}(G)$. \subsection{Weighted Specialization Lemma} We shall now prove Theorem~\ref{spe}, generalizing the original specialization Lemma \cite[Lemma 2.8]{bakersp} to weighted graphs. Our set-up is similar to that of \cite[Prop.4.4]{CBNgraph}, which is Theorem~\ref{spe} for the (easy) special case of weightless graphs admitting loops. Before proving Theorem~\ref{spe} we need some preliminaries. \medskip Let $G$ be a connected graph. For $v,u\in V(G)$, denote by $ d(v,u) $ the distance between $u$ and $v$ in $G$; note that $ d(v,u) $ is the minimum length of a path joining $v$ with $u$, so that $d(v,u)\in \mathbb{Z}_{\geq 0}$ and $d(v,u)=0$ if and only if $v=u$. \medskip Fix $v_0\in V(G)$; we now define an ordered partition of $V(G)$ (associated to $v_0$) by looking at the distances to $v_0$. For $i\in \mathbb{Z}_{\geq 0}$ set $$ Z_i^{(v_0)}:=\{u\in V(G): d(v_0,u)=i\}; $$ we have $Z_0^{(v_0)}=\{v_0\}$ and, obviously, there exists an $m$ such that $Z_n^{(v_0)}\neq \emptyset $ if and only if $0\leq n\leq m$. We have thus an ordered partition of $V(G)$ \begin{equation} \label{partition} V(G)=Z_0^{(v_0)}\sqcup\ldots \sqcup Z_m^{(v_0)}. \end{equation} We refer to it as {\it the distance-based partition starting at $v_0$}. 
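As a simple illustration of the definition, if $G$ is the $4$-cycle with vertices $v_0,v_1,v_2,v_3$ (in this cyclic order), then $d(v_0,v_1)=d(v_0,v_3)=1$ and $d(v_0,v_2)=2$; hence $m=2$ and the distance-based partition starting at $v_0$ is $$ Z_0^{(v_0)}=\{v_0\},\quad \quad Z_1^{(v_0)}=\{v_1,v_3\},\quad \quad Z_2^{(v_0)}=\{v_2\}. $$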
We will often omit the superscript $(v_0)$. \begin{remark} One checks easily that for every $u\in V(G)\smallsetminus \{v_0\}$ with $u\in Z_i$ and for any $0\leq i\neq j\leq m$, we have \begin{equation} \label{partitionw} u \cdot Z_j\neq 0 \ \ \text{ if and only if } \ \ j=i\pm 1. \end{equation} Therefore for any $0\leq i\neq j\leq m$, we have $ Z_i\cdot Z_j\neq 0$ if and only if $|i-j|=1.$ \end{remark} Whenever $G$ is the dual graph of a curve $X_0$, we identify $V(G)$ with the components of $X_0$ without further mention and with no change in notation. Similarly, a subset of vertices $Z\subset V(G)$ determines a subcurve of $X_0$ (the subcurve whose components are the vertices in $Z$) which we denote again by $Z$. \medskip The following result will be used to prove Theorem~\ref{spe}. \begin{prop} \label{component-general} Let $X_0$ be a nodal curve, $C_0, C_1,\dots, C_n \subset X_0$ its irreducible components of arithmetic genera $g_0, g_1, \dots, g_n$, respectively, and $G$ the dual graph of $X_0$. Fix $\phi:\mathcal X \to B$ a regular one-parameter smoothing of $X_0$, and $\mathcal L\in \operatorname{Pic}_{\phi}(B)$ such that $ h^0(X_b, \mathcal L (b))\geq r+1>0 $ for every $b\in B$. Consider a sequence $r_0, r_1,\dots, r_n$ of non-negative integers such that $r_0 + r_1 +\dots +r_n =r$. Then there exists an effective divisor $E\in \operatorname{Div}(G)$ such that $E\sim \tau(\mathcal L)$ and for any $0\leq i\leq n$ \begin{equation}\label{eq1} E(C_i)\geq\left\{ \begin{array}{ll} 2r_i & \text { if } r_i\leq g_i-1\\ \\ r_i+g_i & \text { if } r_i\geq g_i\\ \end{array}\right. \end{equation} (viewing $C_i$ as a vertex of $G$, as usual). \end{prop} In the proof we are going to repeatedly use the following easy observation. \begin{claim}\label{claim:increasing} Let $g$ be a nonnegative integer and $s: \mathbb N \rightarrow \mathbb N$ the function defined by $$ s(t)=\left\{ \begin{array}{ll} 2t & \text { if } t\leq g-1\\ \\ t+g& \text { if } t\geq g.\\ \end{array}\right. 
$$ \begin{enumerate} \item \label{claim:increasing1} $s(t)$ is an increasing function. \item \label{claim:increasing2} Let $C$ be an irreducible nodal curve of genus $g$ and $M$ a line bundle of degree $s(t)$ on $C$. Then $h^0(C,M)\leq t+1$. \end{enumerate} \end{claim} \begin{proof} Part \eqref{claim:increasing1} is trivial. Part \eqref{claim:increasing2} is an immediate consequence of Clifford's inequality and Riemann's theorem (which are well known to hold on an irreducible nodal curve $C$). \end{proof} \begin{proof} [\it Proof of Proposition~\ref{component-general}] Consider the distance-based partition \ $V(G)=Z_0\sqcup\ldots \sqcup Z_m$ starting at $C_0$, defined in (\ref{partition}). For every $i$ the vertex set $Z_i$ corresponds to a subcurve, also written $Z_i$, of $X_0$. We thus get a decomposition $X_0=Z_0\cup\ldots \cup Z_m$. We denote by $s_i$ the quantity appearing in the right term of inequalities~\eqref{eq1}: $s_i:= 2r_i$ if $r_i \leq g_i-1$ and $s_i = r_i+g_i$ if $r_i \geq g_i$. The proof of the proposition proceeds by an induction on $r$. For the base of the induction, i.e. the case $r=0$, we have $r_i =0$ for all $i\geq 0$. We have to show the existence of an effective divisor $E\in \operatorname{Div}(G)$ such that $E\sim \tau(\mathcal L)$. This trivially follows from our hypothesis because $\mathcal{L}(b_0)$ has a nonzero global section and so $\tau(\mathcal{L})$ itself is effective. \ Consider now $r \geq 1$ and assume without loss of generality that $r_0 \neq 0$. By the induction hypothesis (applied to $r-r_0$ and the sequence $r'_0=0, r_1'=r_1, \dots, r'_n=r_n$) we can choose $\mathcal L$ so that for the divisor $E =\tau(\mathcal L)$, all the Inequalities~\eqref{eq1} are verified for $i\geq 1$, and $E(C_0) \geq 0$. 
Furthermore, we will assume that $E$ maximizes the vector $(E(C_0), E(Z_1), \dots, E(Z_m))$ in the lexicographic order, i.e., $E(C_0)$ is maximum among all elements in $ |\tau(\mathcal L)|$ verifying Inequalities~\eqref{eq1} for $i \geq 1$; next, we require that $E(Z_1)$ be maximum among all such $E$, and so on. Up to changing $\mathcal L$ within its $\phi$-equivalence class we can assume that $E=\tau (\mathcal L)$. Note that by Lemma~\ref{philm}(2), the new $\mathcal{L}$ still satisfies the hypothesis of the proposition. \noindent In order to prove the proposition, we need to show that $E(C_0) \geq s_0$. \medskip We now consider (see Example~\ref{L(C)}) $$\mathcal L':=\mathcal L(-C_0)\in \operatorname{Pic}_{\phi}(B).$$ We denote $L_0=\mathcal L(b_0)\in \operatorname{Pic}(X_0)$, and similarly $L'_0=\mathcal L'(b_0)\in \operatorname{Pic}(X_0)$. \begin{claim}\label{claim2} The dimension of the space of global sections of $L'_0$ which identically vanish on $\overline{X_0\smallsetminus C_0} $ is at least $r_0 +1$. \end{claim} Set $W_0=\overline{X_0\smallsetminus C_0}.$ To prove the claim, set $E'=\tau(\mathcal L')=\underline{\operatorname{deg}} L'_0$, so that $E'\sim E$. Now, for every component $C\subset X_0$ we have \begin{equation} \label{degL'} E'(C)=\deg _{C}L'_0=E(C)-C\cdot C_0; \end{equation} in particular $E'(C_0) >E(C_0)$. Therefore, by the maximality of $E(C_0)$, the divisor $E'$ does not verify some of the inequalities in~\eqref{eq1} for $i\geq 1$, and so the subcurve $Y_1\subset X_0$ defined below is not empty $$ Y_1:= \bigcup_{E'(C_i)< s_i}C_i= \bigcup_{E(C_i)+C_i\cdot W_0<s_i}C_i. $$ Since the degree of $L'_0$ on each component $C_i$ of $Y_1$ is strictly smaller than $s_i$, by Claim~\ref{claim:increasing}\eqref{claim:increasing2} on $C_i$ we have $h^0(C_i, L'_0 )\leq r_i$.
Let $\Lambda_1\subset H^0(X_0, L'_0)$ be the space of sections which vanish on $Y_1$, so that we have a series of maps $$ 0\longrightarrow \Lambda_1=\ker\rho \longrightarrow H^0(X_0, L'_0)\stackrel{\rho}{\longrightarrow} H^0(Y_1, L'_0)\hookrightarrow \bigoplus_{C_i \subset Y_1}H^0(C_i, L'_0) $$ where $\rho$ denotes the restriction. From this sequence and the above estimate we get $$ \dim \Lambda_1\geq h^0(X_0, L'_0)-\sum_{i: C_i \subset Y_1} r_i\geq r+1-\sum_{i\geq 1} r_i=r_0+1. $$ Hence we are done if $Y_1=W_0$. Otherwise, for $h\geq 2$ we iterate, setting $$ W_{h-1}:=\overline{X_0\smallsetminus (C_0\cup Y_1\cup\ldots \cup Y_{h-1})} \quad \quad { \text{ and }} \quad \quad Y_h:=\bigcup_{\stackrel{C_i\subset W_{h-1},}{E(C_i) +C_i\cdot W_{h-1}<s_i}}C_i. $$ Let $\Lambda_h\subset H^0(X_0, L'_0)$ denote the space of sections which identically vanish on $Y_1 \cup \dots \cup Y_h$. We will prove that $\operatorname{codim} \Lambda_h\leq \sum_{i: C_i \subset Y_1\cup \dots \cup Y_{h}}r_i$, and that $Y_h$ is empty only if $W_{h-1}$ is empty. This will finish the proof of Claim~\ref{claim2}. To prove the first statement we use induction on $h$. The base case $h=1$ has been done above. Consider $C_j\subset Y_h$, so that $E(C_j) <s_j-C_j\cdot W_{h-1}$, hence $$ E'(C_j)=E(C_j)-C_0\cdot C_j<s_j -C_j\cdot W_{h-1}-C_0\cdot C_j =s_j + C_j\cdot (\sum_{i=1}^{h-1}Y_i), $$ as $C_j\cdot W_{h-1}=-C_j\cdot (C_0+\sum_{i=1}^{h-1}Y_i)$. Hence $(L'_0)_{|C_j}(- C_j\cdot\sum_{i=1}^{h-1}Y_i)$ has degree smaller than $s_j $, therefore by Claim~\ref{claim:increasing}\eqref{claim:increasing2} on $C_j$, \begin{equation} \label{Cj} h^0(C_j, L'_0(- C_j\cdot\sum_{i=1}^{h-1}Y_i))\leq r_j. \end{equation} Let us denote by $\rho_h:\Lambda_{h-1} \to H^0(Y_h, L'_0)$ the restriction map.
Then we have the following series of maps $$ 0\longrightarrow \Lambda_h=\ker \rho_h\longrightarrow \Lambda_{h-1} \stackrel{\rho_h}{\longrightarrow} {\operatorname {Im}} \rho_h \hookrightarrow \bigoplus_{C_j \subset Y_h}H^0(C_j, L'_0(- C_j\cdot\sum_{i=1}^{h-1}Y_i)).$$ Hence the codimension of $\Lambda_h$ in $\Lambda_{h-1}$, written $\operatorname{codim}_{\Lambda_{h-1}}\Lambda_h$, is at most the dimension of the space on the right, which, by \eqref{Cj}, is at most $\sum _{j:C_j \subset Y_h}r_j$. Therefore $$ \operatorname{codim} \Lambda_h=\operatorname{codim} \Lambda_{h-1}+\operatorname{codim}_{\Lambda_{h-1}}\Lambda_h\leq \sum_{i:C_i\subset Y_1\cup \dots \cup Y_{h-1}} r_i+\sum _{j:C_j \subset Y_h}r_j $$ where we used the induction hypothesis on $\Lambda_{h-1}$. The first claim is proved. For the proof of the second statement, suppose, by contradiction, that $Y_h=\emptyset$ and $ W_{h-1}\neq \emptyset$. Set \begin{equation} \label{Eh} E_{h}:=E+T_{W_{h-1}} \end{equation} where $T_{W_{h-1}}\in \operatorname{Prin}(G)$ as defined in (\ref{defT}); hence $E_{h}\sim E$. Since $Y_h$ is empty, we get $E_{h}(C_i)\geq s_i$ for any $C_i \subseteq W_{h-1}$. On the other hand, for any $C \subset \overline{X_0\smallsetminus W_{h-1}}$, we have $E_{h}(C) \geq E(C)$. Therefore, by the choice of $E$, and the maximality assumption, we must have $E_{h}(C_0)=E(C_0)$, i.e., $W_{h-1}\cdot C_0=0$. Therefore $W_{h-1}\subset \cup_{j\geq 2}Z_j$ and hence $W_{h-1}\cdot Z_1\geq 0$. In particular, we have $E_h(Z_1) \geq E(Z_1)$. But, by the maximality of $E(Z_1)$, we must have $E_{h}(Z_1)=E(Z_1)$, i.e., $W_{h-1}\cdot Z_1=0$. Therefore $W_{h-1}\subset \cup_{j\geq 3}Z_j$. Repeating this argument, we conclude that $W_{h-1}\subset Z_{m+1} = \emptyset,$ which is a contradiction. Claim~\ref{claim2} is proved. \ Let $\Lambda$ be the set of sections of $L'_0$ which identically vanish on $W_0$; by the claim, $\dim \Lambda \geq r_0+1$.
We have a natural injection $\Lambda \hookrightarrow H^0(C_0, L'_0(-C_0\cap W_0))=H^0(C_0,L_0)$, hence $r_0+1 \leq h^0(C_0, L_0).$ Set $\widehat{r_0}:=h^0(C_0, L_0)-1$ so that $\widehat{r_0}\geq r_0$. By Claim~\ref{claim:increasing}\eqref{claim:increasing2} on $C_0$ we obtain \begin{displaymath} E(C_0)=\deg _{C_0}L_0\ \left\{ \begin{array}{ll} \geq 2\widehat{r_0} & \text { if } \widehat{r_0}\leq g_0-1\\ \\ = \widehat{r_0}+g_0 & \text { if } \widehat{r_0}\geq g_0.\\ \end{array}\right . \end{displaymath} By Claim~\ref{claim:increasing} \eqref{claim:increasing1}, we infer that $E(C_0)\geq s_0$, and the proof of Proposition~\ref{component-general} is complete. \end{proof} \begin{thm}[Specialization Lemma] \label{spe} Let $\phi:\mathcal X \to B$ be a regular one-parameter smoothing of a projective nodal curve $X_0$. Let $(G,\omega )$ be the weighted dual graph of $X_0$. Then for every $\mathcal L\in \operatorname{Pic}_{\phi}(B)$ there exists an open neighborhood $U\subset B$ of $b_0$ such that for every $b\in U$ with $b\neq b_0$ \begin{equation} \label{mixed} r(X_b, \mathcal L (b))\leq r_{(G,\omega )}(\tau (\mathcal L)). \end{equation} \end{thm} \begin{proof} To simplify the presentation, we will assume that $G$ is free from loops, and indicate, at the end, the (trivial) modifications needed to get the proof in general. \medskip Up to restricting $B$ to an open neighborhood of $b_0$ we can assume that for some $r\geq -1$ and for every $b\in B$ we have \begin{equation} \label{h0} h^0(X_b, \mathcal L(b))\geq r+1 \end{equation} with equality for $b\neq b_0$. Set $D=\tau (\mathcal L)$; we must prove that $r_{(G,\omega )}(D)\geq r$. As in Proposition~\ref{component-general}, we write $C_0, C_1,\dots, C_n$ for the irreducible components of $X_0$, with $C_i$ of genus $g_i$. We denote by $v_i\in V(G)$ the vertex corresponding to $C_i$.
Recall that we denote by $\widehat{G^{\omega}}$ the weightless, loopless graph obtained from $G$ by adding $g_i=\omega(v_i)$ \ 2-cycles at $v_i$ for every $v_i\in V(G)$. We have a natural injection (viewed as an inclusion) $ \operatorname{Div}(G)\subset \operatorname{Div} (\widehat{G^{\omega}}) $ and, by definition, $r_{(G,\omega )}(D)=r_{\widehat{G^{\omega}}}(D)$. Summarizing, we must prove that \begin{equation} \label{thspec} r_{\widehat{G^{\omega}}}(D)\geq r. \end{equation} The specialization Lemma for weightless graphs gives that the rank of $D$, as a divisor on the weightless graph $G$, satisfies \begin{equation} \label{oldspec} r_{G}(D) \geq r. \end{equation} Now observe that the graph obtained by removing from $\widehat{G^{\omega}}$ every edge of $G$ is a disconnected (unless $n=0$) graph $R$ of type $$ R=\sqcup_{i=0}^nR_i $$ where $R_i=\widehat{R_{g_i}}$ is the refinement of the ``rose'' $R_{g_i}$ introduced in \ref{rose}, for every $i=0,\ldots,n$. Note that if $g_i=0$, the graph $R_i$ is just the vertex $v_i$ with no edge. Now, extending the notation of \ref{sepvertex} to the case of multiple cut-vertices, we have the following decomposition of $\widehat{G^{\omega}}$ $$ \widehat{G^{\omega}}=G\vee R $$ with $G\cap R=\{v_0,\ldots, v_n\}$. By Lemma~\ref{sepvertex}\eqref{sepvertex3} for any $D\in \operatorname{Div}(G)$ such that $r_{G}(D)\geq 0$ we have $r_{\widehat{G^{\omega}}}(D)\geq 0$. \medskip We are ready to prove (\ref{thspec}) using induction on $r$. If $r=-1$ there is nothing to prove. If $r=0$, by (\ref{oldspec}) we have $r_{G}(D)\geq 0$ and hence, by what we just observed, $r_{\widehat{G^{\omega}}}(D)\geq 0$. So we are done. Let $r\geq 1$ and pick an effective divisor $E\in \operatorname{Div}^r(\widehat{G^{\omega}})$. \noindent Suppose first that $E(v)=0$ for all $v\in V(G)$; in particular, $E$ is entirely supported on $R$.
We write $r_i$ for the degree of the restriction of $E$ to $R_i$, so that for every $i=0,\ldots, n,$ we have \begin{equation} \label{inr} r_i\geq 0 \quad\quad \quad {\text{ and }} \quad\quad \quad\sum_{i=0}^nr_i=r. \end{equation} It is clear that it suffices to prove the existence of an effective divisor $F \sim D$ such that the restrictions $F_{R_i}$ and $E_{R_i}$ to $R_i$ satisfy $r_{R_i}(F_{R_i}-E_{R_i})\geq 0$ for every $i=0,\ldots, n$. \noindent By Proposition~\ref{component-general} there exists an effective divisor $F \sim D$ so that \eqref{eq1} holds for every $i=0,\ldots, n$, i.e. \begin{displaymath} F(C_i)\geq\left\{ \begin{array}{ll} 2r_i & \text { if } r_i\leq g_i-1\\ \\ r_i+g_i & \text { if } r_i\geq g_i.\\ \end{array}\right . \end{displaymath} (Proposition~\ref{component-general} applies because of the relations \eqref{inr}). Now, $F(C_i)$ equals the degree of $F_{R_i}$, hence by the above estimate combined with Theorem~\ref{RRloop}(\ref{Riemann}) and Lemma~\ref{rose}, one easily checks that $r_{R_i}(F_{R_i})\geq r_i$, hence, $r_{R_i}(F_{R_i}-E_{R_i})\geq 0$. We can now assume that $E(v)\neq 0$ for some $v\in V(G) \subset V(\widehat{G^{\omega}})$. We write $E=E'+v$ with $E'\geq 0$ and $\deg E'=r-1$. Arguing as for \cite[Claim 4.6]{CBNgraph}, we are free to replace $\phi:\mathcal X\to B$ by a finite \'etale base change. Therefore we can assume that $\phi$ has a section $\sigma$ passing through the component of $X_0$ corresponding to $v$. It is clear that for every $b\in B$ we have $$ r(X_b, L_b(-\sigma(b)))\geq r(X_b, L_b)-1\geq r-1. $$ Now, the specialization of $\mathcal L\otimes \mathcal O(-\sigma(B))$ is $D-v$, i.e., $$ \tau (\mathcal L\otimes \mathcal O(-\sigma(B)))=D-v. $$ By induction we have $r_{\widehat{G^{\omega}}}(D-v)\geq r-1$. Hence, the degree of $E'$ being $r-1$, there exists $T\in \operatorname{Prin} (\widehat{G^{\omega}})$ such that $$ 0\leq D-v -E' +T= D-v-(E-v)+T=D-E+T. 
$$ We thus proved that $0\leq r_{\widehat{G^{\omega}}}(D-E)$ for every effective $E\in \operatorname{Div} ^r(\widehat{G^{\omega}})$. This proves (\ref{thspec}) and hence the theorem, in case $G$ has no loops. \medskip If $G$ admits some loops, let $G'\subset G$ be the graph obtained by removing from $G$ all of its loop edges. Then $\widehat{G^{\omega}}$ is obtained from $G'$ by adding to the vertex $v_i$ exactly $g_i$ \ 2-cycles, where $g_i$ is the arithmetic genus of $C_i$ (note that $g_i$ is now equal to $\omega(v_i)$ plus the number of loops adjacent to $v_i$ in $G$). Now replace $G$ by $G'$ and use exactly the same proof. (Alternatively, one could apply the same argument used in \cite[Prop. 5.5]{CBNgraph}, where the original Specialization Lemma of \cite{bakersp} was extended to weightless graphs admitting loops.) \end{proof} \section{Riemann-Roch on weighted tropical curves} \subsection{Weighted tropical curves as pseudo-metric graphs} Let $\Gamma=(G, \omega,\ell)$ be a weighted tropical curve, that is, $(G,\omega )$ is a weighted graph (see Section~\ref{wgsec}) and $\ell:E(G)\to \mathbb{R}_{>0}$ is a (finite) length function on the edges. We also say that $(G, \ell)$ is a {\it metric graph}. If $\omega$ is the zero function, we write $\omega=\underline{0}$ and say that the tropical curve is {\it pure}. Weighted tropical curves were used in \cite{BMV} to bordify the space of pure tropical curves; notice however that we use the slightly different terminology of \cite{CHBK}. For pure tropical curves there exists a good divisor theory for which the Riemann-Roch theorem holds, as proved by Gathmann-Kerber in \cite{GK} and by Mikhalkin-Zharkov in \cite{MZ}. The purpose of this section is to extend this to the weighted setting. \ \noindent{\it Divisor theory on pure tropical curves.} Let us quickly recall the set-up for pure tropical curves; we refer to \cite{GK} for details. Let $\Gamma=(G, \underline{0},\ell)$ be a pure tropical curve.
The group of divisors of $\Gamma$ is the free abelian group $\operatorname{Div}(\Gamma)$ generated by the points of $\Gamma$. A {\it rational function} on $\Gamma$ is a continuous function $f:\Gamma \to \mathbb{R}$ such that the restriction of $f$ to every edge of $\Gamma$ is a piecewise affine integral function (i.e., piecewise of type $f(x)=ax+b$, with $a\in \mathbb{Z}$) having finitely many pieces. Let $p\in \Gamma$ and let $f$ be a rational function as above. The order of $f$ at $p$, written $\operatorname{ord}_pf$, is the sum of all the slopes of $f$ on the outgoing segments of $\Gamma$ adjacent to $p$. The number of such segments is equal to the valency of $p$ if $p$ is a vertex of $\Gamma$, and is equal to 2 otherwise. The divisor of $f$ is defined as follows $$ \div (f):=\sum_{p\in \Gamma}\operatorname{ord}_p(f)p\in \operatorname{Div}(\Gamma). $$ Recall that $\div f$ has degree $0$. The divisors of the form $\div (f)$ are called {\it principal} and they form a subgroup of $\operatorname{Div} (\Gamma)$, denoted by $\operatorname{Prin} (\Gamma)$. Two divisors $D,D'$ on $\Gamma$ are said to be linearly equivalent, written $D\sim D'$, if $D-D'\in \operatorname{Prin} (\Gamma)$. Let $D\in \operatorname{Div} \Gamma$. Then $R(D)$ denotes the set of rational functions $f$ on $\Gamma$ such that $\div (f)+D\geq 0$. The rank of $D$ is defined as follows $$ r_{\Gamma}(D):=\max \,\{k:\ \forall E\in \operatorname{Div}^k_+ (\Gamma), \ \ R(D-E)\neq \emptyset\} $$ so that $r_{\Gamma}(D)=-1$ if and only if $R(D)=\emptyset$. The following trivial remark is a useful consequence of the definition.
\begin{remark} \label{trivrk} Let $\Gamma_1$ and $\Gamma_2$ be pure tropical curves and let $\psi:\operatorname{Div}(\Gamma_1)\to\operatorname{Div}(\Gamma_2)$ be a group isomorphism inducing an isomorphism of effective and principal divisors (i.e., $\psi(D)\geq 0$ if and only if $D\geq 0$, and $\psi(D)\in \operatorname{Prin} (\Gamma_2)$ if and only if $D\in \operatorname{Prin} (\Gamma_1)$). Then for every $D\in \operatorname{Div}(\Gamma_1)$ we have $r_{\Gamma_1}(D)=r_{\Gamma_2}(\psi(D)).$ \end{remark} \ To extend the theory to the weighted setting, our starting point is to give weighted tropical curves a geometric interpretation by what we call pseudo-metric graphs. \begin{defi} A {\it pseudo-metric graph} is a pair $(G,\ell)$ where $G$ is a graph and $\ell$ a {\it pseudo-length} function $\ell:E(G)\to \mathbb{R}_{\geq 0}$ which is allowed to vanish only on loop-edges of $G$ (that is, if $\ell(e)=0$ then $e$ is a loop-edge of $G$). \end{defi} Let $\Gamma=(G, \omega,\ell)$ be a weighted tropical curve; we associate to it the pseudo-metric graph $(G^{\omega} ,\ell^\omega )$ defined as follows. $G^{\omega} $ is the ``virtual'' weightless graph associated to $(G,\omega )$ described in subsection~\ref{wgsec} ($G^{\omega} $ is obtained by attaching to $G$ exactly $\omega (v)$ loops based at every vertex $v$); the function $\ell^\omega :E(G^\omega )\to \mathbb{R}_{\geq 0}$ is the extension of $\ell$ vanishing at all the virtual loops. It is clear that $(G^{\omega} ,\ell^\omega )$ is uniquely determined. Conversely, to any pseudo-metric graph $(G_0,\ell_0)$ we can associate a unique weighted tropical curve $(G, \omega ,\ell)$ such that $G_0=G^{\omega}$ and $\ell_0=\ell^{\omega}$ as follows. $G$ is the subgraph of $G_0$ obtained by removing every loop-edge $e\in E(G_0)$ such that $\ell_0(e)=0$.
Next, $\ell$ is the restriction of $\ell_0$ to $G$; finally, for any $v\in V(G)=V(G_0)$ the weight $\omega (v)$ is defined to be equal to the number of loop-edges of $G_0$ adjacent to $v$ and having length $0$. Summarizing, we have proved the following. \begin{prop} \label{pseudo} The map associating to the weighted tropical curve $\Gamma=(G, \omega,\ell)$ the pseudo-metric graph $(G^{\omega} ,\ell^\omega )$ is a bijection between the set of weighted tropical curves and the set of pseudo-metric graphs, extending the bijection between pure tropical curves and metric graphs (see \cite{MZ}). \end{prop} \subsection{Divisors on weighted tropical curves.} Let $\Gamma=(G, \omega,\ell)$ be a weighted tropical curve. There is a unique pure tropical curve having the same metric graph as $\Gamma$, namely the curve $ \Gamma^{\underline{0}}:=(G, \underline{0},\ell). $ Exactly as for pure tropical curves, we define the group of divisors of $\Gamma$ as the free abelian group generated by the points of $\Gamma$: $$ \operatorname{Div} (\Gamma)= \operatorname{Div}(\Gamma^{\underline{0}})=\{\sum_{i=1}^mn_i p_i,\ n_i\in \mathbb{Z}, \ p_i\in (G,\ell)\}. $$ The canonical divisor of $\Gamma$ is $$ K_{\Gamma}: =\sum_{v\in V(G)} (\operatorname{val} (v) +2\omega (v)-2)v $$ where $\operatorname{val} (v)$ is the valency of $v$ as a vertex of the graph $G$. Observe that there is an obvious identification of $K_{\Gamma}$ with $K_{(G, \omega )}$; in other words, the canonical divisor of $\Gamma$ is the canonical divisor of the virtual graph $G^{\omega} $ associated to $(G,\omega )$. Consider the pseudo-metric graph associated to $\Gamma$ by the previous proposition: $(G^{\omega} ,\ell ^\omega )$. Note that $(G^{\omega} ,\ell ^\omega )$ is not a tropical curve as the length function vanishes at the virtual edges.
We then define a pure tropical curve, $\Gamma^{\omega} _{\epsilon}$, for every $\epsilon>0$ $$ \Gamma^{\omega} _{\epsilon}=(G^{\omega} ,\underline{0},\ell_{\epsilon} ^\omega ) $$ where $\ell_\epsilon^{\omega} (e)=\epsilon$ for every edge lying in some virtual cycle, and $\ell_\epsilon^\omega (e)=\ell(e)$ otherwise. Therefore $(G^{\omega},\ell ^\omega )$ is the limit of $\Gamma^{\omega} _{\epsilon}$ as $\epsilon$ goes to zero. Notice that for every curve $\Gamma^{\omega} _{\epsilon}$ we have a natural inclusion $$ \Gamma^{\underline{0}}\subset \Gamma^{\omega} _{\epsilon} $$ (with $\Gamma^{\underline{0}}$ introduced at the beginning of the subsection). We refer to the loops given by $\Gamma^{\omega} _{\epsilon}\smallsetminus \Gamma^{\underline{0}}$ as {\it virtual} loops. Now, for every $\epsilon$ we have a natural injective homomorphism \begin{equation} \label{iotae} \iota_{\epsilon}:\operatorname{Div} (\Gamma)\hookrightarrow \operatorname{Div}(\Gamma_\epsilon^\omega ) \end{equation} and it is clear that $\iota_{\epsilon}$ induces an isomorphism of $\operatorname{Div} (\Gamma)$ with the subgroup of divisors on $\Gamma_\epsilon^\omega$ supported on $\Gamma^{\underline{0}}$. \begin{thm} \label{RRwc} Let $\Gamma=(G,\omega,\ell)$ be a weighted tropical curve of genus $g$ and let $D\in \operatorname{Div} (\Gamma)$. Using the above notation, the following hold. \begin{enumerate} \item \label{RRwc1} The number $r_{\Gamma_\epsilon^{\omega} }(\iota_{\epsilon}(D))$ is independent of $\epsilon$. Hence we define $$ r_\Gamma(D):=r_{\Gamma_\epsilon^{\omega} }(\iota_{\epsilon}(D)).$$ \item \label{RRwc2} \emph{(}Riemann-Roch\emph{)} With the above definition, we have $$ r_{\Gamma}(D)-r_{\Gamma}(K_{\Gamma}-D)=\deg D-g+1. $$ \end{enumerate} \end{thm} \begin{proof} The proof of (\ref{RRwc1}) can be obtained by a direct limit argument to compute $r_{{\Gamma}_\epsilon^{\omega} }(D)$, using Proposition~\ref{thm:refinement}. A direct proof is as follows.
For two $\epsilon_1, \epsilon_2>0$, consider the homothety of ratio $\epsilon_2/\epsilon_1$ on all the virtual loops. This produces a homeomorphism $$\psi^{(\epsilon_1,\epsilon_2)}: \Gamma^\omega_{\epsilon_1} \longrightarrow \Gamma^\omega_{\epsilon_2} $$ (equal to the identity on $\Gamma$), and hence a group isomorphism $$\psi^{(\epsilon_1,\epsilon_2)}_*: \operatorname{Div}(\Gamma^\omega_{\epsilon_1}) \rightarrow \operatorname{Div}(\Gamma^\omega_{\epsilon_2}); \quad \quad \quad\quad \sum_{p\in \Gamma} n_p p \mapsto \sum_{p\in \Gamma} n_p \psi^{(\epsilon_1,\epsilon_2)}(p). $$ Note that $\psi^{(\epsilon_2,\epsilon_1)}_*$ is the inverse of $\psi^{(\epsilon_1,\epsilon_2)}_*$, and that $\psi^{(\epsilon_1,\epsilon_2)}_*\circ\iota_{\epsilon_1} = \iota_{\epsilon_2}$; see \eqref{iotae}. Note also that $\psi^{(\epsilon_1,\epsilon_2)}_*$ induces an isomorphism at the level of effective divisors. We claim that $\psi^{(\epsilon_1,\epsilon_2)}_*$ induces an isomorphism also at the level of principal divisors. By Remark~\ref{trivrk}, the claim implies part~\eqref{RRwc1}. To prove the claim, let $f$ be a rational function on $\Gamma_{\epsilon_1}^\omega$. Let $\alpha: \mathbb R \rightarrow \mathbb R$ be the homothety of ratio $\epsilon_2/\epsilon_1$ on $\mathbb R$, i.e., the automorphism of $\mathbb{R}$ given by $\alpha(x) = x\epsilon_2/\epsilon_1 $ for any $x\in \mathbb R$. Define the function $\alpha\bullet f $ on $\Gamma^\omega_{\epsilon_1}$ by requiring that for any point $x\in \Gamma$, $\alpha\bullet f(x) =f(x)$, and for any point $u$ of a virtual loop of $\Gamma^\omega_{\epsilon_1}$ attached at the point $v\in \Gamma$ we set $$\alpha\bullet f(u) = f(v)+\alpha(f(u)-f(v)).$$ The claim now follows by observing that $(\alpha\bullet f)\circ \psi^{(\epsilon_2,\epsilon_1)}$ is a rational function on $\Gamma^\omega_{\epsilon_2}$, and $$\div((\alpha\bullet f)\circ \psi^{(\epsilon_2,\epsilon_1)}) = \psi^{(\epsilon_1,\epsilon_2)}_*(\div(f)).$$ Part~\eqref{RRwc1} is proved.
To prove part \eqref{RRwc2}, recall that, as we said before, for the pure tropical curves ${\Gamma}_\epsilon^{\omega} $ the Riemann-Roch theorem holds, and hence this part follows from the previous one. \end{proof} \begin{remark} It is clear from the proof of Theorem~\ref{RRwc} that there is no need to fix the same $\epsilon$ for all the virtual cycles. More precisely, fix an ordering for the virtual cycles of $G^{\omega}$ and for their edges; recall there are $\sum_{v\in V(G)} \omega(v)$ of them. Then for any $\underline{\epsilon}\in \mathbb{R}^{\sum \omega(v)}_{>0}$ we can define the pure tropical curve $\Gamma^{\omega} _{\underline{\epsilon}}$ using ${\underline{\epsilon}}$ to define the length on the virtual cycles in the obvious way. Then for any $D\in \operatorname{Div} (\Gamma)$ the number $r_{\Gamma_{\underline{\epsilon}}^{\omega} }(\iota_{{\underline{\epsilon}}}(D))$ is independent of ${\underline{\epsilon}}$ (where $\iota_{\underline{\epsilon}}$ is the analog of \eqref{iotae}). \end{remark}
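The following minimal example illustrates both parts of Theorem~\ref{RRwc}. \begin{example} Let $\Gamma=(G,\omega ,\ell)$ where $G$ consists of a single vertex $v$ with no edges and $\omega (v)=1$, so that $g=1$ and $K_{\Gamma}=(0+2\cdot 1-2)v=0$. For every $\epsilon>0$ the curve $\Gamma^{\omega}_{\epsilon}$ is a circle of length $\epsilon$ (the virtual loop at $v$). Take $D=v$. Then $r_{\Gamma}(v)=r_{\Gamma^{\omega}_{\epsilon}}(v)=0$: indeed $v$ is effective, whereas for any point $q\neq v$ of the circle the divisor $v-q$ is not principal, so that $R(v-q)=\emptyset$. On the other hand $r_{\Gamma}(K_{\Gamma}-v)=r_{\Gamma}(-v)=-1$, as no divisor of negative degree is equivalent to an effective one. Hence $$ r_{\Gamma}(D)-r_{\Gamma}(K_{\Gamma}-D)=0-(-1)=1=\deg D-g+1, $$ in accordance with part~\eqref{RRwc2}; note that, as asserted in part~\eqref{RRwc1}, the ranks computed on $\Gamma^{\omega}_{\epsilon}$ do not depend on $\epsilon$. \end{example}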
\section{Introduction} The recent observation of two pulsars with approximately two solar masses \cite{Demorest:2010bx,Antoniadis:2013pzd} presents a severe challenge to the theoretical description of cold high-density matter in $\beta$-equilibrium. The equation of state (EoS) has to be sufficiently stiff in order to support such high masses of compact stars. Many models that are solely based on nucleonic (neutrons and protons) and leptonic (electrons and muons) degrees of freedom are able to reproduce maximum neutron star masses above two solar masses if the effective interaction between the nucleons becomes strongly repulsive at high baryon densities. However, additional hadronic particle species can appear at densities above two or three times the nuclear saturation density $n_{\rm sat} \approx 0.16$~fm$^{-3}$. In most cases, these additional degrees of freedom lead to a substantial softening of the EoS resulting in a reduced maximum mass of the compact star below the observed values. This feature is well-known for models with hyperons -- the so-called ``hyperon puzzle'', see, e.g., \cite{Lonardoni:2014bwa,Fortin:2014mya} and references therein -- but was also observed in approaches that take excited states of the nucleons such as $\Delta(1232)$ resonances into account, see, e.g., \cite{Drago:2014oja,Cai:2015hya} and references therein. Usually, only specifically designed interactions can avoid the problem of too low maximum masses. Successful models of the baryonic contribution to the stellar EoS should be scrutinized as to whether they comply with other experimental constraints, e.g.\ with respect to the employed interactions. In the center of compact stars very high baryon densities are reached exceeding several times $n_{\rm sat}$ and the corresponding Fermi momenta of the particles are much larger than those at saturation. This is particularly significant for models with only nucleonic degrees of freedom.
Hence, not only the density dependence of the effective in-medium interaction between nucleons but also their momentum dependence becomes relevant. For densities near $n_{\rm sat}$ this information is contained in the optical potential of nucleons that can be extracted from the systematics of elastic proton scattering on nuclei, see, e.g., \cite{Hama:1990vr,Cooper:1993nx}. A saturation of the real part of the optical potential is observed at high kinetic energies approaching $1$~GeV. The momentum dependence of the in-medium interaction is also crucial in simulations of heavy-ion collisions \cite{Zhang:1994hpa}. Typical approaches for the baryonic contribution to the stellar EoS are energy density functionals that originate from nonrelativistic or relativistic mean-field models of nuclear matter. The most prominent cases among the former class are Skyrme energy density functionals, see reference \cite{Dutra:2012mb} for an overview of different parametrizations. They are derived originally from the zero-range Skyrme interaction with a two-body contribution, which is an expansion up to second order in the particle momenta, and a density dependent three-body contribution, which is included in order to reproduce saturation properties of nuclear matter. Obviously, an extrapolation of the model to high momenta is questionable given the limited form of the momentum dependence. Examples of the latter class, frequently denominated covariant density functionals, can be inferred from relativistic Lagrangian densities. In conventional models, see reference \cite{Dutra:2014qga} for a wide collection of different parametrizations, a nucleon optical potential in the medium can be derived from the relativistic scalar and vector self-energies, see section \ref{sec:res}. It exhibits a linear increase with energy, which is in contradiction with the expectation from experiment. 
In general, one would expect that the nucleon self-energies themselves depend explicitly on the particle momentum or energy as, e.g., in Dirac-Brueckner calculations of nuclear matter \cite{Fuchs:2003zn}. However, this is not realized in standard relativistic mean-field (RMF) models. There are particular extensions of RMF models that contain nucleon self-energies with an explicit energy or momentum dependence. This dependence cannot be introduced in a relativistic model in a simple parametric form because it affects, e.g., the definition of the conserved currents. In extended systematic approaches new derivative couplings between the nucleon and meson fields are introduced that make it possible to reproduce the energy dependence of the optical potential as extracted from experiments. One of the earliest RMF models with scalar derivative couplings was presented in reference \cite{Zimanyi:1990np}. A rescaling of the nucleon fields removed the explicit momentum dependence of the self-energies but led to a considerable softening of the EoS. More general couplings of the mesons to linear derivatives of the nucleon fields were considered in reference \cite{Typel:2002ck} with an application to uniform nuclear matter. With appropriately chosen coupling constants a reduction of the optical potential was found as compared to the strong linear energy dependence in conventional RMF models. The model was further extended in reference \cite{Typel:2005ba} assuming a density dependence of the couplings. It was successfully applied to the description of finite nuclei. Notable new features were the increase of the effective nucleon masses (usually rather small in order to explain the strong spin-orbit interaction in nuclei) and correspondingly higher level densities close to the Fermi energy in nuclei in better concordance with expectations from experiments. 
However, using couplings only linear in the derivatives leads to a quadratic dependence of the optical potential on the kinetic energy with a decrease for energies exceeding $1$~GeV. Couplings to all orders in the derivative of the nucleons were introduced in the so-called nonlinear derivative (NLD) model \cite{Gaitanos:2009nt,Gaitanos:2011yb} assuming a particular exponential dependence on the derivatives but no density dependence of the nucleon-meson couplings or self-couplings of the mesons. The general formalism was developed and applied to infinite isospin symmetric and asymmetric nuclear matter with a particular choice of the non-linear derivative terms that lead to an energy dependence of the self-energies. In reference \cite{Gaitanos:2012hg} the approach was extended with generalized non-linear derivative couplings of any functional form in the field-theoretical formalism allowing for a momentum or energy dependence. Nonlinear self-couplings of the $\sigma$ meson field were added in order to improve the description of characteristic nuclear matter parameters at saturation. The application of this version of the NLD model to stellar matter yielded a maximum neutron star mass of $2.03~M_{\rm sol}$, barely satisfying the observational constraints, but the dependence of the result on the model parameters was not explored in detail. The NLD model was also applied to the description of bulk properties of nuclear matter in reference \cite{Chen:2012rn}. A softening of the EoS and maximum masses of neutron stars substantially below $2~M_{\rm sol}$ were found with different parametrizations assuming an energy dependence of the couplings, but nonlinear self-couplings of the mesons or density dependent meson-nucleon couplings were not considered. Properties of finite nuclei were studied in reference \cite{Chen:2014xha} after adding meson self-interactions in the Lagrangian. 
A qualitative description similar to conventional RMF models was achieved but neutron star properties were not examined in this extended model. In this work we introduce a more flexible extension of the nonlinear derivative model assuming density dependent meson-nucleon couplings in addition. Instead of using derivative operators that generate an explicit momentum dependence of the self-energies, we will use a functional form that leads to an energy dependence. This approach will also be more suitable for future applications of the DD-NLD approach to nuclei since the relevant equations and their numerical implementation are simplified. Here, the equations of state of symmetric and asymmetric nuclear matter will be calculated for different choices of the derivative coupling operators that lead to a saturation of the optical potential at high energies as derived from experiments. They are compared to the results of a standard RMF model with density dependent couplings that is consistent with essentially all modern constraints for the characteristic nuclear matter parameters at saturation. The parameters of the DD-NLD models are chosen such that these saturation properties are reproduced. The effect of the optical potential constraint on the mass-radius relations of neutron stars will be studied. The paper is organized as follows: In section \ref{sec:model} the Lagrangian density of the DD-NLD approach is presented. The field equations in mean-field approximation and the energy-momentum tensor will be derived. The relevant equations for the case of infinite nuclear matter will be considered in more detail in section \ref{sec:inm}. The parametrization of the density dependent couplings and the functional form of the derivative coupling functions are discussed in section \ref{sec:para}. 
Results for the energy dependence of the optical potential, the EoS of nuclear matter and the mass-radius relation are presented in section \ref{sec:res} for various versions of the model. Conclusions are given in section \ref{sec:con}. Detailed expressions for the various densities are collected in \ref{sec:app_a}. \section{Lagrangian density and field equations of the DD-NLD model} \label{sec:model} In most RMF models the effective interaction between nucleons is described by an exchange of mesons. Usually, $\sigma$ and $\omega$ mesons are introduced to consider the attractive and repulsive contributions to the nucleon-nucleon potential, respectively. They are represented by isoscalar Lorentz scalar and Lorentz vector fields $\sigma$ and $\omega_{\mu}$. In order to model the isospin dependence of the interaction, the exchange of $\rho$ mesons is included. It is denoted by the isovector Lorentz vector field $\bm{\rho}_{\mu}$ in the following. The Lagrangian density in the DD-NLD approach\footnote{Natural units with $\hbar = c = 1$ are used in the following.} \begin{equation} \mathcal{L} = \mathcal{L}_{\rm nuc} + \mathcal{L}_{\rm mes} + \mathcal{L}_{\rm int} \end{equation} contains contributions of the free nucleons $\Psi = (\Psi_{p},\Psi_{n})$ with mass $m$ \begin{equation} \label{eq:L_nuc} \mathcal{L}_{\rm nuc} = \frac{1}{2} \left( \overline{\Psi}\gamma_{\mu}i\overrightarrow{\partial^{\mu}}\Psi - \overline{\Psi} i \overleftarrow{\partial^{\mu}}\gamma_{\mu}\Psi \right) - m \overline{\Psi}\Psi \end{equation} in a symmetrized form and of free mesons \begin{eqnarray} \mathcal{L}_{\rm mes} & = & \frac{1}{2} \left( \partial_{\mu}\sigma\partial^{\mu}\sigma - m^{2}_{\sigma}\sigma^{2} - \frac{1}{2} F^{(\omega)}_{\mu\nu}F^{(\omega)\mu\nu} + m^{2}_{\omega}\omega_{\mu}\omega^{\mu} \right. \\ \nonumber & & \left. 
- \frac{1}{2} \bm{F}^{(\rho)}_{\mu\nu} \bm{F}^{(\rho)\mu\nu} + m^{2}_{\rho} \bm{\rho}_{\mu} \bm{\rho}^{\mu} \right) \end{eqnarray} with the field tensors \begin{equation} F^{(\omega)}_{\mu\nu} = \partial_{\mu} \omega_{\nu} - \partial_{\nu} \omega_{\mu} \quad \mbox{and} \quad \bm{F}^{(\rho)}_{\mu\nu} = \partial_{\mu} \bm{\rho}_{\nu} - \partial_{\nu} \bm{\rho}_{\mu} \end{equation} of the isoscalar $\omega$ meson and the isovector $\rho$ meson, respectively. The arrows in equation (\ref{eq:L_nuc}) denote the direction of differentiation. Standard RMF models assume a minimal coupling of the nucleons to the meson fields leading to \begin{equation} \label{eq:L_int_st} \mathcal{L}_{\rm int} = \Gamma_{\sigma} \sigma \overline{\Psi} \Psi - \Gamma_{\omega} \omega_{\mu}\overline{\Psi}\gamma^{\mu}\Psi - \Gamma_{\rho} \bm{\rho}_{\mu}\overline{\Psi}\bm{\tau}\gamma^{\mu}\Psi \end{equation} for the interaction contribution to the total Lagrangian density $\mathcal{L}$ with meson-nucleon couplings $\Gamma_{i}$ ($i=\sigma,\omega,\rho$). We assume that they depend on the vector density $n_{v}$, see equation (\ref{eq:n_v}) for the explicit definition. In the derivative coupling model the nucleon field $\Psi$ ($\overline{\Psi}$) is replaced in $\mathcal{L}_{\rm int}$ by $\mathcal{D}_{m}\Psi$ ($\overline{\mathcal{D}_{m}\Psi}$) with operator functions $\mathcal{D}_{m}$, which can be different for the various mesons $m=\sigma,\omega,\rho$. They can be expanded in a series \begin{equation} \mathcal{D}_{m}(x) = \sum_{n=0}^{\infty} \frac{d_{n}^{(m)}}{n!} x^{n} \end{equation} with numerical coefficients $d_{n}^{(m)}$. The argument $x$ contains derivatives $i\partial_{\beta}$ that act on the nucleon field. More specifically we write \begin{equation} \label{eq:def_x} x = v^{\beta}i\partial_{\beta} - sm \end{equation} as a hermitian Lorentz scalar operator with an auxiliary Lorentz vector $v^{\beta}=(v_{0},\vec{v})$ and a scalar factor $s$. 
Hence the interaction contribution in the NLD model is written as \begin{eqnarray} \mathcal{L}_{\rm int} & = & \frac{1}{2} \Gamma_{\sigma} \sigma \left( \overline{\Psi}\overleftarrow{\mathcal{D}}_{\sigma}\Psi + \overline{\Psi} \overrightarrow{\mathcal{D}}_{\sigma} \Psi \right) \\ \nonumber & & - \frac{1}{2} \Gamma_{\omega} \omega_{\mu}\left( \overline{\Psi}\overleftarrow{\mathcal{D}}_{\omega}\gamma^{\mu}\Psi + \overline{\Psi}\gamma^{\mu}\overrightarrow{\mathcal{D}}_{\omega}\Psi \right) \\ \nonumber & & - \frac{1}{2} \Gamma_{\rho} \bm{\rho}_{\mu} \left( \overline{\Psi}\overleftarrow{\mathcal{D}}_{\rho}\gamma^{\mu}\bm{\tau}\Psi + \overline{\Psi}\bm{\tau}\gamma^{\mu}\overrightarrow{\mathcal{D}}_{\rho}\Psi \right) \end{eqnarray} in a symmetrized form with respect to the derivative operators $\mathcal{D}_{m}$, i.e., \begin{eqnarray} \overrightarrow{\mathcal{D}}_{m} &=& \sum^{\infty}_{k=0} C_{k}^{(m)} (v^{\beta }i \overrightarrow{\partial}_{\beta})^{k} \\ \overleftarrow{\mathcal{D}}_{m} &=& \sum^{\infty}_{k=0} C_{k}^{(m)} (-v^{\beta }i \overleftarrow{\partial}_{\beta})^{k} \end{eqnarray} with coefficients \begin{equation} C_{k}^{(m)} = \sum^{\infty}_{n=k} \frac{d_{n}^{(m)}}{n!} \binom{n}{k} (-sm)^{n-k} \: , \end{equation} which follow from the binomial expansion of $x^{n}=(v^{\beta}i\partial_{\beta}-sm)^{n}$. Obviously, no derivatives appear for the choice $\mathcal{D}_{m}=1$ (corresponding to $d_{n}^{(m)} = \delta_{n0}$) and the standard form (\ref{eq:L_int_st}) is recovered. When spatially inhomogeneous systems with Coulomb interaction are considered, $\mathcal{L}_{\rm mes}$ and $\mathcal{L}_{\rm int}$ can be complemented with the appropriate contributions. Since only uniform matter is considered in the following, we do not give them here explicitly. 
The field equations of the nucleons are derived from the generalized Euler-Lagrange equation \begin{equation} \label{eq:EuLa} \frac{\partial\mathcal{L}}{\partial \varphi_r} + \sum^{\infty}_{i=1}(-)^i \partial_{\alpha_1,\dots,\alpha_i} \frac{\partial\mathcal{L}}{\partial(\partial_{\alpha_1,\dots,\alpha_i}\varphi_r)} = 0 \end{equation} for $\varphi_{r}=\Psi, \overline{\Psi}$. For the meson fields $\varphi_{r}= \sigma, \omega_{\mu}, \bm{\rho}_{\mu}$ the standard Euler-Lagrange equation applies, i.e.\ only the $i=1$ term in equation (\ref{eq:EuLa}) is relevant since higher-order derivatives of the meson fields do not appear in the Lagrangian density $\mathcal{L}$. Details can be found in references \cite{Gaitanos:2009nt,Gaitanos:2012hg}. The Dirac equation \begin{equation} \label{eq:Dirac} \left[\gamma_{\mu} \left(i\partial^{\mu} - \Sigma^{\mu}\right) - \left( m - \Sigma\right)\right] \Psi = 0 \end{equation} for the nucleons looks formally the same as in standard RMF approaches but the scalar ($\Sigma$) and vector ($\Sigma^{\mu}$) self-energy operators now contain the derivative operators $\mathcal{D}_{m}$. They are given by \begin{equation} \label{eq:Sigma_s} \Sigma = \Gamma_{\sigma} \sigma \overrightarrow{\mathcal{D}}_{\sigma} \end{equation} and \begin{equation} \label{eq:Sigma_v} \Sigma^{\mu}= \Gamma_{\omega}\omega^{\mu}\overrightarrow{\mathcal{D}}_{\omega} + \Gamma_{\rho}\bm{\tau} \cdot \bm{\rho}^{\mu} \overrightarrow{\mathcal{D}}_{\rho} + \Sigma^{\mu}_{R} \end{equation} with the `rearrangement' contribution \begin{eqnarray} \Sigma^{\mu}_{R} & = & \frac{j^{\mu}}{n_{v}} \left[ \Gamma_{\omega}^{\prime} \omega^{\nu} \frac{1}{2} \left( \overline{\Psi} \overleftarrow{\mathcal{D}}_{\omega} \gamma_{\nu} \Psi + \overline{\Psi} \gamma_{\nu} \overrightarrow{\mathcal{D}}_{\omega} \Psi \right) \right. \\ \nonumber & & \left. 
+ \Gamma_{\rho}^{\prime} \bm{\rho}^{\nu} \frac{1}{2} \left( \overline{\Psi} \overleftarrow{\mathcal{D}}_{\rho} \gamma_{\nu} \bm{\tau} \Psi + \overline{\Psi} \gamma_{\nu} \bm{\tau} \overrightarrow{\mathcal{D}}_{\rho} \Psi \right) \right. \\ \nonumber & & \left. - \Gamma_{\sigma}^{\prime} \sigma \frac{1}{2} \left( \overline{\Psi} \overleftarrow{\mathcal{D}}_{\sigma} \Psi + \overline{\Psi} \overrightarrow{\mathcal{D}}_{\sigma} \Psi \right) \right] \end{eqnarray} containing derivatives \begin{equation} \Gamma_{i}^{\prime} = \frac{d\Gamma_{i}}{dn_{v}} \end{equation} of the coupling functions. In the case of inhomogeneous systems and a non-vanishing three-vector component $\vec{v}$ of the auxiliary vector $v^{\beta}$, additional contributions in (\ref{eq:Sigma_s}) and (\ref{eq:Sigma_v}) will appear. In the present application of the DD-NLD model, however, we do not consider this case. The field equations of the mesons are found as \begin{eqnarray} \partial_{\mu}\partial^{\mu} \sigma + m^{2}_{\sigma} \sigma & = & \frac{1}{2} \Gamma_{\sigma} \left( \overline{\Psi} \overleftarrow{\mathcal{D}}_{\sigma} \Psi + \overline{\Psi}\overrightarrow{\mathcal{D}}_{\sigma} \Psi \right) \\ \partial_{\mu} F^{(\omega)\mu\nu} + m^{2}_{\omega} \omega^{\nu} & = & \frac{1}{2} \Gamma_{\omega} \left( \overline{\Psi} \overleftarrow{\mathcal{D}}_{\omega} \gamma^{\nu} \Psi + \overline{\Psi}\gamma^{\nu} \overrightarrow{\mathcal{D}}_{\omega} \Psi \right) \\ \partial_{\mu} \bm{F}^{(\rho)\mu\nu} + m^{2}_{\rho} \bm{\rho}^{\nu} & = & \frac{1}{2} \Gamma_{\rho} \left( \overline{\Psi} \overleftarrow{\mathcal{D}}_{\rho} \gamma^{\nu} \bm{\tau}\Psi + \overline{\Psi}\bm{\tau}\gamma^{\nu} \overrightarrow{\mathcal{D}}_{\rho} \Psi \right) \end{eqnarray} with source terms containing derivative operators. 
The conserved baryon current in the DD-NLD model is given by \begin{equation} \label{eq:J} J^{\mu} = \sum_{i=p,n} \langle\overline{\Psi}_{i}N^{\mu}\Psi_{i}\rangle \end{equation} with the norm operator \begin{equation} \label{eq:norm} N^{\mu} = \gamma^{\mu} + \Gamma_{\sigma} \sigma \left(\partial^{\mu}_{p} \mathcal{D}_{\sigma} \right) - \Gamma_{\omega}\omega_{\alpha} \gamma^{\alpha} \left( \partial^{\mu}_{p} \mathcal{D}_{\omega}\right) - \Gamma_{\rho} \bm{\rho}_{\alpha} \gamma^{\alpha} \bm{\tau} \left( \partial^{\mu}_{p} \mathcal{D}_{\rho}\right) \end{equation} where $\partial^{\mu}_{p}\mathcal{D}_{m}$ is the derivative of the operator $\mathcal{D}_{m}$ with respect to the momentum $p_{\mu}=i\partial_{\mu}$, i.e.\ \begin{equation} \partial^{\mu}_{p}\mathcal{D}_{m} = v^{\mu}\sum^{\infty}_{k=1} kC_{k}^{(m)} (v^{\beta }i\partial_{\beta})^{k-1} \: , \end{equation} and $\langle \dots \rangle$ denotes the summation over all occupied states. The current (\ref{eq:J}) is not identical to the vector current \begin{equation} \label{eq:J_v} J_{v}^{\mu} = \sum_{i=p,n} \langle\overline{\Psi}_{i}\gamma^{\mu}\Psi_{i}\rangle \: , \end{equation} which is used to define the vector density \begin{equation} \label{eq:n_v} n_{v} = \sqrt{J_{v}^{\mu}J_{v\mu}} \end{equation} appearing as the argument of the coupling functions $\Gamma_{i}$. The energy-momentum tensor assumes the form \begin{equation} T^{\mu \nu} = \sum_{i=p,n} \langle\overline{\Psi}_{i}N^{\mu} p^{\nu} \Psi_{i}\rangle - g^{\mu\nu}\langle \mathcal{L} \rangle \: . \end{equation} Then the energy density $\varepsilon$ and pressure $p$ are found from $\varepsilon = T^{00}$ and $p = \sum_{i=1}^{3}T^{ii}/3$, respectively. \section{DD-NLD model for nuclear matter} \label{sec:inm} In the case of stationary nuclear matter, the equations simplify considerably since the system is homogeneous and the meson fields, which are treated as classical fields, are constant in space and time. 
Positive-energy solutions of the Dirac equation (\ref{eq:Dirac}) are plane waves $\Psi_{i} = u_{i} \exp \left( - i p_{i}^{\mu} x_{\mu}\right)$ for protons and neutrons with Dirac spinors $u_{i}$, which are normalized according to \begin{equation} \overline{\Psi}_{i} N^{0} \Psi_{i} = \bar{u}_{i} N^{0} u_{i} = 1 \end{equation} with the time component of the norm operator (\ref{eq:norm}). They depend on the effective mass \begin{equation} m_{i}^{\ast} = m_{i} - \Sigma_{i} \end{equation} and effective momentum \begin{equation} p_{i}^{\ast\mu} = p_{i}^{\mu} - \Sigma_{i}^{\mu} \end{equation} related by the dispersion relation \begin{equation} p_{i}^{\ast\mu} p_{i\mu}^{\ast} = \left(m_{i}^{\ast}\right)^{2} \: . \end{equation} The derivative $i\partial^{\beta}$ in the $\mathcal{D}_{m}$ operators can be replaced by the corresponding four-momentum $p_{i}^{\beta}=(E_{i},\vec{p}_{i})$ resulting in a simple function $D_{m}$ depending on the energy $E_{i}$ and the momentum $\vec{p}_{i}$ of the nucleon. Using the identity \begin{equation} N^{\mu} \Psi_{i} = \left[ \gamma^{\mu} + \partial_{p}^{\mu} \Sigma_{i} - \gamma_{\alpha} \partial_{p}^{\mu} \left( \Sigma_{i}^{\alpha} - \Sigma_{R}^{\alpha} \right) \right] \Psi_{i} \end{equation} the conserved current and the energy-momentum tensor can be written as \begin{equation} J^{\mu} = \sum_{i=p,n} \kappa_{i} \int \frac{d^{3} p}{(2\pi)^{3}} \: \frac{\Pi^{\mu}_{i}}{\Pi^{0}_{i}} \end{equation} and \begin{equation} T^{\mu\nu}= \sum_{i=p,n} \kappa_{i} \int \frac{d^{3}p}{(2\pi)^{3}} \: \frac{\Pi^{\mu}_{i} p^{\nu}}{\Pi^{0}_{i}} - g^{\mu\nu}\langle\mathcal{L}\rangle \: , \end{equation} respectively, with the four-momentum \begin{equation} \Pi^{\mu}_{i} = p^{\ast\mu}_{i} + m^{\ast}_{i} \left(\partial^{\mu}_{p}\Sigma_{i}\right) - p^{\ast}_{i\beta} \left[\partial^{\mu}_{p} \left( \Sigma^{\beta}_{i}- \Sigma^{\beta}_{R}\right)\right] \end{equation} and spin degeneracy factors $\kappa_{i}=2$. 
The integration runs over all momenta $p$ with modulus lower than the Fermi momenta $p_{Fi}$ in the no-sea approximation. They are defined through the individual nucleon densities \begin{equation} \label{eq:n_i} n_{i} = \frac{\kappa_{i}}{6\pi^{2}} p_{Fi}^{3} \: . \end{equation} Without the preference for a particular direction in infinite nuclear matter, the spatial components of the Lorentz vector meson fields vanish and the auxiliary vector in equation (\ref{eq:def_x}) is set to $v^{\beta}=\delta_{\beta 0}$ such that the $D_{m}$ functions only depend on the nucleon energy $E_{i}$. Without isospin changing processes, only the third component of the isovector $\rho$ field has to be considered in the field equations for the mesons. Using the abbreviations $\omega = \omega^{0}$ and $\rho = \bm{\rho}^{0}_{3}$ the meson fields are immediately obtained from \begin{eqnarray} \label{eq:feq_sigma} \sigma & = & \frac{\Gamma_{\sigma}}{m_{\sigma}^{2}} n_{\sigma} = \frac{\Gamma_{\sigma}}{m_{\sigma}^{2}} \sum_{i=p,n} \langle \overline{\Psi}_{i} D_{\sigma} \Psi_{i} \rangle \\ \label{eq:feq_omega} \omega & = & \frac{\Gamma_{\omega}}{m_{\omega}^{2}} n_{\omega} = \frac{\Gamma_{\omega}}{m_{\omega}^{2}} \sum_{i=p,n} \langle \overline{\Psi}_{i} \gamma^{0} D_{\omega} \Psi_{i} \rangle \\ \label{eq:feq_rho} \rho & = & \frac{\Gamma_{\rho}}{m_{\rho}^{2}} n_{\rho} = \frac{\Gamma_{\rho}}{m_{\rho}^{2}} \sum_{i=p,n} \langle \overline{\Psi}_{i} \gamma^{0} \tau_{3} D_{\rho} \Psi_{i} \rangle \end{eqnarray} with source densities $n_{\sigma}$, $n_{\omega}$, and $n_{\rho}$. 
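The field equations (\ref{eq:feq_sigma})--(\ref{eq:feq_rho}) have to be solved self-consistently because the source densities depend on the effective mass and hence on the $\sigma$ field itself. The following sketch illustrates the fixed-point iteration for the simplest case of symmetric matter with $D_{\sigma}=1$ (parametrization D1); the constant coupling and $m_{\sigma}$ are the values quoted in section \ref{sec:para}, while the neglect of the density dependence of $\Gamma_{\sigma}$ away from $n_{\rm ref}$ and of the rearrangement term is a simplifying assumption of this illustration, not part of the full model.

```python
import math

HBARC = 197.327  # hbar*c in MeV fm

def scalar_density(m_eff, p_f, kappa=2):
    """n_s = kappa/(2 pi^2) int_0^{p_F} dp p^2 m*/sqrt(p^2 + m*^2), momenta in MeV."""
    n_steps, total = 400, 0.0
    dp = p_f / n_steps
    for i in range(n_steps):
        p = (i + 0.5) * dp  # midpoint rule
        total += p * p * m_eff / math.sqrt(p * p + m_eff * m_eff) * dp
    return kappa / (2.0 * math.pi ** 2) * total  # in MeV^3

def solve_m_eff(n_b, m=939.0, g_sigma=10.72913, m_sigma=550.0):
    """Fixed-point iteration m* = m - (g_sigma/m_sigma)^2 n_s(m*) for symmetric
    nuclear matter with D_sigma = 1; n_b is the baryon density in fm^-3."""
    p_f = HBARC * (1.5 * math.pi ** 2 * n_b) ** (1.0 / 3.0)  # per-species Fermi momentum
    m_eff = m
    for _ in range(200):
        n_s = 2.0 * scalar_density(m_eff, p_f)       # protons + neutrons
        m_new = m - (g_sigma / m_sigma) ** 2 * n_s   # sigma field eliminated
        m_eff = 0.5 * (m_eff + m_new)                # damped update for stability
    return m_eff, p_f
```

At $n_{B}=0.15$~fm$^{-3}$ this toy iteration yields an effective mass well below the free mass, qualitatively similar to the value $m_{\rm eff}=0.5625~m_{\rm nuc}$ imposed in the fit of section \ref{sec:para}; the agreement is only qualitative since the density dependence of the coupling is ignored here.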
The self-energies simplify to \begin{eqnarray} \Sigma_{i} & = & \Gamma_{\sigma} \sigma D_{\sigma} \\ \Sigma_{i}^{0} & = & \Gamma_{\omega}\omega D_{\omega} + \Gamma_{\rho} \tau_{3,i} \rho D_{\rho} + \Sigma^{0}_{R} \\ \vec{\Sigma}_{i} & = & 0 \end{eqnarray} with $\tau_{3,i} = 1$ ($-1$) for protons (neutrons). The `rearrangement' contribution \begin{equation} \Sigma^{0}_{R} = \Gamma_{\omega}^{\prime} \omega n_{\omega} + \Gamma_{\rho}^{\prime} \rho n_{\rho} - \Gamma_{\sigma}^{\prime} \sigma n_{\sigma} \end{equation} is independent of the nucleon energy. The dispersion relation reads \begin{equation} \label{eq:disp} E_{i} = \sqrt{p^{2}+\left( m_{i} - S_{i} \right)^{2}} + V_{i} \end{equation} if we introduce the energy-dependent scalar potentials $S_{i}(E) = \Sigma_{i}$ and vector potentials $V_{i}(E) = \Sigma_{i}^{0}$. Explicit expressions for the various densities and thermodynamic quantities of the DD-NLD model are given in \ref{sec:app_a}. \section{Parametrization of the DD-NLD model} \label{sec:para} For the application of the DD-NLD model to nuclear matter the parameters need to be specified. Besides the usual parameters of an RMF model with density dependent couplings the form of the $D_{m}$ functions has to be given. We assume identical functions for all mesons, i.e.\ $D = D_{\sigma} = D_{\omega} = D_{\rho}$, and consider three functional dependencies: \begin{itemize} \item[D1] a constant $D = 1$, which corresponds to a usual RMF model with density dependent couplings, \item[D2] a Lorentzian form $D = 1/(1+x^{2})$, \item[D3] an exponential dependence $D=\exp\left( -x \right)$ \end{itemize} with $x=(E_{i}-m_{i})/\Lambda$ because we set $v^{\beta}=\delta_{\beta 0}$ and $s=1$ in equation (\ref{eq:def_x}). The parameter $\Lambda$ regulates the strength of the energy dependence. For the proton and neutron masses the experimental values of $m_{p} = 938.272046$~MeV/c${}^{2}$ and $m_{n} = 939.565379$~MeV/c${}^{2}$, respectively, are used. 
The meson masses are set to $m_{\sigma}=550$~MeV/c${}^{2}$, $m_{\omega}=783$~MeV/c${}^{2}$, and $m_{\rho}=763$~MeV/c${}^{2}$. The density dependence of the meson-nucleon couplings has the same form as introduced in reference \cite{Typel:1999yq}. We assume a dependence on the vector density $n_{v}$ as defined in equation (\ref{eq:n_v}), which is not identical to the zero-component of the conserved baryon current (\ref{eq:J}). See \ref{sec:app_a} for explicit expressions. This choice simplifies the rearrangement terms considerably. For the isoscalar mesons $m=\sigma,\omega$ the coupling is written as \begin{equation} \Gamma_{m}(n_{v}) = \Gamma_{m}(n_{\rm ref}) f_{m}(x) \end{equation} with functions \begin{equation} f_{m}(x) = a_{m} \frac{1 + b_{m} (x + d_{m})^{2}}{1 + c_{m} (x +d_{m})^{2}} \end{equation} that depend on the argument $x=n_{v}/n_{\rm ref}$ and contain coefficients $a_{m}$, $b_{m}$, $c_{m}$, and $d_{m}$. For the $\rho$-meson coupling we set \begin{equation} \Gamma_{\rho}(n_v) = \Gamma_{\rho}(n_{\rm ref}) \exp\left[-a_{\rho} \left(x-1\right)\right] \: . \end{equation} In order to reduce the number of independent parameters we demand that the conditions $f_{\sigma}(1)=f_{\omega}(1)=1$ and $f_{\sigma}^{\prime\prime}(0) = f_{\omega}^{\prime\prime}(0)=0$ hold. Hence, there are only two independent coefficients in the functions $f_{m}$ for each of the isoscalar mesons. The overall magnitude of the couplings is given by the couplings $\Gamma_{m}(n_{\rm ref})$ at a reference density $n_{\rm ref}$. We require that the characteristic saturation properties for the three choices of the $D$ function are identical and close to current values extracted from experiments. In particular, we set the saturation density to $n_{\rm sat}= 0.15$~fm${}^{-3}$, the binding energy per nucleon at saturation to $B=16$~MeV, the compressibility to $K = 240$~MeV, the symmetry energy to $J = 32$~MeV and the symmetry energy slope coefficient to $L=60$~MeV. 
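As a direct illustration of the functional forms above, the couplings can be coded in a few lines; the coefficients used below are those of the D1 column of table \ref{tab:1} for the $\sigma$ and $\rho$ mesons, so the normalization $f_{\sigma}(1)=1$ can be checked numerically.

```python
import math

N_REF = 0.15  # reference density in fm^-3 (D1 parametrization)

def f_iso(x, a, b, c, d):
    """Density dependence of the isoscalar couplings: f(x) = a (1 + b(x+d)^2)/(1 + c(x+d)^2)."""
    return a * (1.0 + b * (x + d) ** 2) / (1.0 + c * (x + d) ** 2)

def gamma_sigma(n_v, g_ref=10.72913, a=1.36402, b=0.53404, c=0.86714, d=0.62000):
    """Gamma_sigma(n_v) = Gamma_sigma(n_ref) f_sigma(n_v/n_ref) with D1 coefficients."""
    return g_ref * f_iso(n_v / N_REF, a, b, c, d)

def gamma_rho(n_v, g_ref=3.59367, a_rho=0.48762):
    """Gamma_rho(n_v) = Gamma_rho(n_ref) exp(-a_rho (n_v/n_ref - 1)) with D1 coefficients."""
    return g_ref * math.exp(-a_rho * (n_v / N_REF - 1.0))
```

Both couplings decrease monotonically with density (since $b_{\sigma}<c_{\sigma}$ and $a_{\rho}>0$), which weakens the effective interaction in the high-density regime relative to a constant-coupling model.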
Furthermore, we set the effective nucleon mass at saturation to $m_{\rm eff} = 0.5625~m_{\rm nuc}$ (related to the strength of the spin-orbit potential in nuclei) and fix the ratios $f_{\omega}^{\prime}(1)/f_{\omega}(1)=-0.15$ and $f_{\omega}^{\prime\prime}(1)/f_{\omega}^{\prime}(1)=-1.0$ in order to determine the coefficients in the functions $f_{m}$ uniquely. These values are close to those of the parametrization DD2 \cite{Typel:2009sy} that was fitted to properties of nuclei and predicts a neutron star maximum mass of $2.4~M_{\rm sol}$. \begin{table}[t] \caption{\label{tab:1}Parameters of the meson coupling functions for three choices of the $D$ function and different values of the cut-off parameter $\Lambda$.} \centering \begin{tabular}{clccccc} \toprule meson & parameter & \multicolumn{1}{c}{D1} & \multicolumn{2}{c}{D2} & \multicolumn{2}{c}{D3} \\ \midrule & $\Lambda$ [MeV] & $-$ & $400$ & $500$ & $600$ & $700$ \\ \midrule $\sigma$ & $\Gamma_{\sigma}(n_{\rm ref})$ & 10.72913 & 10.93466 & 10.86315 & 9.74679 & 9.89158 \\ & $a_{\sigma}$ & 1.36402 & 1.35816 & 1.36015 & 1.38410 & 1.38064 \\ & $b_{\sigma}$ & 0.53404 & 0.51914 & 0.52433 & 0.61515 & 0.60127 \\ & $c_{\sigma}$ & 0.86714 & 0.83989 & 0.84931 & 1.00615 & 0.98211 \\ & $d_{\sigma}$ & 0.62000 & 0.62998 & 0.62648 & 0.57558 & 0.58258 \\ \midrule $\omega$ & $\Gamma_{\omega}(n_{\rm ref})$ & 13.29858 & 13.56462 & 13.47215 & 12.0503 & 12.23457 \\ & $a_{\omega}$ & 1.3822 & 1.3822 & 1.3822 & 1.3822 & 1.3822 \\ & $b_{\omega}$ & 0.42253 & 0.42253 & 0.42253 & 0.42253 & 0.42253 \\ & $c_{\omega}$ & 0.71932 & 0.71932 & 0.71932 & 0.71932 & 0.71932 \\ & $d_{\omega}$ & 0.68073 & 0.68073 & 0.68073 & 0.68073 & 0.68073 \\ \midrule $\rho$ & $\Gamma_{\rho}(n_{\rm ref})$ & 3.59367 & 3.67852 & 3.64957 & 3.18819 & 3.25233 \\ & $a_{\rho}$ & 0.48762 & 0.48954 & 0.48872 & 0.34777 & 0.36279 \\ \midrule & $n_{\rm ref}$ [fm$^{-3}$] & 0.15000 & 0.14618 & 0.147485 & 0.16515 & 0.16268 \\ \bottomrule \end{tabular} \end{table} Explicit values 
of the model parameters are given in table \ref{tab:1} with two choices of the cut-off parameter $\Lambda$ for the cases of Lorentzian and exponential functions $D$. Note that the coefficients of the function $f_{\omega}$ are identical for all five parametrizations due to the constraints. The reference density $n_{\rm ref}$ is not necessarily identical to the saturation density $n_{\rm sat}$ because in the case of explicit derivative couplings the vector density $n_{v}$ is different from the conserved baryon density $n_{B} = J^{0}=n_{p}+n_{n}$. \section{Results} \label{sec:res} \begin{figure}[t] \centering \includegraphics[width=0.80\textwidth]{Uopt_D2.eps} \caption{\label{fig:1}The optical potential $U_{\rm opt}$ as a function of the kinetic energy $E_{\rm kin}=E-m_{\rm nuc}$ of a nucleon in symmetric nuclear matter at saturation density in RMF models with parametrizations D1 and D2 compared to two fits from Dirac phenomenology. See text for details.} \end{figure} The nonlinear derivative couplings are introduced in the RMF model in order to improve the energy dependence of the optical potential $U_{\rm opt}$. The elastic proton scattering on nuclei of different mass number $A$ can be well described in Dirac phenomenology with scalar ($S$) and vector ($V$) potentials, which smoothly vary with $A$ and the energy of the projectile \cite{Hama:1990vr,Cooper:1993nx}. From these global fits the optical potential in symmetric nuclear matter at saturation density is obtained as a function of the kinetic energy $E_{\rm kin} = E - m_{\rm nuc}$ in the limit $A\to \infty$. There are different definitions of the nonrelativistic optical potential when it is derived from relativistic scalar and vector self-energies. 
Here we use the form \begin{equation} \label{eq:U_opt} U_{\rm opt}(E) = \frac{E}{m_{\rm nuc}}V -S + \frac{S^{2} - V^{2}}{2m_{\rm nuc}} \end{equation} with $S=\Sigma_{p}$ and $V = \Sigma_{p}^{0}$ as in references \cite{Typel:2002ck,Typel:2005ba,Gaitanos:2009nt,Gaitanos:2011yb,Gaitanos:2012hg}. In conventional RMF models without derivative couplings, the scalar and vector potentials are constant in energy and the optical potential (\ref{eq:U_opt}) is just a linear function of energy. This is clearly seen in figures \ref{fig:1} and \ref{fig:2} as a full black line for the calculation with the parametrization D1. In contrast, the optical potentials derived from the scalar and vector potentials in Dirac phenomenology from two different fits \cite{Hama:1990vr} are much smaller at high energies and exhibit a saturation for $E_{\rm kin}$ approaching $1$~GeV. At low energies, the optical potentials from experiment behave more similarly to that of the theoretical model concerning the absolute strength and the energy dependence. In figure \ref{fig:1} (\ref{fig:2}) the result for the DD-NLD parametrization D2 (D3) is depicted for two values of the cut-off parameter $\Lambda$. Here, a reasonable description of the experimental optical potential is achieved due to the energy dependence of the nucleon self-energies. The difference between parametrizations D2 and D3 is not very significant. The dependence of $U_{\rm opt}$ on $\Lambda$ is stronger for D2. The deflection of the DD-NLD curve from that of the standard RMF model D1 appears at lower kinetic energies for the D3 parametrization as compared to the D2 case. In the DD-NLD model, the optical potential can be calculated easily for other baryon densities and arbitrary neutron-proton asymmetries. Because there are no experimental data available for these general cases, we refrain from presenting the results here. But see reference \cite{Typel:2002ck} for the systematics with density in the linear derivative coupling model. 
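The contrast between the constant self-energies of D1 and the suppressed ones of D2 can be sketched numerically; the magnitudes $S_{0}=350$~MeV and $V_{0}=280$~MeV below are illustrative assumptions of typical RMF scalar and vector potentials at saturation, not the fitted values of the parametrizations.

```python
M_NUC = 939.0  # nucleon mass in MeV

def u_opt(e_total, s, v, m=M_NUC):
    """Schroedinger-equivalent optical potential of equation (eq:U_opt)."""
    return e_total / m * v - s + (s * s - v * v) / (2.0 * m)

def potentials(e_kin, s0=350.0, v0=280.0, lam=None):
    """S(E) = S0 D(E), V(E) = V0 D(E); lam=None gives the constant D1 case,
    otherwise the Lorentzian D2 form with x = (E - m)/Lambda = E_kin/Lambda."""
    d = 1.0 if lam is None else 1.0 / (1.0 + (e_kin / lam) ** 2)
    return s0 * d, v0 * d
```

With these inputs $U_{\rm opt}$ is attractive (about $-47$~MeV) at $E_{\rm kin}=0$ and grows linearly beyond $200$~MeV at $E_{\rm kin}=1$~GeV in the D1 case, while the Lorentzian suppression with $\Lambda=500$~MeV keeps it far smaller there, mimicking the saturation seen in the Dirac phenomenology fits.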
\begin{figure}[t] \centering \includegraphics[width=0.80\textwidth]{Uopt_D3.eps} \caption{\label{fig:2}The optical potential $U_{\rm opt}$ as a function of the kinetic energy $E_{\rm kin}$ of a nucleon in symmetric nuclear matter at saturation density in RMF models with parametrizations D1 and D3 compared to two fits from Dirac phenomenology. See text for details.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{SM.eps} \caption{\label{fig:3}Energy per nucleon $E/A$ as a function of the baryon density $n_{B}$ for symmetric nuclear matter (a) and pure neutron matter (b). Results are given for three different choices of the $D$ function and different values of the cut-off parameter $\Lambda$.} \end{figure} The reduction of the optical potential at high kinetic energies, which originates from the energy dependence of the self-energies, is also reflected in the equation of state. In figure \ref{fig:3} the energy per nucleon $E/A$ (without the rest mass contribution) is depicted as a function of the baryon density $n_{B}$ in symmetric nuclear matter (left panel) and neutron matter (right panel). In both cases, a substantial softening of the EoS is found as compared to the standard RMF calculation with parametrization D1. The effect is stronger for an exponential energy dependence of the self-energies (D3) than for the case of a Lorentzian dependence (D2). By construction, all EoS are identical at the saturation density $n_{\rm sat}$. \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{MR.eps} \caption{\label{fig:4}Mass-radius relation of neutron stars for different choices of the $D$ functions and cut-off parameters $\Lambda$ in the DD-NLD model. The two shaded bands refer to astrophysical mass measurements of the pulsars PSR~$J1614-2230$ \cite{Demorest:2010bx} and PSR~$J0348+0432$ \cite{Antoniadis:2013pzd}.} \label{MR} \end{figure} The DD-NLD model can be used to predict the properties of neutron stars. 
Here, the EoS of stellar matter is required. It is obtained by adding the contribution of electrons to the energy density and pressure of the baryons. The conditions of charge neutrality and $\beta$ equilibrium fix the lepton density and proton-neutron asymmetry uniquely. Since the present model calculations treat only homogeneous matter, a suitable EoS for the crust of neutron stars has to be added at low densities. We use the standard Baym-Pethick-Sutherland (BPS) crust EoS \cite{Baym:1971pw}. The mass-radius relation of neutron stars is finally found by solving the Tolman-Oppenheimer-Volkoff equations \cite{Oppenheimer:1939ne,Tolman:1939jz}. It is shown for the five models of this work in figure \ref{fig:4} together with the masses of the two most massive pulsars observed so far. The model without an energy dependence (black full line) can explain without any difficulties the large neutron star masses from astrophysical observations \cite{Demorest:2010bx,Antoniadis:2013pzd} because the EoS is rather stiff at high densities. In contrast, for the EoS of neutron star matter in the DD-NLD models that are consistent with the optical potential constraint, a serious reduction of the maximum neutron star mass is seen. These models even have problems reaching the typical masses of about $1.4~M_{\rm sol}$ of ordinary neutron stars. Deviations from the predictions of the standard model D1 start to appear already at masses below $0.7~M_{\rm sol}$. There is also an effect on the neutron star radius, which is found to be smaller in the D2 and in the D3 model. Explicit values for the maximum mass as well as the radius and central density under these extreme conditions are given in table \ref{tab:2}. For the DD-NLD models D2 and, in particular, D3 the central densities in a star of maximum mass are considerably higher than those of the standard RMF model without an energy dependence of the couplings. 
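The structure of the calculation behind figure \ref{fig:4} can be sketched in a few lines. The toy version below integrates the Tolman-Oppenheimer-Volkoff equations outward with a fixed-step RK4 scheme, with a simple $\Gamma = 2$ polytrope standing in for the actual DD-NLD plus BPS-crust equation of state; all constants are illustrative, in geometrized units $G = c = M_{\rm sol} = 1$.

```python
# Toy TOV integration in geometrized units (G = c = M_sol = 1).
# EoS: eps(P) = (P / K)**(1/Gamma), a simple polytrope standing in for
# the tabulated DD-NLD + BPS-crust EoS; K, Gamma, p_central illustrative.
import math

K, GAMMA = 100.0, 2.0

def eps_of_p(p):
    return (p / K) ** (1.0 / GAMMA) if p > 0.0 else 0.0

def tov_rhs(r, p, m):
    """dP/dr and dm/dr of the Tolman-Oppenheimer-Volkoff equations."""
    eps = eps_of_p(p)
    dpdr = -(eps + p) * (m + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * math.pi * r**2 * eps
    return dpdr, dmdr

def integrate_star(p_central, dr=0.01, r_max=50.0):
    """RK4 integration outward until the pressure vanishes;
    returns (radius, gravitational mass) in geometrized units."""
    r, p, m = dr, p_central, 0.0  # start one step off the r = 0 singularity
    while p > 1e-12 * p_central and r < r_max:
        k1p, k1m = tov_rhs(r, p, m)
        k2p, k2m = tov_rhs(r + dr / 2, p + dr / 2 * k1p, m + dr / 2 * k1m)
        k3p, k3m = tov_rhs(r + dr / 2, p + dr / 2 * k2p, m + dr / 2 * k2m)
        k4p, k4m = tov_rhs(r + dr, p + dr * k3p, m + dr * k3m)
        p += dr / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        m += dr / 6 * (k1m + 2 * k2m + 2 * k3m + k4m)
        r += dr
    return r, m

R, M = integrate_star(p_central=1.0e-3)
```

Repeating this for a sequence of central pressures traces out a mass-radius curve as in figure \ref{fig:4}; the maximum of $M$ along that curve is the quantity listed in table \ref{tab:2}.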
It is worthwhile to compare our results for the mass-radius relation of neutron stars with those of previous versions of the NLD model. In the approach of reference \cite{Gaitanos:2012hg} it was possible to reach a maximum neutron star mass of about $2~M_{\rm sol}$ with the parameters chosen there. In contrast, the results of reference \cite{Chen:2012rn} using different functional forms of the couplings indicate a reduction of the maximum mass below the observed values, in line with our calculations. All three versions of the NLD model are adjusted to similar values of the nuclear matter parameters at saturation, such as saturation density, binding energy, compressibility or symmetry energy consistent with experimental constraints. Nevertheless, the predictions for matter properties at supra-saturation densities are rather different due to the various choices in the models to represent the effective in-medium interaction. In our paper, an explicit density dependence of the couplings is considered, whereas in \cite{Gaitanos:2012hg} nonlinear self-couplings of the meson fields were assumed. In the approach of \cite{Chen:2012rn} to nuclear matter neither meson self-couplings nor a density dependence of the couplings were used. A reasonable description of nuclear matter near saturation does not determine the high-density behaviour of the EoS uniquely, in particular due to the additional freedom with the explicit momentum/energy dependence in the NLD approach as compared to standard RMF models. Of course, the maximum mass constraint could be used directly in the determination of the model parameters. But also additional constraints at high densities, e.g.\ from heavy-ion collisions, might help to reduce the uncertainties in the extrapolation from low to high densities in the future. 
\begin{table}[t] \caption{\label{tab:2} Maximum mass, corresponding radius and central density of neutron stars in the DD-NLD models with different parametrizations.} \vspace{2mm} \centering \begin{tabular}{lccc} \toprule model & $M_{\rm max}$ [$M_{\rm sol}$] & $R(M_{\rm max})$ [km] & $n_{\rm central}(M_{\rm max})$ [fm${}^{-3}$] \\ \midrule D1 & 2.37 & 11.57 & 0.88 \\ D2, $\Lambda = 500$~MeV & 1.48 & 11.75 & 0.95 \\ D2, $\Lambda = 400$~MeV & 1.36 & 11.63 & 0.97 \\ D3, $\Lambda = 700$~MeV & 1.26 & 10.13 & 1.41 \\ D3, $\Lambda = 600$~MeV & 1.19 & 9.99 & 1.46 \\ \bottomrule \end{tabular} \end{table} \section{Conclusions} \label{sec:con} There are several aspects that have to be taken into account in constraining models of dense matter for the application to neutron stars. In phenomenological models, the characteristic saturation properties of nuclear matter and the density dependence of the effective interaction are usually addressed. However, less attention is paid to the energy or momentum dependence of the interaction. Introducing non-linear derivative couplings into RMF models, it is possible to generate an energy dependence of the nucleon self-energies such that the optical potential in nuclear matter, which is extracted in Dirac phenomenology from elastic proton-nucleus scattering experiments, can be well described up to energies of $1$~GeV. When density-dependent nucleon-meson couplings are considered at the same time, a very flexible model is obtained. Its parameters can be fitted to the usual nuclear matter constraints even for different functional forms of the energy dependent couplings. In the current version of the DD-NLD model, the energy dependence of the self-energies causes a softening of the EoS at high densities, both for symmetric nuclear matter and pure neutron matter. This effect is independent of the appearance of additional degrees of freedom, such as hyperons or deltas. 
As a result, it becomes more difficult to obtain very massive neutron stars consistent with the observational constraints. In the present work, the density dependence of the $\omega$ meson coupling was kept fixed in the parametrizations and only a few choices for the energy dependence were tested. Only constraints near the nuclear saturation density were used to determine the model parameters. There remains substantial freedom in the model to be explored in order to find a suitable parametrization that is consistent with the optical potential and maximum neutron star mass constraints. Nevertheless, the results of our study indicate that the optical potential constraint has to be taken seriously in the development of realistic phenomenological models for dense matter. RMF models with self-energies that explicitly depend on the nucleon momentum or energy can be applied to the simulation of heavy-ion collisions using relativistic transport approaches. Here the equation of state can be tested at supra-saturation densities. It is well known that an energy/momentum dependence of the effective in-medium interaction is mandatory for a proper description and analysis of experimental data. Such an approach can help to constrain the parameters of the present model at densities that are not accessible in the description of finite nuclei. In our work, an explicit energy dependence of the nucleon-meson couplings was favored. It allows one to apply the DD-NLD approach to the description of nuclei without major difficulties. Such an additional investigation of the model will permit better control of the parameters. In particular, the interplay between the choice of the functional form of the energy dependence, the cutoff parameters and the density dependence of the couplings can be studied, together with its impact on the prediction of maximum neutron star masses. 
With a larger number of observables as constraints it can be expected that firmer conclusions can be drawn on the compatibility of nuclear matter and finite nuclei descriptions in the DD-NLD model. Work in this direction is in progress. \section*{Acknowledgements} This work was supported by the Helmholtz Association (HGF) through the Nuclear Astrophysics Virtual Institute (NAVI, VH-VI-417). S.A. acknowledges support from the Helmholtz Graduate School for Hadron and Ion Research (HGS-HIRe for FAIR).
\chapter*{Introduction} \addcontentsline{toc}{chapter}{Introduction} \newtheorem{thmintro}{Theorem} \newtheorem{corintro}[thmintro]{Corollary} Let $\mc R = (X,R_0,Y,R_0^\vee, F_0)$ be a based root datum with finite Weyl group $W_0$ and (extended) affine Weyl group $W^e = W_0 \ltimes X$. For every parameter function $q : W^e \to \C^\times$ there is an affine Hecke algebra $\mc H = \mc H (\mc R,q)$. The most important and most studied case is when $q$ takes the same value on all simple (affine) reflections in $W^e$. If this value is a prime power, then $\mc H$ is isomorphic to the convolution algebra of Iwahori-biinvariant functions on a reductive $p$-adic group with root datum $\mc R^\vee = (Y,R_0^\vee,X,R_0,F_0^\vee)$ \cite{IwMa}. Moreover the generic Hecke algebra obtained by replacing $q$ with a formal variable $\mathbf q$ is known \cite{KaLu1,ChGi} to be isomorphic to the $G_\C \times \C^\times$-equivariant $K$-theory of the Steinberg variety of $G_\C$, the complex Lie group with root datum $\mc R^\vee$. Via this geometric interpretation Kazhdan and Lusztig \cite{KaLu} classified and constructed all irreducible representations of $\mc H (\mc R,q)$ (when $q \in \C^\times$ is not a root of unity), in accordance with the local Langlands program. On the other hand, the definition of affine Hecke algebras with generators and relations allows one to choose the values of $q$ on non-conjugate simple reflections independently. Although this might appear to be an innocent generalization, much less is known about affine Hecke algebras with unequal parameters. The reason is that Lusztig's constructions in equivariant $K$-theory \cite{KaLu1} allow only one deformation parameter. Kato \cite{Kat2} invented a more complicated variety, called an exotic nilpotent cone, which plays a similar role for the three parameter affine Hecke algebra of type $C_n^{(1)}$. From this one can extract a classification of the tempered dual, for an arbitrary parameter function $q$ \cite{CiKa}. 
Like equal parameter affine Hecke algebras, those with unequal parameters also arise as intertwining algebras in the smooth representation theory of reductive $p$-adic groups. One can encounter them if one looks at non-supercuspidal Bernstein components (in the smooth dual) \cite{Lus-Uni,Mor}. Even for split groups unequal parameters occur, albeit not in the principal series. It is expected that every Bernstein component of a $p$-adic group can be described with an affine Hecke algebra or a slight variation on it. However, it has to be admitted that the support of this belief is not overwhelming. In \cite{BaMo1,BaMo2,BaCi} it is shown, in increasing generality, that under certain conditions the equivalence between the module category of an affine Hecke algebra and a Bernstein block (in the category of smooth modules) respects unitarity. Thus affine Hecke algebras are an important tool for the classification of both the smooth dual and the unitary smooth dual of a reductive $p$-adic group. The degenerate versions of affine Hecke algebras are usually called graded Hecke algebras. Their role in the representation theory of reductive $p$-adic groups \cite{Lus-Uni,BaMo1,Ciu} is related to affine Hecke algebras in the way that Lie algebras stand to Lie groups. They have a very simple presentation, which makes it possible to let them act on many function spaces. Therefore one encounters graded Hecke algebras (with possibly unequal parameters) also independently from affine Hecke algebras, for instance in certain systems of differential equations \cite{HeOp} and in the classification of the unitary dual of real reductive groups \cite{CiTr}. In view of the above connections, it is of considerable interest to classify the dual of an affine or graded Hecke algebra with unequal parameters. Since $\mc H (\mc R,q)$ is a deformation of the group algebra $\C [W^e]$, it is natural to expect strong similarities between the duals Irr$(\mc H (\mc R,q))$ and Irr$(W^e)$. 
Indeed, for equal parameters the Deligne--Langlands--Kazhdan--Lusztig parametrization provides a bijection between these duals \cite{KaLu,Lus-Rep}. For unequal parameters we approach the issue via harmonic analysis on affine Hecke algebras, which forces us to consider only parameter functions $q$ with values in $\R_{>0}$. We will assign to every irreducible $\mc H$-representation $\pi$ in a natural way a representation $\Spr (\pi)$ of the extended affine Weyl group $W^e$. Although this construction does not always preserve irreducibility, it has a lot of nice properties, the most important of which is: \begin{thmintro} \label{thm:0.1} \textup{(see Theorem \ref{thm:2.7})} \\ The collection of representations $\{ \Spr (\pi) : \pi \in \mr{Irr}(\mc H (\mc R,q)) \}$ forms a $\Q$-basis of the representation ring of $W^e$. \end{thmintro} Since the Springer correspondence for finite Weyl groups realizes Irr$(W_0)$, via Kazhdan--Lusztig theory, as a specific subset of Irr$(\mc H (\mc R,q))$, we refer to Theorem \ref{thm:0.1} as an affine Springer correspondence. It is possible to refine Theorem \ref{thm:0.1} to a continuous (with respect to the Jacobson topology) bijection Irr$(\mc H (\mc R,q)) \to \text{Irr}(W^e)$. This is related to a conjecture of Aubert, Baum and Plymen \cite{ABP1,ABP2} (ABP-conjecture for short) which we sketch here. Recall that $W^e = W_0 \ltimes X$ and let $T$ be the complex torus $\mr{Hom}_\Z (X,\C^\times)$. Clifford theory says that the irreducible $W^e$-representations with an $X$-weight $t \in T$ are in natural bijection with the irreducible representations of the isotropy group $W_{0,t}$. The extended quotient of $T$ by $W_0$ is defined as \[ \widetilde{T} / W_0 = \bigsqcup\nolimits_{w \in W_0} \{w\} \times T^w / W_0 , \] with respect to the $W_0$-action $w \cdot (w',t) = (w w' w^{-1},w t)$. 
It is a model for Irr$(W^e)$, in the sense that there exist continuous bijections $\widetilde{T} / W_0 \to \text{Irr}(W^e)$, which respect the projections to $T / W_0$. The Bernstein presentation (included as Theorem \ref{thm:1.1}) shows that $\C [X] \cong \mc O (T)$ is naturally isomorphic to a commutative subalgebra $\mc A \subset \mc H (\mc R,q)$, and that the center of $\mc H (\mc R,q)$ is $\mc A^{W_0} \cong \mc O (T/W_0)$. Hence we have a natural map Irr$(\mc H (\mc R,q)) \to T / W_0$, sending a representation to its central character. A simplified version of the ABP-conjecture for affine Hecke algebras reads: \begin{thmintro} \label{thm:0.2} \textup{(see Theorem \ref{thm:5.9})} \\ Let $\mc H (\mc R,q)$ be an affine Hecke algebra with positive, possibly unequal, parameters. Let $\mc Q (\mc R)$ be the variety of parameter functions $W^e \to \C^\times$. There exist a continuous bijection $\mu : \widetilde{T} / W_0 \to \mr{Irr}(\mc H (\mc R,q))$ and a map $h : \widetilde{T} / W_0 \times \mc Q (\mc R) \to T$ such that: \begin{itemize} \item $h$ is locally constant in the first argument; \item for fixed $c \in \widetilde{T} / W_0, \; h(c,v)$ is a monomial in the variables $v(s)^{\pm 1}$, where $s$ runs through all simple affine reflections; \item the central character of $\mu (W_0 (w,t))$ is $W_0 \, h(W_0 (w,t), q^{1/2}) t$. \end{itemize} \end{thmintro} The author hopes that Theorem \ref{thm:0.2} will be useful in the local Langlands program. This could be the case for a Bernstein component $\mf s$ of a reductive $p$-adic group $G$ for which $\mr{Mod}_{\mf s} (G)$ is equivalent to Mod$(\mc H (\mc R,q))$ (see Section \ref{sec:padic} for the notations). Recall that a Langlands parameter for $G$ is a certain kind of group homomorphism $\phi : \mc W_{\mathbb F} \ltimes \C \to \prefix{^L}{}{G}$, where $\prefix{^L}{}{G}$ is the Langlands dual group and $\mc W_{\mathbb F}$ is the Weil group of the local field $\mathbb F$ over which $G$ is a variety. 
Let us try to extract a Langlands parameter from $W_0 (w,t) \in \widetilde{T} / W_0$. The image of the distinguished Frobenius element of $\mc W_{\mathbb F}$ describes the central character, so modulo conjugacy it should be $h(W_0 (w,t), q^{1/2}) t \in T \subset \prefix{^L}{}{G}$. Problematic is that the map $\mu$ from Theorem \ref{thm:0.2} is not canonical: its construction involves some arbitrary choices, which could lead to a different $w \in W_0$. Yet it is precisely this element $w$ that should determine the unipotent conjugacy class that contains the image of $\C \setminus \{0\}$ under $\phi$. Recently Lusztig \cite{Lus-conj} defined a map from conjugacy classes in a Weyl group to unipotent classes in the associated complex Lie group. Whether this map yields a suitable unipotent class for our hypothetical $\phi$ remains to be seen; for this it is probably necessary to find a more canonical construction of $\mu$. The rest of such a Langlands parameter $\phi$ is completely beyond affine Hecke algebras; it will have to depend on number theoretic properties of the Bernstein component $\mf s$. Now we describe the most relevant results needed for Theorems \ref{thm:0.1} and \ref{thm:0.2}. A large step towards the determination of Irr$(\mc H (\mc R,q))$ is the Langlands classification (see Theorem \ref{thm:2.4}), which reduces the problem to the tempered duals of parabolic subalgebras of $\mc H (\mc R,q)$. Although this is of course a well-known result, the proof for affine Hecke algebras has not been published before. In line with Harish-Chandra's results for reductive groups, every tempered irreducible $\mc H (\mc R,q)$-representation can be obtained via induction from a discrete series representation of a parabolic subalgebra, a point of view advocated by Opdam \cite{Opd-Sp}. We note that in this setting tempered and discrete series representations can be defined very easily via the $\mc A$-weights of a representation. 
For affine Hecke algebras with irreducible root data, Opdam and the author classified the discrete series in \cite{OpSo2}, but we emphasize that we do not use that classification in the present paper. Parabolic subalgebras are given by a set of simple roots $P \subset F_0$, and induction from $\mc H_P$ allows an induction parameter in $T^P$, the subtorus of $T$ orthogonal to $P^\vee$. So we consider induction data $\xi = (P,\delta,t)$ where $P \subset F_0, t \in T^P$ and $\delta$ is a discrete series representation of $\mc H_P$. Such a triple gives rise to a parabolically induced representation \[ \pi (\xi) = \pi (P,\delta ,t) = \mr{Ind}_{\mc H^P}^{\mc H} (\delta \circ \phi_t) . \] Here $\mc H^P \subset \mc H$ is more or less a central extension of $\mc H_P$ and $\phi_t : \mc H^P \to \mc H_P$ is a twisted projection. The representation $\pi (\xi)$ is tempered if and only if $t \in T^P_{un}$, the unitary part of $T^P$. For the classification of the dual it remains to decompose all tempered parabolically induced representations and to determine when they have constituents in common. These phenomena are governed by intertwining operators between such representations \cite[Section 4]{Opd-Sp}. It is already nontrivial to show that these operators are well-defined on all $\pi (\xi)$ with $t \in T^P_{un}$, and it is even more difficult to see that they span $\mr{Hom}_{\mc H}(\xi,\xi')$. For reductive groups this is known as Harish-Chandra's completeness theorem, and for affine Hecke algebras it is a deep result due to Delorme and Opdam \cite{DeOp1}. The intertwining operators can be collected in a groupoid $\mc G$, which acts on the space $\Xi$ of induction data $(P,\delta,t)$. 
With these tools one can obtain a partial classification of the dual of $\mc H (\mc R,q)$: \begin{thmintro} \label{thm:0.3} \textup{(see Theorem \ref{thm:3.10})} \\ There exists a natural map $\mr{Irr}(\mc H (\mc R ,q)) \to \Xi / \mc G ,\; \rho \mapsto \mc G \xi^+ (\rho)$, such that: \begin{itemize} \item the map is surjective and has finite fibers; \item $\rho$ is a subquotient of $\pi (\xi^+ (\rho))$; \item $\rho$ does not occur in $\pi (\xi)$ when $\xi$ is ``larger'' than $\xi^+ (\rho)$. \end{itemize} \end{thmintro} We note that this part of the paper is rather similar to \cite{SolGHA} for graded Hecke algebras. The results here are somewhat stronger, and most of them cannot be derived quickly from their counterparts in \cite{SolGHA}. Yet all this does not suffice for Theorem \ref{thm:0.1}, because we do not have much control over the number of irreducible constituents of parabolically induced representations. Ultimately the proof of Theorem \ref{thm:0.1} is reduced to irreducible tempered representations with central character in $\mr{Hom}_\Z (X ,\R_{>0})$. The author dealt with this case in \cite{SolHomGHA}, via the \pch \ of graded Hecke algebras. What we discussed so far corresponds more or less to chapters 1--3 of the article. From Theorem \ref{thm:0.1} to Theorem \ref{thm:0.2} is not a long journey, but we put Theorem \ref{thm:0.2} at the end of the paper because we prove it together with other parts of the ABP-conjecture. Chapters 4 and 5 are of a more analytic nature. The main object of study is the Schwartz algebra $\mc S (\mc R,q)$ of $\mc H (\mc R,q)$ \cite{Opd-Sp,DeOp1}, the analogue of the Harish-Chandra--Schwartz algebra of a reductive $p$-adic group. By construction a $\mc H (\mc R,q)$-representation extends continuously to $\mc S (\mc R,q)$ if and only if it is tempered. 
The Schwartz algebra is the ideal tool for the harmonic analysis of affine Hecke algebras, among others because it admits a very nice Plancherel theorem (due to Delorme and Opdam, see \cite{DeOp1} or Theorem \ref{thm:3.8}), because the discrete series of $\mc H (\mc R,q)$ is really discrete in the dual of $\mc S (\mc R,q)$, and because the inclusion $\mc H (\mc R,q) \to \mc S (\mc R,q)$ preserves Ext-groups of tempered representations \cite{OpSo1}. If we vary the parameter function $q$, we obtain families of algebras $\mc H (\mc R,q)$ and $\mc S (\mc R,q)$. It is natural to try to connect the representation theory of $\mc H (\mc R,q)$ with that of $\mc H (\mc R,q')$ for $q'$ close to $q$. For general parameter deformations this is too difficult at present, but we can achieve it for deformations of the form $q \mapsto q^\ep$ with $\ep \in \R$. On central characters of representations, this ``scaling'' of $q$ fits well with the map \[ \sigma_\ep : T \to T ,\; t \mapsto t \, |t|^{\ep -1} . \] Notice that $\sigma_\ep (t) = t$ for $t \in T_{un} = \mr{Hom}_\Z (X,S^1)$. Let $\mr{Mod}_{f,W_0 t} (\mc H (\mc R,q))$ be the category of finite dimensional $\mc H (\mc R,q)$-representations with central character $W_0 t \in T / W_0$. \begin{thmintro}\label{thm:0.4} \textup{(see Corollary \ref{cor:4.4})} \\ There exists a family of additive functors \[ \tilde \sigma_{\ep,t} : \mr{Mod}_{f,W_0 t} (\mc H (\mc R,q)) \to \mr{Mod}_{f,W_0 \sigma_\ep (t)} (\mc H (\mc R,q^\ep)) \qquad \ep \in [-1,1] , \] such that: \begin{itemize} \item for all $(\pi,V) \in \mr{Mod}_{f,W_0 t} (\mc H (\mc R,q))$ and all $w \in W^e$, the map $\ep \mapsto \tilde \sigma_{\ep,t} (\pi) (N_w) \in \mr{End}_\C (V)$ is analytic; \item for $\ep \neq 0 ,\; \tilde \sigma_{\ep,t}$ is an equivalence of categories; \item $\tilde \sigma_{\ep,t}$ preserves unitarity. 
\end{itemize} \end{thmintro} For $\ep > 0$ this can already be found in \cite{Opd-Sp}; the most remarkable part is precisely that it extends continuously to $\ep = 0$, that is, to the algebra $\mc H (\mc R,q^0) = \C [W^e]$. In general the functors $\tilde \sigma_{\ep,t}$ cannot be constructed if we work only inside the algebras $\mc H (\mc R,q^\ep)$; they are obtained via localizations of these algebras at certain sets of central characters. We can do much better if we replace the algebras by their Schwartz completions: \begin{thmintro}\label{thm:0.5} \textup{(see Theorem \ref{thm:4.8})} \\ For $\ep \in [0,1]$ there exist homomorphisms of Fr\'echet *-algebras $\Sc_\ep : \mc S (\mc R,q^\ep) \to \mc S (\mc R,q)$, such that: \begin{itemize} \item $\Sc_\ep$ is an isomorphism for all $\ep > 0$, and $\Sc_1$ is the identity; \item for all $w \in W^e$ the map $\ep \mapsto \Sc_\ep (N_w)$ is piecewise analytic; \item for every irreducible tempered $\mc H (\mc R,q)$-representation $\pi$ with central character $W_0 t$, the $\mc S (\mc R,q^\ep)$-representations $\tilde \sigma_{\ep,t}(\pi)$ and $\pi \circ \Sc_\ep$ are equivalent. \end{itemize} \end{thmintro} There are some similarities with the role played by Lusztig's asymptotic Hecke algebra \cite{Lus-C2} in \cite{KaLu}. In both settings an algebra is constructed, which contains $\mc H (\mc R,q)$ for a family of parameter functions $q$. The asymptotic Hecke algebra is of finite type over $\mc O (T / W_0)$, so it is only a little larger than $\mc H (\mc R,q)$. So far it has only been constructed for equal parameter functions $q$, but Lusztig \cite{Lus-Une} conjectures that it also exists for unequal parameter functions. On the other hand, the algebra $\mc S (\mc R,q)$ is of finite type over $C^\infty (T_{un})$, so it is much larger than $\mc H (\mc R,q)$. 
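The elementary properties of the scaling maps $\sigma_\ep : T \to T ,\, t \mapsto t\,|t|^{\ep-1}$ used above can be checked directly in the rank-one case $T = \C^\times$; the following sketch (coordinates chosen for illustration only) verifies that $\sigma_1$ is the identity, that $\sigma_0$ retracts $T$ onto $T_{un}$, and that unitary points are fixed for all $\ep$.

```python
# The scaling map sigma_eps(t) = t * |t|**(eps - 1) on the rank-one torus
# T = C^x.  sigma_1 is the identity, sigma_0 retracts T onto the unitary
# part T_un = S^1, and every unitary point is fixed for all eps.
import cmath

def sigma(eps, t):
    return t * abs(t) ** (eps - 1.0)

t = 2.0 * cmath.exp(0.7j)          # a non-unitary point of T
assert abs(sigma(1.0, t) - t) < 1e-12           # sigma_1 = id
assert abs(abs(sigma(0.0, t)) - 1.0) < 1e-12    # sigma_0 lands in T_un
u = cmath.exp(0.7j)                # a unitary point
assert abs(sigma(0.3, u) - u) < 1e-12           # T_un is fixed pointwise
```

A pleasant by-product of the formula is the semigroup property $\sigma_a \circ \sigma_b = \sigma_{ab}$, which makes the family interpolate coherently between $\ep = 1$ and $\ep = 0$.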
Although $\Sc_\ep$ is an isomorphism for $\ep \in (0,1]$, the algebras $\mc H (\mc R,q^\ep)$ are embedded in $\mc S (\mc R,q)$ in a nontrivial way, in most cases $\Sc_\ep (\mc H (\mc R,q^\ep))$ is not contained in $\mc H (\mc R,q)$. Of particular interest is the homomorphism \begin{equation}\label{eq:0.Sc0} \Sc_0 : \mc S (W^e) = \mc S (\mc R,q^0) \to \mc S (\mc R,q) . \end{equation} It cannot be an isomorphism, but it is injective and for all irreducible tempered $\mc H (\mc R,q)$-representations $\pi$ we have $\pi \circ \Sc_0 \cong \tilde \sigma_0 (\pi) \cong \Spr (\pi)$. Together with Theorem \ref{thm:0.1} this results in: \begin{corintro}\label{cor:0.6} \textup{(see Corollary \ref{cor:Sprsigma0})} \\ The functor $\mr{Mod} (\mc S (\mc R,q)) \to \mr{Mod} (\mc S (W^e)) : \pi \mapsto \pi \circ \Sc_0$ induces an isomorphism between the Grothendieck groups of finite dimensional representations, tensored with $\Q$. \end{corintro} So Theorem \ref{thm:0.1} does not stand alone, but forms the end of a continuous family of representations (of a family of algebras). Actually the author first discovered the algebra homomorphism $\Sc_0$ and only later realized that the corresponding map on representations can also be obtained in another way, thus gaining in naturality. Apart from representation theory, the aforementioned results have some interesting consequences in the noncommutative geometry of affine Hecke algebras. Let $C^* (\mc R,q)$ be the $C^*$-completion of $\mc H (\mc R,q)$. It contains $\mc S (\mc R,q)$ and $\Sc_\ep$ extends to a $C^*$-algebra homomorphism $\Sc_\ep : C^* (\mc R,q^\ep) \to C^* (\mc R,q)$, for which Theorem \ref{thm:0.5} remains valid. It follows quickly from this and Corollary \ref{cor:0.6} that $\Sc_0$ induces an isomorphism on topological $K$-theory, see Theorem \ref{thm:5.3}. 
More precisely, \begin{equation}\label{eq:0.KQ} K_* (\Sc_0) \otimes \mr{id}_\Q : K_* (C^* (W^e) \rtimes \Gamma) \otimes_\Z \Q \to K_* (C^* (\mc R,q) \rtimes \Gamma) \otimes_\Z \Q \end{equation} is an isomorphism, while for equal parameters the argument also goes through without $\otimes_\Z \Q$. This solves a conjecture that was posed first by Higson and Plymen \cite{Ply1,BCH}. Furthermore $C^* (\mc R,q)$ and $\mc S (\mc R,q)$ have the same topological $K$-theory, and via the Chern character the complexification of the latter is isomorphic to the \pch \ of $\mc S (\mc R,q)$. As already proved in \cite{SolPadic}, $\mc H (\mc R,q)$ and $\mc S (\mc R,q)$ have the same \pch , so we obtain a commutative diagram \[ \begin{array}{*{7}{c}} \!\! HP_* (\C [W^e]) & \!\!\! \to \!\!\! & HP_* (\mc S (W^e)) & \!\!\! \leftarrow \!\!\! & K_* (\mc S (W^e)) \otimes_\Z \C & \!\!\! \to \!\!\! & K_* (C^* (W^e)) \otimes_\Z \C \\ \downarrow & & \downarrow \scriptstyle{HP_* (\Sc_0)} & & \downarrow \scriptstyle{K_* (\Sc_0)} & & \downarrow \scriptstyle{K_* (\Sc_0)} \\ \!\! HP_* (\mc H (\mc R,q)) & \!\!\! \to \!\!\! & HP_* (\mc S (\mc R,q)) & \!\!\! \leftarrow \!\!\! & K_* (\mc S (\mc R,q)) \otimes_\Z \C & \!\!\! \to \!\!\! & K_* (C^* (\mc R ,q)) \otimes_\Z \C , \end{array} \] where all the arrows are natural isomorphisms (see Corollary \ref{cor:5.6}). Notice that the Schwartz algebra $\mc S (\mc R,q)$ forms a bridge between the purely algebraic $\mc H (\mc R,q)$ and the much more analytic $C^* (\mc R,q)$. For the sake of clarity, the introduction is written in less generality than the actual paper. Most notably, we can always extend our affine Hecke algebras by a group $\Gamma$ of automorphisms of the Dynkin diagram of $\mc R$. On the one hand this generality is forced upon us, in particular by Lusztig's first reduction theorem (see Theorem \ref{thm:2.1}), which necessarily involves diagram automorphisms. 
On the other hand, one advantage of having $\mc H (\mc R,q) \rtimes \Gamma$ instead of just $\mc H (\mc R,q)$ is that our proof of the Aubert--Baum--Plymen conjecture applies to considerably more Bernstein components of reductive $p$-adic groups. For most of the results of this paper, the extension from $\mc H (\mc R,q)$ to $\mc H (\mc R,q) \rtimes \Gamma$ is easy, mainly a matter of some extra notation. An exception is the Langlands classification, which hitherto was only known for commutative groups of diagram automorphisms \cite{BaJa1}. In our generalization (see Corollary \ref{cor:2.8}) we add a new ingredient to the Langlands data, and we show how to save the uniqueness part. A substantial part of this article is based on the author's PhD-thesis \cite{SolThesis}, which was written under the supervision of Opdam. We refrain from indicating all the things that stem from \cite{SolThesis}, among others because some of the proofs in \cite{SolThesis} were not worked out with the accuracy needed for research papers. Moreover, in the years after writing this thesis many additional insights were obtained, so that in the end actually no part of \cite{SolThesis} reached this article unscathed. The technical Chapter 4 comes closest. It should also be mentioned that the conjecture \eqref{eq:0.KQ} formed a central part of the author's PhD-research. At that time it was still too difficult for the author, mainly because Theorem \ref{thm:0.1} was not available yet. \\[2mm] \textbf{Acknowledgements.} The author learned a lot about affine Hecke algebras from Eric Opdam, first as a PhD student and later as a co-author. Without that support, this article would not have been possible. The author also thanks Roger Plymen for providing background information about several conjectures, and Anne-Marie Aubert for many detailed comments, in particular concerning Section \ref{sec:padic}. 
\chapter{Preliminaries} This chapter serves mainly to introduce some definitions and notations that we will use later on. The results that we recall can be found in several other sources, like \cite{Lus-Gr,Ree1,Opd-Sp}. By default, our affine Hecke algebras are endowed with unequal parameters and may be extended with a group of automorphisms of the underlying root datum. In the section dedicated to $p$-adic groups we recall what is known about the (conjectural) relation between Bernstein components and affine Hecke algebras. On the one hand this motivates the generality that we work in; on the other hand we will use it in Section \ref{sec:ABP} to translate a conjecture of Aubert, Baum and Plymen to the setting of affine Hecke algebras. \section{Root systems} Let $\mf a$ be a finite dimensional real vector space and let $\mf a^*$ be its dual. Let $Y \subset \mf a$ be a lattice and $X = \mr{Hom}_\Z (Y,\Z) \subset \mf a^*$ the dual lattice. Let \[ \mc R = (X, R_0, Y ,R_0^\vee ,F_0) . \] be a based root datum. Thus $R_0$ is a reduced root system in $X ,\, R^\vee_0 \subset Y$ is the dual root system, $F_0$ is a basis of $R_0$ and the set of positive roots is denoted $R_0^+$. Furthermore we are given a bijection $R_0 \to R_0^\vee ,\: \alpha \mapsto \alpha^\vee$ such that $\inp{\alpha}{\alpha^\vee} = 2$ and such that the corresponding reflections $s_\alpha : X \to X$ (resp. $s^\vee_\alpha : Y \to Y$) stabilize $R_0$ (resp. $R_0^\vee$). We do not assume that $R_0$ spans $\mf a^*$. The reflections $s_\alpha$ generate the Weyl group $W_0 = W (R_0)$ of $R_0$, and $S_0 := \{ s_\alpha : \alpha \in F_0 \}$ is the collection of simple reflections. We have the affine Weyl group $W^\af = \mh Z R_0 \rtimes W_0$ and the extended (affine) Weyl group $W^e = X \rtimes W_0$. Both can be considered as groups of affine transformations of $\mf a^*$. We denote the translation corresponding to $x \in X$ by $t_x$. 
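The conditions on the bijection $\alpha \mapsto \alpha^\vee$ can be made concrete in a small example. The sketch below checks, for the $A_1$ datum embedded in $X = Y = \Z^2$, the standard reflection formula $s_\alpha (x) = x - \inp{x}{\alpha^\vee}\, \alpha$; the coordinates are chosen for illustration and are not taken from the text.

```python
# A rank-1 based root datum check: a root alpha and coroot alpha_v with
# <alpha, alpha_v> = 2, and the reflection s_alpha(x) = x - <x, alpha_v> alpha.
# Data: the A1 datum inside Z^2 (the root e1 - e2 of SL2 embedded in GL2).
def pair(x, y):
    return sum(a * b for a, b in zip(x, y))

def reflection(alpha, alpha_v):
    def s(x):
        c = pair(x, alpha_v)
        return tuple(xi - c * ai for xi, ai in zip(x, alpha))
    return s

alpha, alpha_v = (1, -1), (1, -1)   # e1 - e2 in X = Y = Z^2
assert pair(alpha, alpha_v) == 2
s = reflection(alpha, alpha_v)
assert s(alpha) == (-1, 1)                # s_alpha(alpha) = -alpha
assert s(s((3, 5))) == (3, 5)             # s_alpha is an involution
assert pair(s((3, 5)), alpha_v) == -pair((3, 5), alpha_v)
```

In this example $s_\alpha$ simply swaps the two coordinates, so it visibly stabilizes $R_0 = \{\pm(e_1 - e_2)\}$, as required of a root datum.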
As is well known, $W^\af$ is a Coxeter group, and the basis of $R_0$ gives rise to a set $S^\af$ of simple (affine) reflections. More explicitly, let $F_M^\vee$ be the set of maximal elements of $R_0^\vee$, with respect to the dominance ordering coming from $F_0$. Then \[ S^\af = S_0 \cup \{ t_\alpha s_\alpha : \alpha^\vee \in F_M^\vee \} . \] We write \begin{align*} &X^+ := \{ x \in X : \inp{x}{\alpha^\vee} \geq 0 \; \forall \alpha \in F_0 \} , \\ &X^- := \{ x \in X : \inp{x}{\alpha^\vee} \leq 0 \; \forall \alpha \in F_0 \} = -X^+ . \end{align*} It is easily seen that the center of $W^e$ is the lattice \[ Z(W^e) = X^+ \cap X^- . \] We say that $\mc R$ is semisimple if $Z (W^e) = 0$ or equivalently if $R_0$ spans $\mf a^*$. Thus a root datum is semisimple if and only if the corresponding reductive algebraic group is so. The length function $\ell$ of the Coxeter system $(W^\af ,S^\af )$ extends naturally to $W^e$, such that \cite[(1.3)]{Opd-Tr} \begin{equation}\label{eq:ellwtx} \ell (w t_x) = \ell (w) + \sum\nolimits_{\alpha \in R_0^+} \inp{x}{\alpha^\vee} \qquad w \in W_0 , x \in X^+ . \end{equation} The elements of length zero form a subgroup $\Omega \subset W^e$, and $W^e = W^\af \rtimes \Omega$. With $\mc R$ we also associate some other root systems. There is the non-reduced root system \[ R_{nr} := R_0 \cup \{ 2 \alpha : \alpha^\vee \in 2 Y \} . \] Obviously we put $(2 \alpha )^\vee = \alpha^\vee / 2$. Let $R_1$ be the reduced root system of long roots in $R_{nr}$: \[ R_1 := \{ \alpha \in R_{nr} : \alpha^\vee \not\in 2 Y \} . \] We denote the collection of positive roots in $R_0$ by $R_0^+$, and similarly for other root systems. \section{Affine Hecke algebras} \label{sec:defAHA} There are three equivalent ways to introduce a complex parameter function for $\mc R$. \begin{enumerate} \item[(1)] A map $q : S^\af \to \mh C^\times$ such that $q(s) = q(s')$ if $s$ and $s'$ are conjugate in $W^e$. 
\item[(2)] A function $q : W^e \to \C^\times$ such that \begin{equation}\label{eq:2.18} \begin{array}{lll@{\quad}l} q (\omega ) & = & 1 & \text{if } \ell (\omega ) = 0 , \\ q (w v) & = & q (w) q(v) & \text{if } w,v \in W^e \quad \text{and} \quad \ell (wv) = \ell (w) + \ell (v) . \end{array} \end{equation} \item[(3)] A $W_0$-invariant map $q : R_{nr}^\vee \to \C^\times$. \end{enumerate} One goes from (2) to (1) by restriction, while the relation between (2) and (3) is given by \begin{equation}\label{eq:parameterEquivalence} \begin{array}{lll} q_{\alpha^\vee} = q(s_\alpha) = q (t_\alpha s_\alpha) & \text{if} & \alpha \in R_0 \cap R_1, \\ q_{\alpha^\vee} = q(t_\alpha s_\alpha) & \text{if} & \alpha \in R_0 \setminus R_1, \\ q_{\alpha^\vee / 2} = q(s_\alpha) q(t_\alpha s_\alpha)^{-1} & \text{if} & \alpha \in R_0 \setminus R_1. \end{array} \end{equation} We speak of equal parameters if $q(s) = q(s') \; \forall s,s' \in S^\af$ and of positive parameters if $q(s) \in \R_{>0} \; \forall s \in S^\af$. We fix a square root $q^{1/2} : S^\af \to \mh C^\times$. The affine Hecke algebra $\mc H = \mc H (\mc R ,q)$ is the unique associative complex algebra with basis $\{ N_w : w \in W^e \}$ and multiplication rules \begin{equation}\label{eq:multrules} \begin{array}{lll} N_w \, N_v = N_{w v} & \mr{if} & \ell (w v) = \ell (w) + \ell (v) \,, \\ \big( N_s - q(s)^{1/2} \big) \big( N_s + q(s)^{-1/2} \big) = 0 & \mr{if} & s \in S^\af . \end{array} \end{equation} In the literature one also finds this algebra defined in terms of the elements $q(s)^{1/2} N_s$, in which case the multiplication can be described without square roots. This explains why $q^{1/2}$ does not appear in the notation $\mc H (\mc R ,q)$. Notice that $N_w \mapsto N_{w^{-1}}$ extends to a $\C$-linear anti-automorphism of $\mc H$, so $\mc H$ is isomorphic to its opposite algebra. The span of the $N_w$ with $w \in W_0$ is a finite dimensional Iwahori--Hecke algebra, which we denote by $\mc H (W_0,q)$. 
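To make the multiplication rules \eqref{eq:multrules} more concrete: expanding the quadratic relation gives \[ N_s^2 = \big( q(s)^{1/2} - q(s)^{-1/2} \big) N_s + N_e \qquad s \in S^\af , \] so every $N_s$ is invertible, with $N_s^{-1} = N_s - q(s)^{1/2} + q(s)^{-1/2}$, and hence every $N_w$ is invertible. For $q \equiv 1$ the relation reduces to $N_s^2 = N_e$, and $\mc H (\mc R ,1)$ is just the group algebra $\C [W^e]$.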
Now we describe the Bernstein presentation of $\mc H$. For $x \in X^+$ we put $\theta_x := N_{t_x}$. The corresponding semigroup morphism $X^+ \to \mc H (\mc R ,q)^\times$ extends to a group homomorphism \[ X \to \mc H (\mc R ,q)^\times : x \mapsto \theta_x . \] \begin{thm}\label{thm:1.1} \textup{(Bernstein presentation)} \enuma{ \item The sets $\{ N_w \theta_x : w \in W_0 , x \in X \}$ and $\{ \theta_x N_w : w \in W_0 , x \in X \}$ are bases of $\mc H$. \item The subalgebra $\mc A := \mr{span} \{ \theta_x : x \in X \}$ is isomorphic to $\mh C [X]$. \item The center $Z(\mc H (\mc R ,q))$ of $\mc H (\mc R ,q)$ is $\mc A^{W_0}$, where we define the action of $W_0$ on $\mc A$ by $w (\theta_x ) = \theta_{wx}$. \item For $f \in \mc A$ and $\alpha \in F_0 \cap R_1$ \[ f N_{s_\alpha} - N_{s_\alpha} s_\alpha (f) = \big( q (s_\alpha )^{1/2} - q(s_\alpha )^{-1/2} \big) (f - s_\alpha (f)) (\theta_0 - \theta_{-\alpha} )^{-1} , \] while for $\alpha \in F_0 \setminus R_1$: \[ f N_{s_\alpha} - N_{s_\alpha} s_\alpha (f) = \big( q (s_\alpha )^{1/2} - q(s_\alpha )^{-1/2} + ( q_{\alpha^\vee}^{1/2} - q_{\alpha^\vee}^{-1/2}) \theta_{-\alpha} \big) {\ds \frac{f - s_\alpha (f)} {\theta_0 - \theta_{-2\alpha } }} . \] } \end{thm} \emph{Proof.} These results are due to Bernstein, see \cite[\S 3]{Lus-Gr}. $\qquad \Box$ \\[3mm] The following lemma was claimed in the proof of \cite[Lemma 3.1]{Opd-Tr}. \begin{lem}\label{lem:1.2} For $x \in X^+$ \begin{equation}\label{eq:spanthetax} \mr{span} \{ N_u \theta_x N_v : u,v \in W_0 \} = \mr{span} \{ N_w : w \in W_0 t_x W_0 \} . \end{equation} Let $W_x$ be the stabilizer of $x$ in $W_0$ and let $W^x$ be a set of representatives for $W_0 / W_x$. Then the elements $N_u \theta_x N_v$ with $u \in W^x$ and $v \in W_0$ form a basis of \eqref{eq:spanthetax}. \end{lem} \emph{Proof.} By \eqref{eq:ellwtx} $\ell (u t_x) = \ell (u) + \ell (t_x)$, so $N_u \theta_x = N_{u t_x}$. 
Recall the Bruhat ordering on a Coxeter group, for example from \cite[Sections 5.9 and 5.10]{Hum}. With induction to $\ell (v)$ it follows from the multiplication rules \eqref{eq:multrules} that \[ N_w N_v - N_{wv} \in \text{span} \{ N_{w \tilde{v}} : \tilde{v} < v \text{ in the Bruhat ordering} \} . \] Hence the sets $\{ N_u \theta_x N_v : v \in W_0 \}$ and $\{ N_w : w \in u t_x W_0 \}$ have the same span. They have the same cardinality and by definition the latter set is linearly independent, so the former is linearly independent as well. Clearly $W_0 t_x W_0 = \sqcup_{u \in W^x} u t_x W_0$, so \begin{multline*} \text{span} \{ N_w : w \in W_0 t_x W_0 \} = \bigoplus\nolimits_{u \in W^x} \text{span} \{ N_w : w \in u t_x W_0 \} \\ = \bigoplus\nolimits_{u \in W^x} \text{span} \{ N_u \theta_x N_v : v \in W_0 \} = \text{span} \{ N_u \theta_x N_v : u \in W^x , v \in W_0 \} . \end{multline*} The number of spanning elements on the second line equals the dimension of the span on the first line, so these elements form a basis. $\qquad \Box$ \\[2mm] Let $T$ be the complex algebraic torus \[ T = \mr{Hom}_{\mh Z} (X, \mh C^\times ) \cong Y \otimes_\Z \C^\times , \] so $\mc A \cong \mc O (T)$ and $Z (\mc H ) = \mc A^{W_0} \cong \mc O (T / W_0 )$. From Theorem \ref{thm:1.1} we see that $\mc H$ is of finite rank over its center. Let $\mf t = \mr{Lie}(T)$ and $\mf t^*$ be the complexifications of $\mf a$ and $\mf a^*$. The direct sum $\mf t = \mf a \oplus i \mf a$ corresponds to the polar decomposition \[ T = T_{rs} \times T_{un} = \mr{Hom}_\Z (X, \R_{>0}) \times \mr{Hom}_\Z (X, S^1) \] of $T$ into a real split (or positive) part and a unitary part. The exponential map $\exp : \mf t \to T$ is bijective on the real parts, and we denote its inverse by $\log : T_{rs} \to \mf a$. 
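For example, if $X \cong \Z^n$ then $T \cong (\C^\times )^n$, and the polar decomposition is coordinatewise: $t = (t_1, \ldots, t_n)$ with $t_j = |t_j| \cdot t_j |t_j|^{-1} \in \R_{>0} \cdot S^1$. Under this identification $\log : T_{rs} \to \mf a$ becomes the coordinatewise natural logarithm $(\R_{>0})^n \to \R^n$.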
An automorphism of the Dynkin diagram of the based root system $(R_0,F_0 )$ is a bijection $\gamma : F_0 \to F_0$ such that \begin{equation} \inp{\gamma (\alpha )}{\gamma (\beta )^\vee} = \inp{\alpha}{\beta^\vee} \qquad \forall \alpha ,\beta \in F_0 \,. \end{equation} Such a $\gamma$ naturally induces automorphisms of $R_0, R_0^\vee, W_0$ and $W^\af$. It is easy to classify all diagram automorphisms of $(R_0 ,F_0)$: they permute the irreducible components of $R_0$ of a given type, and the diagram automorphisms of a connected Dynkin diagram can be seen immediately. We will assume that the action of $\gamma$ on $W^{\af}$ has been extended in some way to $W^e$. For example, this is the case if $\gamma$ belongs to the Weyl group of some larger root system contained in $X$. We regard two diagram automorphisms as the same if and only if their actions on $W^e$ are equal. Let $\Gamma$ be a finite group of diagram automorphisms of $(R_0,F_0)$ and assume that $q_{\alpha^\vee} = q_{\gamma (\alpha^\vee)}$ for all $\gamma \in \Gamma, \alpha \in R_{nr}$. Then $\Gamma$ acts on $\mc H$ by algebra automorphisms $\psi_\gamma$ that satisfy \begin{equation}\label{eq:1.2} \begin{array}{lll@{\quad}l} \psi_\gamma (N_w) & = & N_{\gamma (w)} & w \in W^e , \\ \psi_\gamma (\theta_x) & = & \theta_{\gamma (x)} & x \in X . \end{array} \end{equation} Hence one can form the crossed product algebra $\Gamma \ltimes \mc H = \mc H \rtimes \Gamma$, whose natural basis is indexed by the group $(X \rtimes W_0) \rtimes \Gamma = X \rtimes (W_0 \rtimes \Gamma)$. It follows easily from \eqref{eq:1.2} and Theorem \ref{thm:1.1}.c that $Z(\mc H \rtimes \Gamma) = {\mc A}^{W_0 \rtimes \Gamma}$. We say that the central character of an (irreducible) $\mc H \rtimes \Gamma$-representation is positive if it lies in $T_{rs} / (W_0 \rtimes \Gamma)$. We always assume that we have a $\Gamma \ltimes W_0$-invariant inner product on $\mf a$. 
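A typical example (standard, and included only as an illustration): for $R_0$ of type $A_n$ with $n \geq 2$ and simple roots $F_0 = \{ \alpha_1 , \ldots , \alpha_n \}$ in the usual ordering, the Dynkin diagram has exactly one nontrivial automorphism, namely $\gamma (\alpha_i ) = \alpha_{n+1-i}$, so one may take $\Gamma = \{ \mr{id}, \gamma \} \cong \Z / 2 \Z$. The condition $q_{\alpha^\vee} = q_{\gamma (\alpha^\vee )}$ is automatic here, since all roots of $A_n$ lie in a single $W_0$-orbit.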
The length function of $W^\af$ also extends to $X \rtimes W_0 \rtimes \Gamma$, and the subgroup of elements of length zero becomes \[ \{ w \in W^e \rtimes \Gamma : \ell (w) = 0 \} = \Gamma \ltimes \{ w \in W^e : \ell (w) = 0 \} = \Gamma \ltimes \Omega . \] More generally one can consider a finite group $\Gamma'$ that acts on $\mc R$ by diagram automorphisms. Then the center of $\mc H \rtimes \Gamma'$ can be larger than $\mc A^{W_0 \rtimes \Gamma'}$, but apart from that the structure is the same. Another variation arises when $\Gamma \to \mr{Aut}(\mc H)$ is not a group homomorphism, but a homomorphism twisted by a 2-cocycle $\kappa : \Gamma \times \Gamma \to \C^\times$. Instead of $\mc H \rtimes \Gamma$ one can construct the algebra $\mc H \otimes \C [\Gamma,\kappa]$, whose multiplication is defined by \[ \begin{array}{lll} N_\gamma N_{\gamma'} & = & \kappa (\gamma,\gamma') N_{\gamma \gamma'} , \\ N_\gamma h N_\gamma^{-1} & = & \gamma (h), \end{array} \qquad \gamma, \gamma' \in \Gamma, h \in \mc H . \] By \cite[Section 7]{Mor} such algebras could appear in relevant examples, although no explicit nontrivial examples are known. Let $\Gamma^*$ be a representation group (Schur cover) of $\Gamma$ \cite{CuRe1}. It is a central extension of $\Gamma$ that classifies projective $\Gamma$-representations, and its group algebra $\C [\Gamma^*]$ contains $\C [\Gamma,\kappa]$ as a direct summand. The algebra $\mc H \rtimes \Gamma^*$ is well-defined and contains $\mc H \otimes \C [\Gamma,\kappa]$ as a direct summand. Thus we can reduce the study of affine Hecke algebras with twisted group actions to the case of honest group actions. \section{Graded Hecke algebras} Graded Hecke algebras are also known as degenerate (affine) Hecke algebras. They were introduced by Lusztig in \cite{Lus-Gr}. We call \begin{equation} \tilde{\mc R} = (\mf a^* ,R_0, \mf a, R_0^\vee, F_0 ) \end{equation} a degenerate root datum. 
We pick complex numbers $k_\alpha$ for $\alpha \in F_0$, such that $k_\alpha = k_\beta$ if $\alpha$ and $\beta$ are in the same $W_0$-orbit. The graded Hecke algebra associated to these data is the complex vector space \[ \mh H = \mh H (\tilde{\mc R},k) = S( \mf t^*) \otimes \C [W_0] , \] with multiplication defined by the following rules: \begin{itemize} \item $\mh C[W_0]$ and $S (\mf t^* )$ are canonically embedded as subalgebras; \item for $x \in \mf t^*$ and $s_\alpha \in S_0$ we have the cross relation \begin{equation}\label{eq:1.1} x \cdot s_\alpha - s_\alpha \cdot s_\alpha (x) = k_\alpha \inp{x}{\alpha^\vee} . \end{equation} \end{itemize} Multiplication by any $\ep \in \mh C^\times$ defines a bijection $m_\ep : \mf t^* \to \mf t^*$, which clearly extends to an algebra automorphism of $S(\mf t^* )$. From the cross relation \eqref{eq:1.1} we see that it extends even further, to an algebra isomorphism \begin{equation}\label{eq:1.3} m_\ep : \mh H (\tilde{\mc R},\ep k) \to \mh H (\tilde{\mc R}, k) \end{equation} which is the identity on $\mh C[W_0]$. Let $\Gamma$ be a group of diagram automorphisms of $\tilde{\mc R}$ and assume that $k_{\gamma (\alpha)} = k_\alpha$ for all $\alpha \in R_0 , \gamma \in \Gamma$. Then $\Gamma$ acts on $\mh H$ by the algebra automorphisms \begin{equation} \begin{split} & \psi_\gamma : \mh H \to \mh H \,, \\ & \psi_\gamma (x s_\alpha ) = \gamma (x) s_{\gamma (\alpha )} \qquad x \in \mf t^* , \alpha \in F_0 \,. \end{split} \end{equation} By \cite[Proposition 5.1.a]{SolHomGHA} the center of the resulting crossed product algebra is \begin{equation}\label{eq:1.4} Z (\mh H \rtimes \Gamma) = S(\mf t^*)^{W_0 \rtimes \Gamma} = \mc O (\mf t / (W_0 \rtimes \Gamma)) . \end{equation} We say that the central character of an $\mh H \rtimes \Gamma$-representation is real if it lies in $\mf a / (W_0 \rtimes \Gamma)$. 
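As a small check of the cross relation \eqref{eq:1.1}, take $x = \alpha \in F_0$. Since $s_\alpha (\alpha ) = -\alpha$ and $\inp{\alpha}{\alpha^\vee} = 2$, we obtain \[ \alpha \cdot s_\alpha + s_\alpha \cdot \alpha = 2 k_\alpha , \] so for $k_\alpha = 0$ the elements $\alpha$ and $s_\alpha$ anticommute. Indeed $\mh H (\tilde{\mc R},0) = \mc O (\mf t) \rtimes W_0$.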
\section{Parabolic subalgebras} \label{sec:parabolic} For a set of simple roots $P \subset F_0$ we introduce the notations \begin{equation} \begin{array}{l@{\qquad}l} R_P = \mh Q P \cap R_0 & R_P^\vee = \mh Q P^\vee \cap R_0^\vee , \\ \mf a_P = \R P^\vee & \mf a^P = (\mf a^*_P )^\perp ,\\ \mf a^*_P = \R P & \mf a^{P*} = (\mf a_P )^\perp ,\\ \mf t_P = \C P^\vee & \mf t^P = (\mf t^*_P )^\perp ,\\ \mf t^*_P = \C P & \mf t^{P*} = (\mf t_P )^\perp ,\\ X_P = X \big/ \big( X \cap (P^\vee )^\perp \big) & X^P = X / (X \cap \mh Q P ) , \\ Y_P = Y \cap \mh Q P^\vee & Y^P = Y \cap P^\perp , \\ T_P = \mr{Hom}_{\mh Z} (X_P, \mh C^\times ) & T^P = \mr{Hom}_{\mh Z} (X^P, \mh C^\times ) , \\ \mc R_P = ( X_P ,R_P ,Y_P ,R_P^\vee ,P) & \mc R^P = (X,R_P ,Y,R_P^\vee ,P) , \\ \tilde{\mc R}_P = ( \mf a_P^* ,R_P ,\mf a_P ,R_P^\vee ,P) & \tilde{\mc R}^P = (\mf a^*,R_P ,\mf a,R_P^\vee ,P) . \end{array} \end{equation} We denote the image of $x \in X$ in $X_P$ by $x_P$. Although $T_{rs} = T_{P,rs} \times T_{rs}^P$, the product $T_{un} = T_{P,un} T^P_{un}$ is not direct, because the intersection \[ K_P := T_{P,un} \cap T_{un}^P = T_P \cap T^P \] can have more than one element (but only finitely many). We define parameter functions $q_P$ and $q^P$ on the root data $\mc R_P$ and $\mc R^P$, as follows. Restrict $q$ to a function on $(R_P )_{nr}^\vee$ and use \eqref{eq:parameterEquivalence} to extend it to $W (\mc R_P )$ and $W (\mc R^P )$. Similarly the restriction of $k$ to $P$ is a parameter function for the degenerate root data $\tilde{\mc R}_P$ and $\tilde{\mc R}^P$, and we denote it by $k_P$ or $k^P$. Now we can define the parabolic subalgebras \[ \begin{array}{l@{\qquad}l} \mc H_P = \mc H (\mc R_P ,q_P ) & \mc H^P = \mc H (\mc R^P ,q^P ) , \\ \mh H_P = \mh H (\tilde{\mc R}_P ,k_P ) & \mh H^P = \mh H (\tilde{\mc R}^P ,k^P ) . \end{array} \] We notice that $\mh H^P = S (\mf t^{P*} ) \otimes \mh H_P$, a tensor product of algebras. 
Despite our terminology $\mc H^P$ and $\mc H_P$ are not subalgebras of $\mc H$, but they are close. Namely, $\mc H (\mc R^P ,q^P )$ is isomorphic to the subalgebra of $\mc H (\mc R ,q)$ generated by $\mc A$ and $\mc H (W (R_P) ,q_P)$. We let $\mc A_P \subset \mc H_P$ be the commutative subalgebra spanned by $\{ \theta_{x_P} : x_P \in X_P \}$. There is a natural surjective quotient map \begin{equation}\label{eq:quotientP} \mc H^P \to \mc H_P : \theta_x N_w \mapsto \theta_{x_P} N_w . \end{equation} Suppose that $\gamma \in \Gamma \ltimes W_0$ satisfies $\gamma (P) = Q \subseteq F_0$. Then there are algebra isomorphisms \begin{equation}\label{eq:psigamma} \begin{array}{llcl} \psi_\gamma : \mc H_P \to \mc H_Q , & \theta_{x_P} N_w & \mapsto & \theta_{\gamma (x_P)} N_{\gamma w \gamma^{-1}} , \\ \psi_\gamma : \mc H^P \to \mc H^Q , & \theta_x N_w & \mapsto & \theta_{\gamma x} N_{\gamma w \gamma^{-1}} , \\ \psi_\gamma : \mh H_P \to \mh H_Q , & f_P w & \mapsto & (f_P \circ \gamma^{-1}) w , \\ \psi_\gamma : \mh H^P \to \mh H^Q , & f w & \mapsto & (f \circ \gamma^{-1}) w , \end{array} \end{equation} where $f_P \in \mc O (\mf t_P)$ and $f \in \mc O (\mf t)$. Sometimes we will abbreviate $W_0 \rtimes \Gamma$ to $W'$, which is consistent with the notation of \cite{SolGHA}. For example the group \begin{equation}\label{eq:GammaP} W'_P := \{ \gamma \in \Gamma \ltimes W_0 : \gamma (P) = P \} \end{equation} acts on the algebras $\mc H_P$ and $\mc H^P$. Although $W'_{F_0} = \Gamma$, for proper subsets $P \subsetneq F_0$ the group $W'_P$ need not be contained in $\Gamma$. In other words, in general $W'_P$ strictly contains the group \[ \Gamma_P := \{ \gamma \in \Gamma : \gamma (P) = P \} = W'_P \cap \Gamma . \] To avoid confusion we do not use the notation $W_P$. Instead the parabolic subgroup of $W_0$ generated by $\{ s_\alpha : \alpha \in P \}$ will be denoted $W (R_P)$. 
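For example, the two extreme cases of these constructions are as follows. For $P = F_0$ we have $\mc R^{F_0} = \mc R$ and $\mc H^{F_0} = \mc H (\mc R ,q)$, while for $P = \emptyset$ we get $X_\emptyset = 0$, so $\mc H_\emptyset = \C$ and $\mc H^\emptyset = \mc A$. In the latter case the quotient map \eqref{eq:quotientP} is simply evaluation of $\mc A \cong \mc O (T)$ at the unit element of $T$.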
Suppose that $\gamma \in W'$ stabilizes either the root system $R_P$, the lattice $\Z P$ or the vector space $\Q P \subset \mf a^*$. Then $\gamma (P)$ is a basis of $R_P$, so $\gamma (P) = w (P)$ and $w^{-1} \gamma \in W'_P$ for a unique $w \in W(R_P)$. Therefore \begin{equation}\label{eq:WZP} W'_{\Z P} := \{ \gamma \in W' : \gamma (\Z P) = \Z P \} \text{ equals } W(R_P) \rtimes W'_P . \end{equation} For all $x \in X$ and $\alpha \in P$ we have \[ x - s_\alpha (x) = \inp{x}{\alpha^\vee} \alpha \in \Z P, \] so $t (s_\alpha (x)) = t(x)$ for all $t \in T^P$. Hence $t (w(x)) = t (x)$ for all $w \in W (R_P)$, and we can define an algebra automorphism \begin{equation} \phi_t : \mc H^P \to \mc H^P, \quad \phi_t (\theta_x N_w) = t (x) \theta_x N_w \qquad t \in T^P . \end{equation} In particular, for $t \in K_P$ this descends to an algebra automorphism \begin{equation}\label{eq:twistKP} \psi_t : \mc H_P \to \mc H_P , \quad \theta_{x_P} N_w \mapsto t(x_P) \theta_{x_P} N_w \qquad t \in K_P . \end{equation} We can regard any representation $(\sigma ,V_\sigma)$ of $\mc H (\mc R_P ,q_P )$ as a representation of $\mc H (\mc R^P ,q^P)$ via the quotient map \eqref{eq:quotientP}. Thus we can construct the $\mc H$-representation \[ \pi (P,\sigma ,t) := \mr{Ind}_{\mc H (\mc R^P ,q^P )}^{\mc H (\mc R ,q)} (\sigma \circ \phi_t ) . \] Representations of this form are said to be parabolically induced. Similarly, for any $\mh H_P$-representation $(\rho, V_\rho)$ and any $\lambda \in \mf t$ there is an $\mh H^P$-representation $(\rho_\lambda, V_\rho \otimes \C_\lambda)$. The corresponding parabolically induced representation is \[ \pi (P,\rho,\lambda) := \mr{Ind}_{\mh H^P}^{\mh H} (\rho_\lambda) = \mr{Ind}_{\mh H^P}^{\mh H} (V_\rho \otimes \C_\lambda) . \] \section{Analytic localization} \label{sec:localiz} A common technique in the study of Hecke algebras is localization at one or more characters of the center. There are several ways to implement this. 
Lusztig \cite{Lus-Gr} takes a maximal ideal $I$ of $Z (\mc H)$ and completes $\mc H$ with respect to the powers of this ideal. This has the effect of considering only those $\mc H$-representations which admit the central character corresponding to $I$. For reasons that will become clear only in Chapter \ref{chapter:scaling}, we prefer to localize with analytic functions on subvarieties of $T / W'$. Let $U \subset T$ be a nonempty $W'$-invariant subset and let $C^{an}(U)$ (respectively $C^{me}(U)$) be the algebra of holomorphic (respectively meromorphic) functions on $U$. There is a natural embedding \[ Z(\mc H \rtimes \Gamma ) = \mc A^{W'} \cong \mc O (T)^{W'} \to C^{an}(U)^{W'} \] and isomorphisms of topological algebras \[ C^{an}(U)^{W'} \otimes_{\mc A^{W'}} \mc A \cong C^{an}(U) ,\quad C^{me}(U)^{W'} \otimes_{\mc A^{W'}} \mc A \cong C^{me}(U) . \] Thus we can construct the algebras \begin{equation}\label{eq:defHan} \begin{array}{ccccc} \mc H^{an}(U) \rtimes \Gamma & := & C^{an}(U)^{W'} \otimes_{Z (\mc H \rtimes \Gamma)} \mc H \rtimes \Gamma & \cong & C^{an}(U) \otimes_\C \C [W'] , \\ \mc H^{me}(U) \rtimes \Gamma & := & C^{me}(U)^{W'} \otimes_{Z (\mc H \rtimes \Gamma)} \mc H \rtimes \Gamma & \cong & C^{me}(U) \otimes_\C \C [W']. \end{array} \end{equation} The isomorphisms with the right hand side are in the category of topological vector spaces; the algebra structure on the left hand side is determined by \begin{equation}\label{eq:multfh} (f_1 \otimes h_1)(f_2 \otimes h_2) = f_1 f_2 \otimes h_1 h_2 \qquad f_i \in C^{me} (U)^{W'}, h_i \in \mc H \rtimes \Gamma . \end{equation} By \cite[Proposition 4.5]{Opd-Sp} $Z (\mc H^{an}(U) \rtimes \Gamma) \cong C^{an}(U)^{W'}$, and similarly with meromorphic functions. For any $T' \subset T$, let $\mr{Mod}_{f,T'} (\mc H \rtimes \Gamma)$ be the category of finite dimensional $\mc H \rtimes \Gamma$-modules all of whose $\mc A$-weights lie in $T'$. 
By \cite[Proposition 4.3]{Opd-Sp} $\mr{Mod}_f (\mc H^{an}(U) \rtimes \Gamma)$ is naturally equivalent to $\mr{Mod}_{f,U} (\mc H \rtimes \Gamma)$. (On the other hand, $\mc H^{me}(U) \rtimes \Gamma$ does not have any nonzero finite dimensional representations over $\C$.) Of course graded Hecke algebras can be localized in exactly the same way, and the resulting algebras have analogous properties. By \eqref{eq:1.4} the center of the algebra $\mh H \rtimes \Gamma$ is isomorphic to $\mc O (\mf t / W') = \mc O (\mf t)^{W'}$. For nonempty open $W'$-invariant subsets $V$ of $\mf t$ we get the algebras \begin{equation}\label{eq:defgHan} \begin{array}{ccccc} \mh H^{an}(V) \rtimes \Gamma & := & C^{an}(V)^{W'} \otimes_{Z (\mh H \rtimes \Gamma)} \mh H \rtimes \Gamma & \cong & C^{an}(V) \otimes_\C \C [W'] , \\ \mh H^{me}(V) \rtimes \Gamma & := & C^{me}(V)^{W'} \otimes_{Z (\mh H \rtimes \Gamma)} \mh H \rtimes \Gamma & \cong & C^{me}(V) \otimes_\C \C [W']. \end{array} \end{equation} Let $\C (T/W')$ be the quotient field of $Z (\mc H \rtimes \Gamma) \cong \mc O (T /W')$ and consider \[ \C (T/W') \otimes_{Z (\mc H \rtimes \Gamma)} \mc H \rtimes \Gamma . \] As a vector space this is $\C (T) \otimes_{\mc A} \mc H \rtimes \Gamma \cong \C (T) \otimes_\C \C [W']$, while its multiplication is given by \eqref{eq:multfh}. Similarly, we let $\C (\mf t /W')$ be the quotient field of $\mc O (\mf t /W')$ and we construct the algebra \[ \C (\mf t /W') \otimes_{Z (\mh H \rtimes \Gamma)} \mh H \rtimes \Gamma . \] Its underlying vector space is \[ \C (\mf t) \otimes_{\mc O (\mf t)} \mh H \rtimes \Gamma \cong \C (\mf t) \otimes_\C \C [W'] , \] and its multiplication is given by the obvious analogue of \eqref{eq:multfh}. An important role in the harmonic analysis of $\mc H (\mc R ,q)$ is played by the Macdonald $c$-functions $c_\alpha \in \C (T)$ (cf. 
\cite[3.8]{Lus-Gr} and \cite[Section 1.7]{Opd-Tr}), defined by \begin{equation}\label{eq:calpha} c_\alpha = \left\{ \begin{array}{ll} {\ds \frac{\theta_\alpha + q(s_\alpha )^{-1/2} q_{\alpha^\vee}^{1/2}}{\theta_\alpha + 1} \, \frac{\theta_\alpha - q(s_\alpha )^{-1/2} q_{\alpha^\vee}^{-1/2}}{\theta_\alpha - 1} } & \alpha \in R_0 \setminus R_1 \\ (\theta_\alpha - q(s_\alpha )^{-1}) (\theta_\alpha - 1)^{-1} & \alpha \in R_0 \cap R_1 \end{array} \right. \end{equation} Notice that $c_\alpha = 1$ if and only if $q(s_\alpha) = q (t_\alpha s_\alpha) = 1$. With this function we can rephrase Theorem \ref{thm:1.1}.d as \[ f N_{s_\alpha} - N_{s_\alpha} s_\alpha (f) = q(s_\alpha)^{-1/2} \big( f - s_\alpha (f) \big) (q(s_\alpha) c_\alpha - 1) . \] For the graded Hecke algebra $\mh H (\tilde{\mc R},k)$ this is much easier: \begin{align} & \tilde c_\alpha = (\alpha + k_\alpha ) \alpha^{-1} = 1 + k_\alpha \alpha^{-1} \in \C (\mf t) , \\ & \nonumber x s_\alpha - s_\alpha \, s_\alpha (x) = \big( x - s_\alpha (x) \big) (\tilde c_\alpha - 1) \qquad \alpha \in F_0, x \in \mf t^* . \end{align} Given a simple root $\alpha \in F_0$ we define elements $\imath^0_{s_\alpha} \in \C (T/W_0) \otimes_{Z (\mc H)} \mc H$ and $\tilde \imath_{s_\alpha} \in \C (\mf t /W_0) \otimes_{Z (\mh H)} \mh H$ by \begin{equation}\label{eq:imatho} \begin{array}{rcl} q(s_\alpha) c_\alpha (1 + \imath^0_{s_\alpha}) & = & 1 + q(s_\alpha)^{1/2} N_{s_\alpha} , \\ \tilde c_\alpha (1 + \tilde \imath_{s_\alpha}) & = & 1 + s_\alpha . 
\end{array} \end{equation} \begin{prop}\label{prop:3.4} The elements $\imath^0_{s_\alpha}$ and $\tilde \imath_{s_\alpha}$ have the following properties: \enuma{ \item The map $s_\alpha \mapsto \imath^0_{s_\alpha}$ (respectively $s_\alpha \mapsto \tilde \imath_{s_\alpha}$) extends to a group homomorphism from $W'$ to the multiplicative group of $\C (T/W') \otimes_{Z (\mc H \rtimes \Gamma)} \mc H \rtimes \Gamma$ (respectively $\C (\mf t /W') \otimes_{Z (\mh H \rtimes \Gamma)} \mh H \rtimes \Gamma$). \item For $w \in W'$ and $f \in \C (T ) \cong \C (T/W') \otimes_{\mc O (T/W')} \mc A$ (respectively $\tilde f \in \C (\mf t)$) we have $\imath^0_w f \imath^0_{w^{-1}} = w (f)$ (respectively $\tilde \imath_w \tilde f \tilde \imath_{w^{-1}} = w (\tilde f)$). \item The maps \[ \begin{array}{rcl@{\; : \;}ccl} \C (T ) \rtimes W' & \to & \C (T/W') \otimes_{Z (\mc H \rtimes \Gamma)} \mc H \rtimes \Gamma & f w & \mapsto & f \imath^0_w , \\ \C (\mf t) \rtimes W' & \to & \C (\mf t /W') \otimes_{Z (\mh H \rtimes \Gamma )} \mh H \rtimes \Gamma & \tilde f w & \mapsto & \tilde f \tilde \imath_w \end{array} \] are algebra isomorphisms. \item Let $P \subset F_0$ and $\gamma \in W'$ be such that $\gamma (P) \subset F_0$. The automorphisms $\psi_\gamma$ from \eqref{eq:psigamma} satisfy \[ \begin{array}{lll@{\qquad}l} \psi_\gamma (h) & = & \imath^0_\gamma h \imath^0_{\gamma^{-1}} & h \in \mc H^P \text{ or } h \in \mc H_P, \\ \psi_\gamma (\tilde h) & = & \tilde{\imath}_\gamma \tilde h \tilde{\imath}_{\gamma^{-1}} & \tilde h \in \mh H^P . \end{array} \] } \end{prop} \emph{Proof.} (a), (b) and (c) with $W_0$ instead of $W'$ can be found in \cite[Section 5]{Lus-Gr}. Notice that Lusztig calls these elements $\tau_w$ and $\tilde \tau_w$, while we follow the notation of \cite[Section 4]{Opd-Sp}. We extend this to $W'$ by defining, for $\gamma \in \Gamma$ and $w \in W_0$: \[ \imath^0_{\gamma w} := \gamma \imath^0_w \quad \text{and} \quad \tilde \imath_{\gamma w} = \gamma \tilde \imath_w. 
\] For (d) see \cite[Section 8]{Lus-Gr} or \cite[Lemma 3.2]{SolGHA}. $\qquad \Box$ \\[2mm] We remark that by construction all the $\imath^0_w$ lie in the subalgebra $\C (T / T^{F_0}) \mc H (W_0,q) \rtimes \Gamma$ and that the $\tilde \imath_w$ lie in the subalgebra $\C (\mf t / \mf t^{F_0}) \C [W']$. As was noticed in \cite[Theorem 4.6]{Opd-Sp}, Proposition \ref{prop:3.4} can easily be generalized: \begin{cor}\label{cor:1.3} Proposition \ref{prop:3.4} remains valid under any of the following replacements: \begin{itemize} \item $\C (T)$ by $C^{me}(U)$ or, if all the functions $c_\alpha$ are invertible on $U$, by $C^{an}(U)$; \item $\C (\mf t)$ by $C^{me}(V)$ or, if all the functions $\tilde c_\alpha$ are invertible on $V$, by $C^{an}(V)$. \end{itemize} \end{cor} In particular \begin{equation}\label{eq:isoCme} C^{me}(U) \rtimes W' \to C^{me}(U)^{W'} \otimes_{\mc A^{W'}} \mc H \rtimes \Gamma : f w \mapsto f \imath_w^0 \end{equation} is an isomorphism of topological algebras. \section{The relation with reductive $p$-adic groups} \label{sec:padic} Here we discuss how affine Hecke algebras arise in the representation theory of reductive $p$-adic groups. This section is rather sketchy; it mainly serves to provide some motivation and to prepare for our treatment of the Aubert--Baum--Plymen conjecture in Section \ref{sec:ABP}. The main sources for this section are \cite{BeDe,BeRu,Roc2,Hei,IwMa,Mor}. Let $\mathbb F$ be a nonarchimedean local field with a finite residue field. Let $\mathbf G$ be a connected reductive algebraic group defined over $\mathbb F$ and let $G = \mathbf G (\mathbb F)$ be its group of $\mathbb F$-rational points. One briefly calls $G$ a reductive $p$-adic group, even though the characteristic of $\mathbb F$ is allowed to be positive. An important theme, especially in relation with the arithmetic Langlands program, is the study of the category Mod$(G)$ of smooth $G$-representations on complex vector spaces. 
A first step to simplify this problem is the Bernstein decomposition, which we recall now. Let $P$ be a parabolic subgroup of $G$ and $M$ a Levi subgroup of $P$. Although $G$ and $M$ are unimodular, the modular function $\delta_P$ of $P$ is in general not constant. Let $\sigma$ be an irreducible supercuspidal representation of $M$. In this situation we call $(M,\sigma)$ a cuspidal pair, and with parabolic induction we construct the $G$-representation \[ I_P^G (\sigma) := \mr{Ind}_P^G (\delta_P^{1/2} \otimes \sigma) . \] This means that first we inflate $\sigma$ to $P$ and then we apply the normalized smooth induction functor. The normalization refers to the twist by $\delta_P^{1/2}$, which is useful to preserve unitarity. Let Irr$(G)$ be the collection of irreducible representations in Mod$(G)$, modulo equivalence. For every $\pi \in \mr{Irr}(G)$ there is a cuspidal pair $(M,\sigma)$, uniquely determined up to $G$-conjugacy, such that $\pi$ is a subquotient of $I_P^G (\sigma)$. Let $G^0$ be the normal subgroup of $G$ generated by all compact subgroups. Recall that a character $\chi : G \to \C^\times$ is called unramified if its kernel contains $G^0$. The group $X_{ur}(G)$ of unramified characters forms a complex algebraic torus whose character lattice is naturally isomorphic to the lattice $X^* (G)$ of algebraic characters $G \to \mathbb F^\times$. We say that two cuspidal pairs $(M,\sigma)$ and $(M',\sigma')$ are inertially equivalent if there exist $g \in G$ and $\chi' \in X_{ur}(M)$ such that \[ M' = g M g^{-1} \quad \text{and} \quad \sigma' \otimes \chi' \cong \sigma^g . \] With an inertial equivalence class $\mf s = [M,\sigma]_G$ one associates a full subcategory $\mr{Mod}_{\mf s} (G)$ of Mod$(G)$. Its objects are by definition those smooth representations $\pi$ with the property that for every irreducible subquotient $\rho$ of $\pi$ there is a pair $(M,\sigma) \in \mf s$ such that $\rho$ is a subrepresentation of $I_P^G (\sigma)$. 
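A standard example (included only as an illustration): for $G = GL_n (\mathbb F)$ with ring of integers $\mf o \subset \mathbb F$ and discrete valuation $\mr{val}$, one has $G^0 = \{ g \in G : \det g \in \mf o^\times \}$, and every unramified character is of the form $\chi_z (g) = z^{\mr{val}(\det g)}$ for some $z \in \C^\times$. Thus $X_{ur}(GL_n (\mathbb F)) \cong \C^\times$, in accordance with $X^* (GL_n ) = \Z \cdot \det$.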
The collection $\mf B (G)$ of inertial equivalence classes is countably infinite (unless $G = \{1\}$). The Bernstein decomposition \cite[Proposition 2.10]{BeDe} states that \begin{equation}\label{eq:Bdecomp} \mr{Mod}(G) = \prod\nolimits_{\mf s \in \mf B (G)} \mr{Mod}_{\mf s} (G) , \end{equation} a direct product of categories. The subcategories $\mr{Mod}_{\mf s} (G)$ (or rather their subsets of irreducible representations) are also called the Bernstein components of the smooth dual of $G$. The Hecke algebra $\mc H (G)$ is the vector space of locally constant, compactly supported functions on $G$, endowed with the convolution product. Mod$(G)$ is naturally equivalent to the category Mod$(\mc H (G))$ of essential $\mc H (G)$-modules. (A module $V$ is called essential if $\mc H (G) V = V$, which is not automatic because $\mc H (G)$ does not have a unit.) Corresponding to \eqref{eq:Bdecomp} there is a decomposition \[ \mc H (G) = \bigoplus\nolimits_{\mf s \in \mf B (G)} \mc H (G)_{\mf s} \] of the Hecke algebra of $G$ into two-sided ideals. In several cases $\mc H (G)_{\mf s}$ is known to be Morita-equivalent to an affine Hecke algebra. In the classical case \cite{IwMa,Bor} $G$ is split and $\mr{Mod}_{\mf s} (G)$ is the category of Iwahori-spherical representations, that is, of those smooth $G$-representations $V$ that are generated by $V^I$, where $I$ is an Iwahori subgroup of $G$. Then $\mc H (G)_{\mf s}$ is Morita equivalent to the algebra $\mc H (G,I)$ of $I$-biinvariant functions in $\mc H (G)$, and $\mc H (G,I)$ is isomorphic to an affine Hecke algebra $\mc H (\mc R,q)$. The root datum $\mc R = (X,R_0,Y,R_0^\vee,F_0)$ is dual to the root datum of $(\mathbf G, \mathbf T)$, where $\mathbf T (\mathbb F)$ is a split maximal torus of $G = \mathbf G (\mathbb F)$. 
More explicitly \begin{itemize} \item $X$ is the cocharacter lattice of $\mathbf T$; \item $Y$ is the character lattice of $\mathbf T$; \item $R_0^\vee$ is the root system of $(\mathbf G ,\mathbf T)$; \item $R_0$ is the set of coroots of $(\mathbf G ,\mathbf T)$; \item $F_0$ and $F_0^\vee$ are determined by $I$; \item $q$ is the cardinality of the residue field of $\mathbb F$. \end{itemize} For a general inertial equivalence class $\mf s = [M,\sigma]_G$ it is expected that $\mc H (G)_{\mf s}$ is Morita equivalent to an affine Hecke algebra or to a closely related kind of algebra. We discuss some ingredients of this conjectural relation between the representation theory of reductive $p$-adic groups and that of affine Hecke algebras. Let $\sigma^0$ be an irreducible subrepresentation of $\sigma \big|_{M^0}$ and define $\Sigma = \mr{ind}_{M^0}^M (\sigma^0)$. According to \cite[Theorem 23]{BeRu} $I_P^G (\Sigma)$ is a finitely generated projective generator of the category $\mr{Mod}_{\mf s}(G)$. By \cite[Lemma 22]{BeRu} $\mr{Mod}_{\mf s}(G) = \mr{Mod}(\mc H (G)_{\mf s})$ is naturally equivalent to the category of right $\mr{End}_G (I_P^G (\Sigma))$-modules. So if $\mr{End}_G (I_P^G (\Sigma))$ were isomorphic to its opposite algebra (which is likely), then it would be Morita equivalent to $\mc H (G)_{\mf s}$. Let us describe the center of $\mr{End}_G (I_P^G (\Sigma))$. The map \[ X_{ur}(M) \to \mr{Irr}_{[M,\sigma]_M} (M) : \chi \mapsto \chi \otimes \sigma \] is surjective and its fibers are cosets of a finite subgroup Stab$(\sigma) \subset X_{ur}(M)$. Let \[ M_\sigma := \bigcap\nolimits_{\chi \in \mr{Stab}(\sigma)} \! \ker (\chi) \; \subset \; M . \] Roche \cite[Proposition 5.3]{Roc2} showed that $\mr{End}_M (\Sigma)$ is a free $\mc O (X_{ur}(M_\sigma))$-module of rank $m^2$, where $m$ is the multiplicity of $\sigma^0$ in $\sigma \big|_{M^0}$. 
Moreover the center of $\mr{End}_M (\Sigma)$ is isomorphic to $\mc O (X_{ur}(M_\sigma))$, and $\mr{End}_M (\Sigma)$ embeds in $\mr{End}_G (I_P^G (\Sigma))$ by functoriality. The group \[ N_G (M,\sigma) := \{ g \in G : g M g^{-1} = M, \sigma^g \cong \chi \otimes \sigma \text{ for some } \chi \in X_{ur}(M) \} \] acts on the family of representations $\{ I_P^G (\chi \otimes \Sigma) : \chi \in X_{ur}(M) \}$, and via this on $X_{ur} (M_\sigma)$. The subgroup $M \subset N_G (M,\sigma)$ acts trivially, so we get an action of the finite group \[ W_\sigma := N_G (M,\sigma) / M . \] According to \cite[Th\'eor\`eme 2.13]{BeDe}, the center of $\mr{End}_G (I_P^G (\Sigma))$ is isomorphic to $\mc O (X_{ur}(M_\sigma))^{W_\sigma} = \mc O (X_{ur}(M_\sigma) / W_\sigma)$. By \cite[Lemma 7.3]{Roc2} $\mr{End}_G (I_P^G (\Sigma))$ is a free $\mr{End}_M (\Sigma)$-module of rank $|W_\sigma|$. Next we indicate how to associate a root datum to $\mr{End}_G (I_P^G (\Sigma))$. See \cite[Section 6]{Hei} for more details in the case of classical groups. Let $A$ be the maximal split torus of $Z(M)$, let $X^* (A)$ be its character lattice and $X_* (A)$ its cocharacter lattice. There are natural isomorphisms \[ X_{ur}(M) \cong X^* (A) \otimes_\Z \C^\times \cong \mr{Hom} (X_* (A), \C^\times ). \] In $X^* (A)$ we have the root system $R (G,A)$ and in $X_* (A)$ we have the set $R^\vee (G,A)$ of coroots of $(G,A)$. The parabolic subgroup $P$ determines positive systems $R(P,A)$ and $R^\vee (P,A)$. Altogether we constructed a (nonreduced) based root datum \[ \mc R_M := \big( X_* (A),R^\vee (G,A),X^*(A),R(G,A), R^\vee(P,A) \big) , \] from which one can of course deduce a reduced based root datum. Yet $\mc R_M$ is not good enough: it does not take $\sigma$ into account. 
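For orientation: in the Iwahori-spherical case, where $G$ is split, $P = B$ is a Borel subgroup, $M = \mathbf T (\mathbb F)$ for a split maximal torus $\mathbf T$ and $\sigma$ is the trivial representation, we have $A = \mathbf T$ and $R (G,A) = R (\mathbf G ,\mathbf T)$, so \[ \mc R_M = \big( X_* (\mathbf T), R^\vee (\mathbf G ,\mathbf T), X^* (\mathbf T), R (\mathbf G ,\mathbf T), R^\vee (B ,\mathbf T) \big) \] already recovers (up to replacing the positive system by its basis) the root datum $\mc R$ from the Iwahori case, and no correction for $\sigma$ is needed. In general, however, $\sigma$ modifies both the lattices and the root system. 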
Put \[ \begin{array}{ccccccc} X_\sigma & := & \mr{Hom} (X_{ur}(M_\sigma), \C^\times) & \cong & \mr{Hom} (X_{ur}(M_\sigma \cap A), \C^\times) & \subset & X_* (A), \\ Y_\sigma & := & \mr{Hom}(X_\sigma ,\Z) & \cong & X^* (M_\sigma \cap A) & \supset & X^* (A). \end{array} \] Assume for simplicity that $\sigma \big|_{M^0}$ is multiplicity free, or equivalently that $\mr{End}_M (\Sigma) = \mc O (X_{ur}(M_\sigma))$. Then the above says that $\mr{End}_G (I_P^G (\Sigma))$ is a free module of rank $|W_\sigma|$ over $\C [X_\sigma]$. We want to associate a root system to $W_\sigma$. In general $W_\sigma$ does not contain $W(G,A) = N_G (M) / M$, so we have to replace $R(G,A)$ by \[ R_{\sigma,nr} := \{ \alpha \in R (G,A) : s_\alpha \in W_\sigma \} , \] and $R^\vee (G,A)$ by $R^\vee_{\sigma,nr}$. Let $R_\sigma^\vee$ be the reduced root system of $R_{\sigma,nr}^\vee$ and $R_\sigma$ the dual root system, which consists of the non-multipliable roots in $R_{\sigma,nr}$. Let $F_\sigma$ be the unique basis of $R_\sigma$ contained in $R (P,A)$. Then $W (R_\sigma)$ is a normal subgroup of $W_\sigma$ and \[ W_\sigma \cong W (R_\sigma) \rtimes \Gamma_\sigma \quad \text{where} \quad \Gamma_\sigma = \{ w \in W_\sigma : w (F_\sigma) = F_\sigma \} . \] As $\sigma$ is not explicit, it is difficult to say which diagram automorphism groups $\Gamma_\sigma$ can occur here. A priori there does not seem to be any particular restriction. All this suggests that, if $\mr{End}_G (I_P^G (\Sigma))$ is isomorphic to some affine Hecke algebra, then to \begin{equation}\label{eq:AHAsigma} \mc H_\sigma \rtimes \Gamma_\sigma := \mc H (X_\sigma, R_\sigma^\vee, Y_\sigma, R_\sigma, F_\sigma^\vee, q_\sigma ) \rtimes \Gamma_\sigma . \end{equation} In fact it is also possible that the $\Gamma_\sigma$-action is twisted by a cocycle \cite[Section 7]{Mor}, but we ignore this subtlety here. 
We note that little would change upon replacing $G$ by a disconnected group; that would only lead to a larger group of diagram automorphisms. We also note that this description of $\mr{End}_G (I_P^G (\Sigma))$ is compatible with parabolic induction. Every parabolic subgroup $Q \subset G$ containing $P$ gives rise to a subalgebra \[ \mr{End}_Q (I_P^Q (\Sigma)) \subset \mr{End}_G (I_P^G (\Sigma)) , \] which via \eqref{eq:AHAsigma} and \[ R_{\sigma,nr}^Q = R(Q,A) \subset R(P,A) = R_{\sigma,nr} \] corresponds to a parabolic subalgebra $\mc H_\sigma^Q \rtimes \Gamma_{\sigma,Q} \subset \mc H_\sigma \rtimes \Gamma_\sigma$. By analogy with the Iwahori case the numbers $q_\sigma (w)$ are related to the affine Coxeter complex of $X_* (A) \rtimes W (R^\vee (G,A))$. After fixing a fundamental chamber $C_0$, every $w \in X_* (A) \rtimes W_\sigma$ determines a chamber $w (C_0)$. This affine Coxeter complex can be regarded as a subset of the Bruhat--Tits building of $G$, so $C_0$ has a stabilizer $K \subset G$. In view of \cite[Section 6]{Mor}, a good candidate for $q_\sigma (w)$ is $[K w K : K]$. In particular, for a simple reflection $s_\alpha \in W(R_\sigma^\vee)$ this works out to $q_\sigma (s_\alpha) = q^{d_\alpha}$, where $q$ is the cardinality of the residue field of $\mathbb F$ and $d_\alpha$ is the dimension of the $\alpha$-weight space in the $A$-representation Lie$(G)$. Hence $q_\sigma$ is a positive parameter function and, if $A$ is not a split maximal torus of $G$, $q_\sigma$ tends to be non-constant on the set of simple reflections. As said before, most of the above is conjectural. The problem is that in general it is not known whether one can construct elements $N_w \; (w \in W_\sigma)$ that satisfy the multiplication rules of an extended affine Hecke algebra. To that end one has to study the intertwining operators between parabolically induced representations very carefully. 
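As a consistency check of this formula for $q_\sigma$, consider once more the Iwahori case: $G$ split, $M = \mathbf T (\mathbb F)$ with $\mathbf T$ a split maximal torus (so $A = \mathbf T$) and $\sigma$ unramified. Every root space of Lie$(G)$ is then one-dimensional, so $d_\alpha = 1$ and \[ q_\sigma (s_\alpha) = q^{d_\alpha} = q \qquad \text{for all simple reflections } s_\alpha , \] in agreement with the equal parameters of the Iwahori--Hecke algebra $\mc H (G,I)$ from \cite{IwMa,Bor}. 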
Let us list the cases in which it is proven that $\mr{End}_G (I_P^G (\Sigma))$ is isomorphic to an (extended) affine Hecke algebra: \begin{itemize} \item $G$ split, $\mf s$ the Iwahori-spherical component \cite{IwMa,Bor}; \item $G = GL_n (\mathbb F),\; \mf s$ arbitrary -- from the work of Bushnell and Kutzko on types \cite{BuKu1,BuKu2,BuKu3}; \item $G = SL_n$, many $\mf s$ \cite{GoRo} (for general $\mf s$ the Hecke algebra is known to have a closely related shape); \item $G = SO_n (\mathbb F), G = Sp_n (\mathbb F)$ or $G$ an inner form of $GL_n,\; \mf s$ arbitrary \cite{Hei}; \item $G = GSp_4 (\mathbb F)$ or $G = U(2,1),\; \mf s$ arbitrary \cite{Moy1,Moy2}; \item $G$ classical, certain $\mf s$ \cite{Kim1,Kim2,Blo}; \item $G$ split (with mild restrictions on the residual characteristic), $\mf s$ in the principal series \cite{Roc1}; \item $G$ arbitrary, $\sigma$ induced from a level 0 cuspidal representation of a parahoric subgroup of $G$ \cite{Mor,MoPr,Lus-Uni}. \end{itemize} Of course there is a lot of overlap in this list. For $GL_n, SL_n, GSp_4$ and $U(2,1)$ the above references do much more: they classify the smooth dual of $G$. In the level 0 case, Morris \cite{Mor} showed that the parameters $q_\alpha$ are the same as those for analogous Hecke algebras of finite Chevalley groups. Those parameters were determined explicitly in \cite{Lus-fin}, and often they are not equal on all simple roots. Apart from this list, there are many inertial equivalence classes $\mf s$ for which $\mr{End}_G (I_P^G (\Sigma))$ is Morita equivalent to a commutative algebra. This is the case for supercuspidal $G$-representations $\sigma$ such that $\sigma \big|_{G^0}$ is multiplicity-free, and more generally it tends to happen when $R_\sigma$ is empty. \chapter{Classification of irreducible representations} This chapter leads to the main result of the paper (Theorem \ref{thm:2.7}). 
We decided to call it an affine Springer correspondence, by analogy with the classical Springer correspondence. Together with Kazhdan--Lusztig theory, the classical version matches the irreducible representations of a finite Weyl group with certain representations of an affine Hecke algebra with equal parameters. This correspondence is known to remain valid for affine or graded Hecke algebras with certain specific unequal parameters \cite{Ciu}. We construct a natural map from irreducible $\mc H$-representations to representations of the extended affine Weyl group $W^e$. Not all representations in the image are irreducible, but the image does form a $\Q$-basis of the representation ring of $W^e$. The proof proceeds by reduction to a result that the author previously obtained for graded Hecke algebras \cite{SolHomGHA}. To carry out this reduction, we need variations on three well-known results in the representation theory of Hecke algebras. The first two are due to Lusztig and allow one to descend from affine Hecke algebras to graded Hecke algebras. We adjust these results to make them more suitable for affine Hecke algebras with arbitrary positive parameters. Thirdly there is the Langlands classification (Theorem \ref{thm:2.4}), which comes from reductive groups and reduces the classification of irreducible representations to that of irreducible tempered ones. For affine Hecke algebras it did not appear in the literature before, although it was of course known to experts. Because we want to include diagram automorphisms in our affine Hecke algebras, we need a more refined version of the Langlands classification (Corollary \ref{cor:2.8}). It turns out that one has to add an extra ingredient to the Langlands parameters, and that the unicity claim has to be changed accordingly. However, these results do not suffice to complete the proof of Theorem \ref{thm:2.7} for nontempered representations; that will be done in the next chapter. 
\section{Two reduction theorems} \label{sec:reduction} The study of irreducible representations of $\mc H \rtimes \Gamma$ is simplified by two reduction theorems, which are essentially due to Lusztig \cite{Lus-Gr}. The first one reduces to the case of modules whose central character is positive on the lattice $\Z R_1$. The second one relates these to modules of an associated graded Hecke algebra. Given $t \in T$ and $\alpha \in R_0$, \cite[Lemma 3.15]{Lus-Gr} tells us that \begin{equation}\label{eq:sAlphaFixt} s_\alpha (t) = t \text{ if and only if } \alpha (t) = \left\{ \begin{array}{ll} 1 & \text{if } \alpha^\vee \notin 2 Y \\ \pm 1 & \text{if } \alpha^\vee \in 2 Y . \end{array} \right. \end{equation} We define $R_t := \{ \alpha \in R_0 : s_\alpha (t) = t \}$. The collection of long roots in $R_{t,nr}$ is $\{ \beta \in R_1 : \beta (t) = 1\}$. Let $F_t$ be the unique basis of $R_t$ that is contained in $R_0^+$. Then \[ W'_{F_t ,t} := \{ w \in W_0 \rtimes \Gamma : w(t) = t , w (F_t) = F_t \} \] is a group of automorphisms of the Dynkin diagram of $(R_t ,F_t)$. Moreover the isotropy group of $t$ in $W_0 \rtimes \Gamma$ is \[ W'_t = (W_0 \rtimes \Gamma)_t = W (R_t) \rtimes W'_{F_t ,t} . \] We can define a parameter function $q_t$ for the based root datum \[ \mc R_t := (X,R_t,Y,R_t^\vee,F_t) \] via restriction from $R_{nr}^\vee$ to $R_{t,nr}^\vee$. Since $F_t$ does not have to be a subset of $F_0 \,, \mc R_t$ does not always fit in the setting of Subsection \ref{sec:parabolic}, but this can be fixed without many problems. For $u \in T_{un}$ we define \[ P(u) := F_0 \cap \Q R_u . \] Then $R_{P(u)}$ is a parabolic root subsystem of $R_0$ that contains $R_u$ as a subsystem of full rank. Although this definition would also make sense for general elements of $T$, we use it only for $T_{un}$, to avoid a clash with the notation of \cite[Section 4.1]{Opd-Sp}. We note that the lattice \[ \Z P(u) = \Z R_0 \cap \Q R_u \] can be strictly larger than $\Z R_u$. 
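To illustrate \eqref{eq:sAlphaFixt} in rank one, one can compare the root data of $SL_2$ and $PGL_2$ (we write $\varpi$ for the fundamental weight and $\omega$ for the fundamental coweight, only in this example). For $SL_2$ we have $X = \Z \varpi$ with $\alpha = 2 \varpi$, and $\alpha^\vee \notin 2 Y$. The fixed points of $s_\alpha$ in $T \cong \C^\times$ are the two elements with $\varpi (t) = \pm 1$, and both satisfy $\alpha (t) = \varpi (t)^2 = 1$. For $PGL_2$ we have $X = \Z \alpha$ and $\alpha^\vee = 2 \omega \in 2 Y$, and the fixed points of $s_\alpha$ are exactly the elements with $\alpha (t) = \pm 1$. 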
To study $\mc H$-representations with central character $W' uc$ we need a well-chosen neighborhood of $uc \in T_{un} T_{rs}$. \begin{cond}\label{cond:ball} Let $B \subset \mf t$ be such that \enuma{ \item $B$ is an open ball centred around $0 \in \mf t$; \item $\Im (\alpha (b)) < \pi$ for all $\alpha \in R_0, b \in B$; \item $\exp : B \to \exp (B)$ is a diffeomorphism (this follows from (b) if $R_0$ spans $\mf a^*$); \item if $c_\alpha (t) \in \{0,\infty\}$ for some $\alpha \in R_0 , t \in uc \exp \overline{B}$, then $c_\alpha (uc) \in \{0,\infty\}$; \item if $w \in W'$ and $w (uc \exp \overline{B}) \cap uc \exp \overline{B} \neq \emptyset$, then $w(uc) = uc$. } \end{cond} Since $W'$ acts isometrically on $\mf t$, (a) implies that $B$ is $W'$-invariant. There always exist balls satisfying these conditions, and if we have one such $B$, then $\ep B$ with $\ep \in (0,1]$ also works. We will phrase our first reduction theorem in such a way that it depends mainly on the unitary part of the central character: it decomposes a representation into subspaces corresponding to the points of the orbit $W' u$. We note that $R_{uc} \subset R_u$ and $W'_{uc} \subset W'_u$. Given $B$ satisfying the above conditions, we define \begin{equation*} U = W' uc \exp (B) ,\: U_{P(u)} = W'_{\Z P(u)} uc \exp (B) \text{ and } U_u = W'_u uc \exp (B) . \end{equation*} We are interested in the algebras $\mc H (\mc R,q)^{an}(U) \rtimes \Gamma ,\: \mc H (\mc R^{P(u)},q^{P(u)})^{an} (U_{P(u)}) \rtimes W'_{P(u)}$ and $\mc H (\mc R_u ,q_u)^{an} (U_u)\rtimes W'_{F_u ,u}$. Their respective centers \[ C^{an}(U)^{W'} ,\: C^{an}(U_{P(u)})^{W'_{\Z P(u)}} \text{ and } C^{an}(U_u)^{W'_u} \] are naturally isomorphic, via the embeddings $U_u \subset U_{P(u)} \subset U$. For any subset $\varpi \subset W' uc$ we define $1_\varpi \in C^{an}(U)$ by \[ 1_\varpi (t) = \left\{ \begin{array}{ll} 1 & \text{if } \: t \in \varpi \exp (B) \\ 0 & \text{if } \: t \in U \setminus \varpi \exp (B) . \end{array} \right. 
\] \begin{thm}\label{thm:2.1}\textup{(First reduction theorem)} \enuma{ \item There are natural isomorphisms of $C^{an}(U)^{W'}$-algebras \[ \begin{array}{lll} \mc H (\mc R^{P(u)}, q^{P(u)})^{an}(U_{P(u)}) \rtimes W'_{P(u)} & \cong & 1_{W'_{\Z P(u)} uc} (\mc H^{an}(U) \rtimes \Gamma) \, 1_{W'_{\Z P(u)} uc} , \\ \mc H (\mc R_u ,q_u)^{an} (U_u) \rtimes W'_{F_u ,u} & \cong & 1_{W'_u uc} (\mc H^{an}(U) \rtimes \Gamma) \, 1_{W'_u uc} . \end{array} \] \item These can be extended (not naturally) to isomorphisms of $C^{an}(U)^{W'}$-algebras \[ \begin{array}{lll} \mc H^{an}(U) \rtimes \Gamma & \cong & M_{[W' : W'_{\Z P(u)}]} \big( 1_{W'_{\Z P(u)} uc} (\mc H^{an}(U) \rtimes \Gamma) \, 1_{W'_{\Z P(u)} uc} \big) , \\ \mc H^{an}(U) \rtimes \Gamma & \cong & M_{[W' : W'_u]} \big( 1_{W'_u uc} (\mc H^{an}(U) \rtimes \Gamma) \, 1_{W'_u uc} \big) , \end{array} \] where $M_n (A)$ denotes the algebra of $n \times n$-matrices with coefficients in an algebra~$A$. \item The following maps are natural equivalences of categories: } \[ \begin{array}{ccccc} \!\!\! \mr{Mod}_{f,U}(\mc H (\mc R ,q) \! \rtimes \! \Gamma) \!\!\! & \leftrightarrow & \!\!\! \mr{Mod}_{f,U_{P(u)}} (\mc H^{P(u)} \! \rtimes \! W'_{P(u)}) \!\!\! & \leftrightarrow & \!\!\! \mr{Mod}_{f,U_u} (\mc H (\mc R_u ,q_u) \! \rtimes \! W'_{F_u ,u}) \\ V & \mapsto & 1_{W'_{\Z P(u)} uc} V & \mapsto & 1_{W'_u uc} V \\ \!\!\! \mr{Ind}_{\mc H (\mc R_u,q_u) \rtimes W'_{F_u,u}}^{\mc H \rtimes \Gamma} (\pi) \!\!\! & \mathrel{\reflectbox{$\mapsto$}} & \mr{Ind}_{\mc H (\mc R_u,q_u) \rtimes W'_{F_u,u}}^{\mc H^{P(u)} \rtimes W'_{P(u)}} (\pi) & \mathrel{\reflectbox{$\mapsto$}} & \pi \end{array} \] \end{thm} \emph{Proof.} (a) This is a variation on \cite[Theorem 4.10]{Opd-Sp}, which itself varied on \cite[Theorem 8.6]{Lus-Gr}. Compared to Lusztig we replaced his $R_{uc}$ by a larger root system, we added the group $\Gamma$ and we localized with analytic functions instead of formal completions at one central character. 
The first change is innocent since $R_u$ and $R_{P(u)}$ are actually easier than Lusztig's $R_{uc}$. By \cite[Lemma 8.13.b]{Lus-Gr} Lusztig's version of the isomorphisms (a) sends $\gamma \in \Gamma (\varpi)$ to $1_\varpi \imath^0_\gamma 1_\varpi$. Translated to our setting this means that we can include the appropriate diagram automorphisms by defining \begin{equation}\label{eq:firstredgamma} \begin{array}{cccccc} W'_{P(u)} & \ni & \gamma & \mapsto & 1_{W'_{\Z P(u)} uc} \, \imath^0_\gamma \, 1_{W'_{\Z P(u)} uc} , \\ W'_{F_u,u} & \ni & \gamma & \mapsto & 1_{W'_u uc} \, \imath^0_\gamma \, 1_{W'_u uc} . \end{array} \end{equation} Finally, that Lusztig's arguments also apply with analytic localization was already checked by Opdam \cite[Section 4.1]{Opd-Sp}. \\ (b) Knowing (a), this can be proved just as in \cite[4.16]{Lus-Gr}.\\ (c) By \cite[Proposition 4.3]{Opd-Sp} the categories in the statement are equivalent to the categories of finite dimensional modules of the algebras figuring in (a) and (b). Therefore the maps in (c) are just the standard equivalences between the module categories of $B$ and $M_n (B)$, translated with (a) and (b). $\qquad \Box$ \begin{rem}\label{rem:firstred} This reduction theorem more or less forces one to consider diagram automorphisms: the groups $W'_{\Z P(u)}$ and $W'_{F_u ,u}$ can be nontrivial even if $\Gamma = \{ \text{id} \}$. The notation with induction functors in part (c) is a little sloppy, since $W'_{F_u,u}$ need not be contained in $\Gamma$ or in $W'_{\Z P(u)}$. In such cases these induction functors are defined via part (a). The first reduction theorem also enables us to make sense of $\mr{Ind}_{\mc H (\mc R^P,q^P) \rtimes \Gamma'_P}^{\mc H \rtimes \Gamma}$ for any $P \subset F_0$ and any subgroup $\Gamma'_P \subset W'_P$. Namely, first induce from $\mc H (\mc R^P,q^P) \rtimes \Gamma'_P$ to $\mc H (\mc R^P,q^P) \rtimes W'_{\Z P}$, then choose $u \in T_{un}$ such that $P(u) = P$ and finally use (c). 
\end{rem} By \eqref{eq:sAlphaFixt} we have $\alpha (u) = 1$ for all $\alpha \in R_1 \cap \Q R_t$, so $\alpha (t) = \alpha (u) \alpha (c) > 0$ for such roots. By definition $u$ is fixed by $W'_u$, so Theorem \ref{thm:2.1} allows us to restrict our attention to $\mc H \rtimes \Gamma$-modules whose central character is positive on the sublattice $\Z R_1 \subseteq X$. We define a parameter function $k_u$ for the degenerate root datum $\tilde{\mc R}_u$ by \begin{equation} k_{u,\alpha} = \left\{ \begin{array}{c@{\quad}ll} 0 & \text{if} & \alpha (u)^2 \neq 1 \\ \big( \log q(s_\alpha) + \alpha (u) \log q(t_\alpha s_\alpha) \big) / 2 & \text{if} & \alpha (u)^2 = 1 . \end{array}\right. \end{equation} We note that $k_{u,\alpha} = 0$ if $s_\alpha$ does not fix $u$. Indeed, by \eqref{eq:sAlphaFixt} $s_\alpha (u) \neq u$ implies that either $\alpha (u)^2 \neq 1$ or that $\alpha (u) = -1$ and $\alpha \in R_0 \cap R_1$. But in the latter case $s_\alpha$ and $t_\alpha s_\alpha$ are conjugate in $W^e$, so $q(s_\alpha) = q (t_\alpha s_\alpha)$ and $\log q(s_\alpha) + \alpha (u) \log q(t_\alpha s_\alpha) = 0$. We will see in \eqref{eq:lim0cep} that for this choice of $k_u$ the function $\tilde c_\alpha$ can be regarded as the first order approximation of $q(s_\alpha) c_\alpha$ in a neighborhood of $q=1$ and $u \in T$. Now we pick $u \in T_{un}^{W'}$, so $\alpha (u) = \pm 1$ for all $\alpha \in R_0$. Then the map \begin{equation}\label{eq:expMap} \exp_u : \mf t \to T ,\; \lambda \mapsto u \exp (\lambda) \end{equation} is $W'$-equivariant. To find a relation between $\mc H (\mc R ,q) \rtimes \Gamma$ and $\mh H (\tilde {\mc R}_u ,k_u) \rtimes \Gamma$, we first extend these algebras with analytic localization. 
For every open nonempty $W'$-invariant $V \subset \mf t$ we can define an algebra homomorphism \begin{align} \nonumber \Phi_u : \mc H^{me}(\exp_u (V)) \rtimes \Gamma & \to \mh H (\tilde{\mc R}_u,k_u)^{me}(V) \rtimes \Gamma , \\ \label{eq:Phiu} f \imath^0_w & \mapsto (f \circ \exp_u) \tilde \imath_w . \end{align} \begin{thm}\label{thm:2.2}\textup{(Second reduction theorem)} \\ Let $V$ be as above, such that $\exp_u : V \to \exp_u(V)$ is bijective. \enuma{ \item The map $\exp_u$ induces an isomorphism $C^{an}(\exp_u (V))^{W'} \to C^{an}(V)^{W'}$. \item Suppose that every $\lambda \in V$ satisfies \begin{equation}\label{eq:condLambda} \begin{array}{llr@{\quad}l} \inp{\alpha}{\lambda}, \inp{\alpha}{\lambda} + k_{u,\alpha} & \notin & 2 \pi i \Z \setminus \{0\} & \text{for } \alpha \in R_0 \cap R_1 , \\ \inp{\alpha}{\lambda}, \inp{\alpha}{\lambda} + k_{u,\alpha} & \notin & \pi i \Z \setminus \{0\} & \text{for } \alpha \in R_0 \setminus R_1 . \end{array} \end{equation} Then $\Phi_u$ restricts to an isomorphism of $C^{an}(\exp_u (V))^{W'}$-algebras \begin{equation*} \Phi_u : \mc H^{an} (\exp_u (V)) \rtimes \Gamma \to \mh H (\tilde{\mc R}_u, k_u)^{an}(V) \rtimes \Gamma . \end{equation*} } \end{thm} \emph{Proof.} (a) This is clear; it serves mainly to formulate (b).\\ (b) The case $\Gamma = \{ \textup{id} \}$ is essentially \cite[Theorem 9.3]{Lus-Gr}. The difference is that our conditions on $\lambda$ replace the conditions \cite[9.1]{Lus-Gr}. The general case follows easily under the assumption that $\Gamma$ fixes $u. \qquad \Box$ \\[2mm] Given $\mf t' \subset \mf t$ we denote by $\mr{Mod}_{f,\mf t'}(\mh H (\tilde {\mc R},k) \rtimes \Gamma)$ the category of finite dimensional $\mh H (\tilde {\mc R},k) \rtimes \Gamma$-modules all of whose $\mc O (\mf t)$-weights lie in $\mf t'$. 
\begin{cor}\label{cor:2.3} For $u c \in T_{un} T_{rs}$ the following categories are equivalent: \enuma{ \item $\mr{Mod}_{f,W' uc} (\mc H (\mc R,q) \rtimes \Gamma)$ and $\mr{Mod}_{f,(W(R_u) \rtimes W'_{F_u,u}) \log (c)} (\mh H (\tilde{\mc R}_u, k_u) \rtimes W'_{F_u,u})$, \item $\mr{Mod}_{f,W' u T_{rs}} (\mc H (\mc R,q) \rtimes \Gamma)$ and $\mr{Mod}_{f,\mf a} (\mh H (\tilde{\mc R}_u, k_u) \rtimes W'_{F_u,u})$. } These equivalences are compatible with parabolic induction. \end{cor} \emph{Proof.} (a) follows from Theorems \ref{thm:2.1}.b and \ref{thm:2.2}.b. Notice that the conditions \eqref{eq:condLambda} are automatically satisfied because $q$ is positive and $\log (c) \in \mf a$, so $k_{u,\alpha} \in \R$ and $\inp{\alpha}{\log (c)} \in \R$. If we sum that equivalence over all $W' c \in T_{rs}/W'$, we find (b). By \cite[Theorem 6.2]{BaMo2} or \cite[Proposition 5.3.a]{SolGHA} these equivalences of categories are compatible with parabolic induction. $\qquad \Box$ \\[2mm] \section{The Langlands classification} In this section we discuss Langlands' classification of irreducible representations. Basically it reduces from general representations to tempered ones, and from there to the discrete series. Actually Langlands proved this only in the setting of real reductive groups, but it holds just as well for $p$-adic reductive groups, affine Hecke algebras and graded Hecke algebras. We will only write the results for affine Hecke algebras; the graded Hecke algebra case is completely analogous and can be found in \cite{Eve,KrRa,SolGHA}. An important tool to study $\mc H$-representations is restriction to the commutative subalgebra $\mc A \cong \mc O (T)$. We say that $t \in T$ is a weight of $(\pi,V)$ if there exists a $v \in V \setminus \{ 0 \}$ such that $\pi (a) v = a(t) v$ for all $a \in \mc A$. It is easy to describe how the collection of $\mc A$-weights behaves under parabolic induction. 
Recall that \begin{equation} W^P := \{ w \in W_0 : w(P) \subset R_0^+ \} \end{equation} is the set of minimal length representatives of $W_0 / W (R_P)$. \begin{lem}\label{lem:2.12} Let $\Gamma'_P$ be a subgroup of $\Gamma_P$ and let $\sigma$ be a representation of $\mc H^P \rtimes \Gamma'_P$. The $\mc A$-weights of $\mr{Ind}_{\mc H^P \rtimes \Gamma'_P}^{\mc H \rtimes \Gamma} (\sigma)$ are the elements $\gamma w (t) \in T$, where $\gamma \in \Gamma, w \in W^P$ and $t$ is an $\mc A$-weight of $\sigma$. \end{lem} \emph{Proof.} From \cite[Proposition 4.20]{Opd-Sp} and its proof we see that this holds in the case $\Gamma = \Gamma'_P = \{ \mathrm{id} \}$. For the general case we only have to observe that the operation $\pi \mapsto \pi \circ \psi_\gamma^{-1}$ on $\mc H$-representations has the effect $t \mapsto \gamma (t)$ on all $\mc A$-weights $t. \qquad \Box$ \\[2mm] Temperedness of a representation is defined via its $\mc A$-weights. Given $P \subseteq F_0$, we have the following positive cones in $\mf a$ and in $T_{rs}$: \begin{equation} \begin{array}{lll@{\qquad}lll} \mf a^+ & = & \{ \mu \in \mf a : \inp{\alpha}{\mu} \geq 0 \: \forall \alpha \in F_0 \} , & T^+ & = & \exp (\mf a^+) , \\ \mf a_P^+ & = & \{ \mu \in \mf a_P : \inp{\alpha}{\mu} \geq 0 \: \forall \alpha \in P \} , & T_P^+ & = & \exp (\mf a_P^+) , \\ \mf a^{P+} & = & \{ \mu \in \mf a^P : \inp{\alpha}{\mu} \geq 0 \: \forall \alpha \in F_0 \setminus P \} , & T^{P+} & = & \exp (\mf a^{P+}) , \\ \mf a^{P++} & = & \{ \mu \in \mf a^P : \inp{\alpha}{\mu} > 0 \; \forall \alpha \in F_0 \setminus P \} , & T^{P++} & = & \exp (\mf a^{P++}) . \end{array} \end{equation} The antidual of $\mf a^{*+} := \{ x \in \mf a^* : \inp{x}{\alpha^\vee} \geq 0 \: \forall \alpha \in F_0 \}$ is \begin{equation} \mf a^- = \{ \lambda \in \mf a : \inp{x}{\lambda} \leq 0 \: \forall x \in \mf a^{*+} \} = \big\{ \sum\nolimits_{\alpha \in F_0} \lambda_\alpha \alpha^\vee : \lambda_\alpha \leq 0 \big\} . 
\end{equation} Similarly we define \begin{equation} \mf a_P^- = \big\{ \sum\nolimits_{\alpha \in P} \lambda_\alpha \alpha^\vee \in \mf a_P : \lambda_\alpha \leq 0 \big\} . \end{equation} The interior $\mf a^{--}$ of $\mf a^-$ equals $\big\{ {\ts \sum_{\alpha \in F_0}} \lambda_\alpha \alpha^\vee : \lambda_\alpha < 0 \big\}$ if $F_0$ spans $\mf a^*$, and is empty otherwise. We write $T^- = \exp (\mf a^-)$ and $T^{--} = \exp (\mf a^{--})$. Let $t = |t| \cdot t |t|^{-1} \in T_{rs} \times T_{un}$ be the polar decomposition of $t$. An $\mc H$-representation is called tempered if $|t| \in T^-$ for all its $\mc A$-weights $t$, and anti-tempered if $|t|^{-1} \in T^-$ for all such $t$. For infinite dimensional representations this is not entirely satisfactory, but we postpone a more detailed discussion to Section \ref{sec:Schwartz}. Since all irreducible $\mc H$-representations have finite dimension, this vagueness does not cause any problems. Notice that our definition mimics Harish-Chandra's definition of admissible smooth tempered representations of reductive $p$-adic groups \cite[Section III.2]{Wal}. In that setting the crucial condition says that all exponents of such a representation must lie in a certain cone. More restrictively we say that an irreducible $\mc H$-representation belongs to the discrete series (or simply: is discrete series) if $|t| \in T^{--}$ for all its $\mc A$-weights $t$. In particular the discrete series is empty if $F_0$ does not span $\mf a^*$. This is the analogue of Casselman's criterion for square integrable representations of semisimple $p$-adic groups \cite[Theorem 4.4.6]{Cas}. The notions tempered and discrete series apply equally well to $\mc H \rtimes \Gamma$, since that algebra contains $\mc A$ and the action of $\Gamma$ on $T$ preserves $T^-$. 
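To make these cones explicit in rank one, suppose that $R_0$ is of type $A_1$ with $F_0 = \{ \alpha \}$ and $\mf a = \R \alpha^\vee$. Then \[ \mf a^- = \{ \lambda \alpha^\vee : \lambda \leq 0 \} \quad \text{and} \quad \mf a^{--} = \{ \lambda \alpha^\vee : \lambda < 0 \} , \] so a finite dimensional $\mc H$-representation is tempered if and only if every $\mc A$-weight $t$ satisfies $|t| = \exp (\lambda_t \alpha^\vee)$ with $\lambda_t \leq 0$, and an irreducible representation is discrete series if and only if all these $\lambda_t$ are strictly negative. 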
It follows more or less directly from the definitions that the correspondence of Theorem \ref{thm:2.2} preserves temperedness and provides a bijection between discrete series representations with the appropriate central characters, see \cite[(2.11)]{Slo}. It is easy to detect temperedness for $\mc H (\mc R ,1) \rtimes \Gamma = \C [X \rtimes W_0 \rtimes \Gamma] = \C [X \rtimes W']$. \begin{lem}\label{lem:2.9} A finite dimensional $\C [X \rtimes W']$-representation is tempered if and only if all its $\mc A$-weights lie in $T_{un}$. This algebra has no discrete series representations, unless $X = 0$. \end{lem} \emph{Proof.} Suppose that $V$ is a representation of this algebra, and that $t \in T$ is an $\mc A$-weight with weight space $V_t$. For every $g \in W' ,\: g V_t = V_{g(t)}$ is the $g(t)$-weight space of $V$, which shows that every element of the orbit $W' t$ is an $\mc A$-weight of $V$. But $W' |t|$ can only be contained in $T^-$ if it equals the single element $1 \in T_{rs}$. Hence $V$ can only be tempered if $|t| = 1$ for all its weights, or equivalently if all its weights lie in $T_{un}$. By definition the latter condition also suffices for temperedness. Unless $X = 0$, the condition $|t| =1$ implies $|t| \not\in T^{--}$, so $\C [X \rtimes W']$ has no discrete series representations. $\qquad \Box$ \\[2mm] The Langlands classification looks at parabolic subalgebras of $\mc H$ and irreducible representations of those that are essentially tempered. We will describe such representations with two data: a tempered representation and a ``complementary'' part of the central character. This is justified by the following result. \begin{lem}\label{lem:2.11} Let $P \subset F_0, t_P \in T_P$ and $t^P \in T^P$. 
\enuma{ \item The map $\sigma \mapsto \sigma \circ \phi_{t^P}$ defines an equivalence between the categories of $\mc H_P$-representations with central character $W (R_P) t_P \in T_P / W (R_P)$ and of $\mc H^P$-representations with central character $W (R_P) t_P t^P \in T / W (R_P)$. \item Every irreducible $\mc H^P$-representation is of the form $\sigma \circ \phi_{t^P}$, where $\sigma$ is an irreducible $\mc H_P$-representation and $t^P \in T^P$. Both these data are unique modulo twists coming from $K_P = T^P \cap T_P$, as in \eqref{eq:twistKP}. } \end{lem} \emph{Proof.} (a) The kernel of $\phi_{t^P}$ followed by the quotient map $\mc H^P \to \mc H_P$ is generated (as an ideal) by $\{ \theta_x - t^P (x) : x \in X \cap (P^\vee)^\perp \}$. If $\rho$ is an $\mc H^P$-representation with central character $W (R_P) t_P t^P$, then the kernel of $\rho$ clearly contains these generators, so $\rho$ factors via $\phi_{t^P}$ and this quotient map. \\ (b) Let $\rho$ be an irreducible $\mc H^P$-representation with central character $W (R_P) t \in T /W (R_P)$. Decompose $t = t_P t^P \in T_P T^P$. Then part (a) yields a unique irreducible $\mc H_P$-representation $\sigma$ such that $\rho = \sigma \circ \phi_{t^P}$. The only freedom in this construction comes from elements $u \in K_P$. If we replace $t^P$ by $u t^P$, then part (a) again gives a unique $\sigma'$ with $\rho = \sigma' \circ \phi_{u t^P}$, and it follows directly that $\sigma' \circ \psi_u = \sigma. \qquad \Box$ \\[2mm] A Langlands datum for $\mc H$ is a triple $(P,\sigma,t)$ such that \begin{itemize} \item $P \subseteq F_0$ and $\sigma$ is an irreducible tempered $\mc H_P$-representation; \item $t \in T^P$ and $|t| \in T^{P++}$. \end{itemize} We say that two Langlands data $(P,\sigma,t)$ and $(P',\sigma',t')$ are equivalent if $P = P'$ and the $\mc H^P$-representations $\sigma \circ \phi_t$ and $\sigma' \circ \phi_{t'}$ are equivalent. 
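The extreme case $P = \emptyset$ may clarify these definitions. Then $\mc H_\emptyset = \C$ and $\mc H^\emptyset = \mc A$ (with the conventions for parabolic subalgebras used here), $\sigma$ is necessarily the one-dimensional representation of $\C$, and a Langlands datum amounts to a point $t \in T$ with $\inp{\alpha}{\log |t|} > 0$ for all $\alpha \in F_0$. The associated module $\pi (\emptyset ,\sigma ,t) = \mr{Ind}_{\mc A}^{\mc H} (\C_t)$ is a principal series representation, and the theorem below asserts in particular that it has a unique irreducible quotient. 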
\begin{thm} \label{thm:2.4} \textup{(Langlands classification)} \enuma{ \item For every Langlands datum $(P,\sigma,t)$ the $\mc H$-representation $\pi (P,\sigma,t) = \mr{Ind}_{\mc H^P}^{\mc H} (\sigma \circ \phi_t)$ has a unique irreducible quotient $L(P,\sigma,t)$. \item For every irreducible $\mc H$-representation $\pi$ there exists a Langlands datum $(P,\sigma,t)$, unique up to equivalence, such that $\pi \cong L (P,\sigma,t)$. } \end{thm} \emph{Proof.} The author learned this result from a preliminary version of \cite{DeOp2}, but unfortunately Delorme and Opdam did not include it in the final version. Yet the proof in the setting of affine Hecke algebras is much easier than for reductive groups. It is basically the same as the proof of Evens \cite{Eve} for graded Hecke algebras, see also \cite[Section 2.4]{KrRa}. For later use we rephrase some parts of that proof in our notation.\\ (a) The dominance ordering on $\mf a$ is defined by \begin{equation}\label{eq:domorder} \lambda \leq \mu \text{ if and only if } \inp{\lambda}{\alpha} \leq \inp{\mu}{\alpha} \text{ for all } \alpha \in F_0 . \end{equation} For $\alpha \in F_0$ we define $\delta_\alpha \in \mf a_{F_0}$ by \[ \inp{\beta}{\delta_\alpha} = \left\{ \begin{array}{lll} 1 & \mr{if} & \alpha = \beta \\ 0 & \mr{if} & \alpha \neq \beta \in F_0 \,. \end{array} \right. \] According to Langlands \cite[Lemma 4.4]{Lan}, for every $\lambda \in \mf a$ there is a unique subset $F (\lambda ) \subset F_0$ such that $\lambda$ can be written as \begin{equation}\label{eq:LanglandsF} \lambda = \lambda^{F_0} + \sum_{\alpha \in F_0 \setminus F (\lambda )} c_\alpha \delta_\alpha + \sum_{\alpha \in F (\lambda )} d_\alpha \alpha^\vee \quad \mr{with} \; \lambda^{F_0} \in \mf a^{F_0} , c_\alpha > 0, d_\alpha \leq 0 . \end{equation} We put $\lambda_0 = \sum_{\alpha \in F_0 \setminus F (\lambda )} c_\alpha \delta_\alpha \in \mf a^+$. 
According to \cite[(2.13)]{KrRa} \begin{equation}\label{eq:2.3} (w \mu)_0 < \mu_0 \text{ for all } \mu \in \mf a_P^- \oplus \mf a^{P++}, w \in W^P \setminus \{1\} . \end{equation} By the definition of a Langlands datum $\log |s| \in \mf a_P^- \oplus \mf a^{P++}$ for every $\mc A$-weight $s$ of $\sigma \circ \phi_t$. Choose $s$ such that $(\log |s|)_0$ is maximal with respect to the dominance order. By Lemma \ref{lem:2.12} and \eqref{eq:2.3} $(\log |s|)_0$ is also maximal for $s$ regarded as an $\mc A$-weight of $\pi (P,\sigma,t)$. Suppose that $\rho$ is an $\mc H$-submodule of $\pi (P,\sigma,t)$ of which $s$ is an $\mc A$-weight. By the maximality of $s$, $\rho$ must contain the $s$-weight space of $\sigma \circ \phi_t$. The irreducibility of $\sigma$ implies that $\rho$ contains the $\mc H^P$-submodule $1 \otimes_{\mc H^P} V_\sigma \subset \mr{Ind}_{\mc H^P}^{\mc H}(\sigma \circ \phi_t)$, and therefore $\rho = \pi (P,\sigma,t)$. Thus the sum of all proper submodules is again proper, which means that $\pi (P,\sigma,t)$ has a unique maximal submodule and a unique irreducible quotient.\\ (b) Let $s$ be an $\mc A$-weight of $\pi$ such that $(\log |s|)_0 \in \mf a$ is maximal and put $P = F (\log |s|)$. Let $\rho$ be the $\mc H^P$-subrepresentation of $\pi$ generated by the $s$-weight space. Then $\log |s| \in \mf a_P^- \oplus \mf a^{P++}$ and according to \cite[p. 38]{KrRa} $(\log |s'|)_0 = (\log |s|)_0$ for all $\mc A$-weights $s'$ of $\rho$. By Lemma \ref{lem:2.11} we can write every irreducible $\mc H^P$-subrepresentation of $\rho$ as $\sigma \circ \phi_t$, where $\sigma$ is an irreducible $\mc H_P$-representation and $\log |t| = (\log |s|)_0$. The $\mc A_P$-weights of $\sigma$ are of the form $s' t^{-1}$ and by construction \[ \log |s' t^{-1}| = \log |s'| - (\log |s'|)_0 \in \mf a_P^-, \] so $\sigma$ is tempered. The inclusion map $\sigma \circ \phi_t \to \pi$ induces a nonzero $\mc H$-homomorphism $\pi (P,\sigma,t) \to \pi$.
Since $\pi$ is irreducible, this map is surjective. Together with part (a) this shows that $\pi$ is the unique irreducible quotient of $\pi (P,\sigma,t)$.\\ The proof that $(P,\sigma \circ \phi_t)$ is uniquely determined by $\pi$ is easy, and exactly the same as in the graded Hecke algebra setting; see \cite[Theorem 2.1.iii]{Eve} or \cite[Theorem 2.4.b]{KrRa}. $\qquad \Box$ \\[2mm] Theorem \ref{thm:2.4} can be regarded as the analogue of the Langlands classification for connected reductive $p$-adic groups. For disconnected reductive groups the classification is no longer valid as such; it has to be modified. In the case that the component group is abelian, this is worked out in \cite{BaJa1}, via reduction to cyclic component groups. We work with a diagram automorphism group $\Gamma$ which is more general than a component group and does not have to be abelian. For use in Section \ref{sec:Springer} we have to extend Theorem \ref{thm:2.4} to this setting. There is a natural action of $\Gamma$ on Langlands data, by \begin{equation}\label{eq:gammaLanglands} \gamma (P,\sigma,t) = (\gamma (P),\sigma \circ \psi_\gamma^{-1}, \gamma (t)) . \end{equation} Every Langlands datum yields a packet of irreducible quotients, and all data in one $\Gamma$-orbit lead to the same packet. For $\gamma \in \Gamma_P$ the Langlands classification for $\mc H^P$ shows that the irreducible $\mc H^P$-representations $\sigma \circ \psi_\gamma \circ \phi_{\gamma (t)}$ and $\sigma \circ \phi_t$ are equivalent if and only if $\gamma (P,\sigma,t) = (P,\sigma,t)$. To get a more precise statement one needs Clifford theory, as for example in \cite{RaRa} or \cite[Section 53]{CuRe1}. Let $\Gamma_{P,\sigma,t}$ be the isotropy group of the Langlands datum $(P,\sigma,t)$. In \cite[Appendix A]{SolGHA} a 2-cocycle $\kappa$ of $\Gamma_{P,\sigma,t}$ is constructed, giving rise to a twisted group algebra $\C [\Gamma_{P,\sigma,t} ,\kappa]$.
We define a Langlands datum for $\mc H \rtimes \Gamma$ as a quadruple $(P,\sigma,t,\rho)$, where \begin{itemize} \item $(P,\sigma,t)$ is a Langlands datum for $\mc H$; \item $\rho$ is an irreducible representation of $\C [\Gamma_{P,\sigma,t} ,\kappa]$. \end{itemize} The action \eqref{eq:gammaLanglands} extends naturally to Langlands data for $\mc H \rtimes \Gamma$, since $\psi_\gamma^{-1}$ induces an isomorphism between the relevant twisted group algebras. From such a Langlands datum we can construct the $\mc H_P \rtimes \Gamma_{P,\sigma,t}$-representation $\sigma \otimes \rho$ and the $\mc H \rtimes \Gamma$-representation \begin{equation} \pi^\Gamma (P,\sigma,t,\rho) := \mr{Ind}_{\mc H^P \rtimes \Gamma_{P,\sigma,t}}^{\mc H \rtimes \Gamma} \big( (\sigma \circ \phi_t) \otimes \rho \big) = \mr{Ind}_{\mc H^P \rtimes \Gamma_{P,\sigma,t}}^{ \mc H \rtimes \Gamma} \big( (\sigma \otimes \rho) \circ \phi_t \big) . \end{equation} If $Q \supset P$, then $(P,\sigma,t,\rho)$ can also be considered as a Langlands datum for $\mc H^Q \rtimes \Gamma_Q$, and we denote the corresponding $\mc H^Q \rtimes \Gamma_Q$-representation by $\pi^{Q,\Gamma_Q} (P,\sigma,t,\rho)$. In particular $\pi^{P,\Gamma_P} (P,\sigma,t,\rho)$ is an irreducible $\mc H^P \rtimes \Gamma_P$-representation. \begin{cor} \label{cor:2.8} \textup{(extended Langlands classification)} \enuma{ \item The $\mc H \rtimes \Gamma$-representation $\pi^\Gamma (P,\sigma,t,\rho)$ has a unique irreducible quotient $L^\Gamma(P,\sigma,t,\rho)$. \item For every irreducible $\mc H \rtimes \Gamma$-representation $\pi$ there exists a Langlands datum $(P,\sigma,t,\rho)$, unique modulo the action of $\Gamma$, such that $\pi \cong L^\Gamma (P,\sigma,t,\rho)$. \item $L^\Gamma(P,\sigma,t,\rho)$ and $\pi^\Gamma (P,\sigma,t,\rho)$ are tempered if and only if $P = F_0$ and $t \in T_{un}^{F_0}$. 
} \end{cor} \emph{Proof.} (a) and (b) By \cite[Theorem A.1]{SolGHA} the $\mc H \rtimes \Gamma$-representation \begin{equation}\label{eq:Lrho} \mr{Ind}_{\mc H \rtimes \Gamma_{P,\sigma,t}}^{\mc H \rtimes \Gamma} (L(P,\sigma,t) \otimes \rho) \end{equation} is irreducible, and every irreducible $\mc H \rtimes \Gamma$-representation is of this form, for a Langlands datum which is unique modulo $\Gamma$. By construction \eqref{eq:Lrho} is a quotient of $\pi^\Gamma (P,\sigma,t,\rho)$. It is the unique irreducible quotient by Theorem \ref{thm:2.4}.a and because $\rho$ is irreducible. \\ (c) If $P \subsetneq F_0$, then $L^\Gamma (P,\sigma,t,\rho)$ and $\pi^\Gamma (P,\sigma,t,\rho)$ are never tempered. Indeed $|t| \not\in T^-$, so $|r t| \not\in T^-$ for any $\mc A_P$-weight $r$ of $\sigma$. But the construction of $L(P,\sigma,t)$ in the proof of Theorem \ref{thm:2.4}.a is precisely such that the $\mc A$-weight $rt$ of $\pi(P,\sigma,t)$ survives to the Langlands quotient. Since the group $\Gamma$ preserves $T^-$, its presence does not affect temperedness. Now assume that $P = F_0$. Since $T^{F_0 ++} \subset T^{F_0}_{rs}$ and $T^- \cap T^{F_0}_{rs} = \{1\}$, this representation can only be tempered if $|t| =1$. In that case $\sigma$ and $\pi^\Gamma(P,\sigma,t,\rho)$ have the same absolute values of $\mc A$-weights, modulo $\Gamma$. But $\Gamma T^- = T^-$, so the temperedness of $\pi^\Gamma(P,\sigma,t,\rho)$ and $L^\Gamma(P,\sigma,t,\rho)$ follows from that of $\sigma. \qquad \Box$ \\[2mm] For connected reductive $p$-adic groups the Langlands quotient always appears with multiplicity one in the standard representation of which it is a quotient. Although not stated explicitly in most sources, that is already part of the proof; see \cite{Kon} or \cite[Theorem 2.15]{SolPadic}. This also holds for reductive $p$-adic groups with a cyclic component group \cite{BaJa2}. Closer examination of the proof of Theorem \ref{thm:2.4} allows us to generalize and improve upon this in our setting.
Let $W (R_P) r_\sigma \in T_P / W (R_P)$ be the central character of $\sigma$. Then $|r_\sigma| \in T_{P,rs} = \exp (\mf a_P)$, so we can define \begin{equation}\label{eq:ccdelta} cc_P (\sigma) := W (R_P) \log |r_\sigma| \in \mf a_P / W (R_P) . \end{equation} Since the inner product on $\mf a$ is $W'$-invariant, the number $\norm{cc_P (\sigma)}$ is well-defined. \begin{lem}\label{lem:2.10} Let $(P,\sigma,t,\rho)$ and $(P,\sigma',t,\rho')$ be Langlands data for $\mc H \rtimes \Gamma$. \enuma{ \item The functor $\mr{Ind}_{\mc H^P \rtimes \Gamma_P}^{\mc H \rtimes \Gamma}$ induces an isomorphism \begin{multline*} \mr{Hom}_{\mc H^P \rtimes \Gamma_P} (\pi^{P,\Gamma_P} (P,\sigma,t,\rho), \pi^{P,\Gamma_P} (P,\sigma',t,\rho')) \cong \\ \mr{Hom}_{\mc H \rtimes \Gamma} (\pi^\Gamma (P,\sigma,t,\rho), \pi^\Gamma (P,\sigma',t,\rho')) . \end{multline*} These spaces are one-dimensional if $(\sigma,t,\rho)$ and $(\sigma',t,\rho')$ are $\Gamma_P$-conjugate, and zero otherwise. \item Suppose that $L^\Gamma (Q,\tau,s,\nu)$ is a constituent of $\pi^\Gamma (P,\sigma,t,\rho)$, but not $L^\Gamma (P,\sigma,t,\rho)$. Then $P \subset Q$ and $\norm{cc_P (\sigma)} < \norm{cc_Q (\tau)}$. } \end{lem} \emph{Proof.} (a) We use the notation from \eqref{eq:LanglandsF}. For any weight $s$ of $\sigma \circ \phi_t$ we have $\log |s t^{-1}| \in \mf a_P^-$ and $(\log |s| )_0 = (\log |t|)_{F_0}$, where the subscript $F_0$ refers to the decomposition of elements of $\mf t$ with respect to $\mf t = \mf t_{F_0} \oplus \mf t^{F_0}$. Let $s'$ be a weight of $\sigma' \circ \phi_t$. By \eqref{eq:2.3} \begin{equation} (w \log |s'| )_0 < (\log |s'| )_0 = (\log |t|)_{F_0} \qquad \forall w \in W^P \setminus \{ \mathrm{id} \} , \end{equation} with respect to the dominance order on $\mf a_{F_0}^*$. 
Since $(\gamma \lambda)_0 = \gamma (\lambda_0)$ for all $\lambda \in \mf a$ and $\gamma \in \Gamma$, we get \begin{equation}\label{eq:2.4} \norm{(\gamma w \log |s'| )_0} < \norm{(\log |t|)_{F_0}} \qquad \forall \gamma \in \Gamma, w \in W^P \setminus \{ \mathrm{id} \} . \end{equation} In particular $\gamma w (s')$ with $w \in W^P$ can only equal the weight $s$ of $\sigma \circ \phi_t$ if $w = 1$. Let $v_s \in V_{\sigma \otimes \rho}$ be a nonzero weight vector. Since $\pi^{P,\Gamma_P} (P,\sigma,t,\rho)$ is an irreducible $\mc H^P \rtimes \Gamma_P$-representation, $1 \otimes v_s \in (\mc H \rtimes \Gamma) \otimes_{\mc H^P \rtimes \Gamma_{P,\sigma,t}} V_{\sigma \otimes \rho}$ is cyclic for $\pi^\Gamma (P,\sigma ,t,\rho )$. Therefore the map \begin{equation}\label{eq:2.5} \mr{Hom}_{\mc H \rtimes \Gamma} (\pi^\Gamma (P,\sigma,t,\rho) ,\pi^\Gamma (P,\sigma',t,\rho')) \to \pi^\Gamma (P,\sigma',t,\rho') : f \mapsto f (1 \otimes v_s ) \end{equation} is injective. By \eqref{eq:2.4} the $s$-weight space of $\pi^\Gamma (P,\sigma',t,\rho')$ is contained in $1 \otimes V_{\sigma' \otimes \rho'}$. So $f (1 \otimes v_s ) \in 1 \otimes V_{\sigma' \otimes \rho'}$ and multiplying by $\mc H^P \rtimes \Gamma_P$ yields \[ f (\C [\Gamma_P] \otimes_{\Gamma_{P,\sigma,t}} V_{\sigma \otimes \rho}) \subset \C [\Gamma_P] \otimes_{\Gamma_{P,\sigma',t}} V_{\sigma' \otimes \rho'} . \] Thus any $f \in \mr{Hom}_{\mc H \rtimes \Gamma} (\pi^\Gamma (P,\sigma,t,\rho) ,\pi^\Gamma (P,\sigma',t,\rho'))$ lies in \begin{equation}\label{eq:2.6} \mr{Ind}_{\mc H^P \rtimes \Gamma_P}^{\mc H \rtimes \Gamma} \Big( \mr{Hom}_{\mc H^P \rtimes \Gamma_P} (\pi^{P,\Gamma_P} (P,\sigma,t,\rho), \pi^{P,\Gamma_P} (P,\sigma',t,\rho')) \Big). \end{equation} From \eqref{eq:2.5} we see that this induction functor is injective on homomorphisms. The modules in \eqref{eq:2.6} are irreducible, so the dimension of \eqref{eq:2.6} is zero or one.
By Corollary \ref{cor:2.8}.b it is nonzero if and only if $(\sigma,t,\rho)$ and $(\sigma',t,\rho')$ are $\Gamma_P$-conjugate. \\ (b) The proofs of Theorem \ref{thm:2.4}.a and Corollary \ref{cor:2.8}.a show that $L^\Gamma (P,\sigma,t,\rho)$ is the unique irreducible subquotient of $\pi^\Gamma (P,\sigma,t,\rho)$ which has an $\mc A$-weight $t_L$ with $(\log |t_L| )_0 = (\log |t|)_{F_0}$. Moreover all $\mc A$-weights $s'$ of proper submodules of $\pi^\Gamma (P,\sigma,t,\rho)$ satisfy $(\log |s'| )_0 < (\log |t|)_{F_0}$, with the notation of \eqref{eq:LanglandsF}. In particular, for the subquotient $L^\Gamma (Q, \tau,s,\nu )$ of $\pi^\Gamma (P,\sigma,t,\rho)$ we find that \[ (\log |s| )_{F_0} = (\log |s'| )_0 < (\log |t| )_{F_0} . \] Since $\log |s| \in \mf a^{Q++}$ and $\log |t| \in \mf a^{P++}$, this implies $P \subset Q$ and \begin{equation}\label{eq:2.7} \norm{(\log |s| )_{F_0}} < \norm{(\log |t| )_{F_0}} . \end{equation} According to Lemma \ref{lem:2.12} all constituents of $\pi^\Gamma (P,\sigma,t,\rho)$ have central character $W'( r_\sigma t ) \in T / W'$. The same holds for $(Q,\tau,s,\nu )$, so $r_\sigma t$ and $r_\tau s$ lie in the same $W_0 \rtimes \Gamma$-orbit. Thus also \[ W' (\log |r_\sigma t| )_{F_0} = W' (\log |r_\tau s| )_{F_0} . \] By definition $(\log |t|)_{F_0} \perp \mf t_P$ and $(\log |s|)_{F_0} \perp \mf t_Q$, so \begin{multline} \| \Re (cc_P (\sigma )) \|^2 + \| (\log |t|)_{F_0} \|^2 = \| (\log |r_\sigma t| )_{F_0} \|^2 \\ = \| (\log |r_\tau s| )_{F_0} \|^2 = \| \Re (cc_Q (\tau )) \|^2 + \| (\log |s|)_{F_0} \|^2 . \end{multline} Finally we use \eqref{eq:2.7}. $\qquad \Box$ \\[2mm] \section{An affine Springer correspondence} \label{sec:Springer} Given an algebra or group $A$, let Irr$(A)$ be the collection of (equivalence classes of) irreducible complex $A$-representations.
Let $G_\Z (A) = G_\Z (\mr{Mod}_f (A))$ denote the Grothendieck group of finite length complex representations of $A$, and write $G_{\mathbb F} (A) = G_\Z (A) \otimes_\Z \mathbb F$ for any field $\mathbb F$. The classical Springer correspondence \cite{Spr} realizes all irreducible representations of a finite Weyl group $W_0$ in the top cohomology of the associated flag variety. Kazhdan--Lusztig theory (see \cite{KaLu,Xi}) allows one to interpret this as a bijection between Irr$(W_0)$ and a certain collection of irreducible representations of an affine Hecke algebra with equal parameters. As such, the finite Springer correspondence is a specialisation of an affine Springer correspondence between Irr$(W^e)$ and Irr$(\mc H)$; see \cite[Section 8]{Lus-Rep} and \cite[Theorem 2]{Ree2}. We will extend this result to all extended affine Hecke algebras with unequal (but positive) parameters. For any $\mh H \rtimes \Gamma$-representation $\pi$, let $\pi \big|_{W_0 \rtimes \Gamma} = \pi \big|_{W'}$ be the restriction of $\pi$ to the subalgebra $\C [W'] = \C [W_0 \rtimes \Gamma] \subset \mh H \rtimes \Gamma$. Let $\mr{Irr}_0 (\mh H \rtimes \Gamma)$ be the collection of (equivalence classes of) irreducible tempered $\mh H \rtimes \Gamma$-representations with real central character. In \cite[Theorem 6.5.c]{SolHomGHA} the author proved that the set \begin{equation}\label{eq:Irr0} \{ \pi \big|_{W_0 \rtimes \Gamma} : \pi \in \mr{Irr}_0 (\mh H \rtimes \Gamma) \} \end{equation} is a $\Q$-basis of $G_\Q (W_0 \rtimes \Gamma)$. When $\Gamma$ is trivial, this is a kind of Springer correspondence for finite Weyl groups. The only problem is that $\pi \big|_{W_0}$ may be reducible, but that could be solved (a priori not naturally) by picking a suitable irreducible subrepresentation of $\pi \big|_{W_0}$.
In fact it is known from \cite[Corollary 3.6]{Ciu} that in many cases the matrix that expresses \eqref{eq:Irr0} in terms of irreducible $W_0 \rtimes \Gamma$-representations is unipotent and upper triangular (with respect to a suitable ordering). That would provide a natural Springer correspondence, but unfortunately that improvement is still open in our generality. \begin{thm}\label{thm:2.7} There exists a unique system of maps \[ \Spr : \mr{Irr}(\mc H \rtimes \Gamma) \to \mr{Mod}_f (X \rtimes (W_0 \rtimes \Gamma)) , \] for all extended affine Hecke algebras $\mc H \rtimes \Gamma$, such that: \enuma{ \item The image of $\Spr$ is a $\Q$-basis of $G_\Q (X \rtimes (W_0 \rtimes \Gamma))$. \item $\Spr$ preserves the unitary part of the central character. \item $\Spr (\pi)$ is tempered if and only if $\pi$ is tempered. \item Let $u \in T_{un}$, let $\tilde \pi \in \mr{Irr}_0 (\mh H (\tilde{\mc R}_u ,k_u) \rtimes W'_{F_u ,u})$ and let $\tilde \pi \circ \Phi_u$ be the $\mc H (\mc R_u ,q_u) \rtimes W'_{F_u ,u}$-representation associated to it via Theorem \ref{thm:2.2}.b. Then \[ \Spr \big( \mr{Ind}_{\mc H (\mc R_u ,q_u) \rtimes W'_{F_u ,u}}^{\mc H \rtimes \Gamma} (\tilde \pi \circ \Phi_u) \big) = \mr{Ind}_{X \rtimes W'_u}^{X \rtimes W'} \big( \C_u \otimes \tilde \pi \big|_{W'_u} \big) , \] where $\C_u$ denotes the one-dimensional $X$-representation with character $u$. \item If $(P,\sigma,t,\rho)$ is a Langlands datum for $\mc H \rtimes \Gamma$, then \[ \Spr (L^\Gamma (P,\sigma,t,\rho)) = \mr{Ind}_{X \rtimes (W (R_P) \rtimes \Gamma_{P,\sigma,t})}^{X \rtimes (W_0 \rtimes \Gamma)} (\Spr (\sigma \otimes \rho) \circ \phi_t) . \] } \end{thm} \emph{Proof.} In view of Corollary \ref{cor:2.3}, properties (b) and (d) determine $\Spr$ uniquely for all irreducible tempered representations. A glance at Lemma \ref{lem:2.9} shows that $\Spr$ preserves temperedness. Next Corollary \ref{cor:2.8}.b and property (e) determine $\Spr$ for all irreducible representations. 
By Corollary \ref{cor:2.8} every nontempered irreducible $\mc H \rtimes \Gamma$-representation $\pi$ is of the form $L^\Gamma (P,\sigma,t,\rho)$ for some Langlands datum with $t \not\in T_{un}$. By construction all $\mc A$-weights of $\Spr (\sigma \otimes \rho)$ lie in $T_{un}$, so by property (e) the absolute values of $\mc A$-weights of $\Spr (\pi)$ lie in $W' |t|$. Together with Lemma \ref{lem:2.9} this shows that $\Spr (\pi)$ is not tempered. Now that we have (b)--(e), let us turn to (a). By Corollary \ref{cor:2.3} and the result mentioned in \eqref{eq:Irr0}, (a) holds if we restrict to tempered representations on both sides. The proof that this restriction is unnecessary is more difficult; we postpone it to Section \ref{sec:dual}. \begin{rem}\label{rem:Spr} We will see in Corollary \ref{cor:Sprsigma0} that on tempered representations $\Spr$ is given by composition with an algebra homomorphism between suitable completions of $\C [X \rtimes (W_0 \rtimes \Gamma)]$ and $\mc H \rtimes \Gamma$. That is not possible for all irreducible representations, since sometimes $\Spr$ does not preserve the dimensions of representations. This happens if and only if $\pi^\Gamma (P,\sigma,t,\rho) \neq L^\Gamma (P,\sigma,t,\rho)$ in (e). \\ In view of Lemma \ref{lem:2.10}.b it is possible to replace (e) by the condition (e'): \[ \Spr (\pi^\Gamma (P,\sigma,t,\rho)) = \mr{Ind}_{X \rtimes (W (R_P) \rtimes \Gamma_{P,\sigma,t})}^{X \rtimes (W_0 \rtimes \Gamma)} (\Spr (\sigma \otimes \rho) \circ \phi_t) . \] The resulting map \[ \Spr' : G_\Z (\mc H \rtimes \Gamma) \to G_\Z (X \rtimes (W_0 \rtimes \Gamma)) \] commutes with parabolic induction, but it sends some irreducible $\mc H \rtimes \Gamma$-representations to virtual $W^e \rtimes \Gamma$-representations.\\ Of course there also exists a version of Theorem \ref{thm:2.7} for graded Hecke algebras. It can easily be deduced from the above using Theorem \ref{thm:2.2}.b.
\end{rem} \chapter{Parabolically induced representations} Parabolic induction is a standard tool to create a large supply of interesting representations of reductive groups, Hecke algebras and related objects. In line with Harish-Chandra's philosophy of the cusp form, every irreducible tempered representation of an affine Hecke algebra can be obtained via unitary induction of a discrete series representation of a parabolic subalgebra. With the Langlands classification we can also reach irreducible representations that are not tempered. Hence we consider induction data $\xi = (P,\delta,t)$, where $\delta$ is a discrete series representation of $\mc H_P$ and $t \in T^P$ is an induction parameter. With this we associate a representation $\pi (\xi) = \mr{Ind}_{\mc H^P}^{\mc H} (\delta \circ \phi_t)$. Among these are the principal series representations, which already exhaust the dual space of $\mc H$. But that is not very satisfactory, since a principal series representation can have many irreducible subquotients, and it is not so easy to determine them; see \cite{Ree1}. Instead we are mostly interested in induction data $\xi$ for which $|t|$ is positive (in an appropriate sense) and in irreducible quotients of $\pi (\xi)$, because the Langlands classification applies to these. In Theorem \ref{thm:3.10} we construct, for every irreducible $\mc H$-representation $\rho$, an essentially unique induction datum $\xi^+ (\rho)$ such that $\rho$ is a quotient of $\pi (\xi^+ (\rho ))$. However, in general $\pi (\xi^+ (\rho))$ may have more than one irreducible quotient. Another important theme in this chapter is the intertwining operators between induced representations of the form $\pi (\xi)$. Their definition and most important properties stem from the work of Opdam and Delorme \cite{Opd-Sp,DeOp1}. As in the setting of reductive groups, it is already nontrivial to show that normalized intertwining operators are regular on unitary induced representations.
Under favorable circumstances such intertwining operators span $\mr{Hom}_{\mc H} (\pi (\xi), \pi (\xi'))$. This was already known \cite{DeOp1} for unitary induction data $\xi,\xi'$, in which case $\pi (\xi)$ and $\pi (\xi')$ are tempered representations. We generalize this to pairs of positive induction data (Theorem \ref{thm:3.9}). Crucial in all these considerations is the Schwartz algebra $\mc S$ of $\mc H$, the analogue of the Harish-Chandra--Schwartz algebra of a reductive $p$-adic group. For the geometry of the dual space of $\mc H$ it is important to understand the number $n(\xi)$ of irreducible $\mc H$-representations $\rho$ with $\xi^+ (\rho)$ equivalent to $\xi$. This is governed by a groupoid $\mc G$ that keeps track of all intertwining operators. Indeed, if $t \mapsto \xi_t$ is a continuous path of induction data such that all $\xi_t$ have the same isotropy group in $\mc G$, then $n(\xi_t)$ is constant along this path (Proposition \ref{prop:3.11}). From this we deduce that the dual of $\mc H$ is a kind of complexification of the tempered dual of $\mc H$. As a topological space, the tempered dual is built from certain algebraic subvarieties of compact tori, each with a multiplicity. In this picture the dual of $\mc H$ is built from the corresponding complex subvarieties of complex algebraic tori, with the same multiplicities. This geometric description is used to finish the proof of the affine Springer correspondence from Section \ref{sec:Springer}. \section{Unitary representations and intertwining operators} Like for Lie groups, the classification of the unitary dual of an affine Hecke algebra appears to be considerably more difficult than the classification of the full dual or of the tempered dual. This is an open problem that we will not discuss in this paper, cf. \cite{BaMo2,BaCi}. Nevertheless we will use unitarity arguments, mainly to show that certain representations are completely reducible. 
The algebra $\mc H \rtimes \Gamma$ is endowed with a sesquilinear involution * and a trace $\tau$, defined by \begin{align} \nonumber & (z N_w \gamma)^* = \bar{z} \gamma^{-1} N_{w^{-1}} \qquad z \in \C, w \in W^e, \gamma \in \Gamma , \\ \label{eq:*tau} & \tau (z N_w \gamma) = \left\{ \begin{array}{ll} z & \text{if } \gamma = w = e, \\ 0 & \text{otherwise} . \end{array} \right. \end{align} Since $q$ is real-valued, this * is anti-multiplicative and $\tau$ is positive. These give rise to a Hermitian inner product on $\mc H \rtimes \Gamma$: \begin{equation}\label{eq:inptau} \inp{h}{h'}_\tau = \tau(h^* h') \qquad h,h' \in \mc H \rtimes \Gamma. \end{equation} A short calculation using the multiplication rules \eqref{eq:multrules} shows that the basis $\{ N_w \gamma : w \in W^e ,\gamma \in \Gamma \}$ of $\mc H \rtimes \Gamma$ is orthonormal for this inner product. We note that $\Gamma$ acts on $\mc H$ by *-automorphisms, and that $\mc H, \mc H (W_0,q)$ and $\C [\Gamma]$ are *-subalgebras of $\mc H \rtimes \Gamma$. In general $\mc A$ is not a *-subalgebra of $\mc H$. For $x \in X$ \cite[Proposition 1.12]{Opd-Sp} tells us that \begin{equation}\label{eq:thetax*} \theta_x^* = N_{w_0} \theta_{-w_0 (x)} N_{w_0}^{-1} , \end{equation} where $w_0$ is the longest element of the Coxeter group $W_0$. Let $\Gamma'_P$ be a subgroup of $\Gamma_P$ and let $(\sigma ,V_\sigma)$ be an $\mc H^P \rtimes \Gamma'_P$-representation on an inner product space. By default we will endow the vector space \[ \mc H \rtimes \Gamma \otimes_{\mc H^P \rtimes \Gamma'_P} V_\sigma \cong \C [\Gamma W^P] \otimes_{\C [\Gamma'_P]} V_\sigma \] with the inner product \begin{equation}\label{eq:inpind} \inp{h \otimes v}{h' \otimes v'} = \tau (h^* h') \inp{v}{v'} \qquad h,h' \in \C [\Gamma W^P] , v,v' \in V_\sigma . \end{equation} Recall that a representation $\pi$ of $\mc H \rtimes \Gamma$ on a Hilbert space is unitary if $\pi (h^*) = \pi (h)^*$ for all $h \in \mc H \rtimes \Gamma$.
In particular, such representations are completely reducible. \begin{lem}\label{lem:3.1} Let $\Gamma'_P$ be a subgroup of $\Gamma_P$, let $\sigma$ be a finite dimensional $\mc H_P \rtimes \Gamma'_P$-representation and let $t \in T^P$. \enuma{ \item If $\sigma$ is unitary and $t \in T^P_{un}$, then $\mr{Ind}_{\mc H^P \rtimes \Gamma'_P}^{\mc H \rtimes \Gamma} (\sigma \circ \phi_t)$ is unitary with respect to the inner product \eqref{eq:inpind}. \item $\mr{Ind}_{\mc H^P \rtimes \Gamma'_P}^{\mc H \rtimes \Gamma} (\sigma \circ \phi_t)$ is (anti-)tempered if and only if $\sigma$ is (anti-)tempered and $t \in T^P_{un}$. } \end{lem} \emph{Proof.} Since $\Gamma$ acts by *-algebra automorphisms and $\Gamma \cdot T^- = T^-$, its presence does not affect unitarity or temperedness. Hence it suffices to prove the lemma in the case $\Gamma = \Gamma'_P = \{ \mathrm{id} \}$. Then (a) and the ``if''-part of (b) are \cite[Propositions 4.19 and 4.20]{Opd-Sp}. For the ``only if''-part of (b), suppose that $t \in T^P \setminus T^P_{un}$. Since $X \cap (P^\vee)^\perp$ is of finite index in $X^P = X / (X \cap \Q P)$, there exists $x \in X \cap (P^\vee)^\perp$ with $|t(x)| \neq 1$. Possibly replacing $x$ by $-x$, we may assume that $|t(x)| > 1$. But $\sigma (\phi_t (\theta_x)) (v) = t(x) v$ for all $v \in V_\sigma$ and $x \in Z( X \rtimes W_P)$, so the $\mc H^P$-representation $\sigma \circ \phi_t$ is not tempered. Hence its induction to $\mc H$ cannot be tempered. Similarly, if $\sigma$ is not tempered, then the restriction of $\sigma \circ \phi_t$ to $\mc H (X \cap \Q P, R_P, Y / Y \cap P^\perp, R_P^\vee, P,q_P)$ is not tempered. The same proof works in the anti-tempered case; we only have to replace $|t(x)| > 1$ by $|t(x)| < 1.
\qquad \Box$ \\[2mm] \emph{Remark.} It is possible that $\mr{Ind}_{\mc H^P \rtimes \Gamma'_P}^{\mc H \rtimes \Gamma} (\sigma \circ \phi_t)$ is unitary with respect to some inner product other than \eqref{eq:inpind}, if the conditions of part (a) are not met.\\[2mm] We intend to partition Irr$(\mc H \rtimes \Gamma)$ into finite packets, each of which is obtained by inducing a discrete series representation of a parabolic subalgebra of $\mc H$. Thus our induction data are triples $(P,\delta,t)$, where \begin{itemize} \item $P \subset F_0$; \item $(\delta ,V_\delta)$ is a discrete series representation of $\mc H_P$; \item $t \in T^P$. \end{itemize} Let $\Xi$ be the space of such induction data, where we regard $\delta$ only modulo equivalence of $\mc H_P$-representations. We say that $\xi = (P,\delta,t)$ is unitary if $t \in T^P_{un}$, and we denote the space of unitary induction data by $\Xi_{un}$. Similarly we say that $\xi$ is positive if $|t| \in T^{P+}$, which we write as $\xi \in \Xi^+$. Notice that, in contrast to Langlands data, we do not require $|t|$ to be strictly positive. We have three collections of induction data: \begin{equation}\label{eq:inductionData} \Xi_{un} \subseteq \Xi^+ \subseteq \Xi . \end{equation} By default we endow these spaces with the topology for which $P$ and $\delta$ are discrete variables and $T^P$ carries its natural analytic topology. We will realize every irreducible $\mc H \rtimes \Gamma$-representation as a quotient of a well-chosen induced representation \[ \pi^\Gamma (\xi) := \mr{Ind}_{\mc H}^{\mc H \rtimes \Gamma} \pi (P,\delta,t) = \mr{Ind}_{\mc H^P}^{\mc H \rtimes \Gamma} (\delta \circ \phi_t ) . \] We note that for all $s \in T^{W_0 \rtimes \Gamma}$: \begin{equation} \pi^\Gamma (P,\delta,ts) = \pi^\Gamma (P,\delta,t) \circ \phi_s . \end{equation} As vector space underlying $\pi^\Gamma (\xi)$ we will always take $\C [\Gamma W^P] \otimes V_\delta$. 
This space does not depend on $t$, which will allow us to speak of maps that are continuous, smooth, polynomial or even rational in the parameter $t \in T^P$. The discrete series representations of affine Hecke algebras with irreducible root data were classified in \cite{OpSo2}. Here we recall only how their central characters can be determined, which is related to the singularities of the elements $\imath^0_s$. Consider the $W_0 \rtimes \Gamma$-invariant rational function \[ \eta = \prod\nolimits_{\alpha \in R_0} c_\alpha^{-1} \in \C (T), \] where $c_\alpha$ is as in \eqref{eq:calpha}. Notice that $\eta$ depends on the parameter function $q$, or more precisely on $q^{1/2}$. A coset $L$ of a subtorus of $T$ is said to be residual if the pole order of $\eta$ along $L$ equals $\dim_\C (T) - \dim_\C (L)$; see \cite{Opd4}. A residual coset of dimension 0 is also called a residual point. Such points can exist only if $\mc R$ is semisimple; otherwise all residual cosets have dimension at least $\mr{rank} \, Z(W^e) > 0$. According to \cite[Lemma 3.31]{Opd-Sp} the collection of central characters of discrete series representations of $\mc H (\mc R,q)$ is exactly the set of $W_0$-orbits of residual points for $(\mc R ,q)$. Moreover, if $\delta$ is a discrete series representation of $\mc H_P$ with central character $W (R_P) r$, then $r T^P$ is a residual coset for $(\mc R ,q)$ \cite[Proposition 7.4]{Opd-Sp}. Up to multiplication by an element of $W_0$, every residual coset is of this form \cite[Proposition 7.3.v]{Opd-Sp}. The map that assigns to $\xi \in \Xi$ the central character of $\pi (\xi) \in \mr{Mod}_f (\mc H (\mc R,q))$ is an algebraic morphism $\Xi \to T / W_0$. The above implies that the image of \begin{equation}\label{eq:rescosd} \{ (P,\delta,t) \in \Xi : |F_0 \setminus P| = d \} \end{equation} is the union of the $d$-dimensional residual cosets, modulo $W_0$.
Let $\delta_\emptyset$ be the unique one-dimensional representation of $\mc H_\emptyset = \C$ and consider \[ M(t) := \pi^\Gamma (\emptyset, \delta_\emptyset,t) = \mr{Ind}_{\mc A}^{\mc H \rtimes \Gamma} (\delta_\emptyset \circ \phi_t) \cong \mr{Ind}_{\mc O (T)}^{\mc H \rtimes \Gamma} \C_t , \qquad t \in T. \] The family consisting of these representations is called the principal series of $\mc H \rtimes \Gamma$ and is of considerable interest. For example, by Frobenius reciprocity every irreducible $\mc H \rtimes \Gamma$-representation is a quotient of some principal series representation. \begin{lem}\label{lem:3.13} Suppose that $h \in \mc H \rtimes \Gamma$ and that $M(t,h) = 0$ for all $t$ in some Zariski-dense subset of $T$. Then $h = 0$. \end{lem} \emph{Proof.} Since $M(t,h) \in \mr{End}_\C (\C [\Gamma W_0] )$ depends algebraically on $t$, it is zero for all $t \in T$. Write $h = \sum_{\gamma w \in \Gamma \rtimes W_0} a_{\gamma w} N_{\gamma w}$ with $a_{\gamma w} \in \mc A$ and suppose that $h \neq 0$. Then we can find $w' \in W_0$ such that $a_{\gamma w'} \neq 0$ for some $\gamma \in \Gamma$, and such that $\ell (w')$ is maximal for this property. From Theorem \ref{thm:1.1}.d we see that \[ M(t,h) (N_e) = \sum_{\gamma w \in \Gamma \rtimes W_0} b_{\gamma w} N_{\gamma w} \] for some $b_{\gamma w}$ with $b_{\gamma w'} = a_{\gamma w'} \neq 0$. Therefore $M(t,h)$ is not identically zero. This contradiction shows that the assumption $h \neq 0$ is untenable. $\qquad \Box$ \\[2mm] By \cite[Corollary 2.23]{Opd-Sp} discrete series representations are unitary. (Although Opdam only worked in the setting $\Gamma = \{ \mathrm{id} \}$, his proof also applies for general $\Gamma$.) From this and Lemma \ref{lem:3.1} we observe: \begin{cor}\label{cor:3.2} Let $\xi = (P,\delta,t) \in \Xi$. If $t \in T^P_{un}$, then $\pi^\Gamma (\xi)$ is unitary and tempered. If $t \in T^P \setminus T^P_{un}$, then $\pi^\Gamma (\xi)$ is not tempered.
\end{cor} For any subset $Q \subset F_0$, let $\Xi^Q, \pi^Q, \ldots$ denote the analogues of $\Xi ,\pi, \ldots$ for the algebra $\mc H^Q$ instead of $\mc H$. For $\xi = (P,\delta,t) \in \Xi$ we define \begin{equation} P(\xi) := \{ \alpha \in R_0 : |\alpha (t)| = 1 \} . \end{equation} \begin{prop}\label{prop:3.3} Let $\xi = (P,\delta,t) \in \Xi^+$. \enuma{ \item The $\mc H^{P(\xi)} \rtimes \Gamma_{P(\xi)}$-representation $\pi^{P(\xi) ,\Gamma_{P(\xi)}}(\xi)$ is completely reducible. \item Every irreducible summand of $\pi^{P(\xi) ,\Gamma_{P(\xi)}}(\xi)$ is of the form $\pi^{P(\xi),\Gamma_{P(\xi)}} (P(\xi), \sigma, t^{P(\xi)}, \rho)$, where $(P(\xi), \sigma, t^{P(\xi)}, \rho)$ is a Langlands datum for $\mc H \rtimes \Gamma$ and $t^{P(\xi)} t^{-1} \in T_{P(\xi)}$. \item The irreducible quotients of $\pi^\Gamma (\xi)$ are the representations $L^\Gamma (P(\xi), \sigma, t^{P(\xi)}, \rho)$, with $(P(\xi), \sigma, t^{P(\xi)}, \rho)$ coming from (b). \item Every irreducible $\mc H \rtimes \Gamma$-representation is of the form described in (c). \item The functor $\mr{Ind}_{\mc H^{P(\xi)} \rtimes \Gamma_{P(\xi)}}^{\mc H \rtimes \Gamma}$ induces an isomorphism \[ \mr{End}_{\mc H^{P(\xi)} \rtimes \Gamma_{P(\xi)}} \big( \pi^{P(\xi) ,\Gamma_{P(\xi)}}(\xi) \big) \cong \mr{End}_{\mc H \rtimes \Gamma} (\pi (\xi)) . \] } \end{prop} \emph{Remarks.} Part (a) holds for any $\xi \in \Xi$. In (b) $t^{P(\xi)}$ is uniquely determined modulo $K_{P(\xi)}$. \\ \emph{Proof.} (a) By construction there exists $t^{P(\xi)} \in T^{P(\xi)}$ such that \begin{equation}\label{eq:ttPxi} t (t^{P(\xi)})^{-1} \in T_{P(\xi),un} . \end{equation} Then $\pi^{P(\xi)} (P,\delta,t) \circ \phi_{t^{P(\xi)}}^{-1} = \pi^{P(\xi)} \big( P,\delta,t (t^{P(\xi)})^{-1} \big)$ is unitary by Corollary \ref{cor:3.2}. In particular it is completely reducible, which implies that $\pi^{P(\xi)} (P,\delta,t)$ is also completely reducible.
By \cite[Theorem A.1.c]{SolGHA} \begin{equation}\label{eq:piPxi} \pi^{P(\xi) ,\Gamma_{P(\xi)}}(\xi) = \pi^{P(\xi),\Gamma_{P(\xi)}} (P,\delta,t) = \mr{Ind}_{\mc H^{P(\xi)}}^{\mc H^{P(\xi)} \rtimes \Gamma_{P(\xi)}} \pi^{P(\xi)} (P,\delta,t) \end{equation} remains completely reducible.\\ (b) By Corollary \ref{cor:3.2} $\pi^{P(\xi)} \big( P,\delta,t (t^{P(\xi)})^{-1}\big)$ is tempered and unitary, so by Lemma \ref{lem:2.10} all its irreducible summands are of the form \[ \pi^{P(\xi)} (P(\xi),\sigma,k) = L^{P(\xi)} (P(\xi),\sigma,k) \text{, where } k \in T^{P(\xi)}_{un} . \] Moreover $\pi^{P(\xi)} \big( P,\delta,t (t^{P(\xi)})^{-1}\big) \big|_{\C [X \cap (P(\xi)^\vee)^\perp]}$ consists only of copies of the trivial $X \cap (P(\xi)^\vee)^\perp$-representation, so $k \in K_{P(\xi)} = T^{P(\xi)}_{un} \cap T_{P(\xi),un}$. Together with \eqref{eq:ttPxi} this implies $k t^{P(\xi)} t^{-1} \in T_{P(\xi),un}$. Hence every irreducible summand of \eqref{eq:piPxi} is an irreducible summand of some \[ \mr{Ind}_{\mc H^{P(\xi)}}^{\mc H^{P(\xi)} \rtimes \Gamma_{P(\xi)}} \pi^{P(\xi)} (P(\xi),\sigma,k t^{P(\xi)}) . \] By Clifford theory (see the proof of Corollary \ref{cor:2.8}) these are of the required form $\pi^{P(\xi),\Gamma_{P(\xi)}} (P(\xi),\sigma,k t^{P(\xi)},\rho)$.\\ (c) Follows immediately from (b) and Corollary \ref{cor:2.8}.\\ (d) By Corollary \ref{cor:2.8}.b it suffices to show that every $\mc H^P \rtimes \Gamma_P$-representation of the form $(\sigma \circ \phi_t) \otimes \rho$ is a direct summand of some $\pi^{P,\Gamma_{P,\sigma,t}} (\xi^+)$. Without loss of generality we may assume that $P = F_0$ and that $\Gamma_{P,\sigma,t} = \Gamma$. The $\mc H$-representation $\sigma \circ \phi_{t |t|^{-1}}$ is irreducible and tempered, so by \cite[Theorem 3.22]{DeOp1} it is a direct summand of $\pi (\xi')$ for some $\xi' = (P',\rho',t') \in \Xi_{un}$. Then $(P',\rho',t' |t|) \in \Xi^+$ and $\sigma \circ \phi_t$ is a direct summand of $\pi (P',\rho',t' |t|)$.
By Clifford theory \cite[Theorem A.1.b]{SolGHA} the $\mc H \rtimes \Gamma$-representation $(\sigma \circ \phi_t) \otimes \rho$ is a direct summand of $\pi^\Gamma (P',\rho',t' |t|)$. \\ (e) Follows from (a), (b) and Lemma \ref{lem:2.10}.b. $\qquad \Box$ \\[3mm] The parabolically induced representations $\pi^\Gamma (\xi)$ are by no means all disjoint. The relations among them are described by certain intertwining operators, whose construction we recall from \cite{Opd-Tr,Opd-Sp}. Suppose that $P,Q \subset F_0 , u \in K_P, g \in \Gamma \ltimes W_0$ and $g (P) = Q$. Let $\delta$ and $\sigma$ be discrete series representations of respectively $\mc H_P$ and $\mc H_Q$, such that $\sigma$ is equivalent to $\delta \circ \psi_u^{-1} \circ \psi_g^{-1}$. Choose a unitary map $I_\delta^{g u} : V_\delta \to V_\sigma$ such that \begin{equation}\label{eq:Idelta} I_\delta^{g u} (\delta (h) v) = \sigma (\psi_g \circ \psi_u (h)) (I_\delta^{g u} (v)) \qquad \forall v \in V_\delta, h \in \mc H_P . \end{equation} Notice that any two choices of $I_\delta^{g u}$ differ only by a complex number of norm 1. In particular $I_\delta^{g u}$ is a scalar if $\sigma = \delta$. We obtain a bijection \begin{align} & \nonumber I_{g u} : (\C (T/W_0) \otimes_{Z (\mc H)} \mc H) \rtimes \Gamma \otimes_{\mc H^P} V_\delta \to (\C (T/W_0) \otimes_{Z (\mc H)} \mc H) \rtimes \Gamma \otimes_{\mc H^Q} V_\sigma ,\\ & \label{eq:defintop} I_{g u} (h \otimes v) = h \imath^0_{g^{-1}} \otimes I_\delta^{g u} (v) . \end{align} \begin{thm}\label{thm:3.5} \enuma{ \item The map $I_{g u}$ defines an intertwining operator \[ \pi^\Gamma (g u,P,\delta,t) : \pi^\Gamma (P,\delta,t) \to \pi^\Gamma (Q,\sigma,g (ut)) . \] As a map $\C [\Gamma W^P] \otimes_\C V_\delta \to \C [\Gamma W^Q] \otimes_\C V_\sigma$ it is rational in $t \in T^P$ and constant on $T^{F_0}$-cosets. \item This map is regular and invertible on an open neighborhood of $T^P_{un}$ in $T^P$ (with respect to the analytic topology).
\item $\pi^\Gamma (g u,P,\delta,t)$ is unitary if $t \in T^P_{un}$. } \end{thm} \emph{Remark.} Due to the freedom in the choice of \eqref{eq:Idelta}, for composable $g_1, g_2 \in \mc G$ the product $\pi (g_1, g_2 \xi) \pi (g_2,\xi)$ need not be equal to $\pi (g_1 g_2,\xi)$. The difference is a locally constant function whose absolute value is 1 everywhere. \emph{Proof.} If $g = \gamma w \in \Gamma \ltimes W_0$, then $I_{g u} = I_\gamma \circ I_{w u}$, modulo this locally constant function. It follows directly from the definitions that the theorem holds for $I_\gamma$, so the difficult part is $I_{w u}$, which is dealt with in \cite[Theorem 4.33 and Corollary 4.34]{Opd-Sp}. $\qquad \Box$ \\[2mm] The intertwining operators for reflections acting on the unitary principal series can be made reasonably explicit: \begin{lem}\label{lem:3.15} Suppose that $\beta \in R_0$ and $t \in T_{un}$. Then $\pi^\Gamma (s_\beta, \emptyset, \delta_\emptyset, t)$ is a scalar operator if and only if $c_\beta^{-1} (t) = 0$. \end{lem} \emph{Proof.} Suppose that $\alpha \in F_0, t \in T$ and $c_\alpha^{-1}(t) = 0$. Then \eqref{eq:imatho} implies that $1 + \imath^0_{s_\alpha} (t) = 0$, regarded as an element of $\mc H (W_0,q)$. Hence $\pi^\Gamma (s_\alpha, \emptyset, \delta_\emptyset, t)$ is a scalar operator. Conversely, if $c_\alpha^{-1}(t) \neq 0$, then \eqref{eq:imatho} shows that $1 + \imath^0_{s_\alpha} (t)$ is not scalar, because the action of $1 + q(s_\alpha)^{1/2} N_{s_\alpha}$ on $\mc H (W_0,q)$ has two different eigenvalues. With Theorem \ref{thm:3.5} we can see that this is not specific for simple reflections. Find $w \in W_0$ such that $w (\beta) = \alpha$ is a simple root. Then $s_\beta = w^{-1} s_\alpha w$, so up to a nonzero scalar \[ \pi^\Gamma (s_\beta ,\emptyset, \delta_\emptyset, t) = \pi^\Gamma (w^{-1} ,\emptyset, \delta_\emptyset, w t) \, \pi^\Gamma (s_\alpha ,\emptyset, \delta_\emptyset, w t) \, \pi^\Gamma (w ,\emptyset, \delta_\emptyset, t) . 
\] Now we notice that $c_\beta^{-1} (t) = 0$ if and only if $c_\alpha^{-1}(w t) = 0$, and that $\pi^\Gamma (w^{-1} ,\emptyset, \delta_\emptyset, w t) = \pi^\Gamma (w ,\emptyset, \delta_\emptyset, t)^{-1}$ up to a scalar. $\qquad \Box$ \\[2mm] Thus it is possible to determine the $\mc H$-endomorphisms for unitary principal series representations, at least when the isotropy groups of points $t \in T_{un}$ are generated by reflections. The reducibility and intertwining operators for nonunitary principal series are more complicated, and have been studied extensively \cite{Kat1,Rog,Ree1}. For other parabolically induced representations the intertwining operators are less explicit. They can be understood better with the theory of R-groups \cite{DeOp2}. The action of these intertwining operators on the induction data space $\Xi$ is described most conveniently with a groupoid $\mc G$ that includes all pairs $(g,u)$ as above. The base space of $\mc G$ is the power set of $F_0$, and for $P,Q \subseteq F_0$ the collection of arrows from $P$ to $Q$ is \begin{equation}\label{eq:GPQ} \mc G_{PQ} = \{ (g,u) \in (\Gamma \ltimes W_0) \times K_P : g (P) = Q \} . \end{equation} Whenever it is defined, the multiplication in $\mc G$ is \[ (g',u') \cdot (g,u) = (g' g, g^{-1} (u') u) . \] Usually we will write elements of $\mc G$ simply as $gu$. This groupoid acts from the left on $\Xi$ by \[ (g,u) \cdot (P,\delta,t) := (g (P),\delta \circ \psi_u^{-1} \circ \psi_g^{-1},g (ut)) , \] the action being defined if and only if $g (P) \subset F_0$. Since $T^+ \supset T^{P+}$ is a fundamental domain for the action of $W_0$ on $T_{rs}$, every element of $\Xi$ is $\mc G$-associate to an element of $\Xi^+$.
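The multiplication rule in $\mc G$ is exactly the one forced by composition on the torus coordinate: since every $g \in \Gamma \ltimes W_0$ acts on $T$ by group automorphisms,
\[ (g',u') \cdot \big( (g,u) \cdot t \big) = g' \big( u' \, g (u t) \big) = g' g \big( g^{-1} (u') \, u \, t \big) = \big( g' g , g^{-1}(u') u \big) \cdot t , \]
in agreement with $(g',u') \cdot (g,u) = (g' g, g^{-1} (u') u)$.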
Although $\pi^\Gamma (gu(P,\delta,t))$ and $\pi^\Gamma (P,\delta,t)$ are not always isomorphic, the existence of rational intertwining operators has the following consequence: \begin{lem}\label{lem:3.6} The $\mc H \rtimes \Gamma$-representations $\pi^\Gamma (gu(P,\delta,t))$ and $\pi^\Gamma (P,\delta,t)$ have the same irreducible subquotients, counted with multiplicity. \end{lem} \emph{Proof.} This is not hard; the proof in the graded Hecke algebra setting \cite[Lemma 3.4]{SolGHA} also works here. $\qquad \Box$ \\[3mm] \section{The Schwartz algebra} \label{sec:Schwartz} We recall the construction of various topological completions of $\mc H$ \cite{Opd-Sp}: a Hilbert space, a $C^*$-algebra and a Schwartz algebra. The latter is the most relevant from the representation theoretic point of view. All tempered representations of $\mc H$ extend to its Schwartz completion, and a close study of this Schwartz algebra reveals facts about tempered representations for which no purely algebraic proof is known. Let $L^2 (\mc R ,q)$ be the Hilbert space completion of $\mc H$ with respect to the inner product \eqref{eq:inptau}. By means of the orthonormal basis $\{ N_w : w \in W^e \}$ we can identify $L^2 (\mc R,q)$ with the Hilbert space $L^2 (W^e)$ of square integrable functions $W^e \to \C$. For any $h \in \mc H$ the map $\mc H (\mc R,q) \to \mc H (\mc R,q) : h' \mapsto h h'$ extends to a bounded linear operator on $L^2 (\mc R,q)$. This realizes $\mc H (\mc R,q)$ as a *-subalgebra of $B (L^2 (\mc R,q))$. Its closure $C^* (\mc R,q)$ is a separable unital $C^*$-algebra, called the $C^*$-algebra of $\mc H$. The Schwartz completion of $\mc H$ will, as a topological vector space, consist of all rapidly decaying functions on $W^e$, with respect to some length function. For this purpose the length function $\ell (w)$ of the Coxeter system $(W^\af ,S^\af)$ is unsatisfactory, because its natural extension to $W^e$ is zero on $Z(W^e)$.
To overcome this inconvenience, recall that \[ X \otimes_\Z \R = \mf a^* = \mf a^*_{F_0} \oplus \mf a^{*F_0} = \mf a^*_{F_0} \oplus (Z(W^e) \otimes_\Z \R) . \] Thus we can decompose any $x \in X \subset \mf a^*$ uniquely as $x = x_{F_0} + x^{F_0} \in \mf a^*_{F_0} \oplus \mf a^{*F_0}$. Now we define \[ \mc N (w) = \ell (w) + \norm{w(0)^{F_0}} \qquad w \in W^e . \] Since $W^\af \oplus Z (W^e)$ is of finite index in $W^e$, the set $\{ w \in W^e : \mc N (w) = 0\}$ is finite. For $n \in \N$ we define the following norm on $\mc H$: \[ p_n \big( \sum_{w \in W^e} h_w N_w \big) = \sup_{w \in W^e} |h_w| (\mc N (w) + 1)^n . \] The completion $\mc S = \mc S (\mc R ,q)$ of $\mc H$ with respect to the family of norms $\{ p_n : n \in \N \}$ is a nuclear Fr\'echet space. It consists of all (possibly infinite) sums $h = \sum_{w \in W^e} h_w N_w$ such that $p_n (h) < \infty$ for all $n \in \N$. \begin{thm}\label{thm:3.7} There exist $C_q > 0 ,\, d \in \N$ such that $\forall h,h' \in \mc S (\mc R ,q), n \in \N$ \begin{align*} & \norm{h}_{B(L^2 (\mc R,q))} \leq C_q p_d (h) , \\ & p_n (h \cdot h') \leq C_q p_{n+d}(h) p_{n+d}(h') . \end{align*} In particular $\mc S (\mc R ,q)$ is a unital locally convex *-algebra, and it is contained in $C^* (\mc R ,q)$. \end{thm} \emph{Proof.} This was proven first with representation theoretic methods in \cite[Section 6.2]{Opd-Sp}. Later the author found a purely analytic proof \cite[Theorem A.7]{OpSo2}. $\qquad \Box$ \\[3mm] It is easily seen that the action of $\Gamma$ on $\mc H$ preserves all the above norms. Hence the crossed product $\mc S \rtimes \Gamma = \mc S (\mc R,q) \rtimes \Gamma$ (respectively $C^* (\mc R,q) \rtimes \Gamma$) is a well-defined Fr\'echet algebra (respectively $C^*$-algebra).
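For orientation, consider the degenerate sketch where $W_0 = \{ e \}$ and $\Gamma$ is trivial, so that $S^\af = \emptyset$, $\ell \equiv 0$ and $\mc N (x) = \norm{x}$ for $x \in X = W^e$. Then $\mc H = \mc O (T)$ with basis $\{ \theta_x \}$ and the norms read
\[ p_n \big( \sum\nolimits_{x \in X} h_x \theta_x \big) = \sup_{x \in X} |h_x| \, (\norm{x} + 1)^n , \]
so $\mc S$ consists of the series whose coefficients decay faster than any polynomial in $\norm{x}$. These are precisely the Fourier coefficients of smooth functions on $T_{un}$, in line with the identification $\mc S (\mc R,1) = C^\infty (T_{un})$ for $q = 1$.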
For $q = 1$ we obtain the algebras \begin{equation} \begin{array}{rrrrr} \mc S(\mc R ,1) \rtimes \Gamma & = & \mc S (X) \rtimes W_0 \rtimes \Gamma & = & C^\infty (T_{un}) \rtimes W_0 \rtimes \Gamma , \\ C^* (\mc R ,1) \rtimes \Gamma & = & C^* (X) \rtimes W_0 \rtimes \Gamma & = & C (T_{un}) \rtimes W_0 \rtimes \Gamma , \end{array} \end{equation} where $\mc S (X)$ denotes the algebra of rapidly decreasing functions on $X$. We can use these topological completions to characterize discrete series and tempered representations. According to \cite[Lemma 2.22]{Opd-Sp}, an irreducible $\mc H \rtimes \Gamma$-representation $\pi$ is discrete series if and only if it is contained in the left regular representation of $\mc H \rtimes \Gamma$ on $L^2 (\mc R,q) \otimes \C [\Gamma]$, or equivalently if its character $\chi_\pi : \mc H \rtimes \Gamma \to \C$ extends to a continuous linear functional on $L^2 (\mc R,q) \otimes \C [\Gamma]$. By \cite[Lemma 2.20]{Opd-Sp} a finite dimensional $\mc H \rtimes \Gamma$-representation is tempered if and only if it extends continuously to an $\mc S \rtimes \Gamma$-representation. More generally, suppose that $\pi$ is a representation of $\mc H \rtimes \Gamma$ on a Fr\'echet space $V$, possibly of infinite dimension. As in \cite[Proposition A.2]{OpSo1}, we define $\pi$ to be tempered if it induces a jointly continuous map $(\mc S \rtimes \Gamma) \times V \to V$. A crucial role in the harmonic analysis on affine Hecke algebras is played by a particular Fourier transform, which is based on the induction data space $\Xi$. Let $\mc V_\Xi^\Gamma$ be the vector bundle over $\Xi$ whose fiber at $(P,\delta,t) \in \Xi$ is the representation space $\C [\Gamma \times W^P] \otimes V_\delta$ of $\pi^\Gamma (P,\delta,t)$. Let $\mr{End} (\mc V_\Xi^\Gamma)$ be the algebra bundle with fibers $\mr{End}_\C (\C [\Gamma \times W^P] \otimes V_\delta)$.
The inner product \eqref{eq:inpind} endows $\mr{End}_\C (\C [\Gamma \times W^P] \otimes V_\delta)$ and $\mr{End} (\mc V_\Xi^\Gamma)$ with a canonical involution *. Of course these vector bundles are trivial on every connected component of $\Xi$, but globally not even the dimensions need be constant. Since $\Xi$ has the structure of a complex algebraic variety, we can construct the algebra of polynomial sections of $\mr{End} (\mc V_\Xi^\Gamma)$: \[ \mc O \big( \Xi ; \mr{End} (\mc V_\Xi^\Gamma) \big) := \bigoplus_{P,\delta} \mc O (T^P) \otimes \mr{End}_\C (\C [\Gamma \times W^P] \otimes V_\delta) . \] Given a reasonable subset (preferably a submanifold) $\Xi' \subset \Xi$, we define the algebras $L^2 \big( \Xi' ; \mr{End} (\mc V_\Xi^\Gamma) \big), C \big( \Xi' ; \mr{End} (\mc V_\Xi^\Gamma) \big)$ and $C^\infty \big( \Xi' ; \mr{End} (\mc V_\Xi^\Gamma) \big)$ in similar fashion. Furthermore, if $\mu$ is a sufficiently nice measure on $\Xi$ and $\Xi'$ is compact, then the following formula defines a Hermitian form on $L^2 \big( \Xi' ; \mr{End} (\mc V_\Xi^\Gamma) \big)$: \begin{equation}\label{eq:inpmu} \inp{f_1}{f_2}_\mu := \int_{\Xi'} \text{tr} (f_1 (\xi)^* f_2 (\xi)) \, \textup{d} \mu . \end{equation} The intertwining operators from Theorem \ref{thm:3.5} give rise to an action of the groupoid $\mc G$ on the algebra of rational sections of $\mr{End} (\mc V_\Xi^\Gamma)$, by \begin{equation}\label{eq:actSections} (g \cdot f) (\xi) = \pi^\Gamma (g,g^{-1} \xi ) f (g^{-1} \xi) \pi^\Gamma (g, g^{-1} \xi )^{-1} , \end{equation} whenever $g^{-1} \xi \in \Xi$ is defined. This formula also defines groupoid actions of $\mc G$ on $C \big( \Xi' ; \mr{End} (\mc V_\Xi^\Gamma) \big)$ and on $C^\infty \big( \Xi' ; \mr{End} (\mc V_\Xi^\Gamma) \big)$, provided that $\Xi'$ is a $\mc G$-stable submanifold of $\Xi$ on which all the intertwining operators are regular. 
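Note that for a positive measure $\mu$ the form \eqref{eq:inpmu} is positive semidefinite, since $\mr{tr} (f (\xi)^* f(\xi))$ is the squared Hilbert--Schmidt norm of $f(\xi)$:
\[ \inp{f}{f}_\mu = \int_{\Xi'} \mr{tr} \big( f(\xi)^* f(\xi) \big) \, \textup{d} \mu \geq 0 , \]
with equality only if $f$ vanishes $\mu$-almost everywhere.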
Given a suitable collection $\Sigma$ of sections of $(\Xi', \mr{End} (\mc V_\Xi^\Gamma) )$, we write \[ \Sigma^{\mc G} = \{ f \in \Sigma : (g \cdot f) (\xi) = f(\xi) \text{ for all } g \in \mc G,\ \xi \in \Xi' \text{ such that } g^{-1} \xi \text{ is defined} \} . \] The Fourier transform for $\mc H \rtimes \Gamma$ is the algebra homomorphism \begin{align*}\ & \mc F : \mc H \rtimes \Gamma \to \mc O \big( \Xi ; \mr{End} (\mc V_\Xi^\Gamma) \big) , \\ & \mc F (h) (\xi) = \pi (\xi) (h) . \end{align*} The very definition of intertwining operators shows that the image of $\mc F$ is contained in the algebra $\mc O \big( \Xi ; \mr{End} (\mc V_\Xi^\Gamma) \big)^{\mc G}$. The Fourier transform also extends continuously to various topological completions of $\mc H \rtimes \Gamma$: \begin{thm}\label{thm:3.8} \textup{(Plancherel theorem for affine Hecke algebras)} \\ The Fourier transform induces algebra homomorphisms \[ \begin{array}{rrr} \mc H (\mc R,q) \rtimes \Gamma & \to & \mc O \big( \Xi ; \mr{End} (\mc V_\Xi^\Gamma) \big)^{\mc G} , \\ \mc S (\mc R,q) \rtimes \Gamma & \to & C^\infty \big( \Xi_{un} ; \mr{End} (\mc V_\Xi^\Gamma) \big)^{\mc G} , \\ C^* (\mc R,q) \rtimes \Gamma & \to & C \big( \Xi_{un} ; \mr{End} (\mc V_\Xi^\Gamma) \big)^{\mc G} . \end{array} \] The first one is injective, the second is an isomorphism of Fr\'echet *-algebras and the third is an isomorphism of $C^*$-algebras. Furthermore there exists a unique Plancherel measure $\mu_{Pl}$ on $\Xi$ such that \begin{itemize} \item the support of $\mu_{Pl}$ is $\Xi_{un}$; \item $\mu_{Pl}$ is $\mc G$-invariant; \item the restriction of $\mu_{Pl}$ to a component $(P,\delta,T^P)$ is absolutely continuous with respect to the Haar measure of $T^P_{un}$; \item the Fourier transform extends to a bijective isometry \end{itemize} \[ \big( L^2 (\mc R ,q) \otimes \C [\Gamma] ; \inp{}{}_\tau \big) \; \to \; \big( L^2 \big( \Xi_{un} ; \mr{End} (\mc V_\Xi^\Gamma) \big)^{\mc G} ; \inp{}{}_{\mu_{Pl}} \big) .
\] \end{thm} \emph{Proof.} Once again the essential case is $\Gamma = \{ \text{id} \}$, which is a very deep result proven by Delorme and Opdam, see Theorem 5.3 and Corollary 5.7 of \cite{DeOp1} and \cite[Theorem 4.43]{Opd-Sp}. To include $\Gamma$ in the picture we need a result of general nature. Let $A$ be any complex $\Gamma$-algebra and endow $A \otimes_\C \mr{End} (\C [\Gamma])$ with the $\Gamma$-action \begin{equation}\label{eq:Agammaact} \gamma \cdot (a \otimes f) (v) = \gamma \cdot a \otimes f (v \gamma) \gamma^{-1} \qquad a \in A, \gamma \in \Gamma, v \in \C [\Gamma], f \in \mr{End} (\C [\Gamma]) . \end{equation} There is a natural isomorphism \begin{equation}\label{eq:isocrossed} A \rtimes \Gamma \cong \big( A \otimes_\C \mr{End} (\C [\Gamma]) \big)^\Gamma . \end{equation} This is easy to show, but it appears to be one of those folklore results whose origins are hard to retrace. In any case a proof can be found in \cite[Lemma A.3]{SolThesis}. For $A = C \big( \Xi_{un} ; \mr{End} (\mc V_\Xi^\Gamma) \big)$ the action \eqref{eq:Agammaact} corresponds to the action of $\Gamma$ on $\mr{End}(\mc V_\Xi^\Gamma)$ described in \eqref{eq:actSections}. The greater part of the theorem follows from \eqref{eq:isocrossed} and the case $\Gamma = \{ \text{id} \}$. It only remains to see how the inner products $\inp{}{}_\tau$ and $\inp{}{}_{\mu_{Pl}}$ behave when $\Gamma$ is included. Let us distinguish the new inner products with a subscript $\Gamma$. On the Hecke algebra side it is easy, as the formula \eqref{eq:*tau} does not change, so \[ \inp{N_\gamma h}{N_{\gamma'} h'}_{\Gamma,\tau} = \left\{ \begin{array}{lll} \inp{h}{h'}_\tau & \text{if} & \gamma = \gamma', \\ 0 & \text{if} & \gamma \neq \gamma' . \end{array} \right. \] On the spectral side the inclusion of $\Gamma$ means that we replace every $\mc H$-representation $\pi (\xi)$ by $\mr{Ind}_{\mc H}^{\mc H \rtimes \Gamma} \pi (\xi)$. 
In such an induced representation the elements of $\Gamma$ permute the $\mc H$-subrepresentations $\gamma \pi (\xi)$, while $h \in \mc H$ acts by $\pi (\gamma^{-1}(h))$ on $\gamma \pi (\xi)$. The action of $\Gamma$ on $\mc H$ preserves the trace and the *, so \[ \mr{tr} \big( \pi^\Gamma (\xi, N_\gamma h)^* \pi^\Gamma (\xi, N_{\gamma'} h') \big) = \left\{ \begin{array}{lll} \mr{tr} \big( \pi (\xi, h)^* \pi (\xi, h') \big) & \text{if} & \gamma = \gamma', \\ 0 & \text{if} & \gamma \neq \gamma' . \end{array} \right. \] In view of \eqref{eq:inpmu}, this means that the $L^2$-extension of $\mc F$ is an isometry with respect to the Plancherel measure $\mu_{\Gamma,Pl} = |\Gamma |^{-1} \mu_{Pl}. \qquad \Box$ \\[1mm] \begin{cor}\label{cor:3.14} The center of $\mc S (\mc R,q) \rtimes \Gamma$ (respectively $C^* (\mc R,q) \rtimes \Gamma$) is isomorphic to $C^\infty (\Xi_{un})^{\mc G}$ (respectively $C (\Xi_{un})^{\mc G}$). \end{cor} \emph{Proof.} This is the obvious generalization of \cite[Corollary 5.5]{DeOp1} to our setting. $\qquad \Box$ \\[3mm] Notice that $Z (\mc S \rtimes \Gamma)$ is larger than the closure of $Z (\mc H \rtimes \Gamma)$ in $\mc S \rtimes \Gamma$; for example $Z (\mc S \rtimes \Gamma)$ contains a nontrivial idempotent for every connected component of $\Xi_{un} / \mc G$. In analogy with the notation $\mr{Mod}_{f,U}(\mc H \rtimes \Gamma)$, we will denote by \[ \mr{Mod}_{f,\Sigma} (\mc S \rtimes \Gamma) \] the category of finite dimensional $\mc S \rtimes \Gamma$-modules with $Z(\mc S \rtimes \Gamma)$-weights in $\Sigma \subset \Xi_{un} / \mc G$. \\[2mm] Let us compare Schwartz algebras of affine Hecke algebras with those for reductive $p$-adic groups. Suppose that $G$ is a reductive $p$-adic group and that $\mc H \rtimes \Gamma$ is Morita equivalent to $\mc H (G)_{\mf s}$, in the notation of Section \ref{sec:padic}. The (conjectural) isomorphism described in \eqref{eq:AHAsigma} is such that $\mf a^* = X \otimes_\Z \R$ corresponds to $X_* (A) \otimes_\Z \R$.
The conditions for temperedness of finite length representations of $\mc H \rtimes \Gamma$ and $\mc H (G)_{\mf s}$ are formulated in terms of corresponding negative cones in $\mf a^*$ and in $X_* (A) \otimes_\Z \R$. Therefore such a Morita equivalence would preserve temperedness of representations. Thus $\mr{Mod}_f (\mc S \rtimes \Gamma)$ would be equivalent to the category of finite length modules \[ \mr{Mod}_{f,\mf s} (\mc S (G)) = \mr{Mod}_f (\mc S (G)_{\mf s}) , \] where $\mc S (G)$ is the Harish-Chandra--Schwartz algebra of $G$ and $\mc S (G)_{\mf s}$ is its two-sided ideal corresponding to the inertial equivalence class $\mf s \in \mf B (G)$. Moreover, if $I$ is an Iwahori subgroup of a split group $G$, it is shown in \cite[Proposition 10.2]{DeOp1} that the isomorphism $\mc H (G,I) \cong \mc H (\mc R,q)$ extends to an isomorphism $\mc S (G,I) \cong \mc S (\mc R,q)$. Therefore it is reasonable to expect that more generally $\mc S (G)_{\mf s}$ will be Morita equivalent to $\mc S (\mc R,q) \rtimes \Gamma$ in case of an isomorphism \eqref{eq:AHAsigma}. Further support for this is provided by Theorem \ref{thm:3.8} in comparison with the Plancherel theorem for $\mc S (G)$ \cite{Wal} and for $C_r (G)$ \cite{Ply2}. These show that $\mc S (\mc R,q) \rtimes \Gamma$ and $\mc S (G)_{\mf s}$, as well as their respective $C^*$-completions, have a very similar shape, which can almost entirely be deduced from their categories of finite length modules. \section{Parametrization of representations with induction data} Theorem \ref{thm:3.8} is extremely deep and useful; a large part of what follows depends on it. It shows a clear advantage of $\mc S$ over $\mc H$, namely that the image of the Fourier transform consists of \emph{all} smooth sections. In particular one can use any smooth section, without knowing its preimage under $\mc F$.
By Corollary \ref{cor:3.14} the irreducible tempered $\mc H \rtimes \Gamma$-representations are partitioned into finite packets parametrized by $\Xi_{un} / \mc G$. Moreover, from Theorem \ref{thm:3.8} Delorme and Opdam also deduce analogues of Harish-Chandra's Completeness Theorem \cite[Corollary 5.4]{DeOp1} and of Langlands' Disjointness Theorem \cite[Corollary 5.8]{DeOp1}. We will generalize these results to all irreducible representations. For that we do not need all induction data from $\Xi$; in view of Lemma \ref{lem:3.6} it suffices to consider $\xi \in \Xi^+$. At the same time this restriction to positive induction data enables us to avoid the singularities of the intertwiners $\pi (g,\xi)$. \begin{thm}\label{thm:3.9} Let $\xi = (P,\delta,t), \xi' = (P',\delta',t') \in \Xi^+$. \enuma{ \item The $\mc H \rtimes \Gamma$-representations $\pi^\Gamma (\xi)$ and $\pi^\Gamma (\xi')$ have a common irreducible quotient if and only if there exists a $g \in \mc G$ such that $g \xi = \xi'$. \item The operators $\{ \pi^\Gamma (g,\xi) : g \in \mc G , g \xi = \xi' \}$ are regular and invertible, and they span $\mr{Hom}_{\mc H \rtimes \Gamma} (\pi^\Gamma (\xi), \pi^\Gamma (\xi'))$. } \end{thm} \emph{Proof.} (a) Suppose that there exists a $g \in \mc G$ with $g \xi = \xi'$. Since $\pi^\Gamma (\gamma, \xi')$ is invertible for $\gamma \in \Gamma$, we may replace $\xi'$ by $\gamma \xi'$, which allows us to assume without loss of generality that $g = (w,u) \in W_0 \times K_P$. Recall that $T^+$ is a fundamental domain for the action of $W_0$ on $T_{rs}$. Since $|t|$ and $|t'|$ are both in $T^+$ we must have $|t| = w(|t|) = |t'|$ and hence $P(\xi) = P(\xi')$.
Thus $w u (P,\delta,t |t|^{-1}) = (P',\delta',t' |t'|^{-1})$ and by Theorem \ref{thm:3.5}.b the $\mc H^{P(\xi)}$-representations \begin{align*} & \pi^{P(\xi)} (P,\delta,t |t|^{-1}) = \pi^{P(\xi)} (P,\delta,t) \circ \phi_{|t|}^{-1} \text{ and } \\ & \pi^{P(\xi)} (P',\delta',t' |t|^{-1}) = \pi^{P(\xi)} (P',\delta',t') \circ \phi_{|t|}^{-1} \end{align*} are isomorphic. Hence $\pi^{P(\xi)} (P,\delta,t) \cong \pi^{P(\xi)} (P',\delta',t')$, which implies that $\pi^\Gamma (\xi)$ and $\pi^\Gamma (\xi')$ are isomorphic. In particular $\pi^\Gamma (\xi)$ and $\pi^\Gamma (\xi')$ have the same irreducible quotients. Conversely, suppose that $\pi^\Gamma (\xi)$ and $\pi^\Gamma (\xi')$ have a common irreducible quotient. Again we may replace $\xi'$ by $\gamma \xi'$ for any $\gamma \in \Gamma$. In view of this, Proposition \ref{prop:3.3}.c and Corollary \ref{cor:2.8}.b we may assume that $P(\xi) = P(\xi')$ and that the $\mc H^{P(\xi)} \rtimes \Gamma_{P(\xi)}$-representations $\pi^{P(\xi),\Gamma_{P(\xi)}} (\xi)$ and $\pi^{P(\xi),\Gamma_{P(\xi)}} (\xi')$ have a common irreducible summand $\pi^{P(\xi),\Gamma_{P(\xi)}} (P(\xi),\sigma,t^{P(\xi)},\rho)$. Pick $s \in T^{P(\xi)}_{un} t^{P(\xi)}$ such that $t^{P(\xi)}$ and $t^{P(\xi)} s^{-1}$ have the same isotropy group in $W_0 \rtimes \Gamma$. This is possible because $T^g$ is a complex algebraic subtorus of $T$ for every $g \in W_0 \rtimes \Gamma$. The $\mc H^{P(\xi)} \rtimes \Gamma_{P(\xi)}$-representations $\pi^{P(\xi),\Gamma_{P(\xi)}} (P,\delta,t s^{-1})$ and $\pi^{P(\xi),\Gamma_{P(\xi)}} (P',\delta',t' s^{-1})$ are completely reducible by Proposition \ref{prop:3.3}.a, and they have the common irreducible summand \[ \pi^{P(\xi),\Gamma_{P(\xi)}} (P(\xi),\sigma,t^{P(\xi)} s^{-1},\rho) = \mr{Ind}_{\mc H^{P(\xi)} \rtimes \Gamma_{P(\xi),\sigma,t^{P(\xi)}}}^{\mc H^{P(\xi)} \rtimes \Gamma_{P(\xi)}} \big( \sigma \circ \phi_{t^{P(\xi)}} \circ \phi_s^{-1} \otimes \rho \big) . 
\] Moreover, because every irreducible summand is of this form, \begin{multline}\label{eq:3.1} \mr{Hom}_{\mc H^{P(\xi)} \rtimes \Gamma_{P(\xi)}} \big( \pi^{P(\xi),\Gamma_{P(\xi)}} (P,\delta,t s^{-1}) , \pi^{P(\xi),\Gamma_{P(\xi)}} (P',\delta',t' s^{-1}) \big) \cong \\ \mr{Hom}_{\mc H^{P(\xi)} \rtimes \Gamma_{P(\xi)}} \big( \pi^{P(\xi),\Gamma_{P(\xi)}} (P,\delta,t) , \pi^{P(\xi),\Gamma_{P(\xi)}} (P',\delta',t') \big) \neq 0. \end{multline} Since $t^{P(\xi)} s^{-1} \in T_{un}$, we have $t s^{-1}, t' s^{-1} \in T_{un}$. So $|t| = |t'|$ and the representations $\pi^{P(\xi),\Gamma_{P(\xi)}} (P,\delta,t s^{-1})$ and $\pi^{P(\xi),\Gamma_{P(\xi)}} (P',\delta',t' s^{-1})$ extend continuously to $\mc S (\mc R^{P(\xi)},q^{P(\xi)}) \rtimes \Gamma_{P(\xi)}$. Now Theorem \ref{thm:3.8} for this algebra shows that the left hand side of \eqref{eq:3.1} is spanned by the intertwiners $\pi^{P(\xi),\Gamma_{P(\xi)}} (g,P,\delta, t s^{-1})$ with $g (P,\delta, t s^{-1}) = (P',\delta',t' s^{-1})$. Since \eqref{eq:3.1} is nonzero, there exists at least one such $g \in \mc G$. The choice of $s$ guarantees that $g (P,\delta, t) = (P',\delta',t')$ as well.\\ (b) By Theorem \ref{thm:3.5} the $\pi^{P(\xi),\Gamma_{P(\xi)}} (g,P,\delta, t s^{-1})$ are invertible and constant on $T^{P(\xi)}$-cosets. Hence the $\pi^{P(\xi),\Gamma_{P(\xi)}} (g,P,\delta, t)$ span the right hand side of \eqref{eq:3.1}, and they are invertible. $\qquad \Box$ \\[3mm] It is interesting to compare Theorem \ref{thm:3.9} with \cite{Ree1}, which describes the $\mc H$-endomorphisms of principal series representations $M (t)$. It transpires that the results of \cite{Ree1} simplify considerably when $|t|$ is in the positive Weyl chamber: then $\mr{End}_{\mc H}(M(t))$ is semisimple and all its irreducible quotients occur with multiplicity one. Now we can prove the desired partition of Irr$(\mc H \rtimes \Gamma)$ in packets: \begin{thm}\label{thm:3.10} Let $\pi$ be an irreducible $\mc H \rtimes \Gamma$-representation. 
There exists a unique association class $\mc G (P,\delta,t) \in \Xi / \mc G$ such that the following equivalent properties hold: \enuma{ \item $\pi$ is isomorphic to an irreducible quotient of $\pi^\Gamma (\xi^+)$, for some $\xi^+ \in \Xi^+ \cap \mc G (P,\delta,t)$; \item $\pi$ is a constituent of $\pi^\Gamma (P,\delta,t)$, and $\norm{cc_P (\delta)}$ is maximal for this property. } \end{thm} \emph{Proof.} Proposition \ref{prop:3.3}.d says that there exists $\xi^+ = (P',\delta',t') \in \Xi^+$ satisfying (a), and by Theorem \ref{thm:3.9} its $\mc G$-association class is unique. Let $\xi = (P,\delta,t) \in \Xi$ be such that $\pi$ is a constituent of $\pi^\Gamma (\xi)$ and $\norm{cc_P (\delta)}$ is maximal under this condition. By Lemma \ref{lem:3.6} we may assume that $\xi \in \Xi^+$. Suppose that $\pi$ is not isomorphic to a quotient of $\pi^\Gamma (\xi)$. In view of Proposition \ref{prop:3.3} this means that there exist Langlands data $(P(\xi^+),\sigma',t'^{P(\xi^+)},\rho')$ and $(P(\xi), \sigma, t^{P(\xi)},\rho)$ such that $\pi \cong L^\Gamma (P(\xi^+),\sigma',t'^{P(\xi^+)},\rho')$ is a constituent but not a quotient of $\pi^\Gamma (P(\xi), \sigma, t^{P(\xi)},\rho)$. Now Lemma \ref{lem:2.10}.b tells us that \[ \norm{cc_{P(\xi)}(\sigma)} < \norm{cc_{P(\xi^+)}(\sigma')} . \] But $cc_{P(\xi)}(\sigma) = W_{P(\xi)} cc_P (\delta)$ and $cc_{P(\xi^+)}(\sigma') = W_{P(\xi^+)} cc_{P'}(\delta')$, so \[ \norm{cc_P (\delta)} < \norm{cc_{P'} (\delta')} , \] contradicting the maximality of $\norm{cc_P (\delta)}$. Therefore $\pi$ must be a quotient of $\pi^\Gamma (\xi)$. Thus the association class $\mc G \xi$ satisfies not only (b) but also (a), which at the same time shows that it is unique. In particular conditions (a) and (b) turn out to be equivalent. $\qquad \Box$ \\[3mm] All these constructs with induction data have direct analogues in the setting of graded Hecke algebras \cite[Sections 6 and 8]{SolGHA}.
Concretely, $\tilde \Xi$ is the space of all triples $\tilde \xi = (Q,\sigma,\lambda)$, where $Q \subset F_0 \,, \sigma$ is a discrete series representation of $\mh H_Q$ and $\lambda \in \mf t^Q$. The subsets of unitary (respectively positive) induction data are obtained by imposing the restriction $\lambda \in i \mf a^Q$ (respectively $\lambda \in \mf a^{Q+} + i \mf a^Q$). The corresponding induced representation is \[ \pi^\Gamma (\tilde \xi) = \mr{Ind}_{\mh H}^{\mh H \rtimes \Gamma} \pi (Q,\sigma,\lambda) = \mr{Ind}_{\mh H_Q}^{\mh H \rtimes \Gamma} (\sigma_\lambda ) . \] The groupoid $\tilde{\mc G}$ and its action on $\tilde \Xi$ are defined like $\mc G$, but without the parts $K_P$. We would like to understand the relation between induction data for $\mc H \rtimes \Gamma$ and for $\mh H \rtimes \Gamma$. We consider, for every $u \in T_{un}$, the induction data for $(\tilde{\mc R}_u,k_u)$ with $\lambda \in \mf a$. Thus we arrive at the space $\widehat \Xi$ of quadruples $\hat \xi = (u, \tilde P, \tilde \delta, \lambda)$ such that: \begin{itemize} \item $u \in T_{un}$; \item $\tilde P \subset F_u$; \item $\tilde \delta$ is a discrete series representation of $\mh H (\tilde{\mc R}_{u,\tilde P},k_{u,\tilde P})$; \item $\lambda \in \mf a^{\tilde P}$. \end{itemize} The $\mc H \rtimes \Gamma$-representation associated to $\hat \xi$ is \[ \pi^\Gamma (u, \tilde P, \tilde \delta, \lambda) = \mr{Ind}_{\mc H (\mc R_u,q_u)}^{\mc H \rtimes \Gamma} \pi (\tilde P, \tilde \delta, \lambda) , \] where the $\mh H (\tilde{\mc R}_u,k_u)$-representation $\pi (\tilde P, \tilde \delta, \lambda)$ is considered as a representation of $\mc H (\tilde{\mc R}_u,q_u)$, via Theorem \ref{thm:2.2}. 
For $g \in \mc G$ the map $\psi_g$ from \eqref{eq:twistKP} and \eqref{eq:psigamma} induces an algebra isomorphism $\mh H (\tilde{\mc R}_u, k_u) \to \mh H (\tilde{\mc R}_{g(u)}, k_{g(u)})$, and the stabilizer in $\mc G$ of $u \in T_{un}$ is the groupoid $\tilde {\mc G}_u$ associated to $(\tilde{\mc R}_u, k_u)$. This leads to an action of $\mc G$ on $\widehat \Xi$. The collections of $\mc H \rtimes \Gamma$-representations corresponding to $\Xi$ and to $\widehat \Xi$ are almost the same, but not entirely: \begin{lem}\label{lem:3.5} There exists a natural finite-to-one surjection \[ \Xi / \mc G \to \widehat \Xi / \mc G, \: \mc G \xi \mapsto \mc G \hat{\xi} , \] with the following property. Given $\hat \xi \in \widehat \Xi$ one can find $\xi_i \in \Xi$ (not necessarily all different) such that $\bigcup_i \mc G \xi_i$ is the preimage of $\mc G \hat \xi$ and $\pi^\Gamma (\hat \xi) = \bigoplus_i \pi^\Gamma (\xi_i)$. \end{lem} \emph{Proof.} Given $\xi = (P,\delta,t) \in \Xi$, let $t = u^P c^P \in T^P_u \times T^P_{rs}$ be the polar decomposition of $t$ and let $u_P c_P \in T_{P,u} \times T_{P,rs}$ be an $\mc A_P$-weight of $\delta$. Put \begin{equation}\label{eq:WP+} W_{P,u_P}^+ = \{ w \in W (R_P) : w(u_P) = u_P, w (R_{P,u_P}^+) = R_{P,u_P}^+ \} . \end{equation} By Theorem \ref{thm:2.1} there exists a unique discrete series representation \[ \delta_1 \text{ of } \mc H (\mc R_{P,u_P} ,q_{P,u_P}) \rtimes W_{P,u_P}^+ \text{ such that } \delta \cong \mr{Ind}_{\mc H (\mc R_{P,u_P} ,q_{P,u_P}) \rtimes W_{P,u_P}^+}^{\mc H_P}(\delta_1). \] Then automatically \[ \delta \circ \phi_t \cong \mr{Ind}_{\mc H (\mc R^P_{u_P} ,q^P_{u_P}) \rtimes W_{P,u_P}^+}^{\mc H^P} (\delta_1 \circ \phi_t) . \] Let $\delta'$ be an irreducible direct summand of the restriction of $\delta_1$ to $\mc H (\mc R_{P,u_P} ,q_{P,u_P})$, such that $u_P c_P$ is a weight of $\delta'$. 
Then $\delta_1 \circ \phi_t$ is a direct summand of \begin{equation} \mr{Ind}_{\mc H (\mc R^P_{u_P} ,q^P_{u_P})}^{\mc H (\mc R^P_{u_P} ,q^P_{u_P}) \rtimes W_{P,u_P}^+} (\delta' \circ \phi_t) , \end{equation} so $\pi^\Gamma (\xi)$ is a direct summand of \begin{equation}\label{eq:pi''xi} \mr{Ind}_{\mc H (\mc R^P_{u_P} ,q^P_{u_P})}^{\mc H \rtimes \Gamma} (\delta' \circ \phi_t) . \end{equation} By Theorem \ref{thm:2.2} $\delta'$ can also be regarded as a discrete series representation $\tilde \delta$ of $\mh H (\tilde{\mc R}_{P,u_P} ,k_{P,u_P})$ with central character $W(R_{P,u_P}) c_P$. Then $\delta' \circ \phi_t$ corresponds to the representation $\tilde \delta_{\log (c^P)}$ of $\mh H (\tilde{\mc R}^P_{u_P} ,k^P_{u_P})$. Let $\tilde P$ be the unique basis of $R_{P,u_P}$ contained in $R_0^+$. All in all $(P,\delta,t)$ gives rise to the induction datum $\tilde \xi = (\tilde P,\tilde \delta, \log (c^P))$ for the graded Hecke algebra $\mh H (\tilde{\mc R}_{P,u_P} ,k_{P,u_P})$. Since $R_{P,u_P}$ is a parabolic root subsystem of $R_u \,, \tilde \xi$ can also be regarded as an induction datum for $\mh H (\tilde {\mc R}_u ,k_u)$. Let us check the possible freedom in the above construction. All $\mc A_P$-weights of $\delta$ are in the same $W_P$-orbit, so another choice of $u_P c_P$ differs only by an element of $W_P$. All possible choices of $\delta'$ above are conjugated by the action of the group $W_{P,u_P}^+$, and $(W_0 \rtimes \Gamma)u$ is the unitary part of the central character of $\pi^\Gamma (\xi)$. Therefore $\xi$ determines the quadruple \[ \hat{\xi} := (u,\tilde P, \tilde \delta, \log (c^P)) \in \widehat{\Xi} \] uniquely modulo the action of $\mc G$. That yields a map $\Xi \to \widehat \Xi / \mc G, \: \xi \to \mc G \hat{\xi}$, and since the actions of $\mc G$ are defined in the same way on both sides, this map factors via $\Xi / \mc G$. 
By reversing the above steps one can reconstruct the representations $\delta' \circ \phi_t$ and \eqref{eq:pi''xi} from $\tilde \xi \,, u_P$ and $u^P$. In fact it suffices to know $\tilde \xi$ and the product $u = u_P u^P \in T_{un}$. Namely, the only additional ambiguity comes from the group $K_P$, but this is inessential since \[ (\delta' \circ \psi_{k^{-1}}) \circ \phi_{k t} \cong \delta' \circ \phi_t \text{ for } k \in K_P . \] By construction $\delta_1$ is a direct summand of $\mr{Ind}_{\mc H (\mc R_{P,u_P} ,q_{P,u_P})}^{\mc H (\mc R_{P,u_P} ,q_{P,u_P}) \rtimes W_{P,u_P}^+}(\delta')$, and the other constituents $\delta_j \; (1 < j \leq n)$ are also discrete series representations. Hence \eqref{eq:pi''xi} is a direct sum of finitely many parabolically induced representations \[ \pi^\Gamma (P, \mr{Ind}_{\mc H (\mc R_{P,u_P} ,q_{P,u_P}) \rtimes W_{P,u_P}^+}^{\mc H_P}(\delta_j),t) . \] Now Corollary \ref{cor:2.3} ensures that our map $\Xi / \mc G \to \widehat \Xi / \mc G$ is surjective and that the preimage of $\mc G (u,\tilde P, \tilde \delta, \log (c^P))$ consists precisely of the association classes \[ \mc G (P, \mr{Ind}_{\mc H (\mc R_{P,u_P} ,q_{P,u_P}) \rtimes W_{P,u_P}^+}^{\mc H_P}(\delta_j),t) \quad (1 \leq j \leq n) . \qquad \Box \] \emph{Remark.} Things simplify considerably if the group $W^+_{P,u_P}$ from \eqref{eq:WP+} is trivial: in that case the map $\Xi / \mc G \to \widehat{\Xi} / \mc G$ is bijective on $(P,\delta,T^P) / \mc G$. In many cases this group is indeed trivial, but not always. See \cite[Section 8]{OpSo2}, where $W^+_{P,u_P}$ is denoted $\Gamma_{s(e)}$. \section{The geometry of the dual space} \label{sec:dual} For any algebra $A$ the set Irr$(A)$ has a natural topology, the Jacobson topology. This is the noncommutative generalization of the Zariski topology: by definition all its closed sets are of the form \[ V(S) := \{ \pi \in \text{Irr}(A) : \pi (s) = 0 \: \forall s \in S \} \qquad S \subset A.
\] In this section we discuss the topology and the geometry of Irr$(\mc H \rtimes \Gamma)$, and we compare it with the dual of $\mc S \rtimes \Gamma$. This will be useful for the proof of Theorem \ref{thm:2.7} and for our discussion of periodic cyclic homology in Section \ref{sec:pch}. Parabolic induction gives, for every discrete series representation $\delta$ of a parabolic subalgebra~$\mc H_P$, a family of $\mc H \rtimes \Gamma$-representations $\pi^\Gamma (P,\delta,t)$, parametrized by $t \in T^P$. The group \[ \mc G_{P,\delta} := \{ g \in \mc G : g (P) = P, \delta \circ \psi_g^{-1} \cong \delta \} \] acts algebraically on $T^P$, and by Lemma \ref{lem:3.6} points in the same orbit lead to representations with the same irreducible subquotients. Theorem \ref{thm:3.10} allows us to associate to every $\pi \in \text{Irr}(\mc H \rtimes \Gamma)$ an induction datum $\xi^+ (\pi) \in \Xi^+$, unique modulo $\mc G$, such that $\pi$ is a quotient of $\pi^\Gamma (\xi^+ (\pi))$. For any subset $U \subset T^P$ we define \[ \mr{Irr}_{P,\delta,U} (\mc H \rtimes \Gamma ) = \{ \pi \in \text{Irr}(\mc H \rtimes \Gamma) : \mc G \xi^+ (\pi) \cap (P,\delta,U) \neq \emptyset \} . \] For $U = T^P$ or $U = \{t\}$ we abbreviate this to $\mr{Irr}_{P,\delta}(\mc H \rtimes \Gamma)$ or $\mr{Irr}_{P,\delta,t}(\mc H \rtimes \Gamma)$. \begin{prop}\label{prop:3.11} Let $U$ be a subset of $T^{P+} T^P_{un}$ such that every $g \in \mc G_{P,\delta}$ with $g U \cap U \neq \emptyset$ fixes $U$ pointwise. For arbitrary $t \in U$ there are canonical bijections \[ \mr{Irr}_{P,\delta} (\mc H_P \rtimes \Gamma_{P,\delta,t} ) \times U \rightarrow \mr{Irr}_{P,\delta,U} (\mc H^P \rtimes \Gamma_{P,\delta,t} ) \rightarrow \mr{Irr}_{P,\delta,U} (\mc H \rtimes \Gamma ) . 
\] \end{prop} \emph{Remark.} It is not unreasonable to expect that the Jacobson topology of $\mc H \rtimes \Gamma$ induces the Zariski topology on $\mr{Irr}_{P,\delta} (\mc H_P \rtimes \Gamma_{P,\delta,t} ) \times U$, where $\mr{Irr}_{P,\delta} (\mc H_P \rtimes \Gamma_{P,\delta,t} )$ is regarded as a discrete space. However, while it is easy to see that all $V(h)$ become Zariski-closed in $\mr{Irr}_{P,\delta} (\mc H_P \rtimes \Gamma_{P,\delta,t} ) \times U$, it is not clear that one can obtain all Zariski-closed subsets in this way. That might require some extra conditions on $U$. \emph{Proof.} By assumption every $t \in U$ has the same stabilizer $\mc G_{P,\delta,t} \subset \mc G_{P,\delta}$. According to Theorem \ref{thm:3.9} the operators $\{\pi^\Gamma (g,P,\delta,t) : g \in \mc G_{P,\delta,t} \}$ span $\mr{End}_{\mc H \rtimes \Gamma} (\pi^\Gamma (P,\delta,t))$. By definition all elements of $\mr{Irr}_{P,\delta,t}(\mc H \rtimes \Gamma)$ occur as a quotient of $\pi^\Gamma (P,\delta,t)$, but the latter representation also has other constituents if it is not completely reducible. We have to avoid that situation if we want to find a direct relation between $\mr{End}_{\mc H \rtimes \Gamma} (\pi^\Gamma (P,\delta,t))$ and $\mr{Irr}_{P,\delta,t}(\mc H \rtimes \Gamma)$. Since $\pi^\Gamma (P,\delta,t)$ and $\pi^\Gamma (g,P,\delta,t)$ are unitary for $t \in T^P_{un}$, there exists an open $\mc G_{P,\delta}$-stable tubular neighborhood $T^P_\epsilon$ of $T^P_{un}$ in $T^P$, such that $\pi^\Gamma (P,\delta,t)$ is completely reducible and $\pi^\Gamma (g,P,\delta,t)$ is regular and invertible, for all $t \in T^P_\epsilon$ and $g \in \mc G_{P,\delta}$. For every $t \in U$ we can find $r \in \R_{>0}$ such that $t |t|^{r-1} \in T^P_\epsilon$. Let $U_\epsilon \subset T^P_\epsilon$ be the resulting collection. 
For every $t \in U_\epsilon$ the algebras \[ \{ \pi^\Gamma (P,\delta,t) (h) : h \in \mc H \rtimes \Gamma \} \text{ and span}\{ \pi^\Gamma (g,P,\delta,t) : g \in \mc G_{P,\delta,t} \} \] are each other's commutant in \[ \mr{End}_\C (\pi^\Gamma (P,\delta,t)) = \mr{End}_\C \big( \C [\Gamma W^P] \otimes_\C V_\delta \big) . \] Hence there is a natural bijection between \begin{itemize} \item isotypical components of $\pi^\Gamma (P,\delta,t)$ as a $\mc H \rtimes \Gamma$-representation; \item isotypical components of $\pi^\Gamma (P,\delta,t)$ as a $\mc G_{P,\delta,t}$-representation. \end{itemize} The intertwining operators $\pi^\Gamma (g,P,\delta,t)$ are rational in $t \in T^P$, and regular on $T^P_\epsilon$. As there are only finitely many inequivalent $\mc G_{P,\delta,t}$-representations of a fixed finite dimension, the isomorphism class of $\pi^\Gamma (P,\delta,t)$ as a $\mc G_{P,\delta,t}$-representation does not depend on $t \in U_\epsilon$. This provides a natural bijection \begin{equation}\label{eq:bijectionU1} \mr{Irr}_{P,\delta,U_\epsilon}(\mc H \rtimes \Gamma) \longleftrightarrow \mr{Irr}_{P,\delta,t}(\mc H \rtimes \Gamma) \times U_\epsilon \qquad t \in U_\epsilon . \end{equation} The extended Langlands classification (Corollary \ref{cor:2.8}) shows that there is a canonical bijection $\mr{Irr}_{P,\delta,t |t|^{r-1}}(\mc H \rtimes \Gamma) \leftrightarrow \mr{Irr}_{P,\delta,t}(\mc H \rtimes \Gamma)$ for every $r \in \R_{>0}$, which allows us to extend \eqref{eq:bijectionU1} uniquely to \begin{equation}\label{eq:bijectionU} \mr{Irr}_{P,\delta,U}(\mc H \rtimes \Gamma) \longleftrightarrow \mr{Irr}_{P,\delta,t_0}(\mc H \rtimes \Gamma) \times U \qquad t_0 \in U . \end{equation} The above also holds with the algebras $\mc H \rtimes \Gamma_{P,\delta,t}$ or $\mc H_P \rtimes \Gamma_{P,\delta,t}$ in the role of $\mc H \rtimes \Gamma$.
Since the construction of the intertwiners corresponding to $g \in \mc G_{P,\delta,t}$ is the same in all three cases, we obtain natural isomorphisms \[ \mr{End}_{\mc H_P \rtimes \Gamma_{P,\delta,t}} \big( \mr{Ind}_{\mc H_P}^{\mc H_P \rtimes \Gamma_{P,\delta,t}} \delta \big) \cong \mr{End}_{\mc H^P \rtimes \Gamma_{P,\delta,t}} (\pi^{P,\Gamma_{P,\delta,t}} (P,\delta,t)) \cong \mr{End}_{\mc H \rtimes \Gamma} (\pi^\Gamma (P,\delta,t)) . \] Now the above bijection between isotypical components shows that the maps \[ \begin{array}{cccccc} \mr{Irr}_{P,\delta} (\mc H_P \rtimes \Gamma_{P,\delta,t} )& \to & \mr{Irr}_{P,\delta,t} (\mc H^P \rtimes \Gamma_{P,\delta,t} ) & \to & \mr{Irr}_{P,\delta,t} (\mc H \rtimes \Gamma ) \\ \rho & \mapsto & \rho \circ \phi_t & \mapsto & \mr{Ind}_{\mc H^P \rtimes \Gamma_{P,\delta,t}}^{\mc H \rtimes \Gamma} (\rho \circ \phi_t) \end{array} \] are bijective, and \eqref{eq:bijectionU} allows us to extend this from one $t$ to $U. \qquad \Box$ \\[2mm] Theorem \ref{thm:3.10} shows that $\mr{Irr}_{P,\delta}(\mc H \rtimes \Gamma)$ and $\mr{Irr}_{Q,\sigma}(\mc H \rtimes \Gamma)$ are either equal or disjoint, depending on whether or not $(P,\delta)$ and $(Q,\sigma)$ are $\mc G$-associate. The sets $\mr{Irr}_{P,\delta}(\mc H \rtimes \Gamma)$ are usually not closed in Irr$(\mc H \rtimes \Gamma)$, because we did not include all constituents of the representations $\pi^\Gamma (P,\delta,t)$. We can use their closures to define a stratification of Irr$(\mc H \rtimes \Gamma)$, and a corresponding stratification of Irr$(\mc S \rtimes \Gamma)$. By Corollary \ref{cor:3.2} we may identify $\mr{Irr}_{P,\delta} (\mc S \rtimes \Gamma)$ with the tempered part $\mr{Irr}_{P,\delta,T^P_{un}} (\mc H \rtimes \Gamma) $ of $\mr{Irr}_{P,\delta}(\mc H \rtimes \Gamma)$. Let $\Delta$ be a set of representatives for the action of $\mc G$ on pairs $(P,\delta)$. 
Then the cardinality of $\Delta$ equals the number of connected components of $\Xi_{un} / \mc G$ and by Theorem \ref{thm:3.8} \begin{equation}\label{eq:FourierDelta} \mc S \rtimes \Gamma \cong \bigoplus\nolimits_{(P,\delta) \in \Delta} \big( C^\infty (T^P_{un}) \otimes_\C \mr{End}(\C [\Gamma W^P] \otimes_\C V_\delta ) \big)^{\mc G_{P,\delta}} . \end{equation} \begin{lem}\label{lem:3.12} There exist filtrations by two-sided ideals \[ \begin{array}{ccccccc} \mc H \rtimes \Gamma = F_0 (\mc H \rtimes \Gamma) & \supset & F_1 (\mc H \rtimes \Gamma) & \supset & \cdots & \supset & F_{|\Delta |} (\mc H \rtimes \Gamma) = 0 ,\\ \mc S \rtimes \Gamma = F_0 (\mc S \rtimes \Gamma) & \supset & F_1 (\mc S \rtimes \Gamma) & \supset & \cdots & \supset & F_{|\Delta |} (\mc S \rtimes \Gamma) = 0 , \end{array} \] with $F_i (\mc H \rtimes \Gamma) \subset F_i (\mc S \rtimes \Gamma)$, such that for all $i > 0$: \enuma{ \item $\mr{Irr} \big( F_{i-1} (\mc S \rtimes \Gamma) / F_i (\mc S \rtimes \Gamma) \big) \cong \mr{Irr}_{P_i ,\delta_i}(\mc S \rtimes \Gamma)$, \item $\mr{Irr} \big( F_{i-1} (\mc H \rtimes \Gamma) / F_i (\mc H \rtimes \Gamma) \big) \cong \mr{Irr}_{P_i ,\delta_i}(\mc H \rtimes \Gamma)$. } \end{lem} \emph{Remark.} Analogous filtrations of Hecke algebras of reductive $p$-adic groups are described in \cite[Lemma 2.17]{SolPadic}. The proof in our setting is basically the same. \emph{Proof.} We number the elements of $\Delta$ such that \begin{equation}\label{eq:numberij} \norm{cc_{P_i}(\delta_i)} \geq \norm{cc_{P_j}(\delta_j)} \quad \text{if } i \leq j , \end{equation} and we define \[ \begin{array}{ccc} F_i (\mc H \rtimes \Gamma) & = & \{ h \in \mc H \rtimes \Gamma : \pi (h) = 0 \text{ for all } \pi \in \mr{Irr}_{P_j ,\delta_j}(\mc H \rtimes \Gamma) , j \leq i \} , \\ F_i (\mc S \rtimes \Gamma) & = & \{ h \in \mc S \rtimes \Gamma : \pi (h) = 0 \text{ for all } \pi \in \mr{Irr}_{P_j ,\delta_j}(\mc S \rtimes \Gamma) , j \leq i \} .
\end{array} \] Clearly $F_i (\mc H \rtimes \Gamma) \subset F_i (\mc S \rtimes \Gamma)$ and \[ F_{i-1}(\mc S \rtimes \Gamma) / F_i (\mc S \rtimes \Gamma) \cong \big( C^\infty (T^{P_i}_{un}) \otimes_\C \mr{End}(\C [\Gamma W^{P_i}] \otimes_\C V_{\delta_i} ) \big)^{\mc G_{P_i ,\delta_i}} , \] which establishes (a). For (b), we first show that \begin{equation}\label{eq:J-closed} \bigcup\nolimits_{j \leq i} \mr{Irr}_{P_j ,\delta_j} (\mc H \rtimes \Gamma) \end{equation} is closed in the Jacobson topology of Irr$(\mc H \rtimes \Gamma)$. Its Jacobson-closure consists of all irreducible subquotients $\pi$ of $\pi^\Gamma (P_j,\delta_j,t)$, for $j \leq i$ and $t \in T^{P_j}$. Suppose that $\pi \notin \mr{Irr}_{P_j ,\delta_j}(\mc H \rtimes \Gamma)$. By Theorem \ref{thm:3.10} $\pi \in \mr{Irr}_{P,\delta}(\mc H \rtimes \Gamma)$ for some discrete series representation $\delta$ of $\mc H_P$ with $\norm{cc_P (\delta)} > \norm{cc_{P_j}(\delta_j)}$. Select $(P_n ,\delta_n)$ and $g \in \mc G$ such that $g (P,\delta) = (P_n, \delta_n)$. Then $\pi \in \mr{Irr}_{P_n ,\delta_n}(\mc H \rtimes \Gamma)$ and \[ \norm{cc_{P_n}(\delta_n)} = \norm{cc_P (\delta)} > \norm{cc_{P_j}(\delta_j)}, \] so $n < j \leq i$ by \eqref{eq:numberij}. Therefore \eqref{eq:J-closed} is indeed Jacobson-closed and \[ \mr{Irr}(F_i (\mc H \rtimes \Gamma)) \cong \bigcup\nolimits_{j > i} \mr{Irr}_{P_j ,\delta_j} (\mc H \rtimes \Gamma) , \] which implies (b). $\qquad \Box$ \\[2mm] The filtrations from Lemma \ref{lem:3.12} help us to compare the dual of $\mc H \rtimes \Gamma$ with its tempered dual, which can be identified with the dual of $\mc S \rtimes \Gamma$. The space $\mr{Irr}_{P,\delta}(\mc H \rtimes \Gamma)$ comes with a finite-to-one projection onto \begin{equation}\label{eq:projPdelta} T^{P+}T^P_{un} / \mc G_{P,\delta} \cong T^P / (W (R_P) \rtimes \mc G_{P,\delta}) .
\end{equation} The subspace $\mr{Irr}_{P,\delta}(\mc S \rtimes \Gamma) \subset \mr{Irr}_{P,\delta}(\mc H \rtimes \Gamma)$ is the inverse image of $T^P_{un} / (W (R_P) \rtimes \mc G_{P,\delta})$ under this projection. By Proposition \ref{prop:3.11} the fiber at $t \in T^P$ essentially depends only on the stabilizer $\mc G_{P,\delta,t}$. Since $\mc G_{P,\delta}$ acts algebraically on $T^P$, the collection of points $t \in T^P$ such that the fiber at $(W_P \rtimes \mc G_{P,\delta})t$ has exactly $m$ points (for some fixed $m \in \N$) is a complex affine variety, say $T^{P,m}$. As the action of $\mc G_{P,\delta}$ stabilizes $T^P_{un}$, the variety $T^{P,m}$ is already determined by its intersection with $T^P_{un}$. Hence one can reconstruct the set $\mr{Irr}_{P,\delta}(\mc H \rtimes \Gamma)$ from $\mr{Irr}_{P,\delta}(\mc S \rtimes \Gamma)$. With these insights we can finally complete the proof of property (a) of our affine Springer correspondence. \\[2mm] \emph{Continuation of the proof of Theorem \ref{thm:2.7}}.\\ Extend $\Spr$ to a $\Q$-linear map \begin{equation}\label{eq:SprQ} \Spr_\Q : G_\Q (\mc H \rtimes \Gamma) \to G_\Q (X \rtimes (W_0 \rtimes \Gamma)) . \end{equation} We have to show that $\Spr_\Q$ is a bijection. To formulate the proof we introduce some new terminology, partly taken from \cite{Opd-ICM}. We know from Lemma \ref{lem:3.1} that representations of the form \[ \pi^{P,\Gamma_{P,\delta,u}} (P,\delta,u) = \mr{Ind}_{\mc H^P}^{\mc H^P \rtimes \Gamma_{P,\delta,u}} (\delta \circ \phi_u) \quad \text{with } (P,\delta,u) \in \Xi_{un} \] are tempered and unitary. Let $\sigma$ be an irreducible summand of such an $\mc H^P \rtimes \Gamma_{P,\delta,u}$-representation and let $T_1^{W(R_P) \rtimes \Gamma_{P,\delta,u}}$ be the connected component of $T^{W(R_P) \rtimes \Gamma_{P,\delta,u}}$ that contains $1 \in T$. 
We call \begin{equation}\label{eq:SmoothFamily} \big\{ \mr{Ind}_{\mc H^P \rtimes \Gamma_{P,\delta,u}}^{\mc H \rtimes \Gamma} (\sigma \circ \phi_t) : t \in T_1^{W(R_P) \rtimes \Gamma_{P,\delta,u}} \big\} \end{equation} a smooth $d$-dimensional family of representations, where $d$ is the dimension of the complex algebraic variety $T^{W(R_P) \rtimes \Gamma_{P,\delta,u}}$. If we restrict the parameter $t$ to $T_{un}$, then we add the adjective tempered to this description. We note that these representations are irreducible for $t$ in a Zariski-open subset of $T_1^{W(R_P) \rtimes \Gamma_{P,\delta,u}}$. The proof of Proposition \ref{prop:3.3}.b shows that \[ \mr{Ind}_{\mc H^P \rtimes \Gamma_{P,\delta,u}}^{\mc H \rtimes \Gamma} (\sigma \circ \phi_t) \] is always a direct sum of representations of the form $\pi^\Gamma (P',\sigma',t',\rho')$, where $(P',\sigma',t',\rho')$ is almost a Langlands datum for $\mc H \rtimes \Gamma$, the only difference being that $|t'| \in T^{P'}_{rs}$ need not be positive. Nevertheless we can specify a unique ``Langlands constituent'' of $\pi^\Gamma (P',\sigma',t',\rho')$, by the following procedure. Choose $g \in \mc G$ such that $g(P',\delta',t') \in \Xi^+$. By Proposition \ref{prop:3.3}.b $g (P',\sigma',t',\rho')$ is a Langlands datum for $\mc H \rtimes \Gamma$ and, as in Lemma \ref{lem:3.6}, $\pi^\Gamma (g (P',\sigma',t',\rho'))$ has the same trace and the same semisimplification as $\pi^\Gamma (P',\sigma',t',\rho')$. We define \begin{equation}\label{eq:LanglandsC} L \big( \pi^\Gamma (P',\sigma',t',\rho') \big) = L^\Gamma (g (P',\sigma',t',\rho')) . \end{equation} In view of Corollary \ref{cor:2.8} the Langlands constituent appears only once in $\pi^\Gamma (P',\sigma',t',\rho')$ and Lemma \ref{lem:2.10}.b characterizes it as the constituent which is minimal in the appropriate sense.
Let $L \big( \mr{Ind}_{\mc H^P \rtimes \Gamma_{P,\delta,u}}^{\mc H \rtimes \Gamma} (\sigma \circ \phi_t) \big)$ be the direct sum of the Langlands constituents of the irreducible summands of $\mr{Ind}_{\mc H^P \rtimes \Gamma_{P,\delta,u}}^{\mc H \rtimes \Gamma} (\sigma \circ \phi_t)$. The family \begin{equation}\label{eq:LSmoothFamily} \big\{ L \big( \mr{Ind}_{\mc H^P \rtimes \Gamma_{P,\delta,u}}^{\mc H \rtimes \Gamma} (\sigma \circ \phi_t) \big) : t \in T_1^{W(R_P) \rtimes \Gamma_{P,\delta,u}} \big\} \end{equation} cannot be called smooth, because the traces of these representations do not depend continuously on $t$. Let us call it an L-smooth $d$-dimensional family of representations. By properties (d) and (e) of Theorem \ref{thm:2.7} \begin{equation}\label{eq:SprFamilies} \Spr_\Q L \big( \mr{Ind}_{\mc H^P \rtimes \Gamma_{P,\delta,u}}^{\mc H \rtimes \Gamma} (\sigma \circ \phi_t) \big) = \mr{Ind}_{X \rtimes (W (R_P) \rtimes \Gamma_{P,\delta,u})}^{X \rtimes (W_0 \rtimes \Gamma)} (\Spr_\Q (\sigma) \circ \phi_t) . \end{equation} The right hand side is almost a smooth $d$-dimensional family of $X \rtimes (W_0 \rtimes \Gamma)$-representations. Not entirely, because $\Spr_\Q (\sigma)$ is in general reducible and because a priori this family could be only a part of a higher dimensional smooth family. Let $G_\Q^d (\mc H \rtimes \Gamma)$ be the $\Q$-submodule of $G_\Q (\mc H \rtimes \Gamma)$ spanned by the representations \eqref{eq:LSmoothFamily}, for all L-smooth families of dimension at least $d \in \Z_{\geq 0}$. This is a decreasing sequence of $\Q$-submodules of $G_\Q (\mc H \rtimes \Gamma)$; by convention \[ G_\Q^0 (\mc H \rtimes \Gamma) = G_\Q (\mc H \rtimes \Gamma) \] and $G_\Q^d (\mc H \rtimes \Gamma) = 0$ when $d > \dim_\C (T) = \text{rank}(X)$. We define $G_\Q^d (\mc S \rtimes \Gamma)$ analogously, with tempered smooth families of dimension at least $d$.
Now \eqref{eq:SprFamilies} says that \begin{equation} \begin{array}{lll} \Spr_\Q \big( G_\Q^d (\mc H \rtimes \Gamma) \big) & \subset & G_\Q^d (X \rtimes W_0 \rtimes \Gamma) , \\ \Spr_\Q \big( G_\Q^d (\mc S \rtimes \Gamma) \big) & \subset & G_\Q^d (\mc S (X) \rtimes W_0 \rtimes \Gamma) . \end{array} \end{equation} Let us consider the graded vector spaces associated to these filtrations. For tempered representations $\Spr_\Q$ induces $\Q$-linear maps \begin{equation}\label{eq:SprTempGr} G_\Q^d (\mc S \rtimes \Gamma) / G_\Q^{d+1} (\mc S \rtimes \Gamma) \to G_\Q^d (\mc S (W^e) \rtimes \Gamma) / G_\Q^{d+1} (\mc S (W^e) \rtimes \Gamma) . \end{equation} We proved in Section \ref{sec:Springer} that \begin{equation} \Spr_\Q : G_\Q (\mc S \rtimes \Gamma) \to G_\Q (\mc S (W^e) \rtimes \Gamma) \end{equation} is bijective, so \eqref{eq:SprTempGr} is bijective for all $d \in \Z_{\geq 0}$. We will show that \begin{equation}\label{eq:SprGr} G_\Q^d (\mc H \rtimes \Gamma) / G_\Q^{d+1} (\mc H \rtimes \Gamma) \to G_\Q^d (X \rtimes W_0 \rtimes \Gamma) / G_\Q^{d+1} (X \rtimes W_0 \rtimes \Gamma) \end{equation} is bijective as well. For $d > \dim_\C (T)$ there is nothing to prove, so take $d \leq \dim_\C (T)$. Pick smooth $d$-dimensional families $\{ \pi_{i,t} : i \in I, t \in V_i \}$ such that \begin{equation}\label{eq:basisSmoothFam} \big\{ \pi_{i,t} : i \in I , t \in (V_i \cap T^P_{un}) / \mc G_{P,\delta} \big\} \end{equation} is a basis of $G_\Q^d (\mc S \rtimes \Gamma) / G_\Q^{d+1} (\mc S \rtimes \Gamma)$. The bijectivity of \eqref{eq:SprTempGr} implies that \[ \big\{ \Spr_\Q (\pi_{i,t}) : i \in I , t \in (V_i \cap T^P_{un}) / \mc G_{P,\delta} \big\} \] is a basis of $G_\Q^d (\mc S (W^e) \rtimes \Gamma) / G_\Q^{d+1} (\mc S (W^e) \rtimes \Gamma)$. The discussion following \eqref{eq:projPdelta} shows that L-smooth $d$-dimensional families and tempered smooth $d$-dimensional families are in natural bijection. 
Hence \[ \big\{ L (\pi_{i,t}) : i \in I , t \in V_i / \mc G_{P,\delta} \big\} \] is a basis of $G_\Q^d (\mc H \rtimes \Gamma) / G_\Q^{d+1} (\mc H \rtimes \Gamma)$ and \[ \big\{ \Spr_\Q (\pi_{i,t}) : i \in I , t \in V_i / \mc G_{P,\delta} \big\} \] is a basis of $G_\Q^d (X \rtimes W_0 \rtimes \Gamma) / G_\Q^{d+1} (X \rtimes W_0 \rtimes \Gamma)$. Therefore \eqref{eq:SprGr} is indeed bijective. From this we deduce, with some standard applications of the five lemma, that \eqref{eq:SprQ} is a bijection. $\qquad \Box$ \chapter{Parameter deformations} \label{chapter:scaling} Let $\mc H = \mc H (\mc R,q)$ be an affine Hecke algebra associated to an equal parameter function $q$. Varying the parameter $q \in \C^\times$ yields a family of algebras, whose members are specializations of an affine Hecke algebra with a formal variable $\mathbf q$. The Kazhdan--Lusztig parametrization of Irr$(\mc H (\mc R,q))$ \cite{KaLu,Ree2} provides a bijection between the irreducible representations of $\mc H (\mc R,q)$ and $\mc H (\mc R,q')$, as long as $q,q' \in \C^\times$ are not roots of unity. Moreover, every $\pi_q \in \mr{Irr}(\mc H (\mc R,q))$ is part of a family of representations $\{ \pi_q : q \in \C^\times \}$ which depends algebraically on $q$. It is our conviction that a similar structure underlies the representation theory of affine Hecke algebras with unequal parameters. However, at present a proof seems to be out of reach. We have more control when we restrict ourselves to positive parameter functions and to parameter deformations of the form $q \mapsto q^\ep$ with $\ep \in \R$. We call this scaling of the parameter function, because it corresponds to multiplying the parameters of a graded Hecke algebra by $\ep$. Notice that $\mc H (\mc R,q^0) = \C [W^e]$. We can relate representations of $\mc H (\mc R,q)$ to $\mc H (\mc R,q^\ep)$-representations by applying a similar scaling operation on suitable subsets of the space $T \cong \mr{Irr}(\mc A)$.
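To illustrate the scaling $q \mapsto q^\ep$ in the simplest case, consider a simple reflection $s \in W^e$ and assume the common convention that it contributes a quadratic relation $(T_s - q(s))(T_s + 1) = 0$ (other normalizations differ only by a rescaling of $T_s$). Then in $\mc H (\mc R, q^\ep)$ the relation becomes \[ (T_s - q(s)^\ep)(T_s + 1) = 0 , \] and specializing $\ep = 0$ yields $(T_s - 1)(T_s + 1) = 0$, that is, $T_s^2 = 1$, the defining relation of $s$ in the group algebra $\C [W^e]$.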
We construct a family of functors \[ \se : \mr{Mod}_f (\mc H (\mc R,q)) \to \mr{Mod}_f (\mc H(\mc R,q^\ep)) \] each of which is an equivalence of categories for $\ep \neq 0$, and which preserves many properties of representations (Corollary \ref{cor:4.4}). In particular this provides families of representations $\{ \se (\pi) : \ep \in \R \}$ that depend analytically on $\ep$. The Schwartz algebra $\mc S (\mc R,q)$ behaves even better under scaling of the parameter function $q$. As $q$ can be varied in several directions, we have a higher dimensional family of Fr\'echet algebras $\mc S (\mc R,q)$, which is known to depend continuously on $q$ \cite[Appendix A]{OpSo2}. This was exploited for the main results of \cite{OpSo2}, but the techniques used there to study general deformations of $q$ are specific to discrete series representations. To make progress on the other series, we only scale $q$. Via a detailed study of the Fourier transform of $\mc S (\mc R,q)$ (see Theorem \ref{thm:3.8}) we construct homomorphisms of Fr\'echet *-algebras \[ \Sc_\ep : \mc S (\mc R,q^\ep ) \to \mc S (\mc R,q) \qquad \ep \in [0,1], \] which depend piecewise analytically on $\ep$ and are isomorphisms for $\ep > 0$ (Theorem \ref{thm:4.8}). It is not known whether this is possible with $\mc H (\mc R,q)$ instead of $\mc S (\mc R,q)$, when $q$ is not an equal parameter function. The most remarkable part is that these maps extend continuously to $\ep = 0$, that is, to a map $\Sc_0 : \mc S (W^e) \to \mc S (\mc R,q)$. Of course $\Sc_0$ cannot be an isomorphism, but it is injective and in some ways behaves like an isomorphism. In fact, we show that for irreducible tempered $\mc H (\mc R,q)$-representations the affine Springer correspondence from Section \ref{sec:Springer} and the functors $\tilde \sigma_0$ and $\pi \mapsto \pi \circ \Sc_0$ agree (Corollary \ref{cor:Sprsigma0}).
\section{Scaling Hecke algebras} \label{sec:scarep} As we saw in Section \ref{sec:Springer}, there is a correspondence between tempered representations of $\mc H \rtimes \Gamma$ and of $W^e \rtimes \Gamma$. On central characters this correspondence has the effect of forgetting the real split part and retaining the unitary part. These elements of $T / (W_0 \rtimes \Gamma)$ are connected by the path $(W_0 \rtimes \Gamma) c^\ep u$, with $\ep \in [0,1]$. Opdam \cite[Section 5]{Opd-Sp} was the first to realize that one can interpolate not only the central characters, but also the representations themselves. In this section we will recall and generalize the relevant results of Opdam. In contrast to the previous sections we will not include an extra diagram automorphism group $\Gamma$ in our considerations, as the notation is already heavy enough. However, it can be checked easily that all the results admit obvious generalizations with such a $\Gamma$. First we discuss the situation for graded Hecke algebras, which is considerably easier. Let $V \subset \mf t$ be a nonempty open $W_0$-invariant subset. Recall the elements $\tilde \imath_w \in C^{me}(V)^{W_0} \otimes_{Z(\mh H (\tilde{\mc R},k))} \mh H (\tilde{\mc R},k)$ from Proposition \ref{prop:3.4}. Given any $\ep \in \C$ we define a scaling map \[ \lambda_\ep : V \to \ep V ,\; v \mapsto \ep v . \] For $\ep \neq 0$ it induces an algebra isomorphism \begin{align} \nonumber m_\ep : C^{me}(\ep V)^{W_0} \otimes_{Z(\mh H (\tilde{\mc R},\ep k))} \mh H (\tilde{\mc R},\ep k) & \to C^{me}(V)^{W_0} \otimes_{Z(\mh H (\tilde{\mc R},k))} \mh H (\tilde{\mc R},k) , \\ f \tilde \imath_{w,\ep} & \mapsto (f \circ \lambda_\ep) \tilde \imath_w .
\label{eq:defmep} \end{align} Let us calculate the image of a simple reflection $s_\alpha \in S_0$: \begin{align*} m_\ep (1 + s_\alpha) & = m_\ep \big( \tilde c_{\alpha,\ep}^{-1} (1 + \tilde \imath_{s_\alpha,\ep}) \big) = m_\ep \Big( \frac{\alpha}{\ep k_\alpha + \alpha} (1 + \tilde \imath_{s_\alpha,\ep}) \Big) = \frac{\ep \alpha}{\ep k_\alpha + \ep \alpha} (1 + \tilde \imath_{s_\alpha}) \\ & = \tilde c_\alpha^{-1} (1 + \tilde \imath_{s_\alpha}) = 1 + s_\alpha . \end{align*} That is, $m_\ep (w) = w$ for all $w \in W_0 \subset \mh H (\tilde{\mc R}, \ep k)$, so $m_\ep$ is indeed the same as \eqref{eq:1.3}. In particular we see that $m_\ep$ can be defined without using meromorphic functions. These maps have a limit homomorphism \begin{equation} \begin{array}{ccccc} m_0 : & \mh H (\tilde{\mc R},0) = \mc O (\mf t) \rtimes W_0 & \to & \mh H (\tilde{\mc R},k) , \\ & f w & \mapsto & f (0) w , \end{array} \end{equation} with the property that \[ \C \to C^{me}(V)^{W_0} \otimes_{Z(\mh H (\tilde{\mc R},k))} \mh H (\tilde{\mc R},k) : \ep \mapsto m_\ep (f w) \] is analytic for all $f \in \mc O (\mf t)$ and $w \in W_0$. From now on we assume that $\ep$ is real. Let $q^\ep$ be the parameter function $q^\ep (w) = q(w)^\ep$, which is well-defined because $q(w) \in \R_{>0}$ for all $w \in W^e$. We obtain a family of algebras $\mc H_\ep = \mc H (\mc R, q^\ep)$ with $\mc H (\mc R ,q^0) = \C [W^e]$. To relate representations of these algebras we use their analytic localizations, as in Section \ref{sec:localiz}. Let $c_{\alpha,\ep}$ be the $c$-function with respect to $(\mc R ,q^\ep)$, as in \eqref{eq:calpha}. For $\ep \in [-1,1] \setminus \{0\}$ the ball $\ep B \subset \mf t$ satisfies the conditions \ref{cond:ball} with respect to $u c^\ep \in T$ and the parameter function $q^\ep$ (which enters via the function $c_{\alpha,\ep}$). For $\ep = 0$ the point $\ep B = \{0\}$ trivially fulfills all the conditions, except that it is not open.
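Before localizing at the affine level, we note what the graded-level maps $m_\ep$ from \eqref{eq:defmep} do on the commutative subalgebra; this is only an elementary check of the analyticity and of the limit homomorphism $m_0$. For $w = e$ we have $\tilde \imath_e = 1$, so for $f \in \mc O (\mf t)$ \[ m_\ep (f) = f \circ \lambda_\ep , \qquad m_\ep (f)(v) = f (\ep v) \to f(0) = m_0 (f) \quad \text{as } \ep \to 0 . \]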
We write \[ U_\ep = W_0 u c^\ep \exp (\ep B) \qquad \ep \in [-1,1] , \] and we define a $W_0$-equivariant scaling map \[ \sigma_\ep : U_1 \to U_\ep \,, \; w(uc \exp (b)) \mapsto w (u c^\ep \exp (\ep b)) . \] As was noted in \cite[Lemma 5.1]{Opd-Sp}, $\sigma_\ep$ is an analytic diffeomorphism for $\ep \neq 0$, while $\sigma_0$ is a locally constant map with range $U_0 = W_0 u$. Let $\imath^0_{w,\ep}$ be the element constructed in Proposition \ref{prop:3.4}. By \eqref{eq:isoCme} $\sigma_\ep$ induces an algebra isomorphism \begin{equation}\label{eq:defrhoep} \begin{array}{ccll} \rho_\ep : \mc H^{me}_\ep (U_\ep) & \to & \mc H^{me}(U) & \ep \in [-1,1] \setminus \{0\} , \\ f \imath^0_{w,\ep} & \mapsto & (f \circ \sigma_\ep) \imath^0_w , \end{array} \end{equation} where $f \in C^{me}(U_\ep)$ and $w \in W_0$. We will show that these maps depend continuously on $\ep$ and have a well-defined limit as $\ep \to 0$. \begin{lem}\label{lem:4.1} For $\ep \in [-1,1] \setminus \{0\}$ and $\alpha \in R_0$ write $d_{\alpha,\ep} = (c_{\alpha, \ep} \circ \sigma_\ep) c_\alpha^{-1} \in C^{me}(U)$. This defines a bounded invertible analytic function on $U \times ([-1,1] \setminus \{0\})$, which extends to a function on $\overline U \times [-1,1]$ with the same properties. \end{lem} \emph{Proof.} This extends \cite[Lemma 5.2]{Opd-Sp} to $\ep = 0$. 
Let us write \begin{multline*} d_{\alpha, \ep}(t) \quad = \quad {\ds \frac{f_1 f_2 f_3 f_4 }{g_1 g_2 g_3 g_4}(t) \quad = \quad \frac{1 + \theta_{-\alpha} (t)}{1 + \theta_{-\alpha} (\sigma_\ep (t))} \: \times } \\ {\ds \frac{1 + q(s_\alpha)^{-\ep/2} q(t_\alpha s_\alpha)^{\ep /2} \theta_{-\alpha} (\sigma_\ep (t))}{1 + q(s_\alpha)^{-1/2} q(t_\alpha s_\alpha)^{1/2} \theta_{-\alpha}(t)} \frac{1 - \theta_{-\alpha}(t)}{1 - \theta_{-\alpha}(\sigma_\ep (t))} \frac{1 - q(s_\alpha)^{-\ep/2} q(t_\alpha s_\alpha)^{-\ep /2} \theta_{-\alpha}(\sigma_\ep (t))}{1 - q(s_\alpha)^{-1/2} q(t_\alpha s_\alpha)^{-1/2} \theta_{-\alpha}(t)} } \end{multline*} We see that $d_{\alpha ,\ep}(t)$ extends to an invertible analytic function on $\overline U \times [-1,1]$ if none of the quotients $f_n / g_n$ has a zero or a pole on this domain. By Condition \ref{cond:ball}.c there is a unique $b \in w ( \log (c) + \overline B )$ such that $t = w(u ) \exp (b)$. This defines a coordinate system on $w ( u c ) \exp (\overline B)$, and $\sigma_\ep (t) = w(u ) \exp (\ep b)$. By Condition \ref{cond:ball}.d, if either $f_n (t) = 0$ or $g_n (t) = 0$ for some $t \in w ( uc \exp (\overline B)) \subset \overline U$, then $f_n (w(uc)) = g_n (w(uc)) = 0$. One can easily check that in this situation \[ {\ds \frac{f_n (t)}{g_n (t)} = \left( \frac{1 - e^{-\alpha (b) \ep}}{1 - e^{-\alpha (b)}} \right)^{(-1)^n} } . \] Again by Condition \ref{cond:ball}.d the only critical points of this function are those for which $\alpha (b) = 0$. For $\ep \neq 0$ both the numerator and the denominator have a zero of order 1 at such points, so the singularity is removable. For the case $\ep = 0$ we need to have a closer look. 
In our new coordinate system we can write \begin{multline*} c_{\alpha ,\ep}(\sigma_\ep (t)) \quad = \quad {\ds \frac{f_2 (t) f_4 (t)}{g_1 (t) g_3 (t)}} \quad =\\ {\ds \frac{u (w^{-1} \alpha) + \big( q(s_\alpha)^{-1/2} q(t_\alpha s_\alpha)^{1/2} e^{-\alpha (b)} \big)^\ep}{u (w^{-1} \alpha) + e^{-\alpha (b) \ep}} \frac{u (w^{-1} \alpha) - \big( q(s_\alpha)^{-1/2} q(t_\alpha s_\alpha)^{-1/2} e^{-\alpha (b)} \big)^\ep}{u (w^{-1} \alpha) - e^{-\alpha (b) \ep}} } . \end{multline*} Standard calculations using L'Hospital's rule show that \[ \lim_{\ep \to 0} c_{\alpha ,\ep}(\sigma_\ep (t)) = \left\{ \begin{array}{c@{\qquad\mr{if}\quad}ccc} 1 & u (w^{-1} \alpha )^2 & \neq & 1 \\ {\ds \frac{2 \alpha (b) + \log q(s_\alpha) - \log q(t_\alpha s_\alpha)}{2 \alpha (b)} } & u (w^{-1} \alpha ) & = & -1 \\ {\ds \frac{2 \alpha (b) + \log q(s_\alpha) + \log q(t_\alpha s_\alpha)}{2 \alpha (b)} } & u (w^{-1} \alpha ) & = & 1 . \end{array} \right. \] In other words, \begin{equation}\label{eq:lim0cep} \lim_{\ep \to 0} c_{\alpha ,\ep}(\sigma_\ep (t)) = (k_{u,\alpha} + \alpha (b)) / \alpha (b) = \tilde c_\alpha (b) \quad \text{if } u (w^{-1} \alpha)^2 = 1 . \end{equation} Thus at least $d_{\alpha ,0} = \lim_{\ep \to 0} d_{\alpha ,\ep}$ exists as a meromorphic function on $\overline U$. For $u (w^{-1} \alpha )^2 \neq 1$, $d_{\alpha ,0} = c_\alpha^{-1}$ is invertible by Condition \ref{cond:ball}.d. 
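As an aside, the three limits above can be sanity-checked numerically. The sketch below is our addition and not part of the argument; the names are ad hoc and the test values arbitrary, with `u` standing for $u(w^{-1}\alpha)$, `q1` for $q(s_\alpha)$, `q2` for $q(t_\alpha s_\alpha)$ and `x` for $\alpha(b)$.

```python
import math

def c_eps(u, q1, q2, x, eps):
    # Scalar form of c_{alpha,eps}(sigma_eps(t)) in the coordinates of the
    # proof: u = u(w^{-1} alpha), q1 = q(s_alpha), q2 = q(t_alpha s_alpha),
    # x = alpha(b).
    num1 = u + (q1 ** -0.5 * q2 ** 0.5 * math.exp(-x)) ** eps
    den1 = u + math.exp(-x * eps)
    num2 = u - (q1 ** -0.5 * q2 ** -0.5 * math.exp(-x)) ** eps
    den2 = u - math.exp(-x * eps)
    return (num1 / den1) * (num2 / den2)

def claimed_limit(u, q1, q2, x):
    # The three cases of the displayed eps -> 0 limit (L'Hospital).
    if u == 1:
        return (2 * x + math.log(q1) + math.log(q2)) / (2 * x)
    if u == -1:
        return (2 * x + math.log(q1) - math.log(q2)) / (2 * x)
    return 1.0

# arbitrary test data with q1, q2 > 0 and alpha(b) != 0
q1, q2, x = 4.0, 9.0, 0.7
for u in (1, -1):
    print(u, c_eps(u, q1, q2, x, 1e-7), claimed_limit(u, q1, q2, x))
```

At $\ep = 10^{-7}$ the evaluated products agree with the claimed limits to high precision, in both removable-singularity cases $u(w^{-1}\alpha) = \pm 1$.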
For $u (w^{-1} \alpha) = -1$ we have \[ {\ds d_{\alpha ,0}(t) = \frac{1 - e^{-\alpha (b)}}{\alpha (b)} \frac{\alpha (b) + \log \big( q(s_\alpha) / q(t_\alpha s_\alpha) \big) / 2}{1 - q(s_\alpha)^{-1/2} q(t_\alpha s_\alpha)^{1/2} e^{-\alpha (b)}} \frac{1 + e^{-\alpha (b)}}{1 + q(s_\alpha)^{-1/2} q(t_\alpha s_\alpha)^{-1/2} e^{-\alpha (b)}} } \] while for $u (w^{-1} \alpha ) = 1$ \[ {\ds d_{\alpha ,0}(t) = \frac{1 - e^{-\alpha (b)}}{\alpha (b)} \frac{1 + e^{-\alpha (b)}}{1 + q(s_\alpha)^{-1/2} q(t_\alpha s_\alpha)^{1/2} e^{-\alpha (b)}} \frac{\alpha (b) + \log \big( q(s_\alpha) q(t_\alpha s_\alpha) \big) / 2}{1 - q(s_\alpha)^{-1/2} q(t_\alpha s_\alpha)^{-1/2} e^{-\alpha (b)}} } \] These expressions define invertible functions by Condition \ref{cond:ball}.c. We conclude that $d_{\alpha ,\ep}(t)$ and $d_{\alpha ,\ep}^{-1}(t)$ are indeed analytic functions on $\overline U \times [-1,1]$. Since this domain is compact, they are bounded. $\qquad \Box$ \\[2mm] We can use Lemma \ref{lem:4.1} to show that the maps $\rho_\ep$ preserve analyticity: \begin{prop}\label{prop:4.2} The maps \eqref{eq:defrhoep} restrict to isomorphisms of topological algebras \[ \rho_\ep : \mc H_\ep^{an}(U_\ep ) \isom \mc H^{an}(U) \] There is a well-defined limit homomorphism \[ \rho_0 = \lim_{\ep \to 0} \rho_\ep : \mh C [W^e] \to \mc H^{an}(U) \] such that for every $w \in W^e$ the map \[ [-1,1] \to \mc H^{an}(U) : \ep \mapsto \rho_\ep (N_w ) \] is analytic. \end{prop} \emph{Proof.} The first statement is \cite[Theorem 5.3]{Opd-Sp}, but for the remainder we need to prove this anyway. It is clear that $\rho_\ep$ restricts to an isomorphism between $C^{an}(U_\ep )$ and $C^{an}(U)$. 
For a simple reflection $s_\alpha \in S_0$ corresponding to $\alpha \in F_0$ we have \begin{equation}\label{eq:5.19} \begin{array}{lll} N_{s} + q(s)^{-\ep /2} & = & q(s)^{\ep /2} c_{\alpha ,\ep} (\imath^0_{s,\ep} + 1) \\ \rho_\ep (N_s ) & = & q(s)^{\ep /2} (c_{\alpha ,\ep} \circ \sigma_\ep )(\imath^0_s + 1) - q(s)^{-\ep /2} \\ & = & q(s)^{(\ep -1)/2}(c_{\alpha ,\ep} \circ \sigma_\ep ) c_\alpha^{-1} \big( N_s + q(s)^{-1/2} \big) - q(s)^{-\ep /2} \\ & = & q(s)^{(\ep -1)/2} d_{\alpha ,\ep} \big( N_s + q(s)^{-1/2} \big) - q(s)^{-\ep /2} \\ \end{array} \end{equation} By Lemma \ref{lem:4.1} such elements are analytic in $\ep \in [-1,1]$ and $t \in U$, so in particular they belong to $\mc H^{an}(U)$. Moreover, since every $d_{\alpha ,\ep}$ is invertible and by \eqref{eq:isoCme}, the set $\{ \rho_\ep (N_w ) : w \in W_0 \}$ is a basis for $\mc H^{an}(U)$ as a $C^{an}(U)$-module. Therefore $\rho_\ep$ restricts to an isomorphism between the topological algebras $\mc H_\ep^{an}(U_\ep)$ and $\mc H^{an}(U)$ for $\ep \neq 0$. Given any $w \in W^e$, there is a unique $x \in X^+$ such that $w \in W_0 x W_0$. By Lemma \ref{lem:1.2} there exist unique coefficients $c_{w,u,v}(q) \in \C$ such that \[ N_w = \sum_{u \in W^x , v \in W_0} c_{w,u,v}(q) N_u \theta_x N_v . \] From \eqref{eq:multrules} we see that in fact $c_{w,u,v}(q) \in \Q \big( \{ q(s)^{1/2} : s \in S^\af \} \big)$, so in particular these coefficients depend analytically on $q$. Moreover $\rho_\ep (\theta_x ) = \theta_x \circ \sigma_\ep$ depends analytically on $\ep$, as a function on $U$, so \[ \rho_\ep (N_w ) = \sum_{u \in W^x , v \in W_0} c_{w,u,v}(q^\ep) \rho_\ep (N_u) (\theta_x \circ \sigma_\ep) \rho_\ep (N_v) \] is analytic in $\ep \in [-1,1]$. Thus $\rho_0$ exists as a linear map. But, being a limit of algebra homomorphisms, it must also be multiplicative. 
$\qquad \Box$ \\[2mm] Suppose that $u \in T_{un}$ is $W_0$-invariant, so that \[ \exp_u : W_0 (\log (c) + B) \to U = u W_0 c \exp (B) \] is a $W_0$-equivariant bijection. Then clearly \begin{equation}\label{eq:epexpu} \sigma_\ep \circ \exp_u = \exp_u \circ \lambda_\ep : \ep W_0 (\log (c) + B) \to U_\ep . \end{equation} Let $\Phi_u$ be as in \eqref{eq:Phiu}, with $V = W_0 \log (c) + B$. It follows from \eqref{eq:epexpu}, \eqref{eq:defmep} and \eqref{eq:defrhoep} that \[ m_\ep \circ \Phi_{u,\ep} = \Phi_u \circ \rho_\ep \qquad \ep \in [-1,1] \setminus \{0\} \] as maps $\mc H^{me}_\ep (U_\ep) \to C^{me}(W_0 \log (c) + B)^{W_0} \otimes_{Z (\mh H (\tilde {\mc R}_u ,k_u))} \mh H (\tilde {\mc R}_u ,k_u)$. The maps $\Phi_{u,\ep}$ can also be defined for $\ep = 0$, simply by \[ \Phi_{u,0} (f N_w) = (f \circ \exp_u) w \in C^{an}(\mf t)^{W_0} \otimes_{Z (\mh H (\tilde {\mc R}_u ,k_u))} \mh H (\tilde {\mc R}_u ,k_u), \] where $f \in C^{an} (T)$ and $w \in W_0$. By \eqref{eq:5.19} for $\alpha \in F_0$ \begin{align*} m_\ep \circ \Phi_{u,\ep} (N_{s_\alpha}) & = m_\ep \circ \Phi_{u,\ep} \big(q(s_\alpha)^{\ep /2} c_{\alpha ,\ep} (\imath^0_{s,\ep} + 1) - q(s_\alpha)^{-\ep/2} \big) \\ & = m_\ep \big( q(s_\alpha)^{\ep /2} (c_{\alpha ,\ep} \circ \exp_u) (\tilde \imath_{s_\alpha,\ep} + 1) - q(s_\alpha)^{-\ep/2} \big) \\ & = q(s_\alpha)^{\ep /2} (c_{\alpha ,\ep} \circ \exp_u \circ \lambda_\ep) (\tilde \imath_{s_\alpha} + 1) - q(s_\alpha)^{-\ep/2} \\ & = q(s_\alpha)^{\ep /2} (c_{\alpha ,\ep} \circ \exp_u \circ \lambda_\ep) \tilde c_\alpha^{-1} (s_\alpha + 1) - q(s_\alpha)^{-\ep/2} \end{align*} The $W_0$-invariance of $u$ implies that $\alpha (u)^2 = 1$, so by \eqref{eq:lim0cep} $\lim\limits_{\ep \to 0}(c_{\alpha ,\ep} \circ \exp_u \circ \lambda_\ep) \tilde c_\alpha^{-1} = 1$. Hence \[ \lim_{\ep \to 0} m_\ep \circ \Phi_{u,\ep} (N_{s_\alpha}) = s_\alpha = m_0 \circ \Phi_{u,0} (N_{s_\alpha}) . 
\] On the other hand, it is clear that for $f \in \C [X] \cong \mc O (T)$ \[ \lim_{\ep \to 0} m_\ep \circ \Phi_{u,\ep} (f) = \lim_{\ep \to 0} f \circ \exp_u \circ \lambda_\ep = f(u) = m_0 \circ \Phi_{u,0} (f) . \] Since $\rho_0 = \lim_{\ep \to 0} \rho_\ep$ exists, we can conclude that \begin{equation}\label{eq:m0Phi} m_0 \circ \Phi_{u,0} = \Phi_u \circ \rho_0 : \C [X \rtimes W_0] \to \mh H (\tilde {\mc R}_u,k_u) . \end{equation} \section{Preserving unitarity} Proposition \ref{prop:4.2} shows that for $\ep \in [-1,1] \setminus \{0\}$ there is an equivalence between the categories $\mr{Mod}_{f,U} (\mc H)$ and $\mr{Mod}_{f,U_\ep} (\mc H_\ep)$. It would be nice if this equivalence would preserve unitarity, but that is not automatic. In fact these categories are not even closed under taking duals of $\mc H$-representations. From \eqref{eq:thetax*} we see that an $\mc H$-representation with central character $W_0 t$ can only be unitary if $\overline{t^{-1}} \in W_0 t$, where $\overline{t^{-1}}(x) = \overline{t (-x)}$ for $x \in X$. To define a * on $\mc H^{an}(U)$ we must thus replace $U$ by \[ U^{\pm 1} := U \cup \{ t^{-1} : t \in U \} . \] Let $\pm W_0$ be the group $\{ \pm 1 \} \times W_0$, which acts on $T$ by $-w (t) = w(t)^{-1}$. The above means that we need the following strengthening of Condition \ref{cond:ball}.e: \begin{itemize} \item[(e')] As Condition \ref{cond:ball}.e, but with $\pm W_0 \rtimes \Gamma$ instead of $W'$. \end{itemize} Lemma \ref{lem:4.1} and Proposition \ref{prop:4.2} remain valid under this condition, with the same proof. Equations \eqref{eq:thetax*} and \eqref{eq:defHan} show that the involution from $\mc H$ extends naturally to $\mc H^{me}(U^{\pm 1}) \rtimes \Gamma$ by \begin{equation}\label{eq:def*me} (N_\gamma f)^* = N_{w_0} (f \circ -w_0) N_{w_0}^{-1} N_{\gamma^{-1}} \qquad \gamma \in W' , f \in C^{me}(U^{\pm 1}) , \end{equation} where $w_0$ is the longest element of $W_0$. 
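As a consistency check (our addition; we ignore the complex conjugation that is implicit for general coefficient functions), applying \eqref{eq:def*me} with $\gamma = e$ and $f = \theta_x$ recovers \eqref{eq:thetax*}:

```latex
\[
\theta_x^* = N_{w_0} (\theta_x \circ -w_0) N_{w_0}^{-1}
           = N_{w_0} \, \theta_{-w_0 x} \, N_{w_0}^{-1} ,
\]
```

since $(\theta_x \circ -w_0)(t) = \theta_x \big( w_0 (t)^{-1} \big) = t(-w_0^{-1} x) = \theta_{-w_0 x}(t)$, using $w_0 = w_0^{-1}$.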
According to \cite[(4.56)]{Opd-Sp} \begin{equation}\label{eq:imath*} (\imath^0_w )^* = N_{w_0} \prod_{\alpha \in R_0^+ \cap w' R_0^-} \frac{c_\alpha}{c_{-\alpha}} \imath^0_{w'} N_{w_0}^{-1} , \end{equation} where $w \in W_0$ and $w' = w_0 w^{-1} w_0$. We extend the map from Proposition \ref{prop:4.2} to $\rho_\ep : \mc H_\ep^{an}(U_\ep ) \rtimes \Gamma \to \mc H^{an}(U) \rtimes \Gamma$ by defining $\rho_\ep (N_\gamma) = N_\gamma$ for $\gamma \in \Gamma$. Usually the maps $\rho_\ep$ do not preserve the *, but this can be fixed. For $\ep \in [-1,1]$ consider the element \[ M_\ep = \rho_\ep (N_{w_0 ,\ep}^{-1})^* N_{w_0} \prod\nolimits_{\alpha \in R_0^+} d_{\alpha ,\ep} \in \mc H^{an}(U^{\pm 1}) . \] We will use $M_\ep$ to extend \cite[Corollary 5.7]{Opd-Sp}. However, this result contained a small mistake: the element $A_\ep$ in \cite{Opd-Sp} is not entirely correct; we replace it by $M_\ep$. \begin{thm}\label{thm:4.3} For all $\ep \in [-1,1]$ the element $M_\ep \in \mc H^{an}(U^{\pm 1})$ is invertible, positive and bounded. It has a positive square root $M_\ep^{1/2}$ and the map $\ep \mapsto M_\ep^{1/2}$ is analytic. The map \begin{align*} \tilde \rho_\ep : \mc H_\ep^{an}(U_\ep ) \rtimes \Gamma \to \mc H^{an}(U^{\pm 1}) \rtimes \Gamma \,,\; h \mapsto M_\ep^{1/2} \rho_\ep (h) M_\ep^{-1/2} \end{align*} is a homomorphism of topological *-algebras, and an isomorphism if $\ep \neq 0$. For any $w \in W^e \rtimes \Gamma$ the map \[ [-1,1] \to \mc H^{an}(U^{\pm 1}) \rtimes \Gamma : \ep \mapsto \tilde \rho_\ep (N_w ) \] is analytic. \end{thm} \emph{Proof.} By Lemma \ref{lem:4.1} and Proposition \ref{prop:4.2} the $M_\ep$ are invertible, bounded and analytic in $\ep$. Consider, for $\ep \neq 0$, the automorphism $\mu_\ep$ of $\mc H^{me}(U^{\pm 1})$ given by \[ \mu_\ep (h) = \rho_\ep (\rho_\ep^{-1} (h)^* )^* . \] We will discuss its effect on three kinds of elements. 
Firstly, for $f \in C^{me}(U^{\pm 1})$ we have, by \eqref{eq:def*me} and the $W_0$-equivariance of $\sigma_\ep $: \begin{equation} \begin{array}{lll} \mu_\ep (f) & = & \rho_\ep ( (f \circ \sigma_\ep )^* )^* \\ & = & \rho_\ep \big( N_{w_0} (f \circ (-w_0) \circ \sigma_\ep ) N_{w_0 ,\ep}^{-1} \big)^* \\ & = & \rho_\ep (N_{w_0 ,\ep}^{-1})^* \big( f \circ -w_0 \big)^* \rho_\ep (N_{w_0 ,\ep})^* \\ & = & \rho_\ep (N_{w_0 ,\ep}^{-1})^* N_{w_0} f N_{w_0}^{-1} \rho_\ep (N_{w_0 ,\ep})^* \\ & = & \rho_\ep (N_{w_0 ,\ep}^{-1})^* N_{w_0} \prod\limits_{\alpha \in R_0^+} d_{\alpha ,\ep} f \prod\limits_{\alpha \in R_0^+} d_{\alpha ,\ep}^{-1} N_{w_0}^{-1} \rho_\ep (N_{w_0 ,\ep})^* \\ & = & M_\ep f M_\ep^{-1} . \end{array} \end{equation} Secondly, suppose that the simple reflections $s$ and $s' = w_0 s w_0 \in S_0$ correspond to $\alpha$ and $\alpha' = -w_0 \alpha \in F_0$, respectively. Using Proposition \ref{prop:3.4}.b for $\mc H^{me}_\ep (U^{\pm 1})$ and \eqref{eq:imath*} we find \begin{equation} \begin{array}{lll} M_\ep \imath^0_s M_\ep^{-1} & = & \rho_\ep (N_{w_0 ,\ep}^{-1} )^* N_{w_0} \prod\limits_{\alpha \in R_0^+} d_{\alpha ,\ep} \imath^0_s \prod\limits_{\alpha \in R_0^+} d_{\alpha ,\ep}^{-1} N_{w_0}^{-1} \rho_\ep (N_{w_0 ,\ep})^* \\ & = & \rho_\ep (N_{w_0 ,\ep}^{-1})^* N_{w_0} \imath^0_s d_{\alpha, \ep}^{-1} d_{-\alpha ,\ep} N_{w_0}^{-1} \rho_\ep (N_{w_0 ,\ep})^* \vspace{2mm}\\ & = & \rho_\ep (N_{w_0 ,\ep}^{-1})^* N_{w_0} \imath^0_s {\ds \frac{c_{-\alpha}}{c_\alpha} N_{w_0}^{-1} N_{w_0} \frac{c_{\alpha ,\ep} \circ \sigma_\ep}{c_{-\alpha ,\ep} \circ \sigma_\ep} } N_{w_0}^{-1} \rho_\ep (N_{w_0 ,\ep})^* \\ & = & \rho_\ep (N_{w_0 ,\ep}^{-1})^* (\imath^0_{s'})^* \left({\ds \frac{c_{\alpha' ,\ep} \circ \sigma_\ep}{c_{-\alpha' ,\ep} \circ \sigma_\ep} }\right)^* \rho_\ep (N_{w_0 ,\ep})^* \\ & = & \left( \rho_\ep (N_{w_0 ,\ep}) {\ds \frac{c_{\alpha' ,\ep} \circ \sigma_\ep}{c_{-\alpha' ,\ep} \circ \sigma_\ep} } \imath^0_{s'} \rho_\ep (N_{w_0 ,\ep}^{-1}) \right)^* \\ & = & \rho_\ep \left( N_{w_0} {\ds 
\frac{c_{\alpha'}}{c_{ -\alpha'}} } \imath^0_{s',\ep} N_{w_0,\ep}^{-1} \right)^* \\ & = & \rho_\ep \left( (\imath^0_{s,\ep})^* \right)^* \;=\; \mu_\ep (\imath^0_s ) . \end{array} \end{equation} Thirdly, for $\gamma \in \Gamma$ by definition \[ \mu_\ep (N_\gamma) = \rho_\ep (\rho_\ep^{-1} (N_\gamma)^* )^* = \rho_\ep (N_\gamma^{-1})^* = (N_\gamma^{-1})^* = N_\gamma . \] Since elements of the above three types generate $\mc H^{me}(U^{\pm 1}) \rtimes \Gamma$, we conclude that \[ \mu_\ep (h) = M_\ep h M_\ep^{-1} \quad \text{for all } h \in \mc H^{me}(U^{\pm 1}) \rtimes \Gamma . \] Now we can see that \[ \begin{array}{lll} \rho_\ep \big( N_{w_0 ,\ep}^{-1} \big)^* & = & \rho_\ep \big( (N_{w_0 ,\ep}^*)^{-1} \big)^* \;=\; \rho_\ep \big( (N_{w_0 ,\ep}^{-1})^* \big)^* \\ & = & \mu_\ep \big( \rho_\ep (N_{w_0 ,\ep}^{-1} ) \big) \;=\; M_\ep \rho_\ep (N_{w_0 ,\ep}^{-1} ) M_\ep^{-1} , \vspace{2mm} \\ N_e & = & M_\ep^{-1} \rho_\ep (N_{w_0 ,\ep}^{-1})^* N_{w_0} \prod\limits_{\alpha \in R_0^+} d_{\alpha ,\ep} \;=\; \rho_\ep (N_{w_0 ,\ep}^{-1}) M_\ep^{-1} N_{w_0} \prod\limits_{\alpha \in R_0^+} d_{\alpha ,\ep} , \\ M_\ep & = & N_{w_0} \prod\limits_{\alpha \in R_0^+} d_{\alpha ,\ep} \rho_\ep (N_{w_0 ,\ep}^{-1}) \;=\; \big( \rho_\ep (N_{w_0 ,\ep}^{-1})^* \big( N_{w_0} \prod\limits_{\alpha \in R_0^+} d_{\alpha ,\ep} \big)^* \big)^* \\ & = & \big( \rho_\ep (N_{w_0 ,\ep}^{-1})^* N_{w_0} \prod_{\alpha \in R_0^+} d_{\alpha ,\ep} \big)^* \;=\; M_\ep^* \end{array} \] Thus the elements $M_\ep$ are Hermitian for all $\ep \neq 0$. By continuity in $\ep$, $M_0$ is also Hermitian. Moreover they are all invertible, and $M_1 = N_e$, so they are in fact strictly positive. 
We already knew that the element $\ep \mapsto M_\ep$ of \[ C^{an} ([-1,1] ; \mc H^{an}(U^{\pm 1})) \cong C^{an} ([-1,1] \times U^{\pm 1}) \otimes_{\mc A} \mc H \] is bounded, so we can construct its square root using holomorphic functional calculus in the Fr\'echet algebra $C_b^{an}([-1,1] \times U^{\pm 1}) \otimes_{\mc A} \mc H$, where the subscript $b$ denotes bounded functions. This ensures that $\ep \mapsto M_\ep^{1/2}$ is still analytic. Finally, for $\ep \neq 0$ \begin{equation} \begin{array}{lll} \tilde \rho_\ep (h)^* & = & \left( M_\ep^{1/2} \rho_\ep (h) M_\ep^{-1/2} \right)^* \\ & = & M_\ep^{-1/2} \rho_\ep (h)^* M_\ep^{1/2} \\ & = & M_\ep^{-1/2} \mu_\ep (\rho_\ep (h^*)) M_\ep^{1/2} \\ & = & M_\ep^{1/2} \rho_\ep (h^* ) M_\ep^{-1/2} \;=\; \tilde \rho_\ep (h^* ) \end{array} \end{equation} Again this extends to $\ep = 0$ by continuity. $\qquad \Box$ \\[2mm] \begin{cor}\label{cor:4.4} For $uc \in U$ and $\ep \in [-1,1]$ there is a family of additive functors \begin{align*} & \tilde \sigma_{\ep,uc} : \mr{Mod}_{f,W' uc} (\mc H \rtimes \Gamma) \to \mr{Mod}_{f,W' \sigma_\ep (uc)} (\mc H_\ep \rtimes \Gamma) , \\ & (\pi ,V) \mapsto (\pi \circ \tilde \rho_\ep ,V) \end{align*} with the following properties: \enuma{ \item for all $w \in W^e \rtimes \Gamma$ and $(\pi ,V)$ the map $[-1,1] \to \mr{End}_\C (V) : \ep \mapsto \tilde \sigma_{\ep,uc}(\pi) (N_w)$ is analytic; \item for $\ep \neq 0 \,, \tilde \sigma_{\ep,uc}$ is an equivalence of categories; \item $\tilde \sigma_{\ep,uc}$ preserves unitarity; \item for $\ep < 0 \,, \tilde \sigma_{\ep,uc}$ exchanges tempered and anti-tempered modules, where anti-tempered means that $|s|^{-1} \in T^-$ for all $\mc A$-weights $s \in T$; \item for $\ep \geq 0 \,, \tilde \sigma_{\ep,uc}$ preserves temperedness; \item for $\ep > 0 \,, \tilde \sigma_{\ep,uc}$ preserves the discrete series. } \end{cor} \emph{Proof.} Parts (a), (b) and (c) follow immediately from Theorem \ref{thm:4.3}. 
Let $(\pi,V)$ be a finite dimensional $\mc H^{an} (U^{\pm 1}) \rtimes \Gamma$-representation. Conjugation by $M_\ep^{1/2}$ does not change the isomorphism class of $\pi$, so $\tilde \sigma_{\ep,uc} (\pi)$ has the same $\mc A_\ep$-weights as $\pi \circ \rho_\ep$, which by construction are $\sigma_\ep$ of the $\mc A$-weights of $\pi$. Now parts (d), (e) and (f) are obvious consequences of $|\sigma_\ep (t)| = |t|^\ep. \qquad \Box$ \\[2mm] As the notation indicates, $\tilde \sigma_{\ep,uc}$ depends on the previously chosen base point $uc$. For one $t \in T$ there can exist several possible base points such that $t \in U$, and these could in principle give rise to different functors $\tilde \sigma_{\ep,t}$. This ambiguity disappears if we restrict to $t = uc$ in Corollary \ref{cor:4.4}. Then \[ \se := \bigoplus\nolimits_{t \in T / W_0} \tilde \sigma_{\ep,t} : \mr{Mod}_f (\mc H \rtimes \Gamma) \to \mr{Mod}_f (\mc H_\ep \rtimes \Gamma) \] is an additive functor which also has the properties described in Corollary \ref{cor:4.4}. The functor $\tilde \sigma_\ep$ was already used in \cite[Theorem 1.7]{OpSo1}. The image of $\tilde \sigma_0$ is contained in $\mr{Mod}_{T_{un}}(\C [W^e])$, so this map is certainly not bijective, not even after passing to the associated Grothendieck groups of modules. Nevertheless $\tilde \sigma_0$ is clearly related to the map $\Spr$ from Section \ref{sec:Springer}; in fact these maps agree on irreducible tempered $\mc H \rtimes \Gamma$-modules: \begin{lem}\label{lem:4.9} Suppose that $\ep \in [-1,1] ,\; uc \in T_{un} T_{rs}$ and $t \in T^{P(u)}$. Let $\Gamma'_u$ be a subgroup of $W'_{F_u,u}$ and $\pi \in \mr{Mod}_{f,(W(R_u) \rtimes \Gamma'_u) uc} (\mc H (\tilde{\mc R}_u, q_u) \rtimes \Gamma'_u)$. 
\enuma{ \item The following $\mc H_\ep \rtimes \Gamma$-representations are canonically isomorphic: \[ \begin{array}{l@{\qquad}l} \se \big( \mr{Ind}_{\mc H (\mc R_u ,q_u) \rtimes \Gamma'_u}^{\mc H \rtimes \Gamma} (\pi \circ \phi_t) \big) , & \mr{Ind}_{\mc H (\mc R_u ,q_u^\ep) \rtimes \Gamma'_u}^{\mc H_\ep \rtimes \Gamma} \big( \se (\pi) \circ \phi_{t \, |t|^{\ep -1}} \big) , \\ \big( \mr{Ind}_{\mc H (\mc R_u ,q_u) \rtimes \Gamma'_u}^{\mc H \rtimes \Gamma} (\pi \circ \phi_t) \big) \circ \rho_\ep , & \mr{Ind}_{\mc H (\mc R_u ,q_u^\ep) \rtimes \Gamma'_u}^{\mc H_\ep \rtimes \Gamma} \big( (\pi \circ \phi_t) \circ \rho_\ep \big) . \end{array} \] \item $\tilde \sigma_0 = \Spr$ on $\mr{Irr}(\mc S \rtimes \Gamma)$. } \end{lem} \emph{Proof.} (a) By definition $\se$ is given by composition with $\tilde \rho_\ep$, and the difference between $\rho_\ep$ and $\tilde \rho_\ep$ is only an inner automorphism. Hence the map \[ \se \big( \mr{Ind}_{\mc H (\mc R_u ,q_u) \rtimes \Gamma'_u}^{\mc H \rtimes \Gamma} (\pi \circ \phi_t) \big) \to \big( \mr{Ind}_{\mc H (\mc R_u ,q_u) \rtimes \Gamma'_u}^{\mc H \rtimes \Gamma} (\pi \circ \phi_t) \big) \circ \rho_\ep : v \mapsto M_\ep^{-1/2} v \] is an invertible intertwiner. The two representations in the right hand column are naturally isomorphic if and only if the $\mc H (\mc R_u ,q_u^\ep) \rtimes \Gamma'_u$-representations \begin{equation}\label{eq:pitrhoep} \se (\pi) \circ \phi_{t \, |t|^{\ep -1}} \quad \text{and} \quad \pi \circ \phi_t \circ \rho_\ep \end{equation} are so. Notice that $\phi_t$ and $\phi_{t |t|^{\ep -1}}$ are well-defined because $t \in T^{P(u)} \subseteq T^{W(R_u)}$. As we just showed, the left hand side of \eqref{eq:pitrhoep} is naturally isomorphic to $\pi \circ \rho_\ep \circ \phi_{t \, |t|^{\ep -1}}$. Applying $\sigma_\ep$ once with center $uct$ and once with center $uc$ results in \[ \sigma_{\ep ,uct} (uct) = u t |t|^{-1} c^\ep |t|^\ep = \sigma_{\ep ,uc} (uc) t |t|^{\ep - 1} . 
\] This implies that $\rho_{\ep ,uc} \circ \phi_{t |t|^{\ep -1}} = \phi_t \circ \rho_{\ep, uct}$, which shows that the representations \eqref{eq:pitrhoep} can indeed be identified. Now we turn to the most difficult case, the two representations in the bottom row. In view of Theorem \ref{thm:2.1}.b \[ \big( \mr{Ind}_{\mc H (\mc R_u ,q_u) \rtimes \Gamma'_u}^{\mc H \rtimes \Gamma} (\pi \circ \phi_t) \big) \circ \rho_\ep \quad \text{and} \quad \mr{Ind}_{\mc H (\mc R_u ,q_u^\ep) \rtimes \Gamma'_u}^{\mc H_\ep \rtimes \Gamma} \big( \pi \circ \phi_t \circ \rho_\ep \big) \] are isomorphic if and only if the \[ \mc H (\mc R_u ,q_u^\ep)^{an}(U_{\ep,u}) \rtimes W'_{F_u,u} \text{-representation } \mr{Ind}_{\mc H (\mc R_u ,q_u^\ep) \rtimes \Gamma'_u}^{\mc H (\mc R_u, q_u^\ep ) \rtimes W'_{F_u,u}} \big( \pi \circ \phi_t \circ \rho_\ep \big) \] corresponds to the \[ 1_{W'_u uc} (\mc H (\mc R ,q^\ep)^{an}(U_\ep) \rtimes \Gamma) 1_{W'_u uc} \text{-representation } 1_{W'_u uc} \big( \mr{Ind}_{\mc H (\mc R_u ,q_u) \rtimes \Gamma'_u}^{\mc H \rtimes \Gamma} (\pi \circ \phi_t) \big) \circ \rho_\ep \] via the isomorphism from Theorem \ref{thm:2.1}.a. It is clear from the definition \eqref{eq:defrhoep} that \[ \mr{Ind}_{\mc H (\mc R_u ,q_u^\ep) \rtimes \Gamma'_u}^{\mc H (\mc R_u, q_u^\ep ) \rtimes W'_{F_u,u}} \big( (\pi \circ \phi_t) \circ \rho_\ep \big) \cong \big( \mr{Ind}_{\mc H (\mc R_u ,q_u^\ep) \rtimes \Gamma'_u}^{\mc H (\mc R_u, q_u^\ep ) \rtimes W'_{F_u,u}} (\pi \circ \phi_t) \big) \circ \rho_\ep , \] so it suffices to show that the following diagram commutes: \[ \begin{array}{ccc} \mc H (\mc R_u ,q_u^\ep)^{an}(U_{\ep,u}) \rtimes W'_{F_u,u} & \to & 1_{W'_u uc} (\mc H (\mc R ,q^\ep)^{an}(U_\ep) \rtimes \Gamma) 1_{W'_u uc} \\ \downarrow \rho_\ep & & \downarrow \rho_\ep \\ \mc H (\mc R_u ,q_u)^{an}(U_u) \rtimes W'_{F_u,u} & \to & 1_{W'_u uc} (\mc H (\mc R ,q)^{an}(U) \rtimes \Gamma) 1_{W'_u uc}. 
\end{array} \] For elements of $\mc H (\mc R_u ,q_u^\ep)^{an}(U_{\ep,u})$ this is easy, since the effect of the horizontal arrows is only extension of functions from $U_{\ep,u}$ (resp. $U_u$) to $U_\ep$ (resp. $U$) by 0. For elements of $W'_{F_u,u}$ the commutativity follows from \eqref{eq:firstredgamma} and \eqref{eq:defrhoep}.\\ (b) By Corollary \ref{cor:2.3}.b every irreducible tempered $\mc H \rtimes \Gamma$-representation $\pi$ is of the form $\mr{Ind}_{\mc H (\mc R_u ,q_u) \rtimes W'_{F_u ,u}}^{\mc H \rtimes \Gamma} (\tilde \pi \circ \Phi_u)$ for some $\tilde \pi \in \mr{Irr}_0 \big( \mc H (\mc R_u ,q_u)^{an}(U_u) \rtimes W'_{F_u,u} \big)$. By Theorem \ref{thm:2.7}.d \[ \Spr (\pi) = \mr{Ind}_{X \rtimes W'_u}^{X \rtimes W'} \big( \C_u \otimes \tilde \pi \big|_{W'_u} \big) . \] Using \eqref{eq:m0Phi} we can rewrite \[ \C_u \otimes \tilde \pi \big|_{W'_u} = \C_u \otimes (\tilde \pi \circ m_0) \big|_{W'_u} = \tilde \pi \circ m_0 \circ \Phi_{u,0} = \tilde \pi \circ \Phi_u \circ \rho_0 \cong \tilde \pi \circ \Phi_u \circ \tilde \rho_0 = \tilde \sigma_0 (\tilde \pi \circ \Phi_u) . \] Now we can apply part (a): \[ \Spr (\pi) \cong \mr{Ind}_{X \rtimes W'_u}^{X \rtimes W'} \big( \tilde \sigma_0 (\tilde \pi \circ \Phi_u ) \big) \cong \tilde \sigma_0 \big( \mr{Ind}_{\mc H (\mc R_u ,q_u) \rtimes W'_{F_u ,u}}^{\mc H \rtimes \Gamma} (\tilde \pi \circ \Phi_u) \big) = \tilde \sigma_0 (\pi) . \qquad \Box \] \vspace{2mm} \section{Scaling intertwining operators} We will show that the scaling maps $\se$ give rise to scaled versions of the intertwining operators $\pi^\Gamma (g,\xi)$. We will use this to study the behaviour of the components of the Fourier transform of $\mc S \rtimes \Gamma$ under scaling of $q$. As we remarked at the start of Section \ref{sec:scarep}, the results of that section can easily be extended to $\mc H \rtimes \Gamma$, and we will use that generality here. 
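The scaling arguments above repeatedly use the elementary identity $|\sigma_\ep (t)| = |t|^\ep$ (e.g. in the proof of Corollary \ref{cor:4.4}). A short coordinate verification (our addition; here $|t|$ denotes the $T_{rs}$-part of $t$ and $\Re b$ the real part of $b$):

```latex
\[
t = w \big( u c \exp (b) \big) \;\Longrightarrow\;
|\sigma_\ep (t)| = \big| w \big( u c^\ep \exp (\ep b) \big) \big|
= w \big( c^\ep \exp (\ep \, \Re b) \big)
= w \big( (c \exp (\Re b))^\ep \big) = |t|^\ep ,
\]
```

using that $u \in T_{un}$ and that the $W_0$-action commutes with taking $T_{rs}$-parts and real powers.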
Recall that the groupoid $\mc G$ from \eqref{eq:GPQ} includes $\Gamma$ and is defined independently of $q$. Let us realize the representation \[ \pi_\ep^\Gamma (P,\se (\delta ),t) \text{ on } \C [\Gamma W^P] \otimes V_\delta \text{ as } \mr{Ind}_{\mc H_\ep^P}^{\mc H_\ep \rtimes \Gamma} (\delta \circ \tilde \rho_\ep \circ \phi_{t,\ep} ) . \] For all $\ep \in [-1,1]$ we obtain algebra homomorphisms \begin{equation}\label{eq:Fep} \begin{split} & \mc F_\ep : \mc H (\mc R,q^\ep) \rtimes \Gamma \to \bigoplus\nolimits_{P,\delta } \mc O (T^P) \otimes \mr{End}_\C (\C [\Gamma W^P] \otimes V_\delta) , \\ & \mc F_\ep (h) (P,\delta,t) = \pi_\ep^\Gamma (P,\se (\delta ),t,h) . \end{split} \end{equation} Rational intertwining operators $\pi_\ep^\Gamma (g,P,\delta',t)$ can be defined as in \eqref{eq:defintop} for all $\mc H_\ep$-representations of the form $\pi_\ep^\Gamma (P,\delta',t)$, where $\delta'$ is irreducible but not necessarily discrete series. In particular, for $\ep \neq 0$ the $(P,\delta)$-component of the image of $\mc F_\ep$ is invariant under an action of the group \[ \mc G_{P,\se (\delta)} := \{ g \in \mc G : g(P) = P, \se (\delta) \circ \psi_g^{-1} \cong \se (\delta) \} \] via such intertwiners. As in \eqref{eq:actSections}, the action is not on polynomial but on rational sections. \begin{prop}\label{prop:4.5} Let $\ep \in [-1,1] \setminus \{0\}$, let $g \in \mc G$ with $g(P) \subset F_0$ and let $\delta'$ be a discrete series representation of $\mc H_{g(P)}$ that is equivalent to $\delta \circ \psi_g^{-1}$. \enuma{ \item The $\mc H_{g(P),\ep}$-representations $\se (\delta')$ and $\se (\delta) \circ \psi_{g,\ep}^{-1}$ are unitarily equivalent. \item $\mc G_{P,\se (\delta)} = \mc G_{P,\delta}$ and $\mc G_{P,\se (\delta),t} = \mc G_{P,\delta,t}$ for all $t \in T^P$. 
\item The intertwiners $\pi_\ep^\Gamma (g,P,\se (\delta),t) \in \mr{Hom}_\C \big( \C [\Gamma W^P] \otimes V_\delta, \C [\Gamma W^{g(P)}] \otimes V_{\delta'} \big)$ depend rationally on $t \in T^P$ and analytically on $\ep$, whenever they are regular. \item For $t \in T^P_{un}$ the $\pi_\ep^\Gamma (g,P,\se (\delta),t)$ are regular and unitary, and $\pi_0^\Gamma (g,P,\tilde \sigma_0 (\delta),t) := \lim_{\ep \to 0} \pi_\ep^\Gamma (g,P,\se (\delta),t)$ exists. } \emph{Proof.} (a) First we show that \begin{equation}\label{eq:psirhoep} \psi_g \circ \rho_\ep = \rho_\ep \circ \psi_{g,\ep} . \end{equation} Write $g = \gamma^{-1} u$ with $u \in K_P$ and $\gamma \in W_0 \rtimes \Gamma$. The automorphism $\psi_u$ from \eqref{eq:twistKP} reflects the translation of $T^P$ by $u \in T^P_{un}$: that changes $U$ to $u U$, but apart from that it commutes with $\sigma_\ep$. Hence $\psi_u \circ \rho_\ep = \rho_\ep \circ \psi_{u,\ep}$. The isomorphism $\psi_\gamma : \mc H^P \to \mc H^{\gamma (P)}$ from \eqref{eq:psigamma} is more difficult to deal with, because it acts nontrivially on the $N_w$ with $w \in W_P$. However, by Proposition \ref{prop:3.4}.d this is the restriction to $\mc H^P$ of the automorphism \[ \psi_\gamma : h \mapsto \imath^0_\gamma h \imath^0_{\gamma^{-1}} \text{ of the algebra } \big( \C (T/W_0) \otimes_{Z (\mc H)} \mc H \big) \rtimes \Gamma . \] Similarly $\psi_{\gamma,\ep} (h) = \imath^0_{\gamma,\ep} h \imath^0_{\gamma^{-1},\ep}$. From these formulas it is clear that $\psi_\gamma \circ \rho_\ep = \rho_\ep \circ \psi_{\gamma,\ep}$ on $\mc H^{me}_\ep (U_\ep)$. This establishes \eqref{eq:psirhoep}. Let $I^g_\delta : V_\delta \to V_{\delta'}$ be as in \eqref{eq:Idelta}. We claim that \begin{equation}\label{eq:Ideltaep} v \mapsto I_\delta^g \big( \delta (\psi_g^{-1} (M_\ep^{1/2}) M_\ep^{-1/2}) v \big) \end{equation} is an intertwiner between $\se (\delta) \circ \psi_{g,\ep}^{-1}$ and $\se (\delta')$. 
Indeed, for $v \in V_\delta$ and $h \in \mc H^{an}_\ep (U_\ep)$: \begin{align*} & I_\delta^g \big( \delta (\psi_g^{-1} (M_\ep^{1/2}) M_\ep^{-1/2}) \se (\delta) \circ \psi_{g,\ep}^{-1}(h) v \big) = \\ & I_\delta^g \big( \delta (\psi_g^{-1} (M_\ep^{1/2}) M_\ep^{-1/2}) \delta ( M_\ep^{1/2} \rho_\ep (\psi_{g,\ep}^{-1} h) M_\ep^{-1/2}) v \big) = \\ & I_\delta^g \big( \delta \circ \psi_g^{-1} (M_\ep^{1/2} \rho_\ep (h) M_\ep^{-1/2}) \delta (\psi_g^{-1} (M_\ep^{1/2}) M_\ep^{-1/2}) v \big) = \\ & \delta' (M_\ep^{1/2} \rho_\ep (h) M_\ep^{-1/2}) I_\delta^g \big( \delta (\psi_g^{-1} (M_\ep^{1/2}) M_\ep^{-1/2}) v \big) = \\ & \se (\delta') (h) I_\delta^g \big( \delta (\psi_g^{-1} (M_\ep^{1/2}) M_\ep^{-1/2}) v \big) . \end{align*} Obviously \eqref{eq:Ideltaep} is invertible, so it is an equivalence between the irreducible representations $\se (\delta) \circ \psi_{g,\ep}^{-1}$ and $\se (\delta')$. Since both are unitary, there exists a unitary intertwiner between them, which by the irreducibility must be a scalar multiple of \eqref{eq:Ideltaep}. We define $I^g_{\delta,\ep}$ as the unique positive multiple of \eqref{eq:Ideltaep} that is unitary. \\ (b) By part (a) \[ g (P,\se (\delta),t) = (g(P),\se (\delta) \circ \psi_{g,\ep}^{-1},g(t)) \cong (g(P), \se (\delta'),g(t)) \cong (g(P) ,\se (\delta \circ \psi_g^{-1}),g(t)) , \] so the stabilizer of $(P,\se (\delta),t)$ does not depend on $\ep \in [-1,1]$.\\ (c) By Theorem \ref{thm:4.3} $I^g_{\delta,\ep}$ depends analytically on $\ep \in [-1,1]$. By definition $\imath^0_\gamma$ is rational in $t \in T$ and analytic in $\ep$, away from the poles. By definition \eqref{eq:defintop} \[ \pi_\ep^\Gamma (g,P,\se (\delta),t) (h \otimes_{\mc H^P_\ep} v) = h \imath^0_\gamma \otimes_{\mc H^{g(P)}_\ep} I^g_{\delta,\ep} (v) , \] so $\pi_\ep^\Gamma (g,P,\se (\delta),t)$ has the required properties.\\ (d) All possible singularities of the intertwining operators come from poles and zeros of the $c$-functions from \eqref{eq:calpha}. 
By Theorem \ref{thm:3.5}.c $\pi_\ep^\Gamma (g,P,\se (\delta),t)$ is unitary for all $t \in T^P_{un}$ and $\ep \in (0,1]$. In particular all the singularities on this domain are removable. On the other hand, the explicit formula for $c_\alpha$ shows that the singularities for $q^\ep \; (\ep \neq 0)$ and $t \in T_{un}$ are the same as those for $q$ and $t \in T_{un}$. Therefore $\pi_\ep^\Gamma (g,P,\se (\delta),t)$ is also regular for $\ep \in [-1,0)$ and $t \in T^P_{un}$. Since these linear maps are analytic in $\ep \in [-1,1] \setminus \{0\}$ and unitary for $\ep > 0$, they are also unitary for $\ep < 0$. Hence all the matrix coefficients of $\pi_\ep^\Gamma (g,P,\se (\delta),t)$ are uniformly bounded on $T^P_{un} \times [-1,1] \setminus \{0\}$, which implies that every possible singularity at $\ep = 0$ is removable. In particular $\lim_{\ep \to 0} \pi_\ep^\Gamma (g,P,\se (\delta),t)$ exists. $\qquad \Box$ \\[2mm] We fix a discrete series representation $\delta$ of $\mc H_P$ and we abbreviate \[ A_{P,\delta} := C^\infty (T^P_{un}) \otimes_\C (\C [\Gamma W^P] \otimes V_\delta) . \] Proposition \ref{prop:4.5} says among other things that for $g \in \mc G_{P,\delta}$ \[ [-1,1] \to A_{P,\delta}^\times : \ep \mapsto \pi_\ep^\Gamma (g,P,\se (\delta ), \cdot) \] is an analytic map. The group $\mc G_{P,\se (\delta)} = \mc G_{P,\delta}$ acts on $A_{P,\delta}$ by \begin{equation}\label{eq:Gdeltaact} (g \cdot_\ep f)(t) = \pi_\ep^\Gamma (g,P,\se (\delta ),g^{-1} t) f (g^{-1} t) \pi_\ep^\Gamma (g,P,\se (\delta ), g^{-1}t)^{-1} . \end{equation} By construction the $\delta$-component of the image of $\mc F_\ep$ consists of $\mc G_{P,\se (\delta)}$-invariant sections for $\ep \neq 0$, and by Proposition \ref{prop:4.5} this also holds for $\ep = 0$. We intend to show that the algebras $A_{P,\delta}^{\mc G_{P,\se (\delta)}}$ for $\ep \in [-1,1]$ are all isomorphic.
(Although $\mc G_{P,\se (\delta)} = \mc G_{P,\delta}$ we prefer the longer notation here, because it indicates which action on $A_{P,\delta}$ we consider.) We must be careful when taking invariants, because \begin{equation}\label{eq:projrepG} \mc G_{P,\delta} \to A_{P,\delta}^\times : g \mapsto \pi_\ep^\Gamma (g,P,\se (\delta ), \cdot) \end{equation} is not necessarily a group homomorphism. However, the lack of multiplicativity is small: it is caused only by the freedom in the choice of a scalar in \eqref{eq:Idelta}. In other words, \eqref{eq:projrepG} defines a projective representation of $\mc G_{P,\delta}$ on $A_{P,\delta}$. Recall \cite[Section 53]{CuRe1} that the Schur multiplier $\mc G^*_{P,\delta}$ is a finite central extension of $\mc G_{P,\delta}$, with the property that every projective representation of $\mc G_{P,\delta}$ lifts to a unique linear representation of $\mc G^*_{P,\delta}$. This means that for every lift $g^* \in \mc G^*_{P,\delta}$ of $g \in \mc G_{P,\delta}$ there is a unique scalar multiple $\pi_\ep^\Gamma (g^*,P,\se (\delta ),\cdot)$ of $\pi_\ep^\Gamma (g,P,\se (\delta ),\cdot)$ such that \[ \mc G^*_{P,\delta} \to A_{P,\delta}^\times : g^* \mapsto \pi_\ep^\Gamma (g^*,P,\se (\delta ), \cdot) \] becomes multiplicative. Since $\pi_\ep^\Gamma (g,P,\se (\delta ), \cdot)$ is unitary, so is $\pi_\ep^\Gamma (g^*,P,\se (\delta ), \cdot)$. Notice that $\mc G_{P,\delta}$ and $\mc G^*_{P,\delta}$ fix the same elements of $A_{P,\delta}$, because the action \eqref{eq:Gdeltaact} is defined via conjugation with $\pi_\ep^\Gamma (g,P,\se (\delta ), \cdot)$. According to \cite[Section 53]{CuRe1} the way to lift \eqref{eq:projrepG} from $\mc G_{P,\delta}$ to $\mc G^*_{P,\delta}$ is completely determined by the cohomology class of the 2-cocycle \begin{equation}\label{eq:2cocycle} \mc G_{P,\delta} \times \mc G_{P,\delta} \to \C^\times : (g_1,g_2) \mapsto I_{\delta,\ep}^{g_1} I_{\delta,\ep}^{g_2} I^{g_2^{-1} g_1^{-1}}_{\delta,\ep} .
\end{equation} This cocycle depends analytically on $\ep$ and $\mc G_{P,\delta}$ is a finite group, so the class of \eqref{eq:2cocycle} in $H^2 (\mc G_{P,\delta},\C^\times)$ does not depend on $\ep$. (In most cases this cohomology class is trivial, but examples are known in which it is nontrivial, see \cite[Section 6.2]{DeOp2}.) Hence the ratio between $\pi_\ep^\Gamma (g^*,P,\se (\delta ),\cdot)$ and $\pi_\ep^\Gamma (g,P,\se (\delta ),\cdot)$ also depends analytically on $\ep$. For $g^* \in \mc G^*_{P,\delta}$ we define $\lambda_{g^*} : T^P \to T^P$ by $\lambda_{g^*}(t) = g(t)$. In the remainder of this section we will work mostly with $\mc G^*_{P,\delta}$, and to simplify the notation we will denote its typical elements by $g$ instead of $g^*$. For $g \in \mc G^*_{P,\delta}$ and $t \in T^P_{un}$ we write \[ u_{g,\ep}(t) := \pi_\ep^{\Gamma} (g,P,\se (\delta ),\lambda_g^{-1} (t)) , \] so that the multiplicativity translates into \begin{equation}\label{eq:ugepmult} u_{g g',\ep} = u_{g,\ep} (u_{g',\ep} \circ \lambda_g^{-1}) . \end{equation} From the above we know that $u_{g,\ep} \in A_{P,\delta}$ is unitary and analytic in $\ep$. These elements can be used to identify $A_{P,\delta}^{\mc G_{P,\se (\delta)}}$ with a corner in a larger algebra. Consider the crossed product $A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta}$, where the action of $\mc G^*_{P,\delta}$ on $A_{P,\delta}$ comes only from the action on $C(T^P_{un})$ induced by the $\lambda_g$. In particular this action is independent of $\ep$. On $A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta}$ we can define actions of $\mc G^*_{P,\delta}$ by \begin{equation} g \cdot_\ep a = u_{g,\ep} g a g^{-1} u_{g,\ep}^{-1} . \end{equation} For $a \in A_{P,\delta}$ this recovers the action \eqref{eq:Gdeltaact}. An advantage of introducing the Schur multiplier is that, by \eqref{eq:ugepmult}, $g \mapsto u_{g,\ep} g$ is a homomorphism from $\mc G^*_{P,\delta}$ to the unitary group of $A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta}$.
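For the reader's convenience we record the computation behind this homomorphism property; it uses only \eqref{eq:ugepmult} and the crossed product relation $g\, a\, g^{-1} = a \circ \lambda_g^{-1}$ for $a \in A_{P,\delta}$:

```latex
\[
(u_{g,\ep}\, g)(u_{g',\ep}\, g')
 = u_{g,\ep} \big( u_{g',\ep} \circ \lambda_g^{-1} \big)\, g g'
 = u_{g g',\ep}\, g g' ,
\]
% summing over the finite group $\mc G^*_{P,\delta}$ then gives
\[
\Big( \sum\nolimits_{g \in \mc G^*_{P,\delta}} u_{g,\ep}\, g \Big)^2
 = \sum\nolimits_{g,g' \in \mc G^*_{P,\delta}} u_{g g',\ep}\, g g'
 = |\mc G^*_{P,\delta}| \sum\nolimits_{h \in \mc G^*_{P,\delta}} u_{h,\ep}\, h .
\]
```

Dividing the averaged sum by $|\mc G^*_{P,\delta}|$ therefore yields an idempotent, which is moreover self-adjoint because $(u_{g,\ep}\, g)^* = (u_{g,\ep}\, g)^{-1} = u_{g^{-1},\ep}\, g^{-1}$.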
Hence \begin{equation} [-1,1] \to A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta} : \ep \mapsto p_{\delta,\ep} := |\mc G^*_{P,\delta} |^{-1} \sum\nolimits_{g \in \mc G^*_{P,\delta}} u_{g,\ep} g \end{equation} is a family of projections, depending analytically on $\ep$. This was first observed in \cite{Ros} and worked out in \cite[Lemma A.2]{SolThesis}: the map \begin{equation} A_{P,\delta}^{\mc G_{P,\se (\delta)}} \to p_{\delta,\ep} (A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta}) p_{\delta,\ep} : a \mapsto p_{\delta,\ep} a p_{\delta,\ep} \end{equation} is an isomorphism of Fr\'echet *-algebras. Its inverse is the map \[ \sum\nolimits_{g \in \mc G^*_{P,\delta}} a_g g \mapsto |\mc G^*_{P,\delta}| a_e . \] \begin{lem}\label{lem:4.6} The Fr\'echet *-algebras $A_{P,\delta}^{\mc G_{P,\se (\delta)}} \cong p_{\delta,\ep} (A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta}) p_{\delta,\ep}$ are all isomorphic, by $C^{\infty} (T^P_{un})^{\mc G_{P,\delta}}$-linear isomorphisms that are piecewise analytic in $\ep$. \end{lem} \emph{Proof.} According to \cite[Lemma 1.15]{Phi2} the projections $p_{\delta,\ep}$ are all conjugate, by elements depending continuously on $\ep$. That already proves that all the Fr\'echet algebras $A_{P,\delta}^{\mc G_{P,\se (\delta)}}$ are isomorphic. Since $C^{\infty} (T^P_{un})^{\mc G_{P,\delta}}$ is the center of both $A_{P,\delta}^{\mc G_{P,\se (\delta)}}$ and $p_{\delta,\ep} (A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta}) p_{\delta,\ep}$, these isomorphisms are $C^{\infty} (T^P_{un})^{\mc G_{P,\delta}}$-linear. To show that the isomorphisms can be made piecewise analytic we construct the conjugating elements explicitly, using the recipe of \cite[Proposition 4.32]{Bla}. For $\ep ,\ep' \in [-1,1]$ consider \[ z(\delta ,\ep ,\ep' ) := (2 p_{\delta,\ep'} - 1)(2 p_{\delta,\ep} - 1) + 1 \in A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta} .
\] Clearly this is analytic in $\ep$ and $\ep'$ and \[ p_{\delta,\ep'} z(\delta ,\ep ,\ep' ) = 2 p_{\delta,\ep'} p_{\delta,\ep} = z(\delta ,\ep ,\ep' ) p_{\delta,\ep} . \] The enveloping $C^*$-algebra of $A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta}$ is \[ C_\delta := \mr{End}_\C \big(\C [\Gamma W^P ] \otimes V_\delta \big) \otimes C (T^P_{un}) \rtimes_\lambda \mc G^*_{P,\delta} . \] Let $\norm{\cdot}$ be its norm and suppose that $\norm{p_{\delta,\ep} - p_{\delta,\ep'}} < 1$. Then \[ \begin{array}{lll} \norm{z(\delta ,\ep ,\ep') - 2} & = & \norm{4 p_{\delta,\ep'} p_{\delta,\ep} - 2 p_{\delta,\ep} - 2 p_{\delta,\ep'} } \\ & = & \norm{-2 ( p_{\delta,\ep} - p_{\delta,\ep'} )^2} \\ & \leq & 2 \norm{p_{\delta,\ep} - p_{\delta,\ep'}}^2 \; < \; 2 \end{array} \] so $z(\delta ,\ep ,\ep')$ is invertible in $C_\delta$. But $A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta}$ is closed under the holomorphic functional calculus of $C_\delta$, so $z(\delta ,\ep ,\ep')$ is also invertible in this Fr\'echet algebra. Moreover, because the Fr\'echet topology on $A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta}$ is finer than the induced topology from $C_\delta$, there exists an open interval $I_\ep$ containing $\ep$ such that $z(\delta ,\ep ,\ep')$ is invertible for all $\ep' \in I_\ep$. For such $\ep,\ep'$ we construct the unitary element \[ u(\delta ,\ep ,\ep') := z(\delta ,\ep ,\ep') |z(\delta ,\ep ,\ep')|^{-1} \in A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta} . \] By construction the map \begin{equation}\label{eq:conjugateu} p_{\delta,\ep} (A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta} ) p_{\delta,\ep} \to p_{\delta,\ep'} (A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta} ) p_{\delta,\ep'} : x \mapsto u(\delta ,\ep ,\ep') x u(\delta ,\ep ,\ep')^{-1} \end{equation} is an isomorphism of Fr\'echet *-algebras. Notice that \eqref{eq:conjugateu} also defines an isomorphism between the respective $C^*$-completions. Let us pick a finite cover $\{I_{\ep_i}\}_{i=1}^m$ of $[-1,1]$.
Then for every $\ep , \ep' \in [-1,1]$ an isomorphism between $p_{\delta,\ep} (A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta} ) p_{\delta,\ep}$ and $p_{\delta,\ep'} (A_{P,\delta} \rtimes_\lambda \mc G^*_{P,\delta} ) p_{\delta,\ep'}$ can be obtained by composing at most $m$ isomorphisms of the form \eqref{eq:conjugateu}. $\qquad \Box$ \\[2mm] \section{Scaling Schwartz algebras} It follows from Corollary \ref{cor:4.4} that, for $\ep \in (0,1]$, the functor $\se$ is an equivalence between the categories of finite dimensional tempered modules of $\mc H$ and of $\mc H_\ep = \mc H (\mc R ,q^\ep)$. We will combine this with the explicit description of the Fourier transform of $\mc S (\mc R,q)$ from Theorem \ref{thm:3.8} to construct ``scaling isomorphisms'' \[ \Sc_\ep : \mc S (\mc R ,q^\ep) \rtimes \Gamma \to \mc S (\mc R,q) \rtimes \Gamma . \] These algebra homomorphisms depend continuously on $\ep$ and they turn out to have a well-defined limit \[ \Sc_0 : \mc S (W^e) \rtimes \Gamma = \mc S (\mc R,q^0) \rtimes \Gamma \to \mc S (\mc R,q) \rtimes \Gamma . \] Although $\Sc_0$ is no longer surjective, it provides the connection between the representation theories of $\mc S (\mc R,q)$ and $\mc S (W^e)$ that we already discussed in Section \ref{sec:Springer}. Recall from \eqref{eq:FourierDelta} that $\Delta$ is a set of representatives for $\mc G$-association classes of discrete series representations of parabolic subalgebras of $\mc H$, and that \begin{equation}\label{eq:Fourier} \mc F : \mc S (\mc R,q) \rtimes \Gamma \to \bigoplus\nolimits_{(P,\delta ) \in \Delta} \big( C^\infty (T^P_{un}) \otimes \mr{End}_\C (\C [\Gamma W^P] \otimes V_\delta) \big)^{\mc G_{P,\delta}} \end{equation} is an isomorphism of Fr\'echet *-algebras.
Together with \eqref{eq:Fep} and Lemma \ref{lem:4.6}, this implies the existence of a continuous family of algebra homomorphisms, with some nice properties: \begin{prop}\label{prop:4.7} There exists a collection of injective *-homomorphisms \[ \Sc_\ep : \mc H (\mc R ,q^\ep) \rtimes \Gamma \to \mc S (\mc R ,q) \rtimes \Gamma \qquad \ep \in [-1,1] , \] such that: \enuma{ \item For $\ep < 0$ (respectively $\ep > 0$) the map $\pi \mapsto \pi \circ \Sc_\ep$ is an equivalence between $\mr{Mod}_f (\mc S (\mc R,q) \rtimes \Gamma)$ and the category of finite dimensional anti-tempered (respectively tempered) $\mc H (\mc R ,q^\ep) \rtimes \Gamma$-modules. \item $\Sc_1 : \mc H \rtimes \Gamma \to \mc S \rtimes \Gamma$ is the canonical embedding. \item $\Sc_\ep (N_w) = N_w$ for all $w \in Z(W^e)$. \item For all $w \in W^e$ the map \[ [-1,1] \to \mc S (\mc R ,q) \rtimes \Gamma : \ep \mapsto \Sc_\ep (N_w) \] is piecewise analytic, and in particular analytic at 0. \item For all $\pi \in \mr{Mod}_f (\mc S (\mc R,q) \rtimes \Gamma)$ the representations $\pi \circ \Sc_\ep$ and $\se (\pi)$ are equivalent. In particular $\pi^\Gamma (P,\delta,t) \circ \Sc_\ep \cong \pi_\ep^\Gamma (P,\se (\delta ),t)$ for all $(P,\delta,t) \in \Xi_{un}$. \item $\Sc_\ep$ preserves the trace $\tau$. } \end{prop} \emph{Proof.} Let $\Sc_{P,\delta,\ep} : A_{P,\delta}^{\mc G_{P,\se (\delta)}} \to A_{P,\delta}^{\mc G_{P,\delta}}$ be the isomorphism from Lemma \ref{lem:4.6}. We already observed that the $\delta$-component of the image of $\mc F_\ep$ is invariant under $\mc G_{P,\se (\delta)}$, so we can define \[ \Sc_\ep := \mc F^{-1} \circ \big( \bigoplus\nolimits_{(P,\delta) \in \Delta} \Sc_{P,\delta,\ep} \big) \circ \mc F_\ep . \] Now (b) holds by construction and (c) follows from the $C^{\infty} (T^P_{un})^{\mc G_{P,\delta}}$-linearity in Lemma \ref{lem:4.6}. For (d) we use Theorem \ref{thm:4.3} and Lemma \ref{lem:4.6}.
From the last lines of the proof of Lemma \ref{lem:4.6} we see how we can arrange that $\Sc_\ep$ is analytic at 0: it suffices to take $\ep_i = 0$ and to use the elements $u(\delta,0,\ep')$ for $\ep'$ in a neighborhood of $\ep = 0$. Any finite dimensional module decomposes canonically as a direct sum of parts corresponding to different central characters. Hence in (e) it suffices to consider $(\pi,V) \in \mr{Mod}_{f,\mc G (P,\delta,t)} (\mc S (\mc R,q) \rtimes \Gamma)$. That is, $\pi : \mc S (\mc R ,q) \rtimes \Gamma \to \mr{End}_\C (V)$ factors via \[ \mr{ev}_{P,\delta,t} : \mc S (\mc R,q) \rtimes \Gamma \to \mr{End}_\C (\C [\Gamma W^P] \otimes V_\delta), \; h \mapsto \pi^\Gamma (P,\delta,t,h) . \] As $\Sc_{\ep,P,\delta}$ is $C^{\infty} (T^P_{un})^{\mc G_{P,\delta}}$-linear, $\pi \circ \Sc_\ep : \mc S (\mc R ,q^\ep) \rtimes \Gamma \to \mr{End}_\C (V)$ factors via \[ \mr{ev}_{P,\se (\delta ),t} : \mc S (\mc R,q^\ep) \rtimes \Gamma \to \mr{End}_\C (\C [\Gamma W^P] \otimes V_\delta) . \] Moreover, by Lemma \ref{lem:4.6} the finite dimensional $C^*$-algebras $\mr{ev}_{P,\delta,t} (\mc S (\mc R,q) \rtimes \Gamma)$ and $\mr{ev}_{P,\se (\delta),t} (\mc S (\mc R,q^\ep) \rtimes \Gamma)$ are isomorphic, by isomorphisms depending continuously on $\ep \in [-1,1]$. We have two families of representations on $V$, namely $\pi \circ \Sc_\ep$ and $\se (\pi)$, which agree at $\ep = 1$ and all of whose matrix coefficients are continuous in $\ep$. Since a finite dimensional semisimple algebra has only finitely many equivalence classes of representations on $V$, such equivalence classes are rigid under continuous deformations. Therefore $\pi \circ \Sc_\ep \cong \se (\pi)$ for all $\ep \in [-1,1]$. Now Lemma \ref{lem:4.9} (or a simpler version of the above argument) shows that $\pi^\Gamma (P,\delta,t) \circ \Sc_\ep \cong \pi_\ep^\Gamma (P,\se (\delta ),t)$, concluding the proof of (e). Property (a) is a consequence of (e), Corollary \ref{cor:4.4} and Lemma \ref{lem:3.1}.b.
According to \cite[Lemma 5.5]{Opd-Sp} the scaling maps $\sigma_\ep$ preserve the Plancherel measure $\mu_{Pl}$ for $\ep > 0$. By part (e) and Theorem \ref{thm:4.3}, so do the maps $\pi \mapsto \pi \circ \Sc_\ep$. Now the last part of Theorem \ref{thm:3.8} shows that \begin{equation}\label{eq:tauSc} \tau (\Sc_\ep (h)) = \tau (h) \text{ for all } \ep \in (0,1] , h \in \mc H_\ep \rtimes \Gamma . \end{equation} By part (d) there is a small $\ep' > 0$ such that \eqref{eq:tauSc} also holds for all $\ep \in [-\ep',0]$. Moreover, from the end of the proof of Lemma \ref{lem:4.6} we see that we can make the maps from (d) analytic at any given $\ep \in [-1,1]$. Of course this involves some choices, but they do not influence $\tau$ because they all come from conjugation in certain algebras. Therefore \eqref{eq:tauSc} extends to all $\ep \in [-1,1]$. As concerns the injectivity of $\Sc_\ep$, suppose that $h \in \ker (\Sc_\ep)$. Then $M(t,h) = 0$ for all unitary principal series representations $M(t)$. Since $T_{un}$ is Zariski-dense in $T$, Lemma \ref{lem:3.13} for $\mc H_\ep \rtimes \Gamma$ shows that $h = 0$. $\qquad \Box$ \\[2mm] Notice that for $\ep < 0$ composition with $\Sc_\ep$ does not preserve the local (in the dual space) traces $\mu_{Pl} (\xi) \text{tr} \, \pi^\Gamma (\xi)$, because by Theorem \ref{thm:3.8} and Proposition \ref{prop:4.7} $\se$ does not preserve the support of the Plancherel measure. In general $\Sc_\ep (\mc H_\ep \rtimes \Gamma)$ is not contained in $\mc H \rtimes \Gamma$, for two reasons: $\Sc_{\delta,\ep}$ usually does not preserve polynomiality, and not every polynomial section is in the image of $\mc F$.
For $\ep \geq 0$ the $\Sc_\ep$ extend continuously to $\mc S (\mc R,q^\ep) \rtimes \Gamma$: \begin{thm}\label{thm:4.8} For $\ep \in [0,1]$ there exist homomorphisms of Fr\'echet *-algebras \[ \begin{array}{rrrr} \Sc_\ep : & \mc S (\mc R ,q^\ep ) \rtimes \Gamma & \to & \mc S (\mc R ,q) \rtimes \Gamma \\ \Sc_\ep : & C^* (\mc R ,q^\ep ) \rtimes \Gamma & \to & C^* (\mc R ,q) \rtimes \Gamma \end{array} \] with the following properties: \enuma{ \item $\Sc_\ep$ is an isomorphism for $\ep > 0$, and $\Sc_0$ is injective. \item $\Sc_1$ is the identity. \item $\Sc_\ep (h) = h$ for all $h \in \mc S (Z(W^e))$. \item Let $x \in C^* (W^e \rtimes \Gamma)$ and let $h = \sum_{w \in W^e \rtimes \Gamma} h_w N_w$ with $p_n (h) < \infty$ for all $n \in \N$. Then the following maps are continuous: \[ \begin{array}{llllll} [0,1] & \to & \mc S (\mc R,q) \rtimes \Gamma , & \ep & \mapsto & \Sc_\ep (h) , \\ [0,1] & \to & B (L^2 (\mc R ,q)) , & \ep & \mapsto & \Sc_\ep^{-1} \Sc_0 (x) . \end{array} \] \item For all $\pi \in \mr{Mod}_f (\mc S (\mc R,q) \rtimes \Gamma)$ the representations $\pi \circ \Sc_\ep$ and $\se (\pi)$ are equivalent. \item $\Sc_\ep$ preserves the trace $\tau$. } \end{thm} \emph{Proof.} For any $(P,\delta ) \in \Delta$ the representation $\tilde \sigma_\ep (\delta )$, although not necessarily irreducible if $\ep = 0$, is certainly completely reducible, being unitary.
Hence by Theorem \ref{thm:3.8} every irreducible constituent $\pi_1$ of $\tilde \sigma_\ep (\delta )$ is a direct summand of \[ \mr{Ind}_{\mc H_{\ep ,P}^{P_1}}^{\mc H_{\ep ,P}} (\delta_1 \circ \phi_{t_1,\ep} ) \] for a $P_1 \subset P$, a discrete series representation $\delta_1$ of $\mc H (\mc R_{P_1},q^\ep)$ and a \[ t_1 \in \mr{Hom}_{\mh Z} \left( (X_P)^{P_1} , S^1 \right) = \mr{Hom}_{\mh Z} \left( X / (X \cap (P^\vee )^\perp + \mh Q P_1 ), S^1 \right) \subset T_u . \] Consequently, $\pi_\ep (P, \pi_1 ,t)$ is a direct summand of \[ \mr{Ind}_{\mc H_\ep^P}^{\mc H_\ep} \Big( \mr{Ind}_{\mc H_{\ep ,P}^{P_1}}^{\mc H_{\ep ,P}} (\delta_1 \circ \phi_{t_1 ,\ep}) \circ \phi_{t,\ep} \Big) = \pi_\ep (P_1 ,\delta_1 ,t t_1 ) . \] In particular for $t \in T^P_{un}$ every matrix coefficient of $\pi_\ep^\Gamma (P, \se (\delta ),t)$ appears in the Fourier transform of $\mc S (\mc R , q^\ep) \rtimes \Gamma$, so \eqref{eq:Fep} extends to the respective Schwartz and $C^*$-completions, as required. By Corollary \ref{cor:4.4}.b and \eqref{eq:Fourier} these maps are isomorphisms if $\ep > 0$. Since \eqref{eq:conjugateu} extends continuously to the appropriate $C^*$-completions, so does the algebra homomorphism $\Sc_\ep$ from Proposition \ref{prop:4.7}. Properties (b), (c), (e) and (f) are direct consequences of the corresponding parts of Proposition \ref{prop:4.7}. To see that $\Sc_0$ remains injective we adapt the proof of Proposition \ref{prop:4.7}. By (e) the family of representations \[ I_t \circ \Sc_\ep \cong \pi_\ep^\Gamma (\emptyset , \se (\delta_\emptyset), t) = \pi_\ep^\Gamma (\emptyset, \delta_\emptyset, t) \] with $t \in T_{un}$ becomes precisely the unitary principal series of $W^e \rtimes \Gamma$ when $\ep \to 0$. By Lemma \ref{lem:2.9} and Frobenius reciprocity every irreducible tempered representation of $\mc H (\mc R ,q^0) \rtimes \Gamma = \C [W^e \rtimes \Gamma]$ is a quotient of a unitary principal series representation.
Hence every element of $C^* (\mc R ,q^0 ) \rtimes \Gamma = C^* (W^e \rtimes \Gamma)$ that lies in the kernel of $\Sc_0$ annihilates all irreducible tempered $W^e \rtimes \Gamma$-representations, and must be 0. The assumptions in (d) mean that we can consider $h$ as an element of $\mc S (\mc R,q^\ep) \rtimes \Gamma$ for every $\ep$. Moreover the sum $\sum_{w \in W^e \rtimes \Gamma} h_w N_w$ converges uniformly to $h$ in $\mc S (\mc R,q^\ep) \rtimes \Gamma$. For every finite partial sum $h'$ the map $\ep \mapsto \Sc_\ep (h')$ is continuous by Proposition \ref{prop:4.7}.e, so this also holds for $h$ itself. For $\ep \in (0,1]$ we consider \begin{equation}\label{eq:Scep0} \Sc_\ep^{-1} \Sc_0 (x) - x = \mc F_\ep^{-1} \Big( \bigoplus_{P,\delta \in \Delta} \Sc_{P,\delta,\ep}^{-1} \Sc_{P,\delta,0} (\mc F_0 (x)) - \mc F_\ep (x) \Big) . \end{equation} Since $\Sc_{P,\delta,0}$ is invertible, both $ \Sc_{P,\delta,\ep}^{-1} \Sc_{P,\delta,0} (\mc F_0 (x))$ and $\mc F_\ep (x)$ are continuous in $\ep$ and converge to $\mc F_0 (x)$ as $\ep \downarrow 0$. The continuity of $\mc F_\ep$ with respect to $\ep$ implies that the $\mc F_\ep^{-1}$ are also continuous with respect to $\ep$, so $\ep \mapsto \Sc_\ep^{-1} \Sc_0 (x)$ is continuous. The expression between the large brackets in \eqref{eq:Scep0} also depends continuously on $\ep$ and converges to 0 as $\ep \downarrow 0$. Furthermore \[ \mc F_\ep^{-1} : \bigoplus\nolimits_{(P,\delta ) \in \Delta} \big( C (T^P_{un}) \otimes \mr{End}_\C (\C [\Gamma W^P] \otimes V_\delta) \big)^{\mc G_{P,\se (\delta)}} \to B (L^2 (\mc R ,q)) \] is a homomorphism of $C^*$-algebras, so it cannot increase the norms. Therefore \[ \lim_{\ep \downarrow 0} (\Sc_\ep^{-1} \Sc_0 (x) - x) = 0 \text{ in } B (L^2 (\mc R ,q)). \qquad \Box \] The homomorphisms from Theorem \ref{thm:4.8} are by no means natural: the construction involves a lot of arbitrary choices.
Nevertheless one can prove \cite[Lemma 5.22]{SolThesis} that it is possible to interpolate continuously between two such homomorphisms, obtained from different choices. In other words, the homotopy class of $\Sc_\ep : \mc S (\mc R,q^\ep) \rtimes \Gamma \to \mc S (\mc R ,q) \rtimes \Gamma$ is canonical. It is quite remarkable that $\Sc_0$ preserves the trace $\tau$, because the measure space $(\Xi_{un}, \mu_{Pl})$ differs substantially from $T_{un}$ with the Haar measure (which corresponds to the algebra $\mc S (X) \rtimes W'$). For example, the former is disconnected and can have isolated points, while the latter is a connected manifold. So the preservation of the trace already indicates that it is not possible to separate all components of $\Xi_{un} / \mc G$ using only elements from the image of $\Sc_0$. \begin{cor}\label{cor:Sprsigma0} For $\pi \in \mr{Irr}(\mc S (\mc R,q) \rtimes \Gamma)$ we have $\Spr (\pi) \cong \tilde \sigma_0 (\pi) \cong \pi \circ \Sc_0$, where $\Spr$ is as in Section \ref{sec:Springer}. The map $\Sc_0$ induces a linear bijection \[ G_\Q (\Sc_0) : G_\Q (\mc S (\mc R,q) \rtimes \Gamma) \to G_\Q (\mc S (W^e) \rtimes \Gamma) . \] \end{cor} \emph{Proof.} The first claim follows from Theorem \ref{thm:4.8}.e and Lemma \ref{lem:4.9}.b. The second statement follows from the first and Theorem \ref{thm:2.7}.a. $\qquad \Box$ \chapter{Noncommutative geometry} Affine Hecke algebras have some clear connections with noncommutative geometry. Already classical is the isomorphism between an affine Hecke algebra (with one formal parameter $\mathbf q$) and the equivariant $K$-theory of a Steinberg variety, see \cite{Lus-EqK,KaLu,ChGi}. Of a more analytic nature is the $K$-theory of the $C^*$-completion $C^* (\mc R,q)$ of $\mc H (\mc R,q)$. It is relevant for the representation theory of affine Hecke algebras because topological $K$-theory is built from finitely generated projective modules.
Since $K$-theory tends to be invariant under small perturbations, it is expected \cite{Ply1,BCH} that $K_* (C^* (\mc R,q))$ does not depend on $q$. We prove this modulo torsion (Theorem \ref{thm:5.3}). For the algebra $\mc H (\mc R,q)$, \pch \ is more suitable than $K$-theory. Although \pch \ is not obviously related to representation theory, there is a link for certain classes of algebras \cite{SolHomGHA}. From \cite{BaNi} it is known that $HP_* (\mc H (\mc R,q)) \cong HP_* (\C [W^e])$ when $q$ is an equal parameter function, but the proof is by no means straightforward. We connect these two theories via the Schwartz completion of $\mc H(\mc R,q)$. For this algebra both topological $K$-theory and \pch \ are meaningful invariants. Notwithstanding the different nature of the algebras $\mc H (\mc R,q)$ and $\mc S (\mc R,q)$, they have the same \pch \ (Theorem \ref{thm:5.5}). We deduce the existence of natural isomorphisms \[ HP_* (\mc H (\mc R,q)) \cong HP_* (\mc S (\mc R,q)) \cong K_* (\mc S (\mc R,q)) \otimes_\Z \C \cong K_* (C^* (\mc R,q)) \otimes_\Z \C . \] Moreover the scaling maps from Chapter \ref{chapter:scaling} provide isomorphisms from these vector spaces to the corresponding invariants of group algebras of $W^e$ (Corollary \ref{cor:5.6}). Notice the similarity with the ideas of \cite{BHP,SolPadic}. Our method of proof actually shows that $\mc S (\mc R,q)$ and $\mc S (W^e)$ are \emph{geometrically equivalent} (Lemma \ref{lem:5.7}), a term coined by Aubert, Baum and Plymen \cite{ABP1} to formulate a conjecture for Hecke algebras of $p$-adic groups. This conjecture (which we call the ABP-conjecture) describes the structure of Bernstein components in the smooth dual of a reductive $p$-adic group. Translated to affine Hecke algebras, this conjecture says among other things that the dual of $\mc H$ can be parametrized with the extended quotient $\widetilde{T} / W_0$.
The topological space Irr$(\mc H (\mc R,q))$, with its central character map to $T / W_0$, should then be obtained from $\widetilde{T} / W_0$, with its canonical projection onto $T / W_0$, via translating the components of $\widetilde{T} / W_0$ in algebraic dependence on $q$. We verify the ABP-conjecture for all affine Hecke algebras with positive parameters, possibly extended with a group of diagram automorphisms (Theorem \ref{thm:5.9}). Hence the ABP-conjecture holds for all Bernstein components of $p$-adic groups whose Hecke algebras are known to be Morita equivalent to affine Hecke algebras. In the final section we calculate in detail what happens for root data with $R_0$ of type $B_2 / C_2$. Interestingly, this also shows that the representation theory of $\mc H (\mc R,q)$ seems to behave very well under general deformations of the parameter function $q$. \section{Topological $K$-theory} By means of the canonical basis $\{ N_w : w \in W^e \rtimes \Gamma \}$ we can identify the topological vector spaces underlying the algebras $\mc S (\mc R,q) \rtimes \Gamma$ for all positive parameter functions $q$. It is clear from the rules \eqref{eq:multrules} that the multiplication in affine Hecke algebras depends continuously on $q$, in some sense. To make that precise, one endows finite dimensional subspaces of $\mc H (\mc R ,q) \rtimes \Gamma$ with their standard topology, and one defines a topology on the space of positive parameter functions by identifying them with tuples of positive real numbers. This can be extended to the Schwartz completions: by \cite[Lemma A.8]{OpSo2} the multiplication in $\mc S (\mc R ,q) \rtimes \Gamma$ is continuous in $q$, with respect to the Fr\'echet topology on $\mc S (\mc R,q) \rtimes \Gamma$. This opens up the possibility of investigating this field of Fr\'echet algebras with analytic techniques that relate the algebras for different $q$'s, a strategy that was used at some crucial points in \cite{OpSo2}.
We denote the topological $K$-theory of a Fr\'echet algebra $A$ by $K_* (A) = K_0 (A) \oplus K_1 (A)$. Since $\mc S (\mc R ,q) \rtimes \Gamma$ is closed under the holomorphic functional calculus of $C^* (\mc R,q) \rtimes \Gamma$ \cite[Theorem A.7]{OpSo2}, the density theorem \cite[Th\'eor\`eme A.2.1]{Bos} tells us that the inclusion $\mc S (\mc R ,q) \rtimes \Gamma \to C^* (\mc R ,q) \rtimes \Gamma$ induces an isomorphism \begin{equation}\label{eq:Kdensity} K_* (\mc S (\mc R,q) \rtimes \Gamma) \cong K_* (C^* (\mc R ,q) \rtimes \Gamma) . \end{equation} $K$-theory for $C^*$-algebras is homotopy-invariant, so it is natural to expect the following: \begin{conj} \label{conj:5.K} For any positive parameter function $q$ the abelian groups\\ $K_* (C^* (W^e) \rtimes \Gamma)$ and $K_* (C^* (\mc R ,q) \rtimes \Gamma)$ are naturally isomorphic. \end{conj} This conjecture stems from Higson and Plymen (see \cite[6.4]{Ply1} and \cite[6.21]{BCH}), at least when $\Gamma = \{\mr{id}\}$ and $q$ is constant on $S^\af$. It is similar to the Connes--Kasparov conjecture for Lie groups, see \cite[Sections 4--6]{BCH} for more background. Independently Opdam \cite[Section 1.0.1]{Opd-Sp} formulated Conjecture \ref{conj:5.K} for unequal parameters. We will discuss its relevance for the representation theory of affine Hecke algebras, and we will prove a slightly weaker version, obtained by applying the functor $\otimes_\Z \Q$. Recall \cite{Phi2} that for any unital Fr\'echet algebra $A ,\; K_0 (A)$ (respectively $K_1 (A)$) is generated by idempotents (respectively invertible elements) in matrix algebras $M_n (A)$. The $K$-groups are obtained by taking equivalence classes with respect to the relation generated by stabilization and homotopy equivalence. 
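For orientation we recall a standard special case, which indicates what these $K$-groups measure (a classical fact, not needed for the proofs below): for a compact Hausdorff space $X$, the Serre--Swan theorem identifies finitely generated projective $C(X)$-modules with vector bundles on $X$, whence

```latex
\[
K_0 (C(X)) \cong K^0 (X) , \qquad K_1 (C(X)) \cong K^1 (X) .
\]
```

Since the Fourier transform \eqref{eq:Fourier} realizes $\mc S (\mc R,q) \rtimes \Gamma$ as an algebra of invariant sections over the compact tori $T^P_{un}$, its $K$-theory is closely related to equivariant topological $K$-theory of these tori.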
\begin{lem}\label{lem:5.1} Suppose that \[ \kappa_\ep : K_* (C^* (W^e) \rtimes \Gamma) \to K_* (C^* (\mc R,q) \rtimes \Gamma) \qquad \ep \in [0,1] \] is a family of group homomorphisms with the following property.\\ For every idempotent $e_0 \in M_n (\mc S (\mc R ,q^0) \rtimes \Gamma) = M_n (\mc S (W^e) \rtimes \Gamma)$ (resp. invertible element $x_0 \in M_n (\mc S (\mc R ,q^0) \rtimes \Gamma)$) there exists a $C^*$-norm-continuous path $\ep \mapsto e_\ep$ (resp. $\ep \mapsto x_\ep$) in the Fr\'echet space underlying $M_n (\mc S (\mc R ,q) \rtimes \Gamma)$, such that $\kappa_\ep [e_0] = [e_\ep]$ (resp. $\kappa_\ep [x_0] = [x_\ep]$). \\ Then $\kappa_\ep = K_* (\Sc_\ep^{-1} \Sc_0)$, with $\Sc_\ep : C^* (\mc R,q^\ep) \rtimes \Gamma \to C^* (\mc R,q) \rtimes \Gamma$ as in Theorem \ref{thm:4.8}. \end{lem} By ``$C^*$-norm-continuous'' we mean that the path in $B (L^2 (\mc R,q))$ defined by mapping $\ep$ to the operator of left multiplication by $e_\ep$ (with respect to $q^\ep$), is continuous. It follows from \cite[Proposition A.5]{OpSo2} that every Fr\'echet-continuous path is $C^*$-norm-continuous. By Theorem \ref{thm:4.8}.d the maps $K_* (\Sc_\ep^{-1} \Sc_0)$ have the property that the $\kappa_\ep$ are supposed to possess, so at least the statement is meaningful. \emph{Proof.} By definition $K_* (\Sc_\ep^{-1} \Sc_0) [e_0] = [ \Sc_\ep^{-1} \Sc_0 (e_0) ]$, where we extend the $\Sc_\ep$ to matrix algebras over $C^* (\mc R,q^\ep)$ in the obvious way. The paths $\ep \to e_\ep$ and $\ep \to \Sc_\ep^{-1} \Sc_0 (e_0)$ are both $C^*$-norm-continuous, so we can find $\ep' > 0$ such that \[ \norm{\Sc_\ep^{-1} \Sc_0 (e_0) - e_\ep} < \norm{2 e_0 - 1}^{-1} = \norm{2 \Sc_\ep^{-1} \Sc_0 (e_0) - 1}^{-1} \quad \text{for all } \ep \leq \ep' .
\] Then by \cite[Proposition 4.3.2]{Bla} $e_\ep$ and $\Sc_\ep^{-1} \Sc_0 (e_0)$ are connected by a path of idempotents in $M_n (C^* (\mc R,q^\ep) \rtimes \Gamma)$, so \[ \kappa_\ep [e_0] = [e_\ep] = [ \Sc_\ep^{-1} \Sc_0 (e_0) ] = K_* (\Sc_\ep^{-1} \Sc_0) [e_0] \quad \text{for all } \ep \leq \ep' . \] For $\ep \geq \ep'$ \[ K_* (\Sc_\ep^{-1} \Sc_0) [e_0] = K_* (\Sc_{\ep}^{-1} \Sc_{\ep'}) K_* (\Sc_{\ep'}^{-1} \Sc_0) [e_0] = K_* (\Sc_{\ep}^{-1} \Sc_{\ep'}) [e_{\ep'}] . \] By parts (a) and (d) of Theorem \ref{thm:4.8}, $K_* (\Sc_{\ep}^{-1} \Sc_{\ep'}) \; (\ep \geq \ep')$ is the only family of maps $K_0 (C^* (\mc R ,q^{\ep'}) \rtimes \Gamma) \to K_0 (C^* (\mc R,q) \rtimes \Gamma)$ that comes from continuous paths of idempotents. Now consider $K_1$. Choose $\ep' > 0$ such that \[ \norm{\Sc_\ep^{-1} \Sc_0 (x_0) x_\ep^{-1} - 1} < 1 \quad \text{for all } \ep \leq \ep' . \] Then $\Sc_\ep^{-1} \Sc_0 (x_0) x_\ep^{-1}$ is homotopic to $1$ in $GL_n (C^* (\mc R,q^\ep) \rtimes \Gamma)$, so \[ K_1 (\Sc_\ep^{-1} \Sc_0) [x_0] = [ \Sc_\ep^{-1} \Sc_0 (x_0) ] = [ x_\ep ] = \kappa_\ep [x_0] \quad \text{for all } \ep \leq \ep' . \] The argument for $\ep \geq \ep'$ is just as for $K_0$. $\qquad \Box$ \\[2mm] This lemma says that the map \[ K_* (\Sc_0 ) : K_* (C^* (W^e) \rtimes \Gamma) \to K_* (C^* (\mc R,q) \rtimes \Gamma) \] is natural: it does not really depend on $\Sc_0$, only on the topological properties of idempotents and invertible elements in matrix algebras over $\mc S (\mc R,q^\ep) \rtimes \Gamma$ with $\ep \in [0,1]$. To prove that $K_* (\Sc_0)$ becomes an isomorphism after tensoring with $\Q$, we need some preparations of a more general nature. For topological spaces $Y \subset X$ and a topological algebra $A$ we write \[ C_0 (X,Y;A) = \{ f : X \to A | f \text{ is continuous and } f \big|_Y = 0 \} . \] We omit $Y$ (resp. $A$) from the notation if $Y = \emptyset$ (resp. $A = \C$).
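In this notation we also recall a standard computation for simplices (classical, and not specific to the algebras at hand): for an $n$-simplex $\sigma$ with boundary $\delta \sigma$ and a finite dimensional $\C$-algebra $A_\sigma$, the open simplex $\sigma \setminus \delta \sigma$ is homeomorphic to $\R^n$, so

```latex
\[
C_0 (\sigma ,\delta \sigma ; A_\sigma ) \cong C_0 (\R^n ) \otimes A_\sigma ,
\qquad
K_* \big( C_0 (\R^n ) \otimes A_\sigma \big) \cong K_{*+n} (A_\sigma )
\]
% by Bott periodicity, the degree being taken modulo 2.
```
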
A $C(X)$-algebra $B$ is a $\C$-algebra endowed with a unital algebra homomorphism from $C(X)$ to the center of the multiplier algebra of $B$. A morphism of $C(X)$-algebras is an algebra homomorphism that is also a $C(X)$-module map. \begin{lem}\label{lem:5.2} Let $\Sigma$ be a finite simplicial complex, let $A$ and $B$ be $C(\Sigma)$-Banach-algebras and let $\phi : A \to B$ be a morphism of $C(\Sigma)$-Banach-algebras. Suppose that \enuma{ \item for every simplex $\sigma$ of $\Sigma$ there are finite dimensional $\C$-algebras $A_\sigma$ and $B_\sigma$ such that \[ C_0 (\sigma ,\delta \sigma ) A \cong C_0 (\sigma ,\delta \sigma ;A_\sigma ) \quad \mr{and} \quad C_0 (\sigma ,\delta \sigma ) B \cong C_0 (\sigma ,\delta \sigma ; B_\sigma ) ; \] \item for every $x \in \sigma \setminus \delta \sigma$ the localized map $\phi (x) : A_\sigma \to B_\sigma$ induces an isomorphism on $K$-theory. } Then $K_* (\phi ) : K_* (A) \isom K_* (B)$ is an isomorphism. \end{lem} \emph{Proof.} Let $\Sigma^n$ be the $n$-skeleton of $\Sigma$ and consider the ideals \begin{equation} I_0 = C (\Sigma ) \supset I_1 = C_0 (\Sigma ,\Sigma^0) \supset \cdots \supset I_n = C_0 (\Sigma ,\Sigma^{n-1} ) \supset \cdots \end{equation} They give rise to ideals $I_n A$ and $I_n B$. Because $\Sigma$ is finite, all these ideals are 0 for large $n$. We can identify \begin{equation}\label{eq:5.ideals} \begin{split} & I_n A / I_{n+1} A \cong C_0 (\Sigma^n ,\Sigma^{n-1}) A \cong \\ & \bigoplus_{\sigma \in \Sigma \,:\, \dim \sigma = n} C_0 (\sigma ,\delta \sigma) A \cong \bigoplus_{\sigma \in \Sigma \,:\, \dim \sigma = n} C_0 (\sigma ,\delta \sigma ;A_\sigma ) , \end{split} \end{equation} and similarly for $B$. Because $\phi$ is $C(\Sigma )$-linear, it induces homomorphisms \[ \phi (\sigma ) : C_0 (\sigma ,\delta \sigma ; A_\sigma ) \to C_0 (\sigma ,\delta \sigma ; B_\sigma ) . \] By the additivity and the excision property of the $K$-functor (see e.g.
\cite[Theorem 9.3.1]{Bla}), it suffices to show that every $\phi (\sigma )$ induces an isomorphism on $K$-theory. Let $x_\sigma$ be any interior point of $\sigma$. Because $\sigma \setminus \delta \sigma$ is contractible, $\phi (\sigma )$ is homotopic to $\mr{id}_{C_0 (\sigma ,\delta \sigma )} \otimes \phi (x_\sigma )$. By assumption the latter map induces an isomorphism on $K$-theory. With the homotopy invariance of $K$-theory it follows that \[ K_* (\phi (\sigma )) = K_* \big( \mr{id}_{C_0 (\sigma ,\delta \sigma )} \otimes \phi (x_\sigma ) \big) \] is an isomorphism. $\qquad \Box$ \\[2mm] Obviously this lemma is in no way optimal: one can generalize it to larger classes of algebras and one can relax the finiteness assumption on $\Sigma$. Because we do not need it, we will not bother to write down such generalizations. What we will need, however, is that Lemma \ref{lem:5.2} is also valid for similar functors, in particular for $A \mapsto K_* (A) \otimes_\Z \C$. \begin{thm}\label{thm:5.3} The map \[ K_* (\Sc_0) \otimes \mr{id}_\Q : K_* (C^* (W^e) \rtimes \Gamma) \otimes_\Z \Q \to K_* (C^* (\mc R,q) \rtimes \Gamma) \otimes_\Z \Q \] is a $\Q$-linear bijection. \end{thm} \emph{Proof.} Consider the projection \[ \text{pr} : \Xi_{un} / \mc G \to T_{un} / W' , \quad \mc G (P,\delta,t) \mapsto W' r |r|^{-1} t , \] where $W(R_P) r \in T_P / W(R_P)$ is the central character of $\delta$. With this projection and Theorem \ref{thm:3.8} we make $C^* (\mc R ,q)$ into a $C(T_{un}/W')$-algebra. By Theorem \ref{thm:4.8}.e $\Sc_0 : C^* (W^e) \rtimes \Gamma \to C^* (\mc R ,q) \rtimes \Gamma$ is a morphism of $C(T_{un}/W')$-$C^*$-algebras. Choose a triangulation $\Sigma$ of $T_{un}$, such that: \begin{itemize} \item $w (\sigma) \in \Sigma$ for every simplex $\sigma \in \Sigma$ and every $w \in W'$; \item $T_{un}^G$ is a subcomplex of $\Sigma$, for every subgroup $G \subset W'$; \item the star of any simplex $\sigma$ is $W'_\sigma$-equivariantly contractible.
\end{itemize} Then $\Sigma / W'$ is a triangulation of $T_{un}/W'$. From Theorem \ref{thm:3.8} and the proof of \cite[Lemma 7]{SolChern} we see that $A = C^* (W^e) \rtimes \Gamma$ and $B = C^* (\mc R,q) \rtimes \Gamma$ are of the form required in condition (a) of Lemma \ref{lem:5.2}. For any $u \in T_{un}$ we write \[ \begin{array}{lll} A_u & := & C^* (W^e) \rtimes \Gamma / \ker I_u , \\ B_u & := & \bigoplus_{\mc G \xi \in \Xi_{un} / \mc G , \mr{pr}(\xi) = W'u} C^* (\mc R,q) \rtimes \Gamma / \ker \pi^\Gamma (\xi) . \end{array} \] Condition (b) of Lemma \ref{lem:5.2} for $K_* (?) \otimes_\Z \Q$ means that the map $\Sc_0 (W' u) : A_u \to B_u$ should induce an isomorphism \begin{equation}\label{eq:KSc0u} K_* (\Sc_0 (W' u)) : K_* (A_u) \otimes_\Z \Q \to K_* (B_u) \otimes_\Z \Q . \end{equation} As for all finite dimensional semisimple algebras, \[ K_* (A_u) = K_0 (A_u) = G_\Z (A_u) \quad \text{and} \quad K_* (B_u) = K_0 (B_u) = G_\Z (B_u) . \] With these identifications $K_0 (\Sc_0 (W' u))$ sends a projective module $e M_n (A_u)$ to the projective module $\Sc_0 (W' u)(e) M_n (B_u)$. The free abelian groups $G_\Z (A_u)$ and $G_\Z (B_u)$ have natural bases consisting of irreducible modules. With respect to these bases the matrix of $K_0 (\Sc_0 (W' u))$ is the transpose of the matrix of \[ \tilde \sigma_0 : G_\Z (B_u) \to G_\Z (A_u) , \quad \pi \mapsto \pi \circ \Sc_0 (W' u) . \] By Theorem \ref{thm:2.7}.a $\tilde \sigma_0 \otimes \mr{id}_\Q : G_\Q (B_u) \to G_\Q (A_u)$ is a bijection, so \eqref{eq:KSc0u} is also a bijection. Now we can apply Lemma \ref{lem:5.2}, which finishes the proof. $\qquad \Box$ \\[2mm] So we proved Conjecture \ref{conj:5.K} modulo torsion, which raises the question of what information is still contained in the torsion part. It is known that $K_* (C^* (\mc R,q) \rtimes \Gamma)$ is a finitely generated group. Indeed, by \cite[Theorem 6]{SolChern} this is the case for all Fr\'echet algebras of the type described in Theorem \ref{thm:3.8}.
Hence the torsion subgroup of $K_* (C^* (\mc R,q) \rtimes \Gamma)$ is finite. In fact the author does not know any examples of nontrivial torsion elements in such $K$-groups, but it is conceivable that they exist. It turns out that this is related to the multiplicities of $W'$-representations in the affine Springer correspondence from Section \ref{sec:Springer}. \begin{lem}\label{lem:5.4} The following are equivalent: \enuma{ \item $K_* (\Sc_0) : K_* (C^* (W^e) \rtimes \Gamma) \to K_* (C^* (\mc R,q) \rtimes \Gamma)$ is an isomorphism. \item $K_0 (\Sc_0) : K_0 (C^* (W^e) \rtimes \Gamma) \to K_0 (C^* (\mc R,q) \rtimes \Gamma)$ is surjective. \item For every $u \in T_{un}$ the map $\Spr$ induces a surjection from the Grothendieck group of $\mr{Mod}_{f,W' u T_{rs}} (\mc S (\mc R,q) \rtimes \Gamma)$ to that of $\mr{Mod}_{f,W'u} (W^e \rtimes \Gamma)$. \item The map $\Spr$ induces a bijection $\Spr_\Z : G_\Z (\mc S (\mc R,q) \rtimes \Gamma) \to G_\Z (\mc S (W^e) \rtimes \Gamma)$. } \end{lem} \emph{Proof.} \\ (a) $\Rightarrow$ (b) Obvious.\\ (b) $\Rightarrow$ (c) We use the notation from the proof of Theorem \ref{thm:5.3}, in particular $K_0 (A_u)$ and $K_0 (B_u)$ are the Grothendieck groups referred to in (c). We claim that the canonical map \begin{equation}\label{eq:claimK} K_0 (C^* (\mc R,q) \rtimes \Gamma) \to K_0 (B_u) \end{equation} is surjective. Recall that $K_0 (B_u)$ is built from idempotents. Given any idempotent $e_u \in M_n (B_u)$ we want to find an idempotent $e \in M_n (C^* (\mc R ,q) \rtimes \Gamma)$ that maps to it. By Theorem \ref{thm:3.8} this means that on every connected component $(P,\delta,T^P_{un}) / \mc G_{P,\delta}$ of $\Xi_{un} / \mc G$ we have to find an idempotent $e_{P,\delta}$ in \[ M_n \big( C(T^P_{un}) \otimes \mr{End}_\C (\C [\Gamma W^P] \otimes V_\delta ) \big)^{\mc G_{P,\delta}} , \] which in every point of $\mr{pr}^{-1}(W' u) \cap (P,\delta,T^P_{un}) / \mc G_{P,\delta}$ takes the value prescribed by $e_u$.
Recall that the groupoid $\mc G$ was built from elements of $W'$ and from the groups $K_P = T^P \cap T_P$. The latter elements permute the components of $\Xi_{un}$ freely, so $\mr{pr}^{-1}(W' u)$ intersects every component of $\Xi_{un}$ in at most one $\mc G$-association class. Therefore we can always find such an $e_{P,\delta}$, proving the claim \eqref{eq:claimK}. Together with assumption (b) this implies that \[ K_0 (C^* (W^e) \rtimes \Gamma) \xrightarrow{K_0 (\Sc_0)} K_0 (C^* (\mc R,q) \rtimes \Gamma) \to K_0 (B_u) \] is surjective. The underlying $C^*$-algebra homomorphism factors via $C^* (W^e) \rtimes \Gamma \to A_u$, so \begin{equation}\label{eq:K0AB} K_0 (\Sc_0 (W' u)) : K_0 (A_u) \to K_0 (B_u) \end{equation} is also surjective.\\ (c) $\Rightarrow$ (d) By Corollary \ref{cor:Sprsigma0} $\Spr_\Z (\pi) = \pi \circ \Sc_0$ for all $\pi \in \mr{Mod}_f (\mc S (\mc R,q) \rtimes \Gamma)$. So in the notation of \eqref{eq:KSc0u} $\Spr_\Z$ is the direct sum, over all $W' u \in T_{un} / W'$, of the maps \begin{equation}\label{eq:RBAu} G_\Z (B_u) \to G_\Z (A_u) : \pi \mapsto \pi \circ \Sc_0 (W' u) . \end{equation} As we noticed in the proof of Theorem \ref{thm:5.3}, the matrix of this map is the transpose of the matrix of \eqref{eq:K0AB}. We showed in the aforementioned proof that the latter map becomes an isomorphism after applying $\otimes_\Z \Q$. As $K_0 (A_u)$ and $K_0 (B_u)$ are free abelian groups, this implies that $K_0 (\Sc_0 (W' u))$ is injective. So under assumption (c) \eqref{eq:K0AB} is in fact an isomorphism. Hence, with respect to the natural bases it is given by an integral matrix with determinant $\pm 1$. Then the same goes for \eqref{eq:RBAu}, so that map is also bijective. Therefore $\Spr_\Z$ is bijective. \\ (d) $\Rightarrow$ (a) The above shows that under assumption (d) the maps \eqref{eq:RBAu} and \eqref{eq:K0AB} are bijections. Since $K_1 (A_u) = K_1 (B_u) = 0$, we may apply Lemma \ref{lem:5.1}.
$\qquad \Box$ \\[2mm] By Corollary \ref{cor:2.3}.b and property (d) of Theorem \ref{thm:2.7}, condition (c) of Lemma \ref{lem:5.4} can be reformulated as follows: for all $u \in T_{un}$ the map $\pi \mapsto \pi \big|_{W'_u}$ induces a bijection from the Grothendieck group of $\mr{Mod}_{f,\mf a} (\mh H (\tilde{\mc R}_u, k_u) \rtimes W'_{F_u,u})$ to $G_\Z (W'_u)$. According to \cite[Corollary 3.6]{Ciu} this statement is valid for all graded Hecke algebras of ``geometric type''. Hence Conjecture \ref{conj:5.K} holds, including torsion, for many important examples of affine Hecke algebras. In particular, let $I$ be an Iwahori subgroup of a split reductive $p$-adic group $G$ with root datum $\mc R$, as in Section \ref{sec:padic}. By \cite{Ply2} the completion $C^*_r (G,I)$ of $\mc H (G,I)$ is isomorphic to $C^* (\mc R,q)$, where $q$ is some prime power. It is interesting to combine Conjecture \ref{conj:5.K} with the Baum--Connes conjecture. Let $\beta G$ be the affine Bruhat--Tits building of $G$ and identify $\mf a^*$ with an apartment. The Baum--Connes conjecture for groups like $G$ and $W^e$ was proven by V. Lafforgue \cite{Laf}, see also \cite{SolPadic}. (For $W^e$ it can of course be done more elementarily.) We obtain a diagram \begin{equation}\label{eq:K*BC} \begin{array}{ccccccc} \hspace{-5mm} K_*^{W^e} (\mf a^*) & \to & K_* (C_r^* (W^e)) & \to & K_* (C^* (\mc R,q)) & \to & K_* (C_r^* (G,I)) \\ & & & & & & \downarrow \; \uparrow \\ & & K_*^G (\beta G) & \to & K_* (C_r^* (G)) & \to & \bigoplus_{\mf s \in \mc B (G)} K_* (C_r^* (G)_{\mf s}) \end{array} \end{equation} in which all the horizontal maps are natural isomorphisms, while the vertical maps pick the factor of $K_* (C_r^* (G))$ corresponding to the Iwahori-spherical component in $\mc B (G)$. For the group $G = GL_n (\mathbb F)$ this goes back to \cite{Ply1}. Notice that \eqref{eq:K*BC} realizes $K_*^{W^e}(\mf a^*)$ as a direct summand of $K_*^G (\beta G)$, which is by no means obvious in equivariant $K$-homology.
\section{Periodic cyclic homology} \label{sec:pch} Periodic cyclic homology is rather similar to topological $K$-theory, but the former functor is defined on larger classes of algebras. For example one can take the \pch \ of nontopological algebras like $\mc H (\mc R,q)$, while it is much more difficult to make sense of the topological $K$-theory of affine Hecke algebras without completing them. By definition the \pch \ of an algebra over a field $\mathbb F$ is an $\mathbb F$-vector space. Whereas topological $K$-theory for $C^*$-algebras is the generalization of $K$-theory for topological spaces, \pch \ for noncommutative algebras can be regarded as the analogue of De Rham cohomology for manifolds. In \cite[Theorem 3.3]{SolHomGHA} the author proved with homological-algebraic techniques that the \pch \ of an (extended) graded Hecke algebra $\mh H (\tilde{\mc R},k) \rtimes \Gamma$ does not depend on the parameter function $k$. Subsequently he translated this into a representation-theoretic statement, which we already used in \eqref{eq:Irr0}: the collection of irreducible tempered $\mh H (\tilde{\mc R},k) \rtimes \Gamma$-representations with real central character forms a basis of $G_\Q (W_0 \rtimes \Gamma)$. We will devise a reversed chain of arguments for affine Hecke algebras. Via topological $K$-theory we will use the affine Springer correspondence to show that $\mc H (\mc R,q) \rtimes \Gamma$ and $\mc S (\mc R,q) \rtimes \Gamma$ have the same \pch , and that it does not depend on the (positive) parameter function $q$. The material in this section can be compared with \cite{BHP,SolPadic}. Recall that the Chern character is a natural transformation $K_* \to HP_*$, where we write $HP_* (A) = HP_0 (A) \oplus HP_1 (A)$. 
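To make the analogy with De Rham cohomology concrete, consider the simplest instance of our setting (an illustrative aside, not needed in the sequel): for $R_0 = \emptyset$ and $X = \Z$ we have $\C [W^e] = \C [X] \cong \C [t,t^{-1}] = \mc O (\C^\times )$. For smooth commutative algebras the Hochschild--Kostant--Rosenberg theorem identifies the \pch \ with 2-periodic De Rham cohomology, \[ HP_j \big( \mc O (\C^\times ) \big) \cong \bigoplus\nolimits_{n \in \Z} H^{2n + j}_{DR} (\C^\times ) \qquad j = 0,1 , \] so $HP_0 (\C [t,t^{-1}]) \cong \C$ and $HP_1 (\C [t,t^{-1}]) \cong \C$, the latter spanned by the class of $t^{-1} d t$.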
By \eqref{eq:Kdensity} and \cite[Theorem 6]{SolChern} there are natural isomorphisms \begin{equation}\label{eq:Cherniso} K_* (C^* (\mc R,q) \rtimes \Gamma) \otimes_\Z \C \leftarrow K_* (\mc S (\mc R,q) \rtimes \Gamma) \otimes_\Z \C \to HP_* (\mc S (\mc R,q) \rtimes \Gamma) , \end{equation} the first one is induced by the embedding $\mc S(\mc R,q) \rtimes \Gamma \to C^* (\mc R,q) \rtimes \Gamma$, the second one by the Chern character. Here and elsewhere in this paper the \pch \ of topological algebras is always meant with respect to the completed projective tensor product. (One needs a tensor product to build the differential complex whose homology is $HP_*$.) By contrast, in the definition of the \pch \ of nontopological algebras we simply use the algebraic tensor product over~$\C$. \begin{thm}\label{thm:5.5} The inclusion $\mc H (\mc R,q) \rtimes \Gamma \to \mc S (\mc R,q) \rtimes \Gamma$ induces an isomorphism on \pch . \end{thm} \emph{Proof.} In \cite[Theorem 3.3]{SolPadic} the author proved the corresponding result for Hecke algebras of reductive $p$-adic groups. The proof from \cite{SolPadic} also applies in our setting, the important representation-theoretic ingredients being Theorem \ref{thm:3.8}, Proposition \ref{prop:3.11} and Lemma \ref{lem:3.12}. A sketch of this proof already appeared in \cite{SolPCH}. $\qquad \Box$ \begin{cor}\label{cor:5.6} There exists a natural commutative diagram \[ \begin{array}{*{7}{c}} \!\! HP_* (\C [W^e] \rtimes \Gamma) & \!\!\! \to \!\!\! & HP_* (\mc S (W^e) \rtimes \Gamma) & \!\!\! \leftarrow \!\!\! & K_* (\mc S (W^e) \rtimes \Gamma) & \!\!\! \to \!\!\! & K_* (C^* (W^e) \rtimes \Gamma) \\ \downarrow & & \downarrow \scriptstyle{HP_* (\Sc_0)} & & \downarrow \scriptstyle{K_* (\Sc_0)} & & \downarrow \scriptstyle{K_* (\Sc_0)} \\ \!\! HP_* (\mc H (\mc R,q) \rtimes \Gamma) & \!\!\! \to \!\!\! & HP_* (\mc S (\mc R,q) \rtimes \Gamma) & \!\!\! \leftarrow \!\!\! & K_* (\mc S (\mc R,q) \rtimes \Gamma) & \!\!\! \to \!\!\!
& K_* (C^* (\mc R ,q) \rtimes \Gamma) \end{array} \] After applying $\otimes_\Z \C$ to the $K$-groups, all these maps are isomorphisms. \end{cor} \emph{Proof.} The horizontal maps are induced by the inclusion maps \[ \mc H (\mc R,q) \rtimes \Gamma \to \mc S (\mc R,q) \rtimes \Gamma \to C^* (\mc R,q) \rtimes \Gamma \] and by the Chern character $K_* \to HP_*$. The vertical maps (except the leftmost one) are induced by the Fr\'echet algebra homomorphisms $\Sc_0$ from Theorem \ref{thm:4.8}. According to \eqref{eq:Cherniso} and Theorem \ref{thm:5.5} all the horizontal maps become isomorphisms after tensoring the $K$-groups with $\C$. By Lemma \ref{lem:5.1} the maps $K_* (\Sc_0)$ are natural, and by Theorem \ref{thm:5.3} they become isomorphisms after applying $\otimes_\Z \C$. The diagram commutes by functoriality, so $HP_* (\Sc_0)$ is also a natural isomorphism. Finally, we define $HP_* (\C [W^e] \rtimes \Gamma) \to HP_* (\mc H (\mc R,q) \rtimes \Gamma)$ as the unique map that makes the entire diagram commute. $\qquad \Box$ \begin{rem} Whether the leftmost vertical map comes from a suitable algebra homomorphism $\C [W^e] \rtimes \Gamma \to \mc H (\mc R,q) \rtimes \Gamma$ is doubtful; no such homomorphism is known if $q \neq 1$. \end{rem} Suppose that $X$ is the weight lattice of $R_0^\vee$, that $q \in \C \setminus \{0\}$ is any complex number which is not a root of unity, and that $q (s) = q$ for all $s \in S^\af$. In this setting an isomorphism $HP_* (\C [W^e]) \cong HP_* (\mc H (\mc R,q))$ was already constructed by Baum and Nistor \cite[Theorem 11]{BaNi}. Their proof makes essential use of the Kazhdan--Lusztig classification \cite[Theorem 7.12]{KaLu} of irreducible $\mc H (\mc R,q)$-representations, and of Lusztig's asymptotic Hecke algebra \cite{Lus-C2,Lus-C3}.
For graded Hecke algebras things are even better than in Corollary \ref{cor:5.6}: in \cite[Theorem 3.4]{SolHomGHA} it was proven that not only $HP_* (\mh H (\tilde{\mc R},k) \rtimes \Gamma)$, but also the cyclic homology and the Hochschild homology of $\mh H (\tilde{\mc R},k) \rtimes \Gamma$ are independent of $k$. Whether or not this can be transferred to $\mc H(\mc R,q)$ is unclear to the author. The point is that the comparison of $\mh H (\tilde{\mc R},k) \rtimes \Gamma$ with $\mc H (\mc R,q) \rtimes \Gamma$ goes only via analytic localizations of these algebras. Since the effect of localization on the dual space is easy to describe, we can translate the comparison between localized Hecke algebras to a comparison between their dual spaces. By \cite[Theorem 4.5]{SolHomGHA} the \pch \ of a finite type algebra essentially depends only on its dual space, so it is not surprising that the parameter independence of $HP_*$ can be transferred from graded Hecke algebras to affine Hecke algebras. On the other hand, the Hochschild homology of an algebra changes in a nontrivial way under localization. Therefore one would in the first instance only find a comparison between the Hochschild homology of two localized affine Hecke algebras with the same root datum but different parameters $q$. Possibly, provided that one knew enough about $HH_* (\mc H (\mc R,q) \rtimes \Gamma)$, one could deduce that this vector space is also independent of $q$. We remark that most certainly the $Z (\mc H (\mc R,q) \rtimes \Gamma)$-module structure of $HH_* (\mc H (\mc R,q) \rtimes \Gamma)$ will depend on $q$, because that is already the case for graded Hecke algebras, see the remark to Theorem 3.4 in \cite{SolHomGHA}. \section{Weakly spectrum preserving morphisms} \label{sec:weakly} For the statement and the proof of the Aubert--Baum--Plymen conjecture we need spectrum preserving morphisms and relaxed versions of those. These notions were developed in \cite{BaNi,Nis}.
Baum and Nistor work in the category of finite type $\mathbf k$-algebras, where $\mathbf k$ is the coordinate ring of some complex affine variety. Since we are also interested in certain Fr\'echet algebras, we work in a larger class of algebras. We cannot do without some finiteness assumptions, but it suffices to impose them on representations. So, throughout this section we assume that for all our complex algebras $A$ there exists an $N \in \N$ such that the dimensions of irreducible $A$-modules are uniformly bounded by $N$. In particular $\pi \mapsto \ker \pi$ is a bijection from Irr$(A)$ to the collection of primitive ideals of $A$. A homomorphism $\phi : A \to B$ between two such algebras is called spectrum preserving if \begin{itemize} \item for every primitive ideal $J \subset B$, there is exactly one primitive ideal $I \subset A$ containing $\phi^{-1} (J)$; \item the map $J \mapsto I$ induces a bijection $\text{Irr}(\phi) : \text{Irr}(B) \to \text{Irr}(A)$. \end{itemize} We can relax these conditions in the following way. Suppose that there exist filtrations \begin{equation}\label{eq:filAB} \begin{array}{ccccccc} A = A_0 & \supset & A_1 & \supset & \cdots & \supset & A_n = 0, \\ B = B_0 & \supset & B_1 & \supset & \cdots & \supset & B_n = 0 \end{array} \end{equation} by two sided ideals, such that $\phi (A_i) \subset B_i$ for all $i$. We call $\phi : A \to B$ weakly spectrum preserving if all the induced maps $\phi_i : A_i / A_{i+1} \to B_i / B_{i+1}$ are spectrum preserving. In this case there are bijections \begin{align*} & \sqcup_i \text{Irr}(A_i / A_{i+1}) \to \text{Irr}(A), \\ & \sqcup_i \text{Irr}(B_i / B_{i+1}) \to \text{Irr}(B), \\ & \text{Irr}(\phi) := \sqcup_i \text{Irr}(\phi_i) : \text{Irr}(B) \to \text{Irr}(A) . \end{align*} Notice that Irr$(\phi)$ depends not only on $\phi$, but also on the filtrations of $A$ and $B$.
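A basic example of a spectrum preserving morphism, included here only for illustration, is the inclusion of the diagonal matrices in the upper triangular $2 \times 2$ matrices: \[ \phi : \C \oplus \C \to \Big\{ \begin{pmatrix} a & b \\ 0 & c \end{pmatrix} : a,b,c \in \C \Big\} , \qquad (a,c) \mapsto \begin{pmatrix} a & 0 \\ 0 & c \end{pmatrix} . \] Both algebras have exactly two irreducible modules, namely the characters picking out $a$ and $c$, and for each of the two primitive ideals $J$ of the target there is a unique primitive ideal of $\C \oplus \C$ containing $\phi^{-1} (J)$. Thus Irr$(\phi)$ is the obvious bijection, even though $\phi$ is far from surjective and the two algebras are not isomorphic.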
\begin{lem}\label{lem:5.7} Let $\phi : A \to B$ be a weakly spectrum preserving morphism, and suppose that the dimensions of irreducible $B$-modules are uniformly bounded by $N \in \N$. Then $\text{Irr}(\phi)^{-1} (V(I)) = V ( \phi (I)^N )$ for every two-sided ideal $I \subset A$. In particular the bijection Irr$(\phi)$ is continuous with respect to the Jacobson topology (cf. Section \ref{sec:dual}). \end{lem} \emph{Proof.} We proceed by induction on the length $n$ of the filtration. For $n=1$ the morphism $\phi$ is spectrum preserving, so the statement reduces to \cite[Lemma 9]{BaNi}. For $n >1$ the induction hypothesis applies to the homomorphisms $\phi : A_1 \to B_1$ and $\phi_1 : A / A_1 \to B / B_1$. So for $\pi \in \text{Irr}(B / B_1) \subset \text{Irr}(B)$ we have \begin{align*} & \pi \in \text{Irr}(\phi)^{-1} (V(I)) \subset \text{Irr}(B) \Longleftrightarrow \\ & \pi \in \text{Irr}(\phi_1)^{-1} (V(I + A_1 / A_1)) \subset \text{Irr}(B / B_1) \Longleftrightarrow \\ & \pi \in V( \phi (I)^N + B_1 / B_1) \subset \text{Irr}(B / B_1) \Longleftrightarrow \\ & \pi \in V( \phi (I)^N) \subset \text{Irr}(B) . \end{align*} A similar argument applies to $\pi \in \text{Irr}(B_1) \subset \text{Irr}(B). \qquad \Box$ \\[2mm] The automatic continuity of Irr$(\phi)$ enables us to extract a useful map from the Fr\'echet algebra morphism $\Sc_0$: \begin{lem}\label{lem:5.8} The morphism $\Sc_0 : \mc S(W^e) \rtimes \Gamma \to \mc S (\mc R ,q) \rtimes \Gamma$ is weakly spectrum preserving. \end{lem} \emph{Proof.} We will make use of the proofs of Lemma \ref{lem:5.2} and Theorem \ref{thm:5.3}. There we constructed a $W'$-equivariant triangulation of $T_{un}$, which led to two-sided ideals \[ \begin{array}{lll} I_n = C_0^\infty (\Sigma, \Sigma^{n-1})^{W'} & \subset & C^\infty (T_{un})^{W'} , \\ I_n \mc S (\mc R,q) \rtimes \Gamma & \subset & \mc S (\mc R,q) \rtimes \Gamma , \\ I_n \mc S (W^e) \rtimes \Gamma & \subset & \mc S (W^e) \rtimes \Gamma .
\end{array} \] (Here and below we regard the $n$-skeleton $\Sigma^n$ both as a simplicial complex and as a subset of $T_{un}$.) It suffices to show that the induced map \begin{equation} \label{eq:Sc0n} \Sc_{0,n} : I_n \mc S (W^e) \rtimes \Gamma / I_{n+1} \mc S (W^e) \rtimes \Gamma \to I_n \mc S (\mc R,q) \rtimes \Gamma / I_{n+1} \mc S (\mc R,q) \rtimes \Gamma \end{equation} is weakly spectrum preserving, for every $n$. Fortunately the dual spaces of these quotient algebras are rather simple: by \eqref{eq:5.ideals} \begin{equation}\label{eq:IrrSigma} \text{Irr} \big( I_n \mc S (\mc R,q) \rtimes \Gamma / I_{n+1} \mc S (\mc R,q) \rtimes \Gamma \big) \cong \bigsqcup_{\sigma \in \Sigma / W' , \dim \sigma = n} (\sigma \setminus \delta \sigma) \times \mr{Irr}_{x_\sigma} (\mc S (\mc R,q) \rtimes \Gamma) , \end{equation} where $x_\sigma \in \sigma \setminus \delta \sigma$ and $\mr{Irr}_{x_\sigma} (\mc S (\mc R ,q) \rtimes \Gamma)$ denotes the dual space of the algebra \begin{equation}\label{eq:Sxsigma} \bigoplus_{\mc G \xi \in \Xi_{un} / \mc G , \mr{pr}(\xi) = W' x_\sigma} \mc S (\mc R,q) \rtimes \Gamma / \ker \pi^\Gamma (\xi) . \end{equation} By construction $\Sc_{0,n}$ is $C_0^\infty$-linear, so in particular it is linear over \[ C_0^\infty (\Sigma^n , \Sigma^{n-1}) := I_n / I_{n+1} . \] We know from Theorem \ref{thm:2.7}.a and Corollary \ref{cor:Sprsigma0} that $G_\Q (\Sc_{0,n})$ is a bijection, so in particular \begin{equation}\label{eq:GQSc0} G_\Q (\Sc_{0,n}) : G_\Q ( \mr{Mod}_{x_\sigma} (\mc S (\mc R ,q) \rtimes \Gamma) ) \to G_\Q ( \mr{Mod}_{x_\sigma} (\mc S (W^e) \rtimes \Gamma) ) \end{equation} is a bijection. Any ordering $(\pi_1 ,\pi_2 ,\ldots, \pi_k )$ of $\mr{Irr}_{x_\sigma} (\mc S (\mc R,q) \rtimes \Gamma)$ gives rise to a filtration of \eqref{eq:Sxsigma} by ideals \[ B_i := \bigcap\nolimits_{j=1}^i \ker \pi_j \qquad i=0,1,\ldots,k .
\] Since we are dealing with two finite dimensional semisimple algebras of the same rank $k$, \eqref{eq:GQSc0} can be described completely with a matrix $M \in GL_k (\Z)$. Order $\mr{Irr}_{x_\sigma} (\mc S (\mc R,q) \rtimes \Gamma)$ and $\mr{Irr}_{x_\sigma} (\mc S (W^e) \rtimes \Gamma)$ such that all the principal minors of $M$ are nonsingular. Then the corresponding ideals $B_i$ of \eqref{eq:Sxsigma} and $A_i \subset \mc S (W^e) \rtimes \Gamma / \ker I_{x_\sigma}$ are such that $\Sc_{0,n} (x_\sigma)$ induces spectrum preserving morphisms $A_i / A_{i+1} \to B_i / B_{i+1}$. Hence $\Sc_{0,n}(x_\sigma)$ is weakly spectrum preserving. It follows from this and \eqref{eq:IrrSigma} that for any $n$-dimensional simplex $\sigma \in \Sigma / W'$ we can construct filtrations by two-sided ideals in \[ C_0^\infty (\Sigma , \delta \sigma)^{W'} \mc S (\mc R,q) \rtimes \Gamma / C_0^\infty (\Sigma , \sigma)^{W'} \mc S (\mc R,q) \rtimes \Gamma \] and in \[ C_0^\infty (\Sigma , \delta \sigma)^{W'} \mc S (W^e) \rtimes \Gamma / C_0^\infty (\Sigma , \sigma)^{W'} \mc S (W^e) \rtimes \Gamma , \] with respect to which the map induced by $\Sc_{0,n}$ is weakly spectrum preserving. We do this for all such simplices $\sigma$, and then \eqref{eq:5.ideals} shows that \eqref{eq:Sc0n} is weakly spectrum preserving. $\qquad \Box$ \\[2mm] A related notion that we will use in the next section is geometric equivalence of algebras, as defined in \cite[Section 4]{ABP1}. The basic idea is to call $A$ and $B$ geometrically equivalent if they are Morita-equivalent or if there exists a weakly spectrum preserving morphism $\phi : A \to B$. Furthermore two finite type $\mathbf k$-algebras are geometrically equivalent if they only differ by an algebraic deformation of the $\mathbf k$-module structure. Now one defines geometric equivalence to be the equivalence relation (on the category of finite type $\mathbf k$-algebras) generated by these three elementary moves.
So whenever two algebras $A$ and $B$ are geometrically equivalent, they are so by various sequences of elementary moves. Every such sequence induces a bijection between the dual spaces of $A$ and $B$, which however need not be continuous, since the map Irr$(\phi)$ from Lemma \ref{lem:5.7} is usually not a homeomorphism. Nevertheless, by \cite[Theorem 8]{BaNi} every weakly spectrum preserving morphism of finite type algebras $\phi : A \to B$ induces an isomorphism $HP_* (\phi) : HP_* (A) \to HP_* (B)$. The other two moves are easily seen to respect \pch , so geometric equivalence implies $HP$-equivalence. \section{The Aubert--Baum--Plymen conjecture} \label{sec:ABP} In a series of papers \cite{ABP1,ABP2,ABP3,ABP4} Aubert, Baum and Plymen developed a conjecture that describes the structure of Bernstein components in the smooth dual of a reductive $p$-adic group. We will rephrase this conjecture for affine Hecke algebras, and prove it in that setting. A central role is played by extended quotients. Let $G$ be a finite group acting continuously on a topological space $T$. We endow \[ \widetilde{T} := \{ (g,t) \in G \times T : g \cdot t = t \} \] with the subspace topology from $G \times T$. Then $G$ also acts continuously on $\widetilde T$, by \[ g \cdot (g',t) = (g g' g^{-1}, g \cdot t) . \] The extended quotient of $T$ by $G$ is defined as $\widetilde{T} / G$. It comes with a projection onto the normal quotient: \[ \widetilde{T} / G \to T / G : G (g,t) \mapsto G t. \] The fiber over $G t \in T /G$ can be identified with the collection $\langle G_t \rangle$ of conjugacy classes in the isotropy group $G_t$. The relevance of the extended quotient for representation theory comes from crossed product algebras. Suppose that $F(T)$ is an algebra of continuous complex valued functions on $T$, which separates the points of $T$ and is stable under the action of $G$ on $C(T)$. These conditions ensure that the crossed product $F(T) \rtimes G$ is well-defined. 
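To illustrate the extended quotient in the simplest nontrivial case (an example, not needed later): let $G = \{ 1,g \} \cong \Z / 2 \Z$ act on $T = \C^\times$ by $g \cdot t = t^{-1}$. The element $g$ fixes only $t = \pm 1$, so \[ \widetilde T = \big( \{1\} \times \C^\times \big) \sqcup \{ (g,1) , (g,-1) \} . \] Since $G$ is abelian, it acts trivially on the two points $(g,\pm 1)$, and \[ \widetilde T / G \cong (\C^\times / G) \sqcup \{ (g,1) , (g,-1) \} . \] The fibers of $\widetilde T / G \to T / G$ over $G \cdot 1$ and $G \cdot (-1)$ have two elements each, in accordance with the two conjugacy classes in the isotropy groups $G_{\pm 1} = G$, while all other fibers are singletons.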
The dual space of the crossed product $F(T) \rtimes G$ was determined in classical results that go back to Frobenius and Clifford (see \cite[Section 49]{CuRe1}). The collection of irreducible representations with an $F(T)$-weight $t \in T$ is in natural bijection with Irr$(G_t)$, via the map \[ \pi \mapsto \mr{Ind}_{F(T) \rtimes G_t}^{F(T) \rtimes G} \C_t \otimes \pi . \] Since $| \text{Irr}(G_t) | = | \langle G_t \rangle |$, there exists a bijection \begin{equation}\label{eq:T/G} \widetilde{T} / G \to \text{Irr} (F(T) \rtimes G) \end{equation} which maps $G (g,t)$ to a representation with an $F(T)$-weight $t$. With a little more work one can find a continuous bijection. However, it is neither natural nor a homeomorphism, except in very simple cases. We return to an extended affine Hecke algebra $\mc H (\mc R,q) \rtimes \Gamma$. As described in Section \ref{sec:defAHA}, the parameter function $q$ is completely determined by its values on the quotient set $R_{nr}^\vee / W_0 \rtimes \Gamma$. Let $\mc Q (\mc R)$ be the complex variety of all maps $R_{nr}^\vee / W_0 \rtimes \Gamma \to \C^\times$. To every $v \in \mc Q (\mc R)$ we associate the parameter function $q$ with $q_{\alpha^\vee} = v (\alpha^\vee )^2$. \begin{conj}\label{conj:5.ABP} \textup{(ABP-conjecture for affine Hecke algebras)} \enuma{ \item The algebras $\C [W^e] \rtimes \Gamma$ and $\mc H (\mc R ,q) \rtimes \Gamma$ are geometrically equivalent. \item There exists a canonical isomorphism $HP_* (\C [W^e] \rtimes \Gamma) \cong HP_* (\mc H (\mc R ,q) \rtimes \Gamma)$. \item There exists a continuous bijection $\mu : \widetilde{T} / W' \to \mr{Irr}(\mc H (\mc R,q) \rtimes \Gamma)$ such that $\mu ( \widetilde{T_{un}} / W' ) = \mr{Irr}(\mc S (\mc R,q) \rtimes \Gamma)$. \item For every connected component $c$ of $\widetilde{T} / W'$ there exists a smooth morphism of algebraic varieties $h_c : \mc Q (\mc R) \to T$ with the following properties.
For all components $c$ we have $h_c (1) = 1$, and \[ \mr{pr}_{q^{1/2}} (\widetilde{T} / W' - T / W') = \{ t \in T : I_t \text{ is reducible } \} / W' , \] where $\mr{pr}_v : \widetilde{T} / W' \to T / W'$ is defined by \[ \mr{pr}_v (W' (w,t)) = W' h_c (v) t \quad \text{for} \quad v \in \mc Q (\mc R), W' (w,t) \in c . \] Moreover $\mu$ can be chosen such that the central character of $\mu (W' (w,t))$ is $W' h_c (q^{1/2}) t$ for $W' (w,t) \in c$. } \end{conj} We will now discuss the different parts of the ABP-conjecture. As we mentioned at the end of Section \ref{sec:weakly}, every explicit geometric equivalence gives rise to an isomorphism on \pch . However, this isomorphism need not be natural, so (b) does not yet follow from (a). In \cite{ABP1,ABP3} we see that Aubert, Baum and Plymen have a geometric equivalence via Lusztig's asymptotic Hecke algebra \cite{Lus-C3} in mind. The corresponding isomorphism \cite[Theorem 11]{BaNi} \[ HP_* (\mc H (\mc R,q)) \cong HP_* (\C [W^e]) \] can be regarded as canonical, albeit in a rather weak sense. Unfortunately, for unequal parameter functions this asymptotic Hecke algebra exists only as a conjecture. Neither does the author know any other way to construct a geometric equivalence between $\mc H (\mc R,q) \rtimes \Gamma$ and $\C [W^e] \rtimes \Gamma$, so this part of the conjecture remains open for unequal parameter functions. As a substitute we offer Lemma \ref{lem:5.8}, which has approximately the same strength. It is weaker because it concerns only topologically completed versions of the algebras, but it is stronger in the sense that the geometric equivalence consists of only one weakly spectrum preserving morphism. Part (b) of Conjecture \ref{conj:5.ABP} was already dealt with in Corollary \ref{cor:5.6}. Let us construct a map $\mu$ as in part (c). 
Recall from \eqref{eq:basisSmoothFam} that there exist tempered smooth families $\{ \pi_{i,t} : t \in V_i \}$ which together form a basis of $G_\Q (\mc S (\mc R,q) \rtimes \Gamma)$. The parameter space of such a family is of the form $V_i = u_i \exp (\mf a^{g_i})$ for some $u_i \in T_{un}, g_i \in W'$. By \eqref{eq:SprTempGr} the number of families with parameter space of the form $g V_i$ for some $g \in W'$ is precisely the number of components of $\widetilde{T_{un}} / W'$ whose projection onto $T_{un} / W'$ is $W' V_i$. Hence we can find a continuous bijection \[ \widetilde{T_{un}} / W' \to \{ \pi_{i,t} : t \in V_i , i \in I \} . \] This will be our map $\mu$ whenever the image $\pi_{i,t}$ is irreducible, which is the case on a Zariski-dense subset of $\widetilde{T_{un}} / W'$. To extend $\mu$ continuously to all nongeneric points, we need to find irreducible subrepresentations $\pi'_{i,t} \subset \pi_{i,t}$, such that $\{ \pi'_{i,t} : t \in V_i , i \in I \} = \mr{Irr} (\mc S (\mc R,q) \rtimes \Gamma)$. For 0-dimensional components all points are generic, so there is nothing to do. If we have already defined $\mu$ on all components of $\widetilde{T_{un}} / W'$ of dimension smaller than $d$, and $(i,t)$ corresponds to a nongeneric point $W'(w,t)$ in a component of dimension $d$, then we choose for $\mu (W' (w,t))$ any irreducible subrepresentation of $\pi_{i,t}$ that we did not have yet in the image of the previously handled components. This process can be carried out completely because of \eqref{eq:SprTempGr}, and yields a continuous bijection \begin{equation} \mu : \widetilde{T_{un}} / W' \to \mr{Irr} (\mc S (\mc R,q) \rtimes \Gamma). \end{equation} From this and Lemmas \ref{lem:5.7} and \ref{lem:5.8} we obtain continuous bijections \[ \widetilde{T_{un}} / W' \to \mr{Irr} (\mc S (\mc R,q) \rtimes \Gamma) \to \mr{Irr} (\mc S(W^e) \rtimes \Gamma) . 
\] As explained after \eqref{eq:projPdelta} and \eqref{eq:basisSmoothFam}, these can be extended canonically to continuous bijections \begin{equation} \widetilde{T} / W' \to \mr{Irr} (\mc H (\mc R,q) \rtimes \Gamma) \to \mr{Irr} (W^e \rtimes \Gamma). \end{equation} Now we show that part (d) of the conjecture is valid for this $\mu$. Suppose that $W' (w,t_0) \in \widetilde{T_{un}} / W'$ is such that the corresponding representation $\pi_{i,t_0}$ is irreducible. With Theorem \ref{thm:3.10} we can find an induction datum $\xi^+ (\pi_{i,t_0}) = (P,\delta,t_1) \in \Xi_{un}$ such that $\pi_{i,t_0}$ is a subquotient of $\pi^\Gamma (P,\delta,t_1)$. Then $\mu (W' (w,t_0 t_2))$ is a subquotient of $\pi^\Gamma (P,\delta,t_1 t_2)$ for all $t_2 \in \exp (\mf t^w)$, so its central character is $W' r t_1 t_2$, where $W (R_P) r \in T_P / W(R_P)$ is the central character of the $\mc H_P$-representation $\delta$. According to \cite[Lemma 3.31]{Opd-Sp} $r \in T_P$ is a residual point for $\mc R_P$, which by Proposition 2.63 and Theorem 2.58 of \cite{OpSo2} means that the coordinates of $r$ can be expressed as an element of $T_{P,un}$ times a monomial in the variables $\{ q (s)^{\pm 1/2} : s \in S_\af \}$. Hence we can write $|r| = h_c (q^{1/2})$, where $c$ is the component of $\widetilde{T} / W'$ containing $W' (w,t_0)$ and $h_c : \mc Q (\mc R) \to T_P \subset T$ is a smooth algebraic morphism with $h_c (1) = 1$. Now $W' h_c (q^{1/2}) t_0 t_2$ is by construction the central character of $\mu (W' (w,t_0 t_2))$. We note that the discrete series representation $\delta_\emptyset$ of $\mc H_\emptyset = \C$ has central character $1 \in T_\emptyset = \{1\}$, so $h_c = 1$ when $c$ has dimension rank$(X)$. Let $\mr{pr}_{q^{1/2}}$ be as in part (d) of Conjecture \ref{conj:5.ABP} and temporarily denote the difference of two sets by $-$. Then $\mr{pr}_{q^{1/2}} (\widetilde{T} / W' - T / W')$ is the set of central characters of $\mu (\widetilde{T} / W' - T / W')$. 
Since $\mu$ parametrizes irreducible representations, and since every $\pi \in \text{Irr}(\mc H (\mc R,q) \rtimes \Gamma)$ with central character $t$ is a quotient of the principal series representation $I_t$, no element of $\mr{pr}_{q^{1/2}} (\widetilde{T} / W' - T / W')$ can be the parameter of an irreducible principal series representation. Conversely, suppose that $t \in T$ is not in the aforementioned set. In view of Lemma \ref{lem:3.6} we may assume that $t \in T^+$. Then there is, up to isomorphism, only one $\pi_t \in \text{Irr}(\mc H (\mc R,q) \rtimes \Gamma)$ with central character $W' t$. Hence the induction datum $\xi^+ (\pi_t)$ is $(\emptyset, \delta_\emptyset,t) \in \Xi^+$. By Theorem \ref{thm:3.9} the intertwining operators \[ \{ \pi (g,\emptyset, \delta_\emptyset,t) : g \in \mc G , g (\emptyset, \delta_\emptyset,t) = (\emptyset, \delta_\emptyset,t) \} \] span $\mr{End}_{\mc H \rtimes \Gamma} (I_t)$. If any of these intertwiners were nonscalar, then $I_t$ would contain nonisomorphic irreducible constituents. The latter is impossible, so $\mr{End}_{\mc H \rtimes \Gamma} (I_t) = \C \text{id}$. This in turn implies that $\pi_t$ cannot be both a subrepresentation and a quotient of $I_t$, unless $\pi_t = I_t$. Therefore $\mr{pr}_{q^{1/2}} (\widetilde{T} / W' - T / W')$ is precisely the subset of $t \in T$ for which the principal series representation $I_t$ is reducible. By \eqref{eq:rescosd} this set contains all residual cosets of dimension smaller than $\dim_\C (T)$. However, in general $\mr{pr}_{q^{1/2}} (\widetilde{T} / W' - T / W')$ is larger, because a unitary principal series representation can be reducible. We note that by Proposition \ref{prop:4.2} the same map $\mr{pr}_v$ also makes part (d) valid for the scaled parameter functions $q^\ep$ with $\ep \in \R$. However, for other parameter functions changes can occur. 
Summarizing, we showed that: \begin{thm} \label{thm:5.9} Parts (b), (c) and (d) of Conjecture \ref{conj:5.ABP} hold for every extended affine Hecke algebra with a positive parameter function $q$. Part (a) holds for the Schwartz completions of the algebras in question. Hence the Aubert--Baum--Plymen conjecture \cite{ABP1,ABP2} for a Bernstein component $\mf s$ of a reductive $p$-adic group $G$ holds whenever the algebra $\mc H (G)_{\mf s}$ is Morita-equivalent to an extended affine Hecke algebra in the way described in Section \ref{sec:padic}. In particular this applies to all Bernstein components listed at the end of that section. \end{thm} \section{Example: type $C_2^{(1)}$} In this final section we illustrate what the Aubert--Baum--Plymen conjecture looks like for an affine Hecke algebra with $R_0$ of type $B_2 / C_2$ and $X$ the root lattice. More general results for type $C_n^{(1)}$ affine Hecke algebras can be found in \cite{Kat2,CiKa}. For other examples we refer to \cite[Chapter 6]{SolThesis}. Consider the based root datum $\mc R$ with \begin{align*} & X = Y = \Z^2 ,\\ & R_0 = \{ x \in X : \norm{x} = 1 \text{ or } \norm{x} = \sqrt 2 \} ,\\ & R_0^\vee = \{ y \in Y : \norm{y} = 2 \text{ or } \norm{y} = \sqrt 2 \} ,\\ & F_0 = \{ \alpha_1 = \vv{1}{-1} , \alpha_2 = \vv{0}{1} \} . \end{align*} Then $\alpha_4 = \vv{1}{1}$ is the longest root and $\alpha_3^\vee = \vv{2}{0}$ is the longest coroot, so \[ S_\af = \{ s_{\alpha_1}, s_{\alpha_2}, t_{\alpha_3} s_{\alpha_3} \} . \] We write $s_i = s_{\alpha_i}$ for $1 \leq i \leq 4$ and $s_0 = t_{\alpha_3} s_{\alpha_3}$. The Weyl group $W_0$ is isomorphic to $D_4$ and consists of the elements \[ W_0 = \{ e, \rho_{\pi/2}, \rho_\pi, \rho_{-\pi/2} \} \cup \{ s_1,s_2,s_3,s_4 \} , \] where $\rho_\theta$ denotes the rotation with angle $\theta$. The affine Weyl group of $\mc R$ is the Coxeter group \[ W_\af = W^e = X \rtimes W_0 = \langle s_0, s_1, s_2 | s_i^2 = (s_0 s_2)^2 = (s_1 s_2)^4 = (s_0 s_1)^4 = e \rangle . 
\] Furthermore $R_{nr} = R_0 \cup \{ \pm 2\alpha_2, \pm 2\alpha_3 \}$ and $X^+ = \{ \vv{m}{n} \in X : m \geq n \geq 0 \}$. We note that $\mc R$ is the root datum of the algebraic group $SO_5$, while $\mc R^\vee$ corresponds to $Sp_4$. Let $\mathbb F$ be a $p$-adic field whose residue field has $q$ elements, and let $\mf s$ be the Iwahori-spherical component of $Sp_4 (\mathbb F)$. Then $\mr{Mod}_{\mf s} (Sp_4 (\mathbb F)) \cong \mr{Mod}(\mc H (\mc R,q))$ and Kazhdan--Lusztig theory describes the irreducible representations in this category with data from $Sp_4 (\mathbb F)$. But there are many more parameter functions for $\mc R$. Since $s_0, s_1$ and $s_2$ are not conjugate in $W^e$, we can independently choose three parameters \[ q_0 = q(s_0) = q_{\alpha_2^\vee / 2}, \quad q_1 = q(s_1) = q_{\alpha_1^\vee}, \quad q_2 = q(s_2) = q_{\alpha_2^\vee} . \] Several combinations of these parameters occur in Hecke algebras associated to non-split $p$-adic groups, see \cite{Lus-Uni}. The $c$-functions are \[ c_{\alpha_1} = \frac{\theta_{\alpha_1} - q_1^{-1}}{\theta_{\alpha_1} - 1} \text{ and } c_{\alpha_2} = \frac{\theta_{\alpha_2} + q_2^{-1/2} q_0^{1/2}}{\theta_{\alpha_2} + 1} \frac{\theta_{\alpha_2} - q_2^{-1/2} q_0^{-1/2}}{\theta_{\alpha_2} - 1} . \] For $q_0 = q_2$ the relations from Theorem \ref{thm:1.1}.d simplify to \begin{equation}\label{eq:multq0=q2} f N_{s_i} = N_{s_i} s_i (f) = (q_i^{1/2} - q_i^{-1/2}) (f - s_i (f)) (1 - \theta_{-\alpha_i})^{-1} \qquad i = 1,2 . \end{equation} In contrast with graded Hecke algebras, $\mc H (\mc R\!=\!\mc R (SO_5), q_1, q_2 = q_0)$ is not isomorphic to $\mc H (\mc R^\vee\!=\!\mc R (Sp_4), q_2, q_1)$. 
The reason is that in $\mc H (\mc R^\vee, q_2, q_1)$ the relation \[ f N_{s_{\vv{0}{2}}} = N_{s_{\vv{0}{2}}} s_{\vv{0}{2}} (f) = (q_2^{1/2} - q_2^{-1/2}) (f - s_{\vv{0}{2}} (f)) (1 - \theta_{\vv{0}{-2}})^{-1} \qquad f \in \mc A \] holds, which really differs from \eqref{eq:multq0=q2} because the root lattice $\Z \vv{-1}{1} + \Z \vv{0}{2}$ does not equal $X$ for the root datum $\mc R^\vee$. We will work out the tempered dual of $\mc H (\mc R ,q)$ for almost all positive parameter functions $q$. To this end we discuss each parabolic subalgebra $\mc H_P$ with $P \subset \{ \alpha_1, \alpha_2 \}$ separately. Its contribution to Irr$(\mc S (\mc R,q))$ will of course depend on $q$, and may even be empty in some cases. \vspace{3mm} {\Large $\bullet \qquad P = \emptyset$ } \begin{align*} & X_P = \{0\}, X^P = X, Y_P = \{0\}, Y^P = Y, R_P = \emptyset, R_P^\vee = \emptyset , W(R_P) = \{e\} , \\ & T_P = \{1\}, T^P = T, \mc G_P = W_0, \mc H_P = \C, \mc H^P = \mc A \cong \C [X] . \end{align*} We must determine the reducibility of the unitary principal series representations \[ I_t = \mr{Ind}_{\mc A}^{\mc H} \C_t = \pi (\emptyset, \delta_\emptyset,t) \qquad t \in T_{un} . \] By Theorem \ref{thm:3.9} $\mr{End}_{\mc H}(I_t)$ is spanned by the intertwining operators $\pi (w,\emptyset,\delta_\emptyset,t)$ with $w \in W_0$ and $w (t) = t$. For a root $\alpha \in R_0$ with $s_\alpha (t) = t$, Lemma \ref{lem:3.15} tells us that $\pi (s_\alpha,\emptyset, \delta_\emptyset,t)$ is a scalar if and only if $c_\alpha^{-1}(t) = 0$. \\ \begin{minipage}{11cm} Let us write $t = (t_3,t_2)$ with $t_i = t(\alpha_i)$. A fundamental domain for the action of $W_0$ on $T_{un}$ is $\{ t = (e^{i\phi}, e^{i \psi}) : 0 \leq \psi \leq \phi \leq \pi \} $: \end{minipage} \raisebox{-15mm}{\includegraphics[width = 3cm]{triangle}} The isotropy groups are trivial for all interior points, so $I_t$ is irreducible for such $t$. 
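This scalarity criterion can be probed symbolically. The sketch below (Python with sympy, purely illustrative) uses the inverse $c$-functions, reading the second factor of $c_{\alpha_2}$ as $\theta_{\alpha_2} - q_2^{-1/2} q_0^{-1/2}$ (consistent with the residual points for $P = \{\alpha_2\}$ below), and recovers the reducibility loci $q_0 q_2 = 1$ and $q_1 = 1$ on two boundary strata; the sample parameter values are arbitrary.

```python
# Sketch: test when an intertwining operator at a boundary point of
# T_un/W_0 is scalar, using the criterion c_alpha^{-1}(t) = 0.
import sympy as sp

th, q0, q1, q2 = sp.symbols('theta q0 q1 q2', positive=True)

# Inverse c-functions (second factor of c_{alpha_2} read with q2).
c1_inv = (th - 1) / (th - 1/q1)
c2_inv = (th + 1)*(th - 1) / ((th + sp.sqrt(q0/q2))*(th - 1/sp.sqrt(q0*q2)))

# Stratum t = (e^{i phi}, 1), stabilized by s_2: theta_{alpha_2}(t) = 1.
# Generically c2_inv vanishes there, so I_t stays irreducible.
assert c2_inv.subs({th: 1, q0: 3, q2: 5}) == 0

# On the locus q0*q2 = 1 the numerator zero cancels against a pole:
# the intertwiner for s_2 is nonscalar and I_t has two constituents.
assert sp.limit(c2_inv.subs({q0: sp.Rational(1, 5), q2: 5}), th, 1) != 0

# Stratum t = (e^{i phi}, e^{i phi}), stabilized by s_1:
# theta_{alpha_1}(t) = 1, and I_t is reducible exactly when q1 = 1.
assert c1_inv.subs({th: 1, q1: 3}) == 0
assert sp.limit(c1_inv.subs(q1, 1), th, 1) != 0
```

The same mechanism, applied to the remaining boundary strata and corners, reproduces the conditions listed in the table.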
Below we list the necessary data for all boundary points: \[ \begin{array}{lllll} t & W_{0,t} & \text{conditions} & \mr{End}_{\mc H}(I_t) & \# \text{ irreducibles} \\ \hline (e^{i \phi},1), \phi \in (0,\pi) & \langle s_2 \rangle & q_0 q_2 \neq 1 & \C & 1 \\ & & q_0 q_2 = 1 & \C [\langle s_2 \rangle] & 2 \\ (-1, e^{i \psi}), \psi \in (0,\pi) & \langle s_3 \rangle & q_2 \neq q_0 & \C & 1 \\ & & q_2 = q_0 & \C [\langle s_3 \rangle] & 2 \\ (e^{i \phi},e^{i \phi}), \phi \in (0,\pi) & \langle s_1 \rangle & q_1 \neq 1 & \C & 1 \\ & & q_1 = 1 & \C [\langle s_1 \rangle] & 2 \\ (-1,1) & \langle s_2,s_3 \rangle & q_0 \neq q_2 , q_0 q_2 \neq 1 & \C & 1 \\ & & q_0 = q_2 \neq 1 & \C [\langle s_3 \rangle] & 2 \\ & & q_0 = q_2^{-1} \neq 1 & \C [\langle s_2 \rangle] & 2 \\ & & q_0 = q_2 = 1 & \C [\langle s_2,s_3 \rangle] & 4 \\ (-1,-1) & W_0 & q_0 \neq q_2, q_1 \neq 1 & \C & 1 \\ & & q_0 = q_2, q_1 \neq 1 & \C [\langle s_2 \rangle] & 2 \\ & & q_1 = 1, q_0 \neq q_2 & \C [\langle s_1 \rangle] & 2 \\ & & q_0 = q_2, q_1 = 1 & \C [W_0] & 5 \\ (1,1) & W_0 & q_0 q_2 \neq 1, q_1 \neq 1 & \C & 1 \\ & & q_0 = q_2^{-1}, q_1 \neq 1 & \C [\langle s_2 \rangle] & 2 \\ & & q_1 = 1, q_0 \neq q_2^{-1} & \C [\langle s_3 \rangle] & 2 \\ & & q_0 = q_2^{-1}, q_1 = 1 & \C [W_0] & 5 \\ \end{array} \] {\Large $\bullet \qquad P = \{\alpha_1\}$ } \begin{align*} & X_P = X / \Z \alpha_4 \cong \Z \alpha_1 / 2, X^P = X / \Z \alpha_1, Y_P = \Z \alpha_1^\vee, Y^P = \Z \alpha_4^\vee, R_P = \{\pm \alpha_1\}, \\ & R_P^\vee = \{\pm \alpha_1^\vee \}, W(R_P) = \{e, s_1 \}, T^P =\{ t \in T : t_3 = t_2 \}, T_P = \{ t \in T : t_2 t_3 = 1 \}, \\ & T^P \cap T_P = \{ (1,1), (-1,-1) \}, \mc G_P = \{e,s_4\} \times (T^P \cap T_P) , \\ & \mc H_P = \mc H (\mc R_P,q(s_1) = q_1 = q_{\alpha_1^\vee}), \mc H^P = \mc H_P \ltimes \C [X^P] . \end{align*} The root datum $\mc R_P$ is of type $C_1^{(1)}$, which means that $R_P^\vee$ is of type $C_1 = A_1$ and generates the lattice $Y_P$. 
For $q_1 = 1$ there are no residual points; for $q_1 \neq 1$ there are two orbits, namely $W (R_P) (q_1^{1/2},q_1^{-1/2})$ and $W (R_P) (-q_1^{1/2}, -q_1^{-1/2})$. Both orbits carry a unique discrete series representation, which has dimension one. The formulas for these representations are not difficult, but they depend on whether $q_1 > 1$ or $q_1 < 1$. So we obtain two families of $\mc H (\mc R,q)$-representations: \begin{align*} & \pi \big( \{\alpha_1\}, \delta_1, (t_2,t_2) \big) = \mr{Ind}_{\mc H^P}^{\mc H} (\delta_1 \circ \phi_{(t_2,t_2)}) \qquad q_1 \neq 1, t_2 \in S^1 , \\ & \pi \big( \{\alpha_1\}, \delta'_1, (t_2,t_2) \big) = \mr{Ind}_{\mc H^P}^{\mc H} (\delta'_1 \circ \phi_{(t_2,t_2)}) \qquad q_1 \neq 1, t_2 \in S^1 . \end{align*} The action of $\mc G_P$ on these families is such that $s_4 (t_2,t_2) = (t_2^{-1},t_2^{-1})$, while $(-1,-1) \in \mc G_P$ simultaneously exchanges $\delta_1$ with $\delta'_1$ and $(t_2,t_2)$ with $(-t_2,-t_2)$. A fundamental domain for this action is $\{ (\{\alpha_1\},\delta_1,(e^{i \phi},e^{i \phi})) : \phi \in [0,\pi] \}$. For $\phi \in (0,\pi)$ these points have a trivial stabilizer in $\mc G_P$, so the corresponding $\mc H$-representations are irreducible. On the other hand, the element $s_4 \in \mc G_P$ fixes the points with $\phi = 0$ or $\phi = \pi$, so the representations $\pi(\{\alpha_1\},\delta_1,(1,1))$ and $\pi (\{\alpha_1\},\delta_1,(-1,-1))$ can be reducible. Whether or not this happens depends on more subtle relations between $q_0,q_1$ and $q_2$. 
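These statements admit a quick elementary check. The sketch below (plain Python, purely illustrative, with an arbitrary sample value $q_1 = 3$) verifies that the residual points lie on $T_P = \{ t : t_2 t_3 = 1 \}$, and that $s_4$, acting as the reflection in $\alpha_4$ by $(t_3,t_2) \mapsto (t_2^{-1}, t_3^{-1})$, fixes a parameter $(t_2,t_2)$ of the family exactly when $\phi \in \{0,\pi\}$.

```python
# Sketch: the residual points of this C_1^(1) subalgebra lie on
# T_P = {t : t_2 t_3 = 1}, and s_4 (reflection in alpha_4, acting by
# (t_3, t_2) -> (t_2^{-1}, t_3^{-1})) fixes a parameter (t_2, t_2)
# exactly when t_2 = +1 or -1, i.e. phi = 0 or pi.
import cmath

q1 = 3.0                                        # sample parameter value
residual = [(q1**0.5, q1**-0.5), (-(q1**0.5), -(q1**-0.5))]
for t3, t2 in residual:
    assert abs(t3 * t2 - 1.0) < 1e-9            # the point lies on T_P

def s4(t):
    t3, t2 = t
    return (1.0 / t2, 1.0 / t3)

for phi in (0.0, cmath.pi):                     # fixed points of s_4
    t = (cmath.exp(1j * phi), cmath.exp(1j * phi))
    assert abs(s4(t)[0] - t[0]) < 1e-9

t = (cmath.exp(1j), cmath.exp(1j))              # a generic phi is moved
assert abs(s4(t)[0] - t[0]) > 1e-6
```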
\vspace{3mm} {\Large $\bullet \qquad P = \{\alpha_2\}$} \begin{align*} & X_P = X / \Z \alpha_3 \cong \Z \alpha_2, X^P = X / \Z \alpha_2 \cong \Z \alpha_3, Y_P = \Z \alpha^\vee_1, Y^P = \Z \alpha_3^\vee / 2, \\ & R_P = \{ \pm \alpha_2 \}, R_P^\vee = \{\pm \alpha_2^\vee \}, W (R_P) = \{e,s_2\}, \\ & T^P = \{ t \in T : t_2 = 1 \}, T_P = \{ t \in T : t_3 = 1 \}, \mc G_P = \{e,s_3\}, \\ & \mc H_P \cong \mc H ( \mc R_P , q(s_2) = q_2, q_{\alpha_2^\vee / 2} = q_0 ), \mc H^P \cong \mc H_P \otimes \C [X^P] . \end{align*} The root datum $\mc R_P$ is of type $A_1^{(1)}$, which differs from $C_1^{(1)}$ in the sense that $X_P$ is the root lattice. There are two orbits of residual points: $W (R_P) (1, q_0^{1/2} q_2^{1/2})$ and $W (R_P) (1, -q_0^{1/2} q_2^{-1/2})$. These points are residual unless they equal $(1,1)$ or $(1,-1)$. Both orbits admit a unique discrete series representation, of dimension one, which we denote by $\delta_+$ or $\delta_-$. As for $P = \{\alpha_1\}$, the explicit formulas depend on which of the $\mc A_P$-characters are in $T_P^{--}$. Again we find two families of $\mc H$-representations: \begin{align*} & \pi \big( \{\alpha_2\}, \delta_+, (t_3,1) \big) = \mr{Ind}_{\mc H^P}^{\mc H} (\delta_+ \circ \phi_{(t_3,1)}) \qquad q_0 q_2 \neq 1, t_3 \in S^1 , \\ & \pi \big( \{\alpha_2\}, \delta_-, (t_3,1) \big) = \mr{Ind}_{\mc H^P}^{\mc H} (\delta_- \circ \phi_{(t_3,1)}) \qquad q_0 / q_2 \neq 1, t_3 \in S^1 . \end{align*} The group $\mc G_P$ acts on these families by $s_3 (t_3,1) = (t_3^{-1},1)$. A fundamental domain is, for both families, given by $t_3 \in \{ e^{i\phi} : \phi \in [0,\pi] \}$. For $\phi \in (0,\pi)$ the representations $\pi \big( \{\alpha_2\}, \delta_\pm, (e^{i \phi},1) \big)$ are irreducible, because the isotropy group in $\mc G_P$ is trivial. For $\phi \in \{0,\pi \}$ the intertwining operator associated to $s_3 \in \mc G_P$ is not necessarily scalar, so we find either one or two irreducible constituents. 
Remarkably enough, this depends not only on the parameters $q_0$ and $q_2$ of $\mc H_P$, but also on $q_1$, as we will see later. \vspace{3mm} {\Large $\bullet \qquad P = \{\alpha_1, \alpha_2\} = F_0$} \\ Here simply $\mc H^P = \mc H_P = \mc H (\mc R,q)$. We have to determine all residual points, and how many inequivalent discrete series they carry. The former can be done by hand, but that is quite elaborate. It is more convenient to use \cite[Theorem 7.7]{Opd-Sp} and \cite[Proposition 4.2]{HeOp}, which say that for generic $q$ there are 40 residual points. Representatives for the 5 $W_0$-orbits are \begin{align*} & r_1 (q) = (q_0^{1/2} q_2^{1/2} q_1^{-1}, q_0^{1/2} q_2^{1/2}) ,\\ & r_2 (q) = (q_0^{-1/2} q_2^{-1/2} q_1^{-1}, q_0^{-1/2} q_2^{-1/2}) ,\\ & r_3 (q) = (-q_0^{1/2} q_2^{-1/2} q_1^{-1}, -q_0^{1/2} q_2^{-1/2}) ,\\ & r_4 (q) = (-q_0^{-1/2} q_2^{1/2} q_1^{-1}, -q_0^{-1/2} q_2^{1/2}) ,\\ & r_5 (q) = (-q_0^{1/2} q_2^{-1/2}, q_0^{-1/2} q_2^{-1/2}) . \end{align*} Since every $W_0$-orbit of residual points carries at least one discrete series representation, \[ \dim_\Q \big( G_\Q^0 (\mc H (\mc R,q)) / G_\Q^1 (\mc H (\mc R,q)) \big) = \dim_\Q \big( G_\Q^0 (\mc S (\mc R,q)) / G_\Q^1 (\mc S (\mc R,q)) \big) \geq 5. \] On the other hand one can easily check, for example with the calculations for $P = \emptyset, q = 1$, that $\dim_\Q \big( G_\Q^0 (W^e) / G_\Q^1 (W^e) \big) = 5$. With \eqref{eq:SprGr} we deduce that every $W_0 r_i (q)$ carries a unique discrete series representation $\delta (r_i)$. So much for generic parameter functions. For nongeneric $q$ the $W_0 r_i (q)$ are still the only possible residual points, but some of them may cease to be residual for certain $q$. In such cases $r_i (q)$ is absorbed by the tempered part $r T^P_{un}$ of some one-dimensional residual coset $r T^P$. (If $r_i (q)$ is absorbed by $T_{un}$, which is the tempered part of the two-dimensional residual coset $T$, then $r_i (q)$ is also absorbed by a one-dimensional residual coset.) 
This happens in the following cases: \[ \begin{array}{lll} \text{residual point} & q & \text{absorbed by} \\ \hline r_1 (q) & q_0 q_2 = q_1 & W_0 (q_1^{1/2},q_1^{-1/2}) T^{\alpha_1}_{un} \\ & q_0 q_2 = q_1^2 & W_0 (1,q_0^{1/2} q_2^{1/2}) T^{\alpha_2}_{un} \\ r_2 (q) & q_0 q_2 = q_1^{-1} & W_0 (q_1^{1/2},q_1^{-1/2}) T^{\alpha_1}_{un} \\ & q_0 q_2 = q_1^{-2} & W_0 (1,q_0^{1/2} q_2^{1/2}) T^{\alpha_2}_{un} \\ r_3 (q) & q_0 / q_2 = q_1 & W_0 (q_1^{1/2},q_1^{-1/2}) T^{\alpha_1}_{un} \\ & q_0 / q_2 = q_1^2 & W_0 (1,-q_0^{1/2} q_2^{-1/2}) T^{\alpha_2}_{un} \\ r_4 (q) & q_0 / q_2 = q_1^{-1} & W_0 (q_1^{1/2},q_1^{-1/2}) T^{\alpha_1}_{un} \\ & q_0 / q_2 = q_1^{-2} & W_0 (1,-q_0^{1/2} q_2^{-1/2}) T^{\alpha_2}_{un} \\ r_5 (q) & q_0 = q_2 & W_0 (1,q_0^{1/2} q_2^{1/2}) T^{\alpha_2}_{un} \\ & q_0 q_2 = 1 & W_0 (1,-q_0^{1/2} q_2^{-1/2}) T^{\alpha_2}_{un} \end{array} \] It is also possible that two orbits of residual points confluence, but stay residual. The deep result \cite[Theorem 3.4]{OpSo2} says that in general situations of this type the discrete series representations with confluencing central character do not merge and remain irreducible. The geometric content of the Aubert--Baum--Plymen conjecture is best illustrated with some pictures of the tempered dual of $\mc H (\mc R,q)$, for various $q$. Of course $T$ has real dimension four, so we cannot draw it. But the unitary principal series can be parametrized by $T_{un} / W_0$, which is simply a 45-45-90 triangle. The other components of Irr$(\mc S (\mc R,q))$ will lie close to $T_{un} / W_0$ if $q$ is close to 1, which we will assume in our pictures. We indicate what confluence occurs when $q$ is scaled to $1$ by drawing any $\pi \in \text{Irr}(\mc S (\mc R,q))$ close to the unitary part of its central character. To distinguish the three one-dimensional components, we denote the series obtained from inducing $\delta_1 / \delta_+ / \delta_-$ by $L_1 / L_+ / L_-$. 
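The absorption conditions in this table lend themselves to a numerical spot-check. In the sketch below (plain Python, illustrative only, with arbitrary sample parameter values) $W_0$ acts on $(t_3,t_2)$ by permuting and inverting the coordinates, and membership of a positive-real point in the tempered part of a residual coset is tested through the absolute values of its coordinates, since the unitary directions of $T^P_{un}$ do not change these moduli.

```python
# Sketch: spot-check the two absorption conditions for r1(q) from the
# table.  A point lies on W_0 r T^P_un iff some W_0-image matches |r|
# coordinatewise in absolute value.
from itertools import permutations

def w0_orbit(t):
    # W_0 of type B_2 permutes and inverts the coordinates (t3, t2).
    return {(a**ea, b**eb)
            for a, b in permutations(t)
            for ea in (1, -1) for eb in (1, -1)}

def on_coset(point, base):
    return any(all(abs(abs(x) - abs(y)) < 1e-9 for x, y in zip(w, base))
               for w in w0_orbit(point))

# Row "r1, q0*q2 = q1": absorbed by W_0 (q1^{1/2}, q1^{-1/2}) T^{alpha_1}_un.
q0, q2, q1 = 2.0, 3.0, 6.0                      # sample point, q0*q2 = q1
r1 = ((q0*q2)**0.5 / q1, (q0*q2)**0.5)
assert on_coset(r1, (q1**0.5, q1**-0.5))

# Row "r1, q0*q2 = q1^2": absorbed by W_0 (1, (q0*q2)^{1/2}) T^{alpha_2}_un.
q0, q2, q1 = 2.0, 8.0, 4.0                      # sample point, q0*q2 = q1**2
r1 = ((q0*q2)**0.5 / q1, (q0*q2)**0.5)
assert on_coset(r1, (1.0, (q0*q2)**0.5))

# Away from these loci r1 stays residual and lies on neither coset.
q0, q2, q1 = 2.0, 3.0, 2.0
r1 = ((q0*q2)**0.5 / q1, (q0*q2)**0.5)
assert not on_coset(r1, (q1**0.5, q1**-0.5))
assert not on_coset(r1, (1.0, (q0*q2)**0.5))
```

The remaining rows of the table can be checked in the same way.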
Finally, we have to represent graphically how many inequivalent irreducible representations a given parabolically induced representation $\pi (\xi)$ contains. By default, $\pi (\xi)$ is itself irreducible. When $\pi (\xi)$ contains two different irreducibles, we draw the corresponding point fatter. When there are more than two, we write the number of irreducibles next to it. \noindent \includegraphics[width = 14cm]{C21} \vspace{2mm} Almost everything in this picture can be deduced from the above calculations, the ABP-conjecture (or rather Theorem \ref{thm:5.9}) and \cite[Theorem 3.4]{OpSo2}. The only thing that cannot be detected with these methods is what happens at the confluences $r_1 (q) \to (q_1^{1/2},q_1^{-1/2}) \in L_1$ for $q_0 = q_2 = q_1^{1/2}$ and $r_1 (q) \to (1,q_2) \in L_+$ for $q_0 = q_1 = q_2$. For these $q$ there are four unitary induction data that give rise to representations with central character in $T_{rs} / W_0$. So three of them are irreducible, and one contains two different irreducibles. We can see that the reducibility does not occur in the unitary principal series or in the discrete series, which leaves the two intermediate series. Fortunately one can explicitly determine all subrepresentations of $\pi \big( \{\alpha_1\},\delta_1,(1,1) \big)$ (for $q_0 = q_2 = q_1^{1/2}$) and of $\pi \big( \{\alpha_2\},\delta_+,(1,1) \big)$ (for $q_0 = q_1 = q_2$), see \cite[Section 8.1.2]{Slo1}. In fact the graded Hecke algebras corresponding to these parameter functions are precisely the ones associated to $Sp_4$ and to $SO_5$. Hence one may also determine the reducibility of the aforementioned representations via the Deligne--Langlands--Kazhdan--Lusztig parametrization. Thirdly, it is possible to analyse these parabolically induced representations with R-groups, as in \cite{DeOp2}. 
The comparison of the tempered duals for different parameter functions clearly shows that this affine Hecke algebra behaves well with respect to general parameter deformations, not necessarily of the form $q \mapsto q^\ep$. We see that for small perturbations of $q$ it is always possible to find regions $U / W_0 \subset T / W_0$ such that the number of tempered irreducibles with central character in $U / W_0$ remains stable. (The $W_0$-types of these representations can change, however.) It is reasonable to expect that something similar holds for general affine Hecke algebras; this would probably follow from the existence of an appropriate asymptotic Hecke algebra \cite{Lus-Une}.
\section{Introduction} Phase-field models have recently enjoyed a rapidly growing popularity as a compact and elegant simulation tool for moving boundary problems in such diverse fields as solidification \cite{Boettinger02,Steinbach09}, fluid dynamics \cite{Anderson98} or solid-state transformations \cite{Chen02,Wang10}. Their technical advantage resides in the implicit representation of interfaces by one or several phase fields, i.e. fields that are defined in the entire space, take constant values within the bulk of each domain, and exhibit smooth but steep variations in well-localized interfacial regions. The tedious procedure of front tracking is avoided by introducing equations of motion for the phase fields that are coupled to the relevant transport variables. The price to pay for this advantage is the introduction of a new length scale into the model: the thickness $W$ of the phase-field front. For a given macroscopic problem, simulations with a phase-field model yield in general results that depend on the value of $W$. In the field of crystal growth, great progress towards efficient and precise simulations has been made by reducing this dependence on the interface thickness \cite{Karma98,Almgren99,Karma01,Echebarria04,Folch05}. This was made possible by a detailed analysis of the model equations using the method of matched asymptotic expansions, which is a systematic procedure to calculate the effective boundary conditions ``seen'' by the macroscopic transport field. Since this analysis is carried out within a perturbation approach, these boundary conditions are naturally expressed as a power series in $W$. Within the phase-field community the limit $W\to 0$ is referred to as {\sl the sharp-interface limit}. When the corrections due to the finite interface thickness are taken into account for choosing the model parameters, the accuracy of the phase-field method can be drastically improved. 
This procedure, which has been called {\sl thin-interface limit} \cite{Karma98}, has so far been worked out only for a few specific physical systems. To be more precise, let us consider the problem of solidification, in which the relevant transport process that limits the growth of the crystal is the diffusion of heat and/or solute. Two cases are completely solved: the symmetric model, where the diffusion coefficients in the two phases are identical \cite{Karma98}, and the one-sided model, in which no diffusion takes place within the solid phase \cite{Karma01,Echebarria04}. However, so far no method has been found to eliminate all thin-interface effects in the case of arbitrary diffusion coefficients in the two phases, despite some recent progress \cite{Almgren99,Ohno09,Steinbach09} (for a more detailed discussion, see \cite{Plapp11}). As will be pointed out here, part of this problem arises from the fact that for a truly two-sided model (with different diffusivity in each phase) even the stationary transport problem {\em without} interface motion exhibits thin-interface effects. This prevents a solution of the problem by the antitrapping approach, which has been successful for the one-sided model \cite{Karma01,Echebarria04} and for the two-sided model with vanishing diffusion current in one phase \cite{Ohno09}. In fact, such thin-interface effects are fairly general and arise in a whole class of problems, namely, two-phase transport through a complex structure. Examples of relevant physical situations are the conduction of electrical current or heat through a two-phase material with different conductivities, the magnetic flux through a two-phase material with different susceptibilities, or fluid flow through a porous medium with variations in permeability. At first sight, the advantages of using the phase-field method in these cases are less obvious since the interfaces do not move and therefore the problem of front tracking does not arise. 
However, the geometrical representation of complex-shaped surfaces can be difficult even without interfacial motion, especially in three dimensions. Additionally, the use of a stationary phase-field function makes it possible to prescribe a given boundary condition at the interface in a straightforward manner \cite{Kockelkoren03,Fenton05}, and to impose arbitrary boundary conditions at the border of a physical domain of complex shape, see \cite{Bueno-Orovio06,Li09} and references therein. The problem of representing a complex interface through a diffuse boundary gains additional relevance because the use of tomographic methods for structure determination becomes more and more widespread. In such methods, the structure representation takes the form of a matrix consisting of discrete pixels (or voxels in three dimensions) that contain binary or intensity data indicating whether a point in space is ``filled'' or ``empty''. From these data, the ``true'' structure (represented for example by discrete sharp surface elements) has to be reconstructed by image analysis techniques. The phase-field method is an easy and robust method to obtain a smoothened representation of such data \cite{Benes04,Kay09}. It could be interesting to use this smoothened structure directly for an accurate calculation of transport processes instead of going through the additional steps of determining the ``sharp'' surface geometry. However, it will be shown below that thin-interface effects are also present in the ``simple'' problem of two-phase transport, even without interface motion. Therefore, these effects must be quantified and if possible eliminated. As we will demonstrate below, two effects that depend on the interface thickness are present: transport along the surface, and an interface resistance. In the standard phase-field formulation, where the transport is described by a scalar coefficient whose value depends on the phase field, these two effects cannot be eliminated simultaneously. 
In contrast, if the transport coefficient is allowed to become a {\em tensor} inside the diffuse interfaces, there are enough degrees of freedom in the model to eliminate both effects. \begin{figure}[t!] \centerline{ \epsfig{file=figure-1.eps,width=0.7\textwidth,clip=}} \caption{ \label{fig_1} (Color online) Total current as a function of the ratio between the interface thickness and the radius of the circle in the cases of direct (black circles), inverse (red diamonds), and tensorial (blue squares) interpolation. We solve the spherical inclusion problem with parameters: $M_1=1$, $M_2=1/2$, $R=0.25$, $h = 10^{-3}$, and $\rho = 10^{-6}$. Lines are obtained by quadratic regression of the simulation data (see the main text for further details). The coefficients of these regressions are contained in Tab.\ \ref{tab_1}. All units are arbitrary.} \end{figure} \section{Analysis} \subsection{Problem formulation} All the problems listed above have a common structure, namely, the flux of a conserved quantity is driven by a potential gradient, \begin{equation} \label{trans_1} \bi{j} = - M(\phi) \nabla V, \end{equation} where $V$ is a potential. The structure of Eq.\ \eqref{trans_1} is standard in out-of-equilibrium thermodynamics: a linear relationship between a flux and a thermodynamic driving force (the potential gradient). The mobility coefficient $M(\phi)$ depends on the phase field $\phi$. If the transport process is electric conduction, $V$ is the electrostatic potential and $M$ the conductivity; for diffusive mass transport $V$ is the chemical potential and $M$ the atomic mobility. The transported quantity satisfies a conservation law, which is valid both in the bulk and at the surfaces, and for a time-independent solution (steady-state flow) reads \begin{equation} \nabla \cdot \bi{j} = 0. \label{eq_Poisson} \end{equation} The problem specification is completed by a boundary condition for the potential at the interfaces. 
We assume continuity of the potential, \begin{equation} V_+ = V_-, \label{continuity} \end{equation} where $V_+$ and $V_-$ are the values of the potential when the interface is approached from the two sides. This corresponds to a rapid exchange of the transported quantity between the two sides of the interface. Since we consider a fixed and immobile two-phase structure, the phase field is independent of time. We will assume that the two constant values that designate the two phases are $\phi=0$ and $\phi=1$, and that the field $\phi$ varies between these two limits continuously through a front region of width $W$. For the sake of concreteness, the reader may have in mind a sigmoid function such as $\left[1+\tanh(x/W)\right]/2$, but the explicit form of this function is not important. The only hypothesis we make is that for a straight interface, the profile of $\phi$ is odd with respect to the point $\phi=1/2$, that is, $\phi(x)=1-\phi(-x)$ for an interface centered at $x=0$. This is the case in all standard phase-field models. \begin{table} \begin{center} \begin{tabular}{|c|c|rr|c|} \hline interpolation type & $J_0$ & $c_1$ && $c_2$ \\[3pt] \hline direct & 1.752751 & $0.73889 \times$& $10^{-1}$ & -0.277364 \\[2pt] inverse & 1.752751 & $-1.51884 \times$&$ 10^{-1}$& -0.289096 \\[2pt] tensorial & 1.752749 & $-1.352 \times$&$ 10^{-3}$ & -0.358637 \\[2pt] \hline \end{tabular} \caption{\label{tab_1} Coefficients of the second order regression \mbox{$J(\epsilon) = J_0 + c_1 \epsilon + c_2 \epsilon^2$} for the three different interpolation methods. These coefficients are obtained from a quadratic regression to the data indicated by the symbols of Fig.\ \ref{fig_1}.} \end{center} \end{table} \subsection{Surface current} For simplicity of exposition, it is useful to focus on a concrete example. Consider the conduction of electric current through a two-phase material. Then, $V$ is the electrostatic potential, and $M(\phi)$ is the phase-dependent electric conductivity. 
Furthermore, consider a straight interface normal to the $x$ direction, centered at $x=0$. A potential gradient along the $y$ direction (along the interface) is imposed by sandwiching the material between two parallel plates located at $\pm L/2$ that are held at constant potentials $\pm U$. Since the phase field $\phi$ and hence the conductivity $M$ are constant along any line of constant $x$ (although their values differ for different values of $x$), Eq. (\ref{eq_Poisson}) yields a constant potential gradient $U/L$ directed along the $y$ direction. Therefore, the total current $J$ that flows between the two plates is given by \begin{equation} J=\int_{-\infty}^{\infty} M(\phi) \dfrac UL \; dx. \end{equation} Since we have considered a sample that extends to infinity, this current is clearly infinite. However, we will be concerned only with the excess of this current with respect to the sharp-interface value. The latter is obtained as the current that would flow if $\phi(x)$ were a step function, that is, if the space between the two plates were filled with material 1 of conductivity $M_1$ for $x<0$, and with material 2 of conductivity $M_2$ for $x>0$. This yields \begin{equation} \bar{J}=\int_{-\infty}^0 M_1 \dfrac UL \; dx + \int_0^\infty M_2 \dfrac UL \; dx. \end{equation} The difference between these two expressions is the excess of current $\delta J$ due to the variation of conductivity over a zone of finite thickness. This excess \begin{equation} \delta J =\int_{-\infty}^0 \hskip -7pt \left[ M(\phi)-M_1\right] \dfrac UL \; dx + \int_0^\infty \hskip -7pt \left[M(\phi)-M_2\right] \dfrac UL \; dx, \end{equation} is localized in the interface, and can therefore be interpreted as an additional current along the surface. It can be written as the product of the potential gradient and a {\em surface conductivity} \begin{equation} M_s= \int_{-\infty}^0\hskip -2pt \left[M(\phi)-M_1\right] \; dx + \int_0^\infty \hskip -2pt \left[ M(\phi)-M_2\right] \; dx.
\label{ms} \end{equation} This surface transport coefficient has two obvious properties: (i) for an interface profile of fixed functional form, $\phi(x)=f(x/W)$, $M_s$ is proportional to the interface thickness (as can be shown by a simple change of variables), and (ii) it is strictly zero for any value of $W$ if \begin{equation} \label{surf_cond} \int_0^{-\infty} \left[M(\phi)-M_1\right] \; dx = \int_0^\infty \left[ M(\phi)-M_2\right] \; dx. \end{equation} In this case, the excesses of current on the two sides of the interface exactly compensate. For a phase-field profile that satisfies $\phi(-x)=1-\phi(x)$, this can be simply achieved by choosing \begin{equation} M(\phi)=M_1\phi + M_2(1-\phi). \label{eq_parint} \end{equation} This will be called {\sl direct interpolation} in the following. \subsection{Surface resistance} Let us now analyze again a planar interface normal to the $x$ direction, but this time crossed by a steady current $J_\perp$ along $x$. In this case, the continuity equation immediately yields that the current is constant (independent of $x$). Then, the potential $V$ satisfies the simple equation \begin{equation} -M(\phi)\partial_x V = J_\perp. \end{equation} Integration along $x$ yields \begin{equation} V(x)-\overline V=-\int_0^x \frac {J_\perp}{M(\phi)} \; dx, \label{eq_Vdiff} \end{equation} where $\overline V$ is the potential at $x=0$ (an integration constant). In contrast, if the interface was sharp, the potential would simply be given by \begin{equation} V_0(x)-\overline V= - \frac{xJ_\perp}{M_{1,2}} \label{eq_Vsharp} \end{equation} for $x>0$ and $x<0$, respectively. Of course, outside the diffuse interfaces, the slopes of $V(x)$ are identical in Eqs. (\ref{eq_Vdiff}) and (\ref{eq_Vsharp}). Therefore, the asymptote of the diffuse-interface expression is of the form \begin{equation} V(x)-\overline V\approx - \frac{xJ_\perp}{M_{1,2}}+V_{+,-} \end{equation} for $x\to\pm\infty$. 
The constants $V_+$ and $V_-$ (the interface potentials ``seen'' from the region outside the diffuse interface) are readily obtained from the matching of this expression to Eq. (\ref{eq_Vdiff}), \begin{equation} V_{+,-}=\overline V + J_\perp\int_0^{\infty,-\infty} \left[\frac {1}{M(\phi)}-\frac{1}{M_{1,2}}\right]\; dx. \end{equation} Of particular interest is the fact that these surface potentials can be different, in contradiction to the assumption of Eq. (\ref{continuity}). The difference $\delta V=V_+-V_-$ can be written as the product of the current $J_\perp$ and an interface resistance \begin{equation} R_s=\int_{-\infty}^0 \left[\frac{1}{M(\phi)}-\frac{1}{M_1}\right] \; dx + \int_0^\infty \left[\frac{1}{M(\phi)}-\frac{1}{M_2}\right] \; dx. \end{equation} This resistance is often called Kapitza resistance and has been frequently observed in experiments and simulations \cite{Kapitza41,Wolf83,Swartz89,Barrat03,Xue03}. Again, it is obvious that it is proportional to the interface thickness, and that it vanishes if the integral is exactly zero. This can be achieved for any value of $W$ by the interpolation \begin{equation} \frac {1}{M(\phi)} = \frac{1}{M_1}\phi + \frac{1}{M_2}(1-\phi). \label{eq_perpint} \end{equation} This will be called {\sl inverse interpolation} in the following. \begin{figure}[t!] \centerline{ \epsfig{file=figure-2.eps,width=0.7\textwidth,clip=}} \caption{ \label{fig_2} (Color online) Rate of convergence of the three interpolations as function of the ratio between the interface thickness and the radius of the disk. Black circles, red diamonds, and blue squares stand for direct [Eq.\ \eqref{eq_parint}], inverse [Eq.\ \eqref{eq_perpint}], and tensorial interpolation [Eq.\ \eqref{eq_tensint}], respectively. The physical and numerical parameters are the same as used in Fig.\ \ref{fig_1} and $J_0 = 1.75275$. Red and black solid lines are guides to the eye with slope equal to $1.0$, while the blue dashed line has slope equal to $2.0$.
All units are arbitrary.} \end{figure} \subsection{Tensorial mobility} In summary, the two interface effects (surface current and surface resistance) can each be eliminated by a specific choice for the interpolation function of the mobility. Since these interpolations are mutually exclusive, it seems that one of the two effects must necessarily remain nonzero. However, the current is a vector quantity, and the two effects are linked to distinct components of the current vector: the excess surface conductivity is relevant only for the components parallel to an interface, whereas the surface resistance modifies the boundary conditions for the normal component. Inside the two phases, where each medium is isotropic, the Curie principle requires choosing a scalar mobility to relate the current and the potential gradient. However, in the presence of an interface, isotropy of space is broken and a tensorial transport coefficient is permitted. Indeed, the gradient of the phase field can be readily used to define the interface normal $\bi{n} = \nabla\phi / |\nabla\phi|$, which provides a second direction that is independent of the potential gradient. Then, we can define the transport coefficient by \begin{equation} \label{eq_tensint} \bi{M}(\phi)=M_\perp \bi{n} \otimes \bi{n} + M_{||} (\mathbf {1}-\bi{n} \otimes \bi{n}), \end{equation} where $\mathbf{1}$ is the unit tensor, with two independent interpolation functions $M_\perp(\phi)$ and $M_{||}(\phi)$. If we interpolate $M_\perp$ according to Eq. (\ref{eq_perpint}) and $M_{||}$ according to Eq. (\ref{eq_parint}), both thin-interface effects are eliminated. Hence, for this interpolation the transport problem defined by Eq.\ \eqref{trans_1} becomes \begin{equation} \label{trans_2} \bi{j} = -\bi{M}(\phi)\cdot \nabla V, \end{equation} with components (in two dimensions) \begin{eqnarray} j_x &=& -M_{xx} \partial_x V - M_{xy} \partial_y V, \\[5pt] j_y &=& -M_{xy} \partial_x V - M_{yy} \partial_y V.
\end{eqnarray} Here, we have designated by $M_{ij}$ the elements of the symmetric tensor $\bi{M}(\phi)$. The simple calculations developed above are valid only for planar interfaces. However, for a sufficiently smooth interface (that is, with a local radius of curvature $R$ satisfying $R\gg W$), a local curvilinear coordinate system can be defined in which the above relations remain valid at least up to second order in $\epsilon= W/R$. It should be mentioned that a similar strategy has been used recently to develop efficient phase-field models for surface diffusion \cite{Gugenberger08}. \begin{figure}[t!] \centerline{ \epsfig{file=figure-3.eps,width=0.7\textwidth,clip=}} \caption{ \label{fig_4} (Color online) Streamlines for the second geometrical configuration studied. The black dash-dotted, red dashed, and blue solid lines correspond to direct, inverse, and tensorial interpolation, respectively. The simulation parameters are $M_1=1$, $M_2=0.1$, $R=0.25$, $h = 10^{-3}$, $\rho = 10^{-6}$, and $W=4h$. For a larger version of this picture see \cite{epaps}. All units are arbitrary.} \end{figure} \section{Numerical validation} We quantify the thin-interface effects in the three different interpolations of the transport coefficient by solving the problem defined by Eq.\ \eqref{trans_1} [or \eqref{trans_2}] and Eq.\ \eqref{eq_Poisson} in a simple geometrical setup. We consider a square domain ${\cal D}\equiv\{(x,y)\in [0,1]\times[0,1]\}$ with Dirichlet boundary conditions on the lateral edges and zero flux at the top and bottom edges. More precisely, we impose \mbox{$V(0,y) = 1$}, \mbox{$V(1,y) = -1$}, and \mbox{$\partial_y V(x,0) = \partial_y V(x,1) = 0$}. In the center of the domain we place a disk of radius $R$. The mobility coefficient takes the value $M_1$ ($M_2$) outside (inside) the disk, respectively. We refer to this geometrical setup as the spherical inclusion problem.
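The cancellation properties derived above are easy to check numerically. The following sketch (all parameter values arbitrary) evaluates the excess integrals $M_s$ and $R_s$ for a tanh profile: the direct interpolation cancels the surface conductivity but leaves a finite Kapitza resistance, the inverse interpolation does the opposite, and the tensorial mobility, which combines the two, removes both excesses at once.

```python
import numpy as np

# Excess surface conductivity M_s (Eq. ms) and Kapitza resistance R_s
# for a tanh phase-field profile; all parameter values are arbitrary.
M1, M2, W = 1.0, 0.5, 1.0
x = np.linspace(-15.0, 15.0, 60000)   # even count: no point exactly at 0
dx = x[1] - x[0]
phi = 0.5 * (1.0 - np.tanh(x / W))    # phi -> 1 (phase 1) for x -> -inf
M_sharp = np.where(x < 0.0, M1, M2)   # sharp-interface reference

def excess(M):
    """Surface excesses (M_s, R_s) of conductivity and resistivity."""
    Ms = np.sum(M - M_sharp) * dx
    Rs = np.sum(1.0 / M - 1.0 / M_sharp) * dx
    return Ms, Rs

M_direct = M1 * phi + M2 * (1.0 - phi)            # parallel component
M_inverse = 1.0 / (phi / M1 + (1.0 - phi) / M2)   # normal component

Ms_dir, Rs_dir = excess(M_direct)    # Ms_dir ~ 0, Rs_dir finite
Ms_inv, Rs_inv = excess(M_inverse)   # Rs_inv ~ 0, Ms_inv finite
```

The cancellations rely only on the antisymmetry $\phi(-x)=1-\phi(x)$, so they hold for any interface width $W$.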
By combining the flux equation Eq.\ \eqref{trans_1} [or its tensorial counterpart \eqref{trans_2}] with the conservation law of our problem, i.e.\ Eq.\ \eqref{eq_Poisson}, we obtain an elliptic equation for the case of direct and inverse interpolation \begin{equation} \nabla\cdot \left[ M(\phi) \, \nabla V \right] = 0, \label{elliptic_1} \end{equation} whereas the tensorial interpolation leads to the following equation \begin{equation} \label{elliptic_2} \nabla\cdot \left[ \bi{M}(\phi) \cdot \nabla V \right] = 0. \end{equation} Details about the discretization and the method used to solve these equations can be found in the appendix. \begin{figure}[t!] \centerline{ \epsfig{file=figure-4.eps,width=0.7\textwidth,clip=}} \caption{ \label{fig_3} (Color online) Total current as function of the ratio between the interface thickness and the radius of the circle for direct (black circles), inverse (red diamonds), and tensorial (blue squares) interpolations. The simulation parameters are the same as in Fig.\ \ref{fig_4}. All units are arbitrary.} \end{figure} The thin-interface effects arising in the spherical inclusion problem are quantified by measuring the total flux at $x = 1$, \begin{equation} J = \int_0^1 j_x(1,y)\, dy \end{equation} and by plotting $J$ as a function of the ratio $\epsilon=W/R$ between the interface width $W$ and the radius of the disk $R$, see Fig.\ \ref{fig_1}. As shown in Fig.\ \ref{fig_1}, the three interpolations converge to the same value of $J_0$ when $\epsilon \to 0$. The estimation of $J_0$ is given by a quadratic regression in $\epsilon$ of the simulation data. The coefficients of the regressions are listed in Table\ \ref{tab_1}. Within the truncation error [that is ${\rm O}\left( h^2\right) \sim 10^{-6}$] the three interpolations give the same value of $J_0 = 1.752750\pm 10^{-6}$, as shown in the second column of Table\ \ref{tab_1}.
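The extrapolation to the sharp-interface limit is a standard least-squares fit. A brief sketch using synthetic data generated from known coefficients (the values are illustrative, not the simulation data of Table \ref{tab_1}):

```python
import numpy as np

# Quadratic regression J(eps) = J0 + c1*eps + c2*eps**2 used to
# extrapolate the total current to the sharp-interface limit eps -> 0.
# The data points are synthetic, generated from known coefficients.
J0_true, c1_true, c2_true = 1.7528, -1.4e-3, -0.3586
eps = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64])
J = J0_true + c1_true * eps + c2_true * eps**2

# np.polyfit returns coefficients ordered from the highest power down.
c2_fit, c1_fit, J0_fit = np.polyfit(eps, J, deg=2)
```

With real simulation data the residuals of such a fit also give a quick check of whether a quadratic truncation in $\epsilon$ is adequate.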
In addition, we have estimated the rate of convergence of $J(\epsilon)$ to the sharp-interface limit $J_0$ for the three interpolations of the mobility. As shown in Fig.\ \ref{fig_2}, the direct and inverse interpolations converge only linearly with $\epsilon$ to this limiting value, whereas the tensorial interpolation suppresses linear thin-interface effects, which leads to a convergence that is almost quadratic in $\epsilon$. For further illustration, we consider a second geometrical configuration formed by two half-disks of radius $R$ placed in a rectangular domain with the same boundary conditions as the spherical inclusion problem, see Fig.\ \ref{fig_4}. We choose a small value of the ratio between $M$ inside and outside the half-disks, e.g.\ $M_2/M_1 = 0.1$, and in Fig.\ \ref{fig_4} we plot the streamlines for this configuration. In this configuration, the flux is constricted in the narrow space between the two half-disks. The difference between the three interpolations is largest for streamlines which are locally almost tangent to the half-disks; note the divergence of the different lines close to the tips of the half-disks. As shown by Eq.\ \eqref{surf_cond}, choosing $M$ according to the direct interpolation cancels exactly the surface conductivity $M_s$ for a flat interface. Therefore, thin-interface errors affecting streamlines which are parallel to the boundary of variation of the transport coefficient are more pronounced in the case of inverse interpolation (red dashed lines) than in the case of direct interpolation (black dash-dotted lines), see Fig.\ \ref{fig_4}. In Fig.\ \ref{fig_3}, we show again the values of the total current as a function of the ratio $\epsilon$, and again the tensorial interpolation performs much better than the other two, as expected. \section{Conclusion} We have investigated a phase-field model with a mobility tensor, in which normal and parallel components of the flux are interpolated with distinct functions of the phase field.
In contrast to phase-field models with scalar mobilities, this method makes it possible to eliminate at the same time the additional surface diffusion and the surface resistance, which are both linked to the finite thickness of the interfaces. This opens the possibility to perform accurate simulations of two-phase transport problems with enlarged interface thickness, which can lead to dramatic savings in computation time. The complete elimination of thin-interface effects to first order in the interface thickness is also a prerequisite for the development of a quantitative crystal growth model for arbitrary ratio of the diffusion coefficients in the two phases. Indeed, a tensorial diffusion coefficient can remove both surface diffusion and the Kapitza resistance \cite{Plapp11} from such models. Although a second-order asymptotic analysis of a time-dependent version of the tensorial problem \eqref{eq_tensint} has recently been performed \cite{Ngoc}, this methodology is still not capable of removing all thin-interface effects from phase-field models with a non-stationary $\phi$. The obstacle that needs to be overcome for the successful development of such a model is to find a coupling of the transport equation to an evolution equation for the phase field that yields the correct boundary conditions for the transport field at {\em moving} interfaces. We hope to be able to report on this problem in the near future. \section*{Acknowledgements} Support provided by the European Commission through the MODIFY (FP7-NMP-2008-SMALL-2, Code 228320) research project is gratefully acknowledged. \mbox{M.\ N.\ }also acknowledges partial support by MICINN (Spain) Grant No.\ FIS2009-12964-C05-01.
\section{Introduction} Active galactic nuclei (AGN) emit over the entire electromagnetic spectrum and are powered by accretion onto a supermassive black hole \citep{1984REES}. The central engine emits UV photons which interact with energetic electrons in the so-called corona \citep{1994Nandra}, producing the X-ray emission (e.g., \citealt{1993haardt}). This emission interacts with the circumnuclear dust and gas, producing the obscuration observed in the spectra. The dust emission can be observed in the infrared energy range, as a result of the thermalization of the UV photons by the dust. In the optical, the nuclear emission will be obscured, removing the continuum and the broad components in the emission lines \citep{1981osterbrock}. On the other hand, the gas will absorb and scatter the X-ray continuum producing the X-ray absorption, most noticeable at energies below 10 keV in the X-ray spectrum \citep{2011brightmanNandra}. Note that in general, the absorption in the optical and in the X-rays occurs together \citep{2007maineri, 2012malizia, 2014merloni,2015davies, 2017mike}. Obscuration gives evidence of material in the line of sight, which could be associated with the torus. Gas that is not in the line of sight of the observer can also imprint some features on the X-ray spectrum. Between 10 keV and up to hundreds of keV there is a reflection hump created by X-rays being reflected off the accretion disk \citep{2006Fabian} or off more distant material, like the torus \citep{2011brightmanNandra}. Furthermore, the most robust emission line seen in the X-rays, Fe K$\alpha$ (e.g., \citealt{2006Fabian}), can be related to circumnuclear material: it can be broad, exhibiting relativistic effects due to its origin close to the supermassive black hole (SMBH), or narrow, presumably originating from more distant material. These reflection features are therefore a useful tool to study the configuration of the accretion disk and the torus.
To better understand the properties of the reflector, several models have been developed. \texttt{BORUS} \citep{2018balokovic} assumes that the reprocessing medium is a sphere with a conical cut-off at both poles, approximating a torus with variable covering factor. \texttt{cTORUS} \citep{2014liu} is similar to \texttt{BORUS}, but clumpy and with the half-opening angle of the torus fixed at 60 degrees. \texttt{MYTORUS} \citep{2009Murphy} proposes a toroidal geometry where the covering fraction is fixed to 0.5. \texttt{RXTorus} \citep{2017Apaltani} assumes absorption and reflection from a torus with a varying ratio of the minor to major axis. Finally, \texttt{XILLVER} \citep{2013garcia} calculates the reflected spectrum from the surface of an X-ray illuminated, ionized accretion disk by solving the equations of radiative transfer, energy balance, and ionization equilibrium in a Compton-thick and plane-parallel medium. It is not clear how the reflecting structure is formed, but clues can be gathered from the relation between reflection strength and the nuclear accretion rate. From the observational point of view, the torus in the infrared (IR) becomes weaker in the low luminosity regime (i.e., for low accretion rates below 10$^{-3}$, \citealt{2017Gonzalez-martin}). Furthermore, in the X-rays, it has been seen that Compton-thin absorption (N$_{\rm H}$<1.5$\times$10$^{24}$ cm$^{-2}$) is less frequent in objects with low accretion rates: the fraction of Compton-thin obscured sources (10$^{22}$<N$_{\rm H}$<10$^{24}$ cm$^{-2}$) decreases in the low luminosity regime \citep{2017Riccii, 2022natalia}, while the fraction of Compton-thick sources apparently remains constant. Both Compton-thick and Compton-thin absorbers can produce reflection features, with different shapes and strengths.
In addition, \cite{2022natalia} found that in a sample of 81 AGN, $\sim$13$\%$ of the objects lack reflection signatures, while the remaining galaxies (with a detected reflection component) should be highly obscured. In this work, we attempt to measure the global distribution of gas around the nucleus, whether in the line of sight or not, through its contribution to the reflection. In particular, we aim to establish whether the gas configuration becomes flatter or overall optically thinner as the accretion rate goes down. Additionally, by modeling the X-ray reflection we are able to study the continuum emission, estimating the coronal parameters: the power-law index ($\Gamma$) and the high energy cut-off (E$_{\rm cut}$). It has been shown that the slope of the power law depends on the accretion rate, with changes at intermediate accretion rates ($L_{Bol}/L_{Edd}$=$\lambda_{Edd}$ $\sim$10$^{-3}$), pointing to a change in the accretion mechanism, for example from a corona on a thin disk to an advection dominated accretion flow \citep[ADAF,][]{1994narayan}. Toward low accretion rates, this relationship is usually seen with a large scatter, which can be intrinsic or due to observational uncertainties \citep{2006shem, 2009Gu, 2011younes, 2015yang, 2018She}. Our second objective is to re-evaluate this relationship in the low accretion rate range, through detailed modeling of the reflection and broadband X-ray data, using observations from \emph{XMM-Newton}+\emph{NuSTAR}+\emph{Swift}. This paper is organized as follows: in Sect. \ref{sect:sample} we present details of the observations and sample. The data reduction is reported in Sect. \ref{sect:data_reduction}. The methodology followed during this work is shown in Sect. \ref{sect:metodo}. All the results are reported in Sect. \ref{sect:results}. The implications of our X-ray spectral analysis are discussed in Sect. \ref{Sec:discussion}.
Finally, a summary of our findings is presented in Sect. \ref{sect:conclusion}. \section{Sample and data} \label{sect:sample} Hard X-rays (E$\geq$10 keV) are not significantly affected by obscuration, at least up to $N_{\rm H}\sim 10^{24}\rm\,cm^{-2}$ \citep{2015ricci}, which allows us to obtain a highly complete AGN sample. Our work focuses on LLAGN selected through their hard-band X-ray emission as identified in the \emph{Swift}/BAT 70-month catalogue \citep{2013baumgartner} on board the \textit{Neil Gehrels Swift Observatory} \citep{gehrels2004}. BAT operates in the 14–195 keV energy band. The BAT AGN Spectroscopic Survey (BASS) provides high-quality multi-wavelength data for the BAT AGN, including black hole mass measurements \citep{2017mike} and X-ray spectroscopy modeling \citep{2017ricciapJS}. The first data release (DR1) of the BASS project \citep{2017mike} includes 642 of the \emph{Swift}/BAT AGN, and the second release of optical spectroscopy (BASS/DR2) will also soon be publicly available \citep{Koss_DR2_overview, Oh_DR2_NLR}. Our sample of galaxies was selected from the BASS/DR2 with accretion rates $\log(\lambda_{Edd})\leq$-3.0, yielding in total a sample of 24 AGN. We used the HEASARC\footnote{http://heasarc.gsfc.nasa.gov/} archive to search for simultaneous and non-simultaneous \emph{NuSTAR} and \emph{XMM–Newton} data public until August 2020. This search provided data with both telescopes for 16 sources. We include the proprietary data of the galaxy NGC\,5033 (PI: Diaz Y.; $\log(\lambda_{\rm Edd})=$-4.0), an AGN also contained in the BASS/DR2. Our final sample of LLAGN contains 17 objects, 11 of which are classified as Seyfert 2 (i.e. only narrow lines are visible in the optical spectrum) and six as Seyfert 1.9 (a broad component is visible in H$\alpha$ but not in H$\beta$) in the BASS/DR2. Table \ref{table:sample} shows the general properties of our sample.
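The selection quantities are related through $\lambda_{\rm Edd}=L_{\rm Bol}/L_{\rm Edd}$. A minimal sketch of the cut, assuming the standard Eddington luminosity $L_{\rm Edd}\simeq 1.26\times10^{38}\,(M_{\rm BH}/M_{\odot})$ erg s$^{-1}$ (the exact prescription adopted by BASS/DR2 may differ slightly):

```python
import math

# log Eddington ratio from log10 L_bol [erg/s] and log10 M_BH [M_sun],
# assuming L_Edd = 1.26e38 (M_BH / M_sun) erg/s.
def log_eddington_ratio(log_lbol, log_mbh):
    log_ledd = math.log10(1.26e38) + log_mbh
    return log_lbol - log_ledd

# NGC 3998 (Table 1): log L_bol = 42.29, log M_BH = 8.93
lam = log_eddington_ratio(42.29, 8.93)    # close to -4.74
selected = lam <= -3.0                    # low-accretion selection cut
```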
Notes for the individual galaxy are in the Appendix \ref{app:notes_object} and Table \ref{table:observations} shows the log of the observations. \begin{table*} \caption{General properties of the sample galaxies} \label{table:sample} \centering \begin{tabular}{c c c c c c c c c c} \hline\hline Name & RA & DEC & Type & Redshift & N$_{\rm gal}$ & $M_{\rm BH}$ & $L_{\rm Bol}$ & $\lambda_{\rm Edd}$ & \\ & (J2000) & (J2000) & & & (10$^{20}$ cm$^{-2}$) & M$_{\odot}$ & ($\log$) & ($\log$) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline NGC\,3998 & 179.484 & 55.454 & Sy1.9 & 0.003 & 20.09 & 8.93$^{\rm L}$ & 42.29 & -4.74 \\ NGC\,3718 & 173.145 & 53.068 & Sy1.9 & 0.003 & 20.03 & 8.14 & 41.74 & -4.49 \\ NGC\,4258* & 184.740 & 47.304 & Sy1.9 & 0.001 & 20.08 & 7.56$^{\rm L}$ & 41.39 & -4.28 \\ NGC\,5033 & 198.364 & 36.593 & Sy1.9 & 0.002 & 20.00 & 7.68 & 41.78 & -4.00 \\ ESO\,253-G003* & 81.325 & -46.00 & Sy2 & 0.042 & 20.62 & 9.84+ & 43.89 & -3.95 \\ NGC 1052 & 40.270 & -8.256 & Sy2 & 0.005 & 20.49 & 8.67 & 42.83 & -3.94 \\ NGC\,2655 & 133.907 & 78.223 & Sy2 & 0.004 & 20.32 & 8.20 & 42.43 & -3.87 \\ NGC\,3147* & 154.223 & 73.400 & Sy2 & 0.009 & 20.54 & 8.81 & 43.10 & -3.81 \\ NGC\,2110* & 88.047 & -7.456 & Sy2 & 0.007 & 21.27 & 9.38 & 43.81 & -3.67 \\ LEDA\,96373* & 111.610 & -35.906 & Sy2 & 0.029 & 21.47 & 9.21 & 43.80 & -3.51 \\ NGC\,2992 & 146.425 & -14.326 & Sy1.9 & 0.007 & 20.72 & 8.33 & 43.13 & -3.30 \\ M\,51 & 202.484 & 47.230 & Sy2 & 0.001 & 20.19 & 6.59 & 41.40 & -3.29 \\ NGC\,2273* & 102.536 & 60.845 & Sy2 & 0.006 & 20.84 & 7.99 & 42.84 & -3.25 \\ HE\,1136-2304 & 174.713 & -23.360 & Sy1.9 & 0.027 & 20.63 & 9.39 & 44.28 & -3.21 \\ IGRJ\,11366-6002 & 174.175 & -60.052 & Sy1 & 0.014 & 21.81 & 8.56 & 43.51 & -3.15 \\ IC\,4518A & 224.421 & -43.132 & Sy2 & 0.016 & 20.96 & 8.79 & 43.83 & -3.06 \\ NGC\,7674* & 351.986 & 8.779 & Sy2 & 0.028 & 20.70 & 9.18 & 44.28 & -3.0 \\ \hline \end{tabular} \tablefoot{ (Col. 1) Name, (Col. 
2 and 3) right ascension and declination in Equatorial (J2000.0) from \emph{Swift} BAT 105-month hard X-ray survey \citep{2018oh}, (Col. 4, 5, 7, 8 and 9) optical classification from BASS/DR2, redshift, black hole mass using the velocity dispersion method, bolometric luminosity and accretion rate $\lambda_{Edd}$=L$_{\rm Bol}$/L$_{\rm Edd}$ from the BASS/DR2 survey ($^{\rm L}$ identifies masses taken from the literature by the BASS/DR2 survey and the symbol + means from MgII). (Col. 6) represents the galactic absorption \citep{1990Dickey}. Objects marked with * are the galaxies with non-simultaneous observations with \xmm\ and \nus . } \end{table*} \section{Data Reduction} \label{sect:data_reduction} Data reduction was performed following the methodology explained in this section. Details on the observations can be found in Table \ref{table:observations}. \subsection{\emph{XMM-Newton data}} This satellite has two X-ray instruments, a grating spectrometer and the European Photon Imaging Camera (EPIC). The EPIC instrument has three detectors, two MOS \citep{2001turner} and one PN CCD \citep{2001struder}. We only used the observations from the EPIC-PN because of its higher throughput \citep{2001struder} and because the inclusion of the EPIC-MOS spectra would give too much statistical weight to the low energy range data points compared to the \nus and \sft/BAT data. We processed the Observation Data Files (ODFs) from the EPIC-PN detector using the Science Analysis System (SAS version 17.0.0). We followed standard procedures to obtain calibrated and concatenated event lists, filter them for periods of high background flaring activity, and extract light curves and spectra. Source events were extracted using a circular region of 49 arcsec centered on the target, and background events were extracted from a circular region of 98 arcsec on the same chip far from the source.
We verified that photon pile-up is negligible in the filtered event list with the XMMSAS task \textsc{epatplot}. We generated response matrix files (RMFs) and ancillary response files (ARFs), and rebinned the spectra in order to include a minimum of 25 counts in each background-subtracted spectral channel and to not oversample the intrinsic energy resolution by a factor larger than 3. \subsection{\emph{NuSTAR data}} The \emph{Nuclear Spectroscopic Telescope Array} (\emph{NuSTAR}) was launched in June 2012 \citep{2013harrison}. \emph{NuSTAR} has two identical co-aligned telescopes, each consisting of an independent set of X-ray mirrors and a focal-plane detector, referred to as focal plane modules A and B (FPMA and FPMB), that operate in the energy range 3--79 keV. The data reduction was performed with the \emph{NuSTAR} Data Analysis Software (\textsc{nustardas} v1.6.0). The event data files were calibrated with the \textsc{nupipeline} task using the response files from the Calibration Database \textsc{caldb} v.20180409 and \textsc{HEASOFT} version 6.25. With the \textsc{nuproducts} script, we generated both the source and background spectra, plus the ARF and RMF files. For both focal plane modules (FPMA, FPMB), we used a circular extraction region of radius 49 arcsec centered on the position of the source. The background was extracted from a source-free region with twice the radius of the target region, located in the same detector quadrant. Spectral channels were grouped with the \textsc{ftools} task \textsc{grppha} to have a minimum of 20 counts per spectral bin in the 3.0 -- 79.0 keV energy range. \subsection{\emph{Swift} data} The Neil Gehrels \emph{Swift} Observatory was launched on November 20, 2004.
It carries three instruments: the \emph{Swift Burst Alert Telescope} (BAT; \citealt{2005barthelmy}; bandpass: 15-350 keV), the X-ray Telescope (XRT; \citealt{burrows2005}; bandpass: 0.3-10 keV), and the UV/Optical Telescope (UVOT; bandpass: 170–650 nm). In this work, we focus on the \emph{Swift}/BAT and \emph{Swift}/XRT instruments. \begin{itemize} \item \underline{\emph{Swift/BAT:}} \\ We retrieved the binned and calibrated spectra, together with the response matrices for our targets, from the \emph{Swift}/BAT 105-month All-sky Hard X-Ray Catalog reported in \cite{2018kyu}. The observations were taken with the Burst Alert Telescope (BAT) on board the \emph{Swift} observatory. This survey has a sensitivity of 8.4$\times$10$^{-12}$ erg s$^{-1}$ cm$^{-2}$ in the 14–195 keV band over 90$\%$ of the sky, with eight-channel spectra averaged over the 105-month duration of the survey. The complete analysis pipeline is described in the \emph{Swift}/BAT 22-month All-sky Hard X-Ray Survey \citep{tueller2010}. \\ \item \underline{\emph{Swift/XRT:}} \\ For three sources (NGC\,7674, ESO\,253-G003, and IGRJ\,11366-6002) there are no simultaneous \emph{XMM-Newton} and \emph{NuSTAR} observations. We explored the \emph{Swift}/XRT archive and found simultaneous observations for all three of them. The data reduction of the \emph{Swift}/XRT data in Photon Counting mode was performed by following the standard routines described by the UK \emph{Swift} Science Data Centre (UKSSDC) and using the software in HEASoft version 6.30.1. Calibrated event files were produced using the routine {\sc xrtpipeline}, accounting for bad pixels and effects of vignetting, and exposure maps were also created. Source and background spectra were extracted from circular regions with 25 arcsec and 50 arcsec radii, respectively. The {\sc xrtmkarf} task was used to create the corresponding ancillary response files. The response matrix files were obtained from the HEASARC CALibration DataBase.
The spectra were grouped to have a minimum of 20 counts per bin using the {\sc grppha} task. After the data reduction, we find that both NGC\,7674 and ESO\,253-G003 present very low counts, preventing us from doing a proper spectral fit, so these will not be used in the analysis. \end{itemize} \section{Methodology} \label{sect:metodo} The analysis of the data comprises two steps: (1) combination of the \xmm and \nus observations; and (2) homogeneous spectral fitting of the sample. All the spectra have been fitted using \rm{\texttt{xspec}} version 12.10.0 \citep{1996arnaud}, and all the errors reported throughout the paper correspond to the 90$\%$ confidence level. \subsection{Combination of the \xmm and \nus observations} In this work, we have \nus observations, with an energy range from 3 to 79 keV, which are vital to study the Compton hump, a key signature of the reflection. Additionally, we have \xmm data, which provide the best combination of sensitivity, bandpass, and spectral resolution at energies ranging from 0.5 to 10.0 keV. Objects with simultaneous observations with \xmm and \nus were fitted with all model parameters tied between the different spectra, except for a free cross-normalization factor. Objects with non-simultaneous observations (denoted with the symbol * in our work) were tested for spectral variability between the observation epochs. In order to detect spectral variability, we simultaneously fitted the \xmm + \nus spectra in the overlapping 3.0 -- 10.0 keV range for each object with a power-law model under neutral absorption. In cases where the spectrum was not well-fitted with this model, we added a Gaussian component centered at 6.4 keV and studied the improvement of the fit. At first, all parameters were tied between the spectra of the different epochs/instruments. If this model produced a satisfactory fit ($\chi^{2}_{\nu}\leq$1.2), the source is considered non-variable and treated in the same way as the objects with simultaneous observations.
Two objects are well fitted with a tied normalization of the power law (LEDA\,96373* and NGC\,2273*). For the remaining objects (NGC\,4258*, NGC\,3147*, NGC\,2110* and IC\,4518A*), we found that allowing the normalization of the power law to vary between epochs resulted in a satisfactory fit. For these objects, in the subsequent fitting, the normalization of the power law was left free between the epochs, while the remaining parameters were tied. For one object (ESO\,253-G003*), allowing the slope of the power law to vary freely improved the fit significantly according to the F-test. Given the spectral variability in this source, we had to leave most parameters untied between the epochs, so the inclusion of the lower-energy spectrum would not constrain the reflection model further; for this reason, we used only the high-energy spectra for this source. Finally, one object (NGC\,7674) could not be fitted well even with a free slope and normalization, as it is a known changing-look AGN whose spectrum changed significantly in shape between observations \citep{2005bianchi}, so we retained only the \nus spectrum for the following analysis. The best model and the final configuration for each object are summarized in Table \ref{Table:not_simul}. \subsection{Spectral analysis} The aim of our spectral analysis is to quantify how much reflecting material \emph{can} exist around the central engine. For this purpose, we allow different types of reflectors and maximize the freedom of the fitted parameters of all required models, even if the fit results in unconstrained values for many of them. We tested the significance of the reflection component in the 3--195 keV band in all objects, obtaining significant detections in most of them. These tests are summarized in Appendix \ref{apnx:reflex_comp}. For all spectral fits, we included a multiplicative constant normalization between FPMA, FPMB, EPIC-PN, and \sft/BAT to account for calibration uncertainties between the instruments.
We started with a baseline model and added different components until a satisfactory fit was obtained. We selected three broad components in order to parametrize three scenarios: \begin{enumerate} \item \textbf{Cut-off power-law model (cPL) obscured by neutral material:} a single power-law model, which corresponds to the primary emission of a non-thermal source. The column density, N$_{\rm H, los}$, is added as a free parameter to account for absorption by matter along the line of sight to the target. The free parameters in this model are the column density, N$_{\rm H, los}$, the slope of the power law, $\Gamma$, the high-energy cut-off, E$_{\rm cut}$, and the normalization. \\ \item \textbf{Reflection models (Refl):} when the X-ray continuum is scattered by the surrounding gas, it can produce fluorescent emission lines (most notably Fe\,K$\alpha$ at 6.4 keV) and a broad hump-like continuum peaking around 10--30 keV. The reflection was modeled with three possible scenarios: \\ \begin{itemize} \item A neutral reflector with a semi-infinite column density, modelled with \texttt{PEXMON} \citep{2007nandra}. This model assumes the existence of optically thick and cold material, distributed in a slab and covering a given fraction of the X-ray source. The \texttt{PEXMON} model includes fluorescence, adding spectral features such as the Fe\,K$\alpha$ and Fe\,K$\beta$ emission lines, following the Monte Carlo calculations by \cite{1991george}. This model represents both the reflected and the intrinsic emission, defined by $\Gamma$, the high-energy cut-off (E$_{\rm cut}$), and the reflection fraction, R$_{\rm f}$. The free parameters in this model are the reflection fraction, R$_{\rm f}$ (to account for the reflection component and the contribution from the intrinsic power-law continuum), the spectral index, $\Gamma$, the high-energy cut-off, E$_{\rm cut}$, the inclination, and the normalization.
\\ \item A smooth spherical distribution of neutral gas, with conical cavities along the polar direction, modeled with \texttt{BORUS} \citep{2018balokovic}. This model calculates the reprocessed continuum of photons that are propagated through a cold and static medium. \texttt{BORUS} is similar to the torus model \texttt{BNtorus} of \cite{2011brightmanNandra}, but it has additional free parameters (E$_{\rm cut}$, $A_{\rm Fe}$), additional chemical elements included, a calculation extending to higher energies, and a separate line-of-sight component. Furthermore, this model has a variable covering factor, which is an advantage compared with other models such as \texttt{MYTORUS} \citep{2009Murphy}, which assumes a toroidal geometry with the covering fraction fixed at 0.5. In this work, we used the geometry of a smooth spherical distribution of gas, with conical cavities along the polar directions (\texttt{borus02}). The column density and the inclination of the torus are free parameters in this model. \texttt{borus02} includes fluorescent emission lines, according to the fluorescent yields for K$\alpha$ and K$\beta$ lines from \cite{1979krause}, for all elements up to zinc (Z < 31). The reflected spectrum of this torus is calculated for a cut-off power-law illuminating continuum, where E$_{\rm cut}$, $\Gamma$, and the normalization are free parameters. We modeled the direct coronal emission separately with a cut-off power law under a neutral absorber, as described above. We set as free parameters the column density along the line of sight, N$_{\rm H, los}$, the inclination, Cos($\theta_{incl}$), the covering factor, CF, the column density of the reflector, $\log(\rm{N}_{\rm{H,refl}})$, the spectral index of the primary emission, $\Gamma$, the high-energy cut-off, E$_{\rm cut}$, and the normalization of the reflector, tied to that of the primary emission. \\% We recall that $\theta_{\rm tor}$=0 (the opening angle) corresponds to a pole-on view.
\item The accretion disk reflection, modeled with \texttt{XILLVER} \citep{2013garcia}, where the coronal spectrum is a power law with an exponential cut-off described by the photon index, $\Gamma$, and the high-energy cut-off, $E_{\rm{cut}}$. Another important parameter is the ionization parameter, $\xi$, defined as the incident flux divided by the density of the disk. This parameter is described by $\log(\xi)$, ranging from 0 for a neutral disk to 4.7 for a heavily ionized disk, with $\xi$ in erg\,cm\,s$^{-1}$ (see \citealt{2013garcia}, for a more detailed description). Other parameters in this model are the iron abundance, $A_{\rm Fe}$, relative to the solar value (assumed to be solar in this work), the redshift, the reflection fraction, R$_{\rm f}$, and the inclination. This model takes into account both the reflected continuum and the Fe\,K$\alpha$ line. The free parameters in this model are the spectral index, $\Gamma$, the high-energy cut-off, E$_{\rm{cut}}$, the ionization degree, $\log(\xi)$, the inclination, incl, the reflection fraction, R$_{\rm f}$ (to normalize the reflection component relative to the intrinsic power-law continuum), and the normalization. \\ \end{itemize} \item \textbf{Soft X-ray emission (SE):} when the combination of the above models does not produce a good fit, we explore whether the addition of further spectral component(s) improves the fit. The following spectral components are considered: \\ \begin{itemize} \item An absorbed scattered power law: an absorbed power law (\texttt{PL}) to model the scattered emission that is deflected by ionized gas.
The photon index, $\Gamma$, of the scattered component is tied to that of the primary power law. We set as free parameters the column density, N$_{\rm H,ext}$, and the normalization of the scattered component, the latter restricted to be less than 5$\%$ of that of the primary component. \\ \item Thermal emission: an optically thin thermal component, modeled with \texttt{MEKAL} in \texttt{xspec}, to model the soft excess observed below 1 keV, potentially due to star formation processes and/or thermal emission from a hot interstellar medium. We kept the hydrogen column density, abundance, and switch at their default values (1, 1, and 1, respectively) and left the temperature, ionization, and normalization free to vary. \\ \item An ionized absorber (ab): a warm absorber modelled with \texttt{zxipcf} within \texttt{xspec}. This model uses a grid of \texttt{xstar} photoionized absorption models (calculated assuming a microturbulent velocity of 200 km\,s$^{-1}$) for the absorption, and it assumes an absorber covering some fraction of the source, cf$_{\rm W}$ \citep{2008reeves}. \texttt{zxipcf} has as free parameters the column density, N$_{\rm H,W}$, the ionization state, $\log(\xi_{\rm W})$, the covering fraction, cf$_{\rm W}$, and the redshift. We set the covering fraction to cf$_{\rm W}$=1 to mimic an absorber covering the whole sky. We left N$_{\rm {H,W}}$ and $\log(\xi_{\rm W})$ free to vary.\\ \end{itemize} \end{enumerate} We started our analysis by fitting a baseline model, defined as \texttt{MOD = Refl + cPL}, to the data. Then we added one SE emission or absorption component at a time (testing, one by one, \texttt{MOD + PL}, \texttt{MOD + MEKAL} and \texttt{MOD*ab}) and explored whether the inclusion of these components improves the fit (using an F-test in the case of the scattered power law and \texttt{MEKAL}; in the case of the ionized absorber, we evaluated the improvement of the fit using the $\chi^2$ values and a visual inspection of the residuals).
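The F-test used to assess each added component compares the nested (baseline) and extended fits; a minimal sketch is given below. The fit statistics in the example are hypothetical, and in practice the resulting statistic would be compared against the F-distribution at the chosen significance level.

```python
# Illustrative F statistic for nested models: chi2_simple/dof_simple from
# the baseline model MOD, chi2_extended/dof_extended after adding one SE
# component (e.g. MOD + PL). The example values are hypothetical.

def f_statistic(chi2_simple, dof_simple, chi2_extended, dof_extended):
    """F statistic for the chi-squared improvement of a nested model."""
    delta_chi2 = chi2_simple - chi2_extended
    delta_dof = dof_simple - dof_extended
    return (delta_chi2 / delta_dof) / (chi2_extended / dof_extended)

# Adding a 2-parameter component lowers chi2 from 320 (250 dof) to 280 (248 dof)
print(round(f_statistic(320.0, 250, 280.0, 248), 2))  # 17.71
```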
If any of the improvements was significant, we selected the model that returned the lowest value of $\chi^2/$d.o.f.\footnote{d.o.f.: degrees of freedom.} as the new baseline model, and the process of including and testing an additional SE component was repeated. When none of the additional SE components provided a significant improvement, the iteration stopped. Up to four iterations were necessary for each object and reflection model. The method is represented in Fig. \ref{fig:method}. The process was repeated separately for each reflection model; thus, we report up to three best-fitting models for each object. \begin{figure} \centering \includegraphics[width=8.0cm]{diagram_method.png} \caption{Schematic view of the methodology followed to fit the data. Note that the loop in blue iterates a maximum of four times. For a detailed explanation of the method, we refer the reader to the text.} \label{fig:method} \end{figure} The models that were selected to fit the data are represented in \texttt{xspec} as: \vskip 0.3cm $\boxed{\rm{C} \times N_{\rm H,Gal} \times ab \times (N_{\rm H,ext} \times SE + N_{\rm H,los}\times cPL + N_{\rm H,los} \times \rm{Refl})}$ \vskip 0.5cm where $\rm C$ represents the cross-calibration constant between the different instruments, and N$_{\rm H, Gal}$ is the Galactic absorption (\texttt{phabs} in \texttt{xspec}) predicted using the N$_{\rm H}$ tool within FTOOLS \citep{1990Dickey, 2005karberla}. ``ab'' is the ionized absorption component modelled with \texttt{zxipcf}, in cases where this component is used; otherwise it is set to unity. Two absorbing column densities are used, called here N$_{\rm H,ext }$ and N$_{\rm H,los}$ (\texttt{zphabs} in \texttt{xspec}).
N$_{\rm H,los}$ is assumed to cover the nuclear components (power law and disk reflection)\footnote{In the case of a torus-like reflection, the absorber does not act on the torus-like reflector.} and N$_{\rm H,ext}$ covers the SE component\footnote{In the case of MEKAL, the absorber does not act on this component.}. Moreover, cPL is a cut-off power law (\texttt{cutoffpl} in \texttt{xspec}) representing the primary X-ray emission, and ``Refl'' represents the different reflection models used. Note that we imposed the following conditions on the resulting best fit: $\Gamma$>0.5, N$_{\rm H, Gal}\leq$ N$_{\rm H,ext }$ and N$_{\rm H,los}$>N$_{\rm H,ext }$. In the case of NGC\,1052, a visual inspection showed that additional Gaussian lines were required at soft energies; we included S\,XIV at 2.4 keV and Si\,XIII at 1.85 keV, in agreement with \cite{2020natalia1052}. Each was added as a narrow Gaussian line with a fixed centroid energy and a width fixed at 0.01 keV. \section{Results} \label{sect:results} We refer the reader to the following sections and tables for details on the analysis. Comparisons with previous works and our results on individual objects can be found in Appendix \ref{app:notes_object}. The coronal parameters (i.e., $\Gamma$, E$_{\rm cut}$, and $\chi^2$) are listed in Table \ref{table:corona}. The reflection parameters, i.e., R$_f$ and inclination for \texttt{PEXMON}; $\log(\rm{N}_{\rm{H,refl}})$, CF and inclination for \texttt{borus02}; and $\log(\xi)$, R$_{\rm f}$ and inclination for \texttt{XILLVER}, are listed in Table \ref{table:reflection_parameters}. In Table \ref{table:soft} we show all the additional components required for the fit with each of the reflection models, i.e., the column density of the neutral absorbers in the line of sight to the extended and nuclear components and the temperature of the optically thin thermal emission components.
Additional parameters, i.e., the column density and ionization parameter of the ionized absorbers in the line of sight and the normalization of the scattered power law, can be seen in Table \ref{table:other}. All cross-calibration constants are listed in Table \ref{table:cab}. The plots of the spectra with the best-fit models and their residuals can be found in Appendix \ref{app:plots}. \subsection{Models} \label{sect:models} In this work, the spectrum of each source in the sample was fitted with three reflection models (\texttt{PEXMON}, \texttt{borus02}, and \texttt{XILLVER}), i.e., each source is fitted by three different models. The simplest model used in our work (\texttt{PEXMON}) is a good representation of the data; however, we focus on the models that can explore different reflector geometries, \texttt{borus02} and \texttt{XILLVER}. To decide which model provides the best description of the observations, we estimate the ``evidence ratio'' using the Akaike information criterion (AIC) for both models. This evidence ratio quantifies whether one model is preferred over the other; it is defined as $\epsilon$=W(AIC$_{\rm{torus}}$)/W(AIC$_{\rm{disk}}$), where W(AIC$_{\rm{torus}}$) and W(AIC$_{\rm{disk}}$) are the ``Akaike weights'' (see \citealt{2016emmanopolus} for more details). The evidence ratio is a measure of the relative likelihood of the torus versus the disk model. The torus model is 200 times more likely than the disk model when $\epsilon\leq$0.0067, and the disk model is 200 times more likely than the torus model when $\epsilon\geq$150. The evidence ratios are listed in Table \ref{table:AIC}.
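The model-selection step can be sketched as follows. Following the decision thresholds quoted above, small $\epsilon$ favours the torus model; accordingly, this sketch computes $\epsilon$ as exp[(AIC$_{\rm torus}$-AIC$_{\rm disk}$)/2], i.e., the ratio of Akaike weights written so that the quoted thresholds apply, which is an assumption of the sketch. The fit statistics in the example are hypothetical.

```python
import math

# Sketch of the AIC-based model selection. For a chi-squared fit statistic,
# AIC = chi2 + 2k (Gaussian likelihood). epsilon < 1 when the torus fit has
# the lower AIC; thresholds are those quoted in the text.

def aic(chi2, n_free_params):
    """AIC for a chi-squared fit statistic."""
    return chi2 + 2.0 * n_free_params

def evidence_ratio(aic_torus, aic_disk):
    """epsilon: small when the torus fit has the lower AIC."""
    return math.exp((aic_torus - aic_disk) / 2.0)

def preferred_model(aic_torus, aic_disk, low=0.0067, high=150.0):
    """Apply the decision thresholds used for Table 'AIC'."""
    eps = evidence_ratio(aic_torus, aic_disk)
    if eps <= low:
        return "Torus"
    if eps >= high:
        return "Disk"
    return "T/D"  # indistinguishable

# Hypothetical fit statistics (8 free parameters each): torus clearly better
print(preferred_model(aic(250.0, 8), aic(280.0, 8)))  # Torus
```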
\begin{table} \caption{Best-model results according to the Akaike criterion} \label{table:AIC} \centering \begin{tabular}{c c c } \hline\hline Name & $\epsilon$ & Model \\ \hline NGC\,3998 & 8.06E-01 & T/D \\ NGC\,3718 & 1.68E+00 & T/D \\ NGC\,4258* & 2.57E-16 & Torus \\ NGC\,5033 & 5.67E-04 & Torus \\ ESO\,253-G003* & 1.20E+00 & T/D \\ NGC 1052 & 3.86E-59 & Torus \\ NGC\,2655 & 3.86E-02 & T/D\\ NGC\,3147* & 6.44E-02 & T/D \\ NGC\,2110* & 1.65E-53 & Torus \\ LEDA\,96373* & 8.97E-26 & Torus \\ NGC\,2992 & 1.77E-74 & Torus \\ M\,51 & 1.68E-04 & Torus \\ NGC\,2273* & 4.41E+07 & D \\ HE\,1136-2304 & 5.31E-15 & Torus \\ IGRJ\,11366-6002 & 2.49E+00 & T/D \\ IC\,4518A & 1.18E-06 & Torus\\ NGC\,7674* & 2.23E+02 & D \\ \hline \end{tabular} \tablefoot{ Evidence ratio for the Akaike method and the resulting best model for each source. T/D represents the cases where the torus and disk models provide equally good fits; Torus represents the torus model (\texttt{borus02}) and D the disk model (\texttt{XILLVER}). Objects marked with * are the galaxies with non-simultaneous \xmm\ and \nus\ observations. } \end{table} For nine (53$\%$) objects (NGC\,4258, NGC\,1052, NGC\,2110, LEDA\,96373, NGC\,2992, M\,51, HE\,1136-2304, IC\,4518A, and NGC\,5033), \texttt{borus02} is preferred; therefore, in the following sections we adopt this model as the best representation of the data for these objects. On the other hand, two (12$\%$) objects (NGC\,2273 and NGC\,7674) are well fitted with a disk (\texttt{XILLVER}) model, and for six (35$\%$) objects (NGC\,3998, NGC\,3718, ESO\,253-G003, NGC\,2655, NGC\,3147, and IGRJ\,11366-6002) both models fit the data similarly well. Since it is not possible to distinguish between reflection dominated by a torus or by a disk in these cases, we will treat them separately in the following sections. When referring to the torus case, we refer to all \texttt{borus02} models for the whole sample (15 galaxies: indistinguishable and distinguishable cases).
In the disk case, we refer to the \texttt{XILLVER} model, considering the indistinguishable cases and the cases where the disk is a good representation of the data (8 galaxies in total), since in the remaining cases the disk model is not a good representation of the data. \subsubsection{X-Ray Continuum Properties} The X-ray continuum of AGN is described by a power law with a high-energy cut-off. The free parameters of this component are the spectral index ($\Gamma$) and the high-energy cut-off (E$_{\rm{cut}}$). In Fig. \ref{fig:photon_index} we show the histogram of $\Gamma$ derived from the broadband spectral analysis with each reflection model. Note that the spectral indices of the reflected and intrinsic emission are tied. We find that the mean values (dashed vertical lines) of $\Gamma$ for the sample using \texttt{PEXMON}, \texttt{borus02} and \texttt{XILLVER} are consistent (1.73$\pm$0.21, 1.72$\pm$0.17 and 1.72$\pm$0.20, respectively). Note that the simplest model used in our analysis, \texttt{PEXMON}, yields photon-index values consistent with those of the more detailed geometrical models. \begin{figure} \centering \includegraphics[width=0.40\textwidth]{Histograma_gamma2.pdf} \caption{ Comparison of the spectral index, $\Gamma$, estimated with the different models. The dotted lines represent the mean values. } \label{fig:photon_index} \end{figure} Considering the torus model (15 galaxies), we found a median value of $\Gamma$=1.76 with $\sigma$=0.16, with values ranging between [1.40, 2.06]. Another important parameter that can be estimated with the \nus data is the high-energy cut-off (E$_{\rm cut}$). This parameter can be considered an indicator of the temperature of the X-ray corona; consequently, constraining it provides information about the dynamics of the corona and the physical processes occurring within it. Nevertheless, this parameter is poorly constrained. A lower (upper) limit of E$_{\rm cut}$ could be determined for eight (two) sources.
The five AGN for which E$_{\rm cut}$ could be determined (NGC\,3998, ESO\,253-G003, NGC\,2110, NGC\,2992, and NGC\,5033) have a mean value of E$_{\rm cut}$=193.28 keV with a standard deviation of $\sigma$=99.19 keV. Furthermore, taking into account the disk model (8 galaxies), we found a median value of $\Gamma$=1.71 with $\sigma$=0.23, with values ranging between [1.40, 2.06]. Regarding the high-energy cut-off, we could find five (one) lower (upper) limits, and for six objects we obtained a mean value of E$_{\rm cut}$=371.47 keV with a standard deviation of $\sigma$=619.99 keV. \subsubsection{Soft band spectral fit} In the soft (0.3 -- 10.0 keV) energy band, we added a thermal component (\texttt{MEKAL} in \texttt{xspec}), a scattered power law, absorption by ionized gas (also referred to as ``warm absorption''; modelled with \texttt{zxipcf} in \texttt{xspec}), or a combination of these components to improve the spectral fit. When considering the torus model (15 objects), six objects (NGC\,3998, ESO\,253-G003, NGC\,3147, M\,51, IGRJ\,11366-6002, and NGC\,5033) do not require an additional component to improve the fit. Two objects (NGC\,3718 and HE\,1136-2304) required a \texttt{MEKAL} component (with kT=0.88$^{**}_{0.67}$ keV and kT=0.59$^{0.67}_{0.51}$ keV, respectively). Two objects are well fitted by combining \texttt{MEKAL} with either a power law or a warm absorber (NGC\,1052 with \texttt{MEKAL+PL} and IC\,4518A with \texttt{MEKAL*ab}). Composite models are needed for five galaxies (NGC\,4258, NGC\,2655, NGC\,2110, LEDA\,96373, and NGC\,2992). In the cases where two \texttt{MEKAL} components were required, the temperatures are in the range kT$_{\rm 1}$ = [0.58 - 0.62] keV with a mean value of kT$_{1}$=0.60 keV and $\sigma$=0.02 keV, and kT$_{\rm 2}$ = [0.15 - 0.22] keV with a mean value of kT$_{2}$=0.19 keV and $\sigma$=0.03 keV.
The mean value of the column density of the ionized absorber is N$_{\rm{H,W}}$=1.66$\times$10$^{\rm{22}}$ cm$^{\rm{-2}}$ with $\sigma$=1.41$\times$10$^{\rm{22}}$ cm$^{\rm{-2}}$. The degree of ionization is in the range [-1.14, 4.30], with a mean of $\log(\xi_{\rm W})$=1.31 and $\sigma$=1.99. In relation to the disk model (8 objects), five galaxies do not require any additional component to improve the spectral fit (NGC\,3998, ESO\,253-G003, NGC\,3147, IGRJ\,11366-6002 and NGC\,7674). One galaxy (NGC\,3718) requires a \texttt{MEKAL} component to improve the fit. Two galaxies (NGC\,2655 and NGC\,2273) are well fitted with a composite model, \texttt{MEKAL*ab}. \subsubsection{Line-of-sight column density} Absorption of X-rays by neutral material is the result of the combined effect of Compton scattering and photoelectric absorption. These were modelled using \texttt{CABS} and \texttt{ZPHABS} in \texttt{xspec}, respectively. In \texttt{ZPHABS}, we fixed the redshift at the value of each source. The only free parameter is the column density, which is tied in all fits (i.e., $\rm N_{\rm{H-ZPHABS}}=\rm N_{\rm{H-CABS }} = \rm N_{\rm{H-los} }$). According to the torus model, we can classify six galaxies as unobscured ($\log(\rm{N_{\rm{H, los}}})$<22) (NGC\,3998, NGC\,3147, NGC\,2992, HE\,1136-2304, IGRJ\,11366-6002 and NGC\,5033), with values between $\log(\rm{N_{\rm{H, los}}})$=[20.0, 21.89], and eight galaxies (NGC\,3718, NGC\,4258, ESO\,253-G003, NGC\,1052, NGC\,2655, NGC\,2110, IC\,4518A, and LEDA\,96373) as obscured (22<$\log$($\rm{N}_{\rm H}$)<24.18), with values in the range $\log(\rm{N_{\rm{H, los}}})$=[22.01, 24.09]. According to our spectral analysis, one galaxy (M\,51) in our sample can be classified as Compton thick (CT) (using as a threshold N$_{\rm{H}}=1.5\times 10^{24}$ cm$^{-2}$, or $\log(\rm{N_{\rm{H, los}}})$=24.18).
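The obscuration classes used above follow directly from the line-of-sight column density; a minimal sketch of the thresholds is given below (the example column densities are illustrative, not our fitted values).

```python
# Classification by line-of-sight column density in log units, using the
# thresholds from the text: log N_H < 22 (unobscured), 22-24.18 (obscured),
# log N_H > 24.18, i.e. N_H > 1.5e24 cm^-2 (Compton thick).

def classify_obscuration(log_nh_los):
    """Return the obscuration class for a log10 line-of-sight N_H."""
    if log_nh_los < 22.0:
        return "unobscured"
    if log_nh_los <= 24.18:
        return "obscured"
    return "Compton thick"

# Illustrative values
for log_nh in (21.3, 23.0, 24.5):
    print(log_nh, classify_obscuration(log_nh))
```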
The mean values of the spectral index, the column density in the line of sight, and the column density of the torus are reported in Table \ref{table:result_group}; all the parameters are consistent between the groups. Note that the cross-calibration constants are also consistent between the groups. Regarding the disk model, two galaxies (NGC\,3998 and IGRJ\,11366-6002) can be classified as unobscured, and six galaxies (NGC\,3718, ESO\,253-G003, NGC\,2655, NGC\,3147, NGC\,2273, and NGC\,7674) as obscured, with values $\log(\rm{N}_{\rm{H, los}})$=[22.03, 23.36]. The mean values of the spectral index, the column density in the line of sight, the ionization degree of the accretion disk, and the reflection fraction are reported in Table \ref{table:result_group} and show consistent values between the categories. Note that, for reflection dominated by an accretion disk, none of the galaxies in our sample can be classified as CT. \begin{table*} \caption{Mean values and standard deviations of the spectral parameters for the subgroups with the torus and the disk models.} \centering \begin{tabular}{c|c c c} \hline Group & $\Gamma\pm\sigma$ & $\log(\rm{N}_{\rm{H, los}})\pm\sigma$ & $\log(\rm{N}_{\rm{H,refl}})\pm\sigma$ \\ \hline Unobscured (6) & 1.80$\pm$0.12 & 20.85$\pm$0.56 & 23.45$\pm$0.68 \\ Obscured (8) & 1.74$\pm$0.17 & 22.99$\pm$0.65 & 23.65$\pm$0.58 \\ \hline \end{tabular} \vskip 2.5mm \begin{tabular}{c|c c c c} \hline Group & $\Gamma\pm\sigma$ & $\log(\rm{N}_{\rm{H, los}})\pm\sigma$ & $\log(\xi)\pm\sigma$ & R$_{\rm{f}}\pm\sigma$ \\ \hline Unobscured (2) & 1.85$\pm$0.05 & 21.90$\pm$1.32 & 3.61$\pm$0.47 & 3.97$\pm$0.12 \\ Obscured (6) & 1.74$\pm$0.26 & 22.90$\pm$0.44 & 2.43$\pm$0.77 & 6.59$\pm$3.29 \\ \hline \end{tabular} \tablefoot{Mean values and standard deviations ($\sigma$) of the following parameters for each group. For the torus: $\Gamma$, the line-of-sight column density (in log units), and the column density of the torus-like reflector (in log units).
For the disk: $\Gamma$, the line-of-sight column density (in log units), the ionization degree, and the reflection fraction. The numbers in parentheses give the number of AGN in each category. } \label{table:result_group} \end{table*} \subsubsection{The reflection component} The reflection features observed in the hard X-ray spectra of AGN may be caused by neutral and distant material, such as the torus, or by the ionized material of the accretion disk. For the case where the reflection is dominated by the torus, the mean column density of this structure for our sample is $\log(N_{H, refl})$=23.69 with $\sigma$=0.76, with values between [22.50, 25.40]. Four objects (NGC\,1052, M\,51, IGRJ\,11366-6002, and IC\,4518A) show a column density of the torus consistent with a Compton-thick structure. Another important parameter derived from the torus reflector model is the covering factor. We were only able to determine a lower (upper) limit for three (six) objects (a lower limit for NGC\,3998, NGC\,2655, and IC\,4518A, and an upper limit for NGC\,3718, NGC\,4258, ESO\,253-G003, NGC\,3147, NGC\,2992, and M\,51). This parameter was determined for six ($40\%$) objects (NGC\,1052, NGC\,2110, LEDA\,96373, HE\,1136-2304, IGRJ\,11366-6002, and NGC\,5033), with a mean value of CF=0.59 and $\sigma$=0.25. The inclination, cos($\theta_{incl}$), is also measured with the torus model; however, we obtain only lower (upper) limits for seven (three) objects and properly constrained it for five sources. For the disk-like reflection, we constrained the ionization degree of the accretion disk in six galaxies (NGC\,3718, ESO\,253-G003, NGC\,3147, NGC\,2273, IGRJ\,11366-6002 and NGC\,7674), finding a median ionization degree of the disk of $\log(\xi)$=2.44 with $\sigma$=0.81. For two objects, we obtained only limits (a lower limit for NGC\,3998 and an upper limit for NGC\,2655).
Regarding the reflection fraction, R$_{\rm f}$, we performed a test by fixing the value of $\log(\xi)$ to 0 and comparing the reflection fraction obtained with this \xill configuration with that from \texttt{PEXMON}. For the sample, we found consistent values between both models. However, since our goal is to constrain the accretion disk features, we left the ionization degree as a free parameter. We obtained six lower limits, one upper limit (NGC\,2655), and one constrained value (NGC\,7674). This model also allows us to estimate the inclination; this parameter is constrained in five sources, with a mean value of incl=70.10 deg and $\sigma$=27.23 deg, and we obtained four upper limits. \subsubsection{Flux and Luminosity} We computed the X-ray flux and luminosity in two energy bands, 2.0 -- 10.0 keV and 10.0 -- 79.0 keV, using \texttt{xspec}. Note that the redshifts of the sources were taken from the NASA/IPAC Extragalactic Database (NED). The values can be seen in Table \ref{table:luminos}. Taking into account the torus model, the mean values of the intrinsic luminosities in the \bors case are $\log(L_{2.0-10.0})$=41.73 with $\sigma$=1.16 and $\log(L_{10.0-79.0})$=42.14 with $\sigma$=1.29. In the \xill case, we found $\log(L_{2.0-10.0})$=41.57 with $\sigma$=1.06 and $\log(L_{10.0-79.0})$=41.85 with $\sigma$=1.24, equivalent to the \bors values. The distributions of the intrinsic luminosity obtained in both cases can be seen in Fig. \ref{fig:lumin}. \begin{figure*} \centering \includegraphics[width=0.850\textwidth]{histogramas_fin.png} \caption{Histograms of the intrinsic luminosity in the 2.0 -- 10.0 keV (left) and 10.0 -- 79.0 keV (right) bands for the \bors and \xill models, with the number of objects in each group. } \label{fig:lumin} \end{figure*} \section{Discussion} \label{Sec:discussion} We have performed the X-ray spectral analysis of an AGN sample with accretion rates $\log(L_{\rm Bol}/L_{\rm Edd})\leq -3$, selected from the BASS/DR2, that has available \nus + \xmm + \emph{Swift} data.
Models of reflection from a neutral slab (\texttt{PEXMON}), from an ionized accretion disk (\texttt{XILLVER}), and from the torus (\texttt{borus02}) have been used to fit the data. This sample is composed of 17 objects, and our main results are summarized as follows: \begin{enumerate} \item In our sample, six (35$\%$) objects are equally well fitted with a disk or with a torus-like reflector. For nine (53$\%$) galaxies, the torus reflection model is the best representation of the data. In two cases (12$\%$), the disk model provides the best fit. \item When modeling the reflection with \texttt{borus02}, seven objects are well fitted by a single neutrally absorbed cut-off power law plus reflection (i.e., no additional components are required in the soft band). When modeling the reflection with \texttt{XILLVER} instead, five objects can be well modeled in the same way. The remaining objects require the addition of a MEKAL component and/or a scattered power law, an ionized absorber, or a combination of two or more of these components. \item According to the torus model, six sources can be classified as unobscured ($\log$(N$_{\rm H}$)<22), eight galaxies as obscured (22<$\log$(N$_{\rm H}$)<24.18), and one object has a column density in the line of sight consistent with a Compton-thick source ($\log$(N$_{\rm H}$)>24.18). According to the disk reflection, two (six) objects can be classified as unobscured (obscured). These classifications are consistent among the models, except in the case of NGC\,3147 (unobscured according to the torus and obscured with the disk). \end{enumerate} The high quality and broad spectral coverage obtained by combining \xmm+\nus+\emph{Swift} allowed us to put constraints on spectral parameters related to the accretion mechanism and the reflection of LLAGN.
Our analysis covers energies above 10.0 keV, where reflection plays an important role in the spectral fit; accounting for this feature in the X-ray spectral analysis can affect the estimation of the coronal parameters (see \citealt{2020diaz}). In the following, we discuss the physical interpretations of the results presented in this paper. \subsection{Determination of $L_{\rm Bol}/L_{\rm Edd}$} \label{sec:bolo_corre} The selection of the sample presented in this work was based on sources with $\log(L_{\rm Bol}/L_{\rm Edd})\leq -3$, according to the values reported in BASS/DR2. However, because variability is one of the defining properties of AGN, we re-estimate these accretion rates using the data analyzed here. To estimate $L_{\rm Bol}/L_{\rm Edd}$, we follow the relation given in \citet{2010eracleous}, which uses the black hole mass and the bolometric luminosity. According to \cite{2017mike}, the black hole masses available for the BASS sources were determined using different methods. For 14 of our sources, they were estimated using the velocity dispersion method, from the M$_{\rm BH}$-$\sigma_{*}$ relation by \cite{2013kormendy}. Two galaxies have M$_{\rm BH}$ taken from the literature (NGC\,3998 via the M-$\sigma$ relation and NGC\,4258 from a rotating H$_2$O maser disk), and for one source (ESO\,253-G003) it was estimated from the MgII emission line. The uncertainties on these M$_{\rm BH}$ determinations are $\sim$0.3-0.4 dex (as explained in the BASS/DR1 paper; \citealt{2017mike}); we conservatively assume that the typical uncertainty on M$_{\rm BH}$ is 0.4 dex. The other key parameter is the bolometric luminosity. The best method to estimate it is to integrate the area under the spectral energy distribution (SED); however, observations from a variety of telescopes are necessary to build up a complete and detailed SED.
An alternative is the bolometric correction, which depends on the X-ray luminosity and is the approach we use in this work. For instance, BASS/DR1 \citep{2017mike} adopted the bolometric correction derived by \cite{2007vasu, 2009vasu}, where L$_{\rm bol}$/L$_{\rm 2-10 keV}$ = 20 for $L_{\rm Bol}/L_{\rm Edd}\leq$ 0.4, and L$_{\rm Bol}$/L$_{\rm 2-10 keV}$ = 70 for $L_{\rm Bol}/L_{\rm Edd}\geq$ 0.4. As the bolometric luminosity is fundamental in the estimation of the accretion rate, we examine an alternative determination of $L_{\rm Bol}/L_{\rm Edd}$ based on the X-ray luminosity estimated from the data of our sample of AGN. We use the intrinsic luminosity in the 2--10 keV rest-frame energy range, L$_{\rm 2.0-10.0 keV}$, derived from the best-fitting spectral models of the X-ray data. Note that for the statistically indistinguishable cases, we use the values from the \bors model; the results are the same when using the \xill model. A comparison between our L$_{\rm 2.0-10.0 keV}$ calculation and the BASS/DR2 values is presented in Fig. \ref{fig:l2_10}, showing differences in the luminosity, possibly related to variability. The difference between the fluxes measured by BASS/DR2, integrated over 70 months, and the flux measured in the short exposures with \nus that we use here can be quite large for highly variable sources such as NGC\,2992 \citep{2000Gilli, 2010shu, 2017lore, 2018marinucci, 2020marinucci} and LEDA\,96373 \citep{2009landi}. To be consistent with the state of each AGN at the time $\Gamma$ and the other parameters were measured, we recalculate $L_{\rm Bol}/L_{\rm Edd}$ using the fluxes measured here and refine it by changing the bolometric correction as described below. We use our L$_{\rm 2.0-10.0 keV}$ calculation in combination with the bolometric correction K(2.0-10.0 keV) from \cite{2020duras}, who used a sample of $\sim$1000 type 1 and type 2 AGN from five different AGN surveys for which they performed SED fitting.
They reported a bolometric correction as a function of the 2.0-10.0 keV X-ray luminosity. The resulting K(2.0-10.0 keV) values are slightly smaller than those used previously (L$_{\rm bol}$/L$_{\rm 2-10 keV}$ = 20), with a median value of K(2.0-10.0 keV) = 15.60 and a scatter of $\sim$0.37 dex \citep{2020duras}. The values of the bolometric luminosity and Eddington ratio are given in Table \ref{table:lumi_kcorrect}. The errors on the bolometric luminosity correspond to the error propagation of M$_{\rm BH}$ (0.4 dex), K(2.0-10.0 keV) (0.37 dex), and L$_{\rm 2-10 keV}$ (estimated with \texttt{xspec}). In the following analysis, we will use these $L_{\rm Bol}/L_{\rm Edd}$ values to minimize the effects of source variability. \begin{figure} \centering \includegraphics[width=9.2cm]{plot_l_bass_yaher2.pdf} \caption{Intrinsic luminosity in the 2.0-10.0 keV range from BASS/DR2 and from this work. The dotted black line represents the x=y relation. } \label{fig:l2_10} \end{figure} \subsection{Accretion mechanism: the $\Gamma$ vs $L_{\rm Bol}/L_{\rm Edd}$ relation} It has been suggested that the accretion mechanism in LLAGN ($L_{\rm Bol}/L_{\rm Edd}$<10$^{-3}$) is different from that in more powerful AGN (e.g., Seyferts) and similar to that of X-ray binaries (XRB) in their low/hard state \citep{2005yamakoa, 2009Gu, 2011younes, 2011xu, 2014yuan, 2016Lore}. Some authors, following the relations obtained for XRB, have studied the accretion mechanism using the relation between the spectral index $\Gamma$ and the accretion rate $\lambda_{\rm Edd}$, finding a positive correlation between these quantities at high accretion rates, which suggests a geometrically thin and optically thick disk, known as the standard model for accretion disks \citep{1973shakura, 1999Pkora}. A negative correlation has also been found at low accretion rates, indicating radiatively inefficient accretion (e.g., \citealt{yuan2007}).
In this configuration, the accretion disk becomes truncated near the SMBH, with a geometrically thick and optically thin flow at smaller radii and a thin disk at larger radii. However, these correlations show a large scatter \citep{2006shem, 2009Gu, 2011younes, 2015yang, 2017benny, 2018She}, with $\Gamma$ values between [1,3] \citep{2009Gu, 2011younes} and [0.5, 3.5] \citep{2018She}. The origin of the high scatter in the spectral index estimates is still not understood: it could be due to the sensitivity of the measurements or to the intrinsic properties of the galaxies. Thanks to the excellent statistics of \nus in combination with \xmm, we were able to better constrain the spectral index $\Gamma$ in our low accretion rate sample. In Fig. \ref{fig:gamma_acrat} we show the relation between $\Gamma$ and $\lambda_{\rm Edd}$ using the best fitting reflection model (\texttt{borus02}). We have added data from \cite{2021esparza}, who studied the torus configuration of 36 AGN using \nus and \textit{Spitzer} data and estimated the spectral parameters using the same reflection model used in this work (\texttt{borus02}). We applied the same bolometric correction to these data (see Sect. \ref{sec:bolo_corre}). In Fig. \ref{fig:gamma_acrat}, the blue points correspond to this work and the light yellow stars represent the data points from \cite{2021esparza}. In order to check whether there exists a break in this relation, as in XRB, we test different scenarios. We started by fitting a first-degree polynomial to the data, using the \texttt{polyfit} tool in Python. However, previous works reported a break in the correlation (see, for example, \citealt{2018She}), with a first-degree polynomial with a negative slope on one side and a positive slope on the other. We then used the \texttt{piecewise-regression}\footnote{https://github.com/chasmani/piecewise-regression} package in Python \citep{Pilgrim2021} to test for the existence of a breakpoint.
This package fits breakpoint positions and linear models for the different fit segments, and it gives confidence intervals for all the model estimates (see \citealt{muggeo2003estimating} for more details). We found a breakpoint at $\log(\lambda_{\rm{Edd, break}})$=-2.39 with $\sigma=$0.45, in agreement with what was previously obtained by \cite{2018She}, who proposed a break at -2.5. To determine which model (with or without a breakpoint) best represents the data, we use the Bayesian information criterion (BIC), where the model with the lowest BIC value is considered the best. With no break, BIC is -113.2, and with a break, it is -115.5. This suggests that the model with a breakpoint at $\log(\lambda_{\rm{Edd, break}})$ represents the data well, i.e., AGN seem to follow the same relation as XRB. We also explored the possibility of additional breakpoints, which does not change the previous result. In Figure \ref{fig:gamma_acrat} we also plot the relations given by other authors for comparison. For high luminosity AGN ($\log(\lambda_{\rm Edd})$ >-2.39), we compare with \cite{2013fanali}, who studied a sample of 71 type 1 AGN using \xmm data (purple dashed line). In the low luminosity branch ($\log(\lambda_{\rm Edd})$ < -2.39), we compare with \cite{2009Gu}, who used a sample of 55 LLAGN with \emph{Chandra} and \emph{XMM-Newton} data (green dashed line); with \cite{2018She}, who used a sample of 314 AGN with \emph{Chandra} (cyan dashed line); and with \cite{2011younes}, who used \emph{Chandra} and \emph{XMM-Newton} data for a sample of 13 LINERs with accretion rates $\log(\lambda_{\rm Edd})$ below -4.5 (black dashed line). In this work, we have shown that the inclusion of \emph{XMM-Newton} + \nus data and reflection models in the spectral fit improves the estimation of the spectral index, as also reported in \cite{2021hinkle}, which could reduce the scatter compared to what was previously found by \cite{2009Gu, 2011younes, 2018She}.
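The model comparison described above (a single first-degree polynomial versus a broken linear fit, selected via the BIC) can be sketched with plain Python. Note that this is an illustration only: the paper uses the \texttt{piecewise-regression} package for the breakpoint fit, whereas here the breakpoint is simply scanned on a grid and the data are synthetic, generated with an assumed break at $\log(\lambda_{\rm Edd})=-2.4$:

```python
import math
import random

def linfit(x, y):
    # ordinary least squares for y = a + b*x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def bic(y, yhat, k):
    # BIC for Gaussian residuals: n*ln(RSS/n) + k*ln(n); lower is better
    n = len(y)
    rss = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    return n * math.log(rss / n) + k * math.log(n)

def broken_fit(x, y, grid):
    # scan candidate breakpoints, fitting an independent line on each side
    best = None
    for xb in grid:
        lo = [i for i in range(len(x)) if x[i] <= xb]
        hi = [i for i in range(len(x)) if x[i] > xb]
        if len(lo) < 3 or len(hi) < 3:
            continue
        a1, b1 = linfit([x[i] for i in lo], [y[i] for i in lo])
        a2, b2 = linfit([x[i] for i in hi], [y[i] for i in hi])
        yhat = [a1 + b1 * xi if xi <= xb else a2 + b2 * xi for xi in x]
        score = bic(y, yhat, k=5)  # two slopes, two intercepts, one breakpoint
        if best is None or score < best[0]:
            best = (score, xb)
    return best

# synthetic Gamma vs log(lambda_Edd) with a break at -2.4 (illustration only)
random.seed(1)
x = [random.uniform(-5.0, 0.0) for _ in range(120)]
y = [(1.3 - 0.15 * (xi + 2.4) if xi < -2.4 else 1.3 + 0.3 * (xi + 2.4))
     + random.gauss(0.0, 0.1) for xi in x]

a, b = linfit(x, y)
bic_line = bic(y, [a + b * xi for xi in x], k=2)
bic_break, x_break = broken_fit(x, y, [-4.0 + 0.05 * i for i in range(61)])
print(bic_line > bic_break)  # the broken model wins on these data
```

The same lowest-BIC criterion is applied in the text to decide between the single-slope and broken relations.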
For details on the improvement of the uncertainties in the spectral index estimation, see Appendix \ref{apnx:scatter_error}. Indeed, it can be seen in Fig. \ref{fig:gamma_acrat} that our results, when compared with previous studies, seem to agree with the correlations found by \cite{2009Gu}, \cite{2018She}, and \citet{2011younes}, but the effect of the large scatter in previous studies can be appreciated. The same is true for the high-accretion branch, where the relation of \cite{2013fanali} (at $\log(\lambda_{\rm Edd})$ > -2.39) fits the data of \cite{2021esparza} well. To determine whether there is a relation between $\Gamma$ and $\lambda_{\rm Edd}$, we use the tool \texttt{pymccorrelation} in Python \citep{1986isobe, 2014curran, 2020privon} to test the relationship between the two variables. This tool is able to calculate Pearson's r, Spearman's $\rho$, and Kendall's $\tau$ correlation coefficients. In this work, we use the Kendall $\tau$ correlation test, a non-parametric method for measuring the degree of association of two variables in censored data (upper/lower limits) that takes into account the uncertainties in the parameters (see \citealt{1996akritas, 1986isobe} for a detailed explanation of the calculation). A Kendall's $\tau$ close to zero indicates that there is no trend, while for two perfectly related variables Kendall's $\tau$ becomes 1.0 (or -1.0 for an anti-correlation). For the LLAGN, $\log(\lambda_{\rm Edd})$ < -2.39, Kendall's correlation coefficient is $\tau$=-0.27. However, possibly because of the small number of sources, the associated p-value is 0.06, so the correlation is not formally significant and confirmation would require a larger sample. Note that our result is also consistent with a flat correlation for these objects. In the high luminosity branch ($\log(\lambda_{\rm Edd})$ > -2.39), we obtain $\tau$=0.39 and a corresponding p-value of 0.02, consistent with a positive correlation for highly accreting sources.
Thus, it appears that our sample provides evidence of a $\Gamma$-$\lambda_{\rm Edd}$ relation that is consistent with previous studies, although at lower statistical significance. In any case, the change in correlation between these parameters at $\log(\lambda_{\rm Edd}) \sim$-2.39 highlights the change in accretion physics between high- and low-luminosity AGN, consistent with previous studies (\citealt{2006shem, 2011younes} and references therein). Despite the small number of sources in our sample, we quantify the anti-correlation for the sample presented here using the \texttt{linregress} tool in Python. For the low-luminosity branch, where $\log(\lambda_{\rm Edd})$ < -2.39, we obtain: \begin{equation*} \Gamma=(-0.128\pm0.272)\times\log(\lambda_{\rm Edd})+(1.348\pm0.075) \end{equation*} Our work allowed us to identify the change in correlation between the spectral index and the accretion rate at $\log(\lambda_{\rm Edd})\sim$-2.39, which is highly suggestive of a change in accretion physics in AGN. We note that a larger sample of sources combining \emph{XMM--Newton} and \emph{NuSTAR} data and fitting physical reflection models would be very useful to confirm this relation. \begin{figure*} \centering \includegraphics[width=13cm]{gamma_bestfit_yaher_final_fin45.pdf} \caption{Correlation between the spectral index, $\Gamma$, from individual fits, and the Eddington ratio, $\log(\lambda_{\rm Edd}) =\log(L_{\rm Bol}/L_{\rm Edd})$, for our sample of galaxies using the best-fit models. The dash-dotted green line represents the relation given by \cite{2009Gu}, the orange dotted line represents \cite{2011younes}, the magenta dashed line is the relation obtained by \cite{2018She}, while the solid black line is the correlation obtained in this work. The purple dashed line corresponds to the relation found by \cite{2013fanali}. The blue points represent the binned data.
The pink points and light yellow stars are the data points of the best-fit model in this work and those obtained by \cite{2021esparza}, respectively. } \label{fig:gamma_acrat} \end{figure*} \subsection{Reflection} An important feature in the spectra of AGN is the reflection component, which imprints its mark at X-ray energies. The shape of this reflection component is characterized by the FeK$\alpha$ emission line and the Compton hump, peaking at $\sim$ 30 keV \citep{1990Pounds}. The gas producing the X-ray reflection in AGN could be related to the accretion disk, a neutral reflector such as the torus, or a combination of both emissions. Because we cannot separate these scenarios, in the following we analyze in turn the scenarios in which each of the structures dominates the X-ray spectrum. We started our analysis by studying the torus-like reflector. Previous studies suggest a relation between the torus properties and the accretion rate. For example, \cite{2013muller} used \emph{VLT}/SINFONI AO-assisted integral-field spectroscopy of the H$_2$ 1--0 S(1) emission of four LLAGN (NGC\,1052, NGC\,2911, NGC\,3169 and NGC\,1097), and found that on scales of 50--150 pc, the spatial distribution and kinematics of the molecular gas are consistent with a rotating thin disk, where the ratio of rotation (V) to dispersion ($\sigma$) exceeds unity. However, in the central 50 pc of their sample, the observations reveal a geometrically and optically thick structure of molecular gas (V/$\sigma$<1 and $\rm N_{\rm H}$>10$^{23}$ cm$^{-2}$). This can be associated with the outer extent of any smaller-scale obscuring structure. In contrast to Seyfert galaxies, the molecular gas in LLAGN has V/$\sigma$<1 over an area that is $\sim$nine times smaller and column densities that are on average $\sim$three times smaller.
They interpret these results as evidence for a gradual disappearance of the nuclear-obscuring structure, in agreement with what was previously found by \cite{2017Gonzalez-martin} for a sample of 109 AGN using \emph{IRS/Spitzer} observations. Later, \cite{2017Riccii} found that the probability of a source being obscured in the X-rays (the covering factor of the gas) depends primarily on the Eddington ratio rather than on the absolute luminosity. They propose that radiation pressure on dusty gas is responsible for regulating the distribution of obscuring material around the central black hole. At high accretion rates, radiation pressure expels the obscuring material in the form of outflows \citep{2006Fabian}. However, that analysis concerned the line-of-sight (LOS) column density, which is different from the torus column density (N$_{\rm H-LOS}$ $\neq$ $\rm N_{\rm{H-refl}}$). Here we analyze the relation between the column density of the torus-like reflector and the Eddington ratio. We plot this relation in Fig. \ref{fig:reflex}, and the values obtained in this work are presented in Table \ref{table:reflection_parameters}. The pink circles and light yellow stars are the data points of the best fit model (\texttt{borus02} in the indistinguishable cases) in this work and those obtained by \cite{2021esparza}, respectively. The blue points represent the binned data points for a bin size equal to 0.5 dex in $\lambda_{\rm Edd}$. Using Kendall's $\tau$ correlation test, we found a correlation coefficient of $\tau$=0.22 and a p-value of 0.04 between the column density of the torus-like reflector, $\log(N_{H,refl})$, and $\lambda_{\rm Edd}$, suggestive of a correlation, although confirmation requires a larger sample.
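The bare statistic behind these tests can be written in a few lines; note that \texttt{pymccorrelation}, as used in the text, additionally Monte-Carlo samples the measurement uncertainties and handles censored points (upper/lower limits), which this minimal sketch omits:

```python
def kendall_tau(x, y):
    """Plain Kendall tau-a: (concordant - discordant pairs) / (n(n-1)/2)."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = (x[i] > x[j]) - (x[i] < x[j])  # sign of the x difference
            dy = (y[i] > y[j]) - (y[i] < y[j])  # sign of the y difference
            s += dx * dy
    return 2.0 * s / (n * (n - 1))


# sanity checks: a perfect correlation gives +1, a perfect anti-correlation -1
xs = list(range(10))
print(kendall_tau(xs, xs))        # 1.0
print(kendall_tau(xs, xs[::-1]))  # -1.0
```

Values near zero, as obtained for some of the parameter pairs below, indicate no monotonic trend.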
As the parameters seem to be positively correlated, we perform a linear regression of the data using \texttt{polyfit} in Python, finding the following relation: \begin{equation*} \log(N_{H,refl})=(0.126\pm0.303)\times \lambda_{\rm Edd}+(24.166\pm0.102) \end{equation*} \begin{figure} \centering \includegraphics[width=9.5cm]{nh_yaheeeerr33.pdf} \caption{Relation between the column density of the torus-like reflector (in log) and the Eddington ratio, $\lambda_{\rm Edd} =L_{\rm bol}/L_{\rm Edd}$, for the sample of this work. The pink points and light yellow stars are the data points of the best-fit model of this work (\texttt{borus02}) and those obtained by \cite{2021esparza}. The blue points represent the binned data for a bin size equal to 0.5 dex. The black solid line represents the best fit and the light blue zone the 3$\sigma$ confidence level.} \label{fig:reflex} \end{figure} Therefore, we find that our data are consistent with the scenario where lower accretion rate objects have, on average, lower column density material in their surroundings. However, due to the size of the sample, our correlation shows a high dispersion, so it is also consistent with a flat relation at the 3$\sigma$ level. We note that our torus fits allow for a free covering factor, so the lower column densities are not a consequence of a fixed covering factor in the model forcing a geometrically thinner reflector in lower accretion rate objects. For the LLAGN ($\log(\lambda_{\rm Edd})$<-2.39) we obtain a mean value of the torus column density $\log(N_{H,refl}$ cm$^{-2})$=23.76 with $\sigma$=0.74, and in the high luminosity regime, $\log(N_{H,refl}$ cm$^{-2})$=24.09 with a standard deviation $\sigma$=0.56. Consequently, our result is in line with the infrared results, which suggested a gradual disappearance of the torus \citep{2013muller, 2017Gonzalez-martin}, and in agreement with the scenario proposed by \cite{2022ricci_BASSDR2}, in which LLAGN are expected to have lower column densities.
They proposed a model in which AGN move in the obscuration--accretion rate plane during their life cycle. The growth of AGN begins with an unobscured AGN accreting at $\log(\lambda_{\rm Edd})\leq$-4. An accretion event then takes place, in which the SMBH is fueled, and as a result the accretion rate, column density, and covering fraction all increase. As a consequence, obscured AGN are preferentially observed. When the Eddington limit for dusty gas is reached, the covering factor and the column density decrease, leading to an unobscured AGN being typically observed. As the remaining fuel is depleted, the SMBH goes back into a quiescent phase (see \citealt{2022ricci_BASSDR2} for more details). Even with small statistics, our results can be interpreted within the framework of this evolutionary model, in which radiation pressure regulates the evolution of AGN. Next, we compare the column density of the reflector with the column density in the line of sight (LOS). \cite{2021zhao}, using all AGN in the 100-month Palermo \sft/BAT catalog with line-of-sight column density between 10$^{23}$ and 10$^{24}$ cm$^{-2}$ and with available \nus data, showed that the average torus column density is similar for both Compton thin and CT-AGN, independent of the observing angle, with $\log(N_{\rm H-refl}$ cm$^{-2})$ $\sim$24.15. In Fig. \ref{fig:refex_nloss} we compare the column density of the torus and the absorption in the line of sight from our work. The black dotted line represents the mean value of $\log(N_{\rm H-refl}$ cm$^{-2})$ previously found by \cite{2021zhao}, and the green zone the interval of $\log(N_{\rm H-LOS})$ of their work. Note that our data points and their fit are in agreement within the interval of $\log(N_{\rm H-LOS})$ of their work, i.e. for the moderately obscured sources in our sample.
The majority of galaxies with $\log(N_{\rm H-LOS})$<23.0 in our sample are clearly below the value previously obtained, with a mean value of $\log(N_{\rm H-refl}$ cm$^{-2})$ $\sim$23.36 and $\sigma=0.59$. \begin{figure} \centering \includegraphics[width=9.5cm]{tor_los.pdf} \caption{Relation between the column density of the torus-like reflector (in log) and the column density in the line of sight (in log). The red points represent the data points of this work. The black dashed line corresponds to the value obtained by \cite{2021zhao} and the black dotted line represents the x=y relation. The green zone is the interval of the column density in the line of sight analyzed in their work. } \label{fig:refex_nloss} \end{figure} In order to explore any correlation between these parameters, we calculate the Kendall $\tau$ correlation coefficient, finding $\tau$=-0.17 with a p-value of 0.51, suggestive of a negative correlation but also compatible with no correlation between them. Therefore, more data points are necessary to establish any relation between these parameters. The majority of the objects show a larger $\log$(N$_{\rm H-refl}$) than $\log$(N$_{\rm H-LOS}$), suggesting that the torus is not seen through its densest part, consistent with what was reported by \cite{2021zhao}. Regarding the covering factor of the torus-like reflector, we obtain a mean value of CF=0.64 with $\sigma$=0.26. Note that this parameter could be constrained for seven sources; for another seven sources, a best-fitting value with an upper limit could be placed, and for an additional three, a best-fitting value with a lower limit. In addition, we analyze the correlation between this parameter and the accretion rate with the Kendall $\tau$ correlation test. We find a correlation coefficient $\tau$=-0.06 and a p-value of 0.73, suggesting that these parameters are not correlated.
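The Eddington ratios and their uncertainties used throughout this discussion follow the bookkeeping of Sect. \ref{sec:bolo_corre}: the bolometric correction is applied to $L_{\rm 2-10\,keV}$ and the dex uncertainties are added in quadrature. A worked numerical sketch, with illustrative inputs that do not correspond to any particular source (the Eddington luminosity is the standard $1.26\times10^{38}\,(M_{\rm BH}/M_\odot)$ erg s$^{-1}$):

```python
import math

# hypothetical inputs, not taken from any source in the sample
log_L210 = 41.5   # intrinsic 2-10 keV luminosity [erg/s], from the spectral fit
sig_L210 = 0.10   # assumed fit uncertainty in dex
log_MBH = 8.0     # black-hole mass [M_sun]

log_K = math.log10(15.6)        # median Duras+2020 correction quoted in the text
sig_K, sig_MBH = 0.37, 0.40     # dex scatters quoted in the text

log_Lbol = log_L210 + log_K                  # L_bol = K(2-10 keV) * L_{2-10}
log_LEdd = math.log10(1.26e38) + log_MBH     # standard Eddington luminosity
log_lam = log_Lbol - log_LEdd                # log(L_bol / L_Edd)

# dex uncertainties combined in quadrature
sig_lam = math.sqrt(sig_L210**2 + sig_K**2 + sig_MBH**2)
print(round(log_lam, 2), round(sig_lam, 2))  # -3.41 0.55
```

With these illustrative numbers the source would fall in the low-accretion branch, with an uncertainty dominated by the black-hole mass and bolometric correction rather than by the X-ray flux measurement.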
Considering a disk-like reflector, we could constrain R$_{\rm f}$ for only one object (NGC\,7674), while for the others we only obtain lower limits. This model also allows us to study the ionization degree of the disk; however, this parameter is also poorly constrained, with only six galaxies having well-constrained values, plus one upper limit and five lower limits. The results presented in this paper suggest that the gas distribution in the torus of AGN is dynamic and very complex, showing changes in the physical properties of the torus linked to the luminosity of the AGN, in agreement with what was previously found in the literature in the X-rays and the infrared. Certainly, combining \xmm + \nus is key to exploring the structure and distribution of the reflector and constraining its physical and geometrical parameters, especially in the low luminosity range. \section{Conclusions} \label{sect:conclusion} In this work, we study the reflection of LLAGN by analyzing the broadband X-ray spectra of a BASS/DR2 sample with $\log(\lambda_{\rm Edd})$ < -3 (17 objects) using \xmm+\nus+\sft\ observations and characterizing the reflection features using the \bors model to represent torus reflection and \xill to model accretion disk emission. The goal was to investigate the accretion mechanism through the relation between the spectral index and the accretion rate, as well as to constrain the properties of the potential reflector. The main results are summarized below: \begin{enumerate} \item All objects in our sample are well-fitted with a torus-like reflector. Of these, eight objects are equally well-fitted with a torus and a disk (they are indistinguishable from a statistical point of view and visual inspection). These eight objects have consistent values for the spectral index $\Gamma$ and luminosities when modeled with a torus or a disk reflector.
\\ \item In our sample we can classify six objects as unobscured ($\log$(N$_{\rm H}$)<22), nine galaxies as obscured (22<$\log$(N$_{\rm H}$)<24.18) and two as Compton thick (using as a threshold N$_{\rm H}=1.5\times 10^{24}$ cm$^{-2}$, according to the torus model). In the disk case, all the galaxies can be classified as Compton thin. \\ \item Combining \xmm+ \nus and considering the reflection component in the spectral fitting, the uncertainties on the spectral index and the scatter in the relation between this parameter and the accretion rate are reduced when compared to previous works over similar ranges in accretion rate. \\ \item Our work is consistent with the negative slope found in previous works at $\log(\lambda_{\rm Edd})\leq$-3, and also consistent with the change in the $\Gamma-\log(\lambda_{\rm Edd})$ relation at $\log(\lambda_{\rm Edd})\sim$-3, where for high accretion rate sources the slope is known to be positive. \\ \item We found a tentative correlation between the torus properties (column density) and the accretion rate, suggesting that the torus column density decreases with decreasing accretion rate. Consequently, AGN at $\log(\lambda_{\rm Edd})$<-3 have lower torus column densities compared with more luminous AGN. This column density is derived from reflection as opposed to absorption in the line of sight, so it is representative of the global column density of gas around the X-ray corona. \\ \item All AGN in our sample with a column density in the line of sight $\log$(N$_{\rm H-LOS}$) < 23.0 have a torus with a column density higher than their $\log$(N$_{\rm H-LOS}$), so the torus could be observed through an underdense region. \end{enumerate} In the future, new X-ray facilities such as HEX-P \citep{2019hexp}\footnote{https://hexp.org/} will detect a large sample of LLAGN, which could help us further constrain the evolution of AGN reflection and the accretion physics behind SMBHs.
\begin{acknowledgements} We thank the referee for the valuable comments that improved the manuscript. D.Y. acknowledges financial support from the Doctorate Fellowship program FIB-UV of the Universidad de Valparaíso and from the Max Planck Society through a Max Planck partner group. LHG acknowledges funds by ANID -- Millennium Science Initiative Program -- ICN12$\_$009 awarded to the Millennium Institute of Astrophysics (MAS). ELN acknowledges financial support from ANID Beca 21200718. CR acknowledges support from the Fondecyt Iniciacion grant 11190831 and ANID BASAL project FB210003. MB acknowledges support from the YCAA Prize Postdoctoral Fellowship. NOC acknowledges support from CONACyT. J.A.G. acknowledges support from NASA grant 80NSSC21K1567. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} In the nonrelativistic constituent quark model for mesons and baryons the quarks are bound by confining potentials. Despite their limitations and concerns about their validity, these potential models describe the properties of the various mesons and baryons surprisingly well (see, for example, Silvestre-Brac \cite{Brac1} and the two excellent review articles by Lucha et al. \cite{Lucha} and Richard \cite{Richard} on the matter). Once the potential model approach is adopted, the three-quark problem can be solved using various approaches. Among them, the Hyperspherical Harmonics (HH) method is quite successful in applications as it is well suited to describe radial and orbital excitations \cite{FabreL,Bada1,Bada2,Haysak}. Within the framework of the HH method, the Hypercentral Approximation (HCA) has been used in the past \cite{FabreF,McTav,VHothers} to study the spectrum of the baryon. There are various reasons for adopting the HCA to study the three quark system: i) The two-body potential acting between quarks is quite soft and therefore in the HH expansion of the interaction only the first term gives a significant contribution to the binding energy. This of course means that the two-body correlations are not as strong as nuclear correlations; ii) it is quite simple and thus one avoids the complicated three-body calculations via, for example, the Faddeev equations \cite{Brac1,Fad}; and iii) the results obtained from it are accurate and the spectra are well reproduced. Another method, in the framework of the HH method, is the Integrodifferential Equation Approach (IDEA) \cite{Fmodels,FFS,Oe1,Oe2}, which includes higher terms of the HH expansion in an average manner. The IDEA method takes two-body correlations into account exactly, reproduces the spectrum of the nucleon quite well, and provides wave functions reliably \cite{Oe1,Oe2}, which is crucial in studying photoexcitation processes.
These processes are manifested as resonances and can be excited through electromagnetic transitions, giving rise to large enhancements in the total absorption cross section \cite{MMG}. The photoexcitation of the nucleon resonances has been studied in the past by various groups \cite{MMG,Brizol,Isgur}. The results obtained by them are rather unsatisfactory when compared to the experimental data. The inclusion of retardation effects and relativistic corrections does not improve the situation much \cite{MMG,Brizol}. In this work we consider the absorption of a single photon by a nucleon, which then undergoes a transition from the ground state to an excited one. The photoabsorption cross section is calculated using various quark-quark potentials and by using the HCA and IDEA methods. In Sec. 2 we describe our formalism. In Sec. 3 we give details on how the $E1$ and $M1$ transition amplitudes are calculated, while in Sec. 4 we present our results and discussions. \section{Formalism} The photoexcitation process is described by the transition amplitude \begin{equation} M_{fi}=<\Psi_f|H|\Psi_i>\,, \label{trans} \end{equation} where $\Psi_i$ is the initial ground state wave function of the nucleon, $\Psi_f$ is the wave function of the final excited state, and $H$ is the perturbative electromagnetic Hamiltonian. In what follows we shall discuss these ingredients in some detail. \subsection{The wave functions} The fully antisymmetric total wave function for a three-quark system can be expressed as a product of configuration space, flavor, spin, and color functions. Since baryons are color singlets, the color wave function is totally antisymmetric ($A$) and thus the remaining product must be fully symmetric ($S$), \begin{equation} \Psi_{\mbox{total}}^A= \underbrace{ \psi_{\mbox{space}}\times \Phi_{\mbox{flavor}} \times \chi_{\mbox{spin}}}_S \times \underbrace{C_{\mbox{color}}}_A\,. \label{Psi} \end{equation} The structure of the symmetric component of Eq.
(\ref{Psi}) depends on the transition considered and can be constructed using the various symmetries involved.\\ For the construction of the symmetric part of the total wave function the fully symmetric, mixed symmetric, and mixed antisymmetric configuration space wave functions are required. These can be obtained using the IDEA \cite{Fmodels,FFS} method. In this method the fully symmetric ground state configuration space wave function is constructed from the Faddeev-type components $P(z,r)$ \cite{Oe2} \begin{equation} \Psi^S (\vec{\rho},\vec{\sigma}) = \frac{1}{r^{5/2}} \left [ P^S(z_{12},r)+ P^S(z_{23},r) + P^S(z_{31},r) \right ]\,, \label{IDEAS} \end{equation} where ($\vec{\rho},\vec{\sigma}$) are the Jacobi coordinates, $ r=\left [\frac{2}{3} \sum_{\alpha} \rho_{\alpha}^2 \right]^{1/2} $ is the hyperradius with $\rho_\alpha = r_\alpha\,,\ \alpha=12,23,31$, and the $z_{\alpha}$ are given by $ z_{\alpha} = 2\rho_{\alpha}^2/r^2 -1\,. \label{za} $ The required mixed symmetry states for $L=1$ are given by \begin{eqnarray} \Psi_1^{M^S}(\vec{\rho},\vec{\sigma}) & = & \frac{1} {r^{5/2}} \bigg \{(1+z_{12})^{1/2}Y_{10}(\omega_{12}) P_1^{S^\prime}(z_{12},r) \nonumber \\ & - & \frac{1}{2} \bigg [(1+z_{23})^{1/2}Y_{10}(\omega_{23}) P_1^{S^\prime}(z_{23},r) \\ & + & (1+z_{31})^{1/2}Y_{10}(\omega_{31}) P_1^{S^\prime}(z_{31},r) \bigg ] \bigg \} \,,\nonumber \\ \Psi_1^{M^A}(\vec{\rho},\vec{\sigma}) & = & \frac{1}{r^{5/2}} \bigg [(1+z_{31})^{1/2}Y_{10}(\omega_{31})P_1^{S^\prime}(z_{31},r) \nonumber \\ & - & (1+z_{23})^{1/2}Y_{10}(\omega_{23})P_1^{S^\prime}(z_{23},r) \bigg ] \,, \end{eqnarray} where the superscripts $M^S$ and $M^A$ denote the mixed symmetric and antisymmetric states with respect to the interchange of particles 1 and 2.
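Note that the variables $z_\alpha$ defined above satisfy $\sum_\alpha z_\alpha = 0$ identically, since $r^2=\frac{2}{3}\sum_\alpha\rho_\alpha^2$ implies $\sum_\alpha 2\rho_\alpha^2/r^2 = 3$. A quick numerical check of this identity (a sketch for illustration, not part of the formalism):

```python
import math
import random

random.seed(0)
# three random quark positions in 3D
pts = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(3)]

# pair separations rho_alpha for alpha = 12, 23, 31
rho = [math.dist(pts[0], pts[1]),
       math.dist(pts[1], pts[2]),
       math.dist(pts[2], pts[0])]

r2 = (2.0 / 3.0) * sum(d * d for d in rho)   # hyperradius squared
z = [2.0 * d * d / r2 - 1.0 for d in rho]    # z_alpha, each in [-1, 1]

print(sum(z))   # zero up to rounding, for any configuration
```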
The required symmetric spin-flavor states are given by \begin{equation} \left |\xi^S \right > = \frac{1}{\sqrt{2}} \left [ \Phi^{M^S} \chi^{M^S} + \Phi^{M^A} \chi^{M^A} \right ]\,, \end{equation} while the mixed symmetry states are \begin{eqnarray} \left |\xi^{M^S} \right > & = & \frac{1}{\sqrt{2}} \left [ \Phi^{M^S}\chi^{M^S}- \Phi^{M^A} \chi^{M^A} \right ]\,,\\ \left |\xi^{M^A} \right > &=& \frac{1}{\sqrt{2}} \left [ \Phi^{M^S} \chi^{M^A} + \Phi^{M^A} \chi^{M^S} \right ]\,. \end{eqnarray} The relevant flavor and spin states are given by various authors and will therefore not be presented here (see, for example, Refs. \cite{Isgur,CLOSE}). The singlet, antisymmetric color state, \begin{equation} C_{color}^A = \frac{1}{\sqrt{6}}(RBY-BRY+BYR-YBR+YRB-RYB)\,, \end{equation} where R, B, and Y stand for Red, Blue, and Yellow, respectively, does not enter into the calculations and will therefore be suppressed in what follows. The initial total wave function for the proton (P) ground state, with $L=0,\, S=1/2$, and $J=1/2$, is given by \begin{equation} \left|\Psi_i\right> = \frac{1}{\sqrt{2}}\left [ \Phi^{M^S}_{\rm P} \chi^{M^S}_{\rm P}+\Phi^{M^A}_{\rm P} \chi^{M^A}_{\rm P} \right] \left|\Psi^S_{0}\right>\,, \label{PROTONG} \end{equation} where the lower index of the space wave function $|\Psi_0^S>$ refers to the angular momentum $L$. The final wave function for the first excited state, with $L=1, \, S=1/2$, and $J=1/2 \,\, \mbox{or} \,\, 3/2$, of the proton is \begin{equation} \qquad \big |\Psi_f \big > = \frac{1}{2}\bigg [ \bigg ( \Phi^{M^S}_{\rm P} \chi^{M^S}_{\rm P}-\Phi^{M^A}_{\rm P} \chi^{M^A}_{\rm P}\bigg) \big |\Psi^{M^S}_{1} \big > + \bigg ( \Phi^{M^S}_{\rm P} \chi^{M^A}_{\rm P}+\Phi^{M^A}_{\rm P} \chi^{M^S}_{\rm P}\bigg)\big| \Psi^{M^A}_{1}\big > \bigg ]\,.
\end{equation} For the $M1$ transition $ (S=1/2) \rightarrow (S=3/2)$, where the proton and $\Delta^+(1232)$ both have an angular momentum $L=0$, the total wave function for the initial state of the proton is still given by Eq. (\ref{PROTONG}), while the final wave function for the delta is \begin{equation} \left|\Psi_f\right> = \Phi_\Delta^S \chi_\Delta^S \left|\Psi_{\Delta}^S\right>\,. \label{FWP} \end{equation} \subsection{Electromagnetic transitions} The perturbative Hamiltonian for the electric dipole E1 transition, in the case of three quarks of equal mass $m$, is given by \begin{equation} H_{E1} = - \frac{1}{mc} \sum_{j=1}^3 \lambda_j \hat{\epsilon}_{\gamma} \cdot \vec{p}_j\,, \label{HE11} \end{equation} where $ \vec{p}_j$ is the momentum of quark $j$ and $\hat{\epsilon}_{\gamma}$ denotes a polarization direction of the incident photon. For $u$ and $d$ quarks, the charge operator has the form \begin{equation} \lambda_j = \frac{e}{6}(1+3\tau_j^z)\,, \end{equation} where $\tau_j^z $ is the third component of the isospin of the $j$-th quark with \begin{equation} \tau_j^z|u> = |u> ,\qquad \tau_j^z|d> = -|d>\,. \end{equation} Using commutation relations to express the momenta in terms of the three-quark Hamiltonian \cite{Roy} we may rewrite (\ref{HE11}) in a Siegert form \cite{Siegert,Ellerkmann} \begin{equation} H_{E1} = - \frac{i}{\hbar c}(E_f-E_i)\sum_{j=1}^3 \lambda_j \hat{\epsilon}_{\gamma} \cdot \vec{x}_j\,, \end{equation} where $\vec{x}_j$ is the coordinate of the $j$-th quark conjugate to the momentum $\vec{p}_j$.
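As a small aside (our own check, not from the original text), the charge operator above reproduces the standard quark charges, and hence the total proton charge for the $uud$ state:

```python
from fractions import Fraction

def quark_charge(tau_z):
    # lambda_j = (e/6)(1 + 3 tau_j^z) in units of e; tau^z = +1 for u, -1 for d
    return Fraction(1 + 3 * tau_z, 6)

charges = {"u": quark_charge(+1), "d": quark_charge(-1)}
print(charges["u"])                    # 2/3
print(charges["d"])                    # -1/3
print(sum(charges[q] for q in "uud"))  # 1 (the proton charge, in units of e)
```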
In Jacobi coordinates we have \begin{equation} H_{E1}=-\frac{1}{2i\hbar c}(E_f-E_i)(\hat{\epsilon}_{\gamma}\cdot\vec{ \rho})I_p - \frac{\sqrt{3}}{3i\hbar c}(E_f-E_i)(\hat{\epsilon}_{\gamma} \cdot\vec{\sigma})I_q\,, \label{JC1} \end{equation} where $(E_f-E_i)$ is the difference between the final and initial binding energies and the operators $I_p$ and $I_q$ are given by \begin{eqnarray} I_p & = & \frac{e}{2}(\tau_1^z - \tau_2^z), \nonumber \\ I_q & = & \frac{e}{2}\left(\frac{\tau_1^z + \tau_2^z}{2} - \tau_3^z\right)\,. \end{eqnarray} Thus, instead of expressing the Hamiltonian in terms of the individual particle charges and coordinates, the more appropriate Jacobi coordinates and operators $I_p$ and $I_q$ which act on quasi-particles \cite{Sandhas} are used. The magnetic dipole $M1$ causes the transition \hspace{1mm} $\gamma {\rm P} \rightarrow \Delta^+(1232)$,\hspace{1mm} in which the proton, after absorbing a photon ($\gamma$), is excited to the delta ($\Delta^+$). The corresponding perturbative Hamiltonian is \begin{equation} H_{M1}=-i\sum^3_{j=1}\left(\vec{\mu}_q^j\times \vec{k_\gamma} \right) \cdot \hat{\epsilon}_{\gamma}\,, \end{equation} where $\vec{\mu}_q^j$ is the magnetic moment operator of the $j$-th quark. Since $H_{M1}$ does not contain any orbital operators, in this transition the spin must change instead. \section{ The transition amplitudes} Noting that $E_\gamma = E_f - E_i$ and by letting the charge operators $I_p$ and $I_q$ act on the three-quark isospin states we finally obtain for the transition matrix elements \begin{equation} M_{E1} = \frac{eE_\gamma}{2\sqrt{6}i\hbar c}\bigg(\left<\Psi^{M^A}_{1} \left |\hat{\epsilon}_\gamma \cdot \vec{\rho} \right|\Psi^S_{0}\right> - \left < \Psi^{M^S}_{1}\left |\hat{\epsilon}_\gamma \cdot \vec{\sigma}\right|\Psi^S_{0}\right >\bigg)\,.
\label{ME11} \end{equation} The integrals in (\ref{ME11}) were evaluated using Euler angles $\alpha,\,\beta,\,\gamma$ as external and $\rho,\,\sigma,\, x= {\vec{\rho}\cdot\vec{\sigma}}/{\rho \sigma}$ as internal coordinates. Here the $z^\prime$ axis is chosen to coincide with $\hat{\rho}$ and $\hat{\sigma}$ lies in the $x^\prime - z^\prime$ plane. Thus only a five-dimensional integration has to be done numerically since both $\Psi_0^S$ and the two components of $\Psi_1$ are invariant with respect to a rotation about the $z$-axis and thus do {\it not} depend on $\alpha$. After averaging over the direction of $\vec{k}$ and the polarization direction one obtains the following expression for the absolute square of the transition matrix elements \begin{equation} \overline{\big|{\cal M}_{E1}\big|^2} = \frac{e^2E_\gamma^2} {72\,(\hbar c)^2}\bigg|\left<\Psi_1^{M^A}\big|\rho_z \big|\Psi_0^S\right> - \left<\Psi_1^{M^S} \big|\sigma_z \big|\Psi_0^S\right>\bigg |^2\,. \end{equation} Therefore, the following integrals are required \begin{eqnarray} & & \left<\Psi_1^{M^A}\big|\rho_z \big| \Psi_0^S \right> = 2\pi \int_0^\pi \sin \beta d\beta \int_0^{2\pi}d\gamma \int_0^\infty \rho^2 d\rho \int_0^\infty \sigma^2 d\sigma \int_{-1}^1 dx \nonumber \\ & & \quad \qquad \qquad \qquad \Psi_1^{M^A}(\rho,\sigma,x,\beta,\gamma) \rho \cos \beta \, \Psi_0^S(\rho,\sigma,x)\,, \\ & & \nonumber \\ & & \left<\Psi_1^{M^S}\big|\sigma_z \big| \Psi_0^S \right> = 2\pi \int_0^\pi \sin \beta d\beta\int_0^{2\pi}d\gamma \int_0^\infty \rho^2 d\rho \int_0^\infty \sigma^2 d\sigma \int_{-1}^1 dx \nonumber \\ & &\qquad \Psi_1^{M^S}(\rho,\sigma,x,\beta,\gamma) \sigma (\cos \beta \cos \theta -\sin \beta \cos \gamma \sin \theta )\,\Psi_0^S (\rho,\sigma,x)\,.
\end{eqnarray} The corresponding integrated photoabsorption cross section for a single excited electric dipole state, in the long wavelength limit \cite{Brizol,Eisen}, is \begin{equation} \Sigma_1 = \int dE_\gamma\sigma_\gamma^{E1}=\frac{4\pi^2\hbar c} {\hbar\omega}\,\overline{\big |{\cal M}_{E1} \big |^2} \,. \end{equation} For the $M1$ transition amplitude we have \begin{equation} M_{M1} = <\Psi_f|H_{M1}|\Psi_i> = i\left(\hat{\epsilon}_\gamma \times \vec{k_\gamma} \right) \sum^3_{j=1}<\Psi_f| \mu_q^j \sigma^z_j |\Psi_i>\,, \end{equation} where $\Psi_i$ and $\Psi_f$ are given by Eqs. (\ref{PROTONG}) and (\ref{FWP}) respectively. The process \hspace{0.7mm} $\gamma P \rightarrow \Delta^+(1232)$ ($\frac {1}{2}^+ \rightarrow \frac {3}{2}^+ $) \hspace{0.3mm} can take place either via the magnetic dipole ($M1$) or the electric quadrupole ($E2$) transition. In the quark model the latter transition is forbidden \cite{Becchi} because it is proportional to the charge operator which cannot cause transitions between quark spin $1/2$ and $3/2$ states, and hence the matrix element vanishes by orthogonality of the quark spin wave functions. The $M1$ transition involves the quark magnetic moments -- hence the spin operator -- and this can lead to transitions $ (S=1/2) \rightarrow (S=3/2)$ \cite{Dalitz}. The transition matrix element can be written as \begin{equation} M_{M1}=i\left(\hat{\epsilon}_\gamma \times \vec{k_\gamma} \right) <\Psi_f|\mu_q^1 \sigma^z_1 + \mu_q^2 \sigma^z_2 + \mu_q^3 \sigma^z_3 |\Psi_i>\,. 
\end{equation} Using the flavor, spin, and configuration space wave functions and averaging over the two photon polarization directions we finally obtain \begin{equation} \overline{ \big|{\cal M}_{M1}\big|^2}=\frac{2\alpha}{9}\frac{E^2_\gamma \hbar c}{(mc^2)^2}I^2_{M1} \,, \end{equation} where $I_{M1}$ is the overlap integral given by \begin{equation} I_{M1}=8\pi^2 \int_0^\infty\rho^2d\rho\int_0^\infty\sigma^2d\sigma \int_{-1}^1 dx\, \Psi_\Delta^S(\rho,\sigma,x)\Psi_0^S(\rho,\sigma,x)\,. \end{equation} As in the electric transitions, the photoabsorption cross section for a single excited magnetic dipole state is \begin{equation} \Sigma_{M1}= \int dE_\gamma\sigma_\Delta^{M1}=\frac{4\pi^2\hbar c} {\hbar\omega}\,\overline{\big|{\cal M}_{M1} \big|^2} \,. \end{equation} \section{Results and Discussions} The quark-quark potential can be written as a sum of the central and the spin-spin parts \begin{equation} V_{qq} = V^c + V^s\,, \end{equation} where $V^c$ contains, as usual, the confinement and the coulombic parts while $V^s$ is of the general form \begin{equation} V^s = f_{ij}(r)\vec{\sigma}_i\cdot\vec{\sigma}_j\,. \end{equation} { The spin-dependent interaction, suggested by the perturbative one-gluon exchange mechanism, is important in describing the splitting of the meson masses and the experimental mass difference $M_\Delta - M_N$. It contains the function $f_{ij}(r)$ which is singular as $\delta(r)$ and therefore the corresponding wave equation has no physically acceptable solutions and one expects a collapse in both the quark-quark and the baryon systems. To avoid this, a cut-off or a smearing function is introduced to reduce the singularity and to make the calculations tractable. Since the resulting spin-spin potential is strongly attractive and short-ranged, it can generate significant short--range correlations which have stronger effects on the ground state than on the excited $ L > 0 $ states.
The strong correlations at short distances in the $L=0$ partial wave are not expected to play a major role in photoexcitation processes as the excited states are shifted to the outer region and thus in the overlap integral short-range contributions are rather unimportant. Therefore any influence of the spin-spin force will come from the modification of the mass differences and of the corresponding wave functions. } In this work we performed calculations with the $V^c$ term alone as well as with both terms included. For the calculations with the $V^c$ part only, we employed the Martin \cite{Martin,Richard2}, the Cornell \cite{eichten,Bada3}, and the Lichtenberg \cite{Licht} potentials. For the calculations in which both terms are included we used the Ono-Sch\"{o}berl potential \cite{Ono} and two of the recently published potentials by Silvestre-Brac \cite{Brac1}, namely, the AP1 and AP2 versions which have the general form \begin{eqnarray} V_{q\bar q}(r_{ij})&=&-\frac{\kappa(1-\exp(-r_{ij}/r_c))} {r_{ij}}+\lambda r_{ij}^p-\Lambda \nonumber\\ &+ & \frac{2\pi}{3m_im_j}\kappa^{\prime}(1-\exp(-r_{ij}/r_c)) \frac{\exp(-r_{ij}^2/r^2_0)}{\pi^{3/2}r^3_0}\vec{\sigma}_i\vec{\sigma}_j \label{AP2} \end{eqnarray} with $$ r_0(m_i,m_j) = A\left(\frac{2m_im_j}{m_i + m_j}\right)^{-B}. $$ The parameters for AP1 are given by $$ \begin{array}{llll} & p = 2/3\,, & r_c=0\,, & m_u=m_d=0.277\,{\rm GeV}\,,\\ & \kappa = 0.4242\,, & \kappa^{\prime}=1.8025\,, & \lambda = 0.3898\,{\rm GeV}^{5/3}\,,\\ & \Lambda = 1.1313\, {\rm GeV}\,, & B=0.3263\,, & A=1.5296\,{\rm GeV}^{B-1}\,, \end{array} $$ while those of AP2 are given by $$ \begin{array}{llll} & p= 2/3\,, & r_c=0.3466\,{\rm GeV}^{-1}\,, & m_u=m_d=0.280\, {\rm GeV}\,,\\ & \kappa = 0.5743\,, & \kappa^{\prime}=1.8993\,, & \lambda = 0.3978\,{\rm GeV}^{5/3}\,,\\ & \Lambda = 1.1146\, {\rm GeV}\,, & B=0.3478\,, & A=1.5321\, {\rm GeV}^{B-1}\,. 
\end{array} $$ We remind the reader that the quark-quark potential $V_{qq}$ is related to the quark-antiquark potential $V_{q\bar q}$ by Lipkin's rule \cite{Green}, i.e., $ V_{qq} = V_{q\bar{q}}/2$. The results obtained for the ground state and the first orbitally excited state of the nucleon with spin-independent potentials and by using the IDEA and HCA methods are given in table I. Both methods are in very good agreement with each other and the transition energy $E_1-E_0$, { shown in table II,} is reasonably well reproduced. This transition energy and the corresponding wave functions were used to calculate the integrated photoabsorption cross section $\Sigma_1$ due to the E1 transition, from the ground state to the N(1520) resonance. Our results are given in table II together with experimental values and those of other methods. It is seen that for both the IDEA and the HCA the cross sections are in good to very good agreement with the experimental data. { The results obtained by other methods for the transition energy are generally very low as compared to the experimental ones except those obtained via the Isgur-Karl (I.K.) model \cite{Isgur}, which, nevertheless, cannot reproduce the experimental photoabsorption cross section $\Sigma_1$ well. This mainly implies that the corresponding wave functions are not adequate to describe the photoabsorption cross section for the E1 transition. The latter is also true for the wave functions employed by Brizzolara and Giannini in their thorough investigations on the nucleon photoabsorption \cite{Brizol} based on the nonrelativistic quark model of Isgur and Karl and the bag model. They explicitly demonstrated that the results are rather strongly dependent on the hyperfine mixing and on the dimensions of the system and thus on the h.o. parameter used.
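For concreteness, the central part of the AP1 potential of Eq.~(\ref{AP2}) can be evaluated numerically with the parameters listed above, halved according to Lipkin's rule $V_{qq}=V_{q\bar q}/2$ (a minimal sketch of ours, not part of the paper; since $r_c=0$ for AP1, the screening factor $1-\exp(-r/r_c)$ reduces to $1$ and the first term is pure Coulomb):

```python
import math

# AP1 parameters quoted in the text (natural units: energies in GeV, r in GeV^-1)
KAPPA, LAM, BIGLAM, P_EXP, R_C = 0.4242, 0.3898, 1.1313, 2.0 / 3.0, 0.0

def v_central_qqbar(r):
    # -kappa (1 - exp(-r/r_c)) / r + lambda * r^p - Lambda; r_c = 0 gives pure Coulomb
    screen = 1.0 if R_C == 0.0 else 1.0 - math.exp(-r / R_C)
    return -KAPPA * screen / r + LAM * r ** P_EXP - BIGLAM

def v_central_qq(r):
    # Lipkin's rule: V_qq = V_qqbar / 2
    return 0.5 * v_central_qqbar(r)

# Coulomb-dominated (attractive) at short range, confining at long range
print(v_central_qq(0.1), v_central_qq(20.0))
```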
The discrepancy with the experimental results still persists even when other contributions such as charge plus current densities with and without retardation effects are incorporated into the h.o. model. Three-body force components of the quark interaction (in the 3q model), and relativistic corrections in the framework of the bag model do not improve matters to a significant degree either. Apart, of course, from the inadequacy of the wave functions, these discrepancies probably stem also from some other fundamental absorption mechanism, such as the coupling to mesons and/or $q\bar q$ pairs \cite{Brizol}. } The results for the nucleon masses obtained by using spin-dependent potentials are given in table III. The corresponding transition energy $E_1-E_0$ together with the cross sections are presented in table IV. It is seen that both AP1 and AP2 potentials of Silvestre-Brac give excellent results for the photoabsorption cross section $\Sigma_1$. This came as a surprise since the transition energies are not as good as one would like them to be. This can be attributed to the fact that in the confining potential the excited state lies at a higher energy as compared to the experimental value and thus the wave function for $L=1$ is more spread out in space and therefore the overlap integral somehow compensates for the excess transition energy. Our results obtained with the Ono-Sch\"oberl potential are in fair agreement with the experimental data. The delta masses are given in table V while the transition energy $E_\Delta - E_N$ is shown in table VI; it is well reproduced but the results for the corresponding integrated photoabsorption cross section of the $\Delta$(1232) resonance are only in fair agreement with the experimental values. The same is true for the I.K. model. This discrepancy can be connected to the strong dependence on the quark masses that enter the cross section via the magnetic moments.
{ Finally, the energy and cross-section results obtained via the IDEA and HCA methods are quite similar. This comes as no major surprise as the quark-quark potentials are soft and thus two-body correlations do not play a significant role. This means that the corresponding wave functions are fairly similar and the small differences between them do not strongly influence the overlap integral and thus the cross sections. In the case where a spin-spin force was used the short--range correlations are not manifested in the overlap integral either due to the shifting of the $L>0$ wave function to the outer region.} In conclusion, the nonrelativistic potential model reproduced the experimental data for the E1 transitions quite successfully. There is a weak potential dependence and in the case of the spin-independent potentials the best results are obtained with the Lichtenberg potential. In the case of the spin-dependent potentials both AP1 and AP2 give excellent results for the E1 photoabsorption cross sections. In the M1 transitions where there is a strong dependence on the quark masses the Ono-Sch\"oberl potential gives better results than the AP1 and AP2 potentials. \section*{Acknowledgments} Financial support from the Foundation for Research Development is gratefully acknowledged. \newpage
\subsubsection*{\bibname}} \usepackage{xargs} \usepackage{papercommands} \newcommand{Sine fitting }{Sine fitting } \renewcommandx{\mod}[2][2=\frac{N}{2}]{#1 \ \ensuremath{\mathrm{mod}\ #2}} \newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor} \usepackage{url} \usepackage[linesnumbered,lined,ruled,noend]{algorithm2e} \newcommand{\qoute}[1]{``#1''} \newcommand{\ONotation}[1]{O\left( #1 \right)} \newcommand{\OMNotation}[1]{\Omega\left( #1 \right)} \newcommand{\colorbox{cyan}{Murad's version:}}{\colorbox{cyan}{Murad's version:}} \newcommand{\colorbox{green}{ALAA's version: }}{\colorbox{green}{ALAA's version: }} \newcommand{\loss}[2]{\ensuremath{D\term{#1, #2}}} \usepackage{amsmath,amsthm, amssymb, latexsym,graphicx} \newcommand{\round}[1]{\ensuremath{\left\lfloor #1 \right\rceil}} \newcommandx{\Sin}[2][2=2]{\sin^{#2}\term{#1}} \usepackage[inline]{enumitem} \usepackage{comment} \usepackage{float} \usepackage{xr-hyper} \usepackage{hyperref} \usepackage{subcaption} \usepackage[round]{natbib} \renewcommand{\bibname}{References} \renewcommand{\bibsection}{\subsubsection*{\bibname}} \begin{document} \algnewcommand\algorithmicparfor{\textbf{parfor}} \algnewcommand\algorithmicpardo{\textbf{do}} \algnewcommand\algorithmicendparfor{\textbf{end\ parfor}} \algrenewtext{ParFor}[1]{\algorithmicparfor\ #1\ \algorithmicpardo} \algrenewtext{EndParFor}{\algorithmicendparfor} \twocolumn[ \aistatstitle{Coresets for Data Discretization and Sine Wave Fitting} \aistatsauthor{Alaa Maalouf \And Murad Tukan} \aistatsaddress{ University of Haifa \And University of Haifa} \aistatsauthor{Eric Price \And Daniel Kane \And Dan Feldman} \aistatsaddress{University of Texas at Austin \And University of California \And University of Haifa} \runningauthor{Alaa Maalouf, Murad Tukan, Eric Price, Daniel Kane, Dan Feldman} ] \begin{abstract} In the \emph{monitoring} problem, the input is an unbounded stream $P=\{p_1,p_2,\cdots\}$ of integers in $[N]:=\{1,\cdots,N\}$ that are obtained from a sensor (such as GPS
or heartbeats of a human). The goal (e.g., for anomaly detection) is to approximate the $n$ points received so far in $P$ by a single frequency $\sin$, e.g. $\min_{c\in C}cost(P,c)+\lambda(c)$, where $cost(P,c)=\sum_{i=1}^n \sin^2(\frac{2\pi}{N} p_ic)$, $C\subseteq [N]$ is a feasible set of solutions, and $\lambda$ is a given regularization function. For any approximation error $\varepsilon>0$, we prove that \emph{every} set $P$ of $n$ integers has a weighted subset $S\subseteq P$ (sometimes called a core-set) of cardinality $|S|\in O(\log(N)^{O(1)})$ that approximates $cost(P,c)$ (for every $c\in [N]$) up to a multiplicative factor of $1\pm\varepsilon$. Using known coreset techniques, this implies streaming algorithms using only $O((\log(N)\log(n))^{O(1)})$ memory. Our results hold for a large family of functions. Experimental results and open source code are provided. \end{abstract} \section{INTRODUCTION AND MOTIVATION} \textbf{Anomaly detection} is a step in data mining which aims to identify unexpected data points, events, and/or observations in data sets. For example, we are given an unbounded stream $P=\{p_1,p_2,\cdots\}$ of numbers that are obtained from a heartbeat sensor of a human (a hospital patient), and the goal is to detect inconsistent spikes in heartbeats. This is crucial for proper examination of patients as well as valid evaluation of their health. Such data forms a wave which can be approximated using a \emph{sine} wave. Fitting a large dataset of this form (heart wave signals) results in an approximation of the distribution from which the data comes. Such an approximation aids in the detection of outliers or anomalies. \begin{figure}[htb!]
\centering \includegraphics[width=0.8\linewidth]{figures/cost2.jpeg} \caption{\textbf{Sine fitting.} Given a set of integers $P$ (blue points on the $x$-axis), and a $\sin^2(\cdot)$ wave (the red signal), the cost of the Sine fitting problem with respect to this input is the sum of vertical distances between the points in $P$ (on the $x$-axis) and the sine signal (the sum of lengths of the green lines). The goal is to find the sine signal that minimizes this sum.}\label{fig:ourcost} \end{figure} Formally speaking, the anomaly detection problem can be stated as follows. Given a large positive number $N$ and a set $P \subseteq \br{1,2,\cdots,N}$ of $n$ integers, the objective is to fit a sine signal, such that the sum of the vertical distances between each $p \in P$ (on the $x$-axis) and its corresponding point $\Sin{\frac{2\pi}{N}pc}$ on the signal is minimized; see Figure~\ref{fig:ourcost}. Hence, we aim to solve the following problem that we call the \emph{Sine fitting problem}: \begin{equation} \label{eq:our_cost} \begin{split} \min_{c\in C}\sum\limits_{p \in P} \sin^2\term{\frac{2\pi}{N} pc} + \lambda(c), \end{split} \end{equation} where $C$ is the set of feasible solutions, and $\lambda$ is a regularization function to put constraints on the solution. The generalized form of the fitting problem above was first addressed by~\cite{souders1994ieee} and later generalized by~\cite{ramos2008new}, where proper implementations have been suggested over the years~\cite{da2003new, chen2015improved, renczes2016efficient, renczes2021computationally}.
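As an illustration (our own sketch, not from the paper), the objective above can be evaluated by brute force for small $N$; choosing $P$ so that some $c$ places every point on a root of the wave, the optimal cost is numerically zero:

```python
import math

def sine_fit_cost(P, c, N):
    # cost(P, c) = sum_p sin^2((2*pi/N) * p * c)
    return sum(math.sin(2.0 * math.pi * p * c / N) ** 2 for p in P)

def fit_sine(P, N, lam=lambda c: 0.0):
    # brute-force search over the feasible set C = [N]
    return min(range(1, N + 1), key=lambda c: sine_fit_cost(P, c, N) + lam(c))

N = 32
P = [4, 8, 12, 20, 28]            # all multiples of 4
c = fit_sine(P, N)
print(c, sine_fit_cost(P, c, N))  # cost ~0: every p*c lands on a multiple of N/2
```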
In addition, the Sine fitting problem and its variants have gained attention in recent years in solving various problems, e.g., estimating the phase shift between two signals with very high accuracy~\cite{queiros2010cross}, characterizing data acquisition channels and analog to digital converters~\cite{pintelon1996improved}, high-accuracy sampling measurements of complex voltage ratio of sinusoidal signals~\cite{augustyn2018improved}, etc. \textbf{Data discretization.} In many applications, we aim to find a proper choice of floating-point grid. For example, when we are given points encoded in $64$ bits, and we wish to use a $32$-bit floating-point grid. A naive way to do so is by simply removing the most/least significant $32$ bits from each point. However, such an approach results in losing most of the underlying structure that these points form, which in turn leads to unnecessary data loss. Instead, arithmetic modulo or sine functions that incorporate cyclic properties are used, e.g.,~\cite{naumov2018periodic, nagel2020up, gholami2021survey}. Such functions aim towards retaining as much information as possible when information loss is inevitable. This task serves well in the field of quantization~\cite{gholami2021survey}, which is an active sub-field of deep learning. To solve this problem, we first find the sine wave that fits the input data using the cost function at~\eqref{eq:our_cost}. Then each point in the input data is projected to its nearest point from the set of roots of the signal that was obtained from the sine fitting operation; see Figure~\ref{fig:disc}. All of the applications above, e.g., monitoring, anomaly detection, and data discretization, are problems that are reduced to an instance of the Sine fitting problem. Although these problems are desirable, solving them on large-scale data is not an easy task, due to bounded computational power and memory.
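A minimal sketch of this discretization step (our own illustration; in the pipeline above, $c$ is first obtained by sine fitting): the roots of $\sin(\frac{2\pi}{N}cx)$ lie at the integer multiples of $N/(2c)$, so projecting a point to the nearest root is simply rounding to that grid:

```python
def discretize(P, c, N):
    # roots of sin((2*pi/N) * c * x) are the multiples of N/(2c);
    # snap every input point to its nearest root
    step = N / (2.0 * c)
    return [round(p / step) * step for p in P]

# grid step N/(2c) = 1000/50 = 20
print(discretize([3, 38, 41, 997], c=25, N=1000))  # [0.0, 40.0, 40.0, 1000.0]
```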
In addition, in the streaming (or distributed) setting where points are being received via a stream of data, fitting such functions requires new algorithms for handling such settings. To handle these challenges, we can use coresets. \subsection{Coresets} \label{sec:coresets} Coresets were first suggested as a data summarization technique in the context of computational geometry~\citep{agarwal2004approximating}, and have received increasing attention over recent years~\citep{broder2014scalable,nearconvex,huang2021novel,cohen2021improving,huang2020coresets,mirzasoleiman2020coresets}; for extensive surveys on coresets, we refer the reader to~\citep{feldman2020core, phillips2016coresets}, and to~\cite{jubran2019introduction,maalouf2021introduction} for an introduction. Informally speaking, a coreset is (usually) a small weighted subset of the original input set of points that approximates its loss for every feasible query $c$, up to a provable multiplicative error of $1 \pm \eps$, where $\eps \in (0, 1)$ is a given error parameter. Usually the goal is to have a coreset whose size is independent of, or near-logarithmic in, the size of the input (number of points), in order to store data of the same structure (as the input) using small memory, and to obtain faster approximate solutions by running algorithms on the coreset instead of the original data. Furthermore, the accuracy of existing (fast) heuristics can be improved by running them many times on the coreset in the time it takes for a single run on the original (big) dataset. Finally, since coresets are designed to approximate the cost of every feasible query, they can be used to solve constrained optimization problems, and to support streaming and distributed models; see details and more advantages of coresets in~\citep{feldman2020core}. In recent years, coresets were applied to improve many algorithms from different fields, e.g.,
logistic regression~\citep{huggins2016coresets,munteanu2018coresets,karnin2019discrepancy,nearconvex}, matrix approximation~\citep{feldman2013turning, maalouf2019fast,feldman2010coresets,sarlos2006improved,maalouf2021coresets}, decision trees~\citep{jubran2021coresets}, clustering~\citep{feldman2011scalable,gu2012coreset,lucic2015strong,bachem2018one,jubran2020sets, schmidt2019fair}, $\ell_z$-regression~\citep{cohen2015lp, dasgupta2009sampling, sohler2011subspace}, \emph{SVM}~\citep{har2007maximum,tsang2006generalized,tsang2005core,tsang2005very,tukan2021coresets}, deep learning models~\citep{baykal2018data,maalouf2021unified,liebenwein2019provable,mussay2021data}, etc. \textbf{Sensitivity sampling framework.} A unified framework for computing coresets for a wide family of problems was suggested in~\citep{braverman2016new}. It is based on non-uniform sampling, specifically, sensitivity sampling. Intuitively, the sensitivity of a point $p$ from the input set $P$ is a number $s(p)\in [0,1]$ that corresponds to the importance of this point with respect to the other points, and the specific cost function that we wish to approximate; see formal details in Theorem~\ref{thm:coreset}. The main goal of defining sensitivities is that, with high probability, a non-uniform sample from $P$ based on these sensitivities yields a coreset, where each point $p$ is sampled i.i.d. with a probability that is proportional to $s(p)$, and assigned a (multiplicative) weight which is inversely proportional to $s(p)$. The size of the coreset is then proportional to (i) the total sum of these sensitivities $t=\sum_{p\in P}s(p)$, and (ii) the VC dimension of the problem at hand, which is (intuitively) a complexity measure. In recent years, many classical and hard machine learning problems~\citep{braverman2016new,sohler2018strong,maalouf2020tight} have been proved to have a total sensitivity (and VC dimension) that is near-logarithmic in, or even independent of, the input size $\abs{P}$.
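The sampling recipe above can be sketched as follows (our own brute-force illustration for small $N$; the sensitivities here are computed exactly by enumeration over all queries, whereas the paper's contribution is bounding them analytically):

```python
import math, random

def cost_terms(P, c, N):
    return [math.sin(2.0 * math.pi * p * c / N) ** 2 for p in P]

def sensitivities(P, N):
    # s(p) = max_c term_p(c) / total(c), by brute force over c in [N]
    s = [0.0] * len(P)
    for c in range(1, N + 1):
        terms = cost_terms(P, c, N)
        tot = sum(terms)
        if tot == 0.0:
            continue
        s = [max(si, term / tot) for si, term in zip(s, terms)]
    return s

def coreset(P, N, size, seed=0):
    # sample i.i.d. with probability s(p)/t; weight v(p) = t / (s(p) * size)
    random.seed(seed)
    s = sensitivities(P, N)
    t = sum(s)
    idx = random.choices(range(len(P)), weights=s, k=size)
    return [(P[i], t / (s[i] * size)) for i in idx]

N, P = 64, list(range(1, 32))
S = coreset(P, N, size=200)
c = 5
approx = sum(w * math.sin(2.0 * math.pi * p * c / N) ** 2 for p, w in S)
exact = sum(cost_terms(P, c, N))
print(exact, approx)  # the weighted sample tracks the full cost
```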
\begin{figure}[htb!] \centering \includegraphics[width=0.8\linewidth]{figures/discrit2.jpeg} \caption{\textbf{Discretization.} Given a set of points (blue points), we find a sine wave (red signal) that fits the input data. Then each input point is projected to its nearest point from the set of roots of the signal.}\label{fig:disc} \end{figure} \subsection{Our Contribution} We summarize our contribution as follows. \begin{enumerate} \item[(i)] Theoretically, we prove that for every integer $N>1$, and every set $P\subseteq [N]$ of $n>1$ integers: \begin{enumerate} \item The total sensitivity with respect to the Sine fitting problem is bounded by $O(\log^4{N})$, and the VC dimension is bounded by $O(\log{(nN)})$; see Theorem~\ref{mainthm} and Claim~\ref{VcbOUND} respectively. \item For any approximation error $\eps\in (0,1)$, there exists a coreset of size\footnote{$\Tilde{O}$ hides terms related to $\varepsilon$ (the approximation factor), and $\delta$ (probability of failure).} $\Tilde{O}\term{\log(N)^{O(1)}}$ (see Theorem~\ref{thm:mainthm} for full details) with respect to the Sine fitting optimization problem. \end{enumerate} \item[(ii)] Experimental results on real world datasets and open source code~\citep{opencode} are provided. \end{enumerate} \section{PRELIMINARIES} In this section we first give our notations that will be used throughout the paper. We then define the sensitivity of a point in the context of the Sine fitting problem (see Definition~\ref{def:sens}), and formally write how it can be used to construct a coreset (see Theorem~\ref{thm:coreset}). Finally we state the main goal of the paper. \textbf{Notations.} Let $\INT$ denote the set of all positive integers, $[n] = \br{1,\ldots, n}$ for every $n \in \INT$, and for every $x \in \REAL$ denote the rounding of $x$ to its nearest integer by $\round{x}$ (e.g. $\round{3.2}=3$). We now formally define the sensitivity of a point $p\in P$ in the context of the Sine fitting problem.
\begin{definition}[Sine fitting sensitivity]\label{def:sens} Let $N>1$ be a positive integer, and let $P \subseteq [N]$ be a set of $n>1$ integers. For every $p \in P$, the \emph{sensitivity} of $p$ is defined as $ \max_{c\in [N]}\frac{\sin^2(pc\cdot \frac{2\pi}{N})}{\sum_{q\in P}\sin^2(qc\cdot \frac{2\pi}{N})}. $ \end{definition} The following theorem formally describes how to construct an $\eps$-coreset via the sensitivity framework. We restate it from~\cite{braverman2016new} and modify it to be specific to our cost function. \begin{theorem}\label{thm:coreset} Let $N>1$ be a positive integer, and let $P \subseteq [N]$ be a set of $n>1$ integers. Let $s: P \to [0,1]$ be a function such that $s(p)$ is an upper bound on the sensitivity of $p$ (see Definition~\ref{def:sens}). Let $t = \sum_{p \in P} s(p)$ and $d'$ be the~\emph{VC dimension} of the Sine fitting problem; see Definition~\ref{def:dimension}. Let $\eps, \delta \in (0,1)$, and let $S$ be a random sample of $\abs{S} \in O\term{\frac{t}{\varepsilon^2}\left(d'\log{t}+\log{\frac{1}{\delta}}\right)}$ i.i.d.\ points from $P$, where every $p \in P$ is sampled with probability $s(p)/t$. Let $v(p) = \frac{t}{s(p)\abs{S}}$ for every $p \in S$. Then with probability at least $1-\delta$, for every $c\in [N]$ we have $\abs{1- \frac{\sum_{p\in S}v(p)\sin^2(pc\cdot \frac{2\pi} {N})}{\sum_{p\in P}\sin^2(pc\cdot \frac{2\pi}{N})}} \leq \eps .$ \end{theorem} \textbf{Problem statement.} Theorem~\ref{thm:coreset} raises the following question: Can we bound the total sensitivity and the VC dimension of the Sine fitting problem in order to obtain small coresets? Note that the emphasis of this work is on the size of the coreset that is needed (required memory) to approximate the Sine fitting cost function. \section{CORESET FOR SINE FITTING} In this section we state and prove our main result.
For brevity purposes, some proofs of the technical results have been omitted from this manuscript; we refer the reader to the supplementary material for these proofs. Note that since the regularization function $\lambda$ at~\eqref{eq:our_cost} is independent of $P$, a $1\pm\eps$ multiplicative approximation of the $\sin^2(\cdot)$ terms at~\eqref{eq:our_cost}, yields a $1\pm\eps$ multiplicative approximation for the whole term in~\eqref{eq:our_cost}. The following theorem summarizes our main result. \begin{theorem}[Main result: coreset for the Sine fitting problem]\label{thm:mainthm} Let $N>1$ be a positive integer, $P \subseteq [N]$ be a set of $n>1$ integers, and let $\eps,\delta \in (0,1)$. Then, we can compute a pair $(S,v)$, where $S \subseteq P$, and $v:S\to [0,\infty)$, such that \begin{enumerate} \item the size of $S$ is polylogarithmic in $N$ and logarithmic in $n$, i.e., $$|S|\in O\left(\frac{\log^4{N}}{\varepsilon^2}\left(\log(nN)\log{(\log{N})}+\log{\frac{1}{\delta}}\right)\right).$$ \item with probability at least $1-\delta$, for every $c\in [N]$, $$\abs{1 -\frac{\sum_{p\in S}v(p)\sin^2(pc\cdot \frac{2\pi} {N})}{\sum_{p\in P}\sin^2(pc\cdot \frac{2\pi}{N})}} \leq \eps. $$ \end{enumerate} \end{theorem} To prove Theorem~\ref{thm:mainthm}, we need to bound the total sensitivity (as done in Section~\ref{sec:sensbound}) and the VC dimension (see Section~\ref{sec:vcbound}) of the Sine fitting problem. \subsection{Bound On The Total Sensitivity}\label{sec:sensbound} In this section we show that the total sensitivity of the Sine fitting problem is small and bounded. Formally speaking, \begin{theorem}\label{mainthm} Let $N\geq 1$ and $P\subseteq [N]$. Then \[ \sum_{p\in P}\max_{c\in [N]}\frac{\sin^2(pc\cdot \frac{2\pi}{N})}{\sum_{q\in P}\sin^2(qc\cdot \frac{2\pi}{N})}\in O(\log^4 N). \] \end{theorem} We prove Theorem~\ref{mainthm} by combining multiple claims and lemmas. We first state the following as a tool to use the cyclic property of the sine function. 
\begin{claim} \label{clm:ax} Let $a,b \in \INT$ be a pair of positive integers. Then for every $x \in \INT$, \[ \abs{\sin{\term{\frac{b\pi}{a} x}}} = \abs{\sin{\term{\frac{b\pi}{a} \term{\mod{x}[a]}}}}. \] \end{claim} \begin{comment} \begin{proof} Put $x \in \INT$ and observe that \begin{equation} \label{eq:x_a} x = \floor{\frac{x}{a}} a + \mod{x}[a]. \end{equation} Thus, \begin{equation} \label{eq:sin_prop_1} \begin{split} \abs{\sin{\term{\frac{b\pi}{a} x}}} &= \abs{\sin{\term{\frac{b\pi}{a} \floor{\frac{x}{a}}a + \frac{b\pi}{a}\mod{x}[a]}}}\\ &= \abs{\sin{\term{\floor{\frac{x}{a}} b\pi + \frac{b\pi}{a} \mod{x}[a]}}}, \end{split} \end{equation} where the first equality holds by~\eqref{eq:x_a}. Using trigonometric identities, we obtain that \begin{equation} \label{eq:sin_prop_2} \begin{split} &\left|\sin{\term{\floor{\frac{x}{a}} b\pi + \frac{b\pi}{a} \mod{x}[a]}}\right| =\\ &\quad \left|\sin{\term{\floor{\frac{x}{a}} b\pi}} \cdot \cos{\term{\frac{b\pi}{a}\term{\mod{x}[a]}}} \right. \\ &\quad\left.+ \sin{\term{\frac{b\pi}{a}\term{\mod{x}[a]}}} \cdot \cos{\term{\floor{\frac{x}{a}} b\pi}}\right|. \end{split} \end{equation} Since $\term{\floor{\frac{x}{a}} b\pi} \in \br{0, \pi, 2\pi, 3\pi, \cdots}$, we have that \[ \sin{\term{\floor{\frac{x}{a}} b\pi}} = 0, \] and \[ \abs{\cos{\term{\floor{\frac{x}{a}} b\pi}}} = 1. \] By combining the previous equalities with~\eqref{eq:sin_prop_1} and~\eqref{eq:sin_prop_2}, Claim~\ref{clm:ax} follows. \end{proof} \end{comment} We now show that, to bound the sensitivity of each $p \in P$, one need not consider every integer in $[N]$: a smaller, compact subset of $[N]$ suffices. \begin{lemma}\label{lem:sensitivity_query_bound} Let $P \subseteq [N]$ be a set of $n$ integer points.
For every $p \in P$, let $$C(p) = \br{c \in [N] \middle| \term{\mod{cp}} \in \left[\frac{N}{8}, \frac{3N}{8}\right]}.$$ Then for every $p \in P$, \begin{equation} \label{eq:sense_query_bound} \max_{c\in [N]}\frac{\Sin{\frac{2\pi}{N}pc}}{\sum\limits_{q\in P}\Sin{\frac{2\pi}{N}qc}}\leq 4\cdot \max\limits_{c\in C(p)}\frac{\Sin{\frac{2\pi}{N}pc}}{\sum\limits_{q\in P}\Sin{\frac{2\pi}{N}qc}}. \end{equation} \end{lemma} \begin{proof} Put $p \in P$, and let $c^*\in[N]$ be an integer that maximizes the left hand side of~\eqref{eq:sense_query_bound} with respect to $p$, i.e., \begin{equation} \label{maxx} \max_{c\in [N]}\frac{\Sin{pc\cdot \frac{2\pi}{N}}}{\sum_{q\in P}\Sin{qc\cdot \frac{2\pi}{N}}}=\frac{\Sin{pc^*\cdot \frac{2\pi}{N}}}{\sum_{q\in P}\Sin{qc^*\cdot \frac{2\pi}{N}}}. \end{equation} If $c^*\in C(p)$ the claim trivially holds. Otherwise, we have $c^* \in [N]\setminus C(p)$, and we prove the lemma using case analysis: \begin{enumerate*}[label=Case~(\roman*)] \item $(\mod{c^*p})\in \left[0,\frac{N}{8}\right)$, \label{case:1} and \item $(\mod{c^*p})\in \left(\frac{N}{2}-\frac{N}{8},\frac{N}{2}\right]$\label{case:2}. \end{enumerate*} \noindent\textbf{\ref{case:1}:} Let $b_2 \in [8]\setminus[3]$ be an integer, and let $z=\left\lceil \frac{N/b_2}{\mod{c^*p}}\right\rceil$. We first observe that \begin{equation} \label{eq:equality_z} \begin{split} z &= \left\lceil \frac{\frac{N}{b_2}}{\mod{c^*p}}\right\rceil\\ &=\frac{\frac{N}{b_2}}{\mod{c^*p}} + \frac{\mod{\term{-\frac{N}{b_2}}}[\term{\mod{c^*p}}]}{\mod{c^*p}}, \end{split} \end{equation} where the second equality holds by properties of the ceiling function.
We further observe that, \begin{equation} \label{eq:bound_1_case_1} \begin{split} &z\term{\mod{c^*p}}\\ &= \frac{N}{b_2} + \mod{\term{-\frac{N}{b_2}}}[\term{\mod{c^*p}}]\\ &\in \left[\frac{N}{b_2}, \frac{N}{8} + \frac{N}{b_2}\right], \end{split} \end{equation} where the first equality holds by expanding $z$ using~\eqref{eq:equality_z}, and the last inclusion holds by the assumption of~\ref{case:1}. Since $\left[\frac{N}{b_2}, \frac{N}{8} + \frac{N}{b_2}\right]$ is entirely included in $\left[\frac{N}{8}, \frac{3N}{8}\right]$, then it holds that $z\term{\mod{c^*p}} \in C(p)$. Similarly, one can show that $\mod{\term{zc^*p}} \in \left[\frac{N}{b_2}, \frac{N}{8} + \frac{N}{b_2}\right]$, which means that $zc^* \in C(p)$. We now proceed to show that the sensitivity can be bounded using some point in $C(p)$. Since for every $x \in \left[0, \frac{\pi}{2}\right]$, $\sin{x} \leq x \leq 2\abs{\sin{x}}$, then it holds that \begin{equation} \label{eq:bound_2_case_1} \begin{split} &\Sin{\frac{2\pi}{N}pc^*} = \Sin{\frac{2\pi}{N} \term{\mod{c^*p}}}\\ &\leq \term{\frac{2\pi}{N} \term{\mod{c^*p}}}^2\\ &= \term{\frac{2\pi}{N} z\term{\mod{c^*p}}}^2 \frac{1}{z^2}\\ &\leq \frac{4}{z^2} \Sin{\frac{2\pi}{N} z\term{\mod{c^*p}}} \\ &= \frac{4}{z^2} \Sin{\frac{2\pi}{N} zc^*p}, \end{split} \end{equation} where the first equality holds by plugging $a:=\frac{N}{2}$, $b := 1$ and $x := c^*p$ into Claim~\ref{clm:ax}, the first inequality holds since $\frac{2\pi}{N}\term{\mod{c^*p}} \in [0, \pi]$, the second equality holds by multiplying and dividing by $z$, the second inequality follows from combining the fact that $\frac{2\pi}{N} z\term{\mod{c^*p}} \leq \frac{\pi}{4}+\frac{\pi}{b_2} \leq \pi$ which is derived from~\eqref{eq:bound_1_case_1} and the observation that $2\abs{\sin{x}} \geq x$ for every $x \in \left[ 0, \frac{\pi}{2}\right]$, and the last equality holds by plugging $a:=\frac{N}{2}$, $b := z$ and $x := c^*p$ into Claim~\ref{clm:ax}. 
In addition, it holds that for every $q \in P$ \begin{equation} \label{eq:lower_bound_sin} \begin{split} &\Sin{\frac{2\pi}{N}\term{\mod{c^*q}}}\\ &\quad \geq \frac{1}{4} \term{\frac{2\pi}{N}\term{\mod{c^*q}}}^2\\ &\quad=\frac{\term{\frac{2z\pi}{N}\term{\mod{c^*q}}}^2}{4z^2}\\ &\quad\geq\frac{\Sin{\frac{2z\pi}{N} \term{\mod{c^*q}}}}{4z^2} \\ &\quad=\frac{1}{4z^2}\Sin{\frac{2\pi}{N} zc^*q}, \end{split} \end{equation} where the first inequality holds by combining the assumption of~\ref{case:1} and the observation that $\abs{\sin{x}} \geq \frac{x}{2}$ for every $x \in \left[0, \frac{\pi}{2}\right]$, the first equality holds by multiplying and dividing by $z$, the second inequality holds by combining~\eqref{eq:bound_1_case_1} with the observation that $x \geq \abs{\sin{x}}$ for every $x \in [0, \pi)$ where in this context $x := \frac{2z\pi}{N}\term{\mod{c^*q}}$, and finally the last equality holds by plugging $a:= \frac{N}{2}$, $b:=z$, and $x:= c^*q$ into Claim~\ref{clm:ax}. Combining~\eqref{maxx},~\eqref{eq:bound_1_case_1},~\eqref{eq:bound_2_case_1} and~\eqref{eq:lower_bound_sin} yields that \begin{equation*} \begin{split} &\max_{c \in [N]}\frac{\Sin{\frac{2\pi}{N}pc}}{\sum\limits_{q \in P}\Sin{\frac{2\pi}{N}qc}}\\ &\leq \max_{c \in [N]} \frac{\frac{16}{z^2} \Sin{\frac{2\pi z}{N}\term{\mod{pc}}}}{\frac{1}{z^2}\sum\limits_{q\in P} \Sin{\frac{2\pi z}{N}\term{\mod{qc}}}}\\ &= \max_{c \in [N]} \frac{16\Sin{\frac{2\pi }{N}zpc}}{\sum_{q\in P}\Sin{\frac{2\pi }{N}zqc}} = \max_{\hat{c} \in C(p)} \frac{16\Sin{\frac{2\pi }{N}\hat{c}p}}{\sum_{q\in P}\Sin{\frac{2\pi }{N}\hat{c}q}}, \end{split} \end{equation*} where the last equality holds by combining~\eqref{maxx} and $zc^* \in C(p)$. \noindent\textbf{\ref{case:2}:} Let $c^\prime=N-c^*$, and note that $c^\prime\in[N]$. For every $q\in P$, \[ \abs{\Sin{c^\prime q\cdot 2\pi/N}[]} =\abs{\Sin{c^*q\cdot 2\pi/N}[]}.
\] We observe that \begin{align*} &(\mod{c^\prime p}) = \mod{(N - c^*)p}\\ &\quad\quad= \mod{\left(\mod{Np} + \mod{(-c^*p)}\right)}\\ &\quad\quad=\mod{\left( 0 + \mod{(-c^*p)} \right)} \\ &\quad\quad= \mod{(-c^*p)}\\ &\quad\quad \leq \mod{-\left( N/2 - N/8 \right)}\\ &\quad\quad= N/2 - \left( N/2 - N/8 \right) = N/8. \end{align*} Hence, the proof of Lemma~\ref{lem:sensitivity_query_bound} in~\ref{case:2} follows by replacing $c^*$ with $c'$ in~\ref{case:1}. \end{proof} In what follows, we show that the sensitivity of each point $p \in P$ is bounded from above by a term that is polylogarithmic in $N$ and inversely proportional to the number of points $q \in P$ that are not too far from $p$ in terms of modular arithmetic. \begin{lemma} \label{lem:bounding_c(p)} Let $C(p)$ be as in Lemma~\ref{lem:sensitivity_query_bound} for every $p \in P$, and let \begin{equation} \label{eq:gpP} \begin{split} g(p,P)&=\min_{c\in C(p)}\left|\left\lbrace q\in P: (\mod{cq})\in \right.\right.\\ &\left.\left.\left[\frac{N}{16\log N}, \frac{N}{2}-\frac{N}{16\log N}\right] \right\rbrace\right|. \end{split} \end{equation} Then for every $p \in P$, \[ \max_{c\in C(p)}\frac{\Sin{pc\cdot \frac{2\pi}{N}}}{\sum_{q\in P}\Sin{qc\cdot \frac{2\pi}{N}}} \in O(\log^2 N)\cdot \frac{1}{g(p, P)}. \] \end{lemma} \begin{proof} Put $p \in P$, $c\in C(p)$, and let $P^\prime = \br{q \in P : \Sin{qc\cdot \frac{2\pi}{N}} \geq 1/(8\log{N})^2}$. First, we observe that every $q \in P$ with $\Sin{qc\cdot \frac{2\pi}{N}} \geq 1/(8\log{N})^2$ satisfies $\mod{(qc)} \geq \frac{N}{16\log{N}}$. By the cyclic property of $\sin$, it also holds that $\mod{(qc)} \leq \frac{N}{2} - \frac{N}{16\log{N}}$.
Combining the above with the fact that $\Sin{pc\cdot \frac{2\pi}{N}}\leq 1$ yields that \[ \begin{split} &\frac{\Sin{pc\cdot \frac{2\pi}{N}}}{\sum_{q\in P}\Sin{qc\cdot \frac{2\pi}{N}}} \\ & \quad \leq\frac{1}{\sum_{q\in P}\Sin{qc\cdot \frac{2\pi}{N}}} \leq \frac{1}{\sum_{q\in P^\prime}\Sin{qc\cdot \frac{2\pi}{N}}}\\ &\quad \leq \frac{1}{\abs{P^\prime}/\term{64\log^2N}} \leq \frac{64\log^2N}{g(p,P)}, \end{split} \] where the second inequality follows from $P^\prime \subseteq P$, and the last derivation holds since $g(p,P) \leq \abs{P^\prime}$, which follows from~\eqref{eq:gpP}. \end{proof} The bound on the sensitivity of each point $p \in P$ (from Lemma~\ref{lem:bounding_c(p)}) still requires iterating over all possible queries in $C(p)$ to obtain the closest points in $P$ to $p$. Instead of trying to bound the sensitivity of each point by a term that does not require evaluation over every query in $C(p)$, we will bound the total sensitivity by a term that is independent of $C(p)$ for every $p \in P$. This is done by reducing the problem to bounding the expected size of an independent set of vertices in a graph (see Claim~\ref{clm:bound_gPp}). First, we will use the following claim, which guarantees a sufficiently large independent set in any given directed graph. \begin{claim}\label{three} Let $G$ be a directed graph with $n$ vertices. Let $d_i$ denote the out degree of the $i$th vertex, for $i=1,\cdots, n$. Then there is an independent set of vertices $Q$ in $G$ such that $|Q| \in \frac{\Theta(1)}{\log N}\cdot \sum_{i=1}^n \frac{1}{d_i+1}.$ \end{claim} \begin{proof} Let $V$ denote the set of vertices of $G$, and let $E$ denote the set of edges of $G$. Partition the vertices of $G$ into $O(\log n)$ induced sub-graphs, where each vertex in the $j$th sub-graph $H_j$ has out degree in $[2^{j-1},2^{j}-1]$, for any non-negative integer $j \leq \round{\log{\term{\max\limits_{i \in [n]} d_i}}}$.
Let $H_j$ have vertex set $V_j$ and edge set $E_j$, for every $j$. Pick a random sample $S_j$ of $|V_j|^2/(2|E_j|)$ nodes from $V_j$. The expected number of edges in the sub-graph of $H_j$ induced by $S_j$ is bounded by $|V_j|^2/(4|E_j|)$. Let \begin{align*} T_j= &\{v\mid (v,u) \text{ is an edge of the sub-graph of }H_j\\& \text{ induced by }S_j\}. \end{align*} Since $|T_j|$ is at most the number of such edges, we have $\mathbb{E}\left[|T_j|\right]\leq |V_j|^2/(4|E_j|)$, and hence \[ \mathbb{E}\left[|S_j\setminus T_j|\right]\geq \frac{|V_j|^2}{2|E_j|}-\frac{|V_j|^2}{4|E_j|}=\frac{|V_j|^2}{4|E_j|}; \] in particular, there is a choice of $S_j$ attaining this bound. The sub-graph of $H_j$ induced by $S_j\setminus T_j$ is an independent set of $H_j$ with $|S_j\setminus T_j|\geq |V_j|^2/(4|E_j|)$ nodes. Since every vertex of $H_j$ has out degree $\Theta(2^j)$, we have $|E_j| \in O\left(2^j \abs{V_j} \right)$, and thus $|S_j\setminus T_j|\in \Omega\left(|V_j|/2^j\right)\subseteq \sum_{v\in V_j}\Theta(1)/(d_v+1)$. Hence, $\sum_j \abs{S_j\setminus T_j} \in \sum_{v\in V}\Theta(1)/(d_v+1)$. By the pigeonhole principle, there is $j$ such that $\abs{S_j\setminus T_j}$ is bounded from below by $\sum_{v\in V}\Theta(1)/(d_v+1)$ divided by $O(\log n)$; since $n\leq N$, Claim~\ref{three} follows. \end{proof} \begin{claim}\label{clm:bound_gPp} There is a set $Q\subseteq P$ such that $g(q,Q)=1$ for every $q\in Q$, and $\sum_{p\in P}\frac{1}{g(p, P)}\in |Q|\cdot \Theta\term{\log N}.$ \end{claim} \begin{proof} Let $b_2$ be defined as in the proof of Lemma~\ref{lem:sensitivity_query_bound}. For every $p\in P$, let $g^{-1}(p,P)\in C(p)$ such that \begin{align*} &g^{-1}(p,P)\in \\ &\quad \arg\min_{c\in C(p)} \left| \left\lbrace q\in P\setminus{\br{p}}: \Sin{qc\cdot \frac{2\pi}{N}}\geq \frac{b_2}{N}\right\rbrace\right|. \end{align*} Let $G=(P,E)$ denote the directed graph whose vertices are the integers in $P$, and whose edges are \begin{equation}\label{Edef} \begin{split} E &:=\bigg\{ (p,q)\in P\times P : p\neq q \text{ and }\\ &\Sin{\frac{2\pi}{N} qg^{-1}(p,P)}\geq \frac{1}{b_3} \bigg\}. \end{split} \end{equation} The out degree of a vertex $p$ of $G$ is $g(p,P)-1$. By Claim~\ref{three}, there is an independent set $Q$ of $G$ such that $ \sum_{p\in P} \frac{1}{g(p,P)} \in |Q|\cdot \Theta(\log N).
$ Since $Q$ is an independent set, for every $q\in Q$ we have $g(q,Q)=1$. \end{proof} The following claim bounds the size of any such independent set, i.e., of any set $Q$ in which $g(q,Q)=1$ for every $q\in Q$. \begin{claim} \label{clm:bound_Q} Let $Q\subseteq[N]$ such that $g(q,Q)=1$ for every $q\in Q$. Then $ |Q|\leq \log N. $ \end{claim} Finally, combining Lemma~\ref{lem:sensitivity_query_bound}, Lemma~\ref{lem:bounding_c(p)}, Claim~\ref{clm:bound_gPp} and Claim~\ref{clm:bound_Q} establishes Theorem~\ref{mainthm}, which presents a bound on the total sensitivity with respect to the $\Sin{\cdot}$ cost function. \subsection{Bound on The VC Dimension}\label{sec:vcbound} First, we define the VC dimension with respect to the Sine fitting problem. \begin{definition}[VC-dimension~\citep{braverman2016new}] \label{def:dimension} Let $N>1$ be a positive integer and let $P\subset [N]$ be a set of $n$ integers. For every $x \in [N]$ and $r \geq 0$, we define \[ \RANGES(x,r) = \br{p \in P \mid f(p,x) \leq r}. \] The dimension of the Sine fitting problem is the size $\abs{S}$ of the largest subset $S \subset P$ such that \[ \abs{\br{S \cap \RANGES(x,r) \mid x \in [N], r \geq 0 }} = 2^{\abs{S}}. \] \end{definition} \begin{lemma}[Bound on the VC dimension of the Sine fitting problem]\label{VcbOUND} Let $n,N \geq 1$ be a pair of positive integers such that $n \leq N$, and let $P \subseteq [N]$ be a set of $n$ points. Then the VC dimension of the Sine fitting problem with respect to $P$ and $N$ is $O\term{\log\term{nN}}$. \end{lemma} \begin{proof} We note that the VC-dimension of the set of classifiers that output the sign of a sine wave parametrized by a single parameter (the angular frequency of the sine wave) is infinite. However, since our query space is bounded, i.e., every query is an integer in the range $[1,N]$, the VC dimension is bounded as follows. First, let $D(p,x)= \sin^2( px \cdot \frac{2\pi}{N})$ for every $p\in P$ and $x\in [N]$.
We observe that for every $p \in P$ and $x \in [N]$, $D(p,x) \leq 1$. Hence, for every $x \in [N]$ it holds that $\br{\RANGES(x,r) \middle| r \geq 0} = \br{\RANGES(x,r) \middle| r \in [0,1]},$ where $\RANGES(x,r) = \br{p \in P \mid D(p,x) \leq r}$ is defined as in Definition~\ref{def:dimension}. Secondly, by the definition of $\RANGES$, we have that for every pair of $r_1,r_2 \in [0,1]$ and $x \in [N]$ where $r_2 \geq r_1$, $\RANGES\term{x,r_2} = \bigcup\limits_{r \in \left[r_1, r_2\right]} \RANGES\term{x,r}.$ This yields that $\abs{\br{\RANGES\term{x,r} \middle| r \in [0,1]}} \leq n$ for any $x \in [N]$, which consequently means that $ \abs{\br{\RANGES\term{x,r} \middle| x \in [N], r \geq 0}} \leq nN, $ since $x \in [N]$ is an integer, and each such $x$ creates at most $n$ distinct subsets of $P$. Thus we get that $\forall \, S \subseteq P$: $ \abs{\br{S \cap \RANGES\term{x,r} \middle| x \in [N], r \geq 0}} \leq nN = 2^{\log\term{nN}}. $ The lemma then follows since the above inequality states that the VC dimension is bounded from above by $\log\term{nN}$. \end{proof} \section{REMARKS AND EXTENSIONS} In this section we briefly discuss several remarks and extensions of our work. \textbf{Parallel implementation. } Computing the sensitivities of $n$ input points requires $O(Nn)$ time: first compute $cost(c):= {\sum_{q\in P}\sin^2( qc \cdot \frac{2\pi}{N})}$ for every $c\in [N]$, and then bound the sensitivity of every $p\in P$ by iterating over all queries $c\in [N]$ and taking the one that maximizes the corresponding term. In practice, however, this computation can be distributed. Notably, one can compute the cost ${\sum_{q\in P}\Sin{qc\cdot \frac{2\pi}{N}}}$ of every query $c\in [N]$ independently from all other queries in $[N]$. Similarly, once the cost of every query $c\in[N]$ has been computed, the sensitivity of each point $p\in P$ can be computed independently from all of the other points.
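Concretely, the two passes just described can be vectorized directly; the following NumPy sketch (sequential rather than distributed; names are ours) computes the sensitivities in $O(Nn)$ time and memory:

```python
import numpy as np

def sensitivities(P, N):
    """Compute s(p) = max_c sin^2(pc * 2pi/N) / cost(c) for every p in P,
    where cost(c) = sum_q sin^2(qc * 2pi/N), as described above."""
    P = np.asarray(P, dtype=float)
    c = np.arange(1, N + 1)                             # the query range [N]
    sin2 = np.sin(np.outer(P, c) * 2 * np.pi / N) ** 2  # |P| x N table of sin^2 terms
    cost = sin2.sum(axis=0)                             # first pass: cost(c) per query
    valid = cost > 0                                    # skip queries of zero cost (e.g., c = N)
    return (sin2[:, valid] / cost[valid]).max(axis=1)   # second pass: max ratio per point
```

Each column of the table depends on a single query, so the columns can be computed on separate machines, matching the distributed scheme of Algorithm~\ref{alg:sens}.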
Algorithm~\ref{alg:sens} utilizes these observations: it receives as input an integer $N$ which indicates the range of the query set, a set $P\subset [N]$, and an integer $M$ indicating the number of available machines. Algorithm~\ref{alg:sens} outputs a function $s:P\to (0,\infty)$, where $s(p)$ is the sensitivity of $p$ for every $p\in P$. \setcounter{AlgoLine}{0} \begin{algorithm} \caption{$\textsc{Calculate-Sensitivities}(P,N,M)$\label{alg:sens}} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{An integer $N>1$, a set $P\subset [N]$ of $n>1$ integers, and an integer $M\geq 1$.} \Output{A function $s:P\to (0,\infty)$, where for every $p\in P:$ $s(p)$ is the sensitivity of $p$.} $C_1,\cdots,C_M:=$ a partition of $[N]$ into $M$ disjoint subsets, each containing at most $\ceil{N/M}$ integers from $[N]$. \Comment{In some cases, the last set $C_M$ might be empty.} $P_1,\cdots,P_M:=$ a partition of $P$ into $M$ disjoint subsets, each containing at most $\ceil{n/M}$ integers from $P$. \Comment{In some cases, the last set $P_M$ might be empty.} \For{every $i\in [M]$, in a distributed manner} { \For{every $c\in C_i$}{ Set $cost(c):= {\sum_{q\in P}\Sin{qc\cdot \frac{2\pi}{N}}}$ } } \For{every $i\in [M]$, in a distributed manner} { \For{every $p\in P_i$}{ Set $s(p):= \max_{c\in [N]} \frac{\Sin{pc\cdot \frac{2\pi}{N}}}{cost(c)}$ } } \Return $s$ \end{algorithm} \textbf{Extension to high dimensional data.} Our results can be easily extended to the case where (i) the points (of $P$) lie on a polynomial grid of resolution $\Delta>0$ of any dimension $d\geq 1$, and (ii) they are further assumed to be contained inside a ball of radius $N>0$. Note that such assumptions are common in the coreset literature, e.g., coresets for projective clustering~\cite{edwards2005no}, the ReLU function~\cite{mussay2021data}, and logistic regression~\cite{tolochinsky2018generic}.
The analysis with respect to the sensitivity can be directly extended, and the VC dimension is now bounded by $O\left(d \log\left(\frac{N}{d} \Delta n\right)\right)$. Both claims are detailed in Section~\ref{sec:highd} of the appendix. \textbf{Approximating the optimal solution via coresets. } Let $N>1$ be an integer, $P\subset [N]$, and let $(S,v)$ be a coreset for $P$ as in Theorem~\ref{thm:coreset}. Let $p^* \in \arg\min_{c\in [N] }\sum_{p \in P}\sin^2( pc\cdot \frac{2\pi} { N})$ and $c^* \in \arg\min_{c\in [N]}\sum_{p\in S}v(p)\sin^2(pc\cdot \frac{2\pi} {N})$ be the optimal solutions on the input and its coreset, respectively. Then, since the coreset approximates the cost of every query up to a $1\pm\varepsilon$ multiplicative factor, $ \sum_{p \in P}\sin^2\left( pc^* \cdot \frac{2\pi}{N}\right)\leq \frac{1+\varepsilon}{1-\varepsilon} \sum_{p \in P}\sin^2\left( pp^* \cdot \frac{2\pi}{N}\right)$. \section{EXPERIMENTAL RESULTS} \begin{figure*}[htb!] \centering \includegraphics[width=.32\textwidth]{figures/air-8-opt.png} \includegraphics[width=.32\textwidth]{figures/air-9-opt.png} \includegraphics[width=.32\textwidth]{figures/dog-heart-opt.png} \caption{Optimal solution approximation error: The $x$-axis is the size of the chosen subset, and the $y$-axis is the optimal solution approximation error. Datasets, from left to right: (i)-(1), (i)-(2), and (iii).} \label{fig:optmal soltuion} \end{figure*} \begin{figure*}[htb!] \includegraphics[width=.32\textwidth]{figures/air-8-max.png} \includegraphics[width=.32\textwidth]{figures/air-9-max.png} \includegraphics[width=.32\textwidth]{figures/dog-heart-max.png} \caption{Maximum approximation error: The $x$-axis is the size of the chosen subset, and the $y$-axis is the maximum approximation error across the whole set of queries. Datasets, from left to right: (i)-(1), (i)-(2), and (iii). } \label{fig:max error} \end{figure*} In what follows we evaluate our coreset against uniform sampling on real-world datasets. \textbf{Software/Hardware.} Our algorithms were implemented in Python 3.6~\citep{10.5555/1593511} using \say{Numpy}~\citep{oliphant2006guide}.
Tests were performed on a $2.59$GHz i$7$-$6500$U ($2$ cores total) machine with $16$GB RAM. \subsection{Datasets And Applications}\label{sed:datasets} \begin{enumerate}[label=(\roman*)] \item Air Quality Data Set~\citep{de2008field}, which contains $9,358$ instances of hourly averaged responses from an array of $5$ metal oxide chemical sensors embedded in an Air Quality Chemical Multisensor Device. We used two attributes (each as a separate dataset) of hourly averaged measurements of (1) tungsten oxide - labeled by (i)-(1) in the figures, and (2) NO$_2$ concentration - labeled by (i)-(2). Fitting the sine function on each of these attributes aids in understanding their underlying structure over time. This helps us find anomalies that lie far from the fitted sine function; in this context, anomalies could indicate a leakage of toxic gases. Hence, our aim is to monitor their behavior over time, while using low memory to store the data. \item Single Neuron Recordings~\citep{singlenueron} acquired from a cat's auditory-nerve fiber. The dataset has $127,505$ samples, and the goal of Sine fitting with respect to such data is to infer cyclic properties from neuron signals, which will aid in further understanding the wave of a single neuron and its structure. \item Dog Heart recordings of heart ECG~\citep{dogHeart}. The dataset has $360,448$ samples. We used the absolute value of each of the points corresponding to the ``electrocardiogram'' feature, which refers to the ECG wave of the dog's heart. The goal of Sine fitting on such data is to obtain the distribution of the heartbeat rates. This helps detect spikes, which could indicate health problems related to the dog's heart. \end{enumerate} \subsection{Reported Results} \textbf{Approximation error.} We iterate over different sample sizes, where at each sample size we generate two coresets, the first using uniform sampling and the second using sensitivity sampling.
For every such coreset $(S,v)$, we compute and report the following. \begin{enumerate}[label=(\roman*)] \item The optimal solution approximation error, i.e., we find $c^* \in \arg\min_{c \in C} \sum_{p \in S} v(p)\sin^2(\frac{2\pi}{N} \cdot pc)$. Then the approximation error $\varepsilon$ is set to be $\frac{\sum_{p \in P} \sin^2(\frac{2\pi}{N} pc^*)}{\min_{c \in C} \sum_{p \in P} \sin^2(\frac{2\pi}{N} pc)}-1$; see Figure~\ref{fig:optmal soltuion}. \item The maximum approximation error of the coreset over all queries in the query set, i.e., $\max_{c\in C} \abs{1 - \frac{\sum_{p \in S} v(p)\sin^2(\frac{2\pi}{N} pc)}{\sum_{p \in P} \sin^2(\frac{2\pi}{N} pc)}}$; see Figure~\ref{fig:max error}. \end{enumerate} The results were averaged across $32$ trials. As can be seen in Figures~\ref{fig:optmal soltuion} and~\ref{fig:max error}, the coreset in this context (for the applications described in Section~\ref{sed:datasets}) encapsulates the structure of the datasets and approximates their behavior. Our coreset obtained consistently smaller approximation errors than uniform sampling in almost all of the experiments. Observe that our advantage on Dataset~(iii) is much more significant than on the others, as this dataset admits a clear periodic underlying structure. Note that in some cases the coreset is able to encapsulate the entirety of the underlying structure at small sample sizes much better than uniform sampling, due to its sensitivity sampling. This means that the optimal solution approximation error in practice can be zero; see the rightmost plot in Figure~\ref{fig:optmal soltuion}. \begin{figure*}[htb!] \includegraphics[width=\textwidth]{figures/spike-sin-fit.png} \caption{Sine fitting cost as a function of the given query. Dataset (ii) was used.} \label{fig:sine-fit-spike} \end{figure*} \begin{figure*}[htb!] \includegraphics[width=\textwidth]{figures/dogheartsinefit.png} \caption{Sine fitting cost as a function of the given query.
Dataset (iii) was used.} \label{fig:sine-fit-dog} \end{figure*} \textbf{Approximating the Sine function's shape and the probability density function of the costs.} In this experiment, we visualize the Sine fitting cost as in~\eqref{eq:our_cost} on the entire dataset over every query in $[N]$, as well as visualizing it on our coreset. As depicted in Figures~\ref{fig:sine-fit-spike} and~\ref{fig:sine-fit-dog}, the larger the coreset size, the smaller the deviation between the two functions. This shows that, in the context of Sine fitting, the coreset succeeds in retaining the structure of the data up to a provable approximation. In addition, due to the nature of our coreset construction scheme, we expect that the distribution will be approximated as well. This also can be seen in Figures~\ref{fig:sine-fit-spike} and~\ref{fig:sine-fit-dog}. Specifically, when the coreset size is small, the deviation (i.e., approximation error) between the cost of~\eqref{eq:our_cost} on the coreset and the cost of~\eqref{eq:our_cost} on the whole data will be large (theoretically and practically), with respect to any query in $[N]$. As the coreset size increases, the approximation error decreases, as the theory predicts. This phenomenon is observed throughout our experiments, and is specifically visualized in Figures~\ref{fig:sine-fit-spike} and~\ref{fig:sine-fit-dog}, where one can see that the alignment between the probability density functions with respect to the coreset and the whole data increases with the coreset size. Note that we used only $2000$ points from Dataset (iii) to generate the results presented in Figure~\ref{fig:sine-fit-dog}. \section{CONCLUSION, NOVELTY, AND FUTURE WORK} \textbf{Conclusion.} In this paper, we proved that for every integer $N>1$ and a set $P\subset [N]$ of $n>1$ integers, we can compute a coreset of size $\log^{O(1)}(N)$ for the Sine fitting problem as in~\eqref{eq:our_cost}.
Such a coreset approximates the Sine fitting cost for every query $c$ up to a $1\pm\eps$ multiplicative factor, allowing us to support streaming and distributed models. Furthermore, this result allows us to gain all the benefits of coresets (as explained in Section~\ref{sec:coresets}) while simultaneously maintaining the underlying structure that the input points form, as we showed in our experimental results. \textbf{Novelty.} The proofs are novel in the sense that they draw on techniques from fields that were not previously leveraged in the context of coresets, e.g., graph theory and trigonometry. Furthermore, to our knowledge, our paper is the first to use sensitivity to obtain a coreset for problems where the involved cost function is trigonometric, and more generally for functions with cyclic properties. We hope that it will help open the door for more coresets in this field. \textbf{Future work} includes (i) suggesting a coreset for a high dimensional input, (ii) computing and proving a lower bound on the time it takes to compute the coreset, (iii) extending our coreset construction to a generalized form of cost function as in~\citep{souders1994ieee,ramos2008new}, and (iv) discussing the applicability of such coresets in a larger context such as quantization~\citep{hong2022daq,zhou2018adaptive,park2017weighted} of deep neural networks while merging it with other compressing techniques such as pruning~\citep{liebenwein2019provable,baykal2018data} and low-rank decomposition~\citep{tukan2021no,maalouf2020deep,liebenwein2021compressing}, and/or using it as a preprocessing step for other coreset construction algorithms that require discretization constraints on the input, e.g.,~\citep{varadarajan2012near}. \label{sec:conclutions} \clearpage \section{ACKNOWLEDGEMENTS} This work was partially supported by the Israel National Cyber Directorate via the BIU Center for Applied Research in Cyber Security. \bibliographystyle{apalike}
\section{Introduction} \label{sec-intro} The notion of the \emph{dimension} of a poset $P = (X, \le)$ was introduced by Dushnik and Miller~\cite{dushnik:partially-order:}, who defined it as the least $d$ so that $P$ embeds into a product of $d$ linear orders. In particular, the dimension of a countable poset $P$ is the least $d$ so that $P$ embeds into $\mathbb{R}^d$, the definition given by Ore~\cite{ore:theory-of-graph:}. Here we consider the dimension of downsets of integer partitions and compositions. The partial order on partitions we consider is simply the one in Young's lattice, namely containment of Ferrers diagrams, and we establish the result below. \begin{theorem} \label{thm-part-dim} A downset of integer partitions is finite dimensional if and only if it does not contain every partition. \end{theorem} We go on to study the dimension of downsets of compositions under the \emph{generalized subword order}. In this order we view compositions as words over the positive integers $\mathbb{P}$, and we denote the set of these words by $\mathbb{P}^\ast$. Given two compositions $u=u(1)\cdots u(k)$ and $w=w(1)\cdots w(n)$, we say that $u$ is \emph{contained} in $w$ and write $u\le w$ if there are indices $1\le i_1<\cdots<i_k\le n$ such that $u(j)\le w(i_j)$ for all $j$. This order can be illustrated graphically by way of skyline diagrams. The \emph{skyline diagram} of the composition $w=w(1)\cdots w(n)$ consists of $n$ columns of cells, with the $i$th column having $w(i)$ cells. For compositions $u$ and $w$, we have $u\le w$ if the skyline diagram of $u$ can be embedded into that of $w$. For example, the diagrams below show that $3413\le 141421143$. 
\begin{center} \begin{tabular}{ccc} \begin{tikzpicture}[scale=.35, baseline=(current bounding box.center)] \plotskyline{3,4,1,3}; \end{tikzpicture} & \begin{tikzpicture}[baseline=(current bounding box.center)] \node at (0,0) {$\le$}; \end{tikzpicture} & \begin{tikzpicture}[scale=.35, baseline=(current bounding box.center)] \plotskylineshaded{0,3,0,4,1,0,0,3,0}; \plotskyline{1,4,1,4,2,1,1,4,3}; \end{tikzpicture} \end{tabular} \end{center} The generalized subword order on compositions has received some attention since it was first considered by Bergeron, Bousquet-M\'elou, and Dulucq~\cite{bergeron:standard-paths-:}, who studied saturated chains in this poset. Snellman~\cite{snellman:standard-paths-:} extended their work. Later, Sagan and Vatter~\cite{sagan:the-mobius-func:} determined the M\"obius function of this poset, and Bj\"orner and Sagan~\cite{bjorner:rationality-of-:comp} showed that this M\"obius function has a rational generating function. Finally, Vatter~\cite{vatter:reconstructing-:} considered the analogue of the Reconstruction Conjecture in this poset. To state the analogue of Theorem~\ref{thm-part-dim} for compositions, we need to introduce a bit more notation and extend our viewpoint to include compositions. A possibly infinite composition is represented by a word over the alphabet $\mathbb{P}\cup\{n^\omega\::\: n\in\mathbb{P}\}\cup\{\omega,\omega^\omega\}$. In such a word, $n^\omega$ stands for an infinite number of parts all equal to $n$, $\omega$ stands for an infinite part, and $\omega^\omega$ stands for an infinite number of infinite parts. Given a word $u$ over the alphabet $\mathbb{P}\cup\{n^\omega\::\: n\in\mathbb{P}\}\cup\{\omega,\omega^\omega\}$, the \emph{age} of $u$, denoted $\operatorname{Age}(u)$ is the set of all compositions which embed into it (this term dates to Fra{\"{\i}}ss\'{e}~\cite{fraisse:sur-lextension-:}). 
For example, $\operatorname{Age}(\omega\omega\omega)$ is the set of compositions with at most three parts, $\operatorname{Age}(2^\omega)$ is the set of all compositions with all parts at most two, and $\operatorname{Age}(1^\omega \omega 2131^\omega)$ consists of all compositions which embed into the skyline diagram below. \begin{center} \begin{tikzpicture}[scale=.35] \plotskyline{1,1,1,4,2,1,3,1,1,1}; \filldraw[black] (-0.75,0.5) circle [radius=0.05cm]; \filldraw[black] (-0.5, 0.5) circle [radius=0.05cm]; \filldraw[black] (-0.25,0.5) circle [radius=0.05cm]; \filldraw[black] (3.5,4.25) circle [radius=0.05cm]; \filldraw[black] (3.5, 4.5) circle [radius=0.05cm]; \filldraw[black] (3.5,4.75) circle [radius=0.05cm]; \filldraw[black] (10.25,0.5) circle [radius=0.05cm]; \filldraw[black] (10.5 ,0.5) circle [radius=0.05cm]; \filldraw[black] (10.75,0.5) circle [radius=0.05cm]; \end{tikzpicture} \end{center} We can now state our result for compositions. \begin{theorem} \label{thm-comp-dim} A downset of compositions in the generalized subword order is finite dimensional if and only if it does not contain $\operatorname{Age}(\omega\omega\omega)$, $\operatorname{Age}(1^\omega 2 1^\omega 2 1^\omega)$, $\operatorname{Age}(\omega 1^\omega \omega 1^\omega)$, or $\operatorname{Age}(1^\omega\omega 1^\omega\omega)$. \end{theorem} We also use the concept of ages in the partition setting, where the age of a word $u$ over the alphabet $\mathbb{P}\cup\{n^\omega\::\: n\in\mathbb{P}\}\cup\{\omega,\omega^\omega\}$ is the set of all (finite) integer partitions which embed into $u$. While notationally identical, it will always be clear from the context whether an age consists of partitions or compositions. Dimension is a monotone property in that the dimension of a poset is at least that of any of its subposets. Thus to show that a poset is infinite dimensional we show that it contains subposets of arbitrarily large dimension. 
In particular, we recall that the \emph{crown} on the $2n$ elements $\{a_1,\dots,a_n,b_1,\dots,b_n\}$ is the poset in which the only comparisons are of the form $a_i < b_j$ for $i\ne j$, as depicted in the Hasse diagram below. \begin{center} \begin{footnotesize} \begin{tikzpicture}[xscale=2] \hassestp{a_1, a_2, \cdots, a_n}{b_1, b_2, \cdots, b_n} \end{tikzpicture} \end{footnotesize} \end{center} It is easily seen that the crown on $2n$ elements has dimension $n$, so we refer to it as the crown of dimension $n$. To establish that the poset of all integer partitions is infinite dimensional, it suffices to find arbitrarily large crowns of partitions. One such family of crowns is defined by taking \[ a_i = (n-i)^i \quad\text{and}\quad b_i = \bigvee_{j\neq i} a_j, \] i.e., taking $a_i$ to be the partition consisting of $i$ parts equal to $n-i$ and $b_i$ to be the join (in Young's lattice) of all $a_j$ for $j\neq i$. Similarly, one direction of Theorem~\ref{thm-comp-dim} can be established by finding arbitrarily large crowns in the four stated ages. For example, we see that $\operatorname{Age}(\omega\omega\omega)$ contains the crown of dimension $n-3$ shown below for all $n\ge 5$. \begin{center} \begin{footnotesize} \begin{tikzpicture}[xscale=2] \hassestp{2 (n-2), 3 (n-3), 4 (n-4), \cdots, (n-2) 2}{1 n (n-3), 2 n (n-4), 3 n (n-5), \cdots, (n-3) n 1} \end{tikzpicture} \end{footnotesize} \end{center} A slight modification of this crown shows that $\operatorname{Age}(1^\omega 2 1^\omega 2 1^\omega)$ is infinite dimensional, as it contains the crown of dimension $n-3$ shown below for all $n\ge 5$.
\begin{center} \begin{footnotesize} \begin{tikzpicture}[xscale=2] \hassestp{1^2 2 1^{n-2}, 1^3 2 1^{n-3}, 1^4 2 1^{n-4}, \cdots, 1^{n-2} 2 1^2}{1^1 2 1^n 2 1^{n-3}, 1^2 2 1^n 2 1^{n-4}, 1^3 2 1^n 2 1^{n-5}, \cdots, 1^{n-3} 2 1^n 2 1^1} \end{tikzpicture} \end{footnotesize} \end{center} The last two ages stated in Theorem~\ref{thm-comp-dim} are isomorphic, so it suffices to show that $\operatorname{Age}(\omega 1^\omega \omega 1^\omega)$ is infinite dimensional. This age contains the crown of dimension $n-1$ shown below for all $n\ge 3$. \begin{center} \begin{footnotesize} \begin{tikzpicture}[xscale=2] \hassestp{2 1^n, 3 1^{n-1}, 4 1^{n-2}, \cdots, n 1^2}{1 1^0 n 1^{n-1}, 2 1^1 n 1^{n-2}, 3 1^2 n 1^{n-3}, \cdots, (n-1) 1^{n-2} n 1^1} \end{tikzpicture} \end{footnotesize} \end{center} Thus it suffices to prove that downsets of compositions not containing any of these four ages are finite dimensional. Note that $\operatorname{Age}(2^\omega)$ is infinite dimensional---this follows from the fact that it contains $\operatorname{Age}(1^\omega 2 1^\omega 2 1^\omega)$, or more easily by observing that it contains the crown of dimension $n$ defined by $a_i=1^{i-1}21^{n-i}$ and $b_i=2^{i-1}12^{n-i}$. Consequently, the age of any (infinite) composition which includes any symbol of the form $\omega^\omega$ or $n^\omega$ for $n\ge 2$ is necessarily infinite dimensional. Therefore when characterizing the finite dimensional ages of infinite compositions we may restrict our attention to ages of words over the alphabet $\mathbb{P}\cup\{1^\omega,\omega\}$. \section{Tools} \label{sec-tools} In this section we introduce the tools we use to establish the other directions of Theorems~\ref{thm-part-dim} and \ref{thm-comp-dim}. A poset $P$ is \emph{well quasi-ordered} if it contains neither infinite antichains nor infinite strictly decreasing chains, i.e., $x_0 > x_1 > \cdots$. We begin by recalling the following well-known result. 
\begin{theorem}[Higman's Lemma~\cite{higman:ordering-by-div:}] \label{thm-higman} If $(P,\le)$ is well quasi-ordered then $P^\ast$, the poset of words over $P$ ordered by the generalized subword order, is also well quasi-ordered. \end{theorem} As the poset of partitions is a subposet of $\mathbb{P}^\ast$ and the poset of compositions is precisely the poset $\mathbb{P}^\ast$, Higman's Lemma implies that both posets are well quasi-ordered. This allows us to appeal to the following result. \begin{proposition} \label{prop-wqo-subclasses-dcc} Downsets of well quasi-orders satisfy the \emph{descending chain condition}, i.e., there does not exist a sequence of downsets satisfying $\mathcal{C}^0\supsetneq \mathcal{C}^1\supsetneq \mathcal{C}^2\supsetneq \cdots$. \end{proposition} \begin{proof} Suppose to the contrary that the well quasi-ordered downset $\mathcal{C}$ were to contain an infinite strictly decreasing sequence of subdownsets $\mathcal{C}=\mathcal{C}^0\supsetneq \mathcal{C}^1\supsetneq \mathcal{C}^2\supsetneq\cdots$. For each $i\ge 1$, choose $x_i\in\mathcal{C}^{i-1}\setminus\mathcal{C}^i$. The set of minimal elements of $\{x_1,x_2,\ldots\}$ is an antichain and therefore finite, so there is an integer $m$ such that $\{x_1,x_2,\ldots,x_m\}$ contains these minimal elements. In particular, $x_{m+1}\ge x_i$ for some $1\le i\le m$. However, we chose $x_{m+1}\in\mathcal{C}^m\setminus\mathcal{C}^{m+1}$. Because $\mathcal{C}^i$ is a downset that does not contain $x_i$ and $x_{m+1}\ge x_i$, the element $x_{m+1}$ cannot lie in $\mathcal{C}^i$, and since $\mathcal{C}^m\subseteq\mathcal{C}^i$ it cannot lie in $\mathcal{C}^m$, a contradiction. \end{proof} Because of Proposition~\ref{prop-wqo-subclasses-dcc}, we can consider a minimal (with respect to set containment) counterexample to prove Theorems~\ref{thm-part-dim} and \ref{thm-comp-dim}.
Our next result shows that such minimal counterexamples cannot be unions of two proper subdownsets, but before proving it we need to make some more general remarks about dimension, and in particular, our approach to establishing that downsets are finite dimensional. A \emph{realizer} of the poset $P$ is a collection $\mathcal{R}$ of linear extensions of the poset such that $x \le_P y$ if and only if $x \le_L y$ for each $L \in \mathcal{R}$. Given that the elements of a realizer are extensions of the original poset, this is equivalent to saying that for each pair $x,y \in P$ of incomparable elements, there is some $L \in \mathcal{R}$ such that $y \le_L x$. A \emph{refinement} of the poset $P$ is another partial order, say $\le_R$, such that $x\le_R y$ for all pairs $x,y\in P$ with $x\le_P y$. Because every refinement can be extended to a linear extension, to establish that the dimension of the poset $P$ is at most $n$, it suffices to find a collection $\mathcal{R}$ of $n$ refinements of $P$ such that $x\le_P y$ if and only if $x\le_R y$ for each $R\in\mathcal{R}$. Frequently we go a step further than this. As every refinement of a subposet of $P$ can be extended to a linear extension of $P$, to show that $P$ has dimension at most $n$ it suffices to find a collection $\mathcal{R}$ of $n$ \emph{partial refinements} (meaning refinements of subposets of $P$) with this property. In constructing and analyzing these refinements or partial refinements, we use two additional terms. If the refinements $R_1$ and $R_2$ satisfy $x<_{R_1} y$ and $y<_{R_2} x$ (or vice versa) then we say that the pair $R_1$, $R_2$ \emph{breaks} the incomparison between $x$ and $y$. Finally, every homomorphism from a poset (or a subposet of it) to a totally ordered set (typically $\mathbb{N}$ here) induces a refinement or partial refinement on the poset. In this situation we often say that the induced refinement \emph{sorts} the objects of $P$ according to the homomorphism.
For example, a natural refinement of either the poset of partitions or the poset of compositions is the one that sorts them according to length (number of parts). \begin{proposition} \label{prop-union-dimension} Let $(P,\le)$ be a poset, and let $\mathcal{C}, \mathcal{D} \subseteq P$ be downsets of dimension $m$ and $n$ respectively. Then $\mathcal{C} \cup \mathcal{D}$ is a downset of dimension at most $m+n$. \end{proposition} \begin{proof} Certainly $\mathcal{C} \cup \mathcal{D}$ is a downset, so it suffices to show it has dimension at most $m+n$. Let $\{R_1, R_2, \dotsc, R_m\}$ and $\{S_1, S_2, \dotsc, S_n\}$ be realizers of $\mathcal{C}$ and $\mathcal{D}$ respectively. First, note that every member of $\mathcal{C}\setminus \mathcal{D}$ is incomparable with every member of $\mathcal{D}\setminus \mathcal{C}$. Define the refinements \[ R_1' = R_1 \oplus (\mathcal{D}\setminus \mathcal{C}),\,\dots,\, R_m' = R_m \oplus (\mathcal{D}\setminus \mathcal{C}) \] and \[ S_1' = S_1 \oplus (\mathcal{C}\setminus \mathcal{D}),\,\dots,\, S_n' = S_n \oplus (\mathcal{C}\setminus \mathcal{D}), \] where $A \oplus B$ is the \emph{ordinal sum} of $A$ and $B$, including all relations within both $A$ and $B$, as well as all relations of the form $a<b$ where $a \in A$ and $b \in B$. The collection $\{ R_1', \dotsc, R_m', S_1', \dotsc, S_n' \}$ realizes $\mathcal{C} \cup \mathcal{D}$, as it breaks all incomparisons between elements of $\mathcal{C}\setminus \mathcal{D}$ and $\mathcal{D}\setminus \mathcal{C}$ and realizes each of $\mathcal{C}$ and $\mathcal{D}$. This shows that $\mathcal{C} \cup \mathcal{D}$ has dimension at most $m+n$. \end{proof} We note that the hypothesis that $\mathcal{C}$ and $\mathcal{D}$ are both downsets in Proposition~\ref{prop-union-dimension} is essential, as shown by the fact that the crown of dimension $n$ can be expressed as the union of two antichains (which are thus each $2$-dimensional).
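Dimension claims for small crowns can be confirmed exhaustively. The following Python sketch (ours; the function names are ad hoc) computes the Dushnik--Miller dimension of a small poset by brute force over its linear extensions and confirms that the crown on six elements has dimension $3$.

```python
from itertools import combinations, permutations

def dimension(elements, less):
    """Brute-force Dushnik-Miller dimension of a small poset: the least
    number of linear extensions whose intersection is the given order.
    `less` is the set of strict relations (x, y) meaning x < y."""
    extensions = []
    for perm in permutations(elements):
        pos = {x: i for i, x in enumerate(perm)}
        if all(pos[x] < pos[y] for x, y in less):
            extensions.append(pos)
    incomparable = [(x, y) for x in elements for y in elements
                    if x != y and (x, y) not in less and (y, x) not in less]
    for k in range(1, len(elements) + 1):
        for chosen in combinations(extensions, k):
            # a realizer must put y below x in some extension, for every
            # ordered incomparable pair (x, y)
            if all(any(pos[y] < pos[x] for pos in chosen)
                   for x, y in incomparable):
                return k
    raise ValueError("no realizer found")

def crown(n):
    """The crown on a_1,...,a_n, b_1,...,b_n with a_i < b_j iff i != j."""
    elements = [("a", i) for i in range(n)] + [("b", i) for i in range(n)]
    less = {(("a", i), ("b", j))
            for i in range(n) for j in range(n) if i != j}
    return elements, less

assert dimension(*crown(3)) == 3  # the crown on six elements
```

The same function verifies, for instance, that a two-element antichain has dimension $2$ and a chain has dimension $1$.
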
The downsets of compositions which are not unions of proper subdownsets are precisely the ages, as shown by the following theorem of Fra{\"{\i}}ss\'{e} (which we have specialized to our contexts here). This result implies that it suffices to prove Theorems~\ref{thm-part-dim} and \ref{thm-comp-dim} for ages. \begin{theorem}[Fra{\"{\i}}ss\'{e}~\cite{fraisse:sur-lextension-:}] \label{thm-atomic-tfae} The following are equivalent for a downset $\mathcal{C}$ of integer partitions or compositions: \begin{enumerate} \item[(1)] $\mathcal{C}$ cannot be expressed as the union of two proper subdownsets, \item[(2)] $\mathcal{C}$ satisfies the \emph{joint embedding property} meaning that for every $a,b\in\mathcal{C}$ there is some $c\in\mathcal{C}$ such that $a,b\le c$, and \item[(3)] $\mathcal{C}=\operatorname{Age}(u)$ for some word $u\in\left(\mathbb{P}\cup\{n^\omega\::\: n\in\mathbb{P}\}\cup\{\omega,\omega^\omega\}\right)^\ast$. \end{enumerate} \end{theorem} We conclude this section by providing the only specific dimension results of the paper. To realize the downset $\operatorname{Age}(\omega\omega)$ of compositions, we use a pair of linear extensions $L_1$ and $L_2$ and a refinement $R_3$. The first, $L_1$, orders compositions according to the \emph{shortlex order}, which sorts compositions first by their length, and within each length sorts compositions according to the lexicographical ordering. The second, $L_2$, orders compositions according to the \emph{shortcolex order}, which sorts compositions first by their length, and within each length sorts compositions according to the colexicographical ordering (lexicographical order, but sorting from right to left). Lastly, the refinement $R_3$ sorts compositions first by their largest part and then by their second largest part. Note that this sometimes leaves a composition and its reverse incomparable, so $R_3$ is not a linear extension.
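On small instances, the interplay of these three orders with containment can be checked by brute force. The Python sketch below (ours; the bound on part sizes is an arbitrary truncation, since the age itself is infinite) verifies that for all compositions with at most two parts, each at most $5$, containment holds exactly when it holds simultaneously in the shortlex order, the shortcolex order, and the largest-parts refinement.

```python
from itertools import product

def contains(u, w):
    """Greedy test for u <= w in the generalized subword order."""
    i = 0
    for part in u:
        while i < len(w) and w[i] < part:
            i += 1
        if i == len(w):
            return False
        i += 1
    return True

N = 5  # truncation bound on part sizes
comps = ([()] + [(x,) for x in range(1, N + 1)]
         + [(x, y) for x in range(1, N + 1) for y in range(1, N + 1)])

def shortlex(c):
    return (len(c), c)               # length first, then lexicographic

def shortcolex(c):
    return (len(c), c[::-1])         # length first, then colexicographic

def largest_parts(c):
    # the pair (largest part, second largest part), padding with zeros
    return tuple(sorted(c, reverse=True) + [0, 0])[:2]

# containment should hold exactly when all three key comparisons hold
for u, w in product(comps, repeat=2):
    refined = (shortlex(u) <= shortlex(w)
               and shortcolex(u) <= shortcolex(w)
               and largest_parts(u) <= largest_parts(w))
    assert contains(u, w) == refined
```
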
These three refinements constitute a realizer of $\operatorname{Age}(\omega\omega)$, implying that the dimension of $\operatorname{Age}(\omega\omega)$ is at most $3$. Observing that this age contains the crown of dimension $3$ below allows us to conclude that the dimension of $\operatorname{Age}(\omega\omega)$ equals $3$. \begin{center} \begin{footnotesize} \begin{tikzpicture}[xscale=2] \hassestp{21,12,3}{13,31,22} \end{tikzpicture} \end{footnotesize} \end{center} \begin{proposition} The dimension of $\operatorname{Age}(\omega\omega)$ is $3$. \end{proposition} Similar methods can be applied to show that the dimension of $\operatorname{Age}(\omega 1^\omega)$ is $2$, and that the dimensions of $\operatorname{Age}(1^\omega \omega 1^\omega)$, $\operatorname{Age}(\omega\omega 1^\omega)$, and $\operatorname{Age}(\omega1^\omega\omega)$ are each $4$. \section{Partitions} \label{sec-partitions} Having observed in the introduction that the poset of all integer partitions is infinite dimensional, Theorem~\ref{thm-part-dim} will follow once we show that all proper downsets of partitions are finite dimensional. By Theorem~\ref{thm-atomic-tfae}, every proper downset of partitions can be written as a finite union of ages of the form $\operatorname{Age}(u)$ for some word $u\in \left(\mathbb{P}\cup\{n^\omega\::\: n\in\mathbb{P}\}\cup\{\omega,\omega^\omega\}\right)^\ast$. Because the downset is proper, none of these words contains the letter $\omega^\omega$, and because the parts of partitions are weakly decreasing, each such age is contained in an age of the form $\operatorname{Age}(\omega^k\lambda\ell^\omega)$ for nonnegative integers $k$ and $\ell$ and a finite partition $\lambda$ whose parts are greater than $\ell$. The Ferrers diagram of the possibly infinite partition $\omega^k \lambda \ell^\omega$ is shown below.
\begin{center} \begin{tikzpicture}[scale=0.5] \draw (0,-10) -- ( 0, 0) -- (10,0); \draw (3,-10) -- ( 3,-6); \draw (0, -3) -- (10,-3); \draw (0, -6) -- (3.5,-6) -- (3.5,-5) -- (4,-5) -- (4,-4.5) -- (4.5,-4.5) -- (4.5,-3.5) -- (6,-3.5) -- (6,-3); \draw[<->] (1.9,-0.25) -- (1.9,-2.75); \draw[<->] (0.25,-8.1) -- (2.75,-8.1); \node at (1.5,-1.5) {$k$}; \node[anchor=east] at (10.4,-1.5) {$\cdots$}; \node at (1.5,-4.5) {$\lambda$}; \node at (1.5,-7.5) {$\ell$}; \node at (1.5,-9.4) {$\vdots$}; \end{tikzpicture} \end{center} By Proposition~\ref{prop-union-dimension}, it suffices to show that each such age is finite dimensional. We see that $\operatorname{Age}(\omega^k\lambda\ell^\omega)$ is isomorphic (as a poset) to the product $\operatorname{Age}(\omega^k \lambda) \times \operatorname{Age}(\ell^\omega)$. The first of these ages is finite dimensional because it is isomorphic to a subposet of $\mathbb{N}^{k+|\lambda|}$ where $|\lambda|$ denotes the length (number of parts) of $\lambda$. The second of these ages is finite dimensional because it is isomorphic to $\operatorname{Age}(\omega^\ell)$, via conjugation, and that age is in turn isomorphic to a subposet of $\mathbb{N}^{\ell}$. Thus the dimension of $\operatorname{Age}(\omega^k \lambda \ell^\omega)$ is at most $k+\ell+|\lambda|$. This completes the proof of Theorem~\ref{thm-part-dim}. \section{Compositions} \label{sec-proof} We have shown in Section~\ref{sec-intro} that $\operatorname{Age}(\omega\omega\omega)$, $\operatorname{Age}(1^\omega 2 1^\omega 2 1^\omega)$, $\operatorname{Age}(\omega 1^\omega \omega 1^\omega)$, and $\operatorname{Age}(1^\omega\omega 1^\omega\omega)$ are infinite dimensional, and in Section~\ref{sec-tools} we showed that it suffices to show that the maximal ages not containing the four distinguished infinite dimensional ages are finite dimensional. 
The two types of these maximal ages are those of the forms $\operatorname{Age}(a\omega b 1^\omega c 1^\omega d \omega e)$ and $\operatorname{Age}(a 1^\omega b \omega c \omega d 1^\omega e)$ for finite compositions $a$, $b$, $c$, $d$, and $e$. We establish the finite dimensionality of these two types of ages with a series of results. Our first such result implies that we may assume $a$ and $e$ are empty. \begin{proposition} \label{prop-ku} If $\operatorname{Age}(u)$ is finite dimensional for $u\in\left(\mathbb{P}\cup\{1^\omega,\omega\}\right)^\ast$, then $\operatorname{Age}(k u)$ is finite dimensional for all $k\in\mathbb{N}$. \end{proposition} \begin{proof} We proceed by induction on $k$. The base case of $k=0$ is tautological, so let $k \in \mathbb{P}$ be given, and assume $\operatorname{Age}((k-1)u)$ is finite dimensional. Let $A = \operatorname{Age}(u)$, let $B = \operatorname{Age}(ku) \setminus A$, and for each $1 \le j \le k$, define \[ A_j = \{ j a \in A \} \quad\text{and}\quad B_j = \{ j a \in B \}, \] as well as $A_{>k} = \{\ell a \in A \::\: \ell > k\}$. By induction, $A \cup B_j$ is finite dimensional for each $1 \le j \le k-1$. Furthermore, $B$ is finite dimensional as it is isomorphic to a subposet of $\mathbb{N} \times A$. Therefore it suffices to show that $A_j \cup B_k$ and $A_{>k} \cup B_k$ are finite dimensional for each $1 \le j \le k$. Fix $1 \le j \le k$. Given $j a_1 \in A_j$ and $k a_2 \in B_k$, we have $j a_1 \le k a_2$ if and only if $a_1 \le a_2$. For this reason, we define \[ A_j' = \{ a \::\: j a \in A_j \} \quad\text{and}\quad B_k' = \{ a \::\: k a \in B_k \}, \] and consider a realizer $\{L_1, \dots, L_n\}$ of $A_j' \cup B_k'$, a poset which is finite dimensional as it is contained in $A$. For each $1 \le i \le n$, we expand $L_i$ into a linear extension $\hat{L}_i$ of a set containing $A_j \cup B_k$. To do so, we replace the instance of each composition $v$ in $L_i$ with the two-element chain $\{jv, kv\}$.
If $j a_1 \in A_j$ and $k a_2 \in B_k$ with $j a_1 \not\le k a_2$, then $a_1 \not\le a_2$. Thus $a_2$ precedes $a_1$ in some $L_i$, meaning $k a_2$ precedes $j a_1$ in $\hat{L}_i$. Lastly, given $\ell a_1 \in A_{>k}$ and $k a_2 \in B_k$, we have $\ell a_1 \le k a_2$ if and only if $\ell a_1 \le a_2$. Let $\{R_1, \dots, R_m\}$ be a realizer of $A_{>k} \cup B_k'$, a poset which is finite dimensional as it is contained in $A$. For each $1 \le i \le m$, we expand $R_i$ into a linear extension $\hat{R}_i$ of $A_{>k} \cup B_k$. To do so, we replace the instance of $a \in B_k'$ in $R_i$ with $k a$. If $\ell a_1 \in A_{>k}$ and $k a_2 \in B_k$ with $\ell a_1 \not\le k a_2$, then $\ell a_1 \not\le a_2$. Thus $a_2$ precedes $\ell a_1$ in some $R_i$, meaning $k a_2$ precedes $\ell a_1$ in $\hat{R}_i$. \end{proof} By applying Proposition~\ref{prop-ku} twice, we obtain the following. \begin{corollary} For all compositions $a$ and $b$, both $\operatorname{Age}(a 1^\omega b)$ and $\operatorname{Age}(a \omega b)$ are finite dimensional. \end{corollary} The proof of our next result is more complicated. \begin{proposition} \label{prop-1oa1o} For all compositions $c$, $\operatorname{Age}(1^\omega c 1^\omega)$ is finite dimensional. \end{proposition} \begin{proof} We partition the age of interest into a finite collection of intervals and then construct a family of linear extensions which break the incomparisons between these intervals. These intervals are $[a,1^\omega a 1^\omega) = \{d \in \operatorname{Age}(1^\omega a 1^\omega) \::\: d \ge a\}$ for each $a = a(1) \cdots a(m) \in \operatorname{Age}(c)$, where the first and last parts of $a$ are at least $2$. Each such interval is itself finite dimensional as it is isomorphic to $\mathbb{N}^2$. Let $\mathcal{R}$ denote the (finite) collection of linear extensions realizing each $[a,1^\omega a 1^\omega)$. It suffices to consider the union of a pair of such intervals.
Let $a,b \le c$ where $a=a(1) \cdots a(m)$ and $b=b(1)\cdots b(n)$ have the property that the first and last parts of each $a$ and $b$ are at least $2$. Note that there are only finitely many such pairs $a,b$ because $c$ is a finite composition. First, if $a$ and $b$ are such that $a \not\le b$, then none of the elements of $[b,1^\omega b 1^\omega)$ embed into any of the elements of $[a,1^\omega a 1^\omega)$, and these incomparisons can be broken with the refinement $[a,1^\omega a 1^\omega)\oplus [b,1^\omega b 1^\omega)$. Let $\mathcal{S}$ be the (finite) collection of these refinements for each $a,b$ with $a \not\le b$. This leaves us to consider the case where $a$ and $b$ are comparable with $a<b$, and the only incomparisons left to break are those of the form $1^i a 1^j \not\le 1^k b 1^\ell$. The bulk of the proof consists of contending with the fact that $a$ may have several embeddings into $b$. Of these, it suffices to consider the \emph{compact} embeddings, meaning those which cannot be shrunk. More precisely, let $\alpha_1<\cdots<\alpha_q$ denote the beginnings of these compact embeddings and $\beta_1<\cdots<\beta_q$ denote the ends. Because these are embeddings, for all $p$ we have \begin{align*} a &\le b(\alpha_p) b(\alpha_p+1)\cdots b(\beta_p),\\ \intertext{and because they are compact, we have both} a &\not\le b(\alpha_p+1) b(\alpha_p+2)\cdots b(\beta_p),\\ a &\not\le b(\alpha_p) b(\alpha_p+1)\cdots b(\beta_p-1). \end{align*} Consider an incomparison between elements of these two intervals, $1^i a 1^j \not\le 1^k b 1^\ell$. This means that, in $\mathbb{N}^2$, we have incomparisons of the form \[ (i,j)\not\le (k+\alpha_p-1, \ell+n-\beta_p) \] for each $1\le p\le q$. The set of points $\{(k+\alpha_p-1, \ell+n-\beta_p): 1\le p\le q\}$ is an antichain in $\mathbb{N}^2$ that lies weakly above and to the right of $(k,\ell)$ in the plane, as shown on the left of Figure~\ref{fig-tile-plane}. 
\NewDocumentCommand{\tilingshift}{ s m m }{ \absdothollow{(#2,#3)} \IfBooleanT{#1}{\node[anchor=north] at (#2,#3) {$(k,\ell)$};} \draw [white, line cap = round, fill = lightgray] (2+#2,6+#3) -- (2+#2,5+#3) -- (4+#2,5+#3) -- (4+#2,3+#3) -- (6+#2,3+#3) -- (6+#2,2+#3) -- (7+#2,2+#3) -- (7+#2,6+#3) -- (2+#2,6+#3); \draw [dotted, thick, line cap = round] (2+#2,6+#3) -- (2+#2,5+#3) -- (4+#2,5+#3) -- (4+#2,3+#3) -- (6+#2,3+#3) -- (6+#2,2+#3) -- (7+#2,2+#3); \draw [thick, line cap = round] (2+#2,6+#3) -- (7+#2,6+#3) -- (7+#2,2+#3); \absdot{(2+#2,6+#3)} \absdot{(4+#2,5+#3)} \absdot{(6+#2,3+#3)} \absdot{(7+#2,2+#3)} \IfBooleanT{#1}{\node [above left] at (6.6+#2,4+#3) {$T_{k,\ell}$};} } \begin{figure} \begin{footnotesize} \begin{center} \begin{tikzpicture}[scale=0.34] \draw[thick, <->] (0,11) -- (0,0) -- (12,0); \absdothollow{(3,2)} \node[anchor=north] at (3,2) {$(k,\ell)$}; \absdot{(5,8)} \absdot{(7,7)} \absdot{(9,5)} \absdot{(10,4)} \node[anchor=south] at (5,8) {$(k+\alpha_1-1,\ell+n-\beta_1)$}; \node[anchor=north] at (10,4) {$(k+\alpha_4-1,\ell+n-\beta_4)$}; \end{tikzpicture} \quad\quad \begin{tikzpicture}[scale=0.34] \draw[thick, <->] (0,11) -- (0,0) -- (12,0); \tilingshift*{3}{2} \end{tikzpicture} \quad\quad \begin{tikzpicture}[scale=0.17] \draw[thick, <->] (0,22) -- (0,0) -- (24,0); \draw[dotted] ( 2,0) -- ( 2,22); \draw[dotted] (11,0) -- (11,22); \draw[dotted] (20,0) -- (20,22); \draw[dotted] (0, 1) -- (24, 1); \draw[dotted] (0, 9) -- (24, 9); \draw[dotted] (0,17) -- (24,17); \tilingshift{3}{2} \tilingshift{3}{10} \tilingshift{12}{2} \tilingshift{12}{10} \end{tikzpicture} \end{center}
\end{footnotesize} \caption{(Left) A point $(k,\ell)$ representing $1^k b 1^\ell$ together with associated points representing the minimal compositions of the form $1^i a 1^j$ which do not embed into $1^k b 1^\ell$. (Center) A point $(k,\ell)$ representing $1^k b 1^\ell$ and its associated set $T_{k,\ell}$ representing compositions of the form $1^i a 1^j$. (Right) The shaded regions indicate part of a family of compositions included in one refinement constructed at the end of the proof of Proposition~\ref{prop-1oa1o}.} \label{fig-tile-plane} \end{figure} We now introduce two refinements of $[a,1^\omega a 1^\omega)\cup [b,1^\omega b 1^\omega)$. The first sorts compositions by the largest $r$ such that $1^ra$ is contained in them, while the second sorts compositions by the largest $s$ such that $a1^s$ is contained in them. For a given $k$ and $\ell$, these two refinements break all incomparisons of the form $1^i a 1^j\not\le 1^k b 1^\ell$ where $i > k+\alpha_q-1$ or $j>\ell+n-\beta_1$. Still thinking of $k$ and $\ell$ as fixed, this leaves us with a finite set of incomparisons of the form $1^i a 1^j \not\le 1^k b 1^\ell$ to break, as illustrated in the center of Figure~\ref{fig-tile-plane}. Let $T_{k,\ell}$ denote the finite set of compositions of the form $1^i a 1^j$ whose incomparisons with $1^k b 1^\ell$ have not been dealt with. Thus $T_{k,\ell}$ is the set \[ \{ 1^i a 1^j \::\: \text{$(i,j) \le (k + \alpha_q - 1, \ell + n - \beta_1)$ and $(i,j) \not\le (k+\alpha_p-1,\ell+n-\beta_p)$ for all $1 \le p \le q$}\}. \] We identify each composition $1^i a 1^j\in T_{k,\ell}$ with the point $(i,j)$ in the plane. Thus the points corresponding to the compositions in $T_{k,\ell}$ are contained in the rectangle \[ [k, k+\alpha_q-1] \times [\ell, \ell+n-\beta_1]. \] Given $k,\ell$, we define a refinement $R_{k,\ell}$ of $\{1^k b 1^\ell\} \cup T_{k,\ell}$ in which $1^k b 1^\ell$ is less than each element of $T_{k,\ell}$. 
All that remains is to combine the collection of refinements $R_{k,\ell}$ into finitely many refinements of $[a,1^\omega a 1^\omega)\cup [b,1^\omega b 1^\omega)$. We achieve this by partitioning $\mathbb{N}^2$ into equivalence classes with respect to the equivalence relation $(k,\ell) \sim (k',\ell')$ if $k \equiv k' \pmod{\alpha_q}$ and $\ell \equiv \ell' \pmod{n-\beta_1+1}$. We further write $[(k,\ell)]$ to denote the equivalence class containing $(k,\ell)$. Note that there are only finitely many such equivalence classes. The motivation for this equivalence relation is that if $(k,\ell)\sim (k',\ell')$ then the relations defined by $R_{k,\ell}$ and $R_{k',\ell'}$ do not conflict. Thus for any $(k,\ell) \in \mathbb{N}^2$, all of the relations \[ \bigcup_{(k',\ell') \in [(k,\ell)]} R_{k',\ell'} \] can be combined into a single refinement. The compositions involved in one such refinement are drawn on the right of Figure~\ref{fig-tile-plane}. As there are only finitely many such equivalence classes in $\mathbb{N}^2$, and only finitely many pairs $a,b$ with $a \le b \le c$, this (finite) set of refinements, together with the refinements of $\mathcal{R}$ and $\mathcal{S}$, realizes $\operatorname{Age}(1^\omega c 1^\omega)$, completing the proof. \end{proof} With Proposition~\ref{prop-1oa1o} established, showing that ages of the forms $\operatorname{Age}(\omega a 1^\omega b 1^\omega c \omega)$ and $\operatorname{Age}(1^\omega a \omega b \omega c 1^\omega)$ are finite dimensional is accomplished by first proving that ages of the forms $\operatorname{Age}(\omega a 1^\omega b 1^\omega)$ and $\operatorname{Age}(1^\omega a \omega b 1^\omega)$ are finite dimensional. Each of these steps relies on Proposition~\ref{prop-1oa1o}. \begin{proposition} \label{prop-1oa1obo} For all compositions $a$ and $b$, $\operatorname{Age}(\omega a 1^\omega b 1^\omega)$ is finite dimensional.
\end{proposition} \begin{proof} Let $m$ denote the maximum entry in $a$ or $b$ and let $\widebar{m} = m + 1$. By Propositions~\ref{prop-ku} and \ref{prop-1oa1o} we have that $\operatorname{Age}(\widebar{m} a 1^\omega b 1^\omega)$ is finite dimensional, so let $\{L_1, \dots, L_n\}$ be a realizer of it. For each $1 \le i \le n$, we expand $L_i$ into a linear extension $\hat{L}_i$ of $\operatorname{Age}(\omega a 1^\omega b 1^\omega)$. To do so, we replace the instance of $\widebar{m} x$ in $L_i$ with the linearly ordered interval $[\widebar{m} x, \omega x)$. The only incomparisons yet to be handled are those of the form $k_1 x_1 \not\le k_2 x_2$ where $\widebar{m} x_1 \le \widebar{m} x_2$ and $k_1 > k_2 \ge \widebar{m}$. These are fixed by including a single refinement which sorts elements of $\operatorname{Age}(\omega a 1^\omega b 1^\omega)$ by their largest entry. \end{proof} \begin{proposition} \label{prop-oa1ob1oco} For all compositions $a$, $b$, and $c$, $\operatorname{Age}(\omega a 1^\omega b 1^\omega c \omega)$ is finite dimensional. \end{proposition} \begin{proof} We proceed by defining six sets, each of which is finite dimensional and whose union is the age of interest, and then construct a family of refinements which break the incomparisons between the sets. Let $m$ denote the maximum entry in $a$, $b$, or $c$, let $\widebar{m} = m+1$, let $\bar{\mbar} = \widebar{m}+1$, and define \[ \begin{array}{rlcl} &A &=& [\varepsilon, m a 1^\omega b 1^\omega c m),\\ &B_1 &=& [\widebar{m}, m a 1^\omega b 1^\omega c \widebar{m}),\\ &B_2 &=& [m, m a 1^\omega b 1^\omega c m),\\ &C_1 &=& [\widebar{m} \widebar{m}, \omega a 1^\omega b 1^\omega c \widebar{m}),\\ &C_2 &=& [\widebar{m} \widebar{m}, \widebar{m} a 1^\omega b 1^\omega c \omega),\\ &D &=& [\bar{\mbar} \bar{\mbar}, \omega a 1^\omega b 1^\omega c \omega).
\end{array} \] Now, the complement of $D$, \begin{align*} \operatorname{Age}(\omega a 1^\omega b 1^\omega c \omega) \setminus D &= A \cup B_1 \cup B_2 \cup C_1 \cup C_2 \\ &= \operatorname{Age}(\omega a 1^\omega b 1^\omega c \widebar{m}) \cup \operatorname{Age}(\widebar{m} a 1^\omega b 1^\omega c \omega) \end{align*} is finite dimensional by Propositions~\ref{prop-union-dimension}, \ref{prop-ku}, and \ref{prop-1oa1obo}. Also, $C_1 \cup C_2 \cup D$ is finite dimensional as it is isomorphic to a subposet of $\mathbb{N} \times \operatorname{Age}(a 1^\omega b1^\omega c) \times \mathbb{N}$. Thus it suffices to show that the incomparisons between $A\cup B_1 \cup B_2$ and $D$ can be broken with finitely many refinements. Let $\{L_1, \dots, L_n\}$ be a realizer for $\operatorname{Age}(\widebar{m} a 1^\omega b 1^\omega c \widebar{m})$. For each $1 \le i \le n$, we expand $L_i$ into a refinement $\hat{L}_i$ of $\operatorname{Age}(\omega a 1^\omega b 1^\omega c \omega)$. To do so, for each $v$, we replace the instance of $\widebar{m} v \widebar{m}$ in $L_i$ with the interval $[\widebar{m} v \widebar{m}, \omega v \omega)$. If $u \in A \cup B_1 \cup B_2$ and $k_1 v k_2 \in D$ for integers $k_1, k_2 \ge \bar{\mbar}$ and $u \not\le k_1 v k_2$, then $u \not\le \widebar{m} v\widebar{m}$, so $\widebar{m} v \widebar{m}$ is less than $u$ in some $L_i$, and thus $k_1 v k_2$ is less than $u$ in $\hat{L}_i$. This completes the proof. \end{proof} The proofs of our next two results are very similar to those of Propositions~\ref{prop-1oa1obo} and \ref{prop-oa1ob1oco}. \begin{proposition} \label{prop-1oaob1o} For all compositions $a$ and $b$, $\operatorname{Age}(1^\omega a \omega b 1^\omega)$ is finite dimensional. \end{proposition} \begin{proof} Let $m$ denote the maximum entry in $a$ or $b$ and let $\widebar{m} = m + 1$. 
By Proposition~\ref{prop-1oa1o} we have that $\operatorname{Age}(1^\omega a \widebar{m} b 1^\omega)$ is finite dimensional, so let $\{L_1, \dots, L_n\}$ be a realizer of it. For each $1 \le i \le n$, we expand $L_i$ into a linear extension $\hat{L}_i$ of $\operatorname{Age}(1^\omega a \omega b 1^\omega)$. To do so, we replace the instance of $x \widebar{m} y$ in $L_i$ with the linearly ordered interval $[x \widebar{m} y, x \omega y)$. The only incomparisons yet to be handled are those of the form $x_1 k_1 y_1 \not\le x_2 k_2 y_2$ where $x_1 \widebar{m} y_1 \le x_2 \widebar{m} y_2$ and $k_1 > k_2 \ge \widebar{m}$. These are fixed by including a single refinement which sorts elements of $\operatorname{Age}(1^\omega a \omega b 1^\omega)$ by their largest entry. \end{proof} \begin{proposition} \label{prop-1oaoboc1o} For all compositions $a$, $b$, and $c$, $\operatorname{Age}(1^\omega a \omega b \omega c 1^\omega)$ is finite dimensional. \end{proposition} \begin{proof} Let $m$ denote the maximum entry in $a$, $b$, or $c$, let $\widebar{m} = m+1$, let $\bar{\mbar} = \widebar{m}+1$, and define \[ \begin{array}{rlcl} &A &=& [\varepsilon, 1^\omega a m b m c 1^\omega),\\ &B_1 &=& [\widebar{m}, 1^\omega a m b \widebar{m} c 1^\omega),\\ &B_2 &=& [\widebar{m}, 1^\omega a \widebar{m} b m c 1^\omega),\\ &C_1 &=& [\widebar{m} \widebar{m}, 1^\omega a \omega b \widebar{m} c 1^\omega),\\ &C_2 &=& [\widebar{m} \widebar{m}, 1^\omega a \widebar{m} b \omega c 1^\omega),\\ &D &=& [\bar{\mbar} \bar{\mbar}, 1^\omega a \omega b \omega c 1^\omega). \end{array} \] The complement of $D$, \begin{align*} \operatorname{Age}(1^\omega a \omega b \omega c 1^\omega) \setminus D &= A \cup B_1 \cup B_2 \cup C_1 \cup C_2 \\ &= \operatorname{Age}(1^\omega a \widebar{m} b \omega c 1^\omega) \cup \operatorname{Age}(1^\omega a \omega b \widebar{m} c 1^\omega) \end{align*} is finite dimensional by Propositions~\ref{prop-union-dimension}, \ref{prop-ku}, and \ref{prop-1oaob1o}.
Also, $C_1 \cup C_2 \cup D$ is finite dimensional as it is isomorphic to a subposet of $\operatorname{Age}(1^\omega a) \times \mathbb{N} \times \operatorname{Age}(b) \times \mathbb{N} \times \operatorname{Age}(c 1^\omega)$. Thus it suffices to show that the incomparisons between $A\cup B_1 \cup B_2$ and $D$ can be broken with finitely many refinements. Let $\{L_1, \dots, L_n\}$ be a realizer for $\operatorname{Age}(1^\omega a \widebar{m} b \widebar{m} c 1^\omega)$. For each $1 \le i \le n$, we expand $L_i$ into a refinement $\hat{L}_i$ of $A \cup B_1 \cup B_2 \cup D$. To do so, for each $x,y,z$, we replace the instance of $x \widebar{m} y \widebar{m} z$ in $L_i$ with the interval $[x \widebar{m} y \widebar{m} z, x \omega y \omega z)$. If $u \in A \cup B_1 \cup B_2$ and $x k_1 y k_2 z \in D$ with $k_1, k_2 \ge \bar{\mbar}$ satisfy $u \not\le x k_1 y k_2 z$, then $u \not\le x \widebar{m} y \widebar{m} z$. Thus $x \widebar{m} y \widebar{m} z$ is less than $u$ in some $L_i$, and so $x k_1 y k_2 z$ is less than $u$ in $\hat{L}_i$. This completes the proof. \end{proof} With Propositions~\ref{prop-oa1ob1oco} and \ref{prop-1oaoboc1o} established, we note that the proof of Theorem~\ref{thm-comp-dim} is complete, given the remarks at the beginning of Section~\ref{sec-proof}. \section{Concluding Remarks} \label{sec-conclusion} Theorems~\ref{thm-part-dim} and \ref{thm-comp-dim} characterize the finite dimensional downsets in the posets of integer partitions and compositions, respectively. There are several similar contexts in which the analogous questions have yet to be considered. One such context is the poset of permutations under the permutation pattern order. We refer to the second author's survey~\cite{vatter:permutation-cla:} for more information on this order. A related example is the poset of set partitions, first studied by Klazar~\cite{klazar:counting-patter:2,klazar:counting-patter:1,klazar:on-abab-free-an:} and Sagan~\cite{sagan:pattern-avoidan:}. 
Another natural context would be the generalized subword order over an arbitrary poset $P$, a context where McNamara and Sagan~\cite{mcnamara:the-mobius-func:} have recently determined the M\"obius function. Indeed, even the special case of words over a two-element antichain appears to be untouched. \bibliographystyle{acm}
\section{Introduction} Question answering (QA) has been a blooming research field over the last decade. Selection-based QA refers to a family of tasks that find answer contexts from large data given questions in natural language. Three tasks have been proposed for selection-based QA. Given a document, \textit{answer extraction}~\cite{shen2006exploring,sultan2016joint} finds answer phrases, whereas \textit{answer selection}~\cite{wang:07a,yih2013question,yu:14a,wang:16a} and \textit{answer triggering}~\cite{yang:15a,jurczyk:16} find answer sentences instead; unlike the other two tasks, answer triggering does not assume that the answer context is present in the provided document. Recently, various QA tasks that are not selection-based have been proposed~\cite{reddy2006dialogue,hosseini2014learning,jauhar-turney-hovy:2016:P16-1,sachan-dubey-xing:2016:P16-2}; however, selection-based QA remains important because of its practical value to real applications (e.g., IBM Watson, MIT \textsc{Start}). \noindent Several datasets have been released for selection-based QA. \newcite{wang:07a} created the \textsc{QASent} dataset consisting of 277 questions, which has been widely used for benchmarking the answer selection task. \newcite{feng:15a} presented \textsc{InsuranceQA} comprising 16K+ questions on insurance contexts. \newcite{yang:15a} introduced \WikiQA\ for answer selection and triggering. \newcite{jurczyk:16} created \SelQA\ for large-scale answer triggering. \newcite{rajpurkar2016squad} presented \SQuAD\ for answer extraction and selection as well as for reading comprehension. Finally, \newcite{morales-EtAl:2016:EMNLP2016} provided \InfoQA\ for answer selection. These corpora make it possible to evaluate the robustness of statistical learning approaches to question answering. 
Although all of these corpora target selection-based QA, they are designed for different purposes, so it is important to understand the nature of these corpora in order to make better use of them. In this paper, we make both intrinsic and extrinsic analyses of four of the latest corpora based on Wikipedia: \WikiQA, \SelQA, \SQuAD, and \InfoQA. We first give a thorough intrinsic analysis regarding contextual similarities, question types, and answer categories (Section~\ref{sec:intrinsic-analysis}). We then map questions in all corpora to the current version of English Wikipedia and benchmark another selection-based QA task, answer retrieval (Section~\ref{sec:answer-retrieval}). Finally, we present an extrinsic analysis through a set of experiments cross-testing these corpora using a convolutional neural network architecture (Section~\ref{sec:extrinsic-analysis}).\footnote{All our resources are publicly available at \url{anonymous_url}} \begin{table*}[htbp!] \centering\small \resizebox{\textwidth}{!}{ \begin{tabular}{c||c|c|c|c} & \multicolumn{1}{c|}{\textbf{\textsc{WikiQA}}} & \multicolumn{1}{c|}{\textbf{\textsc{SelQA}}} & \multicolumn{1}{c|}{\textbf{\textsc{SQuAD}}} & \multicolumn{1}{c}{\textbf{\textsc{InfoboxQA}}} \\ \hline\hline Source & Bing search queries & Crowdsourced & Crowdsourced & Crowdsourced \\ Year & 2015 & 2016 & 2016 & 2016 \\ (AE, AS, AT) & (O, O, O) & (X, O, O) & (O, O, X) & (X, O, X) \\ $(q,c,\nicefrac{c}{q})$ & $(\numprint{1242},\:\numprint{12153},\:9.79)$ & $(\numprint{7904},\:\numprint{95250},\:12.05)$ & $(\textbf{\numprint{98202}},\:\numprint{496167},\:5.05)$ & $(\numprint{15271},\:\numprint{271038},\:\textbf{17.75})$ \\ $(w, t)$ & $(\numprint{386440},\:\numprint{30191})$ & $(\numprint{3469015},\:\numprint{44099})$ & $(\numprint{19445863},\:\textbf{\numprint{115092}})$ & $(\numprint{5034625},\:\numprint{8323})$ \\ $(\mu_q, \mu_c)$ & $(6.44,\:25.36)$ & $(11.11,\:25.31)$ & $(11.33,\:27.86)$ & $(9.35,\:9.22)$ \\ $(\Omega_q, \Omega_a, 
\Omega_{f})$ & $(46.72,\:\textbf{11.05},\:16.96)$ & $(32.79,\:16.98,\:20.19)$ & $(32.27,\:12.15,\:\textbf{16.54})$ & $(\textbf{26.80},\:35.70,\:28.09)$ \\ \end{tabular}} \caption{\small Comparisons between the four corpora for answer selection. Note that both \WikiQA\ and \SelQA\ provide separate annotation for answer triggering, which is not shown in this table. The \SQuAD\ column shows statistics excluding the evaluation set, which is not publicly available. AE/AS/AT: annotation for answer extraction/selection/triggering, $q$/$c$: \# of questions/answer candidates, $w$/$t$: \# of tokens/token types, $\mu_{q/c}$: average length of questions/answer candidates, $\Omega_{q/a}$: macro average in \% of overlapping words between question-answer pairs normalized by the questions/answers lengths, $\Omega_{f}$: $\nicefrac{(2 \cdot \Omega_q \cdot \Omega_a)}{(\Omega_q + \Omega_a)}$.} \label{tbl:intrinsic-analysis} \vspace{-2ex} \end{table*} \section{Intrinsic Analysis} \label{sec:intrinsic-analysis} Four publicly available corpora are selected for our analysis. These corpora are based on Wikipedia, so more comparable than the others, and have already been used for the evaluation of several QA systems. \noindent\textbf{\WikiQA}~\cite{yang:15a} comprises questions selected from the Bing search queries, where user click data give the questions and their corresponding Wikipedia articles. The abstracts of these articles are then extracted to create answer candidates. The assumption is made that if many queries lead to the same article, it must contain the answer context; however, this assumption fails for some occasions, which makes this dataset more challenging. Since the existence of answer contexts is not guaranteed in this task, it is called answer triggering instead of answer selection. \vspace{1ex}\noindent\textbf{\SelQA}~\cite{jurczyk:16} is a product of five annotation tasks through crowdsourcing. 
It consists of about 8K questions where half of the questions are paraphrased from the other half, aiming to reduce contextual similarities between questions and answers. Each question is associated with a section in Wikipedia where the answer context is guaranteed, and also with five sections selected from the entire Wikipedia where the selection is made by the Lucene search engine. This second dataset does not assume the existence of the answer context, so it can be used for the evaluation of answer triggering. \vspace{1ex}\noindent\textbf{\SQuAD}~\cite{rajpurkar2016squad} presents 107K+ crowdsourced questions on 536 Wikipedia articles, where the answer contexts are guaranteed to exist within the provided paragraph. It contains annotation of answer phrases as well as the pointers to the sentences including the answer phrases; thus, it can be used for both answer extraction and selection. This corpus also provides human accuracy on those questions, setting up a reasonable upper bound for machines. To avoid overfitting, the evaluation set is not publicly available although system outputs can be evaluated by their provided script. \vspace{1ex}\noindent\textbf{\InfoQA}~\cite{morales-EtAl:2016:EMNLP2016} gives 15K+ questions based on the infoboxes from 150 articles in Wikipedia. Each question is crowdsourced and associated with an infobox, where each line of the infobox is considered an answer candidate. This corpus emphasizes the importance of infoboxes, which arguably summarize the most commonly asked information about those articles. Although the nature of this corpus is different from the others, it can also be used to evaluate answer selection. \begin{table*}[htbp!] 
\centering\small \begin{tabular}{c||c|c|c} & \multicolumn{1}{c|}{\textbf{\textsc{WikiQA}}} & \multicolumn{1}{c|}{\textbf{\textsc{SelQA}}} & \multicolumn{1}{c}{\textbf{\textsc{SQuAD}}} \\ \hline\hline $(\rho, \gamma_c, \gamma_p), t \geq 0.3$ & $(\TAB 92.00,\:\numprint{1203},\:96.86)$ & $(90.00,\:\numprint{7446},\:94.28)$ & $(100.00,\:\numprint{93928},\:95.61)$ \\ $(\rho, \gamma_c, \gamma_p), t \geq \textbf{0.4}$ & $(\TAB \textbf{94.00},\:\textbf{\numprint{1139}},\:\textbf{91.71})$ & $(\textbf{94.00},\:\textbf{\numprint{7133}},\:\textbf{90.31})$ & $(\textbf{100.00},\:\textbf{\numprint{93928}},\:\textbf{95.61})$\\ $(\rho, \gamma_c, \gamma_p), t \geq 0.5$ & $(100.00,\:\numprint{1051},\:84.62)$ & $(98.00,\:\numprint{6870},\:86.98)$ & $(100.00,\:\numprint{93928},\:95.61)$\\ \hline\hline $k = (1, \textbf{5}, 10, 20)$ & $(4.39, \textbf{12.47}, 16.59, 22.39)$ & $(20.01, \textbf{34.07}, 40.29, 46.40)$ & $(19.90, \textbf{35.08}, 40.96, 46.74)$ \\ \end{tabular} \caption{\small Statistics of the silver-standard dataset (first three rows) and the accuracies of answer retrieval in \% (last row).\\$\rho$: robustness of the silver-standard in \%, $\gamma_{c/p}$: \#$/$\% of retrieved silver-standard passages (coverage).} \label{tbl:mapping} \vspace{-2ex} \end{table*} \subsection*{Analysis} All corpora provide datasets/splits for answer selection, whereas only (\WikiQA, \SQuAD) and (\WikiQA, \SelQA) provide datasets for answer extraction and answer triggering, respectively. \SQuAD\ is much larger in size although questions in this corpus are often paraphrased multiple times. On the contrary, \SQuAD's average candidates per question ($\nicefrac{c}{q}$) is the smallest because \SQuAD\ extracts answer candidates from paragraphs whereas the others extract them from sections or infoboxes that consist of bigger contexts. Although \InfoQA\ is larger than \WikiQA\ or \SelQA, the number of token types ($t$) in \InfoQA\ is smaller than those two, due to the repetitive nature of infoboxes. 
All corpora show similar average answer candidate lengths ($\mu_c$), except for \InfoQA\ where each line in the infobox is considered a candidate. \SelQA\ and \SQuAD\ show similar average question lengths ($\mu_q$) because of the similarity between their annotation schemes. It is not surprising that \WikiQA's average question length is the smallest, considering its questions are taken from search queries. \InfoQA's average question length is relatively small, due to the restricted information that can be asked from the infoboxes. \InfoQA\ and \WikiQA\ show the smallest question-answer word overlaps normalized by the question and answer lengths ($\Omega_q$ and $\Omega_a$ in Table~\ref{tbl:intrinsic-analysis}), respectively. In terms of the F1-score for overlapping words ($\Omega_f$), \SQuAD\ gives the smallest portion of overlaps between question-answer pairs although \WikiQA\ comes very close. \noindent Fig.~\ref{fig:question-types} shows the distributions of seven question types grouped deterministically from the lexicons. Although these corpora have been independently developed, a general trend is found, where the \textit{what} question type dominates, followed by \textit{how} and \textit{who}, followed by \textit{when} and \textit{where}, and so on. \begin{figure}[htbp!] \centering \includegraphics[scale=0.5]{question-types.pdf} \caption{Distributions of question types in \%.} \label{fig:question-types} \end{figure} \noindent Fig.~\ref{fig:answer-categories} shows the distributions of answer categories automatically classified by our Convolutional Neural Network model trained on the data distributed by \newcite{li:02a}.\footnote{Our CNN model shows 95.20\% accuracy on their test set.} Interestingly, each corpus focuses on different categories, \textit{Numeric} for \WikiQA\ and \SelQA, \textit{Entity} for \SQuAD, and \textit{Person} for \InfoQA, which gives enough diversity for statistical learning to build robust models. \begin{figure}[htbp!] 
\centering \includegraphics[scale=0.5]{answer-categories.pdf} \caption{Distributions of answer categories in \%.} \label{fig:answer-categories} \end{figure} \section{Answer Retrieval} \label{sec:answer-retrieval} This section describes another selection-based QA task, called \textit{answer retrieval}, that finds the answer context from a larger dataset, the entire Wikipedia. \SQuAD\ provides no mapping of the answer contexts to Wikipedia, whereas \WikiQA\ and \SelQA\ provide mappings; however, their data do not come from the same version of Wikipedia. We propose an automatic way of mapping the answer contexts from all corpora to the same version of Wikipedia\footnote{\url{enwiki-20160820-pages-articles.xml.bz2}} so they can be coherently used for answer retrieval. Each paragraph in Wikipedia is first indexed by Lucene using \{1,2,3\}-grams, where the paragraphs are separated by WikiExtractor\footnote{\url{github.com/attardi/wikiextractor}} and segmented by NLP4J\footnote{\url{github.com/emorynlp/nlp4j}} (28.7M+ paragraphs are indexed). Each answer sentence from the corpora in Table~\ref{tbl:mapping} is then queried to Lucene, and the top-5 ranked paragraphs are retrieved. The cosine similarity between each sentence in these paragraphs and the answer sentence is measured for $n$-grams, say $n_{1,2,3}$. A weight is assigned to each $n$-gram score, say $\lambda_{1,2,3}$, and the weighted sum is measured: $t = \sum_{i=1}^3 \lambda_i\cdot n_i$. The fixed weights of $\lambda_{1,2,3} = (0.25, 0.35, 0.4)$ are used for our experiments; these weights could be tuned further. If there exists a sentence whose $t \geq \theta$, the paragraph containing that sentence is considered the silver-standard answer passage. Table~\ref{tbl:mapping} shows how robust these silver-standard passages are based on human judgement ($\rho$) and how many passages are collected ($\gamma$) for $\theta = [0.3, 0.5]$, where the human judgement is performed on 50 random samples for each case. 
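This scoring step can be sketched as follows; the whitespace tokenizer and the example strings are simplifications (the paper segments text with NLP4J), while the $\lambda$ weights and the $t \geq \theta$ test follow the description above.

```python
from collections import Counter
import math

# Fixed n-gram weights from the text: lambda_{1,2,3} = (0.25, 0.35, 0.4).
LAMBDAS = (0.25, 0.35, 0.4)

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cosine(c1, c2):
    # Cosine similarity between two sparse n-gram count vectors.
    num = sum(c1[g] * c2[g] for g in c1)
    den = math.sqrt(sum(v * v for v in c1.values())) * \
          math.sqrt(sum(v * v for v in c2.values()))
    return num / den if den else 0.0

def match_score(sentence, answer):
    # t = sum_i lambda_i * n_i over the {1,2,3}-gram cosine similarities.
    s, a = sentence.lower().split(), answer.lower().split()
    return sum(lam * cosine(ngrams(s, n), ngrams(a, n))
               for lam, n in zip(LAMBDAS, (1, 2, 3)))

# A paragraph sentence becomes silver standard when t >= theta (0.4 here).
theta = 0.4
sent = "the eiffel tower was completed in 1889"
gold = "the eiffel tower was completed in 1889 for the world fair"
print(match_score(sent, gold) >= theta)  # True for this pair
```

Identical sentences score exactly $1.0$, since each $n$-gram cosine equals $1$ and the weights sum to $1$.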
For answer retrieval, a dataset is created with $\theta = 0.4$, which gives $\rho \geq 94\%$ accuracy and $\gamma_p > 90\%$ coverage.\footnote{\SQuAD\ mapping was easier than the others because it was based on a more recent version of Wikipedia.} Finally, each question is queried to Lucene and the top-$k$ paragraphs are retrieved from the entire Wikipedia. If the answer sentence exists within those retrieved paragraphs according to the silver-standard, it is considered correct. \begin{table*}[htbp!] \centering\small \resizebox{\textwidth}{!}{ \begin{tabular}{c||cc|c||cc|c||cc|c||cc|c} \multirow{3}{*}{\bf Trained on} & \multicolumn{12}{c}{\bf Evaluated on} \\ & \multicolumn{3}{c||}{\WikiQA} & \multicolumn{3}{c||}{\SelQA} & \multicolumn{3}{c||}{\SQuAD} & \multicolumn{3}{c}{\InfoQA} \\ & MAP & MRR & F1 & MAP & MRR & F1 & MAP & MRR & F1 & MAP & MRR & F1 \\ \hline\hline \WikiQA & \textbf{65.54} & \textbf{67.41} & 13.33 & 53.47 & 54.12 & $\TAB$ 8.68 & 73.16 & 73.72 & 11.26 & 30.85 & 30.85 & - \\ \SelQA & 49.05 & 49.64 & \textbf{24.30} & 82.72 & 83.70 & \textbf{48.66} & 77.22 & 78.04 & 44.70 & 63.13 & 63.13 & - \\ \SQuAD & 58.17 & 58.53 & 19.35 & 81.15 & 82.27 & 42.88 & 88.84 & 89.69 & \textbf{44.93} & 63.24 & 63.24 & - \\ \InfoQA & 45.17 & 45.43 & - & 53.48 & 54.25 & - & 65.27 & 65.90 & - & \textbf{79.44} & \textbf{79.44} & - \\ W+S+Q & 56.40 & 56.51 & - & \textbf{83.19} & \textbf{84.25} & - & 88.78 & 89.65 & - & 62.53 & 62.53 & - \\ W+S+Q+I & 60.19 & 60.68 & - & 82.88 & 83.97 & - & \textbf{88.92} & \textbf{89.79} & - & 70.81 & 70.81 & - \\ \end{tabular}} \caption{\small Results for answer selection and triggering in \% trained and evaluated across all corpora splits. The first column shows the training source, and the other columns show the evaluation sources. 
W: \WikiQA, S: \SelQA, Q: \SQuAD, I: \InfoQA.} \label{tbl:extrinsic-analysis} \end{table*} \section{Extrinsic Analysis} \label{sec:extrinsic-analysis} \subsection{Answer Selection} Answer selection is evaluated by two metrics, mean average precision (MAP) and mean reciprocal rank (MRR). The bigram CNN introduced by \newcite{yu:14a} is used to generate all the results in Table~\ref{tbl:extrinsic-analysis}, where models are trained on either single or combined datasets. Clearly, the questions in \WikiQA\ are the most challenging, and adding more training data from the other corpora hurts accuracy due to the uniqueness of query-based questions in this corpus. The best model is achieved by training on W+S+Q for \SelQA; adding \InfoQA\ hurts accuracy for \SelQA\ although it gives a marginal gain for \SQuAD. Just like \WikiQA, \InfoQA\ performs the best when it is trained on only itself. From our analysis, we suggest using models trained on \WikiQA\ and \InfoQA\ for short, query-like questions, and models trained on \SelQA\ and \SQuAD\ for long natural questions. \subsection{Answer Retrieval} \label{ssec:experiment-answer-retrieval} Finding a paragraph that includes the answer context out of the entire Wikipedia is an extremely difficult task (\nicefrac{1}{28.7M}). The last row of Table~\ref{tbl:mapping} shows results from answer retrieval. Given $k = 5$, \SelQA\ and \SQuAD\ show about 34\% and 35\% accuracy, which are reasonable. However, \WikiQA\ shows a significantly lower accuracy of 12.47\%; this is because the questions in \WikiQA\ are about half as long as those in the other corpora, so that not enough lexical terms can be extracted from these questions for the Lucene search. 
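The two ranking metrics reported in Table~\ref{tbl:extrinsic-analysis} can be computed from per-question binary relevance lists as in the sketch below; the toy relevance lists are illustrative only, not taken from the corpora.

```python
def average_precision(relevance):
    # relevance: 0/1 flags for one question's ranked answer candidates.
    hits, precisions = 0, []
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def reciprocal_rank(relevance):
    # 1/rank of the first relevant candidate; 0 if none is relevant.
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def map_mrr(questions):
    # Mean of both metrics over a list of per-question relevance lists.
    n = len(questions)
    return (sum(average_precision(q) for q in questions) / n,
            sum(reciprocal_rank(q) for q in questions) / n)

# Two toy questions: answer ranked first, and answer ranked third.
print(map_mrr([[1, 0, 0], [0, 0, 1]]))  # MAP = MRR = 2/3 here
```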
\subsection{Answer Triggering} The results of $k = 5$ from the answer retrieval task in Section~\ref{ssec:experiment-answer-retrieval} are used to create the datasets for answer triggering, where about 65\% of the questions are not expected to find their answer contexts from the provided paragraphs for \SelQA\ and \SQuAD, and 87.5\% are not expected for \WikiQA. Answer triggering is evaluated by the F1 scores as presented in Table~\ref{tbl:extrinsic-analysis}, where three corpora are cross validated. The results on \WikiQA\ are quite low, as expected from the poor accuracy on the answer retrieval task. Training on \SelQA\ gives the best models for both \WikiQA\ and \SelQA. Training on \SQuAD\ gives the best model for \SQuAD\ although the model trained on \SelQA\ is comparable. Since the answer triggering datasets are about 5 times larger than the answer selection datasets, it is computationally too expensive to combine all data for training. We plan to run this experiment on more powerful hardware in the near future. \section{Related work} Lately, several deep learning approaches have been proposed for question answering. \newcite{yu:14a} presented a CNN model that recognizes the semantic similarity between two sentences. \newcite{wang-nyberg:2015:ACL-IJCNLP} presented a stacked bidirectional LSTM approach that reads words in sequence and outputs their similarity scores. \newcite{feng:15a} applied a general deep learning framework to non-factoid question answering. \newcite{santos:16a} introduced an attentive pooling mechanism that led to further improvements in selection-based QA. \section{Conclusion} We present a comprehensive comparison study of the existing corpora for selection-based question answering. Our intrinsic analysis provides a better understanding of the uniqueness or similarity between these corpora. Our extrinsic analysis shows the strength or weakness of combining these corpora together for statistical learning. 
Additionally, we create a silver-standard dataset for answer retrieval and triggering, which will be publicly available. In the future, we will explore different ways of improving the quality of our silver-standard datasets by fine-tuning the hyper-parameters.
\section{Introduction} \label{sec:Intro} \citet{Staley1970} pointed out that the adiabatic lapse rate for the lower atmosphere of Venus cannot be calculated from the ideal gas expression ($g/c_p$) due to the high temperature and pressure conditions and the presence of a small amount of nitrogen. Considering an arbitrary equation of state for any gas mixture, \citet{Staley1970} derived the following expression for the adiabatic lapse rate $\Gamma$ at altitude $z$ in a planetary atmosphere \begin{equation} \Gamma=-\frac{dT}{dz}=\frac{T}{\rho}\frac{\left(\frac{\partial p}{\partial T}\right)_{\rho}}{\left(\frac{\partial p}{\partial\rho}\right)_{T}}\left(\frac{g}{c_{p}}\right) = -\frac{T}{\rho}\left(\frac{\partial\rho}{\partial T}\right)_{p}\left(\frac{g}{c_{p}}\right) \label{eq:adiabatic_lapse_rate} \end{equation} where $T$ is temperature, $p$ is pressure, $\rho$ is the density, $g$ is the acceleration due to gravity and $c_p$ is the isobaric specific heat capacity of the air at altitude $z$. Assuming that the atmosphere is composed of pure $CO_{2}$, \citet{Staley1970} calculated $c_{p}$ and hence $\Gamma$ using the real gas physical properties of pure $CO_{2}$ \citep{Hilsenrath1955} across the range of pressures and temperatures found in the atmosphere of Venus. The major shortcoming of this approach was that the presence of $N_{2}$ in the atmosphere was neglected. In order to overcome this, \citet{Seiff1980} calculated the adiabatic lapse rate by assuming an ideal binary gas mixture of real gas components, carbon dioxide ($CO_{2}$) and nitrogen ($N_{2}$), in a volume mixing ratio of $96.5:3.5$, arguing that the abundance of nitrogen is small. 
In this approach, the adiabatic lapse rate is written as \begin{equation} \Gamma=(aT)\frac{g}{c_{p}} \label{eq:adiabatic_lapse_rate_Seiff} \end{equation} where \begin{equation} a=-\frac{1}{\rho}\left(\frac{\partial\rho}{\partial T}\right)_{p} \label{eq:a_Seiff} \end{equation} In the case of an ideal binary gas mixture, the contribution of pure real gas component $i$ to the thermodynamic properties of the mixture is directly proportional to its mole fraction $x_{i}$, which gives \begin{align} c_{p} & = \sum_{i}x_{i}c_{pi}\label{cp_Seiff}\\ aT & = \sum_{i}x_{i}(aT)_{i} \label{eq:combination_rules_Seiff} \end{align} The main drawback of this method is that the non-ideal interactions of $CO_{2}$ and $N_{2}$ in the mixture are neglected in calculating the thermodynamic properties of the mixture. Furthermore, the VIRA model \citep{Seiff1985} extrapolated the surface temperature of Venus below 12 km altitude (at which the last measurements were made by the sensors on the four Pioneer probes) by using the adiabatic lapse rate calculated by \citet{Seiff1980}. Thus the surface temperatures reported for all Pioneer probes are slightly inaccurate. As a result, the values calculated for the surface conditions on Venus, which have been used in most subsequent studies pertaining to the stability of the atmosphere and atmospheric circulation, can be made more accurate. The VeGa2 lander is the only atmospheric probe which has provided us with accurate measurements down to the surface. The VeGa2 lander data in the lower atmosphere were examined by \citet{Seiff1987}, which showed near-neutral and superadiabatic layers. The presence of superadiabatic layers on Venus raises some key questions about the source of near-surface heat deposition and the resulting atmospheric circulation in the lower atmosphere. 
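As a minimal numerical illustration of the thermal-expansion form $\Gamma = -(T/\rho)\left(\partial\rho/\partial T\right)_{p}(g/c_{p})$, the sketch below differentiates a density function by finite differences. The ideal-gas density and all numerical constants are placeholders chosen for illustration; in practice $\rho(p,T)$ and $c_{p}$ must come from a real gas equation of state for the $CO_{2}-N_{2}$ mixture.

```python
# Sketch: adiabatic lapse rate from a density function rho(p, T).
# The ideal-gas density below is a stand-in; replace with a real-gas
# EOS for the CO2-N2 mixture under Venus conditions.

R = 8.314510        # J/(mol K), universal gas constant
M = 0.04345         # kg/mol, approximate molar mass of 96.5% CO2 + 3.5% N2
G_VENUS = 8.87      # m/s^2, surface gravity of Venus

def density(p, T):
    # Placeholder equation of state (ideal gas).
    return p * M / (R * T)

def lapse_rate(p, T, cp, g=G_VENUS, dT=0.01):
    # Gamma = -(T/rho) (d rho / d T)_p (g/cp), via a central difference.
    rho = density(p, T)
    drho_dT = (density(p, T + dT) - density(p, T - dT)) / (2.0 * dT)
    return -(T / rho) * drho_dT * (g / cp)   # K/m, positive for a gas

# For the ideal-gas placeholder the expression collapses to g/cp.
cp_mass = 1181.0    # J/(kg K), illustrative value for hot CO2-rich air
print(lapse_rate(9.2e6, 735.0, cp_mass))
```

With the ideal-gas stand-in, the result equals $g/c_{p}$ up to finite-difference error; the real-gas correction enters entirely through $\left(\partial\rho/\partial T\right)_{p}$ and $c_{p}$.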
An important parameter in understanding such atmospheric processes on Venus is static stability, which influences small-scale turbulence caused by convection or wind shear, mesoscale motions and large-scale circulations, as well as disturbances induced in the ambient flow by topography. Thus it is imperative to calculate the adiabatic lapse rate accurately for the known conditions on Venus. A more detailed derivation of the real gas adiabatic lapse rate for a planetary atmosphere with a multi-component real gas mixture composition varying with altitude is illustrated in \ref{Deriv_Gamma}. The same expressions for the adiabatic lapse rate (Eq.~\ref{eq:adiabatic_lapse_rate}) as originally derived by \citet{Staley1970} are obtained. As can be seen from the expressions, the accuracy of the adiabatic lapse rate at any altitude depends on the accuracy of the available Venus atmosphere profiles, the composition of the atmosphere at that altitude, and the density $\rho$ and isobaric specific heat capacity $c_{p}$ of air at that altitude. As there is limited experimental data available for the particular real gas binary mixture that largely makes up the Venus atmosphere, it becomes necessary to use an equation of state to predict the density $\rho$ and isobaric specific heat capacity $c_{p}$ at different pressures $p$ and temperatures $T$. We have already highlighted how the approaches followed by both \citet{Staley1970} and \citet{Seiff1980} introduced errors in the determination of these quantities for the real gas binary mixture of $CO_{2}-N_{2}$ that largely makes up the Venus atmosphere. In this work, we determine $\rho$ and $c_{p}$ more accurately than previous approaches by considering the interactions between real gas components in the mixture through an equation of state for the mixture. A variety of different equations of state for fluids and mixtures exist \citep{Sengers2000}. 
Here, we determine the physical properties of the real gas binary mixture $CO_{2}-N_{2}$ by using thermodynamic models in Helmholtz energy. We consider two different Helmholtz energy mixture models proposed in \citep{Lemmon1999} and \citep{Kunz2012}. The advantage these models present over other equations of state for mixtures is that they allow us to obtain the mixture properties by combining properties of real gas components obtained through their respective equations of state. In Sections \ref{sec:HEOS_Models} and \ref{sec:Density_Solvers}, we review these models and how they can be used to calculate the desired thermodynamic quantities. This is followed by a verification of the approach against experimental data and prior approaches in Section \ref{sec:Verification-against-Experiments}. In Section \ref{sec:Adiabatic-Lapse-Rate}, we show how the mixture models can be used to calculate the adiabatic lapse rate and static stability for the Venus atmosphere. Results are discussed in Section \ref{sec:Results}. Finally, in Section \ref{sec:Conclusion-and-Future}, we highlight the results obtained and discuss future work. \section{Background} \label{sec:HEOS_Models} \subsection{Review of Mixture Models} Equations of state formulated in reduced Helmholtz free energy for mixtures were first proposed independently by \cite{tillner1993} and \cite{Lemmon1996}. These empirical multi-parameter models rely on mixing rules to obtain properties of multi-component mixtures from the equations of state of the pure fluid components. These mixing rules, and the equations of state of the pure fluid components themselves, are obtained through fitting of experimental data for multiple thermodynamic properties. The first mixture model that we consider was proposed in \citep{Lemmon1999}. 
The second mixture model that we consider is the GERG-2008 model, which was proposed in \citep{Kunz2012} and is considered the most accurate mixture model for obtaining thermodynamic properties of natural gases. The older GERG-2004 mixture model \citep{Kunz2004} was used by \cite{hagermann2007speed} to estimate the abundance of methane in Titan's atmosphere from speed of sound measurements through a Bayesian analysis. However, mixture models in Helmholtz energy have not previously been applied to compute the adiabatic lapse rate of a multi-component planetary atmosphere. A more recent mixture model was proposed in \citep{gernert2013new} to predict the thermodynamic properties of mixtures relevant to carbon capture and storage more accurately. However, the mixing rule suggested for the binary mixture of carbon dioxide and nitrogen in \citep{Kunz2012} remains unchanged. We consider the two different mixture models to reflect the effect of mixing rules on accuracy even when both models use the same pure fluid equations of state. From here on, we will refer to the mixture model introduced in \citep{Lemmon1999} as the LJ-1999 model, and to the \cite{Seiff1980} approach of considering an ideal mixture of real gases as the IMRG model. \subsection{Mixture Model in Helmholtz Free Energy} Any generalized mixture model in Helmholtz free energy $A$ with independent mixture variables density $\tilde{\rho}$, temperature $T$ and molar composition $\bar{x}$ \citep{Lemmon1996, Lemmon1999, Kunz2012} can be written as \begin{equation} A(\tilde{\rho},T,\bar{x}) = A^{idmix}(\tilde{\rho},T,\bar{x}) + A^{E}(\tilde{\rho},T,\bar{x})\label{eq:mix_lemmon} \end{equation} where $A^{idmix}$ is the Helmholtz energy of the ideal mixture of the real gas components, $A^{E}$ is the Helmholtz energy contribution due to mixing, $\tilde{\rho}$ ($=\rho/M$) is the amount of substance density and $M$ is the molar mass of the mixture. In general, $M=\sum_{i}x_{i}M_{i}$ where $M_{i}$ is the molar mass of component $i$. 
\citet{Seiff1980} essentially neglected $A^{E}$ in calculating $c_p$ in Eq.\ref{cp_Seiff}. It is however easier to work with the following decomposition of the Helmholtz energy of the mixture \begin{equation} A(\tilde{\rho},T,\bar{x}) = A^{o}(\tilde{\rho},T,\bar{x}) + A^{r}(\tilde{\rho},T,\bar{x})\label{eq:mix_lemmon2} \end{equation} where $A^{o}$ is the contribution of the ideal gas and $A^{r}$ is the contribution from the residual Helmholtz energy of the pure fluid components and from the Helmholtz energy contribution to mixing. Non-dimensionalizing Eq.\ref{eq:mix_lemmon2} by dividing by $RT$ ($R=8.314510\,\text{J/(mol\ensuremath{\cdot}K)}$ is the universal gas constant and $T$ is the mixture temperature), we obtain \begin{equation} \alpha(\tilde{\rho},T,\bar{x}) = \alpha^{o}(\tilde{\rho},T,\bar{x}) + \alpha^{r}(\delta,\tau,\bar{x}) \end{equation} where $\delta$ is the reduced mixture density and $\tau$ is the inverse reduced mixture temperature given by \begin{eqnarray} \delta & = & \tilde{\rho}/\rho_r(\bar{x}) \\ \tau & = & T_r(\bar{x})/T \end{eqnarray} These reducing parameters are only functions of the composition as indicated above. They are specific to the mixing rule that is followed. For example, the reducing function used in the Lemmon's model \citep{Lemmon1999} is very different from that used in the GERG-2008 model \citep{Kunz2012}. The non-dimensionalized Helmholtz free energy of the ideal gas mixture is \begin{equation} \frac{A^o(\tilde{\rho},T,\bar{x})}{RT} = \alpha^{o}(\tilde{\rho},T,\bar{x}) = \sum \limits_{i=1}^{n}x_{i}\left[\alpha_{i}^{o}(\tilde{\rho},T)+\ln x_{i}\right]\label{eq:alpha_o_lemmon} \end{equation} where $\alpha_{i}^{o}$ is the ideal gas Helmholtz energy of component $i$ in the mixture which is a function of the mixture amount of substance density $\tilde{\rho}$ and temperature $T$, and not that of reduced density $\delta$ and inverse reduced temperature $\tau$. 
The term $\sum \limits_{i=1}^{n}x_{i}\ln x_{i}$ quantifies the entropy of mixing. The residual part of the non-dimensionalized Helmholtz free energy is \begin{equation} \frac{A^r}{RT} = \alpha^{r} = \sum \limits_{i=1}^{n}x_{i}\alpha_{i}^{r}(\delta,\tau) + \alpha^{E}(\delta,\tau,\bar{x})\label{eq:alpha_r_lemmon} \end{equation} where $\alpha_i^r$ is the non-dimensionalized residual part of the Helmholtz free energy of component $i$ in the mixture and $\alpha^E$ is called the excess value of the non-dimensionalized Helmholtz free energy, or the departure function \citep{Kunz2012}. The usual functional form is \begin{equation} \alpha^E(\delta,\tau,\bar{x}) = \sum \limits_{i=1}^{n-1} \sum \limits_{j=i+1}^{n} x_i x_j F_{ij} \alpha^{r}_{ij}(\delta,\tau) \label{eq:func_alphaE_HEOS} \end{equation} where the functional form of $\alpha^{r}_{ij}$ and the value of the parameter $F_{ij}$ are prescribed by the mixing rule being used. All common thermodynamic properties such as pressure, isochoric heat capacity, isobaric heat capacity, speed of sound, enthalpy, saturated-liquid density and VLE data can be obtained from the derivatives of $\alpha^{o}$ and $\alpha^{r}$. A list of the expressions can be found in \citep{Kunz2012}.
Here, we only list those that are of relevance to us \begin{equation} p=\tilde{\rho}RT\left[1+\delta\left(\frac{\partial\alpha^{r}}{\partial\delta}\right)_{\tau}\right] \label{eq:p_HEOS} \end{equation} \begin{equation} \frac{\tilde{c}_{v}}{R}=-\tau^{2}\left[\left(\frac{\partial^{2}\alpha^{o}}{\partial\tau^{2}}\right)+\left(\frac{\partial^{2}\alpha^{r}}{\partial\tau^{2}}\right)_{\delta}\right] \label{eq:cv_HEOS} \end{equation} \begin{equation} \frac{\tilde{c}_{p}}{R}=\frac{\tilde{c}_{v}}{R}+\frac{\left[1+\delta\left(\frac{\partial\alpha^{r}}{\partial\delta}\right)_{\tau}-\delta\tau\left(\frac{\partial^{2}\alpha^{r}}{\partial\delta\partial\tau}\right)\right]^{2}}{1+2\delta\left(\frac{\partial\alpha^{r}}{\partial\delta}\right)_{\tau}+\delta^{2}\left(\frac{\partial^{2}\alpha^{r}}{\partial\delta^{2}}\right)_{\tau}} \label{eq:cp_HEOS} \end{equation} To complete the mixture model setup, we still need to specify the mixing rules in order to evaluate the reduced mixture density $\delta$, the inverse reduced mixture temperature $\tau$ and the departure function $\alpha^E$. We also need to specify the equations of state for $CO_{2}$ and $N_{2}$ that we will use to calculate the ideal Helmholtz energy $\alpha_{i}^{o}$, the residual Helmholtz energy $\alpha_{i}^{r}$ and their derivatives. One reason for considering the LJ-1999 and GERG-2008 mixture models is that they both consider the same set of equations of state for the pure components $CO_{2}$ and $N_{2}$. \subsubsection{LJ-1999 Mixture Model} \label{subsubsec:LJ_1999} As mentioned before, a mixing rule specifies how the equations of state of the pure components will be combined to evaluate the properties of the mixture. First, we require the evaluation of the reduced mixture density and temperature, which depend on the expressions of the reducing functions of density and temperature.
For the LJ-1999 mixture model, they are given by \begin{align} \rho_{r} & = \left[\sum_{i=1}^{n}\frac{x_{i}}{\tilde{\rho}_{ci}}+\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}x_{i}x_{j}\xi_{ij}\right]^{-1} \label{eq:rho_red_LJ1999}\\ T_{r} & = \sum_{i=1}^{n}x_{i}T_{ci}+\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}x_{i}^{\beta_{ij}}x_{j}^{\phi_{ij}}\zeta_{ij} \label{eq:T_red_LJ1999} \end{align} where $\tilde{\rho}_{ci}$ is the critical amount of substance density of component $i$, $T_{ci}$ is the critical temperature of component $i$, and $\xi_{ij}$, $\beta_{ij}$, $\phi_{ij}$ and $\zeta_{ij}$ are constant parameters particular to the mixture. For a binary mixture, the expressions for the reducing values simplify to \begin{align} \rho_{r} & = \left[\frac{x_{1}}{\tilde{\rho}_{c1}}+\frac{x_{2}}{\tilde{\rho}_{c2}}+x_{1}x_{2}\xi_{12}\right]^{-1}\label{eq:rho_red_lemmon-1}\\ T_{r} & = x_{1}T_{c1}+x_{2}T_{c2}+x_{1}^{\beta_{12}}x_{2}^{\phi_{12}}\zeta_{12}\label{eq:T_red_lemmon-1} \end{align} For the LJ-1999 mixture model, the departure function is given by \begin{equation} \frac{A^E}{RT} = \alpha^{E} = \sum_{i=1}^{n-1}\sum_{j=i+1}^{n}x_{i}x_{j}F_{ij}\sum_{k=1}^{10}N_{k}\delta^{d_{k}}\tau^{t_{k}} \label{eq:alpha_E_LJ1999} \end{equation} The parameters in Eq.\ref{eq:alpha_E_LJ1999} which are not specific to the mixture are presented in Table \ref{tab:param_alpha_E_LJ1999}. In the case of the binary mixture $CO_{2}-N_{2}$, $F_{12}=2.780647$, $\xi_{12}=0.00659978\,\text{dm}^{3}\,\text{mol}^{-1}$, $\zeta_{12}=-31.149300\,\text{K}$, $\phi_{12}=1$ and $\beta_{12}=1$.
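The binary reducing functions above are simple enough to evaluate directly. The Python sketch below does so with the $CO_{2}-N_{2}$ parameters just quoted; the critical constants are the values listed later for the pure fluid equations of state, and the $96.5{:}3.5$ composition is only illustrative.

```python
# Reducing functions of the LJ-1999 model for a binary mixture
# (the two-component forms of rho_r and T_r above).
def lj1999_reducing(x1, x2, rho_c, T_c, xi12, zeta12, beta12=1.0, phi12=1.0):
    rho_r = 1.0 / (x1 / rho_c[0] + x2 / rho_c[1] + x1 * x2 * xi12)
    T_r = x1 * T_c[0] + x2 * T_c[1] + x1**beta12 * x2**phi12 * zeta12
    return rho_r, T_r

rho_c = (10.6249, 11.1839)   # critical densities, kmol/m^3 (= mol/dm^3): CO2, N2
T_c = (304.1282, 126.192)    # critical temperatures, K
rho_r, T_r = lj1999_reducing(0.965, 0.035, rho_c, T_c,
                             xi12=0.00659978, zeta12=-31.149300)
# In the pure-fluid limit the critical constants are recovered:
rho_r1, T_r1 = lj1999_reducing(1.0, 0.0, rho_c, T_c, 0.00659978, -31.149300)
```

For the $96.5{:}3.5$ composition this gives $\rho_r \approx 10.618\,\text{mol}\cdot\text{dm}^{-3}$ and $T_r \approx 296.85\,\text{K}$.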
\begin{table}[ht] \begin{centering} \begin{tabular}{|c|l|c|c|} \hline $k$ & $N_{k}$ & $d_{k}$ & $t_{k}$\tabularnewline \hline \hline 1 & $-0.245476271425\times10^{-1}$ & $1$ & $2$\tabularnewline \hline 2 & $-0.241206117483$ & $1$ & $4$\tabularnewline \hline 3 & $-0.513801950309\times10^{-2}$ & $1$ & $-2$\tabularnewline \hline 4 & $-0.239824834123\times10^{-1}$ & 2 & 1\tabularnewline \hline 5 & $\,\,0.259772344008$ & 3 & 4\tabularnewline \hline 6 & $-0.172014123104$ & 4 & 4\tabularnewline \hline 7 & $\,\,0.429490028551\times10^{-1}$ & 5 & 4\tabularnewline \hline 8 & $-0.202108593862\times10^{-3}$ & 6 & 0\tabularnewline \hline 9 & $-0.382984234857\times10^{-2}$ & 6 & 4\tabularnewline \hline 10 & $\,\,0.262992331354\times10^{-5}$ & 8 & -2\tabularnewline \hline \end{tabular} \par\end{centering} \protect\caption{Parameters for Eq.\ref{eq:alpha_E_LJ1999} \label{tab:param_alpha_E_LJ1999}} \end{table} \begin{comment} For a real gas mixture, the derivatives of $\alpha^{0}$ and $\alpha^{r}$ are given below under the assumption of constant composition and a binary mixture as is our case. 
\begin{equation} \left(\frac{\partial^{2}\alpha^{0}}{\partial\tau^{2}}\right)_{\delta}=\sum_{i=1}^{2}x_{i}\left(\frac{\partial^{2}\alpha_{i}^{0}}{\partial\tau^{2}}\right)_{\delta}\label{eq:dd_alpha0_tau_tau} \end{equation} \begin{equation} \left(\frac{\partial\alpha^{r}}{\partial\delta}\right)_{\tau}=x_{1}x_{2}F_{ij}\sum_{k=1}^{10}d_{k}N_{k}\delta^{d_{k}-1}\tau^{t_{k}}+\sum_{i=1}^{2}x_{i}\left(\frac{\partial\alpha_{i}^{r}}{\partial\delta}\right)_{\tau}\label{eq:d_alphaR_delta} \end{equation} \begin{equation} \left(\frac{\partial^{2}\alpha^{r}}{\partial\delta^{2}}\right)_{\tau}=x_{1}x_{2}F_{ij}\sum_{k=1}^{10}d_{k}(d_{k}-1)N_{k}\delta^{d_{k}-2}\tau^{t_{k}}+\sum_{i=1}^{2}x_{i}\left(\frac{\partial^{2}\alpha_{i}^{r}}{\partial\delta^{2}}\right)_{\tau}\label{eq:dd_alphaR_delta_delta} \end{equation} \begin{equation} \left(\frac{\partial^{2}\alpha^{r}}{\partial\delta\partial\tau}\right)=x_{1}x_{2}F_{ij}\sum_{k=1}^{10}d_{k}t_{k}N_{k}\delta^{d_{k}-1}\tau^{t_{k}-1}+\sum_{i=1}^{2}x_{i}\left(\frac{\partial^{2}\alpha_{i}^{r}}{\partial\delta\partial\tau}\right)\label{eq:dd_alphaR_delta_tau} \end{equation} \begin{equation} \left(\frac{\partial^{2}\alpha^{r}}{\partial\tau^{2}}\right)_{\delta}=x_{1}x_{2}F_{ij}\sum_{k=1}^{10}t_{k}(t_{k}-1)N_{k}\delta^{d_{k}}\tau^{t_{k}-2}+\sum_{i=1}^{2}x_{i}\left(\frac{\partial^{2}\alpha_{i}^{r}}{\partial\tau^{2}}\right)_{\delta}\label{eq:dd_alphaR_tau_tau} \end{equation} \end{comment} \subsubsection{GERG-2008 Model} \label{subsubsec:GERG_2008} The mathematical structures of the reducing functions for density and temperature for the GERG-2008 model are more complicated than those of the LJ-1999 model and are given by \begin{align} \rho_{r} & = \left[\sum_{i=1}^{n}x_{i}^2\frac{1}{\tilde{\rho}_{c,i}}+\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}2x_{i}x_{j}\beta_{v,ij}\gamma_{v,ij} \cdot \frac{x_i + x_j}{\beta_{v,ij}^{2}x_i + x_j} \cdot \frac{1}{8}\left(\frac{1}{\tilde{\rho}_{c,i}^{1/3}} + \frac{1}{\tilde{\rho}_{c,j}^{1/3}} \right)^{3} \right]^{-1}\label{eq:rho_red_gerg2008}\\ T_{r} &
= \sum_{i=1}^{n}x_{i}^2 T_{c,i} + \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} 2x_{i}x_{j}\beta_{T,ij}\gamma_{T,ij} \cdot \frac{x_i + x_j}{\beta_{T,ij}^{2}x_i + x_j}(T_{c,i}\cdot T_{c,j})^{0.5}\label{eq:T_red_gerg2008} \end{align} where $\beta_{v,12}=0.977794634$, $\gamma_{v,12}=1.047578256$, $\beta_{T,12}=1.005894529$ and $\gamma_{T,12}=1.107654104$ for the binary mixture of $CO_{2}-N_{2}$. The function $\alpha^r_{ij}$, which is a part of $\alpha^E$ (Eq.\ref{eq:func_alphaE_HEOS}), is given by \begin{equation} \alpha^r_{12}(\delta, \tau) = \sum_{k=1}^{2} n_{k} \delta^{d_{k}} \tau^{t_{k}} + \sum_{k=3}^{6} n_{k} \delta^{d_{k}} \tau^{t_{k}} \cdot \exp \left[ -\eta_{k} (\delta - \epsilon_{k})^2 - \beta_{k}(\delta - \gamma_{k})\right] \label{eq:alpha_r_CO2_N2_GERG2008} \end{equation} and $F_{12}=1.0$ for $CO_{2}-N_{2}$. The values of the different parameters in Eq.\ref{eq:alpha_r_CO2_N2_GERG2008} are given in Table \ref{tab:param_alpha_E_GERG2008}. \begin{table}[ht] \begin{centering} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline $k$ & $d_k$ & $t_k$ & $n_k$ & $\eta_k$ & $\epsilon_k$ & $\beta_k$ & $\gamma_k$ \tabularnewline \hline \hline 1 & 2 & 1.850 & 0.28661625028399 & 0.000 & 0.000 & 0.000 & 0.000\tabularnewline \hline 2 & 3 & 1.400 & -0.10919833861247 & 0.000 & 0.000 & 0.000 & 0.000\tabularnewline \hline 3 & 1 & 3.200 & -1.13740320822700 & 0.250 & 0.500 & 0.750 & 0.500\tabularnewline \hline 4 & 1 & 2.500 & 0.76580544237358 & 0.250 & 0.500 & 1.000 & 0.500\tabularnewline \hline 5 & 1 & 8.000 & 0.00426380009268 & 0.000 & 0.500 & 2.000 & 0.500\tabularnewline \hline 6 & 2 & 3.750 & 0.17673538204534 & 0.000 & 0.500 & 3.000 & 0.500\tabularnewline \hline \end{tabular} \par\end{centering} \protect\caption{Parameters for Eq.\ref{eq:alpha_r_CO2_N2_GERG2008} \label{tab:param_alpha_E_GERG2008}} \end{table} \subsection{Equation of State for \texorpdfstring{$CO_{2}$}{CO2} and \texorpdfstring{$N_2$}{N2}\label{sub:EOS_CO2_N2}} For the pure fluids of carbon dioxide $CO_2$ and nitrogen $N_2$, we use the
equations of state proposed by \citet{Span1996} and \citet{Span2000}, respectively. As mentioned before, the LJ-1999 and GERG-2008 mixture models define mixing rules considering these pure fluid equations of state. They are also accurate over a vast temperature and pressure range, as can be seen in Table \ref{tab:refs_consts_pure_CO2_N2_HEOS}. For $CO_2$, the equation of state can be extrapolated from the triple-point temperature down to $90\,\text{K}$ \citep{klimeck2000entwicklung,Kunz2012} without loss of accuracy, and we can thus cover the entire range of $p$ and $T$ in the Venus atmosphere. \begin{table}[ht] \begin{centering} \footnotesize \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Substance} & \multirow{2}{*}{Reference} & \multicolumn{2}{c|}{Range of validity} & Molar mass & $T_c$ & $\tilde{\rho}_c$ \tabularnewline \cline{3-4} & & $T$ [K] & Max. $p$ [MPa] & [$\text{kg}\cdot\text{kmol}^{-1}$] & [K] & [$\text{kmol}\cdot\text{m}^{-3}$] \tabularnewline \hline \hline $CO_2$ & \cite{Span1996} & $216 - 1100$ & $800$ & $44.0098$ & $304.1282$ & $10.6249$ \tabularnewline \hline $N_2$ & \cite{Span2000} & $63.151 - 1000$ & $2200$ & $28.01348$ & $126.192$ & $11.1839$ \tabularnewline \hline \end{tabular} \par\end{centering} \protect\caption{References and critical parameters of $CO_2$ and $N_2$ \label{tab:refs_consts_pure_CO2_N2_HEOS}} \end{table} \begin{comment} \begin{table}[ht] \begin{centering} \footnotesize \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Substance} & \multirow{2}{*}{Reference} & \multicolumn{2}{c|}{Range of validity} & Molar mass & $T_c$ & $\tilde{\rho}_c$ & $p_c$ \tabularnewline \cline{3-4} & & $T$ [K] & Max.
$p$ [MPa] & [$\text{kg\ensuremath{\cdot}kmo\ensuremath{l^{-1}}}$] & [K] & [$\text{kmol\ensuremath{\cdot m^{-3}}}$] & [MPa] \tabularnewline \hline \hline $CO_2$ & \cite{Span1996} & $216 - 1100$ & $800$ & $44.0098$ & $304.1282$ & $10.6249$ & $7.3773$ \tabularnewline \hline $N_2$ & \cite{Span2000} & $63.151 -1000$ & $2200$ & $28.01348$ & $126.192$ & $11.1839$ & $3.3958$ \tabularnewline \hline \end{tabular} \par\end{centering} \protect\caption{Parameters for Eq.\ref{eq:alpha_E_LJ1999} \label{tab:refs_consts_pure_CO2_N2_HEOS}} \end{table} \end{comment} The equation of state for the pure fluids is explicit in the dimensionless Helmholtz energy $\alpha$, using the reduced density and temperature as independent variables. \begin{equation} \frac{A_{i}(\rho,T)}{RT}=\alpha_{i}(\delta,\tau)=\alpha_{i}^{o}(\delta,\tau)+\alpha_{i}^{r}(\delta,\tau)\label{eq:eos_pure_fluid} \end{equation} where the subscript $i$ denotes the component of interest (i.e. $CO_2$ or $N_2$). In the above equation, $\delta$ is the mixture reduced density and $\tau$ is the mixture inverse reduced temperature when calculating the contribution of component $i$ to any mixture. When calculating the Helmholtz energy for a system containing only the pure fluid $i$, $\delta=\tilde{\rho}/\tilde{\rho}_{c}$ and $\tau=T_{c}/T$. This would also be obtained from the reducing functions of Eqs.\ref{eq:rho_red_LJ1999} and \ref{eq:T_red_LJ1999} for the LJ-1999 mixture model or Eqs.\ref{eq:rho_red_gerg2008} and \ref{eq:T_red_gerg2008} for the GERG-2008 mixture model, respectively. \paragraph{Nitrogen}The ideal gas Helmholtz energy of $N_{2}$ is given by \begin{equation} \alpha_{N_{2}}^{o}(\delta,\tau)=\ln\delta+a_{1}\ln\tau+a_{2}+a_{3}\tau+a_{4}\tau^{-1}+a_{5}\tau^{-2}+a_{6}\tau^{-3}+a_{7}\ln[1-\exp(-a_{8}\tau)]\label{eq:alpha_0_N2} \end{equation} where $a_{1}=2.5$, $a_{2}=-12.76953$, $a_{3}=-0.007841630$, $a_{4}=-1.934819\times10^{-4}$, $a_{5}=-1.247742\times10^{-5}$, $a_{6}=6.678326\times10^{-8}$, $a_{7}=1.012941$ and $a_{8}=26.65788$.
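The coefficients above can be checked numerically: from Eq.\ref{eq:cv_HEOS}, the ideal gas part gives $\tilde{c}_{v}^{o}/R=-\tau^{2}\,\partial^{2}\alpha^{o}/\partial\tau^{2}$ and $\tilde{c}_{p}^{o}=\tilde{c}_{v}^{o}+R$. The Python sketch below differentiates Eq.\ref{eq:alpha_0_N2} by central differences (production code would use the analytic derivatives) and recovers the familiar $c_{p}\approx29.1\,\text{J}\,\text{mol}^{-1}\,\text{K}^{-1}$ for nitrogen at $300\,\text{K}$.

```python
import math

# N2 ideal-gas Helmholtz energy, Eq. (alpha_0_N2); a[0]..a[7] are a_1..a_8
a = [2.5, -12.76953, -0.007841630, -1.934819e-4,
     -1.247742e-5, 6.678326e-8, 1.012941, 26.65788]

def alpha0_N2(tau):
    # the ln(delta) term is omitted: it carries no tau dependence
    return (a[0] * math.log(tau) + a[1] + a[2] * tau + a[3] / tau
            + a[4] / tau**2 + a[5] / tau**3
            + a[6] * math.log(1.0 - math.exp(-a[7] * tau)))

R, T_c, T = 8.314510, 126.192, 300.0
tau, h = T_c / T, 1e-5
# central second difference of alpha^o with respect to tau
d2 = (alpha0_N2(tau + h) - 2.0 * alpha0_N2(tau) + alpha0_N2(tau - h)) / h**2
cv0 = -tau**2 * d2 * R   # ideal-gas isochoric heat capacity, J/(mol K)
cp0 = cv0 + R            # ideal-gas isobaric heat capacity
```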
The residual gas Helmholtz energy of $N_{2}$ is given by \begin{dmath} \alpha_{N_{2}}^{r}(\delta,\tau)=\sum_{k=1}^{6}N_{k}\delta^{i_{k}}\tau^{j_{k}}+\sum_{k=7}^{32}N_{k}\delta^{i_{k}}\tau^{j_{k}}\exp(-\delta^{l_{k}})+\sum_{k=33}^{36}N_{k}\delta^{i_{k}}\tau^{j_{k}}\exp(-\psi_{k}(\delta-1)^{2}-\beta_{k}(\tau-\gamma_{k})^{2})\label{eq:alpha_r_N2} \end{dmath} The derivatives of $\alpha_{N_2}^r$ as required in the mixture model and the values of the parameters $N_k$, $i_k$, $j_k$, $l_k$, $\psi_k$, $\beta_k$, and $\gamma_k$ (for different values of $k$) are given in \ref{appendix_eos_N2}. \paragraph{Carbon Dioxide}The ideal gas Helmholtz energy of $CO_{2}$ is given by \begin{equation} \alpha_{CO_{2}}^{o}(\delta,\tau)=\ln\delta+a_{1}^{0}+a_{2}^{0}\tau+a_{3}^{0}\ln\tau+\sum_{i=4}^{8}a_{i}^{0}\ln[1-\exp(-\tau\theta_{i}^{0})]\label{eq:alpha_0_CO2} \end{equation} \begin{table}[H] \begin{centering} \begin{tabular}{ccr||ccr} \hline $i$ & $a_{i}^{0}$ & $\theta_{i}^{0}$ & $i$ & $a_{i}^{0}$ & $\theta_{i}^{0}$\tabularnewline \hline 1 & 8.37304456 & & 5 & 0.62105248 & 6.11190\tabularnewline 2 & -3.70454304 & & 6 & 0.41195293 & 6.77708\tabularnewline 3 & 2.50000000 & & 7 & 1.04028922 & 11.32384\tabularnewline 4 & 1.99427042 & 3.15163 & 8 & 0.08327678 & 27.08792\tabularnewline \end{tabular} \par\end{centering} \protect\caption{Parameters as in Eq.\ref{eq:alpha_0_CO2}} \end{table} The residual Helmholtz energy of $CO_{2}$ is given by \begin{dmath} \alpha_{CO_{2}}^{r}(\delta,\tau) = \sum_{i=1}^{7}n_{i}\delta^{d_{i}}\tau^{t_{i}}+\sum_{i=8}^{34}n_{i}\delta^{d_{i}}\tau^{t_{i}}\exp(-\delta^{c_{i}})+\sum_{i=35}^{39}n_{i}\delta^{d_{i}}\tau^{t_{i}}\exp(-\alpha_{i}(\delta-\epsilon_{i})^{2}-\beta_{i}(\tau-\gamma_{i})^{2}) +\sum_{i=40}^{42}n_{i}\Delta^{b_{i}}\delta\Psi\label{eq:alpha_r_CO2} \end{dmath} with \begin{eqnarray} \theta & = & (1-\tau)+A_{i}[(\delta-1)^{2}]^{1/(2\beta_{i})}\\ \Delta & = & \theta^{2}+B_{i}[(\delta-1)^{2}]^{a_{i}}\\ \Psi & = & \exp(-C_{i}(\delta-1)^{2}-D_{i}(\tau-1)^{2}) \end{eqnarray} The derivatives of
$\alpha_{CO_2}^r$, $\theta$, $\Delta$ and $\Psi$ as required in the mixture model and the values of the parameters $n_i$, $d_i$, $t_i$, $c_i$, $\alpha_i$, $\beta_i$, $\gamma_i$, $\epsilon_i$, $a_i$, $b_i$, $A_i$, $B_i$, $C_i$ and $D_i$ are given in \ref{appendix_eos_CO2}. \subsection{Ideal Mixture of Real Gases Model} \label{subsec:IMRG} We will now discuss how the approach in \citep{Seiff1980} can be followed using the equations of state in Helmholtz energy. To obtain the thermodynamic properties of an ideal mixture of real gases (IMRG), the first step is to neglect the contribution of non-ideal interactions between the different components in the mixture model. This can be done by setting $\alpha^E$ to zero. The non-dimensionalized Helmholtz free energy of the IMRG can then be written as \begin{equation} \alpha^{IMRG}(\tilde{\rho},T,\bar{x}) = \sum \limits_{i=1}^{n}x_{i}\left[\alpha_{i}^{o}(\tilde{\rho},T)+\ln x_{i} + \alpha_{i}^{r}(\delta_i,\tau_i)\right] \end{equation} where $\delta_i = \tilde{\rho}/\tilde{\rho}_{c,i}$ is the reduced density and $\tau_i = T_{c,i}/T$ is the inverse reduced temperature of component $i$. It is important to note that $\alpha^r_i$ is not a function of the mixture reduced density $\delta$ and mixture inverse reduced temperature $\tau$ here. These reduced values depend on mixing rules, which vary from one real gas mixture model to another, as we have seen in the case of the LJ-1999 and GERG-2008 models. The IMRG model must not depend on the mixing rule being used. This approach is similar to that followed in \citep{asmestandards2012}. The thermodynamic properties of the IMRG can be obtained by using the Gibbs--Dalton law, which is valid for ideal mixtures \begin{equation} p = \sum \limits_{i=1}^n x_i p_i ;\quad \tilde{c}_v = \sum \limits_{i=1}^n x_i \tilde{c}_{v,i} ;\quad \tilde{c}_p = \sum \limits_{i=1}^n x_i \tilde{c}_{p,i} \end{equation} where $p_i$ is the partial pressure of component $i$ at $\tilde{\rho}$ and $T$, which can be evaluated using Eq.\ref{eq:p_HEOS}.
Similarly, $\tilde{c}_{v,i}$ is the partial isochoric heat capacity and $\tilde{c}_{p,i}$ is the partial isobaric heat capacity of component $i$, which can be evaluated using Eqs.\ref{eq:cv_HEOS} and \ref{eq:cp_HEOS}. Specifically, we have \begin{equation} p= \sum \limits_{i=1}^n \tilde{\rho}RT x_i \left[1+ \delta_i \left(\frac{\partial\alpha^{r}_i}{\partial\delta_i}\right)_{\tau_i}\right] \label{eq:p_IMRG_HEOS} \end{equation} \begin{equation} \frac{\tilde{c}_{v}}{R}= - \sum \limits_{i=1}^n x_i \tau^{2}_i \left[\left(\frac{\partial^{2}\alpha^{o}_i}{\partial\tau^{2}_i}\right) + \left(\frac{\partial^{2}\alpha^{r}_i}{\partial\tau^{2}_i}\right)_{\delta_i}\right] \label{eq:cv_IMRG_HEOS} \end{equation} \begin{equation} \frac{\tilde{c}_{p}}{R}=\frac{\tilde{c}_{v}}{R} + \sum \limits_{i=1}^n x_i \frac{\left[1 + \delta_i\left(\frac{\partial\alpha^{r}_i}{\partial\delta_i}\right)_{\tau_i} - \delta_i \tau_i \left(\frac{\partial^{2}\alpha^{r}_i}{\partial\delta_i\partial\tau_i}\right)\right]^{2}}{1 + 2\delta_i \left(\frac{\partial\alpha^{r}_i}{\partial\delta_i}\right)_{\tau_i}+\delta^{2}_i\left(\frac{\partial^{2}\alpha^{r}_i}{\partial\delta^{2}_i}\right)_{\tau_{i}}} \label{eq:cp_IMRG_HEOS} \end{equation} \section{Density Solvers} \label{sec:Density_Solvers} As we have seen in the previous section, the independent variables for the mixture models in Helmholtz free energy are the amount of substance density $\tilde{\rho}$ and temperature $T$. When pressure $p$ and temperature $T$ are available instead, we need to solve Eqs.\ref{eq:p_HEOS} and \ref{eq:p_IMRG_HEOS} for the amount of substance density. We use MATLAB's built-in function \texttt{fzero} for root finding, which uses a combination of bisection, secant, and inverse quadratic interpolation methods. The equations for which we need to obtain roots (Eqs.\ref{eq:p_HEOS} and \ref{eq:p_IMRG_HEOS}) are highly nonlinear, and many roots are possible. It is thus important to ascertain which root $\tilde{\rho}$ is physically meaningful.
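The density solve can be sketched as follows; MATLAB's \texttt{fzero} is replaced here by a plain bisection in Python, and the pressure function is a made-up truncated-virial stand-in for the full Helmholtz energy expression of Eq.\ref{eq:p_HEOS} (the constant $B$ is purely illustrative), so this sketches the procedure rather than the actual equation of state.

```python
R = 8.314510  # J/(mol K)

def pressure(rho, T, B=-1.0e-4):
    # toy stand-in for p = rho*R*T*(1 + delta*alpha_r_delta):
    # ideal gas with a single virial-like correction (B is illustrative)
    return rho * R * T * (1.0 + B * rho)

def solve_density(p_target, T, rel_tol=1e-10):
    # ideal-gas estimate supplies the initial bracket, as in the text
    rho_ideal = p_target / (R * T)
    lo, hi = 0.5 * rho_ideal, 2.0 * rho_ideal
    f = lambda r: pressure(r, T) - p_target
    assert f(lo) * f(hi) < 0.0, "root not bracketed"
    while hi - lo > rel_tol * rho_ideal:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Venus-surface-like conditions (illustrative): ~9.2 MPa and 735 K
rho = solve_density(9.2e6, 735.0)   # mol/m^3
```

Keeping the bracket narrow around the ideal-gas estimate is one way to stay on the physically meaningful branch of a nonlinear equation of state.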
To identify the physically meaningful root, we follow the suggestions in \citep{gernert2014calculation}. MATLAB's root finding solver \texttt{fzero} requires an initial estimate or an interval for $\tilde{\rho}$ in which the root is believed to lie. This was generated using the ideal gas law or by exploring the range of Eqs.\ref{eq:p_HEOS} and \ref{eq:p_IMRG_HEOS} for different values of $p$ and $T$. It could also be done by using an SRK equation of state, as suggested in \citep{gernert2014calculation}. \section{Verification against Available Experimental Results\label{sec:Verification-against-Experiments}} We compare the approaches of the pure $CO_2$ model \citep{Staley1970}, the IMRG model \citep{Seiff1980}, the LJ-1999 model \citep{Lemmon1999} and the GERG-2008 model \citep{Kunz2012} against experimental data. Our main motivation is to show that the real gas mixture models perform better than the other approaches in predicting the thermodynamic properties of the real gas mixture $CO_{2}-N_{2}$. \citet{Staley1970} and \citet{Seiff1980} used the compilation of experimentally determined properties of $CO_{2}$ and $N_{2}$ found in \citet{Hilsenrath1955}. To account for more recent experiments, we use the equations of state for $CO_{2}$ \citep{Span1996} and $N_{2}$ \citep{Span2000}. For comparison, the uncertainty in the isobaric specific heat capacity data of $CO_{2}$ tabulated in \citep{Hilsenrath1955} is of the order of $\pm2.0\%$ for $220\,\text{K}\leq T\leq 600\,\text{K}$ at atmospheric pressure. This reflects the experimental data available at the time, which had low reliability. The uncertainty in $c_{p}$ for $CO_{2}$ as obtained from the equation of state in Helmholtz energy \citep{Span1996} is of the order of $\pm0.15\%$ at the same pressure when considered against more reliable experimental data.
Considering $N_{2}$, the uncertainty in the $c_{p}$ data obtained using the equation of state in Helmholtz energy \citep{Span2000} is of the order of $\pm0.3\%$, against the $\pm3.0\%$ uncertainty in the $c_{p}$ data of \citet{Hilsenrath1955} for $100\,\text{K}\leq T\leq 700\,\text{K}$ at atmospheric pressure. To compare the accuracy of the different models, we look at experiments that reported results for $CO_{2}-N_{2}$ mixtures with $x_{CO_{2}}>0.9$, since the main contention of the approach proposed by \citet{Seiff1980} was that non-ideal interactions between $CO_{2}$ and $N_{2}$ can be safely neglected for such mixtures. Table \ref{tab:Expt_CO2_N2} summarizes the literature that was used. \begin{table}[H] \begin{centering} \begin{tabular}{|r|c|c|c|c|} \hline {\footnotesize{}Reference} & {\footnotesize{}Type of Expt. Data} & {\footnotesize{}Pressure Range (MPa)} & {\footnotesize{}Temperature Range (K)} & {\footnotesize{}$x_{CO_{2}}$ (\%)}\tabularnewline \hline \hline {\footnotesize{}\citet{Brugge1989}} & {\footnotesize{}$p\rho T$} & {\footnotesize{}0.21-6.63} & {\footnotesize{}300-320} & {\footnotesize{}90.92}\tabularnewline \hline {\footnotesize{}\citet{Brugge1997}} & {\footnotesize{}$p\rho T$} & {\footnotesize{}1.03-69.09} & {\footnotesize{}285-450} & {\footnotesize{}90.92}\tabularnewline \hline {\footnotesize{}\citet{Ely1989}} & {\footnotesize{}$p\rho T$} & {\footnotesize{}2.26-33.10} & {\footnotesize{}250-330} & {\footnotesize{}98.20}\tabularnewline \hline {\footnotesize{}\citet{Mantovani2012}} & {\footnotesize{}$p\rho T$} & {\footnotesize{}1.00-20.00} & {\footnotesize{}303-383} & {\footnotesize{}90.21, 95.85}\tabularnewline \hline {\footnotesize{}\citet{Bishnoi1972}} & {\footnotesize{}$c_{p}$} & {\footnotesize{}3.45-14.48} & {\footnotesize{}313-363} & {\footnotesize{}93.23}\tabularnewline \hline \end{tabular} \par\end{centering} \protect\caption{Experimental Data for $CO_{2}$ Rich Mixtures of $CO_{2}-N_{2}$\label{tab:Expt_CO2_N2}} \end{table} \begin{comment}
The experimental values of pressure ($p$) and temperature ($T$) reported in \citet{Brugge1989} were used and not the maximum likelihood estimates provided for pressure. The deviations in isobaric heat capacities obtained for GERG-2008 model from experimental data in Fig.\ref{fig:Expt_Comp_Bishnoi} match with the results obtained in \cite{gernert2013new}. \end{comment} In Figures \ref{fig:Expt_Comp_Brugge1989}, \ref{fig:Expt_Comp_Brugge1997}, \ref{fig:Expt_Comp_Ely1989} and \ref{fig:Expt_Comp_Mantovani2012}, we look at the relative deviations of density calculated using the different models against the experimental data. The results indicate that the GERG-2008 mixture model is the most accurate for these temperature and pressure ranges, followed by the LJ-1999 mixture model, then the IMRG model and, lastly, the pure $CO_2$ equation of state. In addition to comparing the trends of the deviations, we can compare the percentage average absolute deviation in density (calculated over $N$ data points), which is given by \begin{equation} \text{AAD\%}_{calc - exp} = \frac{1}{N} \sum \limits_{i=1}^{N} 100 \frac{|\rho_{exp} - \rho_{calc}|}{\rho_{exp}} \end{equation} where $\rho_{exp}$ is the experimentally measured value of density and $\rho_{calc}$ is that predicted by the mixture model. For example, the $\text{AAD}\%$ values in density obtained from the different mixture models against the experimental data of \citep{Brugge1989} are: (i) GERG-2008 -- 0.0671, (ii) LJ-1999 -- 0.2446, (iii) IMRG -- 0.4842, and (iv) Pure $CO_2$ -- 5.7316. The real gas mixture models are also able to give accurate values of density for the $CO_2 - N_2$ mixture in the supercritical region. Considering $x_{CO_2}=0.9585$, the $\text{AAD}\%$ values in density obtained from the different mixture models against the experimental data of \citep{Mantovani2012} are: (i) GERG-2008 -- 1.3592, (ii) LJ-1999 -- 1.5421, (iii) IMRG -- 2.6761, and (iv) Pure $CO_2$ -- 9.8766.
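The $\text{AAD}\%$ metric defined above is straightforward to compute; the Python sketch below uses made-up density values purely to illustrate the definition.

```python
# Percentage average absolute deviation, as defined above.
def aad_percent(rho_exp, rho_calc):
    assert len(rho_exp) == len(rho_calc) and len(rho_exp) > 0
    return 100.0 / len(rho_exp) * sum(
        abs(e - c) / e for e, c in zip(rho_exp, rho_calc))

# Illustrative (made-up) data: a model that is off by a uniform 0.5 %
aad = aad_percent([100.0, 200.0, 400.0], [100.5, 201.0, 402.0])
```

which returns an $\text{AAD}\%$ of $0.5$.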
This indicates that the real gas mixture models can be used with confidence in calculating accurate values of the thermodynamic properties for the $CO_2 - N_2$ mixture, which exists in a supercritical state in the lower parts of the Venus atmosphere. \begin{figure}[H] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.55]{Figures/Expt_Comparisons/brugge1989_T300.eps} \end{subfigure} ~ \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.55]{Figures/Expt_Comparisons/brugge1989_T320.eps} \end{subfigure} \caption{Deviations of density calculated using the different models from experimental data for $x_{CO_2}=0.90921$ in \citep{Brugge1989}} \label{fig:Expt_Comp_Brugge1989} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.55]{Figures/Expt_Comparisons/brugge1997_T225_T300.eps} \end{subfigure} ~ \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.55]{Figures/Expt_Comparisons/brugge1997_T320_T450.eps} \end{subfigure} \caption{Deviations of density calculated using the different models from experimental data for $x_{CO_2}=0.90921$ in \citep{Brugge1997}} \label{fig:Expt_Comp_Brugge1997} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.55]{Figures/Expt_Comparisons/ely1989_T250_T310.eps} \end{subfigure} ~ \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.55]{Figures/Expt_Comparisons/ely1989_T315_T330.eps} \end{subfigure} \caption{Deviations of density calculated using the different models from experimental data for $x_{CO_2}=0.982$ in \citep{Ely1989}} \label{fig:Expt_Comp_Ely1989} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.6]{Figures/Expt_Comparisons/mantovani_N1.eps} \caption{$x_{CO_2}=0.9585$} \end{subfigure} ~ \begin{subfigure}[t]{0.5\textwidth} \centering
\includegraphics[scale=0.6]{Figures/Expt_Comparisons/mantovani_N2.eps} \caption{$x_{CO_2}=0.9021$} \end{subfigure} \caption{Deviations of density calculated using the different models from experimental data in \citep{Mantovani2012}} \label{fig:Expt_Comp_Mantovani2012} \end{figure} Lastly, we look at the relative deviations of isobaric specific heat capacity calculated using the different models against the experimental data of \citep{Bishnoi1972}. The trends in Figure \ref{fig:Expt_Comp_Bishnoi1972} show that the GERG-2008 and LJ-1999 mixture models are far more accurate than the IMRG model and the pure $CO_2$ equation of state at predicting values of $c_p$. The $\text{AAD}\%$ values in $c_p$ for the real gas mixture models against the experimental data of \citep{Bishnoi1972} are (i) GERG-2008 -- 1.7083, and (ii) LJ-1999 -- 2.1151. Through the comparison of the different mixture models against experimental data, we have seen that it is imperative to include the non-ideal interactions of $CO_2$ and $N_2$ when calculating the thermodynamic properties of the mixture. Moreover, this served as a verification of our implementation of the different real gas mixture models. The trends of the deviations in $\rho$ and $c_p$ obtained here closely match those in \citep{gernert2013new} for the GERG-2008 model and \citep{Lemmon1996} for the LJ-1999 model for sets of common experimental data.
\begin{figure}[H] \centering \includegraphics[scale=0.8]{Figures/Expt_Comparisons/bishnoi1972.eps} \caption{Deviations of isobaric heat capacities calculated using the different models from experimental data for $x_{CO_2}=0.9323$ in \citep{Bishnoi1972}} \label{fig:Expt_Comp_Bishnoi1972} \end{figure} \section{Adiabatic Lapse Rate\label{sec:Adiabatic-Lapse-Rate}} \subsection{LJ-1999, GERG-2008 and Pure $CO_2$ Models} Our starting point for calculating the adiabatic lapse rate is Eq.\ref{eq:adiabatic_lapse_rate} \begin{equation} \Gamma=-\frac{T}{\rho}\frac{\left(\frac{\partial p}{\partial T}\right)_{\rho}}{\left(\frac{\partial p}{\partial\rho}\right)_{T}}\left(\frac{g}{c_{p}}\right) \label{eq:adiabatic_lapse_rate2} \end{equation} The isobaric heat capacity $c_p$ can be computed using Eq.\ref{eq:cp_HEOS}. We further note that, from Eq.\ref{eq:p_HEOS} and using the definitions of the reducing functions, the different partial derivatives of pressure can be computed from \begin{align} \left(\frac{\partial p}{\partial T}\right)_{\rho} &= \tilde{\rho} R \left(1 + \delta \alpha^r_{\delta} - \delta \tau \alpha^r_{\delta\tau} \right) \label{eq:dp_dT_HEOS} \\ \left(\frac{\partial p}{\partial\rho}\right)_{T} &= \frac{RT}{M} \left(1 + 2\delta \alpha^r_{\delta} + \delta^2 \alpha^r_{\delta\delta} \right) \label{eq:dp_drho_HEOS} \end{align} The adiabatic lapse rate can then be computed using \begin{equation} \Gamma=-\frac{\left(1 + \delta \alpha^r_{\delta} - \delta \tau \alpha^r_{\delta\tau} \right)}{\left(1 + 2\delta \alpha^r_{\delta} + \delta^2 \alpha^r_{\delta\delta} \right)} \left(\frac{g}{c_{p}}\right) \label{eq:adiabatic_lapse_rate_HEOS} \end{equation} In the above expression, the acceleration due to gravity was assumed to vary only with altitude $z$ as $g=g_{o}\frac{R_{o}^{2}}{(R_{o}+z)^{2}}$, where $g_{o}=8.869\,\text{m/s}^{2}$ and the radius of Venus $R_{o}$ was taken to be $6052\,\text{km}$.
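Two pieces of the expression above are easy to isolate in a short Python sketch: the gravity profile $g(z)$ with the constants just given, and the ideal gas limit of Eq.\ref{eq:adiabatic_lapse_rate_HEOS}, in which $\alpha^{r}=0$, both bracketed factors reduce to unity and the magnitude of $\Gamma$ reduces to the familiar dry adiabatic value $g/c_{p}$. The $c_{p}$ value below is an illustrative number, not one computed from the mixture models.

```python
# Gravity profile g(z) = g0 * R0^2 / (R0 + z)^2 with the constants above.
def gravity(z_m, g0=8.869, R0=6052.0e3):
    return g0 * R0**2 / (R0 + z_m)**2

def lapse_rate_ideal(z_m, cp_mass):
    # |Gamma| = g / cp in the ideal-gas limit (alpha_r = 0), in K/m
    return gravity(z_m) / cp_mass

g_surface = gravity(0.0)                 # 8.869 m/s^2 at the surface
g_50km = gravity(50.0e3)                 # slightly smaller at 50 km altitude
Gamma = lapse_rate_ideal(0.0, 1181.0)    # cp ~ 1181 J/(kg K), illustrative
```

For this illustrative $c_{p}$, the lapse rate magnitude is roughly $7.5$ K/km.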
The atmospheric conditions of Venus are recorded in terms of pressure $p$ and temperature $T$. As a part of calculating $\Gamma$, the density $\rho$ needs to be determined. We follow the discussion in Sec.\ref{sec:Density_Solvers} and additionally consider the initial estimate of density from interpolated values of density reported in \citep{moroz1981atmosphere} for the altitude range of $0-100\,\text{km}$. \subsection{IMRG Model} We follow the same approach as discussed in \cite{Seiff1980} to calculate the adiabatic lapse rate for the ideal mixture of real gases (IMRG) model. The only difference is that the thermodynamic properties of the IMRG model are determined from equations of state in Helmholtz free energy, as discussed in Sec. \ref{subsec:IMRG}. The expression in Eq.\ref{eq:adiabatic_lapse_rate_Seiff} can be calculated using \begin{equation} \Gamma=b\left(\frac{g}{c_{p}}\right) \end{equation} where $c_{p}$ is calculated using Eq.\ref{eq:cp_IMRG_HEOS} and $b=-aT$ (with $a$ as defined in Eq.\ref{eq:a_Seiff}). For the IMRG model, $b$ can be calculated as \begin{align} b & = -\sum_{i}x_{i} \left[ \frac{T}{\rho} \left(\frac{\partial \rho}{\partial T} \right)_{p} \right]_{i}\\ & = -\sum_{i}x_{i} \left[ \frac{T}{\rho}\frac{\left(\frac{\partial p}{\partial T}\right)_{\rho}}{\left(\frac{\partial p}{\partial\rho}\right)_{T}} \right]_{i} \end{align} The expressions for the partial derivatives of $p$ for component $i$ can be written in a similar fashion to those in Eqs.\ref{eq:dp_dT_HEOS} and \ref{eq:dp_drho_HEOS}. The expression for the adiabatic lapse rate for the IMRG model then becomes \begin{equation} \Gamma=- \left( \sum_{i}x_{i} \left[ \frac{\left(1 + \delta_i \alpha^r_{\delta_i} - \delta_i \tau_i \alpha^r_{\delta_i \tau_i} \right)}{\left(1 + 2\delta_i \alpha^r_{\delta_i} + \delta^2_i \alpha^r_{\delta_i \delta_i} \right)} \right] \right) \left(\frac{g}{c_{p}}\right) \label{eq:adiabatic_lapse_rate_IMRG} \end{equation} where $\alpha^r$ is different for each component $i$.
\section{Results and Discussion \label{sec:Results}} \cite{oyama1980pioneer} reported a vertical gradient of $N_2$ between $22$ and $52\,\text{km}$ altitude. However, within experimental uncertainty, we consider the Venus atmosphere to be a real gas binary mixture of $CO_{2}-N_{2}$ in a constant mixing ratio of $96.5:3.5$ by mole fraction \citep{VonZahn1983}; thus, we neglect any vertical variation in the composition. An atmospheric model of Venus was created in \citep{Seiff1985} using the measurements obtained from the four Pioneer Venus probes. Details of the profiles measured by these probes can be found in \citep{Seiff1980}. The vertical profile of the adiabatic lapse rate for this atmospheric model was computed with the different mixture models discussed above, using Eqs. \ref{eq:adiabatic_lapse_rate_HEOS} and \ref{eq:adiabatic_lapse_rate_IMRG}. The results for the GERG-2008 mixture model, which was shown to be the most accurate mixture model against experimental data in Section \ref{sec:Verification-against-Experiments}, are shown in Figure \ref{fig:lapse_rate_Seiff1985_Linkin1987}. The adiabatic lapse rate decreases with decreasing pressure and temperature from the surface, by almost $1.5$ K/km between the surface and $50$ km, and increases by the same amount over the next $20$ km between $50$ and $70$ km. Figure \ref{fig:dev_adiabatic_lapse_rate_Seiff1985} shows the difference between the \cite{Seiff1985} adiabatic lapse rate, computed for an ideal mixture of $CO_2 - N_2$ (i.e. ignoring the real gas $CO_2-N_2$ interactions), and the adiabatic lapse rate computed from the GERG-2008 model. The differences are as high as $0.02$ K/km around $20$ km. 
This difference is large enough to reclassify layers of the atmosphere that are close to neutral stability: layers initially thought to be stable may in fact be unstable. This shows the importance of taking the non-ideal interactions in the real gas mixture into account and of using more recent experimental data on $CO_2$ and $N_2$ represented by their equations of state. The VeGa 2 temperature profile \citep{Linkin1987} is the only one that provides measurements below $12$ km, and the adiabatic lapse rates corresponding to this profile computed from the GERG-2008 model are shown in Figure \ref{fig:lapse_rate_Seiff1985_Linkin1987}. The corresponding static stability profiles are shown in Figure \ref{fig:profiles_static_stability_Seiff1985_Linkin1987}; these were calculated using \begin{equation} \Delta\Gamma=\left(\frac{dT}{dz}\right)_{meas}-\left(\frac{dT}{dz}\right)_{ad} = \left(\frac{dT}{dz}\right)_{meas} + \Gamma \end{equation} where $\left(\frac{dT}{dz}\right)_{meas}$ is the gradient of the measured temperature with respect to altitude, computed using a second order centered scheme from the available temperature measurements. The nonlinear Savitzky-Golay filter was applied to the static stability profile computed for \citep{Linkin1987} with a span of $11$ points to remove spurious oscillations. Two superadiabatic layers are seen: one near the surface at about $4$ km and another at about $17$ km. A layer of near-neutral or even slightly unstable stratification is also seen in the VeGa 2 profile between $50$ and $54$ km. In the static stability profile for \citet{Seiff1985}, the atmosphere is stable near the surface. This difference between the static stability plots in Figure \ref{fig:profiles_static_stability_Seiff1985_Linkin1987} can be explained by the difference in adiabatic lapse rates near the surface, for altitudes of $0-15\,\text{km}$, seen in Figure \ref{fig:lapse_rate_Seiff1985_Linkin1987}. 
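The static stability computation just described (second-order centered temperature gradient plus an 11-point Savitzky-Golay smoothing) can be sketched as below. The polynomial order of the filter is an assumption, since the text states only the span:

```python
import numpy as np
from scipy.signal import savgol_filter

def static_stability(z, T_meas, Gamma, window=11, polyorder=2):
    """Delta Gamma = (dT/dz)_meas + Gamma.

    np.gradient uses second-order centered differences in the interior,
    matching the scheme described in the text; the result is smoothed
    with a Savitzky-Golay filter of the stated 11-point span."""
    dTdz = np.gradient(T_meas, z)
    return savgol_filter(dTdz + Gamma, window, polyorder)
```

An exactly adiabatic profile, where the measured gradient equals $-\Gamma$, yields zero static stability everywhere.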
To obtain the adiabatic lapse rate and static stability for the higher altitudes in the Venus atmosphere, we use the $pT$ profiles (Figure \ref{fig:profile_Magellan1994}) obtained from radio occultation studies with the Magellan spacecraft \citep{steffes1994radio,jenkins1994radio}. The temperature profile used for the calculation of adiabatic lapse rate and static stability is from orbit 3212 of the spacecraft and is shown in Figure \ref{fig:profile_Magellan1994}. The results obtained for the adiabatic lapse rate are shown in Figure \ref{fig:adiabatic_lapse_rate_Magellan1994} and for the static stability in Figure \ref{fig:static_stability_Magellan1994}. The vertical profile of static stability obtained using the GERG-2008 mixture model is similar to the one obtained in \citep{hinson1995magellan}; the differences arise because \citep{hinson1995magellan} used values of $\Gamma$ from \citep{Seiff1980}. From the infrared spectrometry data onboard Venera-15 \citep{Zasova2006}, it was observed that there are spatial and temporal variations in the upper atmosphere. To fully understand the convective stability in the Venus atmosphere, we take these into consideration when calculating adiabatic lapse rate and static stability. Figure \ref{fig:Results_Zasova2006} shows the profiles of adiabatic lapse rate and static stability for latitudes $\phi < 35^{\circ}$ and for various solar longitudes. Not only are there clear variations in the magnitude of static stability from $75 - 100$ km, but we also observe that the atmosphere is unstable from $50-52$ km for solar longitudes $L_S = 270^\circ - 310^\circ$ and stable otherwise. This indicates the importance of considering the variation in adiabatic lapse rate with both altitude and latitude. 
\begin{figure}[H] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.4]{Figures/profile2_T_vs_z_Seiff1985_Linkin1987.eps} \vspace{-1cm} \caption{Profiles of temperature with altitude} \label{fig:temperature_Seiff1985_Linkin1987} \end{subfigure} ~ \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.4]{Figures/profile2_lapse_rate_vs_z_Seiff1985_Linkin1987.eps} \vspace{-1cm} \caption{Profiles of adiabatic lapse rate with altitude} \label{fig:lapse_rate_Seiff1985_Linkin1987} \end{subfigure} \caption{Comparison of profiles of temperature and adiabatic lapse rate computed using the GERG-2008 mixture model for the VeGa-2 Lander \citep{Linkin1987} and the VIRA model \citep{Seiff1985} constructed from the four Pioneer Venus probes' data \citep{Seiff1980}} \label{fig:profiles_Seiff1985_Linkin1987} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.39]{Figures/Seiff1980/dev2_GERG2008_Seiff1985.eps} \vspace{-0.75cm} \caption{$\Gamma^{\text{Seiff1980}} - \Gamma^{\text{GERG2008}}$: Difference in adiabatic lapse rates computed for the VIRA model using the GERG-2008 model and that calculated in \citep{Seiff1985}} 
\label{fig:dev_adiabatic_lapse_rate_Seiff1985} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.4]{Figures/profile2_static_stability_vs_z_Seiff1985_Linkin1987.eps} \caption{Comparison of profiles of static stability with altitude in the Venus atmosphere computed using the GERG-2008 mixture model considering the profiles measured by the VeGa-2 Lander \citep{Linkin1987} and the VIRA model \citep{Seiff1985} constructed from the four Pioneer Venus probes' data \citep{Seiff1980}} \label{fig:profiles_static_stability_Seiff1985_Linkin1987} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.4]{Figures/Magellan1994/profile_T_vs_z_Magellan1994.eps} \caption{Profile of temperature with altitude of orbit 3212 of the Magellan spacecraft \citep{steffes1994radio,jenkins1994radio}} \label{fig:profile_Magellan1994} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[t]{0.5\textwidth} \includegraphics[scale=0.4]{Figures/Magellan1994/adiabatic_lapse_rate_Magellan1994_GERG2008.eps} \caption{Profile of adiabatic lapse rate with altitude} \label{fig:adiabatic_lapse_rate_Magellan1994} \end{subfigure} ~ \begin{subfigure}[t]{0.5\textwidth} \includegraphics[scale=0.4]{Figures/Magellan1994/static_stability_Magellan1994_sgolay.eps} \caption{Profile of static stability with altitude} \label{fig:static_stability_Magellan1994} \end{subfigure} \caption{Profiles of adiabatic lapse rate and static stability with altitude in the Venus atmosphere considering the profile of orbit 3212 of the Magellan spacecraft \citep{steffes1994radio,jenkins1994radio} calculated using the GERG-2008 mixture model} \label{fig:Results_Magellan1994} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.4]{Figures/Zasova2006/adiabatic_lapse_rate_Zasova2006_phi35.eps} \caption{Profile of adiabatic lapse rate with altitude} \label{fig:adiabatic_lapse_rate_Zasova2006} \end{subfigure} ~ 
\begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.4]{Figures/Zasova2006/static_stability_Zasova2006_phi35.eps} \caption{Profile of static stability with altitude} \label{fig:static_stability_Zasova2006} \end{subfigure} \caption{Profiles of adiabatic lapse rate and static stability with altitude as a function of solar longitude in the upper atmosphere of Venus, calculated using the GERG-2008 mixture model, considering the Venera 15 Fourier Spectrometer data \citep{Zasova2006}} \label{fig:Results_Zasova2006} \end{figure} \section{Conclusion and Future Work\label{sec:Conclusion-and-Future}} We have calculated more accurate values of the adiabatic lapse rate for a mixture of $96.5\%$ carbon dioxide and $3.5\%$ nitrogen using the GERG-2008 mixture model for the temperature and pressure conditions found in the Venus atmosphere. We were able to account for the difference in adiabatic lapse rate values due to non-ideal interactions between $CO_2$ and $N_2$. Near an altitude of $20\,\text{km}$, the magnitude of our value is about $0.02\,\text{K/km}$ lower than the approximate value calculated by \cite{Seiff1980}. We showed the importance of considering spatial variations in adiabatic lapse rate with latitude and altitude as well as temporal variations. These calculations can also be performed considering the Venus atmosphere composition to vary with altitude, to reflect the measured differences in the composition. It was shown in \citep{oyama1980pioneer} that the abundance of nitrogen in the atmosphere can be as high as $4.6 \, \text{v}\%$ at $51.6\,\text{km}$, and more recent studies \citep{peplowski2016nitrogen} have reported higher values of $5.38 \, \text{v}\%$ at $60-70\,\text{km}$. Further, considering the gradient in molecular weight with altitude will alter all available profiles of $T(p)$ for occultation and entry probe measurements, and $T(z)$ for non-occultation results. 
Moreover, this approach can be applied to other planets or moons, such as Saturn's largest moon Titan, which has an atmosphere composed mainly of nitrogen and methane. \section*{Acknowledgments} Arkopal Dutt acknowledges support from the Indo US Science and Technology Foundation for the S.N. Bose Scholarship at the University of Wisconsin-Madison. Funding from NASA Grant NNX09AE85G for completion of this work is acknowledged.
\section{Introduction} \label{sec:introduction} The HORUS spectrometer (\textbf{H}igh efficiency \textbf{O}bservatory for $\gamma$-\textbf{R}ay \textbf{U}nique \textbf{S}pectroscopy) \cite{Linn05b} is located at the 10 MV Tandem ion accelerator at the Institute for Nuclear Physics in Cologne. It consists of 14 high-purity germanium (HPGe) detectors for high-resolution $\gamma$-ray spectroscopy. Six of these HPGe detectors can be equipped with BGO shields for active Compton suppression. In addition, two of the standard HPGe detectors can be replaced by Clover-type detectors \cite{Duch99} for Compton polarimetry experiments \cite{Butl73, Simp83}. The coincident detection of charged particles in the exit channel of a nuclear reaction and of the deexciting $\gamma$-rays provides valuable additional information. For example, in inelastic scattering experiments (e.g. (p,p$^{\prime}\gamma$), (d,d$^{\prime}\gamma$)) or transfer reactions (e.g. (p,d$\gamma$)), the excitation energy of the target nucleus can be derived on an event-by-event basis from the energy of the charged particle in the exit channel \cite{Catf86}. For this purpose, the particle detector array SONIC \cite{Pick14} can be embedded within the HORUS spectrometer. SONIC houses up to eight $\Delta$E-E silicon sandwich detectors which allow particle identification. To process up to 36 detector channels, the analog data acquisition system was replaced by a digital data acquisition system based on the DGF-4C Rev. F modules manufactured by XIA LLC \cite{Warb00,Skul00,Hubb99}. This is a CAMAC-based module that comprises four complete spectroscopic channels. The DGF-4C modules have been extensively used for the data acquisition at the Miniball spectrometer \cite{Warr13}, where Rev. D modules are used. Newer revisions of the DGF-4C modules, which possess a USB connector for fast data readout, are available nowadays. 
In contrast to the conventional analog signal-processing approach, the preamplifier signal is directly digitized and hence all spectroscopic information, e.g. energy and time information, can be extracted using digital filter algorithms \cite{Jord94}. With the HORUS spectrometer and the combined setup of HORUS and SONIC, many aspects of nuclear physics can be studied. As examples we mention two applications: the study of the Pygmy Dipole Resonance (see \cite{Savr13} and references therein) and the in-beam investigation of nuclear reactions relevant for nuclear astrophysics (see \cite{Kaep11, Raus13, Arno07} and references therein). Both types of application require the detection of $\gamma$-rays in the range from 5 to 15 MeV. It turned out that, when operating the new data acquisition system over such an energy range (denoted as high dynamic range in the following), significant distortions of the peak shape, e.g. double- or even multiple-peak structures, evolve in the $\gamma$-ray spectra. Fig. \ref{fig:uncorrected} shows an excerpt of a $\gamma$-ray spectrum obtained with a $^{226}$Ra calibration source. The double-peak structure of the peaks is obvious. As already pointed out previously \cite{Vent01, Pasc13, Laue04}, the observed spectral distortions can be traced back to the digitization process, in particular to the differential nonlinearities (DNL) of the analog-to-digital converters (ADC) used in the spectroscopy system. \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_01.png} \end{center} \caption{Low-energy part of a $\gamma$-ray spectrum obtained with a $^{226}$Ra calibration source. The spectrum was taken with a 20\% coaxial HPGe detector at an input count-rate of 23 kcps. The dynamic energy range was set to 12.8 MeV. 
The double-peak structure of the peaks is evident.} \label{fig:uncorrected} \end{figure} In this paper, a new method is presented to correct the effect of the DNL of subranging ADCs in the $\gamma$-ray spectra by means of an offline correction algorithm. The correction algorithm requires a calibration procedure for each individual channel, together with an online pulse-shape analysis (PSA). In section 2, the principle of digital pulse-processing with the DGF-4C is shortly presented. In section 3 the effect of the DNL on measured $\gamma$-ray spectra will be investigated systematically. The newly developed correction algorithm will be presented in section 4 and the results achieved with the new algorithm will be discussed in section 5. \section{Signal processing in the DGF-4C} \label{sec:processing} In this section, the signal processing in the DGF-4C is shortly sketched. We restrict ourselves to the parts which are relevant for the effect and correction of the DNL. A more detailed description of the signal-processing technique in the DGF-4C can be found e.g. in Refs. \cite{Warb00,Skul00,Hubb99}. The preamplifier signals from the semiconductor detectors are characterized by a rising edge with a typical length of a few tens of ns followed by an exponential decay with a time constant of usually about 50~$\mu$s. The signal is directly connected to the input channels of the DGF-4C modules. The signal passes an analog gain and offset stage as well as a Nyquist filter, before it is digitized in an 80 MHz, 14-bit sampling ADC, the AD6645 from Analog Devices. Since this component is the origin of the DNL, it will be further discussed in section 3. The digital filtering of the digitized preamplifier signal is based on the moving window deconvolution (MWD) technique \cite{Geor93}, which is implemented on field programmable gate arrays (FPGAs). 
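For the numerical illustrations that follow, the preamplifier signal just described can be modelled with a simple sketch. It uses the typical values quoted in the text (a rise of a few tens of ns, a 50~$\mu$s decay constant, 80 MHz sampling); the linear rising edge is a simplifying assumption:

```python
import math

def preamp_pulse(n, t_step, amplitude=1.0, tau=50e-6, rise=40e-9, dt=12.5e-9):
    """n samples of an idealised preamplifier pulse sampled every dt = 12.5 ns
    (80 MHz): zero baseline, a linear rise of length `rise` starting at sample
    index t_step, then an exponential decay with time constant tau."""
    out = []
    for i in range(n):
        t = (i - t_step) * dt
        if t < 0:
            out.append(0.0)                               # flat baseline
        elif t < rise:
            out.append(amplitude * t / rise)              # rising edge
        else:
            out.append(amplitude * math.exp(-(t - rise) / tau))  # decay
    return out
```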
The height of the preamplifier signal, which contains the information on the energy deposited in the active volume of the detector, can be obtained by taking the average of several samples of the digitized signal before and after the rising edge of the signal. This can be expressed by the following equation \begin{equation} \label{eq:trapez} V=-\sum\limits_{j=k-2L-G+1}^{k-L-G}V_j + \sum\limits_{j=k-L+1}^k V_j, \end{equation} \noindent where $L$ denotes the number of samples to be averaged while $G$ accounts for the width of the rising edge of the input signal. $V_k$ is the ADC value $L$ samples after the rising edge. The value $V$ is continuously calculated from the ADC sample stream $V_j$ \cite{Skul00}. Applying this type of filter to a step-like input signal results in a trapezoidal filter response \cite{Jord94, Skul00}, with rising and falling slopes of width $L$ and a flat top of width $G$. Two trapezoidal filter algorithms are applied to the digitized preamplifier signal: a fast filter, which has a rather short filter length (usually the filter parameters $L$ and $G$ are in the range of a few tens of ns), is used for triggering, time determination and pile-up rejection. The height of the input signal is extracted using a slow filter, for which $L$ and $G$ are typically in the range of $1-10~\mu$s and $0.5-1~\mu$s, respectively. If an event is validated at the end of the slow filter, the filter sums are read out by a digital signal processor (DSP), which then determines the energy and time information from the provided filter sums, including a correction for ballistic deficit. Furthermore, the DSP is capable of applying algorithms for an online pulse-shape analysis (PSA), if parts of the digitized preamplifier signals have been captured in the FIFO. Finally, onboard pulse-height spectra are recorded by the DSP and listmode data are written to buffers, which can then be read out via the USB interface. 
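Eq.\ref{eq:trapez} can be sketched directly as a running sum over the sample stream; applied to an ideal step of height $A$, it yields a trapezoid whose flat top equals $L \cdot A$. This is an illustrative sketch, not the FPGA implementation:

```python
def trapezoidal_filter(samples, L, G):
    """Running MWD/trapezoidal filter, Eq. (trapez):
    V_k = -sum(V_{k-2L-G+1} .. V_{k-L-G}) + sum(V_{k-L+1} .. V_k)."""
    out = []
    for k in range(2 * L + G - 1, len(samples)):
        late = sum(samples[k - L + 1:k + 1])                   # L samples ending at k
        early = sum(samples[k - 2 * L - G + 1:k - L - G + 1])  # L samples before the gap G
        out.append(late - early)
    return out
```

Dividing the flat-top value by $L$ recovers the step height, which is the averaging described in the text.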
In addition, it is possible to capture and read out a series of digitized samples via a First In First Out buffer (FIFO). In order to operate several modules synchronously, the clock of one module can be defined as a master and can be distributed among the others. Global event building for all modules is finally performed with an offline sorting algorithm. \section{Systematic investigation of the DNL and its effect on $\gamma$-ray spectra} \label{sec:analysis} The effect of the DNL on the peak shape and linearity was investigated using pulses produced by a BNC PB-5 programmable pulse generator and with a standard coaxial HPGe detector from ORTEC with a relative efficiency of 20\%. Fig. \ref{fig:block} shows a simplified block diagram of the AD6645 which is used in the DGF-4C modules of Rev. F. The AD6645 is a subranging, pipelined ADC that includes a 5-bit ADC1, followed by a 5-bit ADC2 and a 6-bit ADC3. Two of the bits are used for a digital error-correction logic, so that the effective depth of the whole ADC amounts to 14 bit. \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_02.png} \end{center} \caption{Simplified block diagram of the 14-bit, 80 MSPS ADC AD6645 which is used for the digitization of the preamplifier signal in the DGF-4C, from \cite{Kest06}. The subranging, pipelined ADC includes two 5-bit ADCs (ADC1 and ADC2) followed by a 6-bit ADC (ADC3). Significant DNL distortions occur at the ADC1 transition points \cite{Kest06}.} \label{fig:block} \end{figure} The DNL of this specific ADC was already measured by W. Kester \cite{Kest06} (see especially Fig. 9 in Ref. \cite{Kest06}). There are $2^5=32$ ADC1 transition points which are $2^9=512$ ADC channels apart. Significant DNL errors were found at these transition points with the expected spacing of 512 LSB. In this particular case, the DNL error amounts to about 1.5 LSB. In a schematic way, the effects on the transfer function of the ADC (analog input code vs. 
digital output code) in the presence of a DNL are pointed out (see Fig. 8 in Ref. \cite{Kest06}); ideally, the transfer function should be of uniform step-like character. In \cite{Pasc13} the DNL of the AD6645 was measured in a different manner. A signal from a pulse generator with the shape of a preamplifier signal and of variable height was uniformly distributed over the top half of the ADC range. In the absence of DNL, the resulting pulse-height spectrum would be a uniform distribution. In contrast, deviations of up to 5\% of the average number of counts per bin have been observed. To confirm these results, a similar measurement was performed with the pulse generator connected to the DGF-4C, where a preamplifier-like pulse of variable height was uniformly covering the whole ADC range. The resulting pulse-height spectrum is shown in Fig. \ref{fig:ramped}, showing similar results to those obtained in \cite{Pasc13} (see Fig. 6 in \cite{Pasc13}). \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_03.png} \end{center} \caption{Pulse-height spectrum obtained with a high-precision pulse generator in the range of the lowest 5000 ADC channels. The pulse height is ramped uniformly across the ADC range. A measurement with an ADC showing no DNL would result in a flat spectrum. In contrast, regular distortions are observed, indicated by the arrows.} \label{fig:ramped} \end{figure} Nevertheless, DNL errors are not limited to this particular ADC. The effect of the DNL using other types of fast pipeline ADCs has been investigated as well \cite{Vent01}. Multi-peak structures have been observed when analysing the pulse height of signals of fixed amplitude generated by a pulse generator, starting from different offset values \cite{Vent01}. Hence, the DNL effects become more significant at high count rates, where the baseline varies from one pulse to another as each pulse can be located on the tail of a previous one \cite{Pasc13}. 
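Why the measured pulse height depends on the baseline can be illustrated with a toy transfer function: an ideal quantiser plus a fixed code error at every ADC1 transition. The numbers below (2 LSB error every 512 codes) are illustrative assumptions, not measured AD6645 values:

```python
def adc_with_dnl(v, dnl_lsb=2, spacing=512, nbits=14):
    """Toy subranging-ADC transfer function: the ideal output code is shifted
    by dnl_lsb additional codes at every ADC1 transition (every `spacing`
    codes), mimicking the step errors sketched in Figs. 8 and 9 of the
    Kester note."""
    code = int(v)
    error = dnl_lsb * (code // spacing)    # cumulative effect of the DNL
    return max(0, min((1 << nbits) - 1, code + error))

def measured_height(baseline, amplitude):
    """Pulse height as the ADC sees it: top of the pulse minus baseline."""
    return adc_with_dnl(baseline + amplitude) - adc_with_dnl(baseline)
```

Two pulses of identical amplitude give different digitised heights depending on whether they straddle a transition point, which is exactly the mechanism behind the double-peak structures.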
To further investigate the count-rate dependence and thus the dependence on the baseline, waveforms were captured in a measurement with the HPGe detector using a $^{226}$Ra calibration source. From this measurement an average ADC value right before the rising edge of the pulse was extracted to obtain a baseline value for each individual event. In Fig. \ref{fig:matrix}, these baseline values are shown together with the energy values computed by the DGF-4C in a 2D matrix. This figure confirms the observations from \cite{Vent01} and \cite{Pasc13} that the double-peak structures seen in the $\gamma$-ray spectrum (which would be a projection of the matrix to the vertical axis) arise from baseline variations due to high count rates. Fig. \ref{fig:countrate} shows a systematic investigation of the count-rate dependence. The left panel shows the energy spectrum (y-axis projection of Fig. \ref{fig:matrix}) around the 609-keV $\gamma$-ray transition stemming from the $^{226}$Ra calibration source for different input count-rates, together with the corresponding baseline distribution (x-axis projection of Fig. \ref{fig:matrix}) in the right panel. The double-peak structure is more pronounced with increasing width of the baseline distribution. \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_04.png} \end{center} \caption{Intensity plot of the measured $\gamma$-ray energy versus the baseline value (see text for further details). A projection of the matrix towards the y-axis would result in a $\gamma$-ray spectrum. The observed double-peak structure originates from the discontinuities showing up at, e.g., a baseline value of 168 ADC units. The lower panel is a closer look at the energy region around 354~keV.} \label{fig:matrix} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_05.png} \end{center} \caption{Evolution of the spectral distortions with increasing count rate. 
The left panel shows the $\gamma$-ray spectrum around the peak at 609 keV, stemming from the $^{226}$Ra calibration source. The right panel shows the corresponding baseline distribution, which gets broader with increasing input count-rate. The double-peak structure gets more pronounced with increasing width of the baseline distribution.} \label{fig:countrate} \end{figure} The uppermost panel of Fig. \ref{fig:countrate} shows a nearly undistorted $\gamma$-ray peak in case of a narrow baseline distribution (low count rates). Nevertheless, from the discussion in \cite{Kest06}, one would expect that the DNL also causes integral nonlinearity (INL), i.e. deviations from a linear energy calibration. For an input count-rate of 1 kcps, using a $^{226}$Ra calibration source and an energy range of 12.8~MeV, Fig. \ref{fig:uncorr_lin} shows the deviation from a linear energy calibration as a function of $\gamma$-ray energy (dotted lines). Deviations up to $\pm 2~\mathrm{keV}$ are observed. It has to be emphasized that this deviation occurs although no double peaks are observed in the $\gamma$-ray spectra. In contrast, the deviation has almost completely vanished for a dynamic range of 2.6 MeV (solid lines). \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_06.png} \end{center} \caption{Deviation of the measured peak positions from a linear fit to the data. For a dynamic range of 12.8 MeV (connected with dotted lines) the deviation reaches 2 keV and reflects the error function of the transfer function of the AD6645 (Fig. 8 in Ref. \cite{Kest06}). Almost no deviation is observed in the measurement with a dynamic range of 2.6~MeV (connected with solid lines).} \label{fig:uncorr_lin} \end{figure} The effect of the DNL depending on the number of bits of the sampling ADC has already been discussed \cite{Vent01}. 
To be more precise, the effect should rather be discussed in terms of the number of ADC values per energy range instead of the total number of bits of the sampling ADC only. Simply speaking, an ADC with a depth of 12~bit and a dynamic range of 4~MeV would give the same spectral distortions as a 14-bit ADC operated with a maximum energy range of 16~MeV. Therefore, it is intuitive to introduce the ADC channel width $W$ in absolute energy units of keV, defined as the ratio of the dynamic range and the number of possible ADC values: operating a 14-bit ADC at a dynamic range of 16 MeV would correspond to a channel width of $W=0.98$~keV. Nevertheless, since the ADC used in the measurements of this paper is a 14-bit ADC, only a change of the dynamic energy range will affect the channel width in the following. The influence of the DNL is strongly reduced with increasing number of ADC values per energy range, i.e. with reduced channel width \cite{Vent01}. This is illustrated in Fig. \ref{fig:gain}, where the energy spectrum in the region of the 609-keV $\gamma$ transition from a $^{226}$Ra calibration source is shown for different channel widths. The degradation of the peak shape is evident. To conclude the investigations on the DNL: the effect becomes more significant with increasing channel width, i.e. with a decreasing number of ADC values per energy range. If the baseline distribution is broadened, double-peak structures evolve for large channel widths. Though no double-peak structures appear for narrow baseline distributions, deviations from a linear energy calibration are observed for large channel widths. \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_07.png} \end{center} \caption{Evolution of spectral distortions with increasing channel width $W$. The individual figures show an excerpt of the spectrum around the 609-keV peak. With increasing channel width, the peak is broadened ($W=0.39~$keV) and finally, double-peak structures start evolving ($W=0.78~$keV and $W=0.82~$keV). 
The input count rate was 23.9 kcps for all measurements.} \label{fig:gain} \end{figure} \section{The DNL correction algorithm} A common technique to overcome DNL in ADCs is the sliding-scale method \cite{Cott63} and, in the case of fast pipeline ADCs, the dithering technique \cite{Kest06}. In the latter case, an additional noise signal is added to the analog input signal. After the digitizing process, the additional signal is subtracted from the digitized sum signal. The net effect is that the DNL errors of the ADC are randomly spread across the ADC range. Despite the impressive results of this technique \cite{Kest06}, it is not applicable in this case, since the amplitude of the additional signal to be added would have to span the whole ADC1 range \cite{Laue04}. Adding such an amount of noise would unacceptably affect the energy resolution of the $\gamma$-spectroscopy system. In a different approach \cite{Laue04}, the digital value obtained in each subranging ADC is extended by two fractional bits, which are fitted to the $\gamma$-ray spectra and stored in a look-up table. Though this technique yields impressive results for individual cases, this approach turned out to be impractical in the present application: implementing a look-up table on the presently used FPGA would by far exceed its available memory, whereas a transfer of the samples to the DSP to perform a correction at that level would incur a large amount of dead time. On the other hand, an offline correction of each ADC sample would require the storage of a huge amount of digitized samples on hard disk and would thus incur a large amount of dead time as well. However, this technique might become applicable with the development of FPGAs with larger capacity. For the correction presented in this work, an offline event-by-event correction algorithm was developed. The principle of this algorithm is to compare the position of each individual preamplifier pulse with the regions of the ADC showing DNL errors. 
The location of the preamplifier pulse in the ADC range is determined using the pulse-shape analysis (PSA) mode of the DGF-4C modules. For every pulse, the PSA mode is used to average a couple of sampling points right before the rising edge of the pulse and a couple of sampling points right after the rising edge. Both values are computed online for each pulse and stored additionally in the listmode data. The former value will be denoted as \textit{baseline value}, the latter as \textit{peak value} in the following. The correction procedure can be divided into two parts: in a first step, a calibration of each input channel has to be performed to locate the regions of the ADC where the DNL error is present. In a second step, the position of each pulse within the ADC range is compared to these regions and the pulse height is corrected accordingly. The two steps will be discussed in more detail in the next sections. \subsection{Calibration procedure} \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_08.png} \end{center} \caption{Section of the measured pulse height as a function of the baseline value. A preamplifier-like signal produced with a high-precision pulse generator was fed into one input channel of a DGF-4C. The offset of the signal was constantly increased using the internal offset stage of the DGF-4C. 32 peaks are observed, corresponding to the 32 ADC1 transition points. Each of the individual peaks shows a trapezoidal shape.} \label{fig:lattenzaun} \end{figure} For the calibration procedure, a preamplifier-like signal produced by a pulse generator is fed into the DGF-4C. If the complete signal is in a range of the ADC with no DNL error, a pulse height $A$ will be measured by the DGF-4C. To calibrate the ADCs of each input channel of the DGF-4Cs, the value for the internal offset stage is systematically increased and for each value a pulse-height spectrum is obtained. 
If the pulse reaches an ADC range with a DNL error, a pulse height $A^{\prime}\neq A$ will be measured. In addition to the energy value, the peak and baseline values described in the previous section are measured. Fig. \ref{fig:lattenzaun} shows the measured pulse height as a function of the baseline value. Since the individual pulses from the pulser are well separated and of constant height for each offset value, the peak and baseline values are constant. With increasing offset value, the baseline value increases as well. The DNL-error regions are well separated from each other. In total, $N=32$ DNL-error regions are observed, with a spacing of 512 ADC channels. This corresponds to the expected number and spacing of ADC1 transitions (see section \ref{sec:analysis}). Fig. \ref{fig:lattenzaun} reveals a trapezoidal shape for each of the 32 peaks. The rising slope of this trapezoid arises if the top of the pulse partly covers the DNL-error region in the ADC. If the pulse completely covers the region, a flat top emerges. If the offset is further increased, the bottom of the pulse only partly covers the DNL region. This results in the falling slope of the trapezoid. However, because of the small pulse height used in this measurement, an almost triangular shape is obtained for the peaks in Fig. \ref{fig:lattenzaun}. A similar diagram is obtained for the pulse height as a function of the peak value. The upper and lower boundaries $A_i^h$ and $A_i^l$ of the $i$-th DNL-error region ($i \in \{1,\dots,N\}$) can be obtained from the lower left corner of the $i$-th trapezoid in the baseline vs. pulse-height diagram and from the lower right corner in the peak vs. pulse-height diagram, respectively. Furthermore, the height of the trapezoid $C_i$ is the error in pulse height caused by the DNL if the pulse completely covers the DNL-error region.
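As an illustration, the extraction of the look-up-table entries $A_i^l$, $A_i^h$ and $C_i$ from such a calibration sweep could be sketched as follows. This is a minimal sketch: the function names, the deviation threshold, and the grouping of contiguous offsets into regions are assumptions for illustration, not the exact implementation used in this work.

```python
import numpy as np

def extract_dnl_regions(baseline, pulse_height, nominal_height, tol=0.5):
    """Locate DNL-error regions from a calibration sweep (illustrative).

    baseline       -- baseline value (ADC channels) for each offset step
    pulse_height   -- measured pulse height A' for each offset step
    nominal_height -- pulse height A measured in a DNL-free range
    tol            -- deviation threshold for flagging a DNL region

    Returns a list of (A_low, A_high, C) look-up entries, where A_low and
    A_high bound the region and C is the plateau height of the trapezoid,
    i.e. the full pulse-height error.
    """
    baseline = np.asarray(baseline, dtype=float)
    deviation = np.asarray(pulse_height, dtype=float) - nominal_height
    affected = np.abs(deviation) > tol       # offset steps touching a DNL region

    regions = []
    idx = np.flatnonzero(affected)
    if idx.size == 0:
        return regions
    # group contiguous runs of affected offset steps into individual trapezoids
    runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
    for run in runs:
        a_low = baseline[run[0]]             # start of the trapezoid
        a_high = baseline[run[-1]]           # end of the trapezoid
        c = deviation[run].max()             # plateau height -> error C_i
        regions.append((a_low, a_high, c))
    return regions
```

In practice, one such table would be generated per input channel and stored for use in all subsequent measurements.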
For each input channel of the data acquisition system, a look-up table is generated containing the values $A_i^h$, $A_i^l$ and $C_i$ for each DNL-error region. This look-up table is then used for the correction of pulse-height spectra. It has to be noted that for the calibration procedure, the height of the pulse generated by the pulser has to be large enough to completely span a DNL-error region, but small enough not to reach two regions at once. The calibration procedure has to be performed only once for every input channel. Every subsequent measurement can then be corrected with this look-up table and the correction algorithm described in the next section. \subsection{Correction of pulse-height spectra} Measured $\gamma$-ray spectra are corrected by means of an offline correction algorithm. For each pulse, the peak and baseline values are compared to the look-up table obtained from the calibration procedure. Every time the pulse crosses a region affected by the DNL, the pulse height is reduced by the amount by which it deviates from an unaffected region. In the following, the first and last DNL regions crossed by the pulse are denoted by the indices $f$ and $l$, respectively. Hence, $f$ is the lowest index for which $A_f>b$ and $l$ is the highest index for which $A_l<p$ holds, if $b$ and $p$ are the baseline and peak values obtained from the PSA and $A_i$ is the average position of the $i$-th region affected by the DNL: \begin{equation} \label{eq:cent} A_i=\frac{A_i^l+A_i^h}{2}. \end{equation} \noindent Thus, the corrected energy value is given as \begin{equation} \label{eq:corr} E^{\mathrm{corr}}=E^{\mathrm{raw}}-\sum\limits_{i=f}^{l}C_i \end{equation} \noindent where $E^{\mathrm{raw}}$ is the non-corrected energy value and $C_i$ is the difference in pulse height between the affected and unaffected ADC regions.
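A minimal sketch of this event-by-event correction, implementing equations (\ref{eq:cent}) and (\ref{eq:corr}) for a single event, could look as follows (all function and variable names are illustrative assumptions):

```python
def correct_energy(e_raw, baseline, peak, lut):
    """Offline event-by-event DNL correction following equation (corr).

    e_raw    -- uncorrected energy value of the event
    baseline -- baseline value b from the PSA
    peak     -- peak value p from the PSA
    lut      -- list of (A_low, A_high, C) entries from the calibration

    Each DNL region crossed by the pulse (b < A_i < p, with A_i the region
    centre from equation (cent)) contributes its pulse-height error C_i.
    """
    correction = 0.0
    for a_low, a_high, c in lut:
        a_i = 0.5 * (a_low + a_high)   # region centre, equation (cent)
        if baseline < a_i < peak:      # region crossed by the pulse
            correction += c
    return e_raw - correction          # equation (corr)
```

Applied to every event in the listmode data, this reproduces the corrected pulse-height spectrum from the raw one.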
Equation (\ref{eq:cent}) approximates the regions affected by the DNL as a single ADC value, although they are observed to span a range of three to four ADC values. Nevertheless, this approximation is justified by the effects of the finite filter lengths used for the trapezoidal-filter algorithm, which will be discussed in the following. The deduced pulse height of the input signal is obtained from the trapezoidal-filter algorithm (see equation (\ref{eq:trapez})) presented in section \ref{sec:processing}. The $i$-th region of the DNL can affect the pulse-height determination if $V_n\leq A_i\leq V_m$ for every $n$ and $m$ which are indices entering in the first or second sum of equation (\ref{eq:trapez}), respectively. In this case, the energy value can be corrected by means of equation (\ref{eq:corr}). This is no longer sufficient if the sums of equation (\ref{eq:trapez}) contain values that are both larger and smaller than $A_i$, i.e., if \begin{equation} \label{eq:ocrbl} V_{k-2L-G+1} \geq A_i \geq V_{k-L-G} \end{equation} \noindent along with $b<A_i$, or \begin{equation} \label{eq:ocrpeak} V_{k-L+1} \geq A_i \geq V_{k} \end{equation} \noindent along with $p>A_i$. Equation (\ref{eq:ocrbl}) corresponds to those cases where the pulse is located on the trailing edge of a previous pulse, so that the first filter sum of equation (\ref{eq:trapez}) contains ADC values above and below the discontinuity (see Fig. \ref{fig:summing}). This is most likely to occur for the DNL region at $A_f$. Similarly, equation (\ref{eq:ocrpeak}) corresponds to the case where the pulse only slightly exceeds the discontinuity, but due to the exponential decay of the pulse height, the second filter sum of equation (\ref{eq:trapez}) again contains ADC values located above and below the discontinuity. This is most likely to occur for the DNL region at $A_l$.
In both cases, the energy value would be over-corrected so that equation (\ref{eq:corr}) has to be modified to \begin{equation} \label{eq:ocrcorr} E^{\mathrm{corr}}=E^{\mathrm{raw}}-\sum\limits_{i=f}^{l}C_i + w_fC_f + w_lC_l \end{equation} \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_09.png} \end{center} \caption{Two schematic preamplifier signals $S_0(t)$ and $S_1(t)$ (solid line) as a function of time. $S_0(t)$ starts at $t=0$ while $S_1(t)$ is located on the trailing edge of $S_0(t)$. The sum signal is linearly approximated in the vicinity of $t=t_1$. Because of the finite filter-length $L$, sampling-points enter into the sums of equation \ref{eq:trapez}, which are above and below the first relevant DNL region at $A_f$. Note that the x axis is not to scale.} \label{fig:summing} \end{figure} \noindent including weighting factors $w_f$ and $w_l$, that depend on how many sampling points contained in the filter-sums are above and below the discontinuity. In order to determine the weighting factor $w_f$, assume two pulses $S_0(t)$ and $S_1(t)$, starting at $t=0$ and $t=t_1$, with amplitudes $N_0$ and $N_1$ and a common time constant $\tau$ (see Fig. \ref{fig:summing}): \begin{equation} S_0(t)=N_0\exp\left(-\frac{t}{\tau}\right) \end{equation} \begin{equation} S_1(t)=N_1\exp\left(-\frac{t-t_1}{\tau}\right) \end{equation} \noindent $N_0$ is unknown, while $N_1$ can be written as $N_1=p-b$. For $L\ll\tau$, $S_0(t)$ and $S_1(t)$ can be approximated using a first order Taylor approximation in point $t_1$ (see Fig. 
\ref{fig:summing}), so that the summed signal can be written as \begin{equation} S(t)=S_0(t)+S_1(t)\approx b\left(1-\frac{t}{\tau}-\ln\left(\frac{b}{N_0}\right)\right) \end{equation} \noindent Since $S(t)$ is linear in $t$ within this approximation, $w_f$ can be written as \begin{equation} w_f=\frac{S(t_1-L)-A_f}{S(t_1-L)-S(t_1)}, \end{equation} \noindent and finally, using $S(t_1)=b$: \begin{equation} \label{eq:wf} w_f = \begin{cases} 0 & \mathrm{if }~ A_f > b\cdot\left(1+\frac{L}{\tau}\right) \\ 1+ \frac{b-A_f}{b\frac{L}{\tau}} & \mathrm{if }~ A_f \leq b\cdot\left(1+\frac{L}{\tau}\right) \end{cases} \end{equation} \noindent Since $A_f>b$, $w_f$ is smaller than one. In a similar way, an expression for $w_l$ can be obtained: \begin{equation} \label{eq:wl} w_l = \begin{cases} 0 & \mathrm{if }~ A_l < p\cdot\left(1-\frac{L}{\tau}\right) \\ 1+ \frac{A_l-p}{p\frac{L}{\tau}} & \mathrm{if }~ A_l \geq p\cdot\left(1-\frac{L}{\tau}\right) \end{cases} \end{equation} Equations (\ref{eq:wf}) and (\ref{eq:wl}) yield the important result that $w_f$ and $w_l$ are independent of the unknown height of the previous pulse $N_0$. It has to be emphasized that this is no longer true if $S_0(t)$ and $S_1(t)$ are approximated taking into account quadratic or even higher-order terms of $t/\tau$. \section{Results} \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_10.png} \end{center} \caption{Low-energy part of the raw $\gamma$-ray spectrum (upper panel) and the corrected $\gamma$-ray spectrum (lower panel). The measurement was performed at an input count-rate of 23~kcps and a channel width of 0.78~keV. Using the correction algorithm, the Gaussian peak-shape can be restored.
Note that both y axes have the same scale.} \label{fig:226Ra_corr} \end{figure} The correction algorithm has been tested using a standard coaxial HPGe detector from the company ORTEC with a relative efficiency of 20\% along with a standard $^{226}$Ra calibration source which provides $\gamma$-rays in the energy range from 180~keV up to 2447~keV. Fig. \ref{fig:226Ra_corr} shows a section of the low-energy part of a raw $\gamma$-ray spectrum (upper panel) together with the spectrum which is obtained by applying the correction algorithm (lower panel). The spectrum was taken at a count rate of 23 kcps and a dynamic range of 12.8 MeV, corresponding to a channel width of $W=0.78$~keV. Using the correction algorithm, the double-peak structure can be removed. \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_11.png} \end{center} \caption{Measured energy resolution as a function of $\gamma$-ray energy. The data points connected with solid lines are obtained using the correction algorithm without taking a quenching factor into account, thus using $q=0$. The data points connected with the dotted lines are obtained with a value of $q=0.0006$. The energy resolution for the peaks at 964~keV, 1764~keV and 2447~keV can be significantly improved by introducing the quenching factor. The data were taken with a channel width of 0.78~keV and an input count-rate of 23~kcps. Note that no comparison to non-corrected values can be drawn for this case because of the observed double-peak structure.} \label{fig:fwhm_rate1} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_12.png} \end{center} \caption{Fit of the quenching factor $q$ to minimize the figure of merit $F$, defined in equation (\ref{eq:fom}). $F$ is minimized for a value of $q=0.0006$.} \label{fig:fom_fwhm} \end{figure} \begin{figure}[t!] 
\begin{center} \includegraphics[width=0.47\textwidth]{fig_13.png} \end{center} \caption{Deviation of the corrected peak positions from a linear fit to the data using a quenching factor $q=0$ (solid line) and $q=0.0006$ (dotted line). The data were taken at a count rate of 23~kcps and a channel width of 0.78~keV. The two horizontal lines indicate the width of one energy bin of the $\gamma$-ray spectrum. The introduction of the quenching factor has only minor influence on the linearity. Nevertheless, the deviation of the peak position is found to be smaller than one energy bin.} \label{fig:lin_rate1} \end{figure} The algorithm described in the previous section relies on a few simplifications, i.e., the exponential decay of the preamplifier signal is linearly approximated and the finite rise-time is neglected. Furthermore, the trapezoidal filter (see equation (\ref{eq:trapez})) implemented in the DGF-4C also takes contributions from the rising edge into account for ballistic-deficit corrections. Fig. \ref{fig:fwhm_rate1} shows the peak width (FWHM) of several peaks in the corrected $\gamma$-ray spectrum as a function of energy. It can be recognized that at certain energies the peaks are broadened compared to the neighbouring ones (e.g. at $E_{\gamma}=1764~$keV and $E_{\gamma}=2447~$keV). This may result from the simplifications sketched above. To improve the results, an additional quenching factor $q$ was introduced and the weighting factor $w_l$ in equation (\ref{eq:wl}) is modified in the case of $A_l \geq p\cdot\left(1-\frac{L}{\tau}\right)$ to \begin{equation} w_l = 1+ \frac{A_l-p}{p\frac{L}{\tau}}+\frac{1}{C_f}q\cdot E^{\mathrm{raw}} \end{equation} \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_14.png} \end{center} \caption{Same as Fig. \ref{fig:fwhm_rate1} (upper panel) and \ref{fig:lin_rate1} (lower panel), for an input count-rate of 1.06 kcps and compared to the non-corrected results. 
In terms of energy resolution, the results for the non-corrected and the corrected spectra are nearly the same, while the linearity is significantly improved using the correction algorithm.} \label{fig:res_caseII} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{fig_15.png} \end{center} \caption{Same as Fig. \ref{fig:fwhm_rate1} (upper panel) and \ref{fig:lin_rate1} (lower panel), for a channel width of 0.16~keV and compared to the non-corrected results. The energy resolution is worsened using the correction algorithm by 0.2--0.3~keV in the whole energy range. In terms of linearity, the corrected and non-corrected spectra yield the same results.} \label{fig:fwhm_lin_others} \end{figure} \noindent i.e., the correction is additionally reduced linearly with energy. $q$ can be fitted to obtain the best overall energy resolution. For this purpose, a figure of merit $F$ has been defined as \begin{equation} \label{eq:fom} F=\frac{1}{Z}\sum\limits_{i=1}^{Z}\sigma_i^2 \end{equation} \noindent where $\sigma_i$ are the peak widths (FWHM) of a total number of $Z$ peaks in the $\gamma$-ray spectrum. Hence, the optimum value for $q$ is obtained for a minimum value of $F(q)$. Fig. \ref{fig:fom_fwhm} shows $F$ as a function of $q$, yielding a minimum at $q=0.0006$. Using this value for the correction algorithm, the peak width of the outliers can be significantly reduced, while the widths of the other peaks remain almost unchanged (see Fig. \ref{fig:fwhm_rate1}). It has to be emphasized that because of the double-peak structures, no comparison of the peak width to the non-corrected values can be made. Besides the energy resolution, the corrected spectra were investigated with respect to the integral linearity. Fig. \ref{fig:lin_rate1} shows the deviation of the peak positions from a linear fit. The dashed horizontal lines indicate the width of one bin in the energy spectrum; the deviations are smaller than this width.
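The fit of the quenching factor via the figure of merit of equation (\ref{eq:fom}) amounts to a one-dimensional grid search, which could be sketched as follows. The helper `fwhm_for_q` is hypothetical: in practice it would re-run the correction with the given $q$ and re-fit the peak widths.

```python
import numpy as np

def figure_of_merit(fwhms):
    """F = (1/Z) * sum(sigma_i^2), equation (fom)."""
    fwhms = np.asarray(fwhms, dtype=float)
    return np.mean(fwhms ** 2)

def fit_quenching_factor(q_grid, fwhm_for_q):
    """Pick the q from q_grid that minimises F(q) (illustrative sketch).

    q_grid     -- candidate quenching factors
    fwhm_for_q -- callable returning the peak widths (FWHM) obtained after
                  correcting the spectrum with a given q
    """
    scores = [figure_of_merit(fwhm_for_q(q)) for q in q_grid]
    return q_grid[int(np.argmin(scores))]
```

For the measurement discussed above, such a scan yields the minimum of $F$ at $q=0.0006$ (Fig. \ref{fig:fom_fwhm}).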
The quenching factor $q$ has only a minor effect on the integral linearity. The measurement described above was performed with a channel width of 0.78~keV in combination with a count rate of 23~kcps. Reducing the count rate to a value of 1.06~kcps while keeping the dynamic range constant leads to a narrower baseline distribution and the double-peak structure vanishes. Hence, the corrected and non-corrected spectra can be compared in terms of peak width and linearity. The lower panel of Fig. \ref{fig:res_caseII} reveals significant deviations from a linear fit for the non-corrected peak positions. Though the two peaks at 244~keV and 298~keV are only about 50~keV apart, they deviate by about 2~keV from the linear fit in opposite directions. The linearity is significantly improved by the correction algorithm. When comparing the peak width in the corrected and non-corrected spectra (Fig. \ref{fig:res_caseII}, upper panel), no major differences can be observed. The quenching factor has only a minor effect on the peak widths in this case. Finally, a spectrum has been taken for a low channel width of 0.16~keV, corresponding to a dynamic range of 2.6~MeV, and a high count rate of 24 kcps. As noted in section \ref{sec:analysis}, the effect of the DNL should be less significant in this case. The results in terms of linearity and peak width are shown in Fig. \ref{fig:fwhm_lin_others}. It turns out that the linearity is almost unaffected by the correction algorithm, leading to a maximum deviation from a linear fit that slightly exceeds the width of one energy bin for the corrected and non-corrected spectra. In contrast, the correction algorithm worsens the peak width by an almost constant value of about $0.3~$keV. \section{Conclusions} A correction algorithm for differential nonlinearities in subranging, pipelined analog-to-digital converters used for digital $\gamma$-ray spectroscopy has been developed and tested.
The algorithm is especially successful in restoring Gaussian peak-shapes in the case of measurements with large dynamic ranges, i.e., large channel widths. Additionally, the integral linearity can be significantly improved in the case of low count rates. Nevertheless, for small channel widths, the obtained results in terms of peak width are still slightly worse compared to the non-corrected values. A special feature of this method, in contrast to other approaches \cite{Laue04}, is that the calibration procedure for the look-up table has to be performed only once and can be used for every subsequent experiment. Furthermore, the algorithm incorporates only one parameter to be fitted. These advantages make this method feasible for large- and medium-scale experiments with a high number of HPGe detectors. It has to be emphasized that the method presented in this work is limited to this particular type of ADC, which is, however, widely used for this application. Other ADCs, such as flash ADCs or Wilkinson-type ADCs, require different methods (see e.g. \cite{Cott63}). \section*{Acknowledgements} We thank V. Derya and J. Eberth for fruitful discussions. We furthermore thank S. G. Pickstone for carefully reading the manuscript. This work is supported by the Deutsche Forschungsgemeinschaft under Contract ZI 510/4-2 and the Bonn-Cologne Graduate School of Physics and Astronomy. \section*{References}
\section{More Details of Dataset and Experiments} \label{appendix::dataset} \paragraph{Linux Server} We run all the experiments on a Linux server with the following key specifications: \begin{itemize} \item CPU: Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz $\times$ 40 \item GPU: NVIDIA GeForce RTX2080TI-11GB $\times$ 8 \item RAM: 125GB \item CUDA: 11.1 \end{itemize} \paragraph{Python Package} We implement all deep learning methods in Python 3.7. The experiment code is included in the supplementary materials. The versions of some important packages are listed: \begin{itemize} \item torch~\citep{code_pytorch}: 1.9.1+cu111 \item torch-geometric~\citep{code_geometric}: 2.0.1 \item torch-cluster: 1.5.9 \item torch-sparse: 0.6.12 \item scikit-learn: 1.0 \item numpy: 1.20.3 \item scipy: 1.7.1 \end{itemize} \paragraph{Datasets} The statistical information of our experimental datasets is shown in Table~\ref{table::dataset}. All these data are publicly available and the URLs are listed as follows: \begin{itemize} \item Planetoid Citation Datasets~\citep{dataset_ccp} (\textit{CORA/CiteSeer/PubMed}): \url{https://github.com/rusty1s/pytorch_geometric/blob/master/torch_geometric/datasets/planetoid.py} \item Amazon Co-purchasing Datasets~\citep{dataset_real_amazon} (\textit{Photo/Computers}): \url{https://github.com/rusty1s/pytorch_geometric/blob/master/torch_geometric/datasets/amazon.py} \item Reddit Comment Dataset~\citep{model_sage}: \url{https://github.com/TUM-DAML/pprgo_pytorch/blob/master/data/get_reddit.md} \item MAG-Scholar Dataset~\citep{pr_pprgo} (coarse-grained version): \url{https://figshare.com/articles/dataset/mag_scholar/12696653/2} \end{itemize} \begin{table}[t] \centering \caption{Statistical information about datasets. 
M indicates million.} \sisetup{detect-all,mode=text,group-separator={,},group-minimum-digits=3,input-decimal-markers=.} \setlength{\tabcolsep}{4pt} { \begin{tabular}{@{}l|rrrrc@{}} \toprule \textbf{Dataset} & \textbf{\#Node} & \textbf{\#Edge} & \textbf{\#Feature} & \textbf{\#Class} & \textbf{Training} \\\midrule CORA & 2,708 & 5,429 & 1,433 & 7 & Transductive \\ CiteSeer & 3,327 & 4,732 & 3,703 & 6 & Transductive \\ PubMed & 19,717 & 44,338 & 500 & 3 & Transductive \\ Photo & 7,487 & 119,043 & 745 & 8 & Transductive \\ Computers & 13,381 & 245,778 & 767 & 10 & Transductive \\ \midrule Reddit & 232,965 & 11,606,919 & 602 & 41 & Inductive \\ MAG-Scholar & 10.5145M & 132.8176M & 2.7842M & 8 & Inductive \\ \bottomrule \end{tabular}} \label{table::dataset} \end{table} \begin{table*}[t] \centering \caption{Dataset Topology-Imbalance Level} \resizebox{.5\columnwidth}{!} { \begin{tabular}{l|c|c|c} \toprule $\sum_{v\in{\bm{\mathcal{L}}}}\bm{T}_v$ & \textbf{LOW} & \textbf{MIDDLE} & \textbf{HIGH} \\ \midrule CORA & 4.26\tiny$\pm$0.27 & 6.03\tiny$\pm$0.21 & 7.39\tiny$\pm$0.43 \\ CiteSeer & 1.19\tiny$\pm$0.11 & 2.26\tiny$\pm$0.01 & 4.37\tiny$\pm$0.23 \\ PubMed & 0.14\tiny$\pm$0.02 & 0.25\tiny$\pm$0.01 & 0.42\tiny$\pm$0.05 \\\bottomrule \end{tabular}} \label{table::conflict} \end{table*} \paragraph{Dataset Splitting} In training, we use 5 different random splittings for each dataset to relieve the randomness introduced by the training-set selection, following~\citet{graph_pitfall}. We repeat each experiment 3 times for every splitting to reduce the randomness of model training. The final performance (weighted F1, macro F1, and the standard deviation) is calculated based on the 15 repeated experiments. The dataset-splitting seed list is $[0,1,2,3,4]$; the model-training random seed list is $[0,1,2]$. 
\paragraph{Method Hyperparameters} For all encoders ($\mathcal{F}$ and $\mathcal{F'}$), we stack two GNN or linear layers with the ReLU~\citep{relu} activation function\footnote{Except the SGC model, which increases the power iteration times of the normalized adjacency matrix to replace stacking GNN layers.}. All the hyperparameters are tuned on the validation set. The tuning range of dataset-specific hyperparameters is as follows: \begin{itemize} \item PageRank teleport probability $\alpha$: $[0.05,0.1,0.15,0.2]$; \item Dimension of hidden layer: $[16,32,64,128,256]$; \item Lower bound of the cosine annealing $w_{min}$: $[0.25,0.5,0.75]$; \item Upper bound of the cosine annealing $w_{max}$: $[1.25,1.5,1.75]$. \end{itemize} \paragraph{Training Setting} We use Adam~\citep{kingma2014adam} as the model optimizer. The learning rate begins to decay after 20 epochs with a ratio of 0.95. We stop training early if there is no improvement within 20 epochs. The tuning range of dataset-specific hyperparameters is as follows: \begin{itemize} \item Learning Rate: $[0.005,0.0075,0.01,0.015]$; \item Dropout Probability: $[0.2,0.3,0.4,0.5,0.6]$. \end{itemize} \section{Supplement to the ReNode Method} Apart from the relative-ranking re-weighting method in ReNode, we also tried to adjust the training weight based on the following scheduling methods: \begin{itemize} \item Linear decay based on the original node Totoro values; \item Linear decay based on the rank of node Totoro values; \item Discrete values for different nodes with a piece-wise function. \end{itemize} Among all these methods, the presented cosine annealing method works best. We attribute this to the fact that PageRank is designed for node ranking; hence, adjusting weights based on the raw values is not robust and can be heavily affected by outliers.
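A sketch of the rank-based cosine-annealing re-weighting with bounds $w_{min}$ and $w_{max}$ is given below. The exact interpolation formula is an assumption consistent with the description above (weights decay from $w_{max}$ for the least-conflicted node to $w_{min}$ for the most-conflicted one), not necessarily the paper's exact definition.

```python
import numpy as np

def renode_weights(totoro, w_min=0.5, w_max=1.5):
    """Rank-based cosine-annealing training weights (illustrative sketch).

    totoro -- per-labeled-node conflict values; a higher value means the
              node is likelier to sit near a topological class boundary.

    Nodes are ranked by conflict, and the training weight follows a cosine
    schedule from w_max (rank 0, near a class centre) down to ~w_min
    (highest conflict, near a class boundary).
    """
    totoro = np.asarray(totoro, dtype=float)
    n = len(totoro)
    # double argsort: rank 0 = smallest conflict, rank n-1 = largest conflict
    ranks = np.argsort(np.argsort(totoro))
    return w_min + 0.5 * (w_max - w_min) * (1.0 + np.cos(ranks / n * np.pi))
```

Because only the rank enters the schedule, outliers in the raw Totoro values cannot distort the weights, which matches the motivation stated above.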
Compared to the linear decay schedule, the cosine schedule pays more attention to nodes with middle-level conflict, which are of great importance to distinguish for model training. The ReNode method assigns larger weights to nodes far away from the graph class boundaries, which is different from methods used in metric learning~\citep{hn_metric1,hn_metric2} or contrastive learning~\citep{hn_mixing,hn_sample} that pay more attention to the 'hard' samples close to class boundaries. In semi-supervised node classification, most message-passing-based GNN models (e.g., GCN) rely on smoothing the adjacent nodes to transfer the category information from the labeled nodes to the unlabeled nodes~\citep{analysis_smoothing,chen_smoothing}. Thus, the 'easy' labeled nodes far away from the class boundaries are expected to better represent class prototypes. Enlarging the training weights of those 'hard' nodes that are close to the class boundaries makes it easier to confuse the class prototype with others. Besides, the labeling size in semi-supervised learning is much smaller than in supervised learning (usually 20 nodes per class) and the training nodes are usually sampled randomly. Hence, a very likely scenario is that the 'hard' samples for some categories are very close to the true class boundaries, while the 'hard' samples for other categories are far away from the true class boundaries. Relying on these 'hard' nodes to decide decision boundaries will cause a large shift of the decision boundaries from the true ones. \section{Settings of Dataset Topology-Imbalance Levels} In Section~3, we evaluate the model performance under different levels of topology imbalance. We introduce the settings for the topology-imbalance levels. For each experimental dataset, we randomly sample 100 training sets and calculate the dataset overall conflict as introduced in Section~2.3.
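For illustration, one way such a conflict measure and the dataset-level overall conflict could be computed with personalized PageRank is sketched below. This instantiation (summing, at each labeled node, the PageRank influence received from labeled nodes of other classes) is an assumption for illustration; the exact definition is given in Section~2.3 of the main paper.

```python
import numpy as np

def overall_conflict(adj, labels, labeled_idx, alpha=0.15):
    """Dataset-level topology-conflict score (hedged illustrative sketch).

    Computes a personalized-PageRank influence matrix
        P = alpha * (I - (1 - alpha) * A_hat)^{-1}
    with A_hat the symmetrically normalised adjacency, then sums, for each
    labeled node v, the influence reaching v from labeled nodes of *other*
    classes. Dense inversion is only suitable for small graphs.
    """
    a = np.asarray(adj, dtype=float)
    deg = a.sum(axis=1)
    deg[deg == 0] = 1.0                                 # guard isolated nodes
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_hat = d_inv_sqrt @ a @ d_inv_sqrt                 # symmetric normalisation
    p = alpha * np.linalg.inv(np.eye(len(a)) - (1 - alpha) * a_hat)

    total = 0.0
    for v in labeled_idx:
        for u in labeled_idx:
            if u != v and labels[u] != labels[v]:
                total += p[u, v]                        # conflicting influence at v
    return total
```

Under this sketch, a training set whose labeled nodes of different classes sit topologically close together receives a higher overall-conflict score, matching the high/middle/low-level grouping described next.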
Then we choose the 3 training sets with the highest/middle/lowest overall conflict as the high/middle/low-level topology-imbalance setting and report the average results on the 3 training sets for each dataset. The specific conflict values of different levels are displayed in Table~\ref{table::conflict}. \section{Submission Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{The main claims stated in the abstract and introduction accurately reflect the paper's contributions and scope.} \item Did you describe the limitations of your work? \answerYes{We discuss the limitation in Section~4.2.} \item Did you discuss any potential negative societal impacts of your work? \answerNA{We think that our proposal has no obvious potential negative societal effect.} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{We have read ethics review guidelines and ensure that our paper conforms to them.} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerNA{} \item Did you include complete proofs of all theoretical results? \answerNA{} \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerYes{We include them in the supplemental material.} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{We describe them in detail in the paper main body and the appendix.} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? 
\answerYes{We report error bars and the random seed for all experiments in the paper main body and the appendix.} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{We include them in the appendix.} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{We cite all the existing assets used in this work.} \item Did you mention the license of the assets? \answerYes{We mention the license in the appendix.} \item Did you include any new assets either in the supplemental material or as a URL? \answerNA{We have no new assets.} \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerYes{All the datasets we use are open-source and can be obtained from their public release.} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerYes{The datasets we use contain no personally identifiable information or offensive content.} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate} \medskip \section{Introduction} Graphs are a widely used data structure~\citep{survey_gnn}, in which nodes are connected to each other through natural or handcrafted edges. 
Similar to other data structures, representation learning for node classification faces the challenge of the quantity-imbalance issue, where the labeling size varies among classes and the decision boundaries of trained classifiers are mainly decided by the majority classes~\citep{imb_value_of_labels}. There have been a series of studies~\citep{imb_node_dr_gcn,imb_node_ra_gcn,imb_node_graph_smote} handling Quantity-Imbalance Node Representation Learning (short as QINL). However, different from other data structures, graph-structured data suffers from another aspect of the imbalance problem: the imbalance caused by the asymmetric and uneven topology of the labeled nodes, where the decision boundaries are driven by the labeled nodes close to the topological class boundaries (left of Figure~\ref{figure::intro}), thus interfering with the model learning. \textbf{Present Work.} For the first time, we recognize \textbf{Topology-Imbalance Node Representation Learning} (short as TINL) as a graph-specific imbalance-learning topic, which mainly focuses on the decision-boundary shift driven by topology imbalance in graphs and is an essential component of node imbalance learning. Compared with the well-explored QINL, which studies the imbalance caused by the numbers of labeled nodes, TINL explores the imbalance caused by the positions of labeled nodes and has the following characteristics: \begin{itemize} \item \textbf{Ubiquity}: Due to the complex connections of graph nodes, the topology of nodes in different categories is naturally asymmetric, which makes TINL an essential consideration in node representation learning. It is difficult to construct a completely symmetric labeling set even with an abundant annotation budget. \item \textbf{Perniciousness}: The influence from labeled nodes decays with the topological distance~\citep{LP3}. 
The asymmetric topology of labeled nodes in different classes and the uneven distribution of labeled nodes in the same class cause the influence-conflict and influence-insufficiency problems (left of Figure~\ref{figure::intro}), respectively, resulting in a shift of decision boundaries. \item \textbf{Orthogonality}: Quantity-imbalance studies~\citep{imb_node_graph_smote,imb_gen_cb,imb_gen_ldam} usually treat the labeled nodes of the same class as a whole and devise solutions based on the total numbers of each class, while TINL explores the influence of the unique position of each labeled node on decision boundaries. Thus, TINL is independent of QINL in terms of the object of study. \end{itemize} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{plot/new2.pdf} \caption{Schematic diagram of the topology-imbalance issue in node representation learning. The color and the hue denote the type and the intensity of each node's received influence from the labeled nodes, respectively. The left shows that nodes close to the boundary have the risk of information conflict and nodes far away from labeled nodes have the risk of insufficient information. The right shows that our method can decrease the training weights of labeled nodes (R1) close to the class boundary and increase the weights of labeled nodes (B and R2) close to the class centers, thus relieving the topology-imbalance issue.} \label{figure::intro} \end{figure*} Exploring TINL is of great importance for node representation learning due to its ubiquity and perniciousness. However, the methods~\citep{imb_rw1,imb_gen_focal} for quantity imbalance can hardly be applied to TINL because of the orthogonality. To remedy the topology-imbalance issue and thus improve node classification, we propose the model-agnostic training framework \textbf{ReNode}, which re-weights the labeled nodes according to their positions. 
We devise the conflic\textbf{t} detecti\textbf{o}n-based \textbf{To}pology \textbf{R}elative L\textbf{o}cation (\textbf{Totoro}) metric, which leverages the interaction among labeled nodes across the whole graph to locate their structural positions. Based on the Totoro metric, we further increase the training weights of nodes with small conflict, which are highly likely to be close to topological class centers, to make them play a more pivotal role during training, and vice versa (right of Figure~\ref{figure::intro}). Empirical results in various imbalance scenarios (TINL, QINL, large-scale graphs) and with multiple graph neural networks (GNNs) demonstrate the effectiveness and generalizability of our method. Besides, we provide sensitivity to topology imbalance as a new evaluation perspective for different GNN architectures. \section{Topology-Imbalance Node Representation Learning} \subsection{Notations and Preliminary} \label{section::notations} In this work, we follow the well-established semi-supervised node classification setting~\citep{semi-supervised,model_gcn} to conduct analyses and experiments. Given an undirected and unweighted graph $\mathcal{G}=(\bm{\mathcal{V}},\bm{\mathcal{E}},\bm{\mathcal{L}})$, where $\bm{\mathcal{V}}$ is the node set represented by the feature matrix $\bm{X} \in \mathbb{R}^{n\times d}$ ($n = |\bm{\mathcal{V}}|$ is the number of nodes and $d$ is the node feature dimension), $\bm{\mathcal{E}}$ is the edge set represented by an adjacency matrix $\bm{A} \in \mathbb{R}^{n\times n}$, and $\bm{\mathcal{L}} \subset \bm{\mathcal{V}}$ is the labeled node set (usually $|\bm{\mathcal{L}}| \ll |\bm{\mathcal{V}}|$), the node classification task is to train a classifier $\mathcal{F}$ (usually a GNN) to predict the class label $\mathbf{y}$ for the unlabeled node set $\bm{\mathcal{U}} = \bm{\mathcal{V}} - \bm{\mathcal{L}}$. 
The training sets for different classes are represented by $(\bm{\mathcal{C}}_1,\bm{\mathcal{C}}_2,\cdots,\bm{\mathcal{C}}_k)$, where $k$ is the number of classes. The labeling ratio $\delta = |\bm{\mathcal{L}}|/|\bm{\mathcal{V}}|$ is the proportion of labeled nodes among all nodes. In this work, we focus on TINL in homogeneously-connected graphs and hope to inspire future studies on the critical topology-imbalance issue. \begin{figure*}[t] \centering \begin{subfigure}{0.305\textwidth} \centering \includegraphics[width=\linewidth]{plot/tic9.pdf} \subcaption{Predictions of GCN and LP} \end{subfigure}% \hspace{0.1in} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=\linewidth]{plot/right_shift3.pdf} \subcaption{Uniform Sampling} \end{subfigure}% \hspace{0.1in} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=\linewidth]{plot/left_shift3.pdf} \subcaption{Quantity-balanced Sampling} \end{subfigure}% \caption{Node influence and boundary shift caused by quantity- and topology-imbalance. (a): The prediction results of GCN and LP are highly consistent (t-SNE~\citep{tsne} visualization of the \textit{CORA} dataset). (b): The node influence boundary (the yellow dotted line) is shifted towards the small class from the true class boundary (the black dotted line) under the quantity- and topology-imbalanced scenario. (c): The node influence boundary is shifted towards the large class under the quantity-balanced, topology-imbalanced scenario. We regard the large class as the positive class when reporting the results. 
} \label{figure::tic} \end{figure*} \subsection{Understanding Topology Imbalance via Label Propagation} \label{section::analysis} From Figure~\ref{figure::intro}, we can intuitively perceive the imbalance brought by the positions of labeled nodes; in this part, we further explore the nature of topology imbalance with the well-known Label Propagation~\citep{LP1} algorithm (short as LP) and provide a uniform analysis framework for the comprehensive node imbalance issue. In LP, labels are propagated from the labeled nodes and aggregated along edges, which can also be viewed as a random walk process starting from the labeled nodes. The convergence result $\bm{Y}$ after repeated propagation is regarded as the nodes' soft labels: \begin{equation} \bm{Y} = \alpha (\bm{I} - (1-\alpha)\bm{A}')^{-1}\bm{Y}^{0},\label{equation::lp} \end{equation} where $\bm{I}$ is the identity matrix, $\alpha \in (0,1]$ is the random walk restart probability, $\bm{A}' = \bm{D}^{-\frac{1}{2}}\bm{A}\bm{D}^{-\frac{1}{2}}$ is the adjacency matrix normalized by the diagonal degree matrix $\bm{D}$, and $\bm{Y}^{0}$ is the initial label distribution in which labeled nodes are represented by one-hot vectors. The predicted label for the $i$-th node is $\bm{q}_i =\arg\max_j \bm{Y}_{ij}$. LP is a simple yet successful model~\citep{LP2} and can be unified with GNN models that possess the message-passing mechanism~\citep{LP4}. From Figure~\ref{figure::tic}(a), we can empirically find a significant correlation between the results of LP and GCN (T/F indicates that the prediction is True/False). The LP prediction $\bm{q}$ can be viewed as the distribution of the \textit{(labeled) node influence}~\citep{LP4} (i.e., which class's information most influences each node); hence the boundaries of the node influence can act as an effective reflection of the GNN model's decision boundaries, considering the high consistency between LP and GNN. 
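As a concrete illustration, the closed-form LP solution of Eq~(\ref{equation::lp}) can be sketched in a few lines of NumPy. This is a hypothetical toy example of our own; the path graph, the value of $\alpha$, and all variable names are illustrative assumptions, not taken from the paper's released code.

```python
import numpy as np

def label_propagation(A, Y0, alpha=0.15):
    """Closed-form LP of Eq. (1): Y = alpha * (I - (1-alpha) A')^{-1} Y0,
    where A' is the symmetrically normalized adjacency matrix."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Solve the linear system instead of inverting explicitly.
    Y = alpha * np.linalg.solve(np.eye(n) - (1 - alpha) * A_norm, Y0)
    return Y

# Toy path graph 0-1-2-3-4; node 0 labeled class 0, node 4 labeled class 1.
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[u, v] = A[v, u] = 1.0
Y0 = np.zeros((5, 2))
Y0[0, 0] = 1.0
Y0[4, 1] = 1.0

Y = label_propagation(A, Y0)
pred = Y.argmax(axis=1)  # node 1 follows class 0, node 3 follows class 1
```

Because influence decays with distance from the labeled nodes, the unlabeled nodes are assigned to the class of their nearest labeled node, while the middle node receives equal influence from both classes: exactly the node-influence view used in the analysis above.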
Moreover, node influence offers a unified view of TINL and QINL: ideally, the node influence boundaries should be consistent with the true class boundaries, but both the numbers (QINL) and the positions (TINL) of labeled nodes can shift the node influence boundaries away from the true ones, resulting in a deviation of the model decision boundaries. \textbf{The node imbalance issue is composed of topology- and quantity-imbalance.} Figure~\ref{figure::tic} illustrates two examples of node influence boundary shift. In Figure~\ref{figure::tic}(b), when uniform selection is adopted to generate the training set, both the quantity and the topology are imbalanced for model training; the large class with more total nodes (denoted in blue) then exerts stronger influence than the small class with fewer total nodes (denoted in red) due to its quantity advantage, and the node influence boundary is shifted towards the small class. In Figure~\ref{figure::tic}(c), when a quantity-balanced strategy is adopted for sampling training nodes, the small class is more likely to have labeled nodes close to the class boundary, and the node influence boundary is shifted into the large class. We can find that even when the training set is quantity-balanced, the topology-imbalance issue still exists and hinders node classification learning. Hence, we conclude that node imbalance is caused by the joint effect of TINL and QINL. Considering TINL or QINL separately will lead to a one-sided solution to node imbalance learning. 
\begin{figure*} \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\linewidth]{plot/c6.pdf} \end{subfigure} \hspace{0.1in} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\linewidth]{plot/cora_reg_8.pdf} \end{subfigure} \caption{Effectiveness of Totoro at (a) Node Level: labeled nodes (t-SNE visualization of the \textit{CORA} dataset) with less influence conflict (lighter color) are farther away from class boundaries than those with high conflict (darker color), and (b) Dataset Level: there is a significant negative correlation between the GNN (GCN) performance and the overall conflict of the training set (the Pearson correlation coefficient is $-0.618$ over 50 randomly selected training sets, with a $p$ value smaller than 0.01).} \label{figure::totoro} \end{figure*} \subsection{Measuring Topology Imbalance by Influence Conflict} \label{section::totoro} Although we have established that the imbalance of node topology interferes with model learning, measuring a labeled node's topological position relative to its class (far away from or close to the class center) remains the key challenge in handling the topology-imbalance issue, due to the complex graph connections and the unknown class labels of most nodes in the graph. As the nodes are homogeneously connected when constructing the graph, even nodes close to the class boundaries share similar characteristics with their neighbors. Thus it is unreliable to locate a labeled node's topological position from the difference between its characteristics and those of its surrounding subgraph. Instead, we propose to utilize the node topology information by considering the node influence conflict across the whole graph, and devise the Conflic\textbf{t} Detecti\textbf{o}n-based \textbf{To}pology \textbf{R}elative L\textbf{o}cation metric (\textbf{Totoro}). 
Similar to Eq~(\ref{equation::lp}), we calculate the Personalized PageRank~\citep{pr_pagerank} matrix $\bm{P}$ to measure the node influence distribution from each labeled node: \begin{equation} \bm{P} = \alpha (\bm{I} - (1-\alpha)\bm{A'})^{-1}. \label{equation::ppr} \end{equation} \textbf{Node influence conflict denotes topological position.} According to related studies~\citep{LP4,gnn_ppnp,pr_pprgo}, $\bm{P}$ can be viewed as the distribution of influence exerted outward from each node. We assume that if a labeled node $v\in\bm{\mathcal{V}}$ encounters strong heterogeneous influence from the other classes' labeled nodes in the subgraph around node $v$, where node $v$ itself exerts great influence, then node $v$ meets a large \textit{influence conflict} in message passing and is close to the topological class boundaries, and vice versa. Based on this hypothesis, we take the expected influence conflict between node $v$ and the labeled nodes of the other classes, when node $v$ randomly walks across the entire graph, as a measurement of how topologically close node $v$ is to the center of the class it belongs to. The Totoro value of node $v$ is computed as: \begin{flalign} \bm{T}_{v} &=\mathbb{E}_{x\sim\bm{P}_{v,:}}[\sum_{j\in[1,k],j \neq \bm{y}_v}\frac{1}{|\bm{\mathcal{C}}_j|}\sum_{i\in \bm{\mathcal{C}}_j}\bm{P}_{i,x}], \label{equation::expectation} \end{flalign} where $\bm{y}_v$ is the ground-truth label of node $v$ and $\bm{P}_{v,:}$ is the personalized PageRank probability vector of node $v$. A larger Totoro value $\bm{T}_{v}$ indicates that node $v$ is topologically closer to the class boundaries, and vice versa. The normalization term $\nicefrac{1}{|\bm{\mathcal{C}}_j|}$ makes the influence from different classes comparable when computing the conflict. We visualize the node labels and the Totoro values (scaled to $[0,1]$) of labeled nodes in Figure~\ref{figure::totoro}(a). 
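For clarity, Eq~(\ref{equation::ppr}) and Eq~(\ref{equation::expectation}) can be sketched as follows. This is a hypothetical NumPy toy example: the 6-node path graph, the labeled sets, and $\alpha$ are illustrative assumptions of ours, not the paper's implementation.

```python
import numpy as np

def ppr_matrix(A, alpha=0.15):
    """Personalized PageRank P = alpha * (I - (1-alpha) A')^{-1}, Eq. (2)."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A_norm)

def totoro(P, labeled, labels, k):
    """Totoro value of each labeled node v, Eq. (3): expected influence
    conflict with other classes' labeled nodes under the walk P[v, :]."""
    classes = [[v for v in labeled if labels[v] == c] for c in range(k)]
    T = {}
    for v in labeled:
        # Heterogeneous influence arriving at each node x, per-class normalized.
        conflict = np.zeros(P.shape[0])
        for c, C in enumerate(classes):
            if c != labels[v] and C:
                conflict += P[C, :].sum(axis=0) / len(C)
        T[v] = float(P[v, :] @ conflict)  # expectation over x ~ P[v, :]
    return T

# Toy path graph 0-1-2-3-4-5: class 0 labeled {0, 2}, class 1 labeled {5}.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    A[u, v] = A[v, u] = 1.0
P = ppr_matrix(A)
T = totoro(P, labeled=[0, 2, 5], labels={0: 0, 2: 0, 5: 1}, k=2)
```

In this sketch, node 2 sits closer to the class-1 labeled node than node 0 does, so its random walk overlaps more with the heterogeneous influence and it receives a larger Totoro value, matching the intuition that larger conflict means closer to the class boundary.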
We can find that the labeled nodes with smaller Totoro values are farther away from the class boundaries, demonstrating the effectiveness of Totoro in locating the positions of labeled nodes. Besides, we sum the conflict of all the labeled nodes, $\sum_{v\in\bm{\mathcal{L}}}\bm{T}_v$, to measure the overall conflict of the dataset, which can be viewed as a metric of the overall topology imbalance given the graph $\bm{\mathcal{G}}$ and the training set $\bm{\mathcal{L}}$. Figure~\ref{figure::totoro}(b) shows that there is a significant negative correlation between the overall conflict and the model performance, which further demonstrates the effectiveness of Totoro in measuring the intensity of topology imbalance at the dataset level. \subsection{Alleviating Topology Imbalance by Instance-wise Node Re-weighting} \label{method::reNode} In this section, we introduce \textbf{ReNode}, a model-agnostic training weight schedule mechanism that addresses TINL for general GNN encoders in a plug-and-play manner. Inspired by the analysis in Section~\ref{section::analysis}, the ReNode method is devised to promote the training weights of the labeled nodes that are close to the topological class centers, so as to make these nodes play a more active role in model learning, and vice versa. Specifically, we devise a cosine annealing mechanism~\footnote{We also tried other schedules and the cosine method works best. 
More details can be found in Appendix B.} for the training node weights based on their Totoro values: \begin{flalign} \bm{w}_v = w_{\text{min}} + \frac{1}{2}(w_{\text{max}}-w_{\text{min}})(1+\mathrm{cos}(\frac{\mathrm{Rank}(\bm{T}_v)}{|\bm{\mathcal{L}}|}\pi)),\quad v\in \bm{\mathcal{L}} \label{equation::wi} \end{flalign} where $\bm{w}_v$ is the modified training weight for the labeled node $v$, $w_{\text{min}},w_{\text{max}}$ are hyper-parameters indicating the lower and upper bounds of the weight correction factor, and ${\mathrm{Rank}}(\bm{T}_v)$ is the ranking order of $\bm{T}_v$ from smallest to largest. The training loss $L_T$ for the quantity-balanced, topology-imbalanced node classification task is computed as follows: \begin{equation} L_T = -\frac{1}{|\bm{\mathcal{L}}|} \sum_{v \in \bm{\mathcal{L}}} \bm{w}_v \sum_{c=1}^{k} \bm{y}^{\ast c}_v\,\log\,\,{\bm{g}}^c_v, \quad \bm{g} = \mathrm{softmax}(\mathcal{F}(\bm{X},\bm{A}, \bm{\theta})), \label{equation::training objective} \end{equation} where $\mathcal{F}$ denotes any GNN encoder, $\bm{\theta}$ is the parameter of $\mathcal{F}$, $\bm{g}_v$ is the GNN output for node $v$, and $\bm{y}^{\ast}_v$ is the one-hot gold label of node $v$. By encouraging the positive effects of the labeled nodes near the topological class centers, and reducing the negative effects of those near the topological class boundaries, our ReNode method is expected to minimize the deviation between the node influence boundaries and the true class boundaries, so as to correct the class imbalance caused by the positions of labeled nodes. \paragraph{ReNode to Jointly Handle TINL and QINL} In this part, we introduce the application of the ReNode method in a more general graph imbalance scenario where both the topology- and quantity-imbalance issues exist. As analyzed in the previous sections, TINL and QINL are orthogonal problems. 
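The cosine-annealed weights of Eq~(\ref{equation::wi}) and the re-weighted loss $L_T$ admit a compact sketch. This is a hypothetical NumPy example: the Totoro values, $w_{\text{min}}$, $w_{\text{max}}$, and the logits below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def renode_weights(totoro_values, w_min=0.5, w_max=1.5):
    """Eq. (4): map each labeled node's Totoro rank to a training weight.
    Rank 0 (smallest conflict, near a class center) gets w_max; the
    largest rank (near a class boundary) gets close to w_min."""
    t = np.asarray(totoro_values, dtype=float)
    ranks = np.argsort(np.argsort(t))  # Rank(T_v), 0 = smallest conflict
    return w_min + 0.5 * (w_max - w_min) * (1 + np.cos(ranks / len(t) * np.pi))

def weighted_ce(logits, gold, weights):
    """Instance-wise re-weighted cross-entropy, as in the loss L_T."""
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -(weights * logp[np.arange(len(gold)), gold]).mean()

# Four labeled nodes with illustrative Totoro values.
totoro_values = [0.1, 0.6, 0.3, 0.9]
w = renode_weights(totoro_values)
# Smallest-conflict node gets the largest weight, and vice versa.

logits = np.array([[2.0, 0.0], [0.0, 2.0], [2.0, 0.0], [0.0, 2.0]])
gold = np.array([0, 1, 0, 1])
loss = weighted_ce(logits, gold, w)
```

Being a pure re-weighting of the per-node loss terms, this schedule leaves the GNN encoder untouched, which is what makes the method plug-and-play.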
Therefore, we propose that our ReNode method, based on (labeled) node topology, can be seamlessly combined with existing methods designed for quantity-imbalance learning. Without loss of generality, we present how our ReNode method can be combined with the vanilla class frequency-based re-weighting method~\citep{imb_rw1}. The training loss $L_Q$ for the quantity-imbalanced, topology-imbalanced node classification task is formalized in the following equation: \begin{flalign} L_Q &= -\frac{1}{|\bm{\mathcal{L}}|} \sum_{v \in \bm{\mathcal{L}}} \bm{w}_v\frac{\bar{|\bm{\mathcal{C}}|}}{|\bm{\mathcal{C}}_{\bm{y}_v}|} \sum_{c=1}^{k} \bm{y}^{\ast c}_v\,\log\,\,{\bm{g}}^c_v, \label{equation::training objective2} \end{flalign} where $\bar{|\bm{\mathcal{C}}|}$ is the average training size over all classes and $\bm{\mathcal{C}}_{\bm{y}_v}$ is the training set of node $v$'s class. With this method, the final weight of a labeled node is affected from two perspectives: training examples of the minority classes have higher weights than those of the majority classes, and training examples close to the topological class centers have higher weights than those close to the topological class boundaries. \paragraph{ReNode for Large-scale Graph} There are mainly two challenges when applying ReNode to large-scale graphs: (1) how to calculate the PageRank matrix, and (2) how to train the GNN model in an inductive setting~\citep{model_sage}. In this work, we follow the PPRGo method~\citep{pr_pprgo} to implement our method on large-scale graphs; it decouples the feature learning process from the information transmission process to remove the dependence on the global graph topology and can be carried out much more efficiently. 
Following PPRGo, the Personalized PageRank matrix $\hat{\bm{P}}$ and the corresponding ReNode training factor $\hat{\bm{w}}$ are generated by the estimation method from~\citet{ppr_quick}, and then $\hat{\bm{P}}$ is directly employed as the aggregation weights from all the other nodes regardless of their topological distance from the current node: \begin{flalign} \bm{g}' &= \mathrm{softmax}(\hat{\bm{P}} \mathcal{F}'(\bm{X},\theta')), \label{equation::pprgo} \end{flalign} where $\mathcal{F}'$ can be a linear layer or a multi-layer perceptron with parameter $\theta'$. The final training loss for large-scale graphs, $L_L$, follows Eq~(\ref{equation::training objective}) and (\ref{equation::training objective2}), replacing ${\bm{w}}$ and ${\bm{g}}$ with $\hat{\bm{w}}$ and ${\bm{g}'}$. \begin{table*}[t] \centering \caption{ReNode (short as RN) for the pure topology-imbalance issue. We report Weighted-F1 (W-F, \%), Macro-F1 (M-F, \%) and the corresponding standard deviation for each group of experiments. 
$*$ and $**$ indicate that the result is significant in a Student's t-test with $p<0.05$ and $p<0.01$, respectively.} \resizebox{.99\textwidth}{!} { \begin{tabular}{l|r|ll|ll|ll|ll|ll@{}} \toprule \multicolumn{1}{l|}{\multirow{2}{*}{Model}} & \multirow{2}{*}{Training} & \multicolumn{2}{c|}{\textbf{CORA}} & \multicolumn{2}{c|}{\textbf{CiteSeer}} & \multicolumn{2}{c|}{\textbf{PubMed}} & \multicolumn{2}{c|}{\textbf{Photo}} & \multicolumn{2}{c}{\textbf{Computers}} \\ \cline{3-12} \multicolumn{1}{c|}{} & & W-F & M-F & W-F & M-F & W-F & M-F & W-F & M-F & W-F & M-F \\ \hline \multirow{2}{*}{GCN} & w/o RN & 79.1\tiny$\pm$1.1 & 77.8\tiny$\pm$1.5 & 66.2\tiny$\pm$1.0 & 62.0\tiny$\pm$1.3 & 74.6\tiny$\pm$2.1 & 74.7\tiny$\pm$1.9 & 86.8\tiny$\pm$2.0 & 84.7\tiny$\pm$1.7 & 74.2\tiny$\pm$2.6 & 73.6\tiny$\pm$2.9 \\ & w/ \ \ RN & \textbf{79.8}$^{**}$\tiny$\pm$0.9 & \textbf{78.6}$^{**}$\tiny$\pm$1.2 & \textbf{66.9}$^{*}$\tiny$\pm$1.1 & \textbf{62.8}$^{*}$\tiny$\pm$1.4 & \textbf{76.1}$^{**}$\tiny$\pm$1.5 & \textbf{76.1}$^{**}$\tiny$\pm$1.8 & \textbf{87.7}$^{**}$\tiny$\pm$2.2 & \textbf{85.4}$^{**}$\tiny$\pm$1.9 & \textbf{74.7}$^{*}$\tiny$\pm$2.2 & \textbf{74.5}$^{**}$\tiny$\pm$2.3 \\ \hline \multirow{2}{*}{GAT} & w/o RN & 76.0\tiny$\pm$1.7 & 74.9\tiny$\pm$1.9 & 66.3\tiny$\pm$2.8 & 62.4\tiny$\pm$2.6 & 73.9\tiny$\pm$2.2 & 73.9\tiny$\pm$2.1 & 88.3\tiny$\pm$2.0 & 86.2\tiny$\pm$2.2 & \textbf{79.0}\tiny$\pm$2.1 & \textbf{78.8}\tiny$\pm$2.3 \\ & w/ \ \ RN & \textbf{77.7}$^{**}$\tiny$\pm$2.0 & \textbf{76.2}$^{**}$\tiny$\pm$1.8 & \textbf{67.1}$^{*}$\tiny$\pm$1.9 & \textbf{63.2}$^{*}$\tiny$\pm$1.6 & \textbf{75.2}$^{**}$\tiny$\pm$2.0 & \textbf{75.1}$^{**}$\tiny$\pm$2.5 & \textbf{89.1}$^{**}$\tiny$\pm$2.0 & \textbf{87.1}$^{**}$\tiny$\pm$2.0 & 78.8\tiny$\pm$1.9 & 78.7\tiny$\pm$2.0 \\ \hline \multirow{2}{*}{PPNP} & w/o RN & 80.5\tiny$\pm$1.6 & 79.1\tiny$\pm$1.4 & 67.5\tiny$\pm$1.8 & 63.2\tiny$\pm$1.6 & 74.6\tiny$\pm$1.9 & 74.7\tiny$\pm$1.7 & 89.3\tiny$\pm$1.3 & 86.8\tiny$\pm$1.4 & 78.7\tiny$\pm$1.5 & 
77.7\tiny$\pm$1.7 \\ & w/ \ \ RN & \textbf{81.9}$^{**}$\tiny$\pm$0.6 & \textbf{80.5}$^{**}$\tiny$\pm$0.8 & \textbf{68.1}$^{*}$\tiny$\pm$1.4 & \textbf{63.7}$^{*}$\tiny$\pm$2.0 & \textbf{76.0}$^{**}$\tiny$\pm$2.0 & \textbf{76.1}$^{**}$\tiny$\pm$2.2 & \textbf{89.7}$^{*}$\tiny$\pm$1.0 & \textbf{87.2}$^{*}$\tiny$\pm$1.3 & \textbf{79.0}$^{*}$\tiny$\pm$1.1 & \textbf{78.3}$^{*}$\tiny$\pm$1.1\\ \hline \multirow{2}{*}{SAGE} & w/o RN & 75.1\tiny$\pm$1.7 & 74.6\tiny$\pm$1.4 & 67.0\tiny$\pm$1.4 & 63.0\tiny$\pm$1.4 & 74.2\tiny$\pm$2.2 & 74.2\tiny$\pm$2.1 & 86.2\tiny$\pm$2.6 & 83.9\tiny$\pm$2.4 & 73.5\tiny$\pm$3.4 & 71.6\tiny$\pm$2.5 \\ & w/ \ \ RN & \textbf{75.7}$^{**}$\tiny$\pm$1.7 & \textbf{75.1}$^{**}$\tiny$\pm$1.4 & \textbf{67.3}\tiny$\pm$1.4 & \textbf{63.5}$^{*}$\tiny$\pm$1.2 & \textbf{74.9}$^{**}$\tiny$\pm$1.9 & \textbf{78.2}$^{**}$\tiny$\pm$2.3 & \textbf{86.5}\tiny$\pm$1.7 & \textbf{84.1}\tiny$\pm$1.7 & \textbf{74.9}$^{**}$\tiny$\pm$3.0 & \textbf{72.3}$^{**}$\tiny$\pm$2.5 \\ \hline \multirow{2}{*}{CHEB} & w/o RN & 74.5\tiny$\pm$1.1 & 73.4\tiny$\pm$1.1 & 66.8\tiny$\pm$1.8 & 63.2\tiny$\pm$1.6 & 75.1\tiny$\pm$1.8 & 75.2\tiny$\pm$1.1 & 82.1\tiny$\pm$2.2 & 79.4\tiny$\pm$3.5 & 70.3\tiny$\pm$4.0 & 68.4\tiny$\pm$3.4 \\ & w/ \ \ RN & \textbf{75.3}$^{**}$\tiny$\pm$1.1 & \textbf{74.0}$^{**}$\tiny$\pm$1.1 & \textbf{67.5}$^{**}$\tiny$\pm$1.6 & \textbf{63.8}$^{**}$\tiny$\pm$1.5 & \textbf{76.2}$^{**}$\tiny$\pm$1.4 & \textbf{76.3}$^{**}$\tiny$\pm$1.2 & \textbf{84.8}$^{**}$\tiny$\pm$2.4 & \textbf{82.1}$^{**}$\tiny$\pm$2.8 & \textbf{70.5}\tiny$\pm$4.0 & \textbf{68.6}\tiny$\pm$3.4 \\ \hline \multirow{2}{*}{SGC} & w/o RN & 74.9\tiny$\pm$2.1 & 73.8\tiny$\pm$2.1 & 65.7\tiny$\pm$1.6 & 61.8\tiny$\pm$1.6 & 72.9\tiny$\pm$2.3 & 73.1\tiny$\pm$2.6 & 87.1\tiny$\pm$1.3 & 84.9\tiny$\pm$1.1 & 77.4\tiny$\pm$1.7 & 76.8\tiny$\pm$1.8 \\ & w/ \ \ RN & \textbf{77.0}$^{**}$\tiny$\pm$1.1 & \textbf{76.0}$^{**}$\tiny$\pm$1.1 & \textbf{67.2}$^{**}$\tiny$\pm$1.3 & \textbf{62.9}$^{**}$\tiny$\pm$1.8 & 
\textbf{73.7}$^{**}$\tiny$\pm$2.8 & \textbf{73.8}$^{**}$\tiny$\pm$2.1 & \textbf{87.4}\tiny$\pm$1.5 & \textbf{85.2}\tiny$\pm$1.5 & \textbf{78.2}$^{**}$\tiny$\pm$1.8 & \textbf{77.8}$^{**}$\tiny$\pm$1.2 \\ \bottomrule \end{tabular}} \label{table::tinl_result} \end{table*} \begin{table*}[t] \centering \caption{Result of different dataset conflict levels (High/Middle/Low). Our ReNode method improves the GNN (GCN) performance most when the conflict level of the graph is high.} \resizebox{.99\textwidth}{!} { \begin{tabular}{l|l|l|l|l|l|l|l|l|l} \toprule W-F(\%) & CORA-H & CORA-M & CORA-L & CiteSeer-H & CiteSeer-M & CiteSeer-L & PubMed-H & PubMed-M & PubMed-L \\ \midrule w/o RN & 76.5\tiny$\pm$1.3 & 78.4\tiny$\pm$0.7 & 79.7\tiny$\pm$0.8 & 62.6\tiny$\pm$1.5 & 65.3\tiny$\pm$0.6 & 67.3\tiny$\pm$1.1 & 72.1\tiny$\pm$2.4 & 74.7\tiny$\pm$1.8 & 78.3\tiny$\pm$1.8 \\ w/ \ \ RN & \textbf{78.7}$^{**}$\tiny$\pm$0.8 & \textbf{79.3}$^{**}$\tiny$\pm$0.6 & \textbf{80.4}$^{**}$\tiny$\pm$0.6 & \textbf{63.8}$^{**}$\tiny$\pm$1.3 & \textbf{66.0}$^{**}$\tiny$\pm$0.8 & \textbf{67.5}\tiny$\pm$1.4 & \textbf{74.3}$^{**}$\tiny$\pm$2.1 & \textbf{75.6}$^{**}$\tiny$\pm$1.9 & \textbf{78.8}$^{*}$\tiny$\pm$1.5 \\\bottomrule \end{tabular} } \label{table::explan} \end{table*} \section{Experiments} In this section, we first introduce the experimental datasets for both transductive and inductive semi-supervised node classification. Then we introduce the experiments that verify the effectiveness of the proposed ReNode method in three different imbalance situations: (1) TINL only, (2) TINL and QINL, (3) large-scale graphs. \subsection{Datasets} We adopt two sets of graph datasets to conduct experiments. For the transductive setting~\citep{model_sage}, we take the widely-used Planetoid paper citation graphs~\citep{dataset_ccp} (\textit{CORA}, \textit{CiteSeer}, \textit{PubMed}) and the Amazon co-purchase graphs~\citep{dataset_real_amazon} (\textit{Photo}, \textit{Computers}) to verify the effectiveness of our method. 
For the inductive setting, we conduct experiments on the popular \textit{Reddit} dataset~\citep{model_sage} and the enormous \textit{MAG-Scholar} dataset (coarse-grain version)~\citep{pr_pprgo}, which has millions of nodes and features. For each of these datasets, we repeat experiments on $5$ different dataset splits~\citep{graph_pitfall} and run $3$ times for each split to reduce random variance. More details about the datasets and experiment settings are presented in Appendix~A. \subsection{ReNode for the Pure Topology-imbalance Issue} \label{section::tinl} \paragraph{Settings} When considering topology imbalance only, the labeled set is balanced and the annotation size of each class is equal to ${|\bm{\mathcal{L}}|}/{k}$. Following the most widely-used semi-supervised setting in node classification studies~\citep{semi-supervised,model_gcn}, we randomly select 20 nodes in each class for training and 30 nodes per class for validation; all the remaining nodes form the test set. We report the experimental results for the $5$ transductive datasets on $6$ widely-used GNN models: GCN~\citep{model_gcn}, GAT~\citep{model_gat}, PPNP~\citep{gnn_ppnp}, GraphSAGE~\citep{model_sage} (short as SAGE), ChebGCN~\citep{model_cheb} (short as CHEB) and SGC~\citep{model_sgc}. We strictly align the hyperparameters in each group of experiments to show the pure improvement brought by our ReNode method (similarly hereinafter). The training loss $L_T$ from Section~\ref{method::reNode} is adopted. \paragraph{Results} From Table~\ref{table::tinl_result}, we can find that our ReNode method effectively improves the overall performance (Weighted-F1) and the class-balance performance (Macro-F1) for all $6$ GNNs in most cases, which proves the effectiveness and generalizability of our method. 
Our method considers the graph-specific topology-imbalance issue, which is usually neglected by existing methods, and conducts a fine-grained, self-adaptive adjustment of the training node weights based on their topological positions. We notice that the improvement on the \textit{CiteSeer} dataset is smaller than on the other datasets. We attribute this to the poor connectivity of \textit{CiteSeer}, which prevents the conflict detection-based metric from reflecting the node topological positions well. To verify the motivation of relieving topology imbalance, we construct training sets with different levels of topology imbalance to test our method\footnote{The settings of the dataset topology-imbalance levels are described in Appendix C.}. Table~\ref{table::explan} shows that our ReNode method improves the performance of the GNN (GCN) most when the dataset is highly topologically imbalanced, which demonstrates that our method can effectively alleviate topology imbalance and improve GNN performance. \subsection{ReNode for the Compound Scene of TINL and QINL} \label{section::qinl} \paragraph{Settings} When jointly considering both the topology- and quantity-imbalance issues, following existing studies~\citep{imb_gen_ldam,imb_gen_step}, we adopt the step imbalance setting, in which all the minority classes have the same labeling size $n_i$ and all the majority classes have the same labeling size $n_a = \rho \cdot n_i$. The imbalance ratio $\rho$ denotes the intensity of quantity imbalance and is equal to the ratio of the labeling size of the most frequent class to that of the least frequent class. In this work, the imbalance ratio $\rho$ is set to $[5,10]$ for each dataset. The fraction of majority classes is $\mu$; for all experiments, we set $\mu=0.5$ and round down the result $\mu k$. The training loss $L_Q$ from Section~\ref{method::reNode} is adopted. 
We implement two groups of baselines for comparison: (1) popular quantity-imbalance methods for general scenarios: Re-weight~\citep{imb_rw1} (RW), Focal Loss~\citep{imb_gen_focal} (Focal) and Class Balanced Loss~\citep{imb_gen_cb} (CB); (2) graph-specific quantity-imbalance methods: DR-GCN~\citep{imb_node_dr_gcn}, RA-GCN~\citep{imb_node_ra_gcn} and GraphSMOTE~\citep{imb_node_graph_smote}. To jointly handle the topology- and quantity-imbalance issues and demonstrate their orthogonality, we combine our ReNode method with the three general quantity-imbalance methods (RW, Focal, CB)\footnote{The three graph-specific methods have special operations on the training loss, which can hardly be combined with our method.}. The backbone model is GCN~\citep{model_gcn}, and the labeling ratio $\delta$ is set to $5\%$. \paragraph{Results} From Table~\ref{table::qinl_result} (Macro-F1 is reported here for a fair comparison with these methods designed for class-balance performance), we can find that our ReNode method significantly outperforms both the general and the graph-specific quantity-imbalance methods in most situations by simultaneously alleviating the topology- and quantity-imbalance issues. Even when the training set is severely quantity-imbalanced ($\rho$=10), our method still effectively alleviates the imbalance issue and improves model performance. The general quantity-imbalance methods (RW, Focal, CB) are on par with or less effective than the graph-specific quantity-imbalance methods (DR-GCN, RA-GCN, G-SMOTE), while combining our ReNode method with these general quantity-imbalance methods surpasses the graph-specific quantity-imbalance methods; this demonstrates that node imbalance learning can be better addressed by jointly handling the topology- and quantity-imbalance issues instead of considering the quantity-imbalance issue only. 
\begin{table*}[t] \centering \caption{ReNode method for the compound scene of TINL and QINL. The imbalance ratio $\rho$ is set to different levels ($[5,10]$) to test the effect of our method under different imbalance intensities. } \resizebox{.99\textwidth}{!} { \begin{tabular}{l|l|l|l|l|l|l|l|l|l|l} \toprule Macro-F1(\%) & \multicolumn{2}{c|}{\textbf{CORA}} & \multicolumn{2}{c|}{\textbf{CiteSeer}} & \multicolumn{2}{c|}{\textbf{PubMed}} & \multicolumn{2}{c|}{\textbf{Photo}} & \multicolumn{2}{c}{\textbf{Computers}} \\ \midrule Imbalance Ratio & \multicolumn{1}{c|}5 & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}5 & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}5 & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}5 & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}5 & \multicolumn{1}{c}{10} \\ \midrule CE & 60.9\tiny$\pm$1.5 & 41.0\tiny$\pm$3.5 & 53.6\tiny$\pm$2.1 & 47.6\tiny$\pm$2.8 & 61.0\tiny$\pm$1.9 & 49.7\tiny$\pm$2.6 & 62.0\tiny$\pm$2.7 & 40.7\tiny$\pm$3.4 & 50.4\tiny$\pm$2.6 & 35.5\tiny$\pm$3.2 \\\midrule DR-GCN & 67.7\tiny$\pm$1.1 & 51.3\tiny$\pm$1.4 & 54.7\tiny$\pm$1.7 & 52.5\tiny$\pm$2.6 & 79.4\tiny$\pm$1.2 & 78.0\tiny$\pm$1.6 & 80.8\tiny$\pm$2.3 & 79.5\tiny$\pm$2.8 & 66.9\tiny$\pm$3.5 & 67.4\tiny$\pm$3.6 \\ RA-GCN & 69.0\tiny$\pm$1.5 & 51.7\tiny$\pm$1.7 & \textbf{55.6}\tiny$\pm$1.3 & 52.7\tiny$\pm$2.1 & 80.6\tiny$\pm$1.8 & 78.1\tiny$\pm$2.1 & 81.4\tiny$\pm$2.6 & 79.4\tiny$\pm$3.2 & 71.2\tiny$\pm$2.8 & 68.7\tiny$\pm$3.0 \\ G-SMOTE & 68.1\tiny$\pm$0.9 & 49.6\tiny$\pm$1.1 & 54.0\tiny$\pm$1.6 & 51.8\tiny$\pm$1.3 & 79.7\tiny$\pm$1.2 & 76.4\tiny$\pm$1.5 & 82.2\tiny$\pm$1.8 & 77.5\tiny$\pm$2.1 & 71.9\tiny$\pm$2.5 & 61.3\tiny$\pm$3.2 \\ \midrule RW (w/o RN) & 69.1\tiny$\pm$1.4 & 49.7\tiny$\pm$1.6 & 53.6\tiny$\pm$2.3 & 52.9\tiny$\pm$2.6 & 80.5\tiny$\pm$1.5 & 78.0\tiny$\pm$2.0 & 80.5\tiny$\pm$2.7 & 80.4\tiny$\pm$3.3 & 70.5\tiny$\pm$3.2 & 67.8\tiny$\pm$4.2 \\ RW (w/\ \ \ RN) & 70.0$^{*}$\tiny$\pm$1.3 & 50.1\tiny$\pm$1.7 & 55.2$^{**}$\tiny$\pm$1.8 & 54.0$^{**}$\tiny$\pm$2.5 & 
\textbf{81.2}$^{*}$\tiny$\pm$1.0 & 78.5$^{*}$\tiny$\pm$2.2 & \textbf{83.9}$^{**}$\tiny$\pm$2.1 & \textbf{81.3}$^{**}$\tiny$\pm$3.2 & 72.4$^{**}$\tiny$\pm$2.6 & \textbf{70.2}$^{**}$\tiny$\pm$2.4 \\ \midrule FOCAL (w/o RN) & 66.4\tiny$\pm$1.6 & 51.9\tiny$\pm$1.8 & 54.3\tiny$\pm$1.3 & 54.0\tiny$\pm$1.9 & 80.5\tiny$\pm$0.7 & 78.0\tiny$\pm$1.6 & 79.3\tiny$\pm$1.9 & 79.2\tiny$\pm$2.2 & 65.8\tiny$\pm$2.7 & 63.9\tiny$\pm$2.6 \\ FOCAL (w/\ \ \ RN) & 68.7$^{**}$\tiny$\pm$0.7 & \textbf{52.6}$^{**}$\tiny$\pm$1.9 & 54.6\tiny$\pm$1.2 & \textbf{54.7}$^{*}$\tiny$\pm$1.5 & 80.9$^{*}$\tiny$\pm$0.8 & \textbf{78.7}$^{**}$\tiny$\pm$1.4 & 80.0$^{**}$\tiny$\pm$2.3 & 80.7$^{**}$\tiny$\pm$2.9 & 68.6$^{**}$\tiny$\pm$3.1 & 65.5$^{**}$\tiny$\pm$3.5 \\\midrule CB (w/o RN) & 69.8\tiny$\pm$1.5 & 51.5\tiny$\pm$1.5 & 54.1\tiny$\pm$1.3 & 53.5\tiny$\pm$0.8 & 80.6\tiny$\pm$0.8 & 77.6\tiny$\pm$1.6 & 77.9\tiny$\pm$2.6 & 78.8\tiny$\pm$3.1 & 69.6\tiny$\pm$2.2 & 64.8\tiny$\pm$2.9 \\ CB (w/\ \ \ RN) & \textbf{71.1}$^{**}$\tiny$\pm$0.6 & 51.9$^{*}$\tiny$\pm$1.2 & 54.7$^{*}$\tiny$\pm$1.6 & 54.3$^{**}$\tiny$\pm$2.3 & 81.2$^{*}$\tiny$\pm$1.8 & 78.3$^{**}$\tiny$\pm$2.6 & 79.6$^{**}$\tiny$\pm$2.7 & 80.4$^{**}$\tiny$\pm$3.3 & \textbf{73.1}$^{**}$\tiny$\pm$3.1 & 66.5$^{**}$\tiny$\pm$3.6 \\ \bottomrule \end{tabular}} \label{table::qinl_result} \end{table*} \subsection{ReNode for Large-scale Graphs} \label{section::large-scale} \paragraph{Settings} We conduct experiments on the two large-scale datasets: Reddit and MAG-Scholar, to verify the effectiveness of our ReNode method in the inductive setting. We conduct experiments with different labeling sizes (20/50/100 training nodes per class) and imbalance settings (TINL-only, TINL and QINL). 
The backbone GNN model is PPRGo~\citep{pr_pprgo}~\footnote{We do not adopt other inductive backbone GNN models such as Cluster-GCN~\citep{deep_cluster} or APPNP~\citep{gnn_ppnp} because of their low efficiency~\citep{pr_pprgo} when handling the enormous \textit{MAG-Scholar} dataset.}. For QINL, we sample training nodes uniformly to stay consistent with PPRGo. The training loss $L_L$ from Section~\ref{method::reNode} is adopted. Neither the baseline nor our method is combined with any quantity-imbalance method. \paragraph{Results} In Figure~\ref{figure::large}, we present the experimental results under different labeling sizes and imbalance settings. Our method effectively improves performance on the large-scale graphs compared to the popular PPRGo model across all settings, which demonstrates its applicability to extremely large graphs. We also notice that our method brings a greater improvement when the labeling size is large: with more labeled nodes, the positions located by the conflicts among nodes become more accurate, leading to more reasonable weight adjustments. On the other hand, when the labeling ratio is extremely small (especially for the enormous \textit{MAG-Scholar} graph) and the influence conflict between labeled nodes is negligible, our method suffers from a cold-start problem.
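To make the weighting scheme concrete, the following NumPy sketch illustrates conflict-based re-weighting in the spirit of ReNode: the conflict at a labeled node is taken as the personalized-PageRank influence it receives from labeled nodes of other classes, and training weights are cosine-annealed by conflict rank. The teleport probability \texttt{alpha}, the bounds \texttt{w\_min}/\texttt{w\_max}, and the exact rank-to-weight mapping are illustrative assumptions rather than the precise schedule used in our implementation.

```python
import numpy as np

def renode_style_weights(adj, labels, labeled_idx, alpha=0.15,
                         w_min=0.5, w_max=1.5):
    """Illustrative conflict-based re-weighting for labeled nodes.

    adj: dense (n, n) adjacency matrix; labels: class id per node;
    labeled_idx: indices of the labeled nodes.
    """
    n = adj.shape[0]
    # Symmetrically normalized adjacency with self-loops.
    a = adj + np.eye(n)
    d = 1.0 / np.sqrt(a.sum(1))
    a_norm = a * d[:, None] * d[None, :]
    # Personalized PageRank influence matrix: alpha (I - (1-alpha) A)^-1.
    ppr = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * a_norm)
    # Conflict at a labeled node v: influence it receives from labeled
    # nodes of *other* classes (a proxy for closeness to class boundaries).
    conflict = np.array([
        ppr[v, [u for u in labeled_idx if labels[u] != labels[v]]].sum()
        for v in labeled_idx
    ])
    # Rank conflicts (0 = least conflicted) and anneal weights with a
    # cosine schedule: boundary-near nodes are down-weighted.
    rank = np.argsort(np.argsort(conflict))
    weights = w_min + 0.5 * (w_max - w_min) * (
        1 + np.cos(np.pi * rank / max(len(labeled_idx) - 1, 1)))
    return weights
```

The resulting per-node weights would then multiply the supervised loss terms of the labeled nodes during training.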
\begin{figure*}[t] \centering \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{plot/reddit_rainbow_0.pdf} \end{subfigure}% \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{plot/reddit2_rainbow_0.pdf} \end{subfigure}% \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{plot/mag_rainbow_0.pdf} \end{subfigure}% \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{plot/mag2_rainbow_0.pdf} \end{subfigure}% \caption{Experimental results (Weighted-F1,\%) on the large-scale \textit{Reddit} and \textit{MAG-Scholar} graphs. Our ReNode method can effectively improve the model performance under different labeling sizes.} \label{figure::large} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=\linewidth]{plot/cora_1.pdf} \end{subfigure}% \hspace{0.05in} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=\linewidth]{plot/citeseer_1.pdf} \end{subfigure}% \hspace{0.05in} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=\linewidth]{plot/pubmed_3.pdf} \end{subfigure}% \caption{Evaluating GNNs from the aspect of topology-imbalance sensitivity (Metric: Weighted-F1 (\%)). We can summarize the ranking of topology-imbalance sensitivity: GCN > PPNP > GAT.} \label{figure::eval} \end{figure*} \section{Discussions} \subsection{Evaluating GNNs from the Aspect of Topology-Imbalance Sensitivity} \label{section::eval} In Figure~\ref{figure::eval}, we evaluate the capability of GNNs to handle topology imbalance and find that different GNNs present significant differences in topology-imbalance sensitivity across multiple datasets. The GCN model is susceptible to the topology-imbalance level of the graph, and its performance decays greatly as the topology imbalance increases.
In contrast, the GAT model is less sensitive to the topology-imbalance level and achieves the best results when the topology-imbalance level is high. The PPNP model achieves ideal performance when the topology-imbalance level is low, and its performance does not drop as sharply as GCN's when the topology-imbalance level is high. We attribute these differences to the following: (1) the aggregation operation of GCN is equivalent to directly averaging neighbor features~\citep{ana_powerful} and lacks a noise-filtering mechanism, so it is more sensitive to the topology-imbalance level of the graph; (2) the GAT model can dynamically adjust the aggregation weights of different neighbors, which increases its robustness in highly topology-imbalanced situations but hinders performance when the topology-imbalance level is low and there is less need to filter neighbor information; (3) the infinite-convolution mechanism of the PPNP model allows it to aggregate information from distant nodes, enhancing its robustness to graph topology imbalance. \citet{graph_pitfall} observe that the performance ranking of GNNs varies with the training set selection. Hence, existing node classification studies~\citep{drop_edge,gcl_multiview} usually repeat experiments multiple times with different training sets to reduce this randomness. The results in Figure~\ref{figure::eval} suggest that topology imbalance can partly explain the randomness of GNN performance caused by training set selection, and that topology-imbalance sensitivity can serve as a new aspect in evaluating different GNN architectures. \subsection{Limitations of Method} \label{section::limitation} Although our ReNode method has proven effective in multiple scenarios, we also note some limitations arising from the complexity of node imbalance learning.
First, the ReNode method is devised for homogeneously-connected graphs (linked nodes are expected to be similar, as in the datasets used in our experiments), and it needs further adaptation for heterogeneously-connected graphs (such as protein networks). Besides, the ReNode method improves less when the graph connectivity is poor (Section~\ref{section::tinl}) or the labeling ratio is extremely low (Section~\ref{section::qinl}), because in these cases the conflict level among nodes is low and thus the nodes' topological positions are insufficiently reflected. \section{Related Work} \label{rw_QIL} Imbalanced classification problems are widespread in real scenarios and have attracted extensive attention from both academia and industry. Most existing studies on this topic focus on the class-imbalanced quantity distribution~\citep{imb_survey_old}, where the model's inference ability for the majority classes is significantly better than for the minority classes~\citep{imb_survey_new}. Existing methods for the quantity-imbalance issue can be roughly divided into methods for the data selection phase and methods for the model training phase. Active learning~\citep{al_survey,imb_gen_al,imb_gen_al_new} and re-sampling~\citep{imb_gen_smote,imb_gen_adasyn,imb_gen_oversampling} are two classical approaches for constructing a quantity-balanced training set. For the model training phase, re-weighting is a simple but effective solution, which adjusts the weights of training samples in different classes based on the labeling sizes~\citep{imb_rw1,imb_rw2,imb_gen_cb,imb_gen_ldam}. However, directly applying these methods to the graph scene neglects the graph-specific topology-imbalance issue. Unlike re-weighting methods that operate at the class level, our ReNode method is more fine-grained and assigns a weight to each node individually.
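As a concrete point of contrast, class-level re-weighting assigns one weight per class from the labeling sizes alone; a common instance is the class-balanced ``effective number'' weighting of~\citep{imb_gen_cb}. The NumPy sketch below (illustrative, with \texttt{beta} and the normalization chosen for exposition) shows such class-level weights next to their expansion into a per-node weight vector, which is the granularity a node-level method operates at.

```python
import numpy as np

def class_balanced_weights(class_counts, beta=0.999):
    """Class-level re-weighting via the 'effective number' of samples:
    w_c is proportional to (1 - beta) / (1 - beta ** n_c)."""
    counts = np.asarray(class_counts, dtype=float)
    weights = (1.0 - beta) / (1.0 - beta ** counts)
    # Normalize so the weights average to 1 over classes.
    return weights * len(counts) / weights.sum()

def per_node_weights(node_labels, class_weights):
    """Expand class-level weights to one weight per labeled node.
    A node-level method (like ReNode) would instead modulate each
    entry individually based on the node's topological position."""
    return class_weights[np.asarray(node_labels)]
```

With labeling sizes \texttt{[50, 5]}, the minority class receives the larger weight, yet every node of a class still shares the same weight; this is exactly the coarseness that node-level re-weighting addresses.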
There have been quantity-imbalance studies (Tomek links~\citep{imb_old_tomek}, NearMiss~\citep{imb_old_near_miss}, One-Sided Selection~\citep{imb_old_one_sided}) that try to exclude the negative influence of labeled samples close to class boundaries by measuring the similarity of sample features. However, in the graph scene, the prior knowledge contained in node connections is more reliable than directly calculating the feature similarity. Besides, the number of labeled nodes is quite small in the semi-supervised setting, so it is not robust to locate their positions by computing similarities among a small number of nodes; we instead propose to leverage the influence conflict across the whole graph to locate node positions relative to class boundaries. The graph data structure has a wide range of applications, such as social media~\citep{model_sage}, stock exchange~\citep{liwei_stock}, shopping~\citep{graph_pitfall}, medicine~\citep{dataset_qm9}, transportation~\citep{graph_transport}, and so on. As with other data structures, graph node representation learning also suffers from the quantity-imbalance issue~\citep{imb_node_dr_gcn}. Apart from the universal quantity-balance approaches introduced in Section~\ref{rw_QIL}, which can be transferred to the graph scene, some graph-specific quantity-imbalance methods have recently been proposed. DR-GCN~\citep{imb_node_dr_gcn} proposes two types of regularization to tackle quantity imbalance: class-conditioned adversarial training and a latent distribution constraint on unlabeled nodes. RA-GCN~\citep{imb_node_ra_gcn} proposes to automatically learn to weight the training samples of different classes in an adversarial training manner. AdaGCN~\citep{imb_node_boost} proposes to leverage the boosting algorithm to handle the quantity-imbalance issue for the node classification task. GraphSMOTE~\citep{imb_node_graph_smote} combines synthetic node generation and edge generation to up-sample nodes for the minority classes.
However, these studies only pay attention to the quantity imbalance and overlook the topology imbalance. Unlike position-aware studies~\citep{pos_pgnn,position_psgnn,pos_graphreach} that try to locate the absolute positions of all nodes by measuring their distances from selected anchor nodes, our Totoro metric is devised to locate the relative positions of labeled nodes to the class boundaries by considering the influence conflict, and it gets rid of the dependence on anchor nodes. Besides, our relative positions can more accurately reflect node class information because we distinguish the information from different classes, while existing studies~\citep{pos_pgnn,position_psgnn} treat all the anchor nodes the same and ignore the class difference. \section{Conclusion and Future Work} In this work, we recognize topology-imbalance node representation learning (TINL) as a graph-specific imbalance learning problem that has not been studied so far. We find that the topology-imbalance issue widely exists in graphs and severely hinders the learning of node classification. We unify TINL with quantity-imbalance node representation learning (QINL) by considering the shift of the node influence boundaries from the true class boundaries. To measure the degree of topology imbalance, we devise a conflict-detection-based metric, Totoro, to locate node positions, and further propose the ReNode method to adaptively adjust the training weights of labeled nodes based on their topological positions. Extensive empirical results have verified the effectiveness of our method in various settings: TINL-only, TINL combined with QINL, and large-scale graphs. Besides, we propose topology-imbalance sensitivity as a new metric for evaluating GNNs. Considering the importance of the topology-imbalance issue and the limitations of our approach, advanced methods with stronger theoretical or experimental support are expected in future work.
Moreover, since topology imbalance is widespread in graph-related tasks other than node classification, how to measure and solve topology-imbalance issues in broader graph scopes remains a meaningful challenge for future study. \section{Acknowledgement} We appreciate all the thoughtful and insightful suggestions from the reviewers. This work was supported in part by a Tencent Research Grant and the National Natural Science Foundation of China (No. 61673028). Xu Sun is the corresponding author of this paper. \medskip
\section{Introduction} With the global availability of the digital elevation model (DEM) produced by the German Aerospace Center's (DLR) TanDEM-X mission, topographic data with previously unattained spatial resolution and height accuracy have become accessible on a global scale. The fact that a complete satellite mission was set in motion and executed for this sole purpose shows the demand and need for such data. All the more attention should be paid to not compromising resolution and accuracy after acquisition through imperfect processing steps. Phase denoising is a mandatory step within any InSAR DEM production workflow. A more accurate phase estimate results not only in a less noisy DEM but also eases phase unwrapping. Indiscriminate spatial averaging of the phase, also called boxcar multilooking, while being fast to compute and reducing the variance of the estimate, degrades resolution. To address this issue, more advanced filtering methods have been the topic of research for more than two decades. Lee's sigma filter and its later extensions~\cite{lee_sigma_filter_1983, lee_improved_sigma_filter_2009, lee_polsar_speckle_filtering_extended_sigma_2015} are examples of SAR and polarimetric SAR filters that include statistical tests for selecting pixels in the averaging process. Nonlocal filters were first introduced for denoising optical images~\cite{buades_non-local_2005}. In recent years, they have become increasingly popular within the denoising community due to their unsurpassed noise reduction and detail preservation. The foundation of their performance is a highly discriminative search for statistically homogeneous pixels, somewhat akin to the sigma filter, within a large area during the filtering process.
These features sparked research into adapting them to new domains, such as denoising regular SAR amplitude images~\cite{deledalle2009iwm, parrilli_nonlocal_2012, martino_scattering_based_nonlocal_2016}, interferograms~\cite{deledalle_nl-insar:_2011, lin_insar_tensor_svd_2015}, polarimetric SAR~\cite{chen_nonlocal_polsar_pretest_2011}, and a unified approach for SAR amplitude images, interferograms and polarimetric SAR images~\cite{deledalle_nl-sar:_2015}. Recent publications applied the nonlocal filtering paradigm to SAR stacks in the fields of differential SAR interferometry~\cite{sica_nonlocal_multipass_2015} and 3D reconstruction using SAR tomography~\cite{hondt_nonlocal_tomosar_2018}. The first nonlocal InSAR filter~\cite{deledalle_nl-insar:_2011} piqued our interest in producing DEMs from bistatic TanDEM-X strip map interferograms with improved resolution and accuracy compared to boxcar multilooking, which is employed in DLR's processing chain for the global TanDEM-X DEM\@. For the original operational processing chain~\cite{breit_itp_2010, fritz_itp_2011, rossi_tandemx_rawdem_2012}, the need to cope with the data volume of the global DEM acquisition imposed severe design restrictions due to computational costs. Boxcar multilooking was finally chosen, as the resulting DEM fulfills the TanDEM-X accuracy requirements~\cite{krieger_tandemx_2007} and its computational costs are negligible compared to the other processing steps. Our research was motivated by the need for even higher-resolution DEMs, which led DLR to commence research on the high-resolution DEM (HDEM) product with increased horizontal resolution and vertical accuracy over selected areas~\cite{lachaise_eusar_tandemx_hdem_insar_update_2016} compared to the default TanDEM-X DEM product. HDEMs rely on several new acquisitions with larger baselines, resulting in smaller height errors from phase noise.
For comparison, the heights of ambiguity for HDEM range from \SI{10}{\meter} to \SI{20}{\meter}, whereas the values for the regular DEM start at \SI{35}{\meter} and go up to \SI{50}{\meter}. Thus, a boxcar averaging phase filter with a smaller spatial extent than in the default processing toolchain suffices to fulfill the vertical accuracy goal, and more of the original spatial resolution can be preserved. \cref{tab:demspec} gives the specifications of the two available DEM products from DLR\@. Our goal was to create a DEM similar in accuracy to the HDEM specifications by reprocessing the acquisitions made for the global TanDEM-X DEM\@. The findings of our earlier investigation~\cite{zhu_improving_2014, zhu_nldem_2017} suggest that the qualities of nonlocal filters do indeed transfer to DEM generation. We were able to produce a RawDEM, the initial DEM product used for creating the final TanDEM-X DEM, with \SI{6}{\meter} $\times$ \SI{6}{\meter} resolution showing more details and less noise compared to the operational product with a resolution of \SI{12}{\meter} $\times$ \SI{12}{\meter}. Yet our straightforward application of NL-InSAR, the nonlocal filter introduced in~\cite{deledalle_nl-insar:_2011}, led to undesired terrace-like artifacts in the final DEM\@. We also found that the more recently published NL-SAR filter~\cite{deledalle_nl-sar:_2015} was unsuitable for DEM generation, as it showed a tendency to oversmooth.
\begin{table*} \newcolumntype{Y}{>{\centering\arraybackslash}X} \caption{Resolution and accuracy requirements of the standard global TanDEM-X DEM and the locally available HDEM~\cite{hoffmann_tdx_specs_2016}.} \begin{tabularx}{\textwidth}{lYYY} \toprule & Independent pixel spacing & Absolute horizontal and & Relative vertical accuracy \\ && vertical accuracies (\SI{90}{\percent}) & (\SI{90}{\percent} linear point-to-point) \\ \midrule (global) TanDEM-X DEM & \SI{12}{\meter} (\si{\ang{;;0.4}} at equator) & \SI{10}{\meter} & \SI{2}{\meter} (slope $\leq$ \SI{20}{\percent}) \\ &&& \SI{4}{\meter} (slope $>$ \SI{20}{\percent}) \\ \midrule (local) TanDEM-X HDEM & \SI{6}{\meter} (\si{\ang{;;0.2}} at equator) & \SI{10}{\meter} & goal: \SI{0.8}{\meter} \\ &&& (\SI{90}{\percent} random height error) \\ \bottomrule \end{tabularx}% \label{tab:demspec} \end{table*} This paper further elaborates on the issues we encountered when applying the nonlocal filtering paradigm to InSAR denoising and proposes a new nonlocal InSAR filter that takes these into consideration. A key feature is its compensation of the deterministic, topographic phase component, which hampers the search for statistically homogeneous pixels in mountainous terrain. It further factors in the diversity of natural terrain by using a local scene heterogeneity measure to select key filtering parameters instead of relying on a global, fixed set. These techniques can readily be integrated into existing nonlocal InSAR filters to bolster their performance as well. A comparison with a LiDAR DEM illustrates and quantifies the improvement that can be achieved on real data by employing nonlocal filters instead of conventional ones.
Concerning the vastly increased computational cost: with advances in semiconductor manufacturing processes and computing architectures, especially graphics processing units (GPUs), large-scale nonlocal filtering of SAR interferograms is nowadays feasible~\cite{baier_igarss_gpu_nonlocal_2016}. The paper is structured as follows. \cref{sec:nlinsar} briefly introduces the nonlocal filtering concept with respect to SAR interferometry. The design decisions of the proposed filter are described in \cref{seg:nlswag} and are backed up by the experiments in \cref{sec:experiments}. We discuss the impact of the new filter in \cref{sec:discussion} and conclude together with an outlook in \cref{sec:conclusion}. \section{Nonlocal InSAR Filtering}% \label{sec:nlinsar} What sets nonlocal filters apart from other filters is the large area they operate over for denoising each pixel. This area, called the search window, is inspected for similar pixels. True to the name “nonlocal”, the absolute positions of these pixels do not influence the subsequent filtering process, unlike in many conventional neighborhood filters. For detecting similar pixels, nonlocal filters do not rely on comparing the pixel values alone, but also take their surrounding areas, henceforth referred to as patches, into account. By doing so, textures, structures and features help identify similar pixels and influence the filtering results to a far larger degree than with conventional filters. \cref{fig:nl_concept} illustrates this filtering process, where, in order to denoise the pixel marked by the red cross, all pixels inside the search window (blue square) are considered by comparing their surrounding patches to the center pixel's patch (all shown as green squares). The resulting similarity map is depicted on the right and shows that the most similar pixels are located along the edge.
\begin{figure} \centering \makeatletter% \if@twocolumn% \newcommand{\nlconceptwidth}{0.49\columnwidth} \else \newcommand{\nlconceptwidth}{0.3\columnwidth} \fi \makeatother \begin{subfigure}[t]{\nlconceptwidth} \centering \includegraphics[width=\textwidth]{figures/nl_concept/nl_concept_data} \caption{Search window (blue square) and patches (green squares)}% \label{subfig:nl_concept_data} \end{subfigure} \begin{subfigure}[t]{\nlconceptwidth} \centering \includegraphics[width=\textwidth]{figures/nl_concept/nl_concept_weight_map} \caption{Similarity map}% \label{subfig:nl_concept_weight_map} \end{subfigure} \caption{The nonlocal filtering process: Inside the search window (blue square) centered at the pixel that is to be filtered \subref{subfig:nl_concept_data}, all pixels are checked for their similarity by comparing their surrounding patches to the center patch (green squares). The corresponding similarity map \subref{subfig:nl_concept_weight_map} shows that similar pixels are located along the edge.}% \label{fig:nl_concept} \end{figure} In the original version of the nonlocal filter, the Euclidean distance between patches was used as a measure of similarity. This measure yields the least-squares estimate for additive white Gaussian noise, a common and practical model for optical images. As the noise characteristics of SAR profoundly differ, the earlier referenced filters for SAR, InSAR and polarimetric SAR all define similarity criteria depending on the statistics of the observed quantities: the speckle noise for SAR amplitude images, the interferometric phase for InSAR, or the covariance matrix for (Pol)(In)SAR\@. The patch dissimilarities $\Delta$ in the search window are mapped into weights $w$ by a kernel. In most cases, an exponential kernel or a slight adaptation thereof is used \begin{align} w = e^{-\frac{\Delta}{h}} \;, \label{eq:exp_kernel} \end{align} where $h$ sets the trade-off between filtering strength and detail preservation.
In the following, we assume that the weights are normalized to sum to one. The estimate of an image $z$, in our case the interferogram \begin{align} z = u_1 \bar u_2 = A_1 A_2 e^{\mathrm{j}\varphi} = \lvert z \rvert e^{\mathrm{j}\varphi} \end{align} of the master and slave images, at the pixel location $\mathbf{x}$ is computed as the weighted mean over the corresponding search window $\partial_\mathbf{x}$ \begin{align} \hat z_\mathbf{x} = \sum\limits_{\mathbf{y} \in \partial_\mathbf{x}} w_{\mathbf{x}, \mathbf{y}} z_\mathbf{y} \;. \label{eq:weighted_mean} \end{align} The argument $\hat \varphi = \angle \hat z$ of $\hat z$ is the estimate of the true interferometric phase $\theta$. In a similar fashion, estimates of the intensity \begin{align} \hat I_\mathbf{x} &= \sum\limits_{\mathbf{y} \in \partial_\mathbf{x}} w_{\mathbf{x}, \mathbf{y}} \frac{\lvert u_{1, \mathbf{y}} \rvert^2 + \lvert u_{2, \mathbf{y}}\rvert^2}{2} \label{eq:nl_int} \end{align} and coherence \begin{align} \hat \gamma_\mathbf{x} &= \frac{ \left\lvert \sum_{\mathbf{y} \in \partial_\mathbf{x}} w_{\mathbf{x}, \mathbf{y}} u_{1, \mathbf{y}} \bar u_{2, \mathbf{y}} \right\rvert}{\sqrt{\sum_{\mathbf{y} \in \partial_\mathbf{x}} w_{\mathbf{x}, \mathbf{y}} \lvert u_{1, \mathbf{y}} \rvert^2\sum_{\mathbf{y} \in \partial_\mathbf{x}} w_{\mathbf{x}, \mathbf{y}} \lvert u_{2, \mathbf{y}} \rvert^2}} \label{eq:nl_coh} \end{align} can be obtained. One can think of the nonlocal filter as a selector for statistically homogeneous pixels for the averaging process. When dealing with SAR images and InSAR images in particular, there are several additional factors to consider when applying the nonlocal filter paradigm. The next section highlights these pitfalls and describes how they are addressed specifically by the proposed method. 
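For illustration, the estimators in \cref{eq:exp_kernel}, \cref{eq:weighted_mean}, \cref{eq:nl_int} and \cref{eq:nl_coh} can be sketched in a few lines of NumPy. The sketch assumes a precomputed patch-dissimilarity array \texttt{delta} for one pixel's search window, together with flattened views \texttt{u1}, \texttt{u2} of the master and slave SLC pixels in that window; it is a minimal illustration, not the operational implementation.

```python
import numpy as np

def nonlocal_estimates(u1, u2, delta, h):
    """Nonlocal phase/intensity/coherence estimates for one pixel.

    u1, u2: complex master/slave SLC pixels in the search window
    delta : patch dissimilarity of each window pixel to the center pixel
    h     : kernel bandwidth (filtering strength vs. detail preservation)
    """
    # Exponential kernel, normalized to sum to one.
    w = np.exp(-delta / h)
    w /= w.sum()
    # Weighted mean of the interferogram z = u1 * conj(u2).
    z_hat = np.sum(w * u1 * np.conj(u2))
    phase_hat = np.angle(z_hat)
    # Intensity estimate from the two channel powers.
    i_hat = np.sum(w * (np.abs(u1) ** 2 + np.abs(u2) ** 2) / 2)
    # Coherence estimate: normalized magnitude of the weighted cross term.
    coh_hat = np.abs(z_hat) / np.sqrt(
        np.sum(w * np.abs(u1) ** 2) * np.sum(w * np.abs(u2) ** 2))
    return phase_hat, i_hat, coh_hat
```

For a noise-free window with a constant interferometric phase offset, the estimator recovers that offset with coherence one, regardless of the weights.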
\section{Proposed Filter}% \label{seg:nlswag} In the following, we will refer to the proposed method as NL-SWAG, short for \textbf{N}on\textbf{L}ocal-\textbf{S}AR interferogram filter for \textbf{W}ell-performing \textbf{A}ltitude Map \textbf{G}eneration. \Cref{fig:flowgraph} shows a high-level flow graph of NL-SWAG\@. The following paragraphs describe in greater detail the individual operations and how they affect the filtering performance and outcome. Operations that are explicitly explained in the respectively named subsections, which also cover other related blocks, are highlighted in gray. \begin{figure} \centering \makeatletter% \if@twocolumn% \newcommand{\flowgraphwidth}{\columnwidth} \else \newcommand{\flowgraphwidth}{0.6\textwidth} \fi \makeatother \centering \includegraphics[width=\flowgraphwidth]{figures/flowgraph/flowgraph} \caption{Flow graph of the proposed filter. Blocks that are highlighted in gray have their own respective subsections, which also cover other related operations. The second stage uses the prefiltered output of the first stage for computing a new, more reliable set of weights.}% \label{fig:flowgraph} \end{figure} \subsection{Aggregation} A common filtering artifact of nonlocal filters is the so-called \emph{rare patch effect}, which occurs when only a few similar patches are located within the search window, resulting in subpar filtering performance. The problem is especially prevalent near edges, as \cref{fig:nl_concept} illustrates, where for all patches that include the edge only a few similar patches are found. Aggregating multiple estimates is one approach to counter this behavior~\cite{lebrun_denoising_cuisine_2012}. Instead of the traditional pixel-wise nonlocal means filter as in \cref{eq:weighted_mean}, NL-SWAG computes the patch-wise weighted mean \begin{align} \mathbf{\hat z}_\mathbf{x} = \sum\limits_{\mathbf{y} \in \partial_\mathbf{x}} w_{\mathbf{x}, \mathbf{y}} \mathbf{z}_\mathbf{y} \;.
\label{eq:weighted_mean_patch} \end{align} The overlapping patch estimates $\mathbf{\hat z}$ are then aggregated into a single pixel estimate, weighted by their equivalent number of looks $L$ \begin{align} \hat z_\mathbf{x} &= \frac{\sum_{\mathbf{y} \in \mathcal{P}_\mathbf{x}} L_\mathbf{y} \mathbf{\hat z}_\mathbf{y, x-y}}{\sum_{\mathbf{y} \in \mathcal{P}_\mathbf{x}} L_\mathbf{y}} \;, \label{eq:aggregation} \end{align} where $\mathcal{P}_\mathbf{x}$ denotes the set of all pixel indices within a patch centered at $\mathbf{x}$ and $\mathbf{x-y}$ is the relative index inside the respective patch, i.e., $\mathbf{z}_\mathbf{y, x-y} = z_\mathbf{x}$. The weighting by $L$ ensures that patch estimates with a higher number of looks, and therefore a smaller variance, have a larger impact on the final estimate. The equivalent number of looks, i.e., the variance reduction of the weighted mean, can directly be computed from the weight map~\cite{deledalle_nl-insar:_2011} \begin{align} L_\mathbf{x} = \frac{\left( \sum_{\mathbf{y} \in \partial_\mathbf{x}} w_{\mathbf{x, y}}\right)^2}{\sum_{\mathbf{y} \in \partial_\mathbf{x}} w^2_{\mathbf{x, y}}} \;. \end{align} Aggregation mitigates the rare patch effect as it also properly denoises pixels near features, such as edges, as long as they also belong to patches that do not contain said features. \subsection{Two Stage Filtering} SAR interferograms are affected by speckle and suffer from phase noise due to the innate coherence loss between two acquisitions, rendering the similarity estimates difficult and thereby degrading the denoising performance. A solution that is often employed is a two-stage approach~\cite{dabov_bm3d_2007, deledalle_nl-sar:_2015, salmon_two-stage_denoising_2012}, where in the first step the so-called \emph{guidance image} is generated by prefiltering the input image.
In the second step, the guidance image is used to compute the patch similarities, which can now be more reliably estimated due to the reduced noise level. The stages of NL-SWAG, which are also depicted in \cref{fig:flowgraph}, employ the two similarity criteria derived in~\cite{deledalle_nl-insar:_2011}: one for two single look complex (SLC) images in the first stage and one for a filtered interferogram in the second stage. \subsubsection{First stage} The similarity of two pixels in the first stage is the conditional likelihood of observing $u_{i,\mathbf{x}}$ and $u_{i, \mathbf{y}}$ ($i = 1,2$), given that the true parameters, the coherence $\gamma$, the intensity $I$ and the interferometric phase $\theta$, are identical~\cite{deledalle_nl-insar:_2011}: \begin{multline} p(u_{1, \mathbf{x}}, u_{1, \mathbf{y}}, u_{2, \mathbf{x}}, u_{2, \mathbf{y}} \vert I_\mathbf{x} = I_\mathbf{y}, \theta_\mathbf{x} = \theta_\mathbf{y}, \gamma_\mathbf{x} = \gamma_\mathbf{y}) =\\ \delta_{\mathbf{x}, \mathbf{y}}^1 = \sqrt{\frac{B}{C}}^3 \left( \frac{A+C}{A} \sqrt{\frac{C}{A-C}} - \arcsin \sqrt{\frac{C}{A}} \right), \label{eq:pix_sim_likelihood} \end{multline} where \begin{equation*} \begin{aligned} A &= \left( A_{1,\mathbf{x}}^2 + A_{2,\mathbf{x}}^2+ A_{1,\mathbf{y}}^2 + A_{2,\mathbf{y}}^2 \right)^2 \;, \\ B &= A_{1,\mathbf{x}} A_{2,\mathbf{x}} A_{1,\mathbf{y}} A_{2,\mathbf{y}} \;\textnormal{and} \\ C &= 4 \left( A_{1,\mathbf{x}}^2 A_{2,\mathbf{x}}^2 + A_{1,\mathbf{y}}^2 A_{2,\mathbf{y}}^2 + 2 B \cos\left(\varphi_\mathbf{x} - \varphi_\mathbf{y} \right) \right) \;. \end{aligned} \end{equation*} The patch dissimilarity in the first stage is computed as the negative log-likelihood over the patch \begin{align} \Delta^1_{\mathbf{x}, \mathbf{y}} = -\sum\limits_{\mathbf{o} \in \mathcal{O}} \log \delta^1_{\mathbf{x+o}, \mathbf{y+o}} \;, \label{eq:likelihood_patch_sim} \end{align} where $\mathcal{O}$ denotes the set of all index offsets in the patch; the negative sign ensures that similar patches, i.e., those with a high likelihood, map to small dissimilarities. The dissimilarities are mapped into weights by an exponential kernel as in \cref{eq:exp_kernel}.
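As a sketch, the first-stage pixel similarity of \cref{eq:pix_sim_likelihood} and its accumulation over a patch translate directly into code. The dissimilarity is taken as the negative log-likelihood, so that highly similar pixel pairs (large $\delta^1$) yield a small patch dissimilarity and hence a large weight under the exponential kernel; this is a minimal NumPy illustration, not the operational implementation.

```python
import numpy as np

def pixel_similarity(a1x, a2x, phix, a1y, a2y, phiy):
    """First-stage likelihood delta^1 of two pixels sharing the same
    underlying intensity, phase and coherence.
    a1*, a2*: master/slave amplitudes; phi*: interferometric phases."""
    A = (a1x**2 + a2x**2 + a1y**2 + a2y**2) ** 2
    B = a1x * a2x * a1y * a2y
    C = 4 * (a1x**2 * a2x**2 + a1y**2 * a2y**2
             + 2 * B * np.cos(phix - phiy))
    return np.sqrt(B / C) ** 3 * (
        (A + C) / A * np.sqrt(C / (A - C)) - np.arcsin(np.sqrt(C / A)))

def patch_dissimilarity(px, py):
    """Accumulate pixel likelihoods over a patch; the negative
    log-likelihood serves as the dissimilarity Delta^1, so similar
    patches map to large weights under w = exp(-Delta / h)."""
    deltas = [pixel_similarity(*ux, *uy) for ux, uy in zip(px, py)]
    return -np.sum(np.log(deltas))
```

Comparing a patch against itself yields a smaller dissimilarity than comparing it against a patch with different amplitudes and phases, which is exactly the behavior the weighting kernel relies on.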
As the purpose is only to reduce the noise level and remove outliers without introducing severe filtering artifacts before computing the similarities in the second step, $h$ is set to a comparatively small value. Except for the aggregation step, the first stage is identical to the non-iterative version of NL-InSAR, and its guidelines for picking $h$ can be used. The estimates of the phase, intensity and coherence are obtained via \cref{eq:weighted_mean}, \cref{eq:nl_int} and \cref{eq:nl_coh} together with the aggregation in \cref{eq:weighted_mean_patch} and \cref{eq:aggregation}. \subsubsection{Second stage} The second stage computes the similarities as a function of the coherence $\hat \gamma$, intensity $\hat I$ and interferometric phase $\hat \varphi$ estimates produced by the first stage. The symmetric Kullback-Leibler divergence of two zero-mean complex circular Gaussian distributions, the underlying joint distribution of $\hat \gamma, \hat I$ and $\hat \varphi$, is given by~\cite{deledalle_nl-insar:_2011} \makeatletter% \if@twocolumn% \begin{multline} \delta^2_{\mathbf{x}, \mathbf{y}} = \frac{4}{\pi} \left[ \frac{\hat I_\mathbf{x}}{\hat I_\mathbf{y}} \frac{1-\hat \gamma_\mathbf{x} \hat \gamma_\mathbf{y} \cos(\hat\varphi_\mathbf{x} - \hat\varphi_\mathbf{y})}{1-\hat \gamma_\mathbf{y}^2} \right. \\ + \left.
\frac{\hat I_\mathbf{y}}{\hat I_\mathbf{x}} \frac{1-\hat \gamma_\mathbf{y} \hat \gamma_\mathbf{x} \cos(\hat\varphi_\mathbf{y} - \hat\varphi_\mathbf{x})}{1-\hat \gamma_\mathbf{x}^2} - 2 \right] \label{eq:kldivs} \end{multline} \else \begin{equation} \begin{aligned} \delta^2_{\mathbf{x}, \mathbf{y}} = \frac{4}{\pi} \left[ \frac{\hat I_\mathbf{x}}{\hat I_\mathbf{y}} \frac{1-\hat \gamma_\mathbf{x} \hat \gamma_\mathbf{y} \cos(\hat\varphi_\mathbf{x} - \hat\varphi_\mathbf{y})}{1-\hat \gamma_\mathbf{y}^2} + \frac{\hat I_\mathbf{y}}{\hat I_\mathbf{x}} \frac{1-\hat \gamma_\mathbf{y} \hat \gamma_\mathbf{x} \cos(\hat\varphi_\mathbf{y} - \hat\varphi_\mathbf{x})}{1-\hat \gamma_\mathbf{x}^2} - 2 \right] \; \end{aligned} \label{eq:kldivs} \end{equation} \fi \makeatother and can be used as a similarity criterion. Instead of a fixed patch size, the second stage changes the patch size adaptively based on the local heterogeneity. The exact patch similarity and weight computations are covered in the following two sections since, as can be seen from \cref{fig:flowgraph}, they are based on other operations. Even though the two-step approach alleviates the problems caused by the high noise level in SAR images, we have to stress that a repeated application of any filter can potentially introduce staircase-like artifacts in the filtered output, as we observed with NL-InSAR\@. To elaborate a little further: just like traditional neighborhood filters, nonlocal filters can also be seen as diffusion filters~\cite{barash_framework_nonlinear_diffusion_bilateral_2004}. Diffusion filters have the interesting property that their repeated application steadily decreases the noise level and produces piecewise constant approximations of the original data~\cite{weickert_review_nonlinear_diffusion_filtering_1997}.
While this can actually be a desired result for image segmentation or for generating abstractions~\cite{winnemoeller_video_abstraction_2006} (bilateral filters, for example, are often used to cartoonify photographs), in our case this phenomenon may lead to staircases in the DEMs generated by iterative nonlocal algorithms, as errors of the phase estimate propagate and accumulate with every iteration.

\subsection{Patch Size Selection}

Patches contain information about the local texture and hence play a crucial role in distinguishing between suitable patches for averaging and patches that should be discarded. That raises the question: how to select the best patch size? In~\cite{duval_bias-variance_nl_means_2011}, the authors demonstrated that a global selection was suboptimal and that the patch size should depend on the local neighborhood. The following paragraphs repeat their reasoning and put it into the context of SAR interferogram denoising for DEM generation. For the original nonlocal filter, patch similarity, just like \cref{eq:likelihood_patch_sim}, is essentially the sum of all contained pixel similarities. Naturally, large patches reduce the variance and provide the most robust estimate of patch similarity. This is indeed the best strategy for plains, agricultural fields or other slowly varying terrain. The situation is quite different for more complex terrain, for instance urban sites or mountain ridges. In these areas, a large patch size leads to the rare patch effect: for every patch that contains some local structure, only patches with similar features will have a significant impact on the averaging process, and the likelihood of finding such patches decreases with increasing patch size. NL-SWAG's solution is to adaptively select the patch size as a function of local scene heterogeneity.
This way, a more robust patch similarity can be computed in flat regions or moderately hilly areas, due to the larger patch size, while at the same time the rare patch effect is alleviated in areas with many features and details. Yet we have to stress that small patches come at the cost of less reliable patch similarity estimates. We would further like to draw attention to the fact that the argument for an adaptive patch size selection to avoid the rare patch effect is identical to the one for aggregation. Both measures favor patches that exclude local structures, by either shrinking the patch or including estimates where the patch is moved off-center with respect to the pixel that is to be denoised. This is somewhat contrary to the initial argument that patch-based methods perform so well because they take textures and details into account. Patches indeed provide an effective means for discarding patches of different classes. But to maximize the number of patches that are classified as similar, both techniques resort to the patch modifications just mentioned. To identify heterogeneous pixels and select the patch size accordingly, we apply the local phase heterogeneity measure derived in~\cite{lee_insar_additive_1998}
\begin{align}
\eta_\mathbf{x} = \frac{\Var{\varphi}_\mathbf{x} - \sigma^2_{0, \mathbf{x}}}{\Var{\varphi}_\mathbf{x}} \;,
\label{eq:phase_linear_estimate}
\end{align}
which lies in the interval $\left[0, 1\right)$. $\Var{\varphi}$ is the estimated variance of the phase in the search window and $\sigma^2_0$ the variance one would expect from the coherence~\cite{bamler_insar_1998}. For non-heterogeneous terrain, $\Var{\varphi}$ is comparable in magnitude to $\sigma^2_0$, as only phase noise causes phase changes, and \cref{eq:phase_linear_estimate} is close to $0$. The situation changes when the search window contains structures, such as buildings.
Their distinct phase profiles increase $\Var{\varphi}$, resulting in larger phase heterogeneity values. As the phase is wrapped, the filter first performs local unwrapping as in~\cite{lee_insar_additive_1998} to obtain the locally unwrapped phase $\tilde \varphi$ with respect to the average of the $5 \times 5$ pixels in the center. The phase variance is then estimated inside the search window, weighted by the respective weight map computed in the first stage
\begin{align}
\Var{\varphi}_\mathbf{x} &= \E{\varphi_\mathbf{x}^2} - \E{\varphi_\mathbf{x}}^2 \nonumber \\
&\approx \sum\limits_{\mathbf{y} \in \partial_\mathbf{x}} w_{\mathbf{x}, \mathbf{y}} \tilde \varphi_\mathbf{y}^2 - \left( \sum\limits_{\mathbf{y} \in \partial_\mathbf{x}} w_{\mathbf{x}, \mathbf{y}} \tilde \varphi_\mathbf{y} \right)^2 \;.
\label{eq:phase_variance}
\end{align}
As $\Var{\varphi}$ is estimated from a limited number of samples in a local window, \cref{eq:phase_linear_estimate} might be negative. In this case, the heterogeneity measure is set to zero. To yield a more reliable estimate of $\sigma_0$, the coherence is estimated following the methodology in~\cite{guarnieri_quick_dirty_coherence_estimator_1997} as
\begin{align}
\gamma = \frac{\E{\lvert u_1 \rvert^2 \cdot \lvert u_2 \rvert^2}}{\sqrt{\E{\lvert u_1 \rvert^4} \E{\lvert u_2 \rvert^4}}} \;.
\end{align}
This way, the coherence is estimated from the speckle pattern and is not influenced by the topographic phase, which would yield an underestimation of the coherence if the common coherence estimator were used. Just like in \cref{eq:phase_variance}, the expected value is replaced by the weighted mean over the respective quantities. An example of the heterogeneity measure is depicted in \cref{fig:phase_lhi}. The urban area is clearly detected as being heterogeneous, the grassland is classified as the most homogeneous site, and the forested areas are identified as moderately heterogeneous regions.
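To make the computation concrete, the heterogeneity index of \cref{eq:phase_linear_estimate} together with the weighted variance estimate of \cref{eq:phase_variance} can be sketched in a few lines of NumPy. This is an illustrative sketch only: the function name and the flattened-window interface are ours, and the locally unwrapped phase and the first-stage weights are assumed to be given.

```python
import numpy as np

def phase_heterogeneity(phi_unwrapped, weights, sigma0_sq):
    """Local phase heterogeneity eta = (Var[phi] - sigma0^2) / Var[phi].
    phi_unwrapped: locally unwrapped phases in the search window (flattened),
    weights:       first-stage nonlocal weights, normalized to sum to one,
    sigma0_sq:     phase variance expected from the coherence alone."""
    mean = np.sum(weights * phi_unwrapped)
    # weighted second moment minus squared weighted mean
    var = np.sum(weights * phi_unwrapped**2) - mean**2
    if var <= 0.0:
        return 0.0
    # negative values caused by the limited sample size are set to zero
    return float(max(0.0, (var - sigma0_sq) / var))
```

For pure phase noise the index stays near zero, while a superimposed phase ramp (a structure inside the window) drives it towards one.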
\begin{figure}
\centering
\makeatletter%
\if@twocolumn%
\newcommand{\panelwidth}{0.49\columnwidth}
\else
\newcommand{\panelwidth}{0.32\textwidth}
\fi
\makeatother
\begin{subfigure}[t]{\panelwidth}
\includegraphics[width=\textwidth]{figures/heterogeneity/marseille_carnoux_optical}
\caption{Optical image \textcopyright~Google}%
\label{subfig:phase_lhi_optical}
\end{subfigure}
\begin{subfigure}[t]{\panelwidth}
\includegraphics[width=\textwidth]{figures/heterogeneity/marseille_carnoux_ampl1}
\caption{Master amplitude in \si{\decibel}}%
\label{subfig:phase_lhi_ints}
\end{subfigure}
\begin{subfigure}[t]{\panelwidth}
\includegraphics[width=\textwidth]{figures/heterogeneity/marseille_carnoux_phi}
\caption{Phase}%
\label{subfig:phase_lhi_phase}
\end{subfigure}
\begin{subfigure}[t]{\panelwidth}
\includegraphics[width=\textwidth]{figures/heterogeneity/marseille_carnoux_lhi}
\caption{Phase heterogeneity}%
\label{subfig:phase_lhi}
\end{subfigure}
\caption{Phase heterogeneity computed in the first stage. Urban areas, forests and grassland show different levels of heterogeneity.}%
\label{fig:phase_lhi}
\end{figure}
Instead of selecting a fixed patch size from a predefined set, NL-SWAG employs Gaussian windows whose width depends on the local heterogeneity. A possible mapping of the phase heterogeneity index to Gaussian window widths is
\begin{align}
\sigma_{\textnormal{Gauss}} = 2\cdot(1-\eta) + 1 \;,
\label{eq:lhi2gauss}
\end{align}
which gives strict lower and upper bounds for the window widths and is used in the remainder of the paper. Other mappings would also be possible, as long as they result in wide Gaussian windows for homogeneous areas and narrow windows for heterogeneous areas. As an alternative approach for selecting the best effective patch size, the phase variance in \cref{eq:phase_linear_estimate} could be computed in Gaussian windows of successively increasing widths.
This process is halted as soon as the heterogeneity level exceeds a predefined threshold, i.e., when significant phase changes, which most likely are the result of heterogeneous structures inside the patch, are detected. A similar approach was presented in~\cite{kervrann_adaptive_search_window_2008} for adaptively selecting the search window size. For Gaussian blurring, the reduction in variance is related to $\sigma_\textnormal{Gauss}$ by approximately $4 \pi \sigma_\textnormal{Gauss}^2$. Hence, with \cref{eq:lhi2gauss}, the variance of the patch similarity estimation is reduced by a factor ranging from $4 \pi$ to $36 \pi$, roughly equivalent to $3 \times 3$ up to $11 \times 11$ patches. Analogously to \cref{eq:likelihood_patch_sim}, the adaptive patch similarities are computed as the sum over the pixel similarities weighted by a Gaussian window $g_\mathbf{x}$
\begin{align}
\Delta^2_{\mathbf{x}, \mathbf{y}} = \frac{\sum_{\mathbf{o} \in \mathcal{O}} g_{\mathbf{x}, \mathbf{o}} \delta^2_{\mathbf{x+o}, \mathbf{y+o}}}{\sum_{\mathbf{o} \in \mathcal{O}} g_{\mathbf{x}, \mathbf{o}}} \;.
\end{align}
The patch dissimilarities still need to be mapped into weights, which in the second stage is also done by an exponential kernel. We now face the problem of how to select the normalization factor $h$ to compromise between bias and variance reduction. The standard deviation of $\Delta^2$ is roughly inversely proportional to $\sigma_\textnormal{Gauss}$, which effectively governs the patch size. Consequently, a fixed $h$ for all heterogeneity levels will be insufficient, and a method is needed that accounts for varying patch sizes. For this purpose, we selected a homogeneous training area and analyzed how the patch similarity's standard deviation $\sigma_{\Delta^2}$ changed with varying $\tfrac{1}{\sigma_\textnormal{Gauss}}$. \Cref{fig:polyfit} shows the relationship for a fixed set of Gaussian window widths at a homogeneous test site without any topography.
Clearly, the relationship is non-linear, due to the correlation between pixel similarities, but a second-order polynomial, also depicted, is a good fit.
\begin{figure}
\centering
\includegraphics{figures/gauss_polyfit/gauss_polyfit}
\caption{Relationship between the width of the Gaussian window $\sigma_\textnormal{Gauss}$ and the standard deviation of the resulting patch similarities $\sigma_{\Delta^2}$. Due to the correlation of the pixel similarities there is no linear mapping.}%
\label{fig:polyfit}
\end{figure}
The weights are computed as
\begin{align}
w_\mathbf{x, y} = \exp \left\{ -\frac{\Delta^2_{\mathbf{x}, \mathbf{y}}}{h \cdot \xi\left(\sigma^{-1}_{\textnormal{Gauss}, \mathbf{x}}\right)} \right\} \;,
\end{align}
where $\xi$ is the second-order polynomial that accounts for the varying effective patch sizes and $h$ provides a fixed compromise between detail preservation and noise reduction. In our experiments, we found that values in the interval $1 \leq h \leq 2$ provided the best trade-off. Due to the Gaussian window, not every pixel in the patch estimate contributed equally to the similarity computation. To account for this, in contrast to \cref{eq:aggregation}, the respective pixels are additionally weighted by their Gaussian weight in the final aggregation step
\begin{align}
\hat z_\mathbf{x} &= \frac{\sum_{\mathbf{y} \in \mathcal{P}_\mathbf{x}} L_\mathbf{y} g_{\mathbf{y}, \mathbf{x-y}} \mathbf{\hat z}_\mathbf{y, x-y}}{\sum_{\mathbf{y} \in \mathcal{P}_\mathbf{x}} L_\mathbf{y} g_{\mathbf{y}, \mathbf{x-y}}} \;.
\label{eq:aggregation_fringe}
\end{align}

\subsection{Fringe frequency estimation and compensation}

Another obstacle hindering the use of nonlocal InSAR filters for DEM generation is the actual topography, which, together with the atmosphere, the deformation and noise, contributes to the measured interferometric phase.
For the bistatic case, the acquisition mode of TanDEM-X interferograms for the generation of the global DEM, the deformation and the atmosphere components can be ignored, so that only the topography and noise components affect the similarity measure. Due to the topographic phase component, it is considerably harder to detect statistically homogeneous pixels in regions with non-negligible height differences, that is, pixels with identical noise distributions but different heights. \Cref{fig:kldivs} shows the symmetric Kullback-Leibler divergence from \cref{eq:kldivs} with $\hat I_\mathbf{x} = \hat I_\mathbf{y}$ and $\hat \gamma_\mathbf{x} = \hat \gamma_\mathbf{y}$ as a function of the coherence and the phase difference $\Delta_{\hat \varphi} = \hat \varphi_\mathbf{x} - \hat \varphi_\mathbf{y}$, which is used in the second stage as the similarity criterion. Evidently, the similarity quickly drops off with increasing $\Delta_{\hat \varphi}$, and the higher the coherence, the more dramatic the decline. Consequently, the denoising performance suffers in hilly or mountainous terrain. This effect is quite pronounced for bistatic TanDEM-X data due to their generally high coherence. This analysis is not exclusive to the Kullback-Leibler divergence. Similar arguments can be made for other similarity criteria, e.g., the one employed in~\cite{deledalle_nl-sar:_2015} and \cref{eq:pix_sim_likelihood}.
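For illustration, the pixel similarity of \cref{eq:kldivs} and its aggregation over a Gaussian-windowed patch can be sketched as follows. The function names are ours, and the mapping of the patch dissimilarity to weights via the polynomial $\xi$ is omitted here.

```python
import numpy as np

def kl_pixel_similarity(i_x, i_y, g_x, g_y, phi_x, phi_y):
    """Symmetric Kullback-Leibler divergence of two pixels, given the
    first-stage intensity (i), coherence (g) and phase (phi) estimates."""
    c = np.cos(phi_x - phi_y)
    return (4.0 / np.pi) * (
        (i_x / i_y) * (1.0 - g_x * g_y * c) / (1.0 - g_y**2)
        + (i_y / i_x) * (1.0 - g_x * g_y * c) / (1.0 - g_x**2)
        - 2.0)

def gaussian_patch_similarity(delta2, sigma_gauss):
    """Average a square patch of pixel dissimilarities delta2, centered on
    the pixel of interest, with a Gaussian window of width sigma_gauss,
    the adaptive 'patch size' of the second stage."""
    p = delta2.shape[0]
    c = (p - 1) / 2.0
    yy, xx = np.mgrid[0:p, 0:p]
    g = np.exp(-((yy - c)**2 + (xx - c)**2) / (2.0 * sigma_gauss**2))
    return float(np.sum(g * delta2) / np.sum(g))
```

Two identical pixels yield a dissimilarity of zero, and the divergence grows with the phase difference, which is the behavior plotted in \cref{fig:kldivs}.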
\begin{figure}
\centering
\makeatletter%
\if@twocolumn%
\newcommand{\panelwidth}{0.8\columnwidth}
\else
\newcommand{\panelwidth}{0.5\textwidth}
\fi
\makeatother
\includegraphics[width=\panelwidth]{figures/sim_measure/kldivs}
\caption{Symmetric Kullback-Leibler divergence from \Cref{eq:kldivs} for two pixels with identical reflectivity and coherence, dependent on their phase difference.}%
\label{fig:kldivs}
\end{figure}
To combat the reduced denoising performance for terrain with significant height changes, we incorporated a linear fringe model as in~\cite{suo_local_fringe_2010} that accounts for the deterministic, topographic phase component when computing the similarities and the weighted mean. Our approach is distantly related to~\cite{fedorov_affine_nonlocal_2017}, which employs affine transforms to find more similar patches. For every pixel, the fringe compensation algorithm obtains an estimate of the fringe frequencies in azimuth and range $\mathbf{f} = {\left[ f_\textnormal{r}, f_\textnormal{az} \right]}^T$ using the 2D Fourier transform. To circumvent abrupt changes of the fringe frequency estimates, we smooth $\mathbf{f}$ with a Gaussian kernel. Without loss of generality, we can consider \cref{eq:kldivs} as a function of only the phase difference between two pixels
\begin{align}
\delta^2_{\mathbf{x, y}}(\hat \varphi_\mathbf{x} - \hat \varphi_\mathbf{y}) \;.
\end{align}
The fringe compensation takes the fringe frequencies at $\mathbf{x}$ into account by changing the pixel similarity function to
\begin{align}
\delta^2_\mathbf{x, y}(\hat \varphi_\mathbf{x} - (\hat \varphi_\mathbf{y} - {(\mathbf{x} - \mathbf{y})}^T \mathbf{f_x}) \bmod 2 \pi),
\end{align}
that is, we remove the phase component caused by the fringe frequency in azimuth and range.
The computation of the patch-wise weighted mean of the interferogram has to account for the phase model
\begin{align}
\mathbf{\hat z}_\mathbf{x} = \sum\limits_{\mathbf{y} \in \partial_\mathbf{x}} w_{\mathbf{x}, \mathbf{y}} \mathbf{z}_\mathbf{y} \cdot e^{- \mathrm{j} {(\mathbf{x} - \mathbf{y})}^T \mathbf{F_x}} \;.
\label{eq:weighted_mean_patch_fringe_comp}
\end{align}
Here $\cdot$ denotes element-wise multiplication and $\mathbf{F} \in \mathds{R}^{2 \times p \times p}$ is a three-dimensional tensor that contains all fringe frequencies of the pixels inside the $p \times p$ patch centered at $\mathbf{x}$. \Cref{fig:nonlinear_phase} shows the effect that fringe frequency compensation has on the noise reduction. Denoising of a nonlinear phase ramp with constantly increasing frequency was performed using NL-SWAG with and without fringe frequency compensation. If the fringe frequency is not accounted for, the phase estimate's standard deviation increases steadily with increasing frequency. With fringe frequency compensation, the standard deviation remains bounded. Due to the discrete nature of the frequency estimation by fast Fourier transform in our implementation, the frequency was not perfectly estimated and the performance was not entirely frequency independent, which resulted in the wave-like pattern of the standard deviation. A more sophisticated frequency estimation algorithm would certainly alleviate this problem.
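The estimation and compensation steps described above can be sketched as follows. This is a simplified illustration in which the fringe frequency is taken as the location of the FFT magnitude peak of a zero-padded block; the function names and the simple argmax peak pick are ours, and the Gaussian smoothing of $\mathbf{f}$ is omitted.

```python
import numpy as np

def estimate_fringe_frequency(z_block, pad=2):
    """Estimate the dominant fringe frequency (rad/pixel, one value per
    array axis) of a complex interferogram block from the peak of its
    zero-padded 2D FFT magnitude."""
    n0, n1 = z_block.shape
    spec = np.fft.fft2(z_block, s=(pad * n0, pad * n1))  # zero padding
    k0, k1 = np.unravel_index(np.argmax(np.abs(spec)), spec.shape)
    return np.array([2.0 * np.pi * np.fft.fftfreq(pad * n0)[k0],
                     2.0 * np.pi * np.fft.fftfreq(pad * n1)[k1]])

def compensate_fringe(z_block, f):
    """Remove the linear phase ramp f (rad/pixel per axis) from the block,
    i.e., the deterministic, topographic phase component."""
    yy, xx = np.mgrid[0:z_block.shape[0], 0:z_block.shape[1]]
    return z_block * np.exp(-1j * (f[0] * yy + f[1] * xx))
```

With a $32 \times 32$ block and twofold zero padding this corresponds to a $64 \times 64$ transform; the finite frequency resolution of the argmax pick is what causes the residual slope dependence noted above.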
\begin{figure}
\centering
\makeatletter%
\if@twocolumn%
\newcommand{\panelwidth}{0.95\columnwidth}
\else
\newcommand{\panelwidth}{0.75\columnwidth}
\fi
\makeatother
\begin{subfigure}[t]{\panelwidth}
\centering
\includegraphics[width=\textwidth]{figures/nonlinear_phase/NL-TAP_no_FC}
\caption{without fringe compensation}
\end{subfigure}
\begin{subfigure}[t]{\panelwidth}
\centering
\includegraphics[width=\textwidth]{figures/nonlinear_phase/NL-TAP}
\caption{with fringe compensation}
\end{subfigure}
\caption{Standard deviation (shaded blue area) of NL-SWAG's estimate of a nonlinear phase profile (in black) with and without compensating for the fringe frequency. The maximum value of the standard deviation is marked with a horizontal blue line. If the filter does not account for the deterministic phase change inside the search window, the denoising performance decreases substantially with increasing frequency.}%
\label{fig:nonlinear_phase}
\end{figure}
As a final note, we would like to point out the difference between the fringe frequency compensation and the local phase heterogeneity-based adaptive patch size selection. Both approaches address deterministic phase changes which can hamper the search for similar patches. But whereas the fringe frequency compensation strictly deals with large-scale phase changes due to topography by a linear compensation, the role of the phase heterogeneity is more to take care of arbitrary small-scale phase changes, which would not necessarily be captured by a simple linear approximation.

\section{Experimental Results}%
\label{sec:experiments}
We compared NL-SWAG with existing nonlocal filters using simulations and real-world data sets. For the evaluation, we used TanDEM-X bistatic strip map interferograms of three test sites: Marseille, Munich, and Barcelona. The most pertinent parameters are listed in \cref{tab:tdx_params}.
The experiments also substantiated our claim of creating a DEM close in quality to the HDEM specifications in~\cref{tab:demspec}.
\begin{table}
\caption{TanDEM-X strip map parameters of the test sites}
\begin{tabularx}{\columnwidth}{lXl}
\toprule
Parameter & Test site & Value \\
\midrule
Range bandwidth & --- & \SI{100}{\mega\hertz} \\
Ground range resolution & --- & \SI{3}{\meter}\\
Azimuth resolution & --- & \SI{3}{\meter}\\
Polarization & --- & HH \\
Height of ambiguity & Marseille & \SI{30}{\meter} \\
 & Munich & \SI{48}{\meter} \\
 & Barcelona & \SI{48.5}{\meter} \\
\bottomrule
\end{tabularx}%
\label{tab:tdx_params}
\end{table}
In addition, the comparison included the result of a simple $5 \times 5$ Boxcar filter. Boxcar filters, the dimensions of which depend on range resolution, incidence angle and imaging mode, are employed in DLR's integrated processor (ITP)~\cite{breit_itp_2010, fritz_itp_2011, rossi_tandemx_rawdem_2012} for generating the global TanDEM-X DEM\@. For strip map data, the dimensions of all employed Boxcar filters are close to $5 \times 5$, and their individual results will not be reported here. We also analyzed NL-InSAR~\cite{deledalle_nl-insar:_2011}, the first nonlocal InSAR filter, for which we set the search window size to $21 \times 21$ and the patch size to $7 \times 7$, and used five iterations. We deviated from the ten iterations suggested in the original publication as, in our experience, the changes in estimation accuracy are negligible after about four to five iterations. Furthermore, the refinement provided by the iterations only resulted in improved detail preservation, which, as we will show, NL-InSAR already excels at, even with only five iterations. Also, more iterations aggravate the aforementioned terrace-like artifacts. The second nonlocal filter in the comparison was NL-SAR~\cite{deledalle_nl-sar:_2015}.
NL-SAR adaptively selects the best parameters from a predefined set, which includes the patch size, search window size and the strength of the initial prefiltering step. In our analysis, we used the same predefined set as in the original paper. In all subsequent experiments concerning NL-SWAG, the search window size was set to $21 \times 21$, and $h$ to $4$ in the first stage and to $2$ in the second stage. The block size of the fringe estimation was $32 \times 32$ and the size of the discrete Fourier transform was set to $64 \times 64$. This zero padding increases the accuracy of the fringe estimation.

\subsection{Synthetic Data}

Assuming fully developed speckle, the correlated, complex normally distributed pixels of two SLCs have the covariance matrix~\cite{goodman_multivar_complex_gaussian_1963}
\begin{align}
\mathbf{C} =
\begin{bmatrix}
A^2 & A^2 \gamma e^{\mathrm{j} \varphi} \\
A^2 \gamma e^{- \mathrm{j} \varphi} & A^2
\end{bmatrix}
\end{align}
where $A$ denotes the amplitude, $\varphi$ the interferometric phase and $\gamma$ the coherence. Let $\mathbf{C} = \mathbf{L} \mathbf{L}^\dagger$ be the Cholesky decomposition of the covariance matrix $\mathbf{C}$, where $\dagger$ denotes the conjugate transpose. A multiplication with $\mathbf{L}$ transforms two independent, complex normally distributed samples $r_1$ and $r_2$ of zero mean and unit variance
\begin{equation}
\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}
= \mathbf{L} \begin{bmatrix} r_1 \\ r_2 \end{bmatrix}
= A \begin{bmatrix} 1 & 0 \\ \gamma e^{- \mathrm{j} \varphi} & \sqrt{1 - \gamma^2} \end{bmatrix}
\begin{bmatrix} r_1 \\ r_2 \end{bmatrix}
\end{equation}
to samples with the desired correlation properties, amplitude and phase defined by the covariance matrix. An analysis of the slope-dependent noise suppression was carried out by denoising phase ramps of different inclinations. In the simulations, the intensity was constant for the whole slope and the coherence was set to 0.7.
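The sampling procedure above translates directly into code. The following NumPy sketch (the function name is ours) draws a correlated SLC pair by multiplying independent unit-variance circular Gaussian samples with the Cholesky factor $\mathbf{L}$:

```python
import numpy as np

def simulate_slc_pair(amplitude, gamma, phi, shape, seed=None):
    """Draw a correlated SLC pair (u1, u2) with the covariance matrix C
    given in the text, via its Cholesky factor L."""
    rng = np.random.default_rng(seed)
    # independent, zero-mean, unit-variance circular Gaussian samples
    r1 = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2.0)
    r2 = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2.0)
    # multiply [r1, r2] with L = A [[1, 0], [gamma e^{-j phi}, sqrt(1-gamma^2)]]
    u1 = amplitude * r1
    u2 = amplitude * (gamma * np.exp(-1j * phi) * r1
                      + np.sqrt(1.0 - gamma**2) * r2)
    return u1, u2
```

The empirical phase of $u_1 u_2^*$ then recovers $\varphi$ and the sample coherence recovers $\gamma$, so phase ramps and step functions can be simulated by varying $\varphi$ across the scene.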
\Cref{fig:slope_dependency} shows the standard deviation of the various filters' phase estimates for different inclinations, given as the phase change per pixel in radians.
\begin{figure}
\centering
\makeatletter%
\if@twocolumn%
\newcommand{\panelwidth}{\columnwidth}
\else
\newcommand{\panelwidth}{0.75\textwidth}
\fi
\makeatother
\includegraphics[width=\panelwidth]{figures/slope_dependency/slope_dependency}
\caption{Standard deviation of the phase estimate as a function of a constant ramp's inclination. The steeper the incline, the higher the standard deviation. The fringe frequency estimation of NL-SWAG alleviates this problem.}%
\label{fig:slope_dependency}
\end{figure}
The nonlocal filters are more sensitive to changes in inclination than the Boxcar filter as a result of their large search windows, NL-InSAR and NL-SAR in particular, since they do not compensate for the deterministic phase component. As mentioned earlier, the fringe estimation of NL-SWAG was not perfect due to the discrete nature of the fast Fourier transform used in the implementation and hence was still slope-dependent. Overall, we can see that NL-SWAG provided an improvement of roughly a factor of three compared to the Boxcar estimate over all frequencies. \Cref{fig:phase_step} and \cref{fig:phase_intensity_coherence_step} give an impression of the resolution preservation capabilities of the various filters. Both figures are the result of Monte-Carlo simulations with 10,000 repetitions of estimating a phase jump from $-\tfrac{\pi}{3}$ to $\tfrac{\pi}{3}$. The expected values are plotted as blue dots and their standard deviations as shaded blue areas. In \cref{fig:phase_step}, intensity and coherence are constant, with the coherence set to 0.7, whereas in \cref{fig:phase_intensity_coherence_step} the coherence increases from 0.6 to 0.8 and the intensity difference is $\SI{6}{\decibel}$. \Cref{fig:phase_step} shows that the Boxcar filter's result exhibited the expected smoothing.
Both NL-InSAR and NL-SWAG were unable to perfectly preserve the edge but fared much better than NL-SAR\@. The reason for NL-SAR's poor performance is that it initially produces an intentionally oversmoothed result and then applies a bias-reduction step based on terrain heterogeneity. This heterogeneity test, however, only considers the intensity and therefore breaks down in this particular case, where only the phase changes.
\begin{figure}
\centering
\makeatletter%
\if@twocolumn%
\newcommand{\panelwidth}{0.49\columnwidth}
\else
\newcommand{\panelwidth}{0.35\textwidth}
\fi
\makeatother
\begin{subfigure}[t]{\panelwidth}
\centering
\includegraphics[width=\textwidth]{figures/step_function/unit_step_phase_flat_ref_coh_Boxcar_5x5}
\caption{Boxcar $5\times5$}
\end{subfigure}
\begin{subfigure}[t]{\panelwidth}
\centering
\includegraphics[width=\textwidth]{figures/step_function/unit_step_phase_flat_ref_coh_NL-InSAR}
\caption{NL-InSAR}
\end{subfigure}
\begin{subfigure}[t]{\panelwidth}
\centering
\includegraphics[width=\textwidth]{figures/step_function/unit_step_phase_flat_ref_coh_NL-SAR}
\caption{NL-SAR}
\end{subfigure}
\begin{subfigure}[t]{\panelwidth}
\centering
\includegraphics[width=\textwidth]{figures/step_function/unit_step_phase_flat_ref_coh_NL-TAP}
\caption{NL-SWAG}
\end{subfigure}
\caption{Expected value of a step function's phase estimate with constant amplitude and a coherence of $0.7$. The shaded blue area delineates $\pm$ three times the estimate's standard deviation. We performed 10,000 simulations to obtain the statistics.}%
\label{fig:phase_step}
\end{figure}
The situation changed when the phase jump was accompanied by an intensity jump as in \cref{fig:phase_intensity_coherence_step}. The intensity change aids nonlocal filters in discriminating between similar and dissimilar pixels, resulting in sharper transitions.
The benefit of setting the patch size adaptively is highlighted by NL-SAR and NL-SWAG, which do not exhibit a halo of high variance at the discontinuity. We could deduce that the rare patch effect was indeed the cause of this performance degradation, as the width of the halo for NL-InSAR was equal to the employed patch size minus one. All patches in this area included the edge and consequently suffered from the rare patch effect. NL-SWAG additionally benefited from the aggregation step, which further reduced the variance along the edge. Even with these measures in place, we could still see that the variance was increased near the edge.
\begin{figure}
\centering
\makeatletter%
\if@twocolumn%
\newcommand{\panelwidth}{0.49\columnwidth}
\else
\newcommand{\panelwidth}{0.35\textwidth}
\fi
\makeatother
\begin{subfigure}[t]{\panelwidth}
\centering
\includegraphics[width=\textwidth]{figures/step_function/unit_step_phase_ref_coh_Boxcar_5x5}
\caption{Boxcar $5\times5$}
\end{subfigure}
\begin{subfigure}[t]{\panelwidth}
\centering
\includegraphics[width=\textwidth]{figures/step_function/unit_step_phase_ref_coh_NL-InSAR}
\caption{NL-InSAR}
\end{subfigure}
\begin{subfigure}[t]{\panelwidth}
\centering
\includegraphics[width=\textwidth]{figures/step_function/unit_step_phase_ref_coh_NL-SAR}
\caption{NL-SAR}
\end{subfigure}
\begin{subfigure}[t]{\panelwidth}
\centering
\includegraphics[width=\textwidth]{figures/step_function/unit_step_phase_ref_coh_NL-TAP}
\caption{NL-SWAG}
\end{subfigure}
\caption{Estimated phase of a step function with a step in coherence from 0.6 to 0.8 and an intensity jump of \SI{6}{\decibel}.
The additional change in intensity compared to \cref{fig:phase_step} helped the nonlocal filters to preserve the edge.}%
\label{fig:phase_intensity_coherence_step}
\end{figure}
To illustrate the propensity of the filters to produce the terrace-like features introduced earlier and other biasing artifacts, we simulated a noisy interferogram from a synthetic terrain created by the diamond-square algorithm~\cite{fournier_diamond_square_1982}. \Cref{fig:fractal} shows in the top row the simulated noisy interferogram with a constant coherence value of $0.7$ and the filters' denoised results. The second row shows the true simulated phase and its difference compared to the filter output. We also include in our analysis a TanDEM-X interferogram whose phase resembles the simulation, to exemplify how these filtering characteristics affect real data; it is shown in the last row together with shaded reliefs of DEMs generated by the various filters. For NL-InSAR, a distinct pattern was visible in the difference plot, which would manifest as terrace-like artifacts in a generated DEM\@. Indeed, the DEM produced by NL-InSAR from real data also exhibited similar patterns. Visually, we could assess that the overall noise level of all nonlocal filters was lower than that of the Boxcar filter, especially in regions where the fringe frequency was low. The difference plots also show that nonlocal filters suppressed the high-frequency component of the noise but created slowly varying undulations of spatially correlated noise.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/fractal/phase_estimates_comparison}
\caption{Phase estimates of several filters for a synthetically generated interferogram and their differences compared to the true phase are shown together with the noisy interferogram (the coherence was set to $0.7$) and the true phase in the first two rows.
The last row shows a comparable TanDEM-X strip map interferogram and the shaded relief of DEMs generated by the corresponding filter. The phase estimate of NL-InSAR shows a distinct staircase-like pattern, which is also clearly visible in the shaded relief plot. All nonlocal filters suppress the high-frequency component of the noise but produce low-frequency undulations in the estimate.}%
\label{fig:fractal}
\end{figure*}
To further shed light on some of the mechanisms of nonlocal filters, \cref{fig:fractal_monte_carlo} shows the expected value and standard deviation of a Monte-Carlo simulation's phase estimate for the experiment with synthetic data in \cref{fig:fractal}. All nonlocal filters biased the estimate along the ridge at the interferogram's diagonal. In general, nonlocal filters have a higher propensity to bias the estimate due to their comparatively large search windows. The standard deviation plots show the fringe-frequency dependent noise suppression of NL-InSAR and NL-SAR\@. NL-SWAG was much less affected by this aspect, although it was also not completely immune, as noted earlier. \Cref{tab:fractal_std} lists the mean standard deviations and the average equivalent number of looks, rounded to the nearest integer, over the whole image and all simulation runs. In accordance with our previous experiments, the standard deviation was considerably lower for the nonlocal filters. Contrasting \cref{tab:fractal_std} with \cref{tab:demspec} reveals that NL-SWAG fulfills the noise reduction by a factor of 2.5 required for the production of a DEM according to the HDEM specifications.
\begin{table}
\caption{Standard deviation in radians and average equivalent number of looks, rounded to the nearest integer, for the Monte-Carlo simulation in \cref{fig:fractal_monte_carlo}}
\begin{tabularx}{\columnwidth}{lllll}
\toprule
 & Boxcar $5 \times 5$ & NL-InSAR & NL-SAR & NL-SWAG \\
\midrule
$\sigma_{\hat \varphi}$ in \si{\radian} & 0.1482 & 0.0969 & 0.0768 & 0.0537 \\
Number of looks & 25 & 58 & 93 & 190 \\
\bottomrule
\end{tabularx}%
\label{tab:fractal_std}
\end{table}
\begin{figure}
\centering
\makeatletter%
\if@twocolumn%
\newcommand{\panelwidth}{\columnwidth}
\else
\newcommand{\panelwidth}{0.85\textwidth}
\fi
\makeatother
\includegraphics[width=\panelwidth]{figures/fractal/montecarlo_mu_and_std}
\caption{Expected values (top) and standard deviation (bottom) for a Monte-Carlo simulation of the simulated phase in \Cref{fig:fractal}. Minor biases are present in the phase estimates. The slope-dependent denoising performance of nonlocal filters is evident in the standard deviation plots.}%
\label{fig:fractal_monte_carlo}
\end{figure}

\subsection{Real Data}

Experiments on TanDEM-X bistatic strip map interferograms were carried out for three test sites, chosen to showcase the previously described qualities and phenomena of nonlocal filters, and of NL-SWAG in particular, for DEM generation. The interferograms from the test sites were processed with DLR's ITP, and the aforementioned nonlocal filters were used in lieu of the default Boxcar filter. The first test area was an industrial site near the French city of Marseille, which provided a visual impression of the performance increase that could be expected with nonlocal filters. \Cref{fig:dems_marseille} shows shaded reliefs of the generated DEMs, an optical image for better interpretation and a plot of the unfiltered phase. The resolution of the DEMs produced with the nonlocal filters was \SI{6}{\meter} for longitude and latitude.
The DEM generated using the $5 \times 5$ Boxcar filter had a resolution of \SI{12}{\meter}, the default configuration for DLR's RawDEM\@. In the global TanDEM-X DEM processing chain, several RawDEMs are later combined to generate the final DEM product. The higher level of detail visible in the nonlocal DEMs is evident, as is the improved noise reduction for agricultural fields and the hill to the south. NL-InSAR produced clearly discernible terraces for the hill, a result of the staircasing effect. The road in the lower half of the image serves as an example of the kind of detail that can be preserved by the proposed filter. Also noticeable are noisy artifacts near buildings for NL-InSAR at the industrial site, a consequence of the rare patch effect, which is avoided by NL-SAR and NL-SWAG\@. NL-SAR, however, tends to oversmooth some details, so that, for example, the road in the lower part of the test site is hardly distinguishable from its surroundings. \begin{figure}[ht!] \centering \begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/marseille/marseille_aubagne_optical} \caption{Optical, \textcopyright~Google} \end{subfigure} \begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/marseille/marseille_aubagne_phi} \caption{Unfiltered phase} \end{subfigure} \begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/marseille/marseille_aubagne_itp_12m} \caption{Boxcar $5\times5$} \end{subfigure} \begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/marseille/marseille_aubagne_nlinsar} \caption{NL-InSAR} \end{subfigure} \begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/marseille/marseille_aubagne_nlsar_deledalle} \caption{NL-SAR} \end{subfigure} 
\begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/marseille/marseille_aubagne_nltap} \caption{NL-SWAG} \end{subfigure} \caption{Shaded reliefs of DEMs generated with the various filters. The nonlocal filters improved the resolution and noise level compared to the Boxcar estimate. NL-InSAR suffered from the rare patch effect near structures due to its fixed patch size.}% \label{fig:dems_marseille} \end{figure} \cref{fig:dems_marseille_app} sheds some more light on NL-SWAG's filtering characteristics. It shows the employed width of the Gaussian window used for computing the patch similarities and the final equivalent number of looks after the aggregation step. Both show that homogeneous areas benefit from wide Gaussian windows, which yield accurate patch-similarity estimates, and from a large number of similar pixels within the search window, which leads to low-noise estimates. The reverse is true for the industrial site, where narrow Gaussian windows were employed due to the region's heterogeneity. This heterogeneity also resulted in a comparatively low number of looks. The impact that the fringe frequency estimation and compensation had on the estimate could be inferred from the equivalent number of looks for the hilly terrain to the south, which was virtually unaffected by the trend of the phase. 
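The equivalent number of looks shown for the aggregation step can be computed with the standard estimator for a weighted average, $\mathrm{ENL} = (\sum_i w_i)^2 / \sum_i w_i^2$. The sketch below illustrates this definition; it is a generic formula, not the exact estimator of the ITP/NL-SWAG implementation:

```python
import numpy as np

def equivalent_number_of_looks(weights):
    """ENL of a weighted mean: (sum w)^2 / sum w^2.
    Equals N for N equal weights and approaches 1 when a single
    weight dominates, matching the intuition that heterogeneous
    regions with few similar pixels yield fewer effective looks."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / np.square(w).sum()

print(equivalent_number_of_looks(np.ones(49)))         # uniform 7x7 window -> 49.0
print(equivalent_number_of_looks([1.0] + [0.0] * 48))  # single contributing pixel -> 1.0
```

With this definition, the spatial ENL maps in \cref{fig:dems_marseille_app} can be read directly as the effective amount of averaging achieved per pixel.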
\begin{figure}[htb] \centering \begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/marseille/marseille_aubagne_gauss_win_cb} \caption{Sigma of Gaussian windows} \end{subfigure} \begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/marseille/marseille_aubagne_enl_cb} \caption{Equivalent number of looks} \end{subfigure} \caption{Width of the Gaussian windows used for computing the patch similarities and the equivalent number of looks for the test site from \cref{fig:dems_marseille}.}% \label{fig:dems_marseille_app} \end{figure} As a clearer example of detail preservation, \cref{fig:dems_munich} shows DEMs for an agricultural area near Munich, Germany. The resolution was the same as in the previous example: \SI{6}{\meter} for the nonlocal DEMs and \SI{12}{\meter} for the Boxcar filter. The data were acquired on August 19, 2011, when some of the fields had already been harvested, so the outlines of different fields are clearly discernible, as electromagnetic waves in X-Band only marginally penetrate vegetation~\cite{rossi_paddy_rice_2015}. The shaded reliefs confirmed our simulation results in \cref{fig:phase_step} and \cref{fig:phase_intensity_coherence_step} that NL-InSAR provided the best result for this particular scenario, as it favors piecewise constant solutions and sharp edges. But this propensity was also the source of the highly unwelcome staircasing for regions with a more interesting topographic profile. We can also see the effect that a change of $h$ has on the filtering result. A lower value of $h$ produced a sharper transition at the edges of the field but reduced denoising in flat terrain. 
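The role of $h$ described above can be illustrated with a generic nonlocal weighting scheme. The exponential kernel below is a common choice in nonlocal-means-style filters and serves only as a stand-in for NL-SWAG's actual similarity criterion; the dissimilarity values are made up for illustration:

```python
import numpy as np

# Hypothetical patch dissimilarities d_i, turned into weights w_i = exp(-d_i / h).
# A small h keeps only very similar patches (sharp edges, weaker denoising);
# a large h also admits dissimilar patches (stronger denoising, softer edges).
d = np.linspace(0.0, 5.0, 6)  # illustrative dissimilarities, 0 = identical patch

for h in (0.5, 2.0):
    w = np.exp(-d / h)
    w /= w.sum()  # normalize so the weights form a weighted mean
    print(f"h={h}: weights = {np.round(w, 3)}")
```

For $h = 0.5$ almost all weight concentrates on the most similar patch, while for $h = 2$ the weights spread over more candidates, mirroring the sharper-edges versus flatter-terrain trade-off seen in \cref{fig:dems_munich}.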
\begin{figure}[htbp] \centering \begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/munich/munich_Goldach_optical} \caption{Optical, \textcopyright~Google} \end{subfigure} \begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/munich/munich_Goldach_itp_12m} \caption{Boxcar $5\times5$} \end{subfigure} \begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/munich/munich_Goldach_nlinsar} \caption{NL-InSAR} \end{subfigure} \begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/munich/munich_Goldach_nlsar_deledalle} \caption{NL-SAR} \end{subfigure} \begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/munich/munich_Goldach_nlswag_h05} \caption{NL-SWAG $h=\frac{1}{2}$} \end{subfigure} \begin{subfigure}[t]{0.33\columnwidth} \centering \includegraphics[width=\textwidth]{figures/munich/munich_Goldach_nlswag_h2} \caption{NL-SWAG $h=2$} \end{subfigure} \caption{Shaded reliefs of DEMs of an agricultural site. Clearly visible are height changes between fields. The bottom row shows the effect that changing $h$ in the second stage has on detail preservation and noise reduction.}% \label{fig:dems_munich} \end{figure} As a last example, we compared NL-SWAG to a high-resolution LiDAR DEM, which served as a gold standard for our analysis. The test site was the town of Terrassa, close to Barcelona, Spain. The top row in \cref{fig:lidar_terrassa} shows an optical image from Google Maps and the LiDAR DEM with \SI{5}{\meter} spacing plus DEMs generated from a single TanDEM-X interferogram by a $5 \times 5$ Boxcar filter and our proposed method. The DEMs were resampled to the grid of the LiDAR DEM\@. 
As LiDAR and SAR have fundamentally different imaging geometries and properties, we tried to remove areas with systematic errors, such as urban areas suffering from layover and shadowing, or vegetation, where the LiDAR's last returns differed from the scattered wave's phase center at X-Band. In order to do so, we compared the LiDAR DEM to the global TanDEM-X DEM and excluded points with a height difference larger than \SI{2}{\meter}. The resulting mask is depicted in the bottom row of \cref{fig:lidar_terrassa}, with a version cleaned using morphological operations right next to it. The remaining two panels show the height differences, annotated with the standard deviation of the height difference computed over the masked area. This experiment had several noteworthy results. As expected, the SAR DEMs differed substantially from the LiDAR DEM for buildings, and the height values were unusable. However, the SAR DEMs could still be used to detect buildings, as the test site near Marseille (\cref{fig:dems_marseille}) also showed. On the masked-out, moderately hilly, homogeneous terrain, NL-SWAG improved the noise level roughly by a factor of $\SI{1.3420}{\meter} / \SI{0.7980}{\meter} \approx 1.6817$, almost equivalent to a filter with three times as many looks, which is, however, insufficient for completely fulfilling the requirements in \cref{tab:demspec}. At first glance, this improvement in noise reduction contradicted our findings reported in \cref{tab:fractal_std}. We could exclude systematic height differences due to the different physical properties of LiDAR and SAR, since the penetration depth of electromagnetic waves at X-Band is negligible as an error source~\cite{nolan_penetration_depth_2003}. Coregistration errors of the LiDAR and SAR DEMs might also be a contributing factor for height differences, but for the moderately hilly terrain they would only play a minor role. 
Such error sources would equally increase the difference compared to the LiDAR DEM, leading to a misrepresented noise level reduction. The true reason for this discrepancy is the resampling from approximately $\SI{3}{\meter}$ pixel spacing in range and azimuth to the $\SI{5}{\meter}$ LiDAR pixel spacing, which essentially increased the footprint of the Boxcar filter. For NL-SWAG, this effect was imperceptible due to its comparatively large search window. \begin{figure*}[htbp] \includegraphics[width=\textwidth]{figures/lidar_terrassa/lidar_terrassa_ts1.pdf} \caption{DEMs generated by NL-SWAG and a $5 \times 5$ Boxcar filter from a TanDEM-X interferogram are compared to a LiDAR DEM\@. The bottom row shows the height differences compared to the LiDAR DEM\@. For the masked-out area, standard deviations were computed for the height differences.}% \label{fig:lidar_terrassa} \end{figure*} \section{Discussion}% \label{sec:discussion} The initial goal of our investigation was to ascertain whether nonlocal filters were suitable for generating a DEM close to the HDEM standard (see \cref{tab:demspec}) from the globally available TanDEM-X data. In the following paragraphs, we will detail how the proposed filter held up to these challenges. All conducted experiments confirmed that nonlocal filters were able to deliver a vastly improved noise reduction over the exemplary local Boxcar filter. The reason is that, due to their large search windows, nonlocal filters found a multitude of pixels for the averaging process, even for comparatively heterogeneous terrain. To further quantify this improvement: For the experiments on synthetic data (\cref{fig:slope_dependency} and \cref{tab:fractal_std}) the standard deviation was lower by a factor of three and for the real data set of \cref{fig:lidar_terrassa} on moderately complex terrain it was still reduced by a factor of approximately $1.7$. 
Relating this to the level of noise reduction we aimed for in \cref{tab:demspec}, our filter fell short of reaching the target of $2.5$ roughly by a factor of $\sqrt{2}$. Depending on the type of terrain, this might still be sufficient to obtain a DEM that fulfills the requirements of the HDEM, as already the globally available TanDEM-X DEM often exceeds its accuracy requirements. In any case, having twice as many acquisitions available would also satisfy the specification. Our proposed filter implemented several techniques to reach this level of noise reduction. It reduced the detrimental effect of topography by its fringe frequency compensation accounting for the deterministic topographic phase component, as evidenced by \cref{fig:slope_dependency} and \cref{fig:fractal_monte_carlo}. Furthermore, even on flat, homogeneous terrain, the high inherent noise level of InSAR hampered denoising, which was countered by the two-step approach. \cref{fig:fractal_monte_carlo} shows that nonlocal filters bias the estimate for nonlinear phase profiles. The bias is limited to approximately $\pm \tfrac{\pi}{100}$. With a height of ambiguity of \SI{40}{\meter}, which is a typical value for TanDEM-X interferograms, this translates to deterministic height errors of $\pm$ \SI{20}{\centi\meter}, well within the HDEM specifications. We also highlighted that for nonlocal filters it is far easier to denoise homogeneous terrain than heterogeneous targets, as more similar pixels are found. Nonetheless, nonlocal filters were well suited for preserving heterogeneous targets, as shown by the simulation results in \cref{fig:phase_step} and \cref{fig:phase_intensity_coherence_step}, where the adaptive patch size and the aggregation step played a significant role in avoiding the rare patch effect near the edge. 
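The conversion from phase bias to height error quoted above follows directly from the definition of the height of ambiguity: a phase change of $2\pi$ corresponds to one height of ambiguity. A small verification sketch:

```python
import math

def phase_bias_to_height(phase_bias_rad, height_of_ambiguity_m):
    """Convert an interferometric phase bias to a height error:
    one full 2*pi phase cycle corresponds to one height of ambiguity."""
    return height_of_ambiguity_m * phase_bias_rad / (2 * math.pi)

# Bias bound of +/- pi/100 at a typical TanDEM-X height of ambiguity of 40 m:
print(phase_bias_to_height(math.pi / 100, 40.0))  # 0.2 m, i.e. +/- 20 cm
```

This reproduces the $\pm$\SI{20}{\centi\meter} deterministic height error bound quoted for the observed bias level.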
For filtering SAR interferograms of urban areas or terrain with man-made structures, nonlocal filters were especially appealing as such heterogeneous targets exhibit a very high radar cross-section compared to their surroundings. These high intensity variations help nonlocal filters preserve details, as their weight maps are more discriminative. The gain in resolution, compared to simple boxcar averaging, is evident for real data in \cref{fig:dems_marseille} and \cref{fig:dems_munich}. It would also be rather straightforward to extend existing nonlocal filters with the proposed modifications. The fringe compensation requires only a minor adaptation of the similarity criterion. Changing the patch size adaptively is an isolated modification, which could also be performed based on the intensity heterogeneity criterion derived in~\cite{lee_speckle_analysis_1981}, for example, in a nonlocal SAR despeckling filter. The aggregation step is an extension of the pixelwise weighted mean and can be treated separately from all other adjustments. \section{Conclusion}% \label{sec:conclusion} We showed that applying existing nonlocal filters led to artifacts when generating DEMs. Our analysis highlighted the mechanisms behind the encountered phenomena, like the topographic phase component and the myriad types of terrain and settings, from agricultural fields to city landscapes, in which InSAR filters have to operate. The proposed filter addressed these issues by accounting for the deterministic fringe frequency and setting its filtering parameters adaptively. We demonstrated the effectiveness of these measures, which resulted in noise reduction and detail preservation comparable to other nonlocal InSAR filters, without any of their undesired properties. The derived DEMs also far surpassed the RawDEMs produced with the existing global TanDEM-X processing chain, which relies on conventional boxcar multilooking. 
We will further evaluate the proposed method on a wider array of real data, which will also highlight some of the characteristics of SAR compared to LiDAR for generating DEMs. Such an extensive evaluation is essential for considering nonlocal filters as a total replacement for the boxcar filter in the TanDEM-X processing chain. Promising paths for future research include exploiting spatial redundancies within a patch, as is done in SAR-BM3D~\cite{parrilli_nonlocal_2012}, and taking into account the slope-dependent reflectivity when computing similarities. The robustness of the filter could be increased by also setting the search window dimensions adaptively depending on the local scene heterogeneity, which could also be achieved by designing a weighting kernel with thresholding. Furthermore, the proposed filter relies only on the interferometric phase to classify scenes as heterogeneous; taking the intensity into account as well might provide more accurate estimates, especially in urban areas. \section*{Acknowledgment} The authors would like to thank the anonymous reviewers for their valuable comments. They would also like to mention the enlightening discussions they had with their colleagues Thomas Fritz, Helko Breit and Michael Eineder from DLR, as well as Michael Schmitt of the Technical University of Munich. \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec_in} In the last decade, analysis of black hole low-mass X-ray binaries (BH LMXBs) using X-ray spectra has shown the presence of photoionised plasmas in such systems \cite[see][and references therein]{dia16}. The plasmas can be found as a bound atmosphere or they can flow outwards with velocity blueshifts well above 1000 km/s \citep{kal09}. Although it is not clear which mechanism is responsible for the wind launching, the best candidates include thermal pressure, radiative pressure and magnetic pressure. Thermal pressure arises from the heating of the gas by the central X-ray source, producing an outflow at large distances from the central object where the thermal velocity exceeds the local escape velocity \citep{beg83,woo96,net06,hig15,hig17}. Radiative pressure can be produced by electron scattering, at near- or super-Eddington luminosities, or by lines \citep{roz14,has15,shi16}. However, it has been shown that, for BH LMXBs, radiation pressure cannot launch an outflow due to the low number of soft X-ray and UV lines \citep{pro02}. Finally, magnetic pressure or magnetocentrifugal forces can produce winds at small radii, although more work needs to be done, from the theoretical point of view, in order to reproduce the observed spectra \citep{bla82,pro03,cha16,li14}. From the observational point of view, thermal winds have been favored by a majority of observations \citep{kub07,nei12,roz14,dia16,all18,don18,tom18}, although radiation pressure due to electrons \citep{kot00,kub07,nei09b,dia14,roz14,mil16d,shi16} and magnetic forces \citep{mil06a,mil08,luk10,kin14,mil16c,fuk17,tet18} have also been invoked to explain a handful of cases. An open issue is the connection between the outflowing winds and the accretion state. During outburst, BH X-ray binaries show a hysteresis pattern in the hardness-intensity diagram that has been associated with transitions through different accretion states \citep{fen04,fen12}. 
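The thermal-driving condition described above (an outflow forms where the thermal velocity exceeds the local escape velocity) defines a characteristic launching radius, often called the Compton radius. The back-of-the-envelope sketch below uses an illustrative black hole mass and Compton temperature (both assumed values, not taken from this paper) and ignores order-unity factors:

```python
import math

# CGS constants
G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
k_B = 1.381e-16   # Boltzmann constant, erg K^-1
m_p = 1.673e-24   # proton mass, g
M_sun = 1.989e33  # solar mass, g

def compton_radius(mass_g, T_compton_K, mu=0.6):
    """Radius beyond which gas heated to the Compton temperature has a
    thermal speed comparable to the local escape velocity
    (order-of-magnitude estimate only; prefactors of order unity dropped)."""
    return G * mass_g * mu * m_p / (k_B * T_compton_K)

# Illustrative numbers: a 10 solar-mass black hole and T_IC ~ 1e7 K (assumed)
R_IC = compton_radius(10 * M_sun, 1e7)
print(f"R_IC ~ {R_IC:.2e} cm")  # of order 1e12 cm for these values
```

For these illustrative inputs the launching radius is of order $10^{12}$ cm, far from the compact object, which is why thermally driven winds are expected to originate at large disc radii.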
Because the winds have been observed in a number of BHs to be stronger in the soft accretion state, in which jets are quenched \citep{mil06a,mil06dd,dia07,kub07,ued09bb,dia14}, it was proposed that jets and winds were preventing each other from forming \citep{neil09}. However, it has recently been shown that disc winds and jets may co-exist. For example, \citet{hom16} found indications that some sources could produce winds and jets in the same accretion state (albeit with non-simultaneous observations) and concluded that if the LMXB luminosity is above a few tens of percent of the Eddington luminosity, disc winds and jets may co-exist. In the case of the BH LMXB V404 Cygni, an optical wind has been identified simultaneously with a radio jet by \citet{mun17}. Also, \citet{rah14} found a broad Pa$\beta$ absorption feature in the hard state of the BH LMXB GX~339--4 using observations taken with the ESO/Very Large Telescope (VLT) that they attributed to a wind. Despite the efforts made to improve our understanding of these phenomena, multiple questions remain, including: What dictates the balance of power and the matter/radiation content of the disc, wind and jet? Is the disc-jet connection defined by the accretion flow only or does it depend on the compact object? And how do winds affect the accretion process? The BH LMXB~4U~1630-47 constitutes an excellent laboratory to study the disc-jet connection. It has been identified as a recurrent transient \citep{jon76, par95,kuu97,tom05} with an inclination of $\sim$ 60--75 $^{\circ}$ \citep{kuu98,tom98}. Radio emission has been detected at flux levels always $<$ 3\,mJy\,beam$^{-1}$ and has been identified with the presence of jets in this system (\citealt{hje99}; \citealt{dia13}; but see \citealt{nei14} for a different interpretation). 
{\it Suzaku} spectra of this source were analyzed by \citet{kub07}, who first identified a highly ionized disc wind traced by {\rm Fe}~{\sc XXVI} and {\rm Fe}~{\sc XXV} absorption lines during the 2006 outburst. They concluded that thermal and radiative pressure processes can be part of the launching mechanism, without completely discarding magnetic processes. \citet{dia14} analyzed {\it XMM-Newton} X-ray spectra obtained during the 2011-2013 outburst. They identified {\rm Fe}~{\sc XXVI}, {\rm Fe}~{\sc XXV}, {\rm Ni}~{\sc XXVIII}, {\rm Ni}~{\sc XXVII} and {\rm S}~{\sc XVI} absorption lines associated with a disc wind being thermally-radiatively driven during a soft state of the source. They followed the source across the transition from a soft state to a very high state and attributed the disappearance of the wind in the very high state to strong ionization of the wind due to the hardening of the spectrum and the increase of luminosity during that state. \citet{nei14} analyzed {\it Chandra} high-resolution spectra obtained in 2012 during the same outburst. They fitted the continuum of the observation taken in January 2012 with a disc blackbody and classified the source as being in a soft accretion state, with the presence of absorption lines due to {\rm Fe}~{\sc XXVI}, {\rm Fe}~{\sc XXV}, {\rm Ca}~{\sc XX}, {\rm Ni}~{\sc XXVIII} and {\rm Ni}~{\sc XXVII}. In contrast, they found that the continuum of the observation taken in June 2012 showed an additional power-law component and no absorption lines. With the aim of studying further the connection of the wind, jet and accretion state across state transitions, we obtained two simultaneous {\it Chandra} and {\it VLA} observations during a soft-to-hard state transition. For comparison, we also used four {\it Chandra} observations from an earlier time in the outburst that show significant line absorption \citep{nei14}. The outline of this paper is as follows. 
In Section~\ref{sec_dat}, we describe the data selection and the reduction. The continuum modeling and the fit of the absorption lines identified are described in Sections~\ref{sec_cont} and \ref{sec_lines}, respectively. Results obtained by using photoionization models are reviewed in Section~\ref{sec_dis}. An analysis of the thermal stability curves derived for all observations is included in Section~\ref{sec_ther}. In Section~\ref{sec_dis2}, we discuss the possible launching mechanisms present in this system and Section~\ref{sec_con} summarizes the main results of our analysis. We assume a distance of 10~kpc throughout this paper, as did previous authors \citep{abe05,tom05,dia14}, but note that \citet{kal18} have recently reported two potential distances of $4.7\pm0.3$ kpc and $11.5\pm 0.3$ kpc to the source based on an analysis of the dust scattering halo around the source. \begin{table*} \small \caption{\label{tab_data}{\it Chandra} High-Energy Grating observations of 4U~1630-47.} \centering \begin{tabular}{cccccccc} \hline Label&ObsID & Date & MJD & Exposure & Count-rate (cts/s) \\ &&&Start-time&(ks)&(1.5--10 keV)\\ \hline Obs1&13714& 17 Jan 2012& 55943.2& 28.9 & 14.8 \\ Obs2&13715& 20 Jan 2012& 55943.1& 29.2 & 14.4 \\ Obs3&13716& 26 Jan 2012& 55943.1& 29.2 & 13.7 \\ Obs4&13717& 30 Jan 2012& 55956.3& 29.4 & 15.6 \\ Obs5&14441& 03 Jun 2012& 56081.9& 19.0 & 20.8 \\ Obs6&15511& 25 Apr 2013& 56407.2& 49.4 & 9.8 \\ Obs7&15524& 27 May 2013& 56439.7& 48.9 & 0.6 \\ \end{tabular} \end{table*} \begin{figure*} \begin{center} \includegraphics[scale=0.38]{f1} \caption{Top panel: {\it MAXI/ASM} daily average lightcurves of 4U~1630-47. The black dashed line corresponds to the 2--20 keV lightcurve while the black solid line corresponds to the 10--20 keV light curve. Bottom panel: {\it Swift/BAT} daily average lightcurve of the LMXB~4U~1630-47 in the 15--50 keV energy range. 
In both panels vertical red solid lines indicate the {\it Chandra} observation dates while vertical red dashed lines indicate the {\it XMM-Newton} observations analyzed by \citet{dia14}. }\label{fig_lc} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[scale=0.53]{f2} \caption{Hardness-intensity diagram of the 4U~1630-47 using {\it MAXI/ASM} daily average lightcurves. The positions of the pointed {\it Chandra} observations during the outburst are marked by blue crosses. }\label{fig_hr} \end{center} \end{figure} \begin{table*} \scriptsize \caption{\label{tab_con}4U~1630-47 {\it Chandra} HEG best-fit results. } \centering \begin{tabular}{llcccccc} \\ Component&Parameter&Obs1&Obs2&Obs3&Obs4 &Obs6&Obs7\\ \hline \hline \\ \multicolumn{8}{c}{Model B: {\tt tbabs*(diskbb)}}\\ {\tt Tbabs} & $N({\rm H})$&$9.15\pm 0.05 $&$9.14\pm 0.05 $&$9.12\pm 0.05 $&$9.23 \pm 0.05 $ &$- $&$-$ \\ {\tt diskbb}&$kT_{in}$ &$1.54\pm 0.01 $&$1.49\pm 0.01 $&$1.52\pm 0.01 $&$ 1.58\pm 0.01 $ &$- $&$- $ \\ & norm$_{dbb}$ &$113 \pm 3 $&$126\pm 3 $&$112\pm 3 $&$107 \pm 3 $&$- $&$- $ \\ Statistic&$\chi^{2}$/d.of.&$3056 /2810 $&$2968/2810 $&$ 2974/2810 $&$ 3084/2810 $&$ - $&$- $ \\ &red-$\chi^{2}$&$1.08 $ &$1.05 $ &$1.05 $ &$1.09 $ &$- $ &$-$ \\ Count-rate&Model &6.0$\times 10^{-5}$ &4.9$\times 10^{-5}$ &5.3$\times 10^{-5}$ & 7.9$\times 10^{-5}$ &$-$ &$-$ \\ (15-50 keV)&{\it Swift/BAT} & $<$ 5.2$\times 10^{-4}$ & $<$ 9.5$\times 10^{-4}$ & $<$ 1.1$\times 10^{-3}$ & $<$ 6.4$\times 10^{-4}$ &$-$ & $-$ \\ Flux&(0.0136--13.60 keV)&1.4$\times 10^{-8}$ &1.4$\times 10^{-8}$ &1.3$\times 10^{-8}$ & 1.5$\times 10^{-8}$ &$-$ &$-$ \\ &(1.5--10 keV)& 9.9$\times 10^{-9}$ & 9.7$\times 10^{-9}$ & 9.3$\times 10^{-9}$ & 1.0$\times 10^{-8}$ &$-$ &$-$ \\ &(15--50 keV)& 2.3$\times 10^{-11}$ & 2.6$\times 10^{-10}$ & 1.9$\times 10^{-11}$ & 3.1$\times 10^{-11}$ &$-$ &$-$ \\ \hline \\ \multicolumn{8}{c}{Model C: {\tt tbabs*(powerlaw+diskbb)}}\\ {\tt Tbabs} & $N({\rm H})$ &$9.15 \pm 0.04 $&$9.14 \pm 0.04 
$&$9.12 \pm 0.04 $&$9.22 \pm 0.05 $ &$9.41\pm 0.27 $&$ 9.10$ (fixed)\\ {\tt powerlaw}&$\Gamma$ &$2.5$ (fixed)&$2.5$ (fixed)&$2.5$ (fixed)&$2.5$ (fixed) &$2.4\pm 0.2 $&$2.1\pm 0.3 $ \\ & norm$_{pow}$ &$ < 0.08 $&$<0.07 $&$ <0.07 $&$ <0.08 $& $ 1.9 _{-0.9}^{+1.5} $&$0.10\pm 0.05 $ \\ {\tt diskbb}&$kT_{in}$ &$1.54 \pm 0.01 $&$1.49 \pm 0.01 $&$1.52 \pm 0.01 $&$1.58 \pm 0.01 $ &$1.21\pm 0.03 $&$0.60\pm 0.03 $ \\ &norm$_{dbb}$ &$113 \pm 3 $&$126\pm 3 $&$112 \pm 3 $&$107 \pm 3 $ &$138\pm 23 $&$284_{-55}^{+75} $ \\ Statistic&$\chi^{2}$/d.of.&$ 3056/2810 $&$2968/2810 $&$2974/2810 $&$ 3084/2810 $& $ 2974/2810 $&$ 2490/2810 $ \\ &red-$\chi^{2}$&$1.08$ &$1.05$ &$1.06$ &$1.09$ &$1.06 $ &$0.88 $ \\ Count-rate&Model &6.5$\times 10^{-5}$ &5.0$\times 10^{-5}$&5.4$\times 10^{-5}$&8.4$\times 10^{-5}$ & 2.7$\times 10^{-3}$ & 3.8$\times 10^{-4}$ \\ (15-50 keV)&{\it Swift/BAT} & $<$ 5.2$\times 10^{-4}$ & $<$ 9.5$\times 10^{-4}$ & $<$ 1.1$\times 10^{-3}$ & $<$ 6.4$\times 10^{-4}$ & (2.5 $\pm$ 0.2)$\times 10^{-3}$ &(4.1 $\pm$ 3.1)$\times 10^{-4}$ \\ Flux&(0.0136--13.60 keV)&1.4$\times 10^{-8}$ &1.4$\times 10^{-8}$ &1.3$\times 10^{-8}$ & 1.4$\times 10^{-8}$ & 4.9$\times 10^{-8}$ & 2.1$\times 10^{-9}$ \\ &(1.5--10 keV)& 1.9$\times 10^{-9}$ & 9.7$\times 10^{-9}$ & 1.0$\times 10^{-8}$ & 1.0$\times 10^{-8}$ &7.9$\times 10^{-9}$ &5.4$\times 10^{-10}$ \\ &(15--50 keV)& 2.3$\times 10^{-11}$ & 2.5$\times 10^{-10}$ & 1.7$\times 10^{-11}$ & 2.9$\times 10^{-11}$ &1.0$\times 10^{-9}$ &1.4$\times 10^{-10}$ \\ \hline \\ \multicolumn{8}{c}{Model D: {\tt tbabs*simpl(diskbb)}}\\ {\tt Tbabs} & $N({\rm H})$ &$9.15 \pm 0.01 $&$9.13\pm 0.02 $&$9.12\pm 0.03 $&$ 9.23\pm 0.05 $ &$9.03\pm 0.09 $&$ 9.10$ (fixed)\\ {\tt simpl}&$\Gamma$ &$<2.00 $&$<2.00 $&$ <2.00 $&$<2.00 $ &$3.7 \pm 0.3 $&$2.1\pm 0.3 $ \\ &FracSca &$< 0.01 $&$< 0.01 $&$< 0.01 $&$< 0.01$ &$0.8_{-0.3}^{+0.8} $&$0.18\pm 0.03 $\\ {\tt diskbb}&$kT_{in}$ &$1.53\pm 0.01 $&$1.49\pm 0.01 $&$ 1.52\pm 0.01 $&$ 1.58\pm 0.01 $ &$0.90\pm 0.01 $&$0.58\pm 0.04 $ \\ 
&norm$_{dbb}$ &$115_{-4}^{+2E06}$&$125_{-3}^{+59}$&$112_{-1}^{+105} $&$ 108\pm 4 $ &$560_{-1}^{+350} $&$431^{+181}_{-113} $ \\ Statistic&$\chi^{2}$/d.of.&$ 3056/2810 $&$ 2971/2810 $&$2974/2810$&$ 3085/2810 $ &$2958/2810 $&$ 2493/2810 $ \\ &red-$\chi^{2}$&$1.08 $ &$1.05 $ &$1.06 $ &$1.09 $ &$1.05 $ &$0.88$ \\ Count-rate&Model&5.4$\times 10^{-5}$ & 4.4$\times 10^{-5}$ &5.4$\times 10^{-5}$ &7.4$\times 10^{-5}$ &2.2$\times 10^{-3}$ & 5.0$\times 10^{-4}$ \\ (15-50 keV)&{\it Swift/BAT} & $<$ 5.2$\times 10^{-4}$ & $<$ 9.5$\times 10^{-4}$ & $<$ 1.1$\times 10^{-3}$ & $<$ 6.4$\times 10^{-4}$ & (2.5 $\pm$ 0.2)$\times 10^{-3}$ &(4.1 $\pm$ 3.1)$\times 10^{-4}$ \\ Flux&(0.0136--13.60 keV)&1.4$\times 10^{-8}$ &1.4$\times 10^{-8}$ &1.3$\times 10^{-8}$ & 1.4$\times 10^{-8}$ &1.1$\times 10^{-8}$ &1.4$\times 10^{-9}$ \\ &(1.5--10 keV)& 9.9$\times 10^{-9}$ & 9.7$\times 10^{-9}$ & 9.3$\times 10^{-9}$ & 1.0$\times 10^{-8}$ &7.4$\times 10^{-9}$ & 5.5$\times 10^{-10}$ \\ &(15--50 keV)& 3.0$\times 10^{-10}$ & 1.7$\times 10^{-11}$ & 1.9$\times 10^{-11}$ & 2.9$\times 10^{-11}$ &3.4$\times 10^{-10}$ & 4.4$\times 10^{-10}$ \\ \hline \\ \multicolumn{8}{l}{Hydrogen column density ``$N({\rm H})$'' in units of $\times 10^{22}$~cm$^{-2}$. Temperature at inner disc radius ``$kT_{in}$'' in units of keV. }\\ \multicolumn{8}{l}{{\tt Powerlaw} normalization ``norm$_{pow}$'' in units of ph/keV/cm$^{2}$/s at 1 keV. }\\ \multicolumn{8}{l}{{\tt diskbb} normalization ``norm$_{dbb}=(R_{in}/D_{10})^{2}\cos\theta$'' where $R_{in}$ is the inner disc radius, }\\ \multicolumn{8}{l}{ $D_{10}$ is the distance in units of 10~kpc and $\theta$ is the inclination of the disc. 
}\\ \multicolumn{8}{l}{Count-rate {\it Swift/BAT} are daily averaged count rates.}\\ \multicolumn{8}{l}{Count-rate model refers to the count rate predicted by the model for the {\it Swift/BAT} energy band.}\\ \multicolumn{8}{l}{Unabsorbed fluxes are given in units of ergs cm$^{-2}$ s$^{-1}$.} \end{tabular} \end{table*} \begin{figure*} \begin{center} \includegraphics[scale=0.36]{f3} \caption{4U~1630-47 best continuum fit for all observations analyzed. Red solid lines indicate the best-fit obtained with model B (Obs~1-4) and model C (Obs~6-7). Lower panels indicate the data/model ratios obtained for the different models described in Table~\ref{tab_con}. }\label{fig_dat_1} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[scale=0.32]{f4} \caption{Contour plots of the $N({\rm H})$ and the $\Gamma$ parameters obtained from Model~C for Obs~6 and Obs~7. }\label{fig_contour} \end{center} \end{figure} \begin{table*} \scriptsize \caption{\label{tab_gauss}4U~1630-47 {\it Chandra} HEG Gaussian features included in the best-fit models listed in Table~\ref{tab_con}.} \centering \begin{tabular}{llccccccc} \\ Ion&Parameter&Obs1&Obs2&Obs3&Obs4 &Obs6&Obs7 \\ \hline \hline \\ {\rm Fe}~{\sc XXVI} K$\beta$&Energy & $8.27\pm 0.02 $& $8.29 \pm 0.02 $& $ 8.28 \pm 0.01 $& $8.28 \pm 0.01 $ & $8.28$(fixed) & $8.28$(fixed)\\ &Wavelength &$1.499\pm 0.004$ & $1.495\pm 0.004 $ &$ 1.497\pm 0.002 $ & $ 1.497\pm 0.002 $ &$1.49$(fixed)&$1.49$(fixed)\\ &$\sigma$& $0.03\pm 0.02 $& $0.04\pm 0.02 $& $ 0.03\pm 0.01 $& $<0.03 $& $0.03$(fixed)& $0.03$(fixed)\\ &norm& $0.0007\pm 0.0002 $& $0.0007\pm 0.0004 $& $ 0.0009\pm 0.0002 $& $0.0008\pm 0.0002 $&$<0.0001$&$<0.0001$\\ &EW & $36\pm 10 $& $38\pm 2 $& $ 49\pm 11 $& $36\pm 9 $&$<1$&$<1$\\ {\rm Fe}~{\sc XXV} K$\beta$&Energy & $7.87 \pm 0.03 $& $7.87\pm 0.01 $& $ 7.85 \pm 0.02 $& $7.86\pm 0.02 $ & $7.87$(fixed) & $7.87$(fixed)\\ &Wavelength &$1.575\pm 0.006 $ & $1.575\pm 0.002 $ & $1.579\pm 0.004 $ & $1.577\pm 0.004 
$&$1.57$(fixed)&$1.57$(fixed) \\ &$\sigma$& $ 0.04 \pm 0.02 $& $0.02 \pm 0.01 $& $ 0.04 \pm 0.01 $& $0.04 \pm 0.02 $&$0.03$(fixed)&$0.03$(fixed)\\ &norm& $0.0006\pm 0.0002 $& $0.0006\pm 0.0001 $& $ 0.0008\pm 0.0002 $& $0.0009\pm 0.0002 $&$<0.0003$&$<0.0001$\\ &EW & $25\pm 8 $& $27\pm 5$& $ 37 \pm 9 $& $31 \pm 7 $&$<8$&$<1$\\ {\rm Fe}~{\sc XXVI} K$\alpha$&Energy & $6.979 \pm 0.003 $& $6.978\pm 0.003 $& $ 6.976\pm 0.003 $& $6.975 \pm 0.002 $ & $6.97$(fixed) & $6.97$(fixed)\\ &Wavelength & $1.776\pm 0.001$ & $1.777\pm 0.001$ & $1.777\pm 0.001$ & $1.777\pm 0.001$&$1.77$(fixed)&$1.77$(fixed) \\ &$\sigma$& $0.024 \pm 0.002 $& $0.023\pm 0.003 $& $ 0.023 \pm 0.003 $& $0.019\pm 0.002 $&$0.022$(fixed)&$0.022$(fixed)\\ &norm& $0.0022\pm 0.0001$& $0.0019\pm 0.0001 $& $ 0.0019\pm 0.0001 $& $0.0022\pm 0.0001 $&$<0.0001$&$<0.0001$\\ &EW & $54 \pm 2 $& $49\pm 3 $& $ 53\pm 3 $& $48\pm 2 $&$<1$&$<1$\\ {\rm Fe}~{\sc XXV} K$\alpha$&Energy & $6.700 \pm 0.003 $& $6.697 \pm 0.004 $& $ 6.689\pm 0.003 $& $6.697 \pm 0.003 $ & $6.7$(fixed) & $6.7$(fixed)\\ &Wavelength & $1.851\pm 0.001$ & $1.851\pm 0.001 $ & $1.854\pm 0.001$ & $1.851\pm 0.001$&$1.85$(fixed)&$1.85$(fixed) \\ &$\sigma$ & $0.017 \pm 0.003 $& $0.022\pm 0.003 $& $ 0.029\pm 0.003 $& $0.019\pm 0.003 $&$0.019$(fixed)&$0.019$(fixed)\\ &norm& $0.0015\pm 0.0001 $& $0.0015\pm 0.0001 $& $ 0.0021\pm 0.0001 $& $0.0016\pm 0.0001 $&$<0.0001$&$<0.0001$\\ &EW & $31\pm 2 $& $33 \pm 2 $& $ 47 \pm 2 $& $30 \pm 2 $&$<1$&$<1$\\ {\rm Ca}~{\sc XX} K$\alpha$&Energy & $4.109 \pm 0.006 $& $4.107 \pm 0.002 $& $ 4.109 \pm 0.002 $& $4.111\pm 0.003 $ &$- $&$- $\\ &Wavelength &$3.017\pm 0.004 $ & $3.019\pm 0.001 $ &$3.017\pm 0.001 $ &$3.016\pm 0.002 $ &$- $&$- $ \\ &$\sigma$& $0.015 \pm 0.005 $& $<0.003 $& $ 0.010\pm 0.002 $& $<0.001 $ &$- $&$- $\\ &norm& $0.0005\pm 0.0001 $& $0.0003\pm 0.0001 $& $ 0.0007\pm 0.0001 $& $0.0003\pm 0.0001 $&$- $&$- $\\ &EW & $36.0\pm 0.7 $& $24.0\pm 0.8 $& $47.0\pm 0.6 $& $23\pm 0.7 $&$- $&$- $\\ {\rm Ar}~{\sc XVIII} 
K$\alpha$&Energy & $- $& $- $& $ 3.323\pm 0.001 $& $- $&$- $&$- $\\ &Wavelength & $- $ & $- $ & $3.731\pm 0.001 $ &$-$&$- $&$- $ \\ &$\sigma$& $- $& $- $& $ <0.002 $& $- $&$- $&$- $\\ &norm& $- $& $- $& $ 0.0003\pm 0.0001 $& $- $&$- $&$- $\\ &EW & $- $& $- $& $ 24\pm 0.8 $& $- $&$- $&$- $\\ {\rm S}~{\sc XVI} K$\alpha$&Energy & $- $& $- $& $ 2.622 \pm 0.001 $& $- $&$- $&$- $\\ &Wavelength & $- $ & $- $ & $4.729\pm 0.002$ & $-$&$- $&$- $ \\ &$\sigma$& $- $& $- $& $ 0.0032\pm 0.0009 $& $- $&$- $&$- $\\ &norm& $- $& $- $& $ 0.0003\pm 0.0001 $& $- $&$- $&$- $\\ &EW & $- $& $- $& $ 5\pm 2 $& $- $&$- $&$- $\\ \\ \\ \multicolumn{8}{l}{Energies and $\sigma$ are given in keV. Equivalent widths (EWs) are given in eV. Wavelengths are given in \AA.}\\ \multicolumn{8}{l}{Gaussian normalizations are given in photons cm$^{-2}$ s$^{-1}$.} \end{tabular} \end{table*} \begin{table*} \small \caption{\label{tab_warm}Best fits to the {\it Chandra} HEG spectra for Obs~1-4 using model B but substituting the Gaussian features by a warm absorber component.} \centering \begin{tabular}{llcccc} \\ Component & Parameter&Obs $\#$1&Obs $\#$2&Obs $\#$3&Obs $\#$4 \\ \hline \hline \\ \multicolumn{6}{c}{Model: {\tt tbabs*warmabs*(diskbb)}}\\ {\tt Tbabs} & $N({\rm H})$&$9.19\pm 0.07 $&$9.08\pm 0.06 $&$9.10 \pm 0.04 $&$9.16\pm 0.03 $ \\ {\tt diskbb}&$T_{in}$&$1.54\pm 0.01 $&$1.50\pm 0.01 $&$1.51\pm 0.01$&$1.59\pm 0.01 $ \\ &norm$_{dbb}$&$111\pm 1 $&$122\pm 2 $&$116 \pm 1 $&$103 \pm 1 $ \\ {\tt Warmabs} & $\log(N({\rm H})/10^{22})$&$1.17\pm 0.07 $&$1.29\pm 0.08 $&$1.08\pm 0.12 $&$1.30\pm 0.09 $ \\ & $\log(\xi)$&$3.99\pm 0.09 $&$4.00_{-0.14}^{+0.10} $&$3.55\pm 0.22 $&$3.92\pm 0.10 $ \\ & $v_{turb}$&$194\pm 9 $&$148_{-13}^{+6} $&$200\pm 27 $&$157\pm 14 $ \\ & $z$&$ -(0.0020\pm 0.0001) $&$-(0.0019\pm 0.0002) $&$-(0.0010\pm 0.0002) $&$-(0.0019\pm 0.0001) $ \\ Flux&(0.0136--13.60 keV)&1.37$\times 10^{-8}$ &1.36$\times 10^{-8}$ &1.29$\times 10^{-8}$ & 1.44$\times 10^{-8}$ \\ Statistic &&$ 2969/2803 $&$ 
2840/2803 $&$ 2987/2803 $&$ 2914/2803$\\ red-$\chi^{2}$&&$1.05$ &$1.01 $ &$ 1.06$ &$1.03 $ \\ \hline \\ \\ \multicolumn{6}{l}{ $N({\rm H})$ in units of $\times 10^{22}$~cm$^{-2}$. $kT_{in}$ in units of keV. }\\ \multicolumn{6}{l}{ norm$_{dbb}=(R_{in}/D_{10})^{2}\cos\theta$ where $R_{in}$ is the inner disc radius, $D_{10}$ is the distance in units of 10~kpc } \\ \multicolumn{6}{l}{ and $\theta$ is the inclination of the disc. } \end{tabular} \end{table*} \begin{figure*} \begin{center} \includegraphics[scale=0.28]{warmabs1}\\ \includegraphics[scale=0.28]{warmabs2}\\ \includegraphics[scale=0.28]{warmabs3} \caption{ Best-fit results for Obs 1-4 modeled with {\tt warmabs} (see Table~\ref{tab_warm}). The main lines associated with the highly ionized absorber are indicated. The spectra have been rebinned for illustrative purposes.}\label{iron_warm} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[scale=0.5]{f6} \caption{Upper panel: $N({\rm H})$ versus $\log{\xi}$ obtained from the {\tt warmabs} fit for Obs~1-4. Middle panel: $N({\rm H})$ versus unabsorbed flux in the 0.013--13.6 keV energy range. Lower panel: $\log{\xi}$ versus the unabsorbed flux in the 0.013--13.6 keV energy range.}\label{ng_logxi} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.5]{f7} \caption{Upper panel: $\log{\xi}$ versus $kT_{in}$ obtained from the {\tt warmabs} fit for Obs~1-4 (Table~\ref{tab_warm}). Lower panel: $N({\rm H})$ versus $kT_{in}$.
Results obtained by \citet{kub07} and \citet{dia14} are included as well.}\label{logxi_nh_kt} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.5]{f8} \caption{$N({\rm H})$ upper limits obtained with the {\tt warmabs} model for Obs~6 and 7 as a function of $\log{\xi}$.}\label{limit_nh} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[scale=0.32]{f9} \includegraphics[scale=0.32]{f10} \includegraphics[scale=0.32]{f11}\\ \includegraphics[scale=0.32]{f12} \includegraphics[scale=0.32]{f13} \includegraphics[scale=0.32]{f14} \caption{Spectral energy distributions obtained from the continuum modeling of Obs~1-4, 6 and 7.}\label{fig_seds1} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.32]{f15} \includegraphics[scale=0.32]{f16} \includegraphics[scale=0.32]{f17}\\ \includegraphics[scale=0.32]{f18} \includegraphics[scale=0.32]{f19} \includegraphics[scale=0.32]{f20} \caption{Thermal stability curves obtained for all observations. For Obs~1-4 the model B SEDs were used. Vertical lines correspond to the best-fit parameters obtained with the {\tt warmabs} model. The middle and right lower panels show the results obtained for Obs~6 and 7 using Model C (red region) and Model D (green region). In both cases the black stars indicate the best-fit parameters obtained for Obs~1 with model~B. Horizontal arrows show the possible transition between stability curves assuming $nr^{2}=$ constant. }\label{fig_ther_curve1} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[scale=0.45]{f21} \caption{Ion fractions for the {\rm Fe}~{\sc XXIV}, {\rm Fe}~{\sc XXV} and {\rm Fe}~{\sc XXVI} ions as a function of $\log{(\xi)}$ for Obs~1 (solid lines), Obs~6 (dotted lines) and Obs~7 (dashed lines). The top panel corresponds to the ion fractions obtained with model C for Obs~6 and 7, while the bottom panel corresponds to the ion fractions obtained with model D for Obs~6 and 7.
Vertical solid lines correspond to the $\log{\xi}$ obtained for the best fit in Obs~1, including the uncertainty $\Delta\log{\xi}$ (gray region). The $\log{\xi}$ values expected if the $nr^{2}$=constant condition is assumed are also included for Obs~6 (dotted vertical line) and Obs~7 (dashed vertical line), but see text for the caveats on this assumption. }\label{fig_ion_fractions} \end{center} \end{figure} \section{Observations and data reduction}\label{sec_dat} \subsection{X-ray observations} Table~\ref{tab_data} shows the specifications, including IDs, dates, exposure times and count rates, of the LMXB~4U~1630-47 {\it Chandra} observations analyzed in this paper. All observations were obtained using the Advanced CCD Imaging Spectrometer (ACIS) in combination with the High Energy Grating (HEG) of the High Energy Transmission Grating Spectrometer (HETGS) instrument. For each observation we combined both $\pm 1$ HEG orders. The 1.5--10 keV {\it Chandra} count rates are included. The observations were reduced following the standard Chandra Interactive Analysis of Observations (CIAO, version 4.9) threads\footnote{\url{http://cxc.harvard.edu/ciao/threads/gspec.html}}. The spectral fitting was performed with the {\sc xspec} software (version 12.9.1p\footnote{\url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/}}); $\chi^{2}$ statistics were used and uncertainties are given at 90$\%$ confidence. Given that we are interested in the analysis of absorption features, we avoid rebinning the data, due to the loss of information inherent to such a procedure. Instead, we use the \citet{chu96} weighting method, which allows the analysis of low-count spectra by assigning to each channel a weight that corresponds to the average counts in the surrounding channels. In this way an almost unbiased, accurate estimation of the parameters is guaranteed, and a goodness-of-fit criterion is also provided (contrary to, for example, the \citet{cas79} statistic).
It is important to note that the smoothing of the data is performed only to calculate the weights, while the fitting procedure is applied to the original spectrum. This weighting method has been used in previous analyses of high-resolution X-ray spectra \citep{tof13,joa16,mil16c,gat18a,med18}. Finally, the abundances are given relative to \citet{gre98}. \subsubsection{X-ray light curves}\label{sec_light} Figure~\ref{fig_lc} shows the daily averaged light curves obtained during the 4U~1630-47 outburst with the Monitor of All-sky X-ray Image ({\it MAXI/ASM}) in the 2--20 keV and 10--20 keV energy ranges, and with the Burst Alert Telescope on board the Neil Gehrels Swift Observatory ({\it Swift/BAT}) in the 15--50 keV band (bottom panel). In the case of the {\it Swift/BAT} data, negative values, which have no physical meaning, arise from Poisson fluctuations in low-significance bins due to the background-subtraction method; those values are not included in the plot. Figure~\ref{fig_hr} shows the hardness-intensity diagram of the source built from the {\it MAXI/ASM} daily averaged light curves, where the hardness ratio is defined as the ratio between the observed fluxes in the 10-20 keV and the 2-4 keV bands. From the diagram, it is clear that Obs~6 is located in a region similar to Obs~1-4, while Obs~5 and 7 lie in a harder region of the diagram. We therefore use Obs~1-4 as a comparison for Obs~6-7. Given that Obs~5 shows no lines \citep{nei14} and is in a state different from that of Obs~7, we exclude it from the following analysis. \subsection{Radio observations}\label{sec_radio} During Obs~6 and 7 we made quasi-simultaneous radio observations with the Karl G.\ Jansky Very Large Array (VLA), under project code SE0242. We observed in two basebands. Each baseband comprised seven continuum spectral windows made up of sixty-four 2-MHz channels.
We also centred an eighth spectral window in each baseband on the H94$\alpha$ and H110$\alpha$ recombination lines \citep[rest frequencies of 7.792871\,GHz and 4.874157\,GHz, respectively, and assuming a systemic radial velocity of $-170$\,km\,s$^{-1}$, see][]{lil68}, to test for recombination line emission. These spectral line frequencies were observed with narrower channels, of width 500\,kHz and 1\,MHz, respectively. The array was in its most compact D configuration for Obs 6 (2013 April 25, 09:00-10:00 UT; MJD 56407.40$\pm$0.01), and the low declination of the source meant that the antennas on the northern arm were all shadowed and had to be flagged, leading to a very elongated synthesized beam. We used 3C\,286 as a flux density and bandpass calibrator, and the nearby calibrator J1626$-$2951 to set the complex gains. We achieved a total of 35.3\,min of time on the target field. However, the highly elongated synthesized beam and the large amount of diffuse structure in the field meant that the target source was not significantly detected, with a $3\sigma$ upper limit of 0.22\,mJy\,beam$^{-1}$ when stacking the two continuum basebands. For Obs~7, the VLA observed on 2013 May 28 (06:46--07:46 UT; MJD 56440.31$\pm$0.01), using the same frequency setup and calibrator sources. However, the array was in the slightly more extended DnC configuration, providing slightly better N-S resolution. Once again, the diffuse emission in the field, coupled with the low elevation of the source, the compact array configuration and the relatively poor N-S resolution, hampered our ability to detect the target. We experimented with both different weighting schemes and minimum {\it uv}-distance restrictions on the data, but we were only able to place $3\sigma$ upper limits of 0.48\,mJy\,beam$^{-1}$ at 5.3\,GHz and 0.12\,mJy\,beam$^{-1}$ at 7.2\,GHz (where the diffuse emission was fainter and we spatially resolved out a fraction of the more extended structure).
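The quoted stacked continuum limit combines the two basebands; as a minimal sketch of how independent baseband images combine (the per-baseband rms values below are hypothetical, and real interferometric imaging combines visibilities rather than maps):

```python
import math

def stacked_rms(rms_per_band):
    """Inverse-variance-weighted rms of independently imaged bands."""
    return 1.0 / math.sqrt(sum(1.0 / r**2 for r in rms_per_band))

# Hypothetical per-baseband rms values in mJy/beam; two equal-noise
# bands improve the sensitivity by a factor sqrt(2).
rms = stacked_rms([0.10, 0.10])
limit_3sigma = 3.0 * rms
```
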
In neither observation did we detect any significant recombination line emission (rms noise levels per channel of 0.47 and 0.69\,mJy\,beam$^{-1}$ for H94$\alpha$ and H110$\alpha$, respectively). \section{X-ray spectral modeling} \subsection{Continuum modeling}\label{sec_cont} In order to account for different spectral states, we fitted each HEG-ACIS observation in the 1.5--10 keV energy range using multiple phenomenological models. Using {\sc xspec} nomenclature, the models are: \begin{enumerate} \item Model A: {\tt tbabs*(powerlaw)} \item Model B: {\tt tbabs*(diskbb)} \item Model C: {\tt tbabs*(powerlaw+diskbb)} \item Model D: {\tt tbabs*simpl(diskbb)} \end{enumerate} The {\tt diskbb} component corresponds to an accretion disc consisting of multiple blackbody components \citep{mit84,mak86}. The {\tt simpl} convolution component is a model of Comptonization in which a fraction of the photons (FracSca parameter) is scattered into a power-law component \citep{ste09}. The {\tt tbabs} component models the absorption in the local interstellar medium (ISM) as described by \citet{wil00}. In this way, models A and B consider X-ray spectra dominated by hard and soft components, respectively, while models C and D correspond to a hybrid case. We also included several {\tt gaussian} components to model the absorption lines identified in the spectra, where present. The pileup effect, the detection of two or more photons as a single event, can affect the shape and level of the continuum\footnote{\url{http://cxc.harvard.edu/ciao/ahelp/acis\_pileup.html}}. The pileup effect is stronger in the Medium Energy Grating (MEG) than in the HEG instrument. Currently, the only model that can be used to estimate the pileup effect for high-resolution X-ray spectra is {\tt simple\_gpile2}, developed by \citet{han09} for the {\sc isis} data analysis package. Using this model, we estimated the highest pileup degree in Obs~1-4, 6-7 to be $<5\%$ at $\approx 3.8$ keV.
We have found that the continuum parameters listed are not affected by the inclusion of the {\tt simple\_gpile2} model and hence we use the results obtained from the {\sc xspec} analysis. Table~\ref{tab_con} shows the best-fit parameters obtained for each observation using the models described above. All Gaussians included in the models are listed in Table~\ref{tab_gauss}. We consider as valid fits those satisfying (1) $\chi^{2}$/dof $<$ 1.50 and (2) the number of counts predicted by the model in the 15-50 keV energy range agrees with the {\it Swift/BAT} daily average measurements (within the uncertainties). Models that do not satisfy both conditions are not included in the analysis hereafter (including Model~A). It is important to note that, because we do not have hard-energy spectra ($>$ 10 keV), the {\tt powerlaw} photon index cannot be well constrained: the fit can reach large, unrealistic photon indices by increasing the absorption at low energies. Consequently, when using model C to fit Obs~1-4 we found that the $\Gamma$ parameter artificially increases to values $>$ 5, and we therefore fixed it to an acceptable value of $2.5$. \citet{sei14} reported $\Gamma$ values up to 3 by analyzing {\it RXTE} observations along multiple accretion states; however, given that the $\Gamma$ value obtained from the more physical {\tt simpl} model is $<2.0$, we consider $\Gamma=3.0$ too large. On the other hand, the fits obtained using models C and D for Obs~7 tend to decrease $N({\rm H})$ to unrealistic values (i.e. $< 0.01$ $\times 10^{22}$ cm$^{-2}$). In these cases, we fixed $N({\rm H})$ to the minimum value obtained from the best fits in Obs~1 (i.e. $9.10$ $\times 10^{22}$ cm$^{-2}$, obtained with model B and considering the uncertainty). Figure~\ref{fig_dat_1} shows the best-fit models and residuals for all observations analyzed. Data were rebinned for illustrative purposes.
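The two validity criteria used for the continuum models can be expressed as a simple filter; this is an illustrative sketch only (the numbers in the example are hypothetical), not part of the actual fitting pipeline:

```python
def acceptable_fit(chi2, dof, predicted_bat_rate, bat_rate, bat_err):
    """Acceptance criteria for a continuum model:
    (1) reduced chi-squared below 1.50, and
    (2) the model-predicted 15-50 keV rate agrees with the
        Swift/BAT daily average within its uncertainty."""
    return chi2 / dof < 1.50 and abs(predicted_bat_rate - bat_rate) <= bat_err

# Hypothetical example values:
ok = acceptable_fit(chi2=2969.0, dof=2803.0,
                    predicted_bat_rate=1.0e-4, bat_rate=1.1e-4, bat_err=2.0e-4)
```
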
For Obs~1-4, Gaussians were included to account for multiple absorption lines visible in the spectra (see Section~\ref{sec_lines}). Absorption lines were not detected in Obs~6-7. In all figures, lower panels indicate the data/model ratios obtained for the different models described in Table~\ref{tab_con}. Given the results obtained from the continuum modeling, in combination with the {\it Swift/BAT} fluxes, we classify the observations into accretion states as follows. Obs~1-4 are best modeled with a {\tt diskbb} with a relatively high temperature (1.49--1.58 keV) and show no significant hard X-ray flux (15--50 keV). Based on this, on their intensities and spectral hardnesses, and on their similarity to the {\it XMM-Newton} observations analyzed in \citet{dia14}, we classify them as being in a soft accretion state. In contrast, Obs~6 and 7 require two-component models ({\tt diskbb} and a {\tt powerlaw}) to predict the {\it Swift/BAT} flux within the errors. The temperature of the disc decreases steadily from Obs~1-4 ($kT$ $\sim$ $1.5-1.6$ keV) to Obs~6 ($kT$ $\sim$ $0.9-1.2$ keV) and Obs~7 ($kT$ $\sim$ $0.6$ keV). Meanwhile, the normalization of the power law representing the hard X-ray emission increases by a factor of $>30$ from Obs~1-4 to Obs~6, highlighting the departure of Obs~6 from a soft state. Obs~7 shows a drop in total luminosity of one order of magnitude with respect to Obs~6. The decrease in luminosity is again accompanied by an increase of the power-law fraction, and therefore of the hardness ratio, as indicated in Figure~\ref{fig_hr}. We estimate the power-law contribution to the total unabsorbed flux in the 2--20 keV energy range to be $\sim 65\%$ and $\sim 62\%$ for Obs~7, and $\sim 51\%$ and $\sim 49\%$ for Obs~6, for Models C and D, respectively.
As a reference, \citet{mcc06} indicates that the power-law contribution should be $<25\%$ and $>80\%$ in order to classify a soft and a hard accretion state, respectively. Following this prescription, both Obs~6 and~7 would correspond to intermediate states. However, we emphasize that the restricted energy band imposes some limitations on our fits. As an example, we show in Figure~\ref{fig_contour} a contour map of the $N({\rm H})$ and the $\Gamma$ parameters obtained from Model~C for Obs~6 and Obs~7. From this plot it is clear that Obs~7 favors a harder photon index compared to Obs~6. Taking all the above into account, we classify Obs~6 as being in a relatively soft-intermediate state and Obs~7 in a likely hard-intermediate or hard state. Assuming a black-hole mass of 10 \(\textup{M}_\odot\), we find that all observations analyzed correspond to sub-Eddington luminosities. \subsection{Absorption lines}\label{sec_lines} Table~\ref{tab_gauss} shows the absorption lines identified in Obs~1-4, including their energies (keV), widths ($\sigma$), fluxes (photons cm$^{-2}$ s$^{-1}$) and equivalent widths (EWs). Common lines for Obs~1-4 include {\rm Fe}~{\sc XXVI} K$\alpha$, K$\beta$, {\rm Fe}~{\sc XXV} K$\alpha$, K$\beta$ and {\rm Ca}~{\sc XX} K$\alpha$. On the other hand, {\rm Ar}~{\sc XVIII} K$\alpha$, {\rm S}~{\sc XVI} K$\alpha$ and {\rm Si}~{\sc XIV} K$\alpha$ absorption lines are identified only in Obs~3 (see Table~\ref{tab_con}). We have not detected significant absorption lines in Obs~6-7, and we estimated EW upper limits of 1 eV for both {\rm Fe}~{\sc XXVI} K$\alpha$ and {\rm Fe}~{\sc XXV} K$\alpha$ (see Table~\ref{tab_gauss}). Considering the uncertainties of the measurements, we cannot identify significant changes in the line positions and EWs among the observations, except for {\rm Fe}~{\sc XXV} K$\alpha$ in Obs~3.
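The measured line centroids translate directly into Doppler velocities; a minimal sketch (the rest energy of 6.966 keV adopted for Fe XXVI K-alpha is an assumed reference value, not a fit result from this work):

```python
C_KMS = 2.998e5  # speed of light in km/s

def blueshift_kms(e_obs_kev, e_rest_kev):
    """Line-of-sight velocity from a line centroid; positive values mean
    a blueshift (the line is observed above its rest energy)."""
    return C_KMS * (e_obs_kev / e_rest_kev - 1.0)

# Fe XXVI K-alpha in Obs 4: measured centroid 6.975 keV,
# assumed rest energy 6.966 keV -> a few hundred km/s
v = blueshift_kms(6.975, 6.966)
```
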
We found that the blueshift of the lines ranges from 200 to 500~km/s between the different observations, consistent with the values reported by \citet{nei14}. The change in the energy position, as well as the increase of the EW, of the {\rm Fe}~{\sc XXV} K$\alpha$ line in Obs~3 is probably due to a blending of the line with {\rm Fe}~{\sc XXIV} K$\alpha$, located at $\sim 6.690$ keV. The {\rm Fe}~{\sc XXV} K$\alpha$/K$\beta$ and {\rm Fe}~{\sc XXVI} K$\alpha$/K$\beta$ line ratios correspond to a plasma with $N({\rm H})\gtrsim 10^{23}$ cm$^{-2}$ and $\nu_{turb}\gtrsim 200$ km/s \citep[table 3 of][]{roz06}. Finally, the EW distribution as a function of flux for the {\rm Fe}~{\sc XXV} and {\rm Fe}~{\sc XXVI} ions suggests that both K$\alpha$ lines are saturated, in which case their widths (i.e. their $\sigma$) may be over-estimated. However, the uncertainties of the line measurements do not allow us to be conclusive in this respect. \subsection{Photoionization modeling}\label{sec_dis} Having obtained a phenomenological description of the spectra, we substituted the Gaussian components in Obs~1-4 described in Section~\ref{sec_cont} with the photoionization model {\tt warmabs}\footnote{\url{https://heasarc.gsfc.nasa.gov/xstar/docs/html/node102.html}}. This model is part of the {\sc xstar}\footnote{\url{http://heasarc.nasa.gov/lheasoft/xstar/xstar.html}} photoionization code, which is designed to compute the physical conditions of a gas surrounding an ionizing source, taking into account physical processes such as photoionization, electron impact collisional ionization and excitation, and radiative and dielectronic recombination. The main assumptions include ionization equilibrium conditions, a Maxwellian electron velocity distribution, and that the gas responsible for absorption and emission has a uniform ionization and temperature throughout.
The {\tt warmabs} model parameters include the column density of the absorber ($N({\rm H})$), the ionization parameter ($\log{\xi}$), the elemental abundances (A$_{x}$), the turbulent broadening ($v_{turb}$), and the redshift ($z$). In order to perform a self-consistent modeling, we used the unabsorbed spectral energy distributions (SEDs) obtained from the continuum fits in Section~\ref{sec_cont} as the central source of ionizing radiation to compute the energy level populations required by {\tt warmabs}. Table~\ref{tab_warm} shows the best-fit parameters obtained for Obs~1-4 using the same continuum as for model~B (i.e. {\tt tbabs*warmabs*diskbb}). Abundances were fixed to solar values for both the ISM component and the ionized absorber. As the $\chi^2_{\rm red}$ values indicate, this model slightly improves the fit compared to the Gaussians described in Section~\ref{sec_cont}. Figure~\ref{iron_warm} shows the best fit obtained with the {\tt warmabs} model. The main absorption features/lines included in the model are indicated. The column densities derived for {\rm Fe}~{\sc XXV} and {\rm Fe}~{\sc XXVI} with {\tt warmabs} are $N({\rm H})> 10^{23}$ cm$^{-2}$, in agreement with the line ratios obtained from the fit with Gaussians. For Obs~3, residuals around {\rm Fe}~{\sc XXVI} K$\alpha$ and {\rm Fe}~{\sc XXVI} K$\beta$ may indicate the presence of a second plasma component or line saturation. However, we did not find a better fit (i.e. an improvement in the statistic) by adding a second {\tt warmabs} component. The best-fit {\tt warmabs} parameters are very similar between the observations, except for the lower $\log{(\xi)}$ and lower blueshift in Obs~3. In this sense, \citet{hig15} have shown that variations in the luminosity and/or plasma density affect the maximum blueshift of the disc wind. Figure~\ref{ng_logxi} shows a comparison between the $\log{(\xi)}$, $N({\rm H})$ and fluxes obtained from the best fits.
It is clear that Obs~3, which shows a lower flux in the 0.013--13.6 keV energy range, requires both a lower column density and a lower ionization parameter to be modeled. The decrease of $\log{(\xi)}$ to $\sim 3.55$ leads to the appearance of absorption lines associated with ions at a lower ionization state, such as {\rm Ar}~{\sc XVIII} and {\rm S}~{\sc XVI} (see Figure~\ref{fig_dat_1}). The wind launching radius can be estimated from the ionization parameter obtained with {\tt warmabs} for Obs~1-4 through the well-known relation $\xi = L/nr^{2}$, where $L$ is the total luminosity in the 0.013--13.6 keV energy range, $n$ is the hydrogen plasma density and $r$ is the radius of the innermost edge of the shell surrounding the source \citep{tar69}. Assuming $n=10^{12}$ cm$^{-3}$, and noting that $r\propto n^{-1/2}$, we obtain values for the radius of ($1.3\pm 0.1$)$(10^{12}\,{\rm cm^{-3}}/n)^{1/2}\times 10^{11}$~cm (Obs~1), ($1.3\pm 0.2$)$(10^{12}\,{\rm cm^{-3}}/n)^{1/2}\times 10^{11}$~cm (Obs~2), ($2.1\pm 0.5$)$(10^{12}\,{\rm cm^{-3}}/n)^{1/2}\times 10^{11}$~cm (Obs~3) and ($1.4\pm 0.2$)$(10^{12}\,{\rm cm^{-3}}/n)^{1/2}\times 10^{11}$~cm (Obs~4). The $n=10^{12}$ cm$^{-3}$ assumption is derived from the measured luminosity of the source, the column density and ionization of the absorber, and the assumption that $\Delta R/R \sim 1$, i.e. that the thickness of the absorber is similar to its radius \citep[see][sect. 4.3 for such a derivation]{kub07}. Using the same $n$, \citet{kub07} and \citet{dia14} estimated a launching radius up to one order of magnitude lower. These discrepancies are due to differences in the total luminosity obtained from the SEDs and in the $\xi$ values. Figure~\ref{logxi_nh_kt} shows $\log{\xi}$ and $N({\rm H})$ versus $kT_{in}$ obtained for Obs~1-4. For comparison, the results obtained by \citet{kub07} and \citet{dia14}, corresponding to {\it Suzaku} and {\it XMM-Newton} observations respectively, are included as well.
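The launching-radius estimate can be reproduced numerically from $\xi = L/nr^{2}$; this is an illustrative sketch in which the distance of $\sim$10 kpc and the density $n=10^{12}$ cm$^{-3}$ are assumptions:

```python
import math

KPC_CM = 3.086e21  # centimetres per kiloparsec

def launch_radius_cm(flux_cgs, distance_kpc, log_xi, n_cm3):
    """r = sqrt(L / (n * xi)) from xi = L / (n r^2), with the
    isotropic luminosity L = 4 pi d^2 F."""
    d = distance_kpc * KPC_CM
    lum = 4.0 * math.pi * d**2 * flux_cgs
    return math.sqrt(lum / (n_cm3 * 10.0**log_xi))

# Obs 1: F(0.013-13.6 keV) = 1.37e-8 erg/cm2/s and log(xi) = 3.99 from the
# warmabs fit; d = 10 kpc and n = 1e12 cm^-3 are assumed -> r ~ 1.3e11 cm
r = launch_radius_cm(1.37e-8, 10.0, 3.99, 1.0e12)
```
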
This comparison shows that, even for similar $kT_{in}$, differences can be found in the $\log{\xi}$ and $N({\rm H})$ parameters obtained. Figure~\ref{limit_nh} shows the $N({\rm H})$ upper limits obtained with the {\tt warmabs} model for Obs~6 and 7 as a function of $\log{\xi}$. In both cases the continuum corresponds to model C. Clearly, the column density upper limits are very restrictive for Obs~6, with a maximum column density of $2\times 10^{21}$~cm$^{-2}$ for a $\log(\xi)$ of 4.2 and even lower column densities of $5\times 10^{20}$~cm$^{-2}$ for $\log(\xi) < 3$. For Obs~7 the column density upper limits are less constraining, especially for $\log(\xi) \gtrsim 3.8$, for which column densities as high as $1.2\times 10^{22}$~cm$^{-2}$ could be present. For $\log(\xi) < 3.8$, the column density of a potential absorber is $<5\times 10^{21}$~cm$^{-2}$. \section{Stability curves}\label{sec_ther} The equilibrium states of a photoionized plasma can be studied through the stability curve (or thermal equilibrium curve), which consists of a $T$ versus $\xi/T$ diagram \citep{kro81}. When the plasma reaches a state outside the stability curve, the heating and cooling processes compete until equilibrium is reached. Depending on the slope of the stability curve, we can identify regions where the slope is positive, corresponding to thermally stable states, and regions where it is negative, corresponding to thermally unstable states. We created stability curves using the {\sc xstar} photoionization code (version 2.41) to analyze the equilibrium conditions of the plasma associated with 4U~1630-47. We ran a grid in the $(\log T, \log\xi)$ parameter space, with values ranging over $4 < \log{(T)} <10$ and $-4 < \log{(\xi)} <8$. We assumed an optically thin plasma with a constant density $n=10^{12}$ cm$^{-3}$ and solar abundances. For each $(\log T, \log\xi)$ point, the heating and cooling rates, as well as the ionic fractions for all elements, are stored.
In this way, we can determine the $(\log T, \log\xi)$ values corresponding to a thermal equilibrium state (i.e. heating = cooling). Because the stability curves are strongly affected by the shape of the SED \citep[see for example][]{kro81,cha09}, we studied the effect of using each of the continuum models described in Section~\ref{sec_cont} as the ionizing continuum in the {\sc xstar} grid calculation. Figure~\ref{fig_seds1} shows the SEDs for the different observations and models used to generate the stability curves. For Obs~1-4, the different models yield differences in the SEDs only in the high-energy region (consistent with the continuum degeneracy found in the {\it Chandra} spectra, see Table~\ref{tab_con}). In a way, we are forcing the difference in Obs~7 to be only at low energies by fixing $N({\rm H})$. As explained in Section~\ref{sec_cont}, we found that for Obs~6 the model can yield dramatically different SEDs, and therefore stability curves, unless we fix $N({\rm H})$. Figure~\ref{fig_ther_curve1} shows the stability curves obtained for Obs~1-4 using model B. The vertical line indicates the $\log{(\xi)}$ obtained from the best fit described in Section~\ref{sec_dis}. In all cases, the value of $\log{(\xi)}$ falls within a thermally stable region of the stability curve (defined by a positive slope in such a curve). Also, due to the low fraction of photons with energies >5$\times 10^{4}$ keV, we found no differences in the stability curves when using models C and D as ionizing SEDs. The middle and right lower panels in the same figure show the stability curves for Obs~6 and 7, obtained using Models~C and D. For comparison, we also plot the stability curve of Obs~1. The shaded region corresponds to the uncertainties of the stability curves for heating and cooling errors of $15\%$, while the black stars indicate the best-fit parameters obtained for Obs~1 (see Section~\ref{sec_disppaer} for an explanation of the horizontal arrows).
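The stability classification itself (stable where the slope of $\log T$ versus $\log(\xi/T)$ is positive) can be sketched on a toy curve; this is purely illustrative, since the real curves come from the {\sc xstar} heating/cooling balance:

```python
def stable_mask(log_T, log_xi):
    """Mark thermally stable segments of an equilibrium curve: stable
    where the slope of log T versus log(xi/T) is positive.  The input
    arrays here are a toy curve, not real XSTAR output."""
    log_xi_over_T = [x - t for x, t in zip(log_xi, log_T)]
    mask = []
    for i in range(1, len(log_T)):
        dT = log_T[i] - log_T[i - 1]
        dX = log_xi_over_T[i] - log_xi_over_T[i - 1]
        mask.append(dX != 0 and dT / dX > 0)
    return mask

# Toy S-shaped curve: stable branches at low and high T, unstable between
log_T  = [4.0, 4.5, 5.0, 5.5, 6.0]
log_xi = [1.0, 2.0, 2.2, 2.5, 4.0]
```
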
When using model D the stability curve is practically identical between Obs~1-4 and~6, while for Obs~7 it evolves to a higher Compton temperature, with more unstable regions, characteristic of harder SEDs. When using model C, the curve shows a lower Compton temperature and more stable regions for Obs~6, compared to Obs~1-4, but this is likely a consequence of the degeneracies in the $\Gamma$-$N({\rm H})$ parameters of the model (see Figure~\ref{fig_contour}). Finally, we have considered the effect of dust scattering in the spectra of 4U~1630-47, which has been reported previously \citep{hor14,nei14,kal18}. The overall effect of the scattering in the spectra (and therefore in the SEDs) consists of a hardening of the continuum, due to the $E^{-2}$ dependence of the dust model. Such an effect could be larger if the dust is close to the observer. For the current observations, we have found that the inclusion of a dust scattering component (namely {\tt xscat} in the {\sc xspec} software) does not improve the statistic of the fits, i.e. the dust contribution cannot be determined with our observations, which cannot distinguish between models with and without dust scattering. However, despite the model degeneracy, the dust scattering will undoubtedly have an effect on the recovered SED. Therefore, we tested its effect on Obs~6 by performing fits including dust scattering in two extreme cases (i.e. by fixing the location of the dust to be very close to and far away from the source) to compare with the model without dust scattering. In both cases, the inclusion of dust scattering makes the recovered SED softer and therefore reinforces our conclusion that the wind in 4U~1630-47 has disappeared before the SED hardens enough to cause a thermal instability. \section{Discussion}\label{sec_dis2} We have performed a detailed analysis of six high-resolution {\it Chandra} spectra of the LMXB 4U~1630-47. We discuss below the findings of our study.
\subsection{Launching mechanism}\label{sec_launching} From Obs~1-4, we have estimated the launching radius of the wind to be at $1.3$--$2.1$ $\times10^{11}$~cm, although, as described in Section~\ref{sec_dis}, such values depend on the hydrogen plasma density. It has been shown that a thermally driven wind can be launched at $\sim 10^{10} -5\times 10^{12}$ cm \citep{beg83,woo96,hig17}. Radiative pressure due to line transitions in the UV energy band is negligible in this case because of the high disc temperature, which causes the SEDs to peak in the X-ray energy band, requiring unphysically large X-ray opacities for the line force to launch the wind \citep{pro02}, although \citet{wat18} point to the possibility that line radiation pressure is also present in X-ray binaries. We suggest that thermal pressure is the dominant process, as proposed by previous analyses of the source \citep{kub07,dia14}, given that velocities up to $\sim 600$ km/s can be reproduced with such a mechanism \citep[see e.g.][]{hig15}. In principle, a magnetically driven wind cannot be excluded \citep{kub07}. However, more simulations are needed to understand whether the parameters of the observed wind can be reproduced by a magnetically driven wind \citep[see e.g.][for cases where it may not]{wat18}. It is important to note that the launching radius obtained from the ionization parameter depends on the SED and therefore on the continuum modeling. In addition, systematic uncertainties due to differences in the calibration of the instruments need to be taken into account when computing fluxes. \citet{mad17} computed cross-normalization constants between the {\it Chandra}, {\it Suzaku}, {\it NuSTAR}, {\it Swift} and {\it XMM-Newton} instruments in the 1--3 keV and 5--7 keV energy ranges, using the quasar 3C~273 and the BL Lac PKS~2155-30 as calibration sources. They found that differences in flux measurements between the instruments in these energy ranges can be up to $\sim 15\%$.
Also, the model degeneracy described in Section~\ref{sec_cont} increases the uncertainties in the flux calculation. For example, there is a factor of 5 difference in the predicted 0.01--13~keV flux between models C and D for Obs~6. The assumed electron density affects the launching radius estimate as well. For example, if we include the instrumental flux uncertainty ($\sim 15\%$) and consider density values of $10^{11}$--$10^{13}$ cm$^{-3}$, we can estimate launching radii varying in the range $0.1$--$3.6$ $\times10^{11}$~cm for Obs~1--4, a range that includes the values previously found by other authors. In conclusion, because any inference about the physical conditions of the plasma, such as the wind launching radius (and hence the analysis of the launching mechanism), depends on the flux obtained from the continuum fitting and on the hardness of the spectrum via the value of $\xi$, it is crucial (1) to construct the most accurate SED by including all information available from multiple energy bands (e.g. we included {\it Swift/BAT} measurements as a fitting constraint) and (2) to consider the uncertainties inherent to the instruments. \subsection{The disappearance of the wind}\label{sec_disppaer} Given that disc winds are mainly found in soft states of BH LMXBs while radio jets are found mainly in hard states, it has been suggested that disc winds and jets are mutually exclusive \citep{neil09}. Although recent studies have shown the presence of disc wind signatures in hard accretion states \citep{hom16,mun17,all18}, for 4U~1630-47 it is clear that the disc wind has already disappeared in Obs~6, well before reaching the hard state. Among the theories to explain the absence of a wind in the hard state are: thermal instabilities \citep{cha13,bia17}, full ionization of the plasma \citep{ued10,dia12,dia14,dia16} and disc geometry changes \citep{ued10,mil12b,pon12}. 
However, given that the wind has already disappeared during Obs~6, which we classify as a relatively soft-intermediate state, and during Obs~7 (see Figure~\ref{limit_nh}), we next checked whether any of the above reasons could explain the absence of the wind. Thermal instabilities have been proposed as an explanation for the disappearance of the wind during the hard state \citep{cha13,bia17}. Following \citet{cha16} and \citet{bia17}, if we assume that the physical properties of the plasma do not change in the transition between different accretion states (i.e. $nr^{2}=$constant), we can trace the path from the best-fit value obtained in Obs~1 to the stability curves obtained for Obs~6 and 7. These transitions are indicated in Figure~\ref{fig_ther_curve1} by horizontal arrows. We found that, under this assumption, the best fits lie in a thermally stable region of the stability curve when reaching Obs~6, and therefore the absence of wind-related absorption lines is not expected, given the predicted thermal equilibrium state of the gas. In the case of Obs~7 the stability curves indicate that the best-fit parameters lie in a slightly unstable region. Next, we checked whether the plasma could be fully ionized in Obs~6 and~7 and therefore not visible via line absorption. Figure~\ref{fig_ion_fractions} shows the ion fractions for {\rm Fe}~{\sc XXIV}, {\rm Fe}~{\sc XXV} and {\rm Fe}~{\sc XXVI} as a function of $\log{(\xi)}$ for Obs~1 (solid lines), Obs~6 (dotted lines) and Obs~7 (dashed lines). It is clear from the plot that {\rm Fe}~{\sc XXV} and {\rm Fe}~{\sc XXVI} absorption lines should be observed for Obs~6, given the predicted ion ratios for the $\log{(\xi)}$ obtained when assuming the $nr^{2}=$constant condition in the transition between observations (dotted vertical line). Also, a small fraction of {\rm Fe}~{\sc XXVI} should be observed for Obs~7, although this fraction is lower when considering models C and D. 
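The bookkeeping behind these transitions is simple: if $nr^{2}$ is held constant, then $\xi = L/(nr^{2})$ rescales linearly with the ionizing luminosity, which is how the horizontal arrows between stability curves are constructed. A sketch with placeholder luminosities (illustrative values only, not our fitted fluxes):

```python
import math

def shifted_log_xi(log_xi_old, L_old, L_new):
    """New log10(xi) after a change of ionizing luminosity, assuming
    n*r^2 stays constant so that xi = L/(n r^2) scales linearly with L."""
    return log_xi_old + math.log10(L_new / L_old)

# Placeholder: a factor-of-ten luminosity drop between two observations
# shifts log(xi) down by exactly one decade.
print(shifted_log_xi(4.0, 1e38, 1e37))
```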
That is, neither thermal instabilities nor over-ionization can explain the absence of absorption lines associated with the disc wind in Obs~6 and Obs~7. However, we note that the assumption of $nr^{2}=$constant is not correct for thermal winds \citep{don18}. In particular, \citet{don18} show that during hard states the Compton temperature is higher and consequently winds could be launched from radii that are smaller by up to one order of magnitude with respect to soft states. Indeed, when using model D, the Compton temperature for Obs~7 is significantly higher than for Obs~1-4 (see Figure~\ref{fig_ther_curve1}). In contrast, the assumption of $nr^{2}=$constant might be correct for our Obs~6, where a large change in the Compton temperature is not observed with the same model. Therefore, we sought an alternative explanation for the absence of the wind in Obs~6. \citet{dyd17} showed that the thermal equilibrium branches of the stability curve might never be reached if insufficient flux is heating the flow. High-flux cases, on the other hand, can lead to acceleration of the flow. Such acceleration produces a decrease in the column density of the plasma at a given $\xi$, and therefore the absorption lines will not be detected due to the low predicted $N({\rm H})$. \citet{dyd17}, in particular, used SEDs obtained from 4U~1630-47 Obs~6 and 7 as templates for different states of X-ray binaries. They found that for Obs~6 the absorption measure distribution, i.e. the distribution of the absorber column density with the ionization parameter along the line of sight, shows two regions with very low column densities, where adiabatic cooling becomes important. Nevertheless, their model predicts stable plasma regions: one with $5\times 10^{22}$ cm$^{-2}$ $< N({\rm H}) < 1\times 10^{23}$ cm$^{-2}$ for an ionization parameter $2.0<\log{\xi} < 2.1$, and a second one with $1\times 10^{21}$ cm$^{-2}$ $< N({\rm H}) < 1\times 10^{22}$ cm$^{-2}$ for $3.1<\log{\xi} < 4.0$. 
Such lower limits for the column densities are higher than the upper limits obtained from our fits (see Figure~\ref{limit_nh}). Given these results, we propose as an alternative possibility that the hot plasma has been exhausted during the soft state due to its continuous outflow, and that fresh plasma could not be sufficiently heated, owing to the lower temperature of the disc, as the source starts the transition to the hard state. Finally, we found no indications that matter has been diverted from the wind into the jet, but we cannot rule out the existence of a weak jet given the upper limits obtained from our radio observations (see Section~\ref{sec_radio}). \section{Conclusions and summary}\label{sec_con} We have analyzed six {\it Chandra} high-resolution spectra of the LMXB~4U~1630-47 obtained during the transition between soft and hard accretion states. We included {\it Swift/BAT} data in the 15--50~keV range as a hard-band constraint for the fits. We found that different phenomenological models can be used to fit the continuum. From the {\tt diskbb} component we estimated a disc inner radius between 33 km and 36 km (assuming a disc inclination of 75$^{\circ}$). Absorption lines are only identified for Obs~1-4, which correspond to a soft accretion state. Common lines include {\rm Fe}~{\sc XXVI} K$\alpha$, K$\beta$, {\rm Fe}~{\sc XXV} K$\alpha$, K$\beta$ and {\rm Ca}~{\sc XX} K$\alpha$, while {\rm Ar}~{\sc XVIII}, {\rm S}~{\sc XVI} and {\rm Si}~{\sc XIV} ions are only identified in Obs~3. We noted that the {\rm Fe}~{\sc XXV} K$\alpha$ line in Obs~3 may be blended with the {\rm Fe}~{\sc XXIV} K$\alpha$ line. We used the {\tt warmabs} photoionization model to fit the spectra of Obs~1-4. The best-fit parameters are similar between the observations, except for the decrease of $\log{(\xi)}$ in Obs~3, which leads to the formation of absorption lines associated with ions in lower ionization states. 
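The inner-radius estimate quoted above comes from the standard {\tt diskbb} normalization convention, ${\rm norm} = (R_{\rm in}[{\rm km}]/(D/10\,{\rm kpc}))^{2}\cos i$, which gives the apparent (uncorrected) radius. A sketch of the conversion (Python; the normalization and distance below are placeholder values for illustration, not our fitted numbers):

```python
import math

def diskbb_inner_radius_km(norm, distance_kpc, incl_deg):
    """Apparent inner disc radius in km from the xspec diskbb
    normalization, norm = (R_in[km] / (D / 10 kpc))^2 * cos(i).
    No colour or torque corrections are applied."""
    return math.sqrt(norm / math.cos(math.radians(incl_deg))) * distance_kpc / 10.0

# Placeholder values: norm = 100, D = 10 kpc, i = 75 degrees.
print(round(diskbb_inner_radius_km(100.0, 10.0, 75.0), 1))
```

Note how the high assumed inclination of 75$^{\circ}$ inflates the inferred radius by a factor $1/\sqrt{\cos i} \approx 2$ relative to a face-on disc.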
We inferred launching radii of $(1.3$--$2.0)\times 10^{11}$~cm and column densities $N({\rm H})> 10^{23}$ cm$^{-2}$. The launching radius indicates that thermal pressure is likely to be the dominant launching mechanism for the wind. We pointed out that discrepancies between the fluxes obtained from our continuum fitting and those from previous analyses of the same source can be due to instrumental systematic uncertainties as well as uncertainties in the modeling. We computed stability curves for all observations using the {\sc xstar} photoionization code. We found that the best-fit parameters obtained for Obs~1-4 lie in thermally stable parts of the curve. In the case of Obs~6 and 7 we found a thermally stable solution if we assume that $nr^{2}=$constant. Hence, thermal instabilities cannot explain the absence of absorption lines associated with the disc wind in Obs~6, before reaching the hard state. From the radio observations we found no indications that the jet has diverted matter from the wind. The discrepancies between the observations and the predictions from photoionization models may indicate an acceleration of the flow at the end of the soft state, producing a decrease in the plasma column density, or that the plasma has been exhausted during the soft state. More observations of LMXB systems during transitional states will help confirm the proposed scenarios. \section{Acknowledgements} The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. EG acknowledges support by the DFG cluster of excellence `Origin and Structure of the Universe'. JCAM-J is the recipient of an Australian Research Council Future Fellowship (FT140101082). \bibliographystyle{mnras}
\section{Introduction} Let $G$ be any finite simple graph with vertex set $V(G) = \{x_1,\ldots,x_n\}$ and edge set $E(G)$, where simple means no loops or multiple edges. The \textbf{edge ideal} of $G$ is the ideal $I(G)= \left\langle x_{i}x_{j} \mid \{x_{i},x_{j} \}\in E(G) \right\rangle$ in the polynomial ring $R = k[x_1,\ldots,x_n]$ ($k$ is any field). Describing the dictionary between the graph theoretic properties of $G$ and the algebraic properties of $I(G)$ or $R/I(G)$ is an active area of research, e.g., see \cite{MV,V}. Relating the homological invariants of $I(G)$ and the graph theoretic invariants of $G$ has proven to be a fruitful approach to building this dictionary. Recall that the \textbf{minimal graded free resolution} of $I(G) \subseteq R$ is a long exact sequence of the form \[ 0 \rightarrow \bigoplus_{j} R(-j)^{\beta_{l,j}(I(G))} \rightarrow \bigoplus_{j} R(-j)^{\beta_{l-1,j}(I(G))} \rightarrow \cdots \rightarrow \bigoplus_{j} R(-j)^{\beta_{0,j}(I(G))} \rightarrow I(G) \rightarrow 0 \] where $l \leq n$ and $R(-j)$ is the free $R$-module obtained by shifting the degrees of $R$ by $j$ (i.e., $R(-j)_a = R_{a-j}$). The number $\beta_{i,j}(I(G))$, the $i,j$-th \textbf{graded Betti number} of $I(G)$, equals the number of minimal generators of degree $j$ in the $i$-th syzygy module of $I(G)$. Two invariants that measure the ``size'' of the resolution are the \textbf{(Castelnuovo-Mumford) regularity} and the \textbf{projective dimension}, defined as \begin{eqnarray*} {\rm reg}(I(G)) & = & \max\{ j-i \mid \beta_{i,j}(I(G))\neq 0 \}, ~~\mbox{and}~~\\ {\rm pd}(I(G)) & = & \max\{ i \mid \beta_{i,j}(I(G))\neq 0 ~~\mbox{for some $j$}\}. \end{eqnarray*} One wishes to relate the numbers $\beta_{i,j}(I(G))$ to the invariants of $G$; e.g., see the survey of H\`a \cite{Ha} which focuses on describing ${\rm reg}(I(G))$ in terms of the invariants of $G$. 
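These two maxima are straightforward to read off once a Betti table is known. As a small illustration (Python, used here only as a hypothetical helper; the table below encodes the well-known resolution $0 \rightarrow R(-3)^{2} \rightarrow R(-2)^{3} \rightarrow I(K_3) \rightarrow 0$ of the edge ideal $I(K_3) = \langle xy, xz, yz\rangle$):

```python
def regularity(betti):
    """reg(I) = max{ j - i : beta_{i,j}(I) != 0 }, with the Betti table
    given as a dict {(i, j): beta_{i,j}(I)}."""
    return max(j - i for (i, j), b in betti.items() if b != 0)

def projective_dimension(betti):
    """pd(I) = max{ i : beta_{i,j}(I) != 0 for some j }."""
    return max(i for (i, j), b in betti.items() if b != 0)

# Edge ideal of the triangle K_3: beta_{0,2} = 3 and beta_{1,3} = 2.
betti_K3 = {(0, 2): 3, (1, 3): 2}
print(regularity(betti_K3), projective_dimension(betti_K3))  # 2 1
```

The value ${\rm reg}(I(K_3)) = 2$ agrees with Fr\"oberg's Theorem, since $K_3^c$ (three isolated vertices) is chordal.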
In this note we give explicit formulas for ${\rm reg}(I(G))$ for the edge ideals of two infinite families of circulant graphs. Our results complement previous work on the algebraic and combinatorial topological properties of circulant graphs (e.g., \cite{EVMVT,M,Ri,RiRo,RR,R,VMVTW}). Fix an integer $n \geq 1$ and a subset $S \subseteq \{1,\ldots,\lfloor \frac{n}{2} \rfloor \}$. The \textbf{circulant graph} $C_n(S)$ is the graph on the vertex set $\{x_{1},\ldots,x_{n}\}$ such that $\{x_i,x_j\} \in E(C_n(S))$ if and only if $|i-j| \in S$ or $n-|i-j| \in S$. To simplify notation, we write $C_n(a_1,\ldots,a_t)$ instead of $C_n(\{a_1,\ldots,a_t\})$. As an example, the graph $C_{10}(1,3)$ is drawn in Figure \ref{circulantex}. \begin{figure}[h!] \Circulant{10}{1,3} \caption{The circulant $C_{10}(1,3)$}\label{circulantex} \end{figure} When $S = \{1,\ldots,\lfloor \frac{n}{2} \rfloor\}$, then $C_n(S) \cong K_n$, the clique on $n$ vertices. On the other hand, if $S = \{1\}$, then $C_n(1) \cong C_n$, the cycle on $n$ vertices. For both of these families, the regularity of their edge ideals is known. Specifically, the ideal $I(K_n)$ has a linear resolution by Fr\"oberg's Theorem \cite{F}, so ${\rm reg}(I(K_n))=2$. The value of ${\rm reg}(I(C_n))$ can be deduced from work of Jacques \cite[Theorem 7.6.28]{J}. One can view these circulant graphs as ``extremal'' cases in the sense that $|S|$ is either as large, or small, as possible. Our motivation is to understand the next open cases. In particular, generalizing the case of $K_n$, we compute ${\rm reg}(I(C_n(S)))$ when $S = \{1,\ldots,\widehat{j},\ldots,\lfloor \frac{n}{2} \rfloor\}$ for any $1 \leq j \leq \lfloor \frac{n}{2} \rfloor$ (Theorem \ref{maintheorem1}). For most $j$, the regularity follows from Fr\"oberg's Theorem and a result of Nevo \cite{E}. 
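The adjacency rule above is simple to implement, which is convenient for experimenting with small circulants (a Python sketch in the spirit of the computations that inspired our results; the helper name is ours):

```python
def circulant_edges(n, S):
    """Edge set of the circulant graph C_n(S) on vertices 0,...,n-1:
    vertices i and j are adjacent iff |i - j| in S or n - |i - j| in S."""
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if (j - i) in S or (n - (j - i)) in S}

# C_10(1,3) is 4-regular, hence has 20 edges; C_6(3) is a perfect
# matching on 6 vertices, hence has 3 edges.
print(len(circulant_edges(10, {1, 3})), len(circulant_edges(6, {3})))  # 20 3
```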
To generalize the case of $C_n$ (a circulant graph where every vertex has degree two), we compute the regularity of the edge ideal of any cubic (every vertex has degree three) circulant graph, that is, $G = C_{2n}(a,n)$ with $1 \leq a < n$ (Theorem \ref{maintheorem2}). Our proof of Theorem \ref{maintheorem2} requires a new technique to compute ${\rm reg}(I)$ for a square-free monomial ideal. Specifically, we show how to use partial information about ${\rm reg}(I)$, ${\rm pd}(I)$, and the reduced Euler characteristic of the simplicial complex associated with $I$, to determine ${\rm reg}(I)$ exactly (see Theorem \ref{new-reg-result}). We believe this result to be of independent interest. Our paper is structured as follows. In Section~\ref{Background} we recall the relevant background regarding graph theory and commutative algebra, along with our new result on the regularity of square-free monomial ideals. In Section~\ref{reg-hat-j} we compute the regularity of $I(G)$ for the family of graphs $G =C_{n}(1,\ldots,\widehat{j},\ldots,\lfloor \frac{n}{2} \rfloor)$. In Section~\ref{regularity-two-n-a-n}, we give an explicit formula for the regularity of edge ideals of cubic circulant graphs. \noindent {\bf Acknowledgements.} The authors thank Federico Galetto and Andrew Nicas for their comments and suggestions. Computation using {\it Macaulay2} \cite{Mt} inspired some of our results. The first author thanks CONACYT for financial support, and the second author acknowledges the financial support of NSERC RGPIN-2019-05412. \section{Background} \label{Background} We review the relevant background from graph theory and commutative algebra. In addition, we give a new result on the regularity of square-free monomial ideals. \subsection{Graph theory preliminaries} Let $G = (V(G),E(G))$ denote a finite simple graph. We abuse notation and write $xy$ for the edge $\{x,y\} \in E(G)$. 
The {\bf complement} of $G$, denoted $G^c$, is the graph $(V(G^c),E(G^c))$ where $V(G^c) = V(G)$ and $E(G^c) = \{xy ~|~ xy \not\in E(G)\}$. The {\bf neighbours} of $x \in V(G)$ are the elements of the set $N(x) = \{y ~|~ xy \in E(G)\}$. The {\bf closed neighbourhood} of $x$ is $N[x] = N(x) \cup \{x\}$. The {\bf degree} of $x$ is $\deg(x) = |N(x)|$. A graph $H = (V(H),E(H))$ is a {\bf subgraph} of $G$ if $V(H) \subseteq V(G)$ and $E(H) \subseteq E(G)$. Given a subset $W \subseteq V(G)$, the {\bf induced subgraph} of $G$ on $W$ is the graph $G_W = (W,E(G_W))$ where $E(G_W) = \{ xy \in E(G) ~|~ \{x,y\} \subseteq W\}$. Notice that an induced subgraph is a subgraph of $G$, but not every subgraph of $G$ is an induced subgraph. An {\bf $n$-cycle}, denoted $C_n$, is the graph with $V(C_n) = \{x_1,\ldots,x_n\}$ and edges $E(C_n) = \{x_1x_2,x_2x_3,\ldots,x_{n-1}x_n,x_nx_1\}$. A graph $G$ has a {\bf cycle} of length $n$ if $G$ has a subgraph of the form $C_n$. A graph $G$ is \textbf{chordal} if it has no induced subgraph of the form $C_n$ with $n \geq 4$. A graph $G$ is \textbf{co-chordal} if $G^c$ is chordal. The \textbf{co-chordal number} of $G$, denoted ${\rm co\mbox{-}chord}(G)$, is the smallest number $s$ of subgraphs of $G$ such that $G = G_1 \cup \cdots \cup G_s$ and each $G_i^c$ is a chordal graph. A \textbf{claw} is the graph with vertex set $\{x_1,x_2,x_3,x_4\}$ and edge set $\{x_1x_2,x_1x_3,x_1x_4\}$. A graph is {\bf claw-free} if no induced subgraph of the graph is a claw. A graph $G$ is {\bf gap-free} if no induced subgraph of $G^c$ is a $C_4$. Finally, the {\bf complete graph} $K_n$ is the graph with $V(K_n) = \{x_1,\ldots,x_n\}$ and $E(K_n) = \{x_ix_j ~|~ 1 \leq i < j \leq n\}$. \subsection{Algebraic preliminaries} We recall some facts about the regularity of $I(G)$. Note that for any homogeneous ideal, ${\rm reg}(I) = {\rm reg}(R/I) + 1$. We collect together a number of useful results on the regularity of edge ideals. 
\begin{theorem}\label{reg-results} Let $G$ be a finite simple graph. Then \begin{enumerate} \item[$(i)$] if $G = H \cup K$, with $H$ and $K$ disjoint, then \[{\rm reg}(R/I(G)) = {\rm reg}(R/I(H)) + {\rm reg}(R/I(K)).\] \item[$(ii)$] ${\rm reg}(I(G)) = 2$ if and only if $G^c$ is a chordal graph. \item[$(iii$)] ${\rm reg}(I(G)) \leq {\rm co\mbox{-}chord}(G) +1$. \item[$(iv)$] if $G$ is gap-free and claw-free, then ${\rm reg}(I(G)) \leq 3$. \item[$(v)$] if $x \in V(G)$, then ${\rm reg}(I(G))\in{ \{ {\rm reg}(I( G \setminus N_{G}[x]))+1, {\rm reg}(I( G \setminus x)) \} }.$ \end{enumerate} \end{theorem} \begin{proof} For $(i)$, see Woodroofe \cite[Lemma 8]{W}. Statement $(ii)$ is Fr\"oberg's Theorem \cite[Theorem 1]{F}. Woodroofe \cite[Theorem 1]{W} first proved $(iii)$. Nevo first proved $(iv)$ in \cite[Theorem 5.1]{E}. For $(v)$, see Dao, Huneke, and Schweig \cite[Lemma 3.1]{DHS}. \end{proof} We require a result of Kalai and Meshulam \cite{KM} that has been specialized to edge ideals. \begin{theorem}\cite[Theorems 1.4 and 1.5]{KM} \label{KM} Let $G$ be a finite simple graph, and suppose $H$ and $K$ are subgraphs such that $G = H \cup K$. Then, \begin{enumerate} \item[$(i)$] ${\rm reg}(R/I(G)) \leq {\rm reg}(R/I(H)) + {\rm reg}(R/I(K))$, and \item[$(ii)$] ${\rm pd}(I(G)) \leq {\rm pd}(I(H)) + {\rm pd}(I(K)) + 1$. \end{enumerate} \end{theorem} We now introduce a new result on the regularity of edge ideals. In fact, because our result holds for all square-free monomial ideals, we present the more general case. Recall the following facts about simplicial complexes. A \textbf{simplicial complex} $\Delta$ on a vertex set $V=\{x_{1},\ldots,x_{n}\}$ is a set of subsets of $V$ that satisfies: $(i)$ if $F\in{\Delta}$ and $G \subseteq F$, then $G\in{\Delta}$, and $(ii)$ for each $i\in{\{1,\ldots,n\}}$, $\{x_{i}\}\in \Delta$. Note that condition $(i)$ implies that $\emptyset \in{\Delta}$. The elements of $\Delta$ are called its \textbf{faces}. 
For any $W \subseteq V$, the restriction of $\Delta$ to $W$ is the simplicial complex $\Delta_{W}=\{ F\in{\Delta} \mid F\subseteq W\}$. The \textbf{dimension} of a face $F\in{\Delta}$ is given by ${\rm dim}(F)=\left|F\right|-1$. The \textbf{dimension} of a simplicial complex, denoted by ${\rm dim}(\Delta)$, is the maximum dimension of all its faces. Let $f_i$ be the number of faces of $\Delta$ of dimension $i$, with the convention that $f_{-1}=1$. If ${\rm dim}(\Delta)=D$, then the $f$-\textbf{vector} of $\Delta$ is the $(D+2)$-tuple $f(\Delta)=(f_{-1},f_0,\ldots,f_D)$. Given any simplicial complex $\Delta$ on $V$, associate with $\Delta$ a monomial ideal $I_{\Delta}$ in the polynomial ring $R=k[x_{1},\ldots,x_n]$ (with $k$ a field) as follows: \[ I_{\Delta}=\left\langle x_{j_{1}}x_{j_{2}} \cdots x_{j_{r}} \mid \{ x_{j_{1}},\ldots,x_{j_{r}} \}\notin{\Delta} \right\rangle. \] The ideal $I_{\Delta}$ is the \textbf{Stanley-Reisner ideal} of $\Delta$. This construction can be reversed. Given a square-free monomial ideal $I$ of $R$, the simplicial complex associated with $I$ is \[\Delta(I) = \left\{\{x_{i_1},\ldots,x_{i_r}\} ~|~ \mbox{the square-free monomial} ~x_{i_1}\cdots x_{i_r}\not\in I \right\}.\] Given a square-free monomial ideal $I$, Hochster's Formula relates the Betti numbers of $I$ to the reduced simplicial homology of $\Delta(I)$. See \cite[Section 6.2]{V} for more background on $\widetilde{H}_j(\Gamma;k)$, the $j$-th reduced simplicial homology group of a simplicial complex $\Gamma$. \begin{theorem}{\rm(Hochster's Formula)} \label{Hochsters-Formula} Let $I \subseteq R = k[x_1,\ldots,x_n]$ be a square-free monomial ideal, and set $\Delta = \Delta(I)$. Then, for all $i,j \geq 0$, \[ \beta_{i,j}(I)= \sum_{\left| W \right|=j,~ W \subseteq V} \dim_k \widetilde{H}_{j-i-2}(\Delta_W ; k). 
\] \end{theorem} Given a simplicial complex $\Delta$ of dimension $D$, the dimensions of the homology groups $\widetilde{H}_{i}(\Delta ; k)$ are related to the $f$-vector $f(\Delta)$ via the \textbf{reduced Euler characteristic}: \begin{equation} \label{def-Euler-characteristic} \widetilde{\chi}(\Delta)= \sum_{i=-1}^D (-1)^{i} \dim_k \tilde{H}_{i}(\Delta ; k)= \sum_{i=-1}^D (-1)^{i}f_i. \end{equation} Note that the reduced Euler characteristic is normally defined to be equal to one of the two sums, and then one proves the two sums are equal (e.g., see \cite[Section 6.2]{V}). Our new result on the regularity of square-free monomial ideals allows us to determine ${\rm reg}(I)$ exactly if we have enough partial information on the regularity, projective dimension, and the reduced Euler characteristic. \begin{theorem}\label{new-reg-result} Let $I$ be a square-free monomial ideal of $R = k[x_1,\ldots,x_n]$ with associated simplicial complex $\Delta = \Delta(I)$. \begin{enumerate} \item[$(i)$] Suppose that ${\rm reg}(I) \leq r$ and ${\rm pd}(I) \leq n-r+1$. \begin{enumerate} \item[$(a)$] If $r$ is even and $\widetilde{\chi}(\Delta) > 0$, then ${\rm reg}(I) =r$. \item[$(b)$] If $r$ is odd and $\widetilde{\chi}(\Delta) < 0$, then ${\rm reg}(I) =r$. \end{enumerate} \item[$(ii)$] Suppose that ${\rm reg}(I) \leq r$ and ${\rm pd}(I) \leq n-r$. If $\widetilde{\chi}(\Delta) \neq 0$, then ${\rm reg}(I) = r$. \end{enumerate} \end{theorem} \begin{proof} By Hochster's Formula (Theorem \ref{Hochsters-Formula}), note that $\beta_{a,n}(I) = \dim_k \widetilde{H}_{n-a-2}(\Delta;k)$ for all $a \geq 0$ since the only subset $W \subseteq V$ with $|W| = n$ is $V$. $(i)$ If ${\rm reg}(I) \leq r$ and ${\rm pd}(I) \leq n-r+1$ we have $\beta_{a,n}(I) = 0$ for all $a\leq n-r-1$ and $\beta_{a,n}(I) = 0$ for all $a \geq n-r+2$. 
Consequently, among all the graded Betti numbers of the form $\beta_{a,n}(I)$ as $a$ varies, only $\beta_{n-r,n}(I) = \dim_k \widetilde{H}_{r-2}(\Delta;k)$ and $\beta_{n-r+1,n}(I) = \dim_k \widetilde{H}_{r-3}(\Delta;k)$ may be non-zero. Thus by \eqref{def-Euler-characteristic} \begin{eqnarray*} \widetilde{\chi}(\Delta)& =& (-1)^{r-2} \dim_k \widetilde{H}_{r-2}(\Delta;k) + (-1)^{r-3} \dim_k \widetilde{H}_{r-3}(\Delta;k). \end{eqnarray*} If we now suppose that $r$ is even and $\widetilde{\chi}(\Delta) > 0$, the above expression implies \[\dim_k \widetilde{H}_{r-2}(\Delta;k) - \dim_k \widetilde{H}_{r-3}(\Delta;k) > 0, \] and thus $\beta_{n-r,n}(I) = \dim_k \widetilde{H}_{r-2}(\Delta;k) \neq 0$. As a consequence, ${\rm reg}(I) = r$, thus proving $(a)$. Similarly, if $r$ is odd and $\widetilde{\chi}(\Delta) < 0$, this again forces $\beta_{n-r,n}(I) = \dim_k \widetilde{H}_{r-2}(\Delta;k) \neq 0$, thus proving $(b)$. $(ii)$ Similar to part $(i)$, the hypotheses on the regularity and projective dimension imply that $\widetilde{\chi}(\Delta) = (-1)^{r-2}\dim_k \widetilde{H}_{r-2}(\Delta;k) = (-1)^{r-2}\beta_{n-r,n}(I)$. So, if $\widetilde{\chi}(\Delta) \neq 0$, then $\beta_{n-r,n}(I) \neq 0$, which implies ${\rm reg}(I) = r$. \end{proof} \begin{remark} There is a similar result to Theorem \ref{new-reg-result} for the projective dimension of $I$. In particular, under the assumptions of $(i)$ and if $r$ is even and $\widetilde{\chi}(\Delta) < 0$, or if $r$ is odd and $\widetilde{\chi}(\Delta) > 0$, then the proof of Theorem \ref{new-reg-result} shows that ${\rm pd}(I) = n-r+1.$ Under the assumptions of $(ii)$, ${\rm pd}(I) = n-r$. \end{remark} We will apply Theorem \ref{new-reg-result} to compute the regularity of cubic circulant graphs (see Theorem \ref{maintheorem2}). We will also require the following terminology and results, which relate the reduced Euler characteristic to the independence polynomial of a graph. 
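The identity $\widetilde{\chi}({\rm Ind}(G)) = -I(G,-1)$ recalled below can be sanity-checked by brute-force enumeration on small graphs (a Python sketch; for the $4$-cycle one has $I(C_4,x) = 1 + 4x + 2x^{2}$, so $-I(C_4,-1) = -(1-4+2) = 1$):

```python
from itertools import combinations

def independence_poly_at(n, edges, x):
    """Evaluate I(G, x) = sum_r i_r x^r by enumerating every independent
    set of a graph on vertices 0,...,n-1 (feasible for small n only)."""
    total = 0
    for r in range(n + 1):
        for W in combinations(range(n), r):
            Wset = set(W)
            if all(not ({u, v} <= Wset) for u, v in edges):
                total += x ** r
    return total

# Reduced Euler characteristic of the independence complex via -I(G, -1):
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]   # the 4-cycle
k3 = [(0, 1), (1, 2), (0, 2)]           # the triangle
print(-independence_poly_at(4, c4, -1))  # 1
print(-independence_poly_at(3, k3, -1))  # 2
```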
A subset $W \subseteq V(G)$ is an \textbf{independent set} if for all $e\in{E(G)}$, $e \nsubseteq W$. The \textbf{independence complex} of $G$ is the set of all independent sets: \[ {\rm Ind}(G)=\{ W \mid W \mbox{ is an independent set of } V(G)\}. \] Note that ${\rm Ind}(G) = \Delta(I(G))$, the simplicial complex associated with the edge ideal $I(G)$. The \textbf{independence polynomial} of a graph $G$ is defined as \[ I(G,x)= \sum_{r=0}^\alpha i_{r}x^{r}, \] where $i_r$ is the number of independent sets of cardinality $r$, and $\alpha$ is the maximum cardinality of an independent set of $G$. Note that $(i_0,i_1,\ldots,i_\alpha)=(f_{-1},f_0,\ldots,f_{\alpha-1})$ is the $f$-vector of ${\rm Ind}(G)$. Since $\widetilde{\chi}({\rm Ind}(G))=\sum_{i=-1}^{\alpha-1} (-1)^{i}f_i$, we get: \begin{equation} \label{eq-Euler-independence} \widetilde{\chi}({\rm Ind}(G))=-I(G,-1). \end{equation} Thus, the value of $\widetilde{\chi}({\rm Ind}(G))$ can be extracted from the independence polynomial $I(G,x)$. \section{The regularity of the edge ideals of $C_{n}(1,\ldots,\widehat{j},\ldots,\lfloor \frac{n}{2} \rfloor)$ } \label{reg-hat-j} In this section we compute the regularity of the edge ideal of the circulant graph $G = C_n(S)$ with $S = \{1,\ldots,\widehat{j},\ldots, \lfloor \frac{n}{2} \rfloor\}$ for any $j \in \{1,\ldots,\lfloor \frac{n}{2} \rfloor\}$. We begin with the observation that the complement of $G$ is also a circulant graph, and in particular, $G^c = C_n(j)$. Furthermore, we have the following structure result. \begin{lemma}\label{complement} Let $H = C_n(j)$ with $1 \leq j \leq \left\lfloor \frac{n}{2} \right\rfloor$, and set $d = {\rm gcd}(j,n)$. Then $H$ is the union of $d$ disjoint cycles of length $\frac{n}{d}$. Furthermore, $H$ is a chordal graph if and only if $n=2j$ or $n=3d$. \end{lemma} \begin{proof} Label the vertices of $H$ as $\{0,1,\ldots,n-1\}$, and set $d = {\rm gcd}(j,n)$. 
For each $0 \leq i < d$, the induced graph on the vertices $\{i,j+i,2j+i,\ldots,(\frac{n}{d}-1)j+i\}$ (with indices taken modulo $n$) is a cycle of length $\frac{n}{d}$, thus proving the first statement (if $\frac{n}{d} = 2$, then $H$ consists of disjoint edges). For the second statement, if $n=3d$, then $H$ is the disjoint union of $d$ triangles, and thus chordal. If $n=2j$, then $H$ consists of $j$ disjoint edges, and consequently, is chordal. Otherwise, $\frac{n}{d} \geq 4$, and so $H$ is not chordal. \end{proof} \begin{lemma} \label{reg-general} Let $G = C_{n}(1,\ldots,\widehat{j},\ldots,\lfloor \frac{n}{2} \rfloor)$, and $d = {\rm gcd}(j,n)$. If $\frac{n}{d} \geq 5$, then $G$ is claw-free. \end{lemma} \begin{proof} Suppose that $G$ has an induced subgraph $H$ on $\{z_1,z_2,z_3,z_4\} \subseteq V(G)$ that is a claw. Then $H^{c}$ is an induced subgraph of $G^{c}$ of the form: \begin{center} \begin{tikzpicture} [scale=.45] \draw[thick] (0,0) --(2,0)--(1,2)--(0,0); \draw [fill] (0,0) circle [radius=0.1]; \draw [fill] (2,0) circle [radius=0.1]; \draw [fill] (1,2) circle [radius=0.1]; \draw [fill] (3.5,1) circle [radius=0.1]; \node at (0,-0.5) {$z_{4}$}; \node at (2,-0.5) {$z_{2}$}; \node at (1,2.5) {$z_{3}$}; \node at (4,1) {$z_{1}$}; \end{tikzpicture} \end{center} But by Lemma \ref{complement}, $G^c$ is a disjoint union of cycles of length $\frac{n}{d} \geq 5$, and such a graph contains no triangle. Thus $G$ is claw-free. \end{proof} We now come to the main theorem of this section. \begin{theorem}\label{maintheorem1} Let $G =C_{n}(1,\ldots,\widehat{j},\ldots,\lfloor \frac{n}{2} \rfloor)$. If $d={\rm gcd}(j,n)$, then \[ {\rm reg}(I(G)) = \begin{cases} 2 & \mbox{$n=2j$ or $n=3d$} \\ 3 & \mbox{otherwise.} \end{cases} \] \end{theorem} \begin{proof} Consider $G^c = C_n(j)$. By Lemma \ref{complement}, $G^c$ consists of induced cycles of size $k = \frac{n}{d}$. Because $1 \leq j \leq \lfloor \frac{n}{2} \rfloor$, we have $2 \leq k \leq n$. 
If $k=2$ or $3$, i.e., if $n=2j$ or $n=3d$, Lemma \ref{complement} and Theorem \ref{reg-results} $(ii)$ combine to give ${\rm reg}(I(G)) = 2$. If $k \geq 5$, then Lemmas \ref{complement} and \ref{reg-general} imply that $G$ is gap-free and claw-free, while $G^c$ is not chordal; Theorem \ref{reg-results} $(iv)$ then gives ${\rm reg}(I(G)) \leq 3$, and Theorem \ref{reg-results} $(ii)$ rules out ${\rm reg}(I(G)) = 2$, so ${\rm reg}(I(G)) = 3$. To complete the proof, we need to consider the case $k=4$, i.e., $G = C_{4j}(1,\ldots,\widehat{j},\ldots,2j)$. By Lemma \ref{complement}, $G^c$ is $j$ disjoint copies of $C_4$, and thus Theorem \ref{reg-results} $(ii)$ gives ${\rm reg}(I(G)) \geq 3$. To prove that ${\rm reg}(I(G)) = 3$, we show ${\rm co\mbox{-}chord}(G)= 2$, and apply Theorem \ref{reg-results} $(iii)$. Label the vertices of $G$ as $0,1,\ldots, 4j-1$, and let \begin{eqnarray*} V_1 &= &\{0,1,2,\ldots,j-1,2j,2j+1,\ldots,3j-1\} ~~\mbox{and}~~\\ V_2 &= &\{j,j+1,\ldots,2j-1,3j,3j+1,\ldots,4j-1\}. \end{eqnarray*} Observe that the induced graph on $V_1$ (and $V_2)$ is the complete graph $K_{2j}$. Let $G_1$ be the graph with $V(G_1) = V(G)$ and edge set $E(G_1) = (E(C_{4j}(1,\ldots,j-1)) \cup E(G_{V_1})) \setminus E(G_{V_2})$. Similarly, we let $G_2$ be the graph with $V(G_2) = V(G)$ and edge set $E(G_2) = (E(C_{4j}(j+1,\ldots,2j)) \cup E(G_{V_2})) \setminus E(G_{V_1})$. We now claim that $G = G_1 \cup G_2$, and furthermore, both $G_1^c$ and $G_2^c$ are chordal, and consequently, ${\rm co\mbox{-}chord}(G) =2$. The fact that $G = G_1 \cup G_2$ follows from the fact that \begin{eqnarray*} E(G_1) \cup E(G_2)& =& E(C_{4j}(1,\ldots,j-1)) \cup E(C_{4j}(j+1,\ldots,2j))\\ &=& E(C_{4j}(1,\ldots,\widehat{j},\ldots,2j)). \end{eqnarray*} To show that $G_1^c$ is chordal, first note that the induced graph on $V_1$, that is, $(G_1)_{V_1}$ is the complete graph $K_{2j}$. In addition, the vertices $V_2$ form an independent set of $G_1$. To see why, note that if $a,b \in V_2$ are such that $ab \in E(G)$, then $ab \in E(G_{V_2})$. 
But by the construction of $E(G_1)$, none of the edges of $E(G_{V_2})$ belong to $E(G_1)$. So $ab \not\in E(G_1)$, and thus $V_2$ is an independent set in $G_1$. The above observations therefore imply that in $G_1^c$, the vertices of $V_1$ form an independent set, and $(G_1^c)_{V_2}$ is the clique $K_{2j}$. To show that $G_1^c$ is chordal, suppose that $G_1^c$ has an induced cycle of length $t \geq 4$ on $\{v_1,v_2,v_3,\ldots,v_t\}$. Since the induced graph on $(G_1^c)_{V_2}$ is a clique, at most two of the vertices of $\{v_1,v_2,\ldots,v_t\}$ can belong to $V_2$. Indeed, if there were at least three $v_i,v_j,v_k \in \{v_1,v_2,\ldots,v_t\} \cap V_2$, then the induced graph on these vertices is a triangle, contradicting the fact that $\{v_1,v_2,\ldots,v_t\}$ is an induced cycle of length $t \geq 4$. But then at least $t-2 \geq 2$ vertices of $\{v_1,v_2,\ldots,v_t\}$ must belong to $V_1$, and in particular, at least two of them are adjacent. But this cannot happen since the vertices of $V_1$ are independent in $G_1^c$. Thus, $G_1^c$ must be a chordal graph. The proof that $G_2^c$ is chordal is similar. Note that the vertices of $V_2$ are an independent set, and $(G_2^c)_{V_1}$ is the clique $K_{2j}$. The proof now proceeds as above. \end{proof} \section{Cubic circulant graphs} \label{regularity-two-n-a-n} We now compute the regularity of the edge ideals of {\bf cubic circulant graphs}, that is, circulant graphs where every vertex has degree three. Cubic circulant graphs have the form $G = C_{2n}(a,n)$ with $1 \leq a < n$. The main result of this section can also be viewed as an application of Theorem \ref{new-reg-result} to compute the regularity of a square-free monomial ideal. We begin with a structural result for cubic circulants due to Davis and Domke. \begin{theorem}\cite{DDG}\label{isomorphic-a-n} Let $1 \leq a < n$ and $t={\rm gcd}(2n,a)$. 
\begin{enumerate} \item[$(a)$] If $\frac{2n}{t}$ is even, then $C_{2n}(a,n)$ is isomorphic to $t$ copies of $C_{\frac{2n}{t}}(1,\frac{n}{t})$. \item[$(b)$] If $\frac{2n}{t}$ is odd, then $C_{2n}(a,n)$ is isomorphic to $\frac{t}{2}$ copies of $C_{\frac{4n}{t}}(2,\frac{2n}{t})$. \end{enumerate} \end{theorem} \noindent By combining the previous theorem with Theorem \ref{reg-results} $(i)$, to compute the regularity of the edge ideal of any cubic circulant graph, it suffices to compute the regularity of the edge ideals of $C_{2n}(1,n)$ and $C_{2n}(2,n)$. Observe that $n$ must be odd in $C_{2n}(2,n)$. It will be convenient to use the representation and labelling of these two graphs in Figure \ref{cubic-picture}. \begin{figure}[h!] \centering \begin{tikzpicture} [scale=.70] \draw[thick] (-7,0) --(-7,1)--(-6,1.5)--(-4.5,2.5)--(-2.5,1.5)--(-2.5,0)--(-3.5,0)--(-3.5,1)--(-4.5,1.5)--(-6,2.5)--(-8,1.5)--(-8,0)--(-7,0); \draw[thick] (-7,1)--(-8,1.5); \draw[thick] (-6,1.5)--(-6,2.5); \draw[thick] (-4.5,1.5)--(-4.5,2.5); \draw[thick] (-3.5,1)--(-2.5,1.5); \draw[thick,dashed] (-7,0) --(-7,-1)--(-6,-1.5)--(-4.5,-1.5)--(-3.5,-1)--(-3.5,0)--(-2.5,0)--(-2.5,-1.5)--(-4.5,-2.5)--(-6,-2.5)--(-8,-1.5)--(-8,0)--(-7,0); \draw[thick,dashed] (-5.25,-1.5) --(-5.25,-2.5); \draw [fill] (-7,0) circle [radius=0.1]; \draw [fill] (-7,1) circle [radius=0.1]; \draw [fill] (-6,1.5) circle [radius=0.1]; \draw [fill] (-4.5,2.5) circle [radius=0.1]; \draw [fill] (-2.5,1.5) circle [radius=0.1]; \draw [fill] (-2.5,0) circle [radius=0.1]; \draw [fill] (-3.5,0) circle [radius=0.1]; \draw [fill] (-3.5,1) circle [radius=0.1]; \draw [fill] (-4.5,1.5) circle [radius=0.1]; \draw [fill] (-6,2.5)circle [radius=0.1]; \draw [fill] (-8,1.5) circle [radius=0.1]; \draw [fill] (-8,0) circle [radius=0.1]; \draw [fill] (-5.25,-1.5) circle [radius=0.1]; \draw [fill] (-5.25,-2.5) circle [radius=0.1]; \node at (-6.3,0) {$x_{n-2}$}; \node at (-6.3,0.7) {$x_{n-1}$}; \node at (-5.7,1.2) {$x_{n}$}; \node at (-4.5,3) {$x_{n+1}$};
\node at (-1.5,1.5) {$x_{n+2}$}; \node at (-1.5,0) {$x_{n+3}$}; \node at (-4,0) {$x_{3}$}; \node at (-4,0.7) {$x_{2}$}; \node at (-4.5,1.2) {$x_{1}$}; \node at (-6,3) {$x_{2n}$}; \node at (-9,1.5) {$x_{2n-1}$}; \node at (-9,0) {$x_{2n-2}$}; \node at (-5.25,-1) {$x_{i}$}; \node at (-5.25,-3) {$x_{n+i}$}; \draw[thick] (3.5,0) --(3.5,1)--(4.5,1.5)--(6,1.5)--(7,1)--(7,0)--(8,0)--(8,1.5)--(6,2.5)--(4.5,2.5)--(2.5,1.5)--(2.5,0)--(3.5,0); \draw[thick] (3.5,1)--(2.5,1.5); \draw[thick] (4.5,1.5)--(4.5,2.5); \draw[thick] (6,1.5)--(6,2.5); \draw[thick] (7,1)--(8,1.5); \draw[thick,dashed] (3.5,0) --(3.5,-1)--(4.5,-1.5)--(6,-1.5)--(7,-1)--(7,0)--(8,0)--(8,-1.5)--(6,-2.5)--(4.5,-2.5)--(2.5,-1.5)--(2.5,0)--(3.5,0); \draw[thick,dashed] (5.25,-1.5) --(5.25,-2.5); \draw [fill] (3.5,0) circle [radius=0.1]; \draw [fill] (3.5,1) circle [radius=0.1]; \draw [fill] (4.5,1.5) circle [radius=0.1]; \draw [fill] (6,1.5) circle [radius=0.1]; \draw [fill] (7,1) circle [radius=0.1]; \draw [fill] (7,0) circle [radius=0.1]; \draw [fill] (8,0) circle [radius=0.1]; \draw [fill] (8,1.5) circle [radius=0.1]; \draw [fill] (6,2.5) circle [radius=0.1]; \draw [fill] (4.5,2.5)circle [radius=0.1]; \draw [fill] (2.5,1.5) circle [radius=0.1]; \draw [fill] (2.5,0) circle [radius=0.1]; \draw [fill] (5.25,-1.5) circle [radius=0.1]; \draw [fill] (5.25,-2.5) circle [radius=0.1]; \node at (4.3,0) {$x_{2n-5}$}; \node at (4.2,0.6) {$x_{2n-3}$}; \node at (4.9,1.1) {$x_{2n-1}$}; \node at (6,1.1) {$x_{1}$}; \node at (6.5,0.7) {$x_{3}$}; \node at (6.5,0) {$x_{5}$}; \node at (9,0) {$x_{n+5}$}; \node at (9,1.5) {$x_{n+3}$}; \node at (6,3) {$x_{n+1}$}; \node at (4.5,3) {$x_{n-1}$}; \node at (1.5,1.5) {$x_{n-3}$}; \node at (1.5,0) {$x_{n-5}$}; \node at (5.25,-1) {$x_{i}$}; \node at (5.25,-3) {$x_{n+i}$}; \end{tikzpicture} \caption{The graphs $C_{2n}(1,n)$ and $C_{2n}(2,n)$.}\label{cubic-picture} \end{figure} Our strategy is to use Theorem \ref{new-reg-result} to compute the regularity of these two graphs. 
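Before doing so, the structural Theorem \ref{isomorphic-a-n} can be checked by brute force on small cases. The following self-contained Python sketch (all function names are ours, not from the paper) verifies both branches of the theorem for a handful of cubic circulants; the cases are chosen with $t = \gcd(2n,a) \geq 2$ so that each predicted piece has at most $8$ vertices and a naive isomorphism search is feasible:

```python
from itertools import permutations
from math import gcd

def circulant(m, offsets):
    """Edge set of the circulant graph C_m(offsets) on vertices 0, ..., m-1."""
    return {frozenset((i, (i + s) % m)) for i in range(m) for s in offsets}

def components(m, edges):
    """Connected components (as vertex sets) via depth-first search."""
    adj = {i: [] for i in range(m)}
    for u, v in map(tuple, edges):
        adj[u].append(v); adj[v].append(u)
    comps, seen = [], set()
    for start in range(m):
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u])
        seen |= comp
        comps.append(comp)
    return comps

def isomorphic(edges1, verts1, edges2, m2):
    """Brute-force isomorphism test; fine for pieces with <= 8 vertices."""
    verts1 = sorted(verts1)
    if len(verts1) != m2 or len(edges1) != len(edges2):
        return False
    for perm in permutations(range(m2)):
        relabel = dict(zip(verts1, perm))
        mapped = {frozenset((relabel[u], relabel[v])) for u, v in map(tuple, edges1)}
        if mapped == edges2:
            return True
    return False

# Small cases of C_{2n}(a, n) with t = gcd(2n, a) >= 2.
for n, a in [(4, 2), (6, 2), (6, 3), (6, 4), (8, 2), (8, 6)]:
    t = gcd(2 * n, a)
    E = circulant(2 * n, [a, n])
    if (2 * n // t) % 2 == 0:   # case (a): t copies of C_{2n/t}(1, n/t)
        m, offs, copies = 2 * n // t, [1, n // t], t
    else:                        # case (b): t/2 copies of C_{4n/t}(2, 2n/t)
        m, offs, copies = 4 * n // t, [2, 2 * n // t], t // 2
    target = circulant(m, offs)
    comps = components(2 * n, E)
    assert len(comps) == copies, (n, a)
    for comp in comps:
        comp_edges = {e for e in E if e <= comp}
        assert isomorphic(comp_edges, comp, target, m), (n, a)
```

For instance, $(n,a)=(8,6)$ exercises a nontrivial relabelling: each of the two components of $C_{16}(6,8)$ is $C_8(3,4)$, which the search matches to $C_8(1,4)$ via multiplication by $3$ modulo $8$.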
Thus, we need bounds on ${\rm reg}(I(G))$ and ${\rm pd}(I(G))$, and information about the reduced Euler characteristic of ${\rm Ind}(G)$ when $G = C_{2n}(1,n)$ or $C_{2n}(2,n)$. We first bound the regularity and the projective dimension. We introduce the following three families of graphs, where $t\geq 1$ denotes the number of ``squares'': \begin{enumerate} \item[$(i)$] The family $A_t$: \[ \begin{tikzpicture} [scale=.50] \draw[thick] (-8,0.5) -- (-6,0.5) --(-4,0.5)--(-2,0.5)--(0,0.5)--(0,-0.5)--(-2,-0.5)--(-4,-0.5)--(-6,-0.5)-- (-6,0.5); \draw[thick] (-8,-.5) -- (-6,-0.5); \draw[thick] (-4,-0.5) --(-4,0.5); \draw[thick] (-2,-0.5)--(-2,0.5); \draw[thick,dashed] (0,.5)--(2,.5); \draw[thick,dashed] (0,-.5)--(2,-.5); \draw[thick] (2,0.5) --(4,0.5)--(6,0.5)--(6,-0.5)--(4,-0.5)--(2,-0.5)--(2,0.5); \draw[thick] (4,-0.5) --(4,0.5); \draw [fill] (-8,0.5) circle [radius=0.1]; \draw [fill] (-6,0.5) circle [radius=0.1]; \draw [fill] (-4,0.5) circle [radius=0.1]; \draw [fill] (-2,0.5) circle [radius=0.1]; \draw [fill] (0,0.5) circle [radius=0.1]; \draw [fill] (2,0.5) circle [radius=0.1]; \draw [fill] (4,0.5) circle [radius=0.1]; \draw [fill] (6,0.5) circle [radius=0.1]; \draw [fill] (-8,-0.5) circle [radius=0.1]; \draw [fill] (-6,-0.5) circle [radius=0.1]; \draw [fill] (-4,-0.5) circle [radius=0.1]; \draw [fill] (-2,-0.5) circle [radius=0.1]; \draw [fill] (0,-0.5) circle [radius=0.1]; \draw [fill] (2,-0.5) circle [radius=0.1]; \draw [fill] (4,-0.5) circle [radius=0.1]; \draw [fill] (6,-0.5) circle [radius=0.1]; \end{tikzpicture}\] \item[$(ii)$] The family $B_t$: \[ \begin{tikzpicture} [scale=.50] \draw[thick] (-6,0.5) --(-4,0.5)--(-2,0.5)--(0,0.5)--(0,-0.5)--(-2,-0.5)--(-4,-0.5)--(-6,-0.5)-- (-6,0.5); \draw[thick] (-4,-0.5) --(-4,0.5); \draw[thick] (-2,-0.5)--(-2,0.5); \draw[thick,dashed] (0,.5)--(2,.5); \draw[thick,dashed] (0,-.5)--(2,-.5); \draw[thick] (2,0.5) --(4,0.5)--(6,0.5)--(6,-0.5)--(4,-0.5)--(2,-0.5)--(2,0.5); \draw[thick] (4,-0.5) --(4,0.5); \draw [fill]
(-6,0.5) circle [radius=0.1]; \draw [fill] (-4,0.5) circle [radius=0.1]; \draw [fill] (-2,0.5) circle [radius=0.1]; \draw [fill] (0,0.5) circle [radius=0.1]; \draw [fill] (2,0.5) circle [radius=0.1]; \draw [fill] (4,0.5) circle [radius=0.1]; \draw [fill] (6,0.5) circle [radius=0.1]; \draw [fill] (-6,-0.5) circle [radius=0.1]; \draw [fill] (-4,-0.5) circle [radius=0.1]; \draw [fill] (-2,-0.5) circle [radius=0.1]; \draw [fill] (0,-0.5) circle [radius=0.1]; \draw [fill] (2,-0.5) circle [radius=0.1]; \draw [fill] (4,-0.5) circle [radius=0.1]; \draw [fill] (6,-0.5) circle [radius=0.1]; \end{tikzpicture} \] \item[$(iii)$] The family $D_t$: \[ \begin{tikzpicture} [scale=.50] \draw[thick] (-6,0.5) --(-4,0.5)--(-2,0.5)--(0,0.5)--(0,-0.5)--(-2,-0.5)--(-4,-0.5)--(-6,-0.5)-- (-6,0.5); \draw[thick] (-8,-0.5) --(-6,-0.5); \draw[thick] (6,0.5) --(8,0.5); \draw[thick] (-4,-0.5) --(-4,0.5); \draw[thick] (-2,-0.5)--(-2,0.5); \draw[thick,dashed] (0,.5)--(2,.5); \draw[thick,dashed] (0,-.5)--(2,-.5); \draw[thick] (2,0.5) --(4,0.5)--(6,0.5)--(6,-0.5)--(4,-0.5)--(2,-0.5)--(2,0.5); \draw[thick] (4,-0.5) --(4,0.5); \draw [fill] (-6,0.5) circle [radius=0.1]; \draw [fill] (-4,0.5) circle [radius=0.1]; \draw [fill] (-2,0.5) circle [radius=0.1]; \draw [fill] (0,0.5) circle [radius=0.1]; \draw [fill] (2,0.5) circle [radius=0.1]; \draw [fill] (4,0.5) circle [radius=0.1]; \draw [fill] (6,0.5) circle [radius=0.1]; \draw [fill] (-8,-0.5) circle [radius=0.1]; \draw [fill] (-6,-0.5) circle [radius=0.1]; \draw [fill] (-4,-0.5) circle [radius=0.1]; \draw [fill] (-2,-0.5) circle [radius=0.1]; \draw [fill] (0,-0.5) circle [radius=0.1]; \draw [fill] (2,-0.5) circle [radius=0.1]; \draw [fill] (4,-0.5) circle [radius=0.1]; \draw [fill] (6,-0.5) circle [radius=0.1]; \draw [fill] (8,0.5) circle [radius=0.1]; \end{tikzpicture} \] \end{enumerate} \begin{lemma} \label{projective-bounds} With the notation as above, we have \begin{enumerate} \item[$(i)$] If $G = A_t$, then \[{\rm reg}(I(G)) \leq \begin{cases} 
\frac{t+4}{2} & \mbox{if $t$ even} \\ \frac{t+3}{2} & \mbox{if $t$ odd} \end{cases} ~~\mbox{and}~~~ {\rm pd}(I(G)) \leq \begin{cases} \frac{3t}{2}+1 & \mbox{if $t$ even} \\ \frac{3(t-1)}{2}+2 & \mbox{if $t$ odd.} \end{cases}\] \item[$(ii)$] If $G = B_t$, then \[{\rm reg}(I(G)) \leq \begin{cases} \frac{t+4}{2} & \mbox{if $t$ even} \\ \frac{t+3}{2} & \mbox{if $t$ odd.} \end{cases}\] \item[$(iii)$] If $G = D_t$ and $t=2l+1$ with $l$ an odd number, then ${\rm reg}(I(G)) \leq \frac{t+3}{2}$. \end{enumerate} \end{lemma} \begin{proof} $(i)$ The proof is by induction on $t$. Via a direct computation (for example, using {\it Macaulay2}), one finds ${\rm reg}(I(A_1)) = 2$, ${\rm reg}(I(A_2)) = 3$, ${\rm pd}(I(A_1)) = 2$, and ${\rm pd}(I(A_2)) =4$. Our values agree with the upper bounds given in the statement, so the base cases hold. Now suppose that $t \geq 3$. The graph $A_t$ can be decomposed into the subgraphs $A_{1}$ and $A_{t-2}$, i.e., \[ \begin{tikzpicture} [scale=.50] \draw[thick] (-8,0.5) -- (-6,0.5) --(-4,0.5); \draw[thick] (-2,0.5)--(0,0.5)--(0,-0.5)--(-2,-0.5); \draw[thick] (-4,-0.5)--(-6,-0.5)-- (-6,0.5); \draw[thick] (-8,-.5) -- (-6,-0.5); \draw[thick] (-4,-0.5) --(-4,0.5); \draw[thick,dashed] (0,.5)--(2,.5); \draw[thick,dashed] (0,-.5)--(2,-.5); \draw[thick] (2,0.5) --(4,0.5)--(6,0.5)--(6,-0.5)--(4,-0.5)--(2,-0.5)--(2,0.5); \draw[thick] (4,-0.5) --(4,0.5); \draw [fill] (-8,0.5) circle [radius=0.1]; \draw [fill] (-6,0.5) circle [radius=0.1]; \draw [fill] (-4,0.5) circle [radius=0.1]; \draw [fill] (-2,0.5) circle [radius=0.1]; \draw [fill] (0,0.5) circle [radius=0.1]; \draw [fill] (2,0.5) circle [radius=0.1]; \draw [fill] (4,0.5) circle [radius=0.1]; \draw [fill] (6,0.5) circle [radius=0.1]; \draw [fill] (-8,-0.5) circle [radius=0.1]; \draw [fill] (-6,-0.5) circle [radius=0.1]; \draw [fill] (-4,-0.5) circle [radius=0.1]; \draw [fill] (-2,-0.5) circle [radius=0.1]; \draw [fill] (0,-0.5) circle [radius=0.1]; \draw [fill] (2,-0.5) circle [radius=0.1]; \draw [fill] 
(4,-0.5) circle [radius=0.1]; \draw [fill] (6,-0.5) circle [radius=0.1]; \node at (-4,1) {$a$}; \node at (-2,1) {$a$}; \node at (-4,-1) {$b$}; \node at (-2,-1) {$b$}; \end{tikzpicture}\] Suppose that $t$ is even. By Theorem \ref{KM} and by induction (and the fact that ${\rm reg}(R/I) ={\rm reg}(I) -1$) we get \[ {\rm reg}(R/I(A_t)) \leq {\rm reg}(R/I(A_1)) + {\rm reg}(R/I(A_{t-2})) \leq 1 + \frac{(t-2)+4}{2}-1 = \frac{t+4}{2}-1 \] and \[ {\rm pd}(I(A_t)) \leq {\rm pd}(I(A_1)) + {\rm pd}(I(A_{t-2})) + 1 \leq 2 + \frac{3(t-2)}{2} +1 + 1 = \frac{3t}{2}+1.\] Because the proof for odd $t$ is similar, we omit it. $(ii)$ A direct computation shows ${\rm reg}(I(B_1)) = 2$ and ${\rm reg}(I(B_2)) = 3$. If $t \geq 3$, we decompose $B_t$ into the subgraphs $B_{1}$ and $A_{t-2}$, i.e., \[ \begin{tikzpicture} [scale=.50] \draw[thick] (-6,0.5) --(-4,0.5); \draw[thick] (-2,0.5)--(0,0.5)--(0,-0.5)--(-2,-0.5); \draw[thick] (-4,-0.5)--(-6,-0.5)-- (-6,0.5); \draw[thick] (-4,-0.5) --(-4,0.5); \draw[thick,dashed] (0,.5)--(2,.5); \draw[thick,dashed] (0,-.5)--(2,-.5); \draw[thick] (2,0.5) --(4,0.5)--(6,0.5)--(6,-0.5)--(4,-0.5)--(2,-0.5)--(2,0.5); \draw[thick] (4,-0.5) --(4,0.5); \draw [fill] (-6,0.5) circle [radius=0.1]; \draw [fill] (-4,0.5) circle [radius=0.1]; \draw [fill] (-2,0.5) circle [radius=0.1]; \draw [fill] (0,0.5) circle [radius=0.1]; \draw [fill] (2,0.5) circle [radius=0.1]; \draw [fill] (4,0.5) circle [radius=0.1]; \draw [fill] (6,0.5) circle [radius=0.1]; \draw [fill] (-6,-0.5) circle [radius=0.1]; \draw [fill] (-4,-0.5) circle [radius=0.1]; \draw [fill] (-2,-0.5) circle [radius=0.1]; \draw [fill] (0,-0.5) circle [radius=0.1]; \draw [fill] (2,-0.5) circle [radius=0.1]; \draw [fill] (4,-0.5) circle [radius=0.1]; \draw [fill] (6,-0.5) circle [radius=0.1]; \node at (-4,1) {$a$}; \node at (-2,1) {$a$}; \node at (-4,-1) {$b$}; \node at (-2,-1) {$b$}; \end{tikzpicture}\] Suppose that $t$ is even.
Since ${\rm reg}(I(B_1)) =2$, Theorem~\ref{KM} and part $(i)$ above give us: \[ {\rm reg}(R/I(B_t)) \leq {\rm reg}(R/I(B_1)) + {\rm reg}(R/I(A_{t-2})) \leq \frac{(t-2)+4}{2} =\frac{t+2}{2}. \] Therefore ${\rm reg}(I(B_t)) \leq \frac{t+2}{2}+1=\frac{t+4}{2}$. When $t$ is odd, the proof is similar. $(iii)$ Because $t=2l+1$ with $l$ odd, the graph $D_t$ can be decomposed into $l+1$ subgraphs of the form $A_{1}$, i.e., \[ \begin{tikzpicture} [scale=.50] \draw[thick] (-8,0.5)--(-6,0.5); \draw[thick] (-10,-0.5) --(-8,-0.5)--(-6,-0.5)--(-4,-0.5); \draw[thick] (-8,-0.5) --(-8,0.5); \draw[thick] (-6,-0.5) --(-6,0.5); \node at (-6,1) {$a$}; \node at (-4,1) {$a$}; \node at (-4,-1) {$b$}; \node at (-2,-1) {$b$}; \draw[thick] (-2,-0.5)--(0,-0.5); \draw[thick] (-4,0.5) --(-2,0.5)--(0,0.5)--(2,0.5); \draw[thick] (-2,-0.5) --(-2,0.5); \draw[thick] (0,-0.5) --(0,0.5); \draw[thick] (4,0.5)--(6,0.5); \draw[thick] (2,-0.5) --(4,-0.5)--(6,-0.5)--(8,-0.5); \draw[thick] (4,-0.5) --(4,0.5); \draw[thick] (6,-0.5) --(6,0.5); \draw[thick] (10,-0.5)--(12,-0.5); \draw[thick] (8,0.5) --(10,0.5)--(12,0.5)--(14,0.5); \draw[thick] (10,-0.5) --(10,0.5); \draw[thick] (12,-0.5) --(12,0.5); \draw[thick,dashed] (6,.5)--(8,.5); \draw[thick,dashed] (8,-.5)--(10,-.5); \draw [fill] (-8,0.5) circle [radius=0.1]; \draw [fill] (-6,0.5) circle [radius=0.1]; \draw [fill] (-4,0.5) circle [radius=0.1]; \draw [fill] (-2,0.5) circle [radius=0.1]; \draw [fill] (0,0.5) circle [radius=0.1]; \draw [fill] (2,0.5) circle [radius=0.1]; \draw [fill] (4,0.5) circle [radius=0.1]; \draw [fill] (6,0.5) circle [radius=0.1]; \draw [fill] (8,0.5) circle [radius=0.1]; \draw [fill] (10,0.5) circle [radius=0.1]; \draw [fill] (12,0.5) circle [radius=0.1]; \draw [fill] (14,0.5) circle [radius=0.1]; \draw [fill] (-10,-0.5) circle [radius=0.1]; \draw [fill] (-8,-0.5) circle [radius=0.1]; \draw [fill] (-6,-0.5) circle [radius=0.1]; \draw [fill] (-4,-0.5) circle [radius=0.1]; \draw [fill] (-2,-0.5) circle [radius=0.1]; \draw [fill]
(0,-0.5) circle [radius=0.1]; \draw [fill] (2,-0.5) circle [radius=0.1]; \draw [fill] (4,-0.5) circle [radius=0.1]; \draw [fill] (6,-0.5) circle [radius=0.1]; \draw [fill] (8,-0.5) circle [radius=0.1]; \draw [fill] (10,-0.5) circle [radius=0.1]; \draw [fill] (12,-0.5) circle [radius=0.1]; \end{tikzpicture} \] Since ${\rm reg}(I(A_1)) = 2$, by Theorem~\ref{KM} we get ${\rm reg}(R/I(D_t)) \leq (l+1){\rm reg}(R/I(A_1)) = l+1$. Thus ${\rm reg}(I(D_t)) \leq l+2=\frac{t+3}{2}$. \end{proof} We now bound the projective dimensions of the edge ideals of $C_{2n}(1,n)$ and $C_{2n}(2,n)$. \begin{lemma}\label{pdim-bounds} Let $n \geq 4$. \hspace{.1cm}\vspace{.1cm} \begin{enumerate} \item[$(i)$] If $G = C_{2n}(1,n)$, then \[{\rm pd}(I(G)) \leq \begin{cases} 3k-1 & \mbox{if $n = 2k$} \\ 3k+1 & \mbox{if $n= 2k+1$.} \end{cases} \] \item[$(ii)$] If $G = C_{2n}(2,n)$, then ${\rm pd}(I(G)) \leq 3k+1$ where $n=2k+1$. \end{enumerate} \end{lemma} \begin{proof} $(i)$ Let $G = C_{2n}(1,n)$ and suppose that $n= 2k+1$.
The graph $C_{2n}(1,n)$ can be decomposed into the subgraphs $A_{1}$ and $A_{2k-2}$, i.e., \[ \begin{tikzpicture} [scale=.50] \draw[thick] (-8,0.5) -- (-6,0.5) --(-4,0.5); \draw[thick] (-2,0.5)--(0,0.5)--(0,-0.5)--(-2,-0.5); \draw[thick] (-4,-0.5)--(-6,-0.5)-- (-6,0.5); \draw[thick] (-8,-.5) -- (-6,-0.5); \draw[thick] (-4,-0.5) --(-4,0.5); \draw[thick,dashed] (0,.5)--(2,.5); \draw[thick,dashed] (0,-.5)--(2,-.5); \draw[thick] (2,0.5) --(4,0.5)--(6,0.5)--(6,-0.5)--(4,-0.5)--(2,-0.5)--(2,0.5); \draw[thick] (4,-0.5) --(4,0.5); \draw [fill] (-8,0.5) circle [radius=0.1]; \draw [fill] (-6,0.5) circle [radius=0.1]; \draw [fill] (-4,0.5) circle [radius=0.1]; \draw [fill] (-2,0.5) circle [radius=0.1]; \draw [fill] (0,0.5) circle [radius=0.1]; \draw [fill] (2,0.5) circle [radius=0.1]; \draw [fill] (4,0.5) circle [radius=0.1]; \draw [fill] (6,0.5) circle [radius=0.1]; \draw [fill] (-8,-0.5) circle [radius=0.1]; \draw [fill] (-6,-0.5) circle [radius=0.1]; \draw [fill] (-4,-0.5) circle [radius=0.1]; \draw [fill] (-2,-0.5) circle [radius=0.1]; \draw [fill] (0,-0.5) circle [radius=0.1]; \draw [fill] (2,-0.5) circle [radius=0.1]; \draw [fill] (4,-0.5) circle [radius=0.1]; \draw [fill] (6,-0.5) circle [radius=0.1]; \node at (-8,1) {$x_{n}$}; \node at (-8,-1) {$x_{2n}$}; \node at (-6,1) {$x_{n+1}$}; \node at (-6,-1) {$x_{1}$}; \node at (6,1) {$x_{2n}$}; \node at (6,-1) {$x_{n}$}; \node at (-4,1) {$x_{n+2}$}; \node at (-2,1) {$x_{n+2}$}; \node at (-4,-1) {$x_2$}; \node at (-2,-1) {$x_2$}; \node at (6,1) {$x_{2n}$}; \node at (6,-1) {$x_{n}$}; \end{tikzpicture} \] Note that since $n \geq 4$ and $n$ odd, $2k -2 \geq 2$. Combining Theorem~\ref{KM} and Lemma~\ref{projective-bounds} we get: \[ {\rm pd}(I(C_{2n}(1,n))) \leq {\rm pd}(I(A_{2k-2}))+{\rm pd}(I(A_1))+1 \leq \left(\frac{3(2k-2)}{2}+1\right)+3=3k+1. 
\] If $n=2k$, $C_{2n}(1,n)$ can be decomposed as in the previous case with the only difference being that $C_{2n}(1,n)$ can be decomposed into the union of the subgraphs $A_{1}$ and $A_{2k-3}$. By Theorem~\ref{KM} and Lemma~\ref{projective-bounds}: \[ {\rm pd}(I(C_{2n}(1,n))) \leq {\rm pd}(I(A_{2k-3}))+{\rm pd}(I(A_1))+1 \leq \left(\frac{3(2k-4)}{2}+2\right)+3=3k-1. \] $(ii)$ Let $G = C_{2n}(2,n)$ with $n= 2k+1$. We can draw $G$ as \[ \begin{tikzpicture} [scale=.50] \draw[thick] (-8,0.5) -- (-6,0.5) --(-4,0.5)--(-2,0.5)--(0,0.5)--(0,-0.5)--(-2,-0.5)--(-4,-0.5)--(-6,-0.5)-- (-6,0.5); \draw[thick] (-8,-.5) -- (-6,-0.5); \draw[thick] (-4,-0.5) --(-4,0.5); \draw[thick] (-2,-0.5)--(-2,0.5); \draw[thick,dashed] (0,.5)--(2,.5); \draw[thick,dashed] (0,-.5)--(2,-.5); \draw[thick] (2,0.5) --(4,0.5)--(6,0.5)--(6,-0.5)--(4,-0.5)--(2,-0.5)--(2,0.5); \draw[thick] (4,-0.5) --(4,0.5); \draw [fill] (-8,0.5) circle [radius=0.1]; \draw [fill] (-6,0.5) circle [radius=0.1]; \draw [fill] (-4,0.5) circle [radius=0.1]; \draw [fill] (-2,0.5) circle [radius=0.1]; \draw [fill] (0,0.5) circle [radius=0.1]; \draw [fill] (2,0.5) circle [radius=0.1]; \draw [fill] (4,0.5) circle [radius=0.1]; \draw [fill] (6,0.5) circle [radius=0.1]; \draw [fill] (-8,-0.5) circle [radius=0.1]; \draw [fill] (-6,-0.5) circle [radius=0.1]; \draw [fill] (-4,-0.5) circle [radius=0.1]; \draw [fill] (-2,-0.5) circle [radius=0.1]; \draw [fill] (0,-0.5) circle [radius=0.1]; \draw [fill] (2,-0.5) circle [radius=0.1]; \draw [fill] (4,-0.5) circle [radius=0.1]; \draw [fill] (6,-0.5) circle [radius=0.1]; \node at (-8,1) {$x_{2n}$}; \node at (-8,-1) {$x_{n}$}; \node at (-6,1) {$x_{n+1}$}; \node at (-6,-1) {$x_{1}$}; \node at (6,1) {$x_{2n}$}; \node at (6,-1) {$x_{n}$}; \end{tikzpicture} \] The previous representation of $G$ contains $2k$ squares. Then the graph $G$ can be decomposed into the subgraphs $A_{1}$ and $A_{2k-2}$, and the proof runs as in $(i)$. \end{proof} We now determine bounds on the regularity. 
\begin{lemma}\label{reg-bounds} Let $n \geq 4$. \hspace{.1cm}\vspace{.1cm} \begin{enumerate} \item[$(i)$] If $G = C_{2n}(1,n)$, then \[{\rm reg}(I(G)) \leq \begin{cases} k+1 & \mbox{if $n = 2k$, or if $n=2k+1$ and $k$ odd} \\ k+2 & \mbox{if $n= 2k+1$ and $k$ even.} \end{cases} \] \item[$(ii)$] If $G = C_{2n}(2,n)$, then \[{\rm reg}(I(G)) \leq \begin{cases} k+1 & \mbox{if $n=2k+1$ and $k$ even} \\ k+2 & \mbox{if $n=2k+1$ and $k$ odd.} \end{cases} \] \end{enumerate} \end{lemma} \begin{proof} $(i)$ Let $G = C_{2n}(1,n)$. We now consider three cases. \noindent {\it Case 1.} $n = 2k$. \noindent In Lemma~\ref{pdim-bounds} $(i)$ we saw that $G$ can be decomposed into the subgraphs $A_{1}$ and $A_{2k-3}$. By Theorem~\ref{KM} and Lemma~\ref{projective-bounds} we get: \[ {\rm reg}(R/I(G)) \leq {\rm reg}(R/I(A_1)) + {\rm reg}(R/I(A_{2k-3})) \leq k.\] \noindent {\it Case 2.} $n=2k+1$ with $k$ an odd number. \noindent Using Theorem~\ref{reg-results} $(v)$, we have: \[ {\rm reg}(I(G)) \in \{{\rm reg}(I(G\setminus x_1)),\ {\rm reg}(I(G\setminus N[x_1]))+1\}.\] If we set $W = G \setminus x_1$, then by applying Theorem \ref{reg-results} $(v)$ again, we have \[ {\rm reg}(I(G)) \in \{{\rm reg}(I(W\setminus x_{n+1})),\ {\rm reg}(I(W\setminus N[x_{n+1}]))+1,\ {\rm reg}(I(G\setminus N[x_1]))+1\}. \] We have $G \setminus N[x_1] \cong W \setminus N[x_{n+1}] \cong D_{2k-3}$. Moreover, $2k-3=2(k-2)+1$, and since $k$ is an odd number, $k-2$ is also odd. Thus by Lemma~\ref{projective-bounds} $(iii)$ we obtain ${\rm reg}(I(D_{2k-3}))\leq \frac{2k-3+3}{2}=k$. On the other hand, the graph $W \setminus x_{n+1} = (G\setminus x_1) \setminus x_{n+1} \cong B_{2k-1}$, so by Lemma~\ref{projective-bounds} $(ii)$ we have ${\rm reg}(I(W \setminus x_{n+1}))\leq \frac{2k-1+3}{2} = k+1$. Thus, ${\rm reg}(I(G)) \leq k+1$. \noindent {\it Case 3.} $n=2k+1$ with $k$ an even number.
\noindent In Lemma~\ref{pdim-bounds} $(i)$ we saw that $G$ can be decomposed into the subgraphs $A_{1}$ and $A_{2k-2}$, and the proof runs as in Case 1. $(ii)$ Let $G = C_{2n}(2,n)$. We consider two cases. \noindent {\it Case 1.} $n = 2k+1$ with $k$ an even number. \noindent As in the second case of $(i)$, by Theorem~\ref{reg-results} $(v)$ we have \[ {\rm reg}(I(G)) \in \{{\rm reg}(I(W\setminus x_{n+1})),\ {\rm reg}(I(W\setminus N[x_{n+1}]))+1,\ {\rm reg}(I(G\setminus N[x_1]))+1\}, \] where $W = G\setminus x_1$. In particular, $W \setminus N[x_{n+1}] \cong G \setminus N[x_1]$. The graph $G\setminus N[x_1]$ can be represented as \[ \begin{tikzpicture} [scale=.50] \draw[thick] (-6,0.5) --(-4,0.5)--(-2,0.5)--(0,0.5)--(0,-0.5)--(-2,-0.5)--(-4,-0.5)--(-6,-0.5)-- (-6,0.5); \draw[thick] (-8,-0.5) --(-6,-0.5); \draw[thick] (6,-0.5) --(8,-0.5); \draw[thick] (-4,-0.5) --(-4,0.5); \draw[thick] (-2,-0.5)--(-2,0.5); \draw[thick,dashed] (0,.5)--(2,.5); \draw[thick,dashed] (0,-.5)--(2,-.5); \draw[thick] (2,0.5) --(4,0.5)--(6,0.5)--(6,-0.5)--(4,-0.5)--(2,-0.5)--(2,0.5); \draw[thick] (4,-0.5) --(4,0.5); \draw [fill] (-6,0.5) circle [radius=0.1]; \draw [fill] (-4,0.5) circle [radius=0.1]; \draw [fill] (-2,0.5) circle [radius=0.1]; \draw [fill] (0,0.5) circle [radius=0.1]; \draw [fill] (2,0.5) circle [radius=0.1]; \draw [fill] (4,0.5) circle [radius=0.1]; \draw [fill] (6,0.5) circle [radius=0.1]; \draw [fill] (-8,-0.5) circle [radius=0.1]; \draw [fill] (-6,-0.5) circle [radius=0.1]; \draw [fill] (-4,-0.5) circle [radius=0.1]; \draw [fill] (-2,-0.5) circle [radius=0.1]; \draw [fill] (0,-0.5) circle [radius=0.1]; \draw [fill] (2,-0.5) circle [radius=0.1]; \draw [fill] (4,-0.5) circle [radius=0.1]; \draw [fill] (6,-0.5) circle [radius=0.1]; \draw [fill] (8,-0.5) circle [radius=0.1]; \end{tikzpicture} \] The previous representation of $G\setminus N[x_1]$ contains $2k-3$ squares.
It follows that $G\setminus N[x_1]$ can be decomposed into the subgraphs $D_{2k-5}$ and $A_{1}$, i.e., \[ \begin{tikzpicture} [scale=.50] \draw[thick] (-6,0.5) --(-4,0.5)--(-2,0.5)--(0,0.5)--(0,-0.5)--(-2,-0.5)--(-4,-0.5)--(-6,-0.5)-- (-6,0.5); \draw[thick] (-8,-0.5) --(-6,-0.5); \draw[thick] (-4,-0.5) --(-4,0.5); \draw[thick] (-2,-0.5)--(-2,0.5); \draw[thick,dashed] (0,.5)--(2,.5); \draw[thick,dashed] (0,-.5)--(2,-.5); \draw[thick] (2,0.5) --(2,-0.5); \draw[thick] (2,0.5) --(4,0.5); \draw[thick] (8,0.5) --(10,0.5)--(10,-0.5)--(8,-0.5)--(8,0.5); \draw[thick] (10,-0.5) --(12,-0.5); \draw[thick] (8,-0.5)--(6,-0.5); \draw [fill] (-6,0.5) circle [radius=0.1]; \draw [fill] (-4,0.5) circle [radius=0.1]; \draw [fill] (-2,0.5) circle [radius=0.1]; \draw [fill] (0,0.5) circle [radius=0.1]; \draw [fill] (2,0.5) circle [radius=0.1]; \draw [fill] (4,0.5) circle [radius=0.1]; \draw [fill] (-8,-0.5) circle [radius=0.1]; \draw [fill] (-6,-0.5) circle [radius=0.1]; \draw [fill] (-4,-0.5) circle [radius=0.1]; \draw [fill] (-2,-0.5) circle [radius=0.1]; \draw [fill] (0,-0.5) circle [radius=0.1]; \draw [fill] (2,-0.5) circle [radius=0.1]; \draw [fill] (8,0.5) circle [radius=0.1]; \draw [fill] (10,0.5) circle [radius=0.1]; \draw [fill] (10,-0.5) circle [radius=0.1]; \draw [fill] (8,-0.5) circle [radius=0.1]; \draw [fill] (6,-0.5) circle [radius=0.1]; \draw [fill] (12,-0.5) circle [radius=0.1]; \node at (4,1) {$a$}; \node at (2,-1) {$b$}; \node at (8,1) {$a$}; \node at (6,-1) {$b$}; \end{tikzpicture} \] Note that $2k-5=2(k-3)+1$, and because $k$ is even, then $k-3$ is odd. Using Theorem~\ref{KM} and Lemma~\ref{projective-bounds} we get: \[{\rm reg}(R/I(G\setminus N[x_1])) \leq {\rm reg}(R/I(D_{2k-5})) + {\rm reg}(R/I(A_{1})) \leq \frac{2k-2}{2}=k-1.\] The graph $W \setminus x_{n+1} \cong B_{2k-1}$. So by Lemma~\ref{projective-bounds} $(ii)$ we have ${\rm reg}(I(W \setminus x_{n+1}))\leq \frac{2k-1+3}{2}=k+1$. Consequently, ${\rm reg}(I(G)) \leq k+1$, as desired. 
\noindent {\it Case 2.} $n = 2k+1$ with $k$ an odd number. \noindent The result follows from the fact that the graph $C_{2n}(2,n)$ can be decomposed into the subgraphs $A_{1}$ and $A_{2k-2}$, and so ${\rm reg}(I(G)) \leq {\rm reg}(I(A_1)) + {\rm reg}(I(A_{2k-2}))-1$. \end{proof} Our final ingredient is a result of Hoshino \cite[Theorem 2.26]{RH} (also see Brown-Hoshino \cite[Theorem 3.5]{BH}) which describes the independence polynomial for cubic circulant graphs. \begin{theorem}\label{independencepoly} For each $n \geq 2$, set \[I_n(x) = 1 + \sum_{\ell=0}^{\lfloor \frac{n-2}{4} \rfloor} \frac{2n}{2\ell+1} \binom{n-2\ell-2}{2\ell}x^{2\ell+1}(1+x)^{n-4\ell-2}.\] \begin{enumerate} \item[$(i)$] If $G = C_{2n}(1,n)$ with $n$ even, or if $G = C_{2n}(2,n)$ with $n$ odd, then $I(G,x) = I_n(x)$. \item[$(ii)$] If $G = C_{2n}(1,n)$ and $n$ odd, then $I(G,x) = I_n(x) + 2x^n$. \end{enumerate} \end{theorem} We now come to the main result of this section. \begin{theorem}\label{maintheorem2} Let $1 \leq a < n$ and $t={\rm gcd}(2n,a)$. \begin{enumerate} \item[$(a)$] If $\frac{2n}{t}$ is even, then: \[ {\rm reg}(I(C_{2n}(a,n))) = \begin{cases} kt+1 & \mbox{if $\frac{n}{t}=2k$, or $\frac{n}{t}=2k+1$ with $k$ an odd number} \\ (k+1)t+1 & \mbox{if $\frac{n}{t}=2k+1$ with $k$ an even number.} \end{cases} \] \item[$(b)$] If $\frac{2n}{t}$ is odd, then: \[ {\rm reg}(I(C_{2n}(a,n))) = \begin{cases} \frac{kt}{2}+1 & \mbox{if $\frac{2n}{t}=2k+1$ with $k$ an even number} \\ \frac{(k+1)t}{2}+1 & \mbox{if $\frac{2n}{t}=2k+1$ with $k$ an odd number.} \end{cases} \] \end{enumerate} \end{theorem} \begin{proof} The formulas can be verified directly for the special cases that $n=2$ (i.e., $G = C_4(1,2)$) or $n=3$ (i.e., $G = C_6(1,3)$ and $C_6(2,3)$). We can therefore assume $n \geq 4$. In light of Theorem \ref{isomorphic-a-n} and Theorem \ref{reg-results} $(i)$ it will suffice to prove that the inequalities of Lemma \ref{reg-bounds} are actually equalities.
We will make use of Theorem \ref{new-reg-result}. We consider five cases, where the proof of each case is similar. \noindent {\it Case 1.} $G = C_{2n}(1,n)$ with $n=2k$. \noindent In this case, Lemma \ref{pdim-bounds} gives ${\rm pd}(I(G)) \leq 3k-1$, and Lemma \ref{reg-bounds} gives ${\rm reg}(I(G)) \leq k+1$. Furthermore, since $\widetilde{\chi}({\rm Ind}(G)) = -I(G,-1)$ by equation \eqref{eq-Euler-independence}, Theorem \ref{independencepoly} gives $\widetilde{\chi}({\rm Ind}(G)) \neq 0$. Because $G$ has $4k = (k+1)+(3k-1)$ vertices, Theorem \ref{new-reg-result} $(ii)$ implies ${\rm reg}(I(G)) = k+1$. \noindent {\it Case 2.} $G = C_{2n}(1,n)$ with $n=2k+1$ and $k$ even. We have ${\rm reg}(I(G)) \leq k+2$ and ${\rm pd}(I(G)) \leq 3k+1 =(4k+2)-(k+2)+1 = 2n - (k+2)+1$ by Lemmas \ref{reg-bounds} and \ref{pdim-bounds}, respectively. Because $n$ is odd, $\widetilde{\chi}({\rm Ind}(G)) = -[I_n(-1) + 2(-1)^n] =-[1-2]=1 >0$. So, ${\rm reg}(I(G)) = k+2$ by Theorem \ref{new-reg-result} $(i)$ $(a)$ because $k+2$ is even and $\widetilde{\chi}({\rm Ind}(G)) > 0$. \noindent {\it Case 3.} $G = C_{2n}(1,n)$ with $n=2k+1$ and $k$ odd. We have ${\rm reg}(I(G)) = k+1$ by Theorem \ref{new-reg-result} $(ii)$ because ${\rm reg}(I(G)) \leq k+1$ (Lemma \ref{reg-bounds}), ${\rm pd}(I(G)) \leq 3k+1$ (Lemma \ref{pdim-bounds}), $2n=4k+2$ is the number of variables, and $\widetilde{\chi}({\rm Ind}(G)) \neq 0$. \noindent {\it Case 4.} $G = C_{2n}(2,n)$ with $n=2k+1$ and $k$ even. We have ${\rm reg}(I(G)) = k+1$ from Theorem \ref{new-reg-result} $(ii)$ since ${\rm reg}(I(G)) \leq k+1$ (Lemma \ref{reg-bounds}), ${\rm pd}(I(G)) \leq 3k+1$ (Lemma \ref{pdim-bounds}), and $\widetilde{\chi}({\rm Ind}(G)) = -I(G,-1) = -1 \neq 0$ (Theorem \ref{independencepoly}). \noindent {\it Case 5.} $G = C_{2n}(2,n)$ with $n=2k+1$ and $k$ odd. In our final case, ${\rm reg}(I(G)) \leq k+2$ by Lemma \ref{reg-bounds}, and ${\rm pd}(I(G)) \leq 3k+1$ by Lemma \ref{pdim-bounds}.
Since $n$ is odd, $\widetilde{\chi}({\rm Ind}(G)) = -I(G,-1) = -1$ by Theorem \ref{independencepoly}. Since $k$ is odd, $k+2$ is odd. Because $2n=4k+2$ is the number of variables, we have ${\rm reg}(I(G)) = k+2$ by Theorem \ref{new-reg-result} $(i)$ $(b)$. These five cases now complete the proof. \end{proof} \bibliographystyle{plain}
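As a postscript, Theorem \ref{independencepoly} lends itself to a brute-force sanity check for small $n$. The Python sketch below (the function names are ours, not from the paper) enumerates all independent sets of $C_{2n}(1,n)$ and $C_{2n}(2,n)$ directly and compares coefficient lists against $I_n(x)$, with the exponent of $(1+x)$ taken as $n-4\ell-2$:

```python
# Brute-force check of the independence polynomial of cubic circulants.
from itertools import combinations
from math import comb

def circulant_edges(m, offsets):
    """Edge set of the circulant graph C_m(offsets) on vertices 0, ..., m-1."""
    return {frozenset((i, (i + s) % m)) for i in range(m) for s in offsets}

def independence_poly(m, edges):
    """Coefficient list of I(G, x) = sum over independent sets S of x^|S|."""
    coeffs = [0] * (m + 1)
    for k in range(m + 1):
        for S in combinations(range(m), k):
            if all(frozenset(p) not in edges for p in combinations(S, 2)):
                coeffs[k] += 1
    return coeffs

def hoshino_poly(n):
    """Coefficients of I_n(x) = 1 + sum_l (2n/(2l+1)) C(n-2l-2, 2l)
    x^(2l+1) (1+x)^(n-4l-2), padded with zeros up to degree 2n."""
    coeffs = [0] * (2 * n + 1)
    coeffs[0] = 1
    for l in range((n - 2) // 4 + 1):
        c = 2 * n * comb(n - 2 * l - 2, 2 * l) // (2 * l + 1)
        for j in range(n - 4 * l - 1):       # j-th coefficient of (1+x)^(n-4l-2)
            coeffs[2 * l + 1 + j] += c * comb(n - 4 * l - 2, j)
    return coeffs

for n in range(2, 7):
    I_n = hoshino_poly(n)
    if n % 2 == 0:
        # (i): C_{2n}(1, n) with n even
        assert independence_poly(2 * n, circulant_edges(2 * n, [1, n])) == I_n
    else:
        # (ii): C_{2n}(1, n) with n odd picks up the extra 2x^n term
        plus = list(I_n); plus[n] += 2
        assert independence_poly(2 * n, circulant_edges(2 * n, [1, n])) == plus
        # (i): C_{2n}(2, n) with n odd
        assert independence_poly(2 * n, circulant_edges(2 * n, [2, n])) == I_n
```

For example, for $n=4$ both sides come out to $1 + 8x + 16x^2 + 8x^3$ for $C_8(1,4)$.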
\section{Preliminaries} \label{sec:prelims} For integer $n\geq 1$ let $[n]\doteq \{1,\ldots, n\}$. \paragraph{Local differential privacy:} In the local differential privacy (LDP) model \citep{Warner,EvfimievskiGS03,KasiviswanathanLNRS11} it is assumed that each data sample obtained by the server is randomized in a differentially private way. This is modeled by assuming that the server running the learning algorithm accesses the dataset via an oracle defined below. \begin{defn}[\citep{KasiviswanathanLNRS11}] An $\eps$-local randomizer $R:Z \rightarrow W$ is a randomized algorithm that satisfies $\forall z_1,z_2\in Z$ and $w\in W$, $\pr[R(z_1) = w] \leq e^\eps \pr[R(z_2) = w]$. For a dataset $S \in Z^n$, an $\LR_S$ oracle takes as an input an index $i$ and a local randomizer $R$ and outputs a random value $w$ obtained by applying $R(z_i)$. An algorithm is (compositionally) $\eps$-LDP if it accesses $S$ only via the $\LR_S$ oracle with the following restriction: for all $i\in [n]$, if $\LR_S(i,R_1),\ldots,\LR_S(i,R_k)$ are the algorithm's invocations of $\LR_S$ on index $i$ where each $R_j$ is an $\eps_j$-randomizer then $\sum_{j\in [k]} \eps_j \leq \eps$. \end{defn} For a non-interactive LDP algorithm one can assume without loss of generality that each sample is queried only once since the application of $k$ fixed local randomizers can be equivalently seen as an execution of a single $\eps$-randomizer with $\sum_{j\in [k]} \eps_j \leq \eps$. Further, in this definition the privacy parameter is defined as the composition of the privacy parameters of all the randomizers. A more general (and less strict) way to define the privacy parameter of an LDP protocol is as the differential privacy of the entire transcript of the protocol (see \citep{JosephMNR:19} for a more detailed discussion). This distinction does not affect our results since in our lower and upper bounds each sample is only queried once. For such protocols these two ways to measure privacy coincide. 
The local model of privacy can be contrasted with the standard, or central, model of differential privacy where the entire dataset is held by the learning algorithm whose output needs to satisfy differential privacy \citep{DworkMNS:06}. This is a stronger model, and an $\eps$-LDP algorithm also satisfies $\eps$-differential privacy. \paragraph{Equivalence to statistical queries:} The statistical query model of \citet{Kearns:98} is defined by having access to the $\STAT_P(\tau)$ oracle, where $P$ is the unknown data distribution; the oracle answers a query function $\phi$ with a value within additive tolerance $\tau$ of $\E_{z\sim P}[\phi(z)]$. To solve a learning problem in this model an algorithm needs to succeed for any valid (that is, satisfying the guarantees on the tolerance) oracle's responses. In other words, the guarantees of the algorithm should hold in the worst case over the responses of the oracle. A randomized learning algorithm needs to succeed for any SQ oracle whose responses may depend on all the queries asked so far but not on the internal randomness of the learning algorithm. A special case of statistical queries is counting or linear queries, in which the distribution $P$ is uniform over the elements of a given database $S \in Z^n$. In other words, the goal is to estimate the empirical mean of $\phi$ on the given set of data points. This setting is studied extensively in the literature on differential privacy (see \citep{DworkRoth:14} for an overview) and our discussion applies to this setting as well. For an algorithm in LDP and SQ models we say that the algorithm is {\em non-interactive} (or {\em non-adaptive}) if all its queries are determined before observing any of the oracle's responses. Similarly, we say that the algorithm is {\em label-non-adaptive} if all the queries that depend on oracle's response are label-independent (the query function depends only on the point).
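For intuition about how the two models connect, here is a hedged Python sketch (our own, in the spirit of the simulation of \citet{KasiviswanathanLNRS11}) of a non-interactive $\eps$-LDP protocol answering a single statistical query with a Boolean query function $\phi$: each user applies randomized response to $\phi(z_i)$, and the server debiases the empirical mean of the reports:

```python
import math
import random

def ldp_statistical_query(samples, phi, eps, rng=random):
    """Answer one statistical query E_P[phi], phi: Z -> {0,1}, with a
    non-interactive eps-LDP protocol: each sample is reported through
    randomized response, and the server debiases the empirical mean.

    Each report r_i satisfies E[r_i] = (2p - 1) * phi(z_i) + (1 - p),
    so (mean(r) - (1 - p)) / (2p - 1) is an unbiased estimate of E_P[phi]."""
    p = math.exp(eps) / (1.0 + math.exp(eps))   # probability of a truthful report
    reports = [phi(z) if rng.random() < p else 1 - phi(z) for z in samples]
    r_mean = sum(reports) / len(reports)
    return (r_mean - (1.0 - p)) / (2.0 * p - 1.0)

# Toy run: estimate Pr[z = 1] for Bernoulli(0.3) data under eps = 1.
rng = random.Random(0)
data = [1 if rng.random() < 0.3 else 0 for _ in range(20000)]
estimate = ldp_statistical_query(data, lambda z: z, eps=1.0, rng=rng)
assert abs(estimate - 0.3) < 0.05
```

Since $2p-1 = (e^\eps-1)/(e^\eps+1) = \Theta(\eps)$ for small $\eps$, the standard deviation of the estimate scales as $O(1/(\eps\sqrt{n}))$, matching the $n = O(\log(1/\delta)/(\eps\tau)^2)$ sample cost in the simulation theorem stated next.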
\citet{KasiviswanathanLNRS11} show that one can simulate a $\STAT_P(\tau)$ oracle with success probability $1-\delta$ by an $\eps$-LDP algorithm using the $\LR_S$ oracle for $S$ containing $n=O(\log(1/\delta)/(\eps\tau)^2)$ i.i.d.~samples from $P$. This has the following implication for simulating SQ algorithms. \begin{thm}[\citep{KasiviswanathanLNRS11}] \label{thm:sq-2-LDP} Let $\A_{SQ}$ be an algorithm that makes at most $t$ queries to $\STAT_P(\tau)$. Then for every $\eps >0$ and $\delta >0$ there is an $\eps$-local algorithm $\A$ that uses the $\LR_S$ oracle for $S$ containing $n\geq n_0=O(t\log(t/\delta)/(\eps\tau)^2)$ i.i.d.~samples from $P$ and produces the same output as $\A_{SQ}$ (for some valid answers of $\STAT_P(\tau)$) with probability at least $1-\delta$. Further, if $\A_{SQ}$ is non-interactive then $\A$ is non-interactive. \end{thm} \citet{KasiviswanathanLNRS11} also prove a converse of this theorem. \begin{thm}[\citep{KasiviswanathanLNRS11}] \label{thm:LDP-2-SQ} Let $\A$ be an $\eps$-LDP algorithm that makes at most $t$ queries to $\LR_S$ for $S$ drawn from $P^n$. Then for every $\delta >0$ there is an SQ algorithm $\A_{SQ}$ that in expectation makes $O(t \cdot e^\eps)$ queries to $\STAT_P(\tau)$ for $\tau =\Theta(\delta/(e^{2\eps}t))$ and produces the same output as $\A$ with probability at least $1-\delta$. Further, if $\A$ is non-interactive then $\A_{SQ}$ is non-interactive. \end{thm} \paragraph{PAC learning and margin complexity:} Our results are for the standard PAC model of learning \citep{Valiant:84}. \begin{defn} \label{def:pac} Let $X$ be a domain and $C$ be a class of Boolean functions over $X$. An algorithm $\A$ is said to PAC learn $C$ with error $\alpha$ if for every distribution $D$ over $X$ and $f\in C$, given access (via an oracle or samples) to the input distribution over examples $(x,f(x))$ for $x\sim D$, the algorithm outputs a function $h$ such that $\pr_D[f(x) \neq h(x)]\leq \alpha$ with probability at least $2/3$.
\end{defn} We say that the learning algorithm is efficient if its running time is polynomial in $\log|X|$, $\log|C|$ and $1/\alpha$. For dimension $d$, we denote by $\B^d(1)$ the unit ball in the $\ell_2$ norm in $\R^d$. \begin{defn} \label{def:mc} Let $X$ be a domain and $C$ be a class of Boolean functions over $X$. The {\em margin complexity} of $C$, denoted $\mc(C)$, is the minimal number $M\ge 0$ such that for some $d$, there is an embedding $\Psi:X \to \B^d(1)$ for which the following holds: for every $f\in C$ there is $w \in \B^d(1)$ such that \[ \min_{x \in X} \{ f(x) \cdot \la w,\Psi(x)\ra \} \ge \frac{1}{M} . \] \end{defn} As pointed out in \citep{Feldman:08ev}, margin complexity\footnote{The results there are stated in terms of another notion that is closely related to margin complexity. Namely, the smallest dimension $d$ for which there exists a mapping of $X$ to $\zod$ such that every $f \in C$ becomes expressible as a majority function over some subset $T\subseteq [d]$ of variables. See the discussion in Sec.~6 of \citep{KallweitSimon:11}.} is equivalent (up to a polynomial) to the existence of a (possibly randomized) algorithm that outputs a small set of functions such that with significant probability one of those functions is correlated with the target function. The upper bound in \citep{Feldman:08ev} was sharpened by \citet{KallweitSimon:11}, although they proved it only for deterministic algorithms (which correspond to a single fixed set of functions; the resulting quantity is referred to as the CSQ dimension). It is, however, easy to see that their sharper bound extends to randomized algorithms with an appropriate adjustment of the bound and we give the resulting statement below: \begin{lem}[\citep{Feldman:08ev,KallweitSimon:11}] \label{lem:csq2mc} Let $X$ be a domain and $C$ be a class of Boolean functions over $X$.
Assume that there exists a (possibly randomized) algorithm $\A$ that generates a set of functions $h_1,\ldots, h_m$ satisfying: for every $f\in C$ and distribution $D$ over $X$, with probability at least $\beta >0$ (over the randomness of $\A$) there exists $i \in [m]$ such that $|\E_{x\sim D}[f(x)h_i(x)]| \geq 1/m$. Then $$\mc(C) \leq \frac{2}{\beta} m^{3/2} .$$ \end{lem} The conditions in Lemma \ref{lem:csq2mc} are also known to be necessary for low margin complexity. \begin{lem}[\citep{Feldman:08ev,KallweitSimon:11}] \label{lem:mc2csq} Let $X$ be a domain, $C$ be a class of Boolean functions over $X$ and $d = \mc(C)$. Then for $m = O(\ln (|C||X|)d^2)$, there exists a set of functions $h_1,\ldots, h_m$ satisfying: for every $f\in C$ and distribution $D$ over $X$ there exists $i \in [m]$ such that $|\E_{x\sim D}[f(x)h_i(x)]| \geq 1/m$. \end{lem} \section{Overview} \iffull\else\footnotetext{Extended abstract. Full version appears as \citep{DanielyF18}.}\fi We consider learning in distributed systems where each client $i$ (or user) holds a data point $z_i \in Z$ drawn i.i.d. from some unknown distribution $P$ and the goal of the server is to solve some statistical learning problem using the data stored at the clients. In addition, the communication from the client to the server is constrained. The primary model we consider is that of local differential privacy (LDP) \citep{KasiviswanathanLNRS11}. In this model each user $i$ applies a differentially-private algorithm to their point $z_i$ and then sends the result to the server. The specific algorithm applied by each user is determined by the server. In the general version of the model the server can determine which algorithm the user should apply on the basis of all the previous communications the server has received. In practice, however, waiting for the client's response often takes a relatively large amount of time. Therefore, in such systems it is necessary to limit the number of rounds of interaction.
That is, the queries of the server need to be split into a small number of batches such that the LDP algorithms used in each batch depend only on responses to queries in previous batches (a query specifies the algorithm to apply). Indeed, currently deployed systems that use local differential privacy use very few rounds (usually just one) \citep{ErlingssonPK14,appledp,DKY17-Microsoft}. See Section \ref{sec:prelims} for a formal definition of the model. In this paper we will focus on the standard PAC learning of a class of Boolean functions $C$ over some domain $X$. In this setting the input distribution $P$ is over labeled examples $(x,y) \in X \times \on$ where $x$ is drawn from some distribution $D$ and $y = f(x)$ for some unknown $f\in C$ (referred to as the target function). The goal of the learning algorithm is to output a function $h$ such that the error $\pr_{x\sim D}[f(x) \neq h(x)]$ is small. In the distribution-independent setting $D$ is not known to the learning algorithm while in the distribution-specific setting the learning algorithm only needs to succeed for some specific $D$. For many of the important classes of functions all known LDP learning algorithms require many rounds of interaction. Yet there are no results that rule out solving these problems without interaction. This problem was first addressed by \citet{KasiviswanathanLNRS11}, who demonstrated the existence of an artificial class of Boolean functions $C$ over $\zo^d$ with the following property. $C$ can be PAC learned efficiently relative to the uniform distribution over $\zod$ by an interactive LDP protocol but requires $2^{\Omega(d)}$ samples to learn by any non-interactive learning algorithm. The class $C$ is highly unnatural. It splits the domain into two parts. The target function learned on the first half gives the key to the learning problem on the second half of the domain. That problem is exponentially hard to solve without the key.
This approach does not extend to the distribution-independent learning setting (intuitively, the learning algorithm will not be able to obtain the key if the distribution does not place any probability on the first half of the domain). Deriving a technique that applies to distribution-independent learning is posed as a natural open problem in this area \citep{KasiviswanathanLNRS11}. Even beyond PAC learning, there are no examples of natural problems that provably require exponentially more samples to solve non-interactively. \subsection{Our results} We give a new technique for proving lower bounds on the power of non-interactive LDP algorithms for distribution-independent PAC learning. Our technique is based on a connection, which we establish, between the power of interaction and the margin complexity of Boolean function classes. The margin complexity of a class of Boolean functions $C$, denoted by $\mc(C)$, is the inverse of the largest margin of separation achievable by an embedding of $X$ in $\R^d$ that makes the positive and negative examples of each function in $C$ linearly separable (see Definition \ref{def:mc}). It is a well-studied measure of complexity of classes of functions and corresponding sign matrices in learning theory and communication complexity (\eg \citep{Novikoff:62,AizermanBR67,BoserGV92,ForsterSS01,Ben-DavidES02,Sherstov:08,LinialS:2009,KallweitSimon:11}). We prove that only classes that have polynomially bounded {\em margin complexity} can be efficiently PAC learned by a non-interactive LDP algorithm. Our lower bound implies that two natural and well-studied classes of functions, linear separators and decision lists, require an exponential number of samples to learn non-interactively. Importantly, it is known that these classes can be learned efficiently by interactive LDP algorithms (this follows from the results for the statistical query model that we discuss later).
Thus our result gives an exponential separation between the power of interactive and non-interactive protocols. To the best of our knowledge, this is the only known such separation for a natural statistical problem (see Section \ref{sec:related} for a more detailed comparison with related notions of non-interactive algorithms). Our result follows from a stronger lower bound that also holds against algorithms for which only the queries that depend on the label of the point are non-interactive (also referred to as {\em non-adaptive} in related contexts). We will refer to such algorithms as {\em label-non-adaptive} LDP algorithms. Formally, our lower bound for such algorithms is as follows. We say that a class of Boolean ($\on$-valued) functions $C$ is closed under negation if for every $f\in C$, $-f \in C$. \begin{thm} \label{thm:main-intro} Let $C$ be a class of Boolean functions closed under negation. Assume that there exists a label-non-adaptive $\eps$-LDP algorithm $\A$ that, with success probability at least $2/3$, PAC learns $C$ distribution-independently with error less than $1/2$ using at most $n$ examples. Then $n = \Omega(\mc(C)^{2/3}/e^\eps)$. \end{thm} Our second contribution is an algorithm for learning large-margin linear separators that matches (up to polynomial factors) our lower bound. \begin{thm} \label{thm:algorithm} Let $C$ be an arbitrary class of Boolean functions over $X$. For any $\alpha,\eps > 0$ and $n=\poly\left(\mc(C)/(\alpha\eps)\right)$ there is a label-non-adaptive $\eps$-LDP algorithm that PAC learns $C$ distribution-independently with accuracy $1-\alpha$ using at most $n$ examples. \end{thm} Learning of large-margin classifiers is a classical learning problem and various algorithms for the problem are widely used in practice.
Our learning algorithm is computationally efficient as long as an embedding of $C$ into a $d=\poly\left(\mc(C)\log|X|\right)$-dimensional space can be computed efficiently (such an embedding is known to exist by the Johnson-Lindenstrauss random projection argument \citep{ArriagaVempala:99}). Together these results show an equivalence (up to polynomials) between margin complexity and PAC learning with this limited form of interaction in the LDP model. Another implication of Theorem \ref{thm:algorithm} is that if the distribution over $X$ is fixed (and known to the learning algorithm) then the learning algorithm becomes non-interactive. \begin{cor} \label{cor:algorithm-fixed-d} Let $C$ be a class of Boolean functions over $X$ and $D$ be an arbitrary distribution over $X$. For any $\alpha,\eps > 0$ and $n=\poly\left(\mc(C)/(\alpha\eps)\right)$ there is a non-interactive $\eps$-LDP algorithm that PAC learns $C$ relative to $D$ with accuracy $1-\alpha$ using at most $n$ examples. \end{cor} \paragraph{Techniques:} Following the approach of \citet{KasiviswanathanLNRS11}, we use the characterization of LDP protocols using the {\em statistical query} (SQ) model of \citet{Kearns:98}. In this model an algorithm has access to a statistical query oracle for $P$ in place of i.i.d.~samples from $P$. The most commonly studied SQ oracle gives an estimate of the mean of any bounded function with fixed tolerance. \begin{defn} \label{def:stat} Let $P$ be a distribution over a domain $Z$ and $\tau >0$. A statistical query oracle $\STAT_P(\tau)$ is an oracle that given as input any function $\phi \colon Z \to [-1,1]$, returns some value $v$ such that $|v - \E_{z\sim P}[\phi(z)]| \leq \tau$. \end{defn} The tolerance $\tau$ of statistical queries plays the role of the sample size in the traditional setting: tolerance $\tau$ roughly corresponds to $1/\tau^2$ random samples. Non-adaptive (or non-interactive) SQ algorithms are defined analogously to LDP protocols.
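As a simple illustration of Definition \ref{def:stat} (our own sketch, with hypothetical names): given enough i.i.d.\ samples, the empirical mean is a valid $\STAT_P(\tau)$ response with high probability.

```python
import random

def answer_stat_query(phi, samples):
    """Answer a statistical query phi: Z -> [-1, 1] with the empirical mean.

    By Hoeffding's inequality, n = O(log(1/delta) / tau^2) i.i.d. samples
    from P make this a valid STAT_P(tau) response with probability at least
    1 - delta, which is the sense in which tolerance corresponds to sample size.
    """
    return sum(phi(z) for z in samples) / len(samples)
```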
The reductions between learning in the SQ model and learning in the LDP model given by \citet{KasiviswanathanLNRS11} preserve the number of rounds of interaction of a learning algorithm. The key technical tool we apply to prove our lower bound is a result of \citet{Feldman:08ev} relating margin complexity and a certain notion of complexity for statistical queries. The result shows that the existence of a (possibly randomized) algorithm that outputs a set $T$ of $m$ functions such that for every $f\in C$ and distribution $D$, with significant probability one of the functions in $T$ is at least $1/m$-correlated with $f$ relative to $D$ implies that $\mc(C) = O(m^{3/2})$ (the sharpest bound was proved in \citep{KallweitSimon:11}). We then show that such a set of functions can be easily extracted from the queries of any label-non-adaptive SQ algorithm for learning $C$. Our label-non-adaptive LDP learning algorithm for large-margin halfspaces relies on a new formulation of halfspace learning as a stochastic convex optimization problem. The crucial property of this program is that (approximately) computing sub-gradients can be done by using a fixed set of non-adaptive queries (that measure the correlation of each of the attributes with the label) together with adaptive but label-independent queries. We can then use an arbitrary gradient-descent-based LDP algorithm for stochastic convex optimization. Such algorithms were first described by \citet{DuchiJW:13focs}. For simplicity, we appeal to the fact that such algorithms can also be implemented in the statistical query model \citep{FeldmanGV:15}. \paragraph{Corollaries:} The class of decision lists (see \citep{Rivest:87,KearnsVazirani:94} for a definition) and the class of linear separators (or halfspaces) over $\zod$ are known to have exponentially large margin complexity \citep{GHR:92,BuhrmanVW:07,Sherstov:08} (and are also negation closed).
In contrast, these classes are known to be learnable efficiently by SQ algorithms \citep{Kearns:98,DunaganVempala:04} and thus also by LDP algorithms. Formally, we obtain the following lower bounds: \begin{cor} \label{cor:ldp-hsdl-intro} Any label-non-adaptive $\eps$-LDP algorithm that PAC learns the class of linear separators over $\zod$ with error less than $1/2$ and success probability at least $3/4$ must use $n = 2^{\Omega(d)}/e^\eps$ i.i.d.~examples. For learning the class of decision lists under the same conditions the algorithm must use $n = 2^{\Omega(d^{1/3})}/e^{\eps}$ i.i.d.~examples. \end{cor} Our use of the statistical query model to prove the results implies that we can derive the analogues of our results in other models that have connections to the SQ model. One such model is the distributed model in which only a small number of bits is communicated from each client. Namely, each client applies a function with range $\zo^k$ to their input and sends the result to the server (for some $k \ll \log |Z|$). As in the case of LDP, the specific function used is chosen by the server. One motivation for this model is the collection of data from remote sensors where the cost of communication is highly asymmetric. In the context of learning this model was introduced by \citet{Ben-DavidD98} and generalized by \citet{SteinhardtVW16}. Identical and closely related models are often studied in the context of distributed statistical estimation with communication constraints (\eg \citep{luo2005universal,rajagopal2006universal,ribeiro2006bandwidth,ZhangDJW13,SteinhardtD15,suresh2016distributed,acharya2018inference}). As in the setting of LDP, the number of rounds of interaction that the server uses to solve a learning problem in this model is a critical resource. Using the equivalence between this model and SQ learning that preserves the number of rounds of interaction, we immediately obtain analogous results for this model.
We are not aware of any prior results on the power of interaction in the context of this model. See Section~\ref{sec:low-comm} for additional details. \subsection{Related work} \label{sec:related} \citet{SmithTU17} address the question of the power of non-interactive LDP algorithms in the closely related setting of stochastic convex optimization. They derive new non-interactive LDP algorithms for the problem, albeit ones requiring a number of queries exponential in the dimension. They also give an exponential lower bound for non-interactive algorithms that are further restricted to obtain only local information about the optimized function. Subsequently, upper and lower bounds on the number of queries to the gradient/second-order oracles for algorithms with few rounds of interaction have been studied by several groups \citep{DuchiRY18,WoodworthWSMS18,BalkanskiSinger18,DiakonikolasGuzman18}. In the context of discrete optimization using queries for the value of the optimized function, the round complexity has recently been investigated in \citep{BalkanskiRS17,BalkanskiS18,BalkanskiRS19}. To the best of our knowledge, the techniques used in these works are unrelated to ours. Also, in all these works the lower bounds rely heavily on the fact that the oracle provides only local (in the geometric sense) information about the optimized function. In contrast, statistical queries allow getting global information about the optimized function. A number of lower bounds on the sample complexity of LDP algorithms demonstrate that LDP is less efficient than the central model of differential privacy (\eg \cite{DuchiWJ13:nips,duchi2019lower}). The number of data samples necessary to answer statistical queries chosen adaptively has recently been studied in a line of work on adaptive data analysis \citep{DworkFHPRR14:arxiv,HardtU14,BassilyNSSSU16,SteinkeU15}. Our work provably demonstrates that the use of such adaptive queries is important for solving basic learning problems.
\iffull Margin complexity also plays a key role in separating the power of correlational statistical query (CSQ) algorithms from general SQ algorithms. A statistical query $\phi$ is {\em correlational} if $\phi(x,\ell) = \ell \cdot \psi(x)$ for some function $\psi \colon X\to [-1,1]$. Such queries allow measuring the correlation between the target function and an arbitrary predictor. CSQ algorithms are known to capture the power of learning algorithms in Valiant's \citeyearpar{Valiant:09} model of evolvability \citep{Feldman:08ev}. In this context it was shown in \citep{Feldman:08ev} that the complexity of {\em weak} and distribution-independent learning in this model is exactly characterized by margin complexity. Our work relies on some of the properties of margin complexity given in that work. At the same time, these are incomparable restrictions on the power of algorithms and our lower bound is incomparable to the lower bound for CSQ algorithms. For example, Boolean conjunctions are (strongly) PAC learnable distribution-independently and efficiently by a non-interactive SQ algorithm \citep{Kearns:98} but not by CSQ algorithms \citep{Feldman:11ltfarx}. On the other hand, the function class in \citep{KasiviswanathanLNRS11} is PAC learnable by a CSQ algorithm relative to the uniform distribution but not by a non-interactive SQ algorithm. \fi \paragraph{Subsequent work:} \citet{acharya2018inference} implicitly give a separation between interactive and non-interactive protocols for the problem of identity testing for a discrete distribution over $k$ elements, albeit a relatively weak one ($O(k)$ vs $\Omega(k^{3/2})$ samples). The work of \citet{JosephMNR:19,joseph2019} explores a different aspect of interactivity in LDP. Specifically, they distinguish between two types of interactive protocols: fully-interactive and sequentially-interactive ones.
Fully-interactive protocols place no restrictions on interaction whereas sequentially-interactive ones only allow asking one query per user. They give a separation showing that sequentially-interactive protocols may require exponentially more samples than fully-interactive ones. This separation is orthogonal to ours since our lower bounds are against completely non-interactive protocols and we separate them from sequentially-interactive protocols. \section{Lower bounds for label-non-adaptive algorithms} We prove the SQ version of our lower bound. Theorem \ref{thm:main-intro} then follows immediately by applying the simulation result from Theorem \ref{thm:LDP-2-SQ}. \begin{thm} \label{thm:main-sq} Let $C$ be a class of Boolean functions closed under negation. Assume that for some $m$ there exists a label-non-adaptive, possibly randomized, SQ algorithm $\A$ that, with success probability at least $2/3$, PAC learns $C$ distribution-independently with error less than $1/2$ using at most $m$ queries to $\STAT(1/m)$. Then $\mc(C) \leq 6 m^{3/2}$. \end{thm} \begin{proof} We first recall a simple observation from \citep{BshoutyFeldman:02} that allows decomposing each statistical query into correlational and label-independent parts. Namely, for a function $\phi\colon X\times \on \to [-1,1]$, $$\phi(x,y) = \frac{1-y}{2}\phi(x,-1) + \frac{1+y}{2}\phi(x,1) = \frac{\phi(x,-1) + \phi(x,1)}{2} + y \cdot \frac{\phi(x,1) - \phi(x,-1)}{2} .$$ For a query $\phi$, we will use $h$ and $g$ to denote the parts of the decomposition $\phi(x,y) = g(x) + yh(x)$: $$h(x) \doteq \frac{\phi(x,1) - \phi(x,-1)}{2}$$ and $$g(x) \doteq \frac{\phi(x,1) + \phi(x,-1)}{2} .$$ For every input distribution $D$ and target function $f$, we define the following SQ oracle. Given a query $\phi$, if $\left|\E_D[f(x)h(x)]\right| \geq 1/m$ then the oracle provides the exact expectation $\E_D[\phi(x,f(x))]$ as the response. Otherwise, it answers with $\E_D[g(x)]$.
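The decomposition of a statistical query into label-independent and correlational parts can be checked mechanically. The following Python snippet (a sanity check we add for illustration; it is not part of the proof) verifies that $\phi(x,y) = g(x) + y\,h(x)$ for $y \in \{-1,+1\}$:

```python
import random

def decompose(phi):
    """Split phi(x, y) into a label-independent part g and a correlational
    part h so that phi(x, y) = g(x) + y * h(x) for labels y in {-1, +1}."""
    g = lambda x: (phi(x, 1) + phi(x, -1)) / 2.0
    h = lambda x: (phi(x, 1) - phi(x, -1)) / 2.0
    return g, h
```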
Note that, by the properties of the decomposition, this is a valid implementation of the SQ oracle. Let $\A(r)$ denote $\A$ with its random bits set to $r$, where $r$ is drawn from some distribution $R$. Let $\phi^r_1,\ldots,\phi^r_{m'}\colon X\times \on \to [-1,1]$ be the statistical queries asked by $\A(r)$ that depend on the label (where $m' \leq m$). Note that, by the definition of a label-non-adaptive SQ algorithm, all these queries are fixed in advance and do not depend on the oracle's answers. Let $g^r_i$ and $h^r_i$ denote the decomposition of these queries into correlational and label-independent parts. Let $h^r_{f,D}$ denote the hypothesis output by $\A(r)$ when used with the SQ oracle defined above. We claim that if $\A$ achieves error $<1/2$ with probability at least $2/3$, then for every $f \in C$ and distribution $D$, with probability at least $1/3$, there exists $i \in [m']$ such that $|\E_D[f(x)h^r_i(x)]| \geq 1/m$ (satisfying the conditions of Lemma \ref{lem:csq2mc} with $\beta =1/3$). To see this, assume for the sake of contradiction that for some distribution $D$ and function $f\in C$, $$\pr_{r\sim R} [r\in T(f,D)]>2/3, $$ where $T(f,D)$ is the set of all random strings $r$ such that for all $i\in [m']$, $|\E_D[f(x)h^r_i(x)]| < 1/m$. Let $S(f,D)$ denote the set of random strings $r$ for which $\A$ succeeds (with the given SQ oracle), that is $\pr_D[f(x) \neq h^r_{f,D}(x)] <1/2$. By our assumption, $\pr_{r\sim R} [r\in S(f,D)]\geq 2/3 $ and therefore \equ{\pr_{r\sim R} [r\in T(f,D) \cap S(f,D)]>1/3 .\label{eq:succeed}} Now, observe that $T(-f,D) = T(f,D)$ and, in particular, the answers of our SQ oracle to $\A(r)$'s queries are identical for $f$ and $-f$ whenever $r \in T(f,D)$. Further, if $\pr_D[f(x) \neq h^r_{f,D}(x)] <1/2$ then $\pr_D[-f(x) \neq h^r_{f,D}(x)] >1/2$. This means that for every $r \in T(f,D)\cap S(f,D)$, $\A(r)$ fails when the target function is $-f$ and the distribution is $D$ (by definition, $-f \in C$).
By eq.~\eqref{eq:succeed} we obtain that $\A$ fails with probability $>1/3$ for $-f$ and $D$. This contradicts our assumption and therefore we obtain that $$\pr_{r\sim R} [r\not\in T(f,D)]\geq 1/3. $$ By Lemma \ref{lem:csq2mc}, we obtain the claim. \end{proof} \subsection{Applications} \label{sec:apps} We will now spell out several easy corollaries of our lower bound, simulation results and existing SQ algorithms. Together they imply the claimed separations for halfspaces and decision lists. We start with the class of halfspaces over $\zod$ which we denote by $C_{HS}$. The lower bound on the margin complexity of halfspaces is implied by a celebrated work of \citet{GHR:92} on the complexity of linear threshold circuits (the connection of this result to margin complexity is due to \citet{Sherstov:08}): \begin{thm}[\citep{GHR:92,Sherstov:08}] $\mc(C_{HS}) = 2^{\Omega(d)}$. \end{thm} We denote the class of decision lists over $\zod$ by $C_{DL}$ (see \citep{KearnsVazirani:94} for a standard definition). A lower bound on the margin complexity of decision lists was derived by \citet{BuhrmanVW:07} in the context of communication complexity. \begin{thm}[\citep{BuhrmanVW:07}] $\mc(C_{DL}) = 2^{\Omega(d^{1/3})}$. \end{thm} Combining these results with Theorem \ref{thm:main-intro} we obtain the lower bound on the complexity of LDP algorithms for learning linear classifiers and decision lists given in Corollary \ref{cor:ldp-hsdl-intro}. Learnability of decision lists using statistical queries is a classical result of \citet{Kearns:98}. Applying the simulation in Theorem \ref{thm:sq-2-LDP} we obtain polynomial time learnability of this class by (interactive) LDP algorithms.
\begin{thm}[\citep{Kearns:98}] \label{thm:learn-dl} For every $\eps,\alpha >0$, there exists an $\eps$-LDP learning algorithm that PAC learns $C_{DL}$ with error $\alpha$ using $\poly(d/(\eps\alpha))$ i.i.d.~examples (with one query per example). \end{thm} In the case of halfspaces, \citet{DunaganVempala:04} give the first efficient algorithm for PAC learning halfspaces (their description is not in the SQ model but it is known that their algorithm can be easily converted to the SQ model \citep{BalcanF15}). Applying Theorem \ref{thm:sq-2-LDP} we obtain learnability of this class by (interactive) LDP algorithms. \begin{thm}[\citep{DunaganVempala:04,BalcanF15}] \label{thm:learn-hs} For every $\eps,\alpha >0$, there exists an $\eps$-LDP learning algorithm that PAC learns $C_{HS}$ with error $\alpha$ using $\poly(d/(\eps\alpha))$ i.i.d.~examples (with one query per example). \end{thm} \section{Label-non-adaptive learning algorithm for halfspaces} Our algorithm for learning large-margin halfspaces relies on the formulation of the problem of learning a halfspace as the following convex optimization problem. \iffull\else Proofs of results in this section can be found in the full version \citep{DanielyF18}.\fi \begin{lem}\label{lem:objective} Let $P$ be a distribution on $\B^d(1)\times \on$. Suppose that there is a vector $w^*\in \B^d(1)$ such that $\pr_{(x,\ell)\sim P}[\inner{w^*,\ell x}\ge \gamma] =1$. Let $(e_1,\ldots,e_d)$ denote the standard basis of $\R^d$ and let $w$ be a unit vector such that for $\alpha,\beta \in (0,1)$ \begin{equation}\label{eq:1} F(w) \doteq \E_{(x,\ell)\sim P}\left[\sum_{i=1}^d \left|\inner{w + \gamma e_i,x}\right| - \inner{w+ \gamma e_i,\ell x} + \sum_{i=1}^d \left|\inner{w - \gamma e_i,x}\right| - \inner{w- \gamma e_i,\ell x} \right]\le \alpha \beta. 
\end{equation} Then, $F(w^*) = 0$ and $$\pr_P\left[\inner{w,\ell x} \ge -\frac{\beta}{2} + \frac{\gamma^2}{\sqrt{d}}\right]\ge 1-\alpha.$$ In particular, if $\beta<\frac{2\gamma^2}{\sqrt{d}}$ then $\pr_P\left[\inner{w,\ell x} > 0\right]\ge 1-\alpha$. \end{lem} \iffull \begin{proof} To see the first part, note that with probability 1 over $(x,\ell)\sim P$, $$\inner{w^*,\ell x} \geq \gamma \geq |\inner{\gamma e_i,\ell x}| .$$ Therefore $\inner{w^*+ \gamma e_i,\ell x} \geq 0$ and $\inner{w^*- \gamma e_i,\ell x} \geq 0$ and thus $\inner{w^*+ \gamma e_i,\ell x} = |\inner{w^*+ \gamma e_i,x}|$ and $\inner{w^*- \gamma e_i,\ell x} = |\inner{w^*- \gamma e_i,x}|$, making $F(w^*) =0$. By Markov's inequality, with probability at least $1-\alpha$, the sum is less than $\beta$. In this event, for all $i\in [d]$, $$\left|\inner{w + \gamma e_i,x}\right| - \inner{w+ \gamma e_i,\ell x}\le \beta$$ and $$\left|\inner{w - \gamma e_i,x}\right| - \inner{w- \gamma e_i,\ell x}\le \beta,$$ which implies that $\inner{w+ \gamma e_i,\ell x}\ge -\beta/2$ and $\inner{w- \gamma e_i,\ell x}\ge -\beta/2$. Furthermore, in this event, there exists $i$ such that $|\inner{x,e_i}| \ge \frac{\gamma}{\sqrt{d}}$ (this is true with probability $1$ since $\|x\| \ge \inner{w^*,\ell x}\ge \gamma$). For this $i$, one of $\inner{w+ \gamma e_i,\ell x}, \inner{w- \gamma e_i,\ell x}$ must be $\ge -\frac{\beta}{2} + \frac{2\gamma^2}{\sqrt{d}}$. Hence, $$\inner{w,\ell x} = \frac{\inner{w+ \gamma e_i,\ell x} + \inner{w- \gamma e_i,\ell x}}{2} \ge -\frac{\beta}{2} + \frac{\gamma^2}{\sqrt{d}}.$$ \end{proof} \fi We now describe how to solve the convex optimization problem given in Lemma \ref{lem:objective}. Both the running time and the accuracy of queries of our solution depend on the ambient dimension $d$. This dimension is not necessarily upper-bounded by a polynomial in $\mc(C)$.
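To illustrate the first claim of Lemma \ref{lem:objective}, the following Python sketch (our own toy check, not part of the formal argument) evaluates the empirical analogue of $F$ at $w^*$ on a synthetic dataset with margin $\gamma$ and confirms that it vanishes:

```python
import math
import random

def F_empirical(w, gamma, examples):
    """Empirical analogue of F(w): the average over examples (x, l) of
    sum_i |<w + gamma*e_i, x>| - <w + gamma*e_i, l*x>
        + |<w - gamma*e_i, x>| - <w - gamma*e_i, l*x>."""
    d = len(w)
    total = 0.0
    for x, l in examples:
        for i in range(d):
            for shift in (gamma, -gamma):
                ip = sum((wj + (shift if j == i else 0.0)) * xj
                         for j, (wj, xj) in enumerate(zip(w, x)))
                total += abs(ip) - l * ip
    return total / len(examples)
```

When every example satisfies $\inner{w^*,\ell x}\ge\gamma$, each term is $|\inner{v,x}| - \inner{v,\ell x} = 0$, so the empirical objective is exactly zero at $w^*$; it is non-negative at any other $w$.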
However, the well-known random projection argument shows that the dimension can be reduced to $O(\log(1/\delta)\cdot\mc(C)^2)$ at the expense of a small multiplicative decrease in the margin and a failure probability of at most $\delta$ (for every individual point, over the randomness of the random projection) \citep{ArriagaVempala:99,Ben-DavidES02}. This fact together with Markov's inequality implies the following standard lemma: \begin{lem} \label{lem:jl} Let $d$ be an arbitrary dimension. For every $\delta$ and $\gamma$, there exists a distribution $\Psi$ over mappings $\psi\colon \B^d(1) \to \B^{d'}(1)$, where $d' = O(\log(1/\delta)/\gamma^2)$ such that: For every distribution $D$ and function $f$ over $\B^d(1)$, if there exists $w \in \B^d(1)$ such that $\pr_{x\sim D}[f(x) \cdot \la w,x\ra \ge \gamma] = 1$ then $$\pr_{\psi \sim \Psi} \lb \exists w',\ \pr_{x\sim D} \lb f(x) \cdot \la w',\psi(x)\ra \geq \frac{\gamma}{2} \rb \geq 1-\delta \rb \geq 1-\delta .$$ \end{lem} Lemma \ref{lem:jl} ensures that at most a tiny fraction $\delta$ of the points (according to $D$) does not satisfy the margin condition. This is not an issue as we will be implementing our algorithm in the SQ model which, by definition, allows any of the answers to its queries to be imprecise. The lemma also allows for a tiny probability that the mapping will fail altogether (making the overall algorithm randomized). Therefore the only ingredient missing for establishing Theorem \ref{thm:algorithm} is a label-non-adaptive SQ algorithm for solving the convex optimization problem in dimension $d$ using a number of queries and an inverse tolerance that are polynomial in $d$, $1/\gamma$ and $1/\alpha$: \begin{lem}\label{lem:1} Let $P$ be a distribution on $\B^d(1)\times \on$. Suppose that there is a vector $w^*\in \B^d(1)$ such that $\pr_{(x,\ell)\sim P}[\inner{w^*,\ell x}\ge \gamma] =1$.
There is a label-non-adaptive SQ algorithm that for every $\alpha \in (0,1)$ uses $O(d^4/(\gamma^4\alpha^2))$ queries to $\STAT_P(\Omega(\gamma^4\alpha^2/d^3))$, and finds a vector $w$ such that $\pr_P\left[\inner{w,\ell x} > 0\right]\ge 1-\alpha$. \end{lem} \iffull \begin{proof} By Lemma \ref{lem:objective}, it is enough to find a vector $w$ with $F(w)\le \alpha \beta$ for $\beta = \frac{2\gamma^2}{\sqrt{d}}$. We next explain how to find such a vector. To this end, consider the stochastic convex program of minimizing $F(w)$ subject to the constraint that $\|w\|\le 1$. Since $F(w)$ is $4d$-Lipschitz over $\B^d(1)$, and $F(w^*)=0$, a solution with $F(w)\le \alpha\beta$ can be found using projected (sub-)gradient descent using $O(d^3/(\alpha\beta)^2)$ queries to $\STAT_P(\Omega((\alpha\beta/d)^2))$ \citep{FeldmanGV:15}. It remains to verify that the sub-gradient computations for this algorithm can be done using label-non-adaptive statistical queries. We decompose $F(w)$ into two parts $F(w) = F_1(w) + F_2(w)$ where, \[ F_1(w) \doteq \E_{(x,\ell)\sim P}\left[\sum_{i=1}^d \left|\inner{w + \gamma e_i,x}\right| + \sum_{i=1}^d \left|\inner{w - \gamma e_i,x}\right|\right] \] and \[ F_2(w)\doteq -\E_{(x,\ell)\sim P}\left[\sum_{i=1}^d \inner{w + \gamma e_i,\ell x} + \sum_{i=1}^d \inner{w - \gamma e_i,\ell x}\right]. \] Now the sub-gradient of $F_1$ is \[ \nabla F_1(w)=\E_{(x,\ell)\sim P} \left[\sum_{i=1}^d (\sign(\inner{w + \gamma e_i,x}) + \sign(\inner{w - \gamma e_i,x}) )x \right] \] and is just a function of $x$. Hence, computing an estimate of the sub-gradient of $F_1$ requires only label-independent SQs. The gradient of (the linear) $F_2(w)$ is $$\nabla F_2(w)= -2d\E_{(x,\ell)\sim P}[\ell x] .$$ Crucially, it does not depend on $w$ and hence can be computed once using $d$ non-adaptive statistical queries.
\end{proof} \fi \section{Implications for distributed learning with communication constraints} \label{sec:low-comm} In this section we briefly define the model of bounded communication per sample, state the known equivalence results to the SQ model and spell out the immediate corollary of our lower bound. In the bounded communication model \citep{Ben-DavidD98,SteinhardtVW16} it is assumed that the total number of bits learned by the server about each data sample is bounded by $\ell$ for some $\ell \ll \log |Z|$. As in the case of LDP this is modeled by using an appropriate oracle for accessing the dataset. \begin{defn} \label{def:comm} We say that an algorithm $R\colon Z \to \zo^\ell$ extracts $\ell$ bits. For a dataset $S \in Z^n$, a $\COMM_S$ oracle takes as an input an index $i$ and an algorithm $R$ and outputs a random value $w$ obtained by applying $R(z_i)$. An algorithm is $\ell$-bit communication bounded if it accesses $S$ only via the $\COMM_S$ oracle with the following restriction: for all $i\in [n]$, if $\COMM_S(i,R_1),\ldots,\COMM_S(i,R_k)$ are the algorithm's invocations of $\COMM_S$ on index $i$ where each $R_j$ extracts $\ell_j$ bits then $\sum_{j\in [k]} \ell_j \leq \ell$. \end{defn} We use (non-)adaptive in the same sense as we do for LDP. As first observed by \citet{Ben-DavidD98}, it is easy to simulate a single query to $\STAT_P(\tau)$ by extracting a single bit from each of the $O(1/\tau^2)$ samples. This gives the following simulation. \begin{thm}[\citep{Ben-DavidD98}] \label{thm:sq-2-COMM} Let $\A_{SQ}$ be an algorithm that makes at most $t$ queries to $\STAT_P(\tau)$. Then for every $\delta >0$ there is a $1$-bit communication bounded algorithm $\A$ that uses the $\COMM_S$ oracle for $S$ containing $n\geq n_0=O(t \log(t/\delta)/\tau^2)$ i.i.d.~samples from $P$ and produces the same output as $\A_{SQ}$ (for some valid answers of $\STAT_P(\tau)$) with probability at least $1-\delta$. Further, if $\A_{SQ}$ is non-interactive then $\A$ is non-interactive.
\end{thm} The converse of this theorem for the simpler $\COMM$ oracle that accesses each sample once was given in \citep{Ben-DavidD98,FeldmanGRVX:12}. For the stronger oracle in Definition \ref{def:comm}, the converse was given by \citet{SteinhardtVW16}. \begin{thm}[\citep{SteinhardtVW16}] \label{thm:COMM-2-SQ} Let $\A$ be an $\ell$-bit communication bounded algorithm that makes queries to $\COMM_S$ for $S$ drawn i.i.d. from $P^n$. Then for every $\delta >0$, there is an SQ algorithm $\A_{SQ}$ that makes $2 n\ell$ queries to $\STAT_P\left(\delta/(2^{\ell+1} n)\right)$ and produces the same output as $\A$ with probability at least $1-\delta$. Further, if $\A$ is non-interactive then $\A_{SQ}$ is non-interactive. \end{thm} Note that in this simulation we do not need to assume a separate bound on the number of queries since at most $\ell n$ queries can be asked. A direct corollary of Theorems \ref{thm:main-sq} and \ref{thm:COMM-2-SQ} is the following lower bound: \begin{cor} Let $C$ be a class of Boolean functions closed under negation. Any label-non-adaptive $\ell$-communication bounded algorithm that PAC learns $C$ with error less than $1/2$ and success probability at least $3/4$ using queries to $\COMM_S$ for $S$ drawn i.i.d.~from $P^n$ must have $n \geq \mc(C)^{2/3}/2^\ell$. \end{cor} Our other results can be extended analogously. \section{Discussion} Our work shows that polynomial margin complexity is a necessary and sufficient condition for efficient distribution-independent PAC learning of a class of binary classifiers by a label-non-adaptive SQ/LDP/limited-communication algorithm. A natural problem left open is whether there exists an efficient and fully non-interactive algorithm for every class of polynomial margin complexity. We conjecture that the answer is ``no'' and in this case the question is how to characterize the problems that are learnable by non-interactive algorithms.
See \citep{DanielyF19:open} for a more detailed discussion of this open problem. A significant limitation of our result is that it does not rule out even a $2$-round algorithm for learning halfspaces (or decision lists). This is, again, in contrast to the fact that learning algorithms for these classes require at least $d$ rounds of interaction. We believe that extending our lower bounds to multiple-round algorithms and quantifying the tradeoff between the number of rounds and the complexity of learning is an important direction for future work. \iffull \printbibliography \else \bibliographystyle{plainnat} \small{
\section*{Version fran\c{c}aise abr\'eg\'ee} \selectlanguage{english} \section{Introduction} A classical result of Furstenberg \cite{Furst} says that if $X$ is a closed subset of $[0,1]$, invariant under the map $T_m:\ x\mapsto mx$ (mod 1), then its Hausdorff dimension equals the Minkowski (box-counting) dimension, which equals the topological entropy of $T_m|_X$ divided by $\log m$. A simple example is the set $ \Psi_{G}:= \Bigl\{ x = \sum_{k=1}^\infty x_k 2^{-k}:\ x_k \in \{0,1\},\ x_k x_{k+1}=0 \ \mbox{for all}\ k\Bigr\} $ for which we have $ \dim_H(\Psi_{G}) = \dim_M(\Psi_{G}) = \log_2\Bigl(\frac{1+\sqrt{5}}{2}\Bigr) $ (the subscript $G$ here stands for the ``Golden Ratio''). Instead, we consider the set $$ \Xi_{G}:= \Bigl\{ x = \sum_{k=1}^\infty x_k 2^{-k}:\ x_k \in \{0,1\},\ x_k x_{2k}=0 \ \mbox{for all}\ k\Bigr\} $$ which we call the ``multiplicative golden mean shift.'' The reason for this term is that the set of binary sequences corresponding to the points of $\Xi_{G}$ is invariant under the action of the semigroup of multiplicative positive integers $\N^*$: $ M_r(x_k) = (x_{rk})\ \ \mbox{for}\ r\in \N. $ Fan, Liao, and Ma \cite{Fan} showed that $ \dim_M(\Xi_{G}) = \sum_{k=1}^\infty 2^{-k-1}\log_2 F_{k+1}= 0.82429\ldots, $ where $F_k$ is the $k$-th Fibonacci number: $F_1=1,\ F_2 = 2, F_{k+1} = F_{k-1}+F_k$, and raised the question of computing the Hausdorff dimension of $\Xi_{G}$. \medskip \begin{theorem} \label{prop-gold} We have $\dim_H(\Xi_{G}) < \dim_M(\Xi_{G})$. In fact, \be \label{gold1} \dim_H(\Xi_{G}) = -\log_2 p = 0.81137\ldots,\ \ \mbox{where}\ p^3=(1-p)^2,\ \ \ 0<p<1. \ee \end{theorem} Our manuscript \cite{KPS} contains substantial generalizations of this result, extending it to a large class of ``multiplicative subshifts.'' We state one of them at the end of the paper. 
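Both numerical values in Theorem~\ref{prop-gold}, and the series of \cite{Fan} for the Minkowski dimension, are easy to verify by machine. The sketch below (plain Python, written for this note, not part of the original argument) solves $p^3=(1-p)^2$ by bisection and sums the Fibonacci series.

```python
import math

def hausdorff_dim():
    """dim_H(Xi_G) = -log_2 p where p^3 = (1-p)^2, 0 < p < 1 (Theorem 1.1).
    g(p) = p^3 - (1-p)^2 is strictly increasing on (0,1), so bisect."""
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid**3 - (1 - mid)**2 < 0:
            lo = mid
        else:
            hi = mid
    return -math.log2((lo + hi) / 2)

def minkowski_dim(terms=80):
    """dim_M(Xi_G) = sum_{k>=1} 2^{-k-1} log_2 F_{k+1}, with the paper's
    indexing F_1 = 1, F_2 = 2, F_{k+1} = F_{k-1} + F_k (Fan-Liao-Ma)."""
    f_prev, f_cur = 1, 2          # F_1, F_2
    total = 0.0
    for k in range(1, terms + 1):
        total += 2.0 ** (-k - 1) * math.log2(f_cur)   # f_cur = F_{k+1}
        f_prev, f_cur = f_cur, f_prev + f_cur
    return total

print(round(hausdorff_dim(), 5))   # 0.81137
print(minkowski_dim())             # close to 0.82429..., and strictly larger
```

The strict inequality between the two printed values is exactly the content of the first assertion of Theorem~\ref{prop-gold}.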
Although the set $\Xi_{G}$ is on the real line, it appears to have a strong resemblance to a class of self-affine sets on the plane, namely, the Bedford-McMullen ``carpets'' \cite{Bedf,McM}, for which the Hausdorff dimension is also typically smaller than the Minkowski dimension. However, this seems to be more of an analogy than a direct link. An additional motivation to study the multiplicative subshifts comes from questions on multifractal analysis of multiple ergodic averages raised in \cite{Fan}. Perhaps the simplest non-trivial case of such multifractal analysis is the study of the sets $ A_\theta:= \Bigl\{x = \sum_{k=1}^\infty x_k 2^{-k}:\ x_k \in \{0,1\}, \ \lim_{n\to \infty} \frac{1}{n} \sum_{k=1}^n x_k x_{2k} = \theta\Bigr\}\,. $ It is not hard to show that $\dim_H(A_0) = \dim_H(\Xi_{G})$, which we compute in Theorem~\ref{prop-gold}. With more work, our method can be used to compute the Hausdorff dimension of $A_\theta$, but the details are beyond the scope of this note. In this paper, we focus on $\Xi_{G}$ to explain our ideas and methods in the simplest possible setting. To conclude the introduction, we should mention that the dimensions of some analogous sets, e.g., $ \wtil{\Xi} = \Bigl\{x= \sum_{k=1}^\infty x_k 2^{-k}:\ x_k \in \{0,1\},\ x_{k} x_{2k} x_{3k} = 0\ \mbox{for all $k$}\ \Bigr\} $ are so far out of reach. \section{Proof of Theorem~\ref{prop-gold}} It is more convenient to work in the symbolic space $\Sig_2 = \{0,1\}^\N$, with the metric $ \varrho((x_k),(y_k)) = 2^{-\min\{n:\ x_n \ne y_n\}}. $ It is well-known that the dimensions of a compact subset of $[0,1]$ and the corresponding set of binary digit sequences in $\Sig_2$ are equal (this is equivalent to replacing covers by arbitrary intervals with covers by dyadic intervals). Thus, it suffices to determine the dimensions of the set $X_{G}$---the collection of all binary sequences $(x_k)$ such that $x_k x_{2k}=0$ for all $k$.
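Before passing to the symbolic description, note that the defining constraints $x_k x_{2k}=0$ couple coordinates only along the geometric fibers $\{i,2i,4i,\ldots\}$ with $i$ odd, where they reduce to the golden mean (``no $11$'') condition; consequently the number of admissible words of length $n$ factors into Fibonacci counts over fibers. A brute-force check of this factorization (plain Python, for illustration only):

```python
from itertools import product

def admissible_count(n):
    """Count words x in {0,1}^n with x_k * x_{2k} = 0 whenever 2k <= n."""
    count = 0
    for x in product((0, 1), repeat=n):
        if all(x[k - 1] * x[2 * k - 1] == 0 for k in range(1, n // 2 + 1)):
            count += 1
    return count

def fiber_product(n):
    """Product over odd i <= n of the number of golden-mean words of
    length |{i, 2i, 4i, ...} restricted to [1, n]|."""
    def golden_words(length):
        # words of given length with no two consecutive 1s: F_{length+1}
        # in the paper's indexing F_1 = 1, F_2 = 2
        a, b = 1, 2
        for _ in range(length - 1):
            a, b = b, a + b
        return b
    total = 1
    for i in range(1, n + 1, 2):
        length, j = 0, i
        while j <= n:
            length += 1
            j *= 2
        total *= golden_words(length)
    return total

for n in (4, 8, 12):
    print(n, admissible_count(n), fiber_product(n))
```

For instance, for $n=8$ the fibers of odd $i\le 8$ have lengths $4,2,1,1$, giving $8\cdot 3\cdot 2\cdot 2 = 96$ admissible words, and the brute-force count agrees.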
Observe that \be \label{eq1} X_{G} = \Bigl\{\om = {(x_k)}_{k=1}^\infty \in \Sig_2:\ {(x_{i2^r})}_{r=0}^\infty \in \Sig_{G}\ \ \mbox{for all}\ i \ \mbox{odd}\Bigr\} \ee where $\Sig_{G}$ is the usual (additive) golden mean shift: $ \Sig_{G}:= \{{(x_k)}_{k=1}^\infty\in \Sig_2,\ x_k x_{k+1}=0,\ \forall\,k\ge 1\}. $ We will use the following well-known result; it essentially goes back to Billingsley \cite{Billing}. We state it in the symbolic space $\Sig_2$, where $[u]$ denotes the cylinder set of sequences starting with a finite ``word'' $u$ and $x_1^n = x_1\ldots x_n$. \medskip \begin{prop}[see \cite{Falc}] \label{prop-mass} Let $E$ be a Borel set in $\Sig_2$ and let $\nu$ be a finite Borel measure on $\Sig_2$. {\bf (i)} If $\liminf_{n\to \infty} (-\frac{1}{n}) \log_2 \nu[x_1^n] \ge s\ \ \mbox{for $\nu$-a.e.}\ x\in E,$ then $\dim_H(E) \ge s$. {\bf (ii)} If $\liminf_{n\to \infty} (-\frac{1}{n}) \log_2 \nu[x_1^n] \le s\ \ \mbox{for all}\ x\in E,$ then $\dim_H(E) \le s$. \end{prop} \medskip Given a probability measure $\mu$ on $\Sig_{G}$, we can define a probability measure on $X_{G}$ by setting, for every word $u$ of length $n$, \be \label{eq-meas1} \Pmu[u]:= \prod_{i\le n,\, i\ \mbox{\tiny odd}} \mu[u|_{J(i)}], \ \ \mbox{where}\ J(i) = \{2^r i\}_{r=0}^\infty \ee and $u|_{J(i)}$ denotes the ``restriction'' of the word $u$ to the subsequence $J(i)$. It turns out that this class of measures is sufficiently rich to compute $\dim_H(X_{G})$. For $k\ge 1$ let $\alpha_k$ be the partition of $\Sig_{G}$ into cylinders of length $k$. For a measure $\mu$ on $\Sig_2$ and a finite partition $\alpha$, denote by $H^\mu(\alpha)$ the $\mu$-entropy of the partition, with base $2$ logarithms: $ H^\mu(\alpha) = -\sum_{C\in \alpha} \mu(C)\log_2\mu(C). $ Define \be\label{def-smu} s(\mu):= \sum_{k=1}^\infty \frac{H^\mu(\alpha_k)}{2^{k+1}}\,. \ee \begin{prop} \label{prop-ldim} Let $\mu$ be a probability measure on $\Sig_{G}$. Then $\dim_H(X_{G})\ge s(\mu)$.
\end{prop} \medskip \noindent{\bf Proof.} We are going to demonstrate that for every $\ell \in \N$, \be \label{eq-lb2} \liminf_{n\to \infty} \frac{-\log_2\Pmu[x_1^n]}{n} \ge \sum_{k=1}^\ell \frac{H^\mu(\alpha_k)}{2^{k+1}}\ \ \mbox{for $\Pmu$-a.e.}\ x. \ee Then, letting $\ell\to\infty$ and using Proposition~\ref{prop-mass}(i) will yield the desired inequality. Fix $\ell\in \N$. By a routine argument, to verify (\ref{eq-lb2}) we can restrict ourselves to $n=2^\ell r, \ r\in \N$. In view of (\ref{eq-meas1}), we have \be \label{eq-lb3} \Pmu[x_1^n] \le \prod_{k=1}^\ell \ \prod_{\frac{n}{2^k} < i \le \frac{n}{2^{k-1}},\ i\ \mbox{\tiny odd}} \mu[x_1^n|_{J(i)}]. \ee Note that $x_1^n|_{J(i)}$ is a word of length $k$ for $i\in (n/2^k, n/2^{k-1}]$, with $i$ odd, which is a beginning of a sequence in $\Sig_G$. Thus, $[x_1^n|_{J(i)}]$ is an element of the partition $\alpha_k$. The random variables $x\mapsto -\log_2\mu[x_1^n|_{J(i)}]$ are i.i.d.\ for $i\in (n/2^k, n/2^{k-1}]$, with $i$ odd, and their expectation equals $H^\mu(\alpha_k)$, by the definition of entropy. Note that there are $n/2^{k+1}$ odd integers in $(n/2^k, n/2^{k-1}]$. Fixing $k,\ell$ with $k\le \ell$ and taking $n=2^\ell r$, $r\to \infty$, we get an infinite sequence of i.i.d.\ random variables. Therefore, by a version of the Law of Large Numbers, \be \label{eq-lb4} \forall\ k\le \ell, \sum_{\frac{n}{2^k} < i \le \frac{n}{2^{k-1}},\ i\ \mbox{\tiny odd}} \frac{-\log_2 \mu[x_1^n|_{J(i)}]}{(n/2^{k+1})} \ \to H^\mu(\alpha_k)\ \ \mbox{as}\ n = 2^\ell r\to \infty,\ \ \mbox{for $\Pmu$-a.e.\ $x$}. \ee By (\ref{eq-lb3}) and (\ref{eq-lb4}), for $\Pmu$-a.e.\ $x$, $$ \frac{-\log_2\Pmu[x_1^n]}{n} \ge \sum_{k=1}^\ell \frac{1}{2^{k+1}} \sum_{\frac{n}{2^k} < i \le \frac{n}{2^{k-1}},\ i\ \mbox{\tiny odd}} \frac{-\log_2\mu[x_1^n|_{J(i)}]}{n/2^{k+1}} \to \sum_{k=1}^\ell \frac{H^\mu(\alpha_k)}{2^{k+1}}\,. $$ This confirms (\ref{eq-lb2}), so the proof is complete.
\qed \medskip \noindent{\bf Proof of the lower bound for the Hausdorff dimension in Theorem~\ref{prop-gold}.} Let $s:= \sup\{s(\mu):\ \mu$ is a probability measure on $\Sig_G\}$. By Proposition~\ref{prop-ldim}, we have $\dim_H(X_G) \ge s$, and we will prove that this is actually an equality. To this end, we specify a measure which will turn out to be ``optimal.'' This measure is Markov, but non-stationary. It could be ``guessed'' or derived by solving the optimization problem (which also yields that the optimal measure is unique). However, for the proof of the dimension formula it suffices to produce the answer. Let $\mu$ be a Markov measure on $\Sig_G$, with initial probabilities $(p,1-p)$, and the matrix of transition probabilities $P = (P(i,j))_{i,j=0,1} = \left( \begin{array}{cc} p & \ 1-p \\ 1 & 0 \end{array} \right)$. Using elementary properties of entropy, it is not hard to see that $s(\mu) = \frac{H(p)}{2} + \frac{ps(\mu)}{2} + \frac{(1-p)s(\mu)}{4}$, whence $s(\mu) = \frac{2H(p)}{3-p}$. Maximizing over $p$ yields $s(\mu) = 2\log_2\frac{p}{1-p}$, and comparing this to $s(\mu) = \frac{2H(p)}{3-p}$ we get \be \label{eq-s2} p^3=(1-p)^2,\ \ s(\mu)=-\log_2 p. \ee Combined with Proposition~\ref{prop-ldim}, this proves the lower bound for the Hausdorff dimension in (\ref{gold1}). \qed \medskip \noindent {\bf Proof of the upper bound for the Hausdorff dimension in Theorem~\ref{prop-gold}.} Denote by $N_i(u)$ the number of symbols $i$ in a word $u$. By the definition of the measure $\mu$, we obtain for any $u=u_1\ldots u_k \in \{0,1\}^k$, \be \mu[u] = p_{u_1} P(u_1,u_2)\cdot \ldots \cdot P(u_{k-1}, u_k) = (1-p)^{N_1(u_1\ldots u_k)} p^{N_0(u_1\ldots u_k) - N_1(u_1\ldots u_{k-1})}. \label{eq-meas3} \ee Indeed, the probability of a 1 is always $1-p$, whereas the probability of a 0 is $p$, except when it follows a 1, in which case it has probability $1$, since a 1 must be followed by a 0 in $\Sig_G$.
In view of (\ref{eq-meas3}), by the definition of the measure $\Pmu$ on $X_G$, we have $ \Pmu[x_1^n] = (1-p)^{N_1(x_1^n)} p^{N_0(x_1^n) - N_1(x_1^{n/2})} $ for any $x\in X_G$ and $n$ even. Using that $(1-p)^2=p^3$ and $N_0(x_1^n) = n - N_1(x_1^n)$, we obtain that $$ \Pmu[x_1^n] = p^n p^{N_1(x_1^n)/2- N_1(x_1^{n/2})}. $$ Let $a_\ell= -\frac{1}{n}\log_2 \Pmu[x_1^n]$ for $n=2^\ell$. Then $ a_\ell = -\log_2 p\left( 1+ \frac{1}{2} \Bigl[\frac{N_1(x_1^n)}{n} - \frac{N_1(x_1^{n/2})}{n/2}\Bigr]\right). $ Now we see that the average of $a_\ell$'s ``telescopes'': $$ \frac{a_1 + \cdots + a_\ell}{\ell} = -\log_2 p \left(1+ \frac{1}{2\ell} \Bigl[\frac{N_1(x_1^{2^\ell})}{2^\ell} - N_1(x_1)\Bigr]\right) \to -\log_2 p, \ \ \mbox{as}\ \ell\to \infty. $$ It follows that $$ \liminf_{\ell\to \infty} a_\ell = \liminf_{\ell\to \infty} 2^{-\ell} (-\log_2 \Pmu[x_1^{2^\ell}]) \le -\log_2 p = s, $$ for every $x\in X_G$, so $\dim_H(X_G) \le s$ by Proposition~\ref{prop-mass}(ii). \qed \section{Generalization} Here we state a generalization of Theorem~\ref{prop-gold} to the case of arbitrary multiplicative subshifts of finite type; the proof can be found in \cite{KPS}. \medskip \begin{theorem} \label{th-main} {\rm (i)} Let $A$ be a $0$-$1$ primitive $m\times m$ matrix (i.e.\ some power of $A$ has only positive entries). Consider $ \Xi_A = \Bigl\{ x = \sum_{k=1}^\infty x_k m^{-k}:\ x_k \in \{0,\ldots,m-1\},\ A(x_k, x_{2k})=1 \ \mbox{for all}\ k\Bigr\}. $ Then $ \dim_H(\Xi_A) = \frac{1}{2}\log_m \sum_{i=0}^{m-1} t_i, $ where $(t_i)_{i=0}^{m-1}$ is the unique vector satisfying $ t_i^2= \sum_{j=0}^{m-1} A(i,j) t_j,\ \ t_i>1,\ i=0,\ldots,m-1. $ {\rm (ii)} The Minkowski dimension of $\Xi_A$ exists and equals $ \dim_M(\Xi_A) = \sum_{k=1}^\infty 2^{-k-1} \log_m(A^{k-1} \ov{1},\ov{1})$ where $\ov{1}=(1,\ldots,1)^T \in \R^m. $ We have $\dim_H(\Xi_A) = \dim_M(\Xi_A)$ if and only if all row sums of $A$ are equal. 
\end{theorem} \section*{Acknowledgements} We are grateful to J\"org Schmeling for telling us about the problem, and to Aihua Fan, Lingmin Liao, and Jihua Ma for sending us their preprint \cite{Fan} prior to publication. The research of R. K. and B. S. was supported in part by NSF.
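As a consistency check between the two statements, for the golden-mean matrix $A$ (with $m=2$) the value produced by Theorem~\ref{th-main}(i) must agree with Theorem~\ref{prop-gold}. A fixed-point iteration for the system $t_i^2=\sum_j A(i,j)t_j$ (plain Python sketch; the iteration scheme is ours, chosen for illustration, not taken from \cite{KPS}) confirms this:

```python
import math

def dim_H_from_matrix(A, m=2, iters=200):
    """Solve t_i^2 = sum_j A(i,j) t_j with t_i > 1 by the fixed-point
    iteration t_i <- sqrt(sum_j A(i,j) t_j), then return
    (1/2) log_m (sum_i t_i), as in Theorem 3.1(i)."""
    n = len(A)
    t = [2.0] * n
    for _ in range(iters):
        t = [math.sqrt(sum(A[i][j] * t[j] for j in range(n)))
             for i in range(n)]
    return 0.5 * math.log(sum(t), m)

# Golden-mean matrix: A(i,j) = 1 unless i = j = 1, so A(x_k, x_{2k}) = 1
# is exactly the condition x_k * x_{2k} = 0, i.e. Xi_A = Xi_G.
A = [[1, 1],
     [1, 0]]
dim = dim_H_from_matrix(A)
p = 2.0 ** (-dim)                      # Theorem 1.1: dim_H(Xi_G) = -log_2 p
print(round(dim, 5))                   # 0.81137
print(abs(p**3 - (1 - p)**2) < 1e-9)   # True: p solves p^3 = (1-p)^2
```

Since the row sums of this $A$ are $2$ and $1$, the last assertion of Theorem~\ref{th-main} also recovers the strict inequality $\dim_H(\Xi_G)<\dim_M(\Xi_G)$.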
\section{Introduction}\label{S:intro} In this work, we consider the local dynamics of periodic traveling wave solutions, i.e. wave trains, in reaction diffusion systems of the form \begin{equation}\label{e:rd} u_t = u_{xx} + f(u),~~x\in{\mathbb{R}},~~t\geq 0,~~u\in{\mathbb{R}}^n \end{equation} where $n\in{\mathbb{N}}$ and $f:{\mathbb{R}}^n\to{\mathbb{R}}^n$ is a $C^K$-smooth nonlinearity for some $K\geq 3$. Such systems arise naturally in many areas of applied mathematics, and the behavior of such wave train solutions when subject to a variety of classes of perturbations has been studied intensively over the last decade. Most commonly in the literature, one studies the stability and instability of such periodic traveling waves to perturbations which are \emph{localized}, i.e. integrable on the line, or which are \emph{nonlocalized}, accounting for asymptotic phase differences at infinity. See, for example, \cite{DSSS,JNRZ_13_1,JNRZ_13_2,JZ_11_1,SSSU,SW15} and references therein. Here, we consider the stability and long-time dynamics of $T$-periodic traveling wave solutions of \eqref{e:rd} when subjected to $NT$-periodic, i.e. \emph{subharmonic}, perturbations for some $N\in{\mathbb{N}}$. More precisely, suppose that $u(x,t)=\phi(k(x-ct))$ is a periodic traveling wave solution of \eqref{e:rd} with period $T=1/k$, where we choose $k\in{\mathbb{R}}$ so that the profile $\phi\in H^1_{\rm loc}({\mathbb{R}})$ is a $1$-periodic stationary solution of \begin{equation}\label{e:RDE_trav} k u_t - kc u_x = k^2 u_{xx} + f(u), \end{equation} i.e. it satisfies the profile equation \begin{equation}\label{e:profile} k^2\phi''+kc\phi'+f(\phi)=0. 
\end{equation} Given such a solution, note that a function of the form $u(x,t)=\phi(x)+v(x,t)$ is a solution of \eqref{e:RDE_trav} provided it satisfies a system of the form \begin{equation}\label{e:rd_nonlinear} kv_t=k\mathcal{L}[\phi]v+\mathcal{N}(v), \end{equation} where here $\mathcal{N}(v)$ is at least quadratic in $v$ and $\mathcal{L}[\phi]$ is the linear differential operator \[ k\mathcal{L}[\phi]:=k^2\partial_x^2+kc\partial_x+Df(\phi). \] Naturally, the domain of the operator $\mathcal{L}[\phi]$ is determined by the chosen class of perturbations $v$ of the underlying standing wave $\phi$ and, as mentioned above, several choices are available in the literature. As we are interested in subharmonic perturbations, i.e. perturbations with period $N\in{\mathbb{N}}$, we consider $\mathcal{L}[\phi]$ as a closed, densely defined linear operator acting on $L^2_{\rm per}(0,N)$ with $1$-periodic coefficients. The stability analysis of periodic waves to such subharmonic perturbations naturally relies on a detailed understanding of the spectrum of $\mathcal{L}[\phi]$ acting on $L^2_{\rm per}(0,N)$. To describe the $N$-periodic spectrum of $\mathcal{L}[\phi]$, we begin by introducing the notion of spectral stability that will be used throughout this work. 
\begin{definition}\label{D:specstab} A $1$-periodic stationary solution $\phi\in H^1_{\rm loc}({\mathbb{R}})$ of \eqref{e:RDE_trav} is said to be \emph{diffusively spectrally stable} provided the following conditions hold: \begin{itemize} \item[(i)] The spectrum of the linear operator $\mathcal{L}[\phi]$ acting on $L^2({\mathbb{R}})$ satisfies \[ \sigma_{L^2({\mathbb{R}})}\left(\mathcal{L}[\phi]\right)\subset\left\{\lambda\in{\mathbb{C}}:\Re(\lambda)<0\right\}\cup\{0\}; \] \item[(ii)] There exists a $\theta>0$ such that for any $\xi\in[-\pi,\pi)$ the real part of the spectrum of the Bloch operator $\mathcal{L}_\xi[\phi]$ acting on $L^2_{\rm per}(0,1)$ satisfies \[ \Re\left(\sigma_{L^2_{\rm per}(0,1)}\left(\mathcal{L}_\xi[\phi]\right)\right)\leq-\theta\xi^2; \] \item[(iii)] $\lambda=0$ is a simple eigenvalue of $\mathcal{L}_0[\phi]$ with associated eigenfunction $\phi'$. \end{itemize} \end{definition} Since the pioneering work of Schneider \cite{S96,S98_1}, the above notion of spectral stability has been taken as the standard spectral assumption in nonlinear stability results for periodic traveling/standing waves in reaction diffusion systems. Specifically, the above notion of spectral stability is sufficiently strong to allow one to immediately conclude important details regarding the nonlinear dynamics of $\phi$ under localized, or general bounded, perturbations, including long-time asymptotics of the associated modulation functions. For more information, see \cite{DSSS,JNRZ_13_1,JNRZ_13_2,SSSU,SW15} and references therein. \begin{remark} Note that the assumption of simplicity of the eigenvalue $\lambda=0$ is natural since such periodic standing waves typically appear as one-parameter families parametrized only by translational invariance.
Indeed, solutions of \eqref{e:profile} are readily seen to rely (up to translation invariance) on the $n+2$ parameters $(\phi(0),k,c)$, while periodicity requires the enforcement of $n$ constraints, leaving in general a two-parameter family of $1$-periodic solutions \[ u(x-ct-x_0;c,x_0) \] which satisfy \eqref{e:profile} with $k=k(c)$. Due to the secular dependence of the frequency $k$ on the wave speed $c$, variations in $c$ do not preserve periodicity and hence, generically, it follows that one should expect the kernel of $\mathcal{L}[\phi]$ to be one-dimensional, which leads to (iii) in Definition \ref{D:specstab} above. \end{remark} Given a diffusively spectrally stable $1$-periodic traveling wave solution $\phi$ of \eqref{e:rd}, one can now easily characterize the spectrum of $\mathcal{L}[\phi]$ acting on $L^2_{\rm per}(0,N)$. Indeed, as described in Section \ref{S:bloch} below, the spectrum of $\mathcal{L}[\phi]$ acting on $L^2_{\rm per}(0,N)$ is equal to the union of the necessarily discrete\footnote{Note that since the domains of the operators $\mathcal{L}_\xi[\phi]$ are compactly contained in $L^2_{\rm per}(0,1)$, it follows that their $L^2_{\rm per}(0,1)$-spectrum is comprised entirely of isolated eigenvalues with finite multiplicities.} spectrum of the corresponding Bloch operators $\mathcal{L}_\xi[\phi]$, defined in Definition \ref{D:specstab} above, acting in $L^2_{\rm per}(0,1)$ for the discrete (finite) subset of $\xi\in[-\pi,\pi)$ such that $e^{i\xi N}=1$. It follows that diffusively spectrally stable periodic traveling waves of \eqref{e:rd} are necessarily spectrally stable to all subharmonic perturbations. In particular, for each $N\in{\mathbb{N}}$ the non-zero $N$-periodic eigenvalues of $\mathcal{L}[\phi]$ satisfy the spectral gap condition \[ \Re\left(\sigma_{L^2_{\rm per}(0,N)}\left(\mathcal{L}[\phi]\right)\setminus\{0\}\right)\leq -\delta_N \] for some constant $\delta_N>0$.
From here, using that $\mathcal{L}[\phi]$ is sectorial, it is easy to show that for each $\delta\in(0,\delta_N)$ there exists a constant $C_\delta>0$ such that \begin{equation}\label{e:lin_exp} \left\|e^{\mathcal{L}[\phi]t}\left(1-\mathcal{P}_1\right)f\right\|_{L^2_{\rm per}(0,N)}\leq C_\delta e^{-\delta t}\|f\|_{L^2_{\rm per}(0,N)} \end{equation} for all $f\in L^2_{\rm per}(0,N)$, where here $\mathcal{P}_1$ denotes the projection of $L^2_{\rm per}(0,N)$ onto the $N$-periodic kernel of $\mathcal{L}[\phi]$ spanned by $\phi'$. Equipped with this linear estimate, one can now establish the following nonlinear stability result. \begin{proposition}\label{P:sub_stab} Let $\phi\in H^1_{\rm loc}$ be a $1$-periodic stationary solution of \eqref{e:RDE_trav}. Assume that $\phi$ is diffusively spectrally stable, in the sense of Definition \ref{D:specstab} above, and, for each $N\in{\mathbb{N}}$, take $\delta_N>0$ such that \begin{equation}\label{spec_gap} \max \Re\left(\sigma_{L^2_{\rm per}(0,N)}\left(\mathcal{L}[\phi]\right)\setminus\{0\}\right)=-\delta_N \end{equation} holds. Then for each $N\in{\mathbb{N}}$, $\phi$ is asymptotically stable to subharmonic $N$-periodic perturbations. More precisely, for every $\delta\in(0,\delta_N)$ there exists an $\varepsilon=\varepsilon_\delta>0$ and a constant $C=C_\delta>0$ such that whenever $u_0\in H^1_{\rm per}(0,N)$ and $\|u_0-\phi\|_{H^1(0,N)}<\varepsilon$, then the solution $u$ of \eqref{e:RDE_trav} with initial data $u(0)=u_0$ exists globally in time and satisfies \[ \left\|u(\cdot,t)-\phi(\cdot+\sigma_\infty)\right\|_{H^1(0,N)}\leq Ce^{-\delta t}\|u_0-\phi\|_{H^1(0,N)} \] for all $t>0$, where here $\sigma_\infty=\sigma_\infty(N)$ is some constant. \end{proposition} The proof of Proposition \ref{P:sub_stab} is by now standard, and can be completed by following standard references: see, for example, \cite[Chapter 4]{KP_book}.
The main idea is that the linear estimate \eqref{e:lin_exp} suggests that if $u(x,t)$ is a solution of \eqref{e:RDE_trav} which is initially close to $\phi$ in $L^2_{\rm per}(0,N)$, then there exists a (small) time-dependent modulation function $\sigma(t)$ such that $u(x,t)$ essentially behaves for large time as \[ u(x,t)\approx \phi(x)+\sigma(t)\phi'(x)\approx \phi(x+\sigma(t)), \] corresponding to standard asymptotic (orbital) stability of $\phi$. With this insight gained from \eqref{e:lin_exp}, a straightforward nonlinear iteration scheme completes the proof of Proposition \ref{P:sub_stab}. While Proposition \ref{P:sub_stab} establishes nonlinear stability of $\phi$ in $L^2_{\rm per}(0,N)$ for each fixed $N\in{\mathbb{N}}$, it lacks uniformity in $N$ in two important (and related) aspects. Indeed, note that the exponential rate of decay $\delta$ and the allowable size of initial perturbations $\varepsilon=\varepsilon_\delta$ are both controlled completely in terms of the size of the spectral gap $\delta_N>0$. Since $\delta_N\to 0$ as $N\to\infty$, it follows that both $\delta$ and $\varepsilon$ chosen in Proposition \ref{P:sub_stab} necessarily tend to zero\footnote{Additionally, this degeneracy can be seen in the linear estimate \eqref{e:lin_exp} since both $\delta\to 0^+$ and $C_\delta\to\infty$ as $N\to\infty$.} as $N\to\infty$. With this observation in mind, it is natural to ask if one can obtain a stability result to $N$-periodic perturbations which is uniform in $N$. In such a result, one should naturally require that both the rate of decay and the size of initial perturbations be independent of $N$, thus depending only on the background wave $\phi$. This is precisely achieved in our main result. \begin{theorem}[Uniform Subharmonic Asymptotic Stability]\label{T:main} Fix\footnote{Here and throughout, $K$ encodes the regularity of the nonlinearity $f$ in \eqref{e:rd}.} $K\geq 3$.
Suppose $\phi\in H^1_{\rm loc}({\mathbb{R}})$ is a $1$-periodic stationary solution of \eqref{e:RDE_trav} that is diffusively spectrally stable, in the sense of Definition \ref{D:specstab}. There exists an $\varepsilon>0$ and a constant $C>0$ such that, for each $N\in{\mathbb{N}}$, whenever $u_0\in L^1_{\rm per}(0,N)\cap H^K_{\rm per}(0,N)$ and \[ E_0:=\left\|u_0-\phi\right\|_{L^1_{\rm per}(0,N)\cap H^K_{\rm per}(0,N)}<\varepsilon, \] there exists a function $\widetilde{\psi}(x,t)$ satisfying $\widetilde{\psi}(\cdot,0)\equiv 0$ such that the solution of \eqref{e:RDE_trav} with initial data $u(0)=u_0$ exists globally in time and satisfies \begin{equation}\label{e:result1} \left\|u\left(\cdot-\widetilde{\psi}(\cdot,t),t\right)-\phi\right\|_{H^K_{\rm per}(0,N)},~~\left\|\nabla_{x,t}\widetilde{\psi}(\cdot,t)\right\|_{H^K_{\rm per}(0,N)}\leq C E_0(1+t)^{-3/4} \end{equation} for all $t\geq 0$. Further, there exist constants $\gamma_\infty\in{\mathbb{R}}$ and $C>0$ such that for each $N\in{\mathbb{N}}$ we have \begin{equation}\label{e:result2} \left\|\widetilde{\psi}(\cdot,t)-\frac{1}{N}\gamma_\infty\right\|_{H^K_{\rm per}(0,N)}\leq C E_0(1+t)^{-1/4} \end{equation} for all $t\geq 0$. \end{theorem} \begin{remark} Using the methods in \cite{JNRZ_13_1,JZ_11_1}, the results in Theorem \ref{T:main} can easily be extended to establish uniform (in $N$) decay rates of perturbations in $L^p_{\rm per}(0,N)$ for any $2\leq p\leq \infty$ provided the initial perturbations are again sufficiently small in $L^1_{\rm per}(0,N)\cap H^K_{\rm per}(0,N)$. For simplicity, however, and to establish proof of concept, in this work we concentrate on the $L^2$-based theory only.
\end{remark} The key idea to the proof of Theorem \ref{T:main} is to use the stability theory of periodic waves of reaction diffusion equations to \emph{localized perturbations}, specifically those techniques developed in \cite{JNRZ_13_1,JZ_11_1}, as a guide for how to uniformly control the dynamics of subharmonic perturbations for large $N$. Indeed, observe the decay rates guaranteed in Theorem \ref{T:main} are precisely those predicted by considering the dynamics of such periodic wave trains to localized perturbations: see \cite{JNRZ_Invent,JNRZ_13_1,JNRZ_13_2,JZ_11_1}, for example. Formally, this should not be too surprising since, up to appropriate translations, a sequence of $N$-periodic functions may converge (locally) as $N\to\infty$ to functions in $L^2({\mathbb{R}})$. We make the above intuition precise by first following the methodology recently developed in \cite{HJP_1} in order to provide a delicate decomposition of the semigroup $e^{\mathcal{L}[\phi]t}$ acting on the space $L^2_{\rm per}(0,N)$ with $N\in{\mathbb{N}}$. This decomposition is accomplished by adapting the linear theory for localized perturbations developed in \cite{JNRZ_13_1,JZ_11_1} to the subharmonic context in order to uniformly handle the accumulation of Bloch eigenvalues near the origin as $N\to\infty$. Furthermore, our linear decomposition, which will be reviewed in Section \ref{S:lin_stab} below, not only recovers the exponential decay rates exhibited in Proposition \ref{P:sub_stab}, but also provides the uniform (in $N$) rates of decay in Theorem \ref{T:main}.
As we will see, this linear analysis predicts that if $u(x,t)$ is a solution of \eqref{e:RDE_trav} which is initially close to $\phi$ in $L^2_{\rm per}(0,N)$ then there exists a (small) space-time dependent, $N$-periodic (in $x$) modulation function $\widetilde{\psi}(x,t)$ such that $u(x,t)$ essentially behaves for large time like \[ u(x,t)\approx\phi(x)+\widetilde{\psi}(x,t)\phi'(x)\approx\phi\left(x+\widetilde{\psi}(x,t)\right), \] giving a refined insight into the long-time local dynamics near $\phi$ beyond the more standard asymptotic stability (with asymptotic phase) as in Proposition \ref{P:sub_stab}. Motivated by this initial linear analysis, we then build a nonlinear iteration scheme for subharmonic perturbations which incorporates phase modulation functions which depend on \emph{both space and time} in order to complete the proof of Theorem \ref{T:main}. The requirement that the modulation functions are spatially dependent is necessary for our method, and is fundamentally different from the methodology used in the proof of Proposition \ref{P:sub_stab}. In particular, to the authors' knowledge, this work is the first to consider spatially dependent modulation functions in the context of periodic perturbations. Furthermore, Theorem \ref{T:main} is the first result to obtain stability results for periodic waves to subharmonic perturbations that are uniform in the period of the perturbation. \begin{remark} As indicated above, the strategy for proving our subharmonic results follows the stability analyses \cite{JNRZ_13_1,JZ_11_1} for localized perturbations of periodic wave trains in reaction diffusion systems. In the localized case, the origin is always a part of the essential spectrum of the linearized operator, leading one to introduce space-time dependent modulation functions.
In the subharmonic case, however, the origin is an isolated simple eigenvalue for each fixed $N\in{\mathbb{N}}$, and using time-dependent modulations only leads to results such as Proposition \ref{P:sub_stab}. In order to achieve the proof of Theorem \ref{T:main}, we will rely on a combination of these approaches, using an $N$-dependent time-modulation function to account for the isolated eigenvalue at the origin, while simultaneously using a space-time modulation to account for the accumulation of spectrum near the origin as $N\to\infty$. \end{remark} Next, we point out an important corollary of Theorem \ref{T:main}. In particular, since the decay rates in Theorem \ref{T:main} are sufficiently fast, we can obtain the following result, which accounts for only a time-dependent modulation at the cost of slower uniform decay rates. Note that while the result uses only time-dependent modulations, the proof requires the use of space-time dependent modulation functions. \begin{corollary}\label{C:main} Under the hypotheses of Theorem \ref{T:main}, there exists an $\varepsilon>0$ and a constant $C>0$ such that, for each $N\in{\mathbb{N}}$, whenever $u_0\in L^1_{\rm per}(0,N)\cap H^{K}_{\rm per}(0,N)$ and $E_0<\varepsilon$, there exists a function $\gamma(t)$ satisfying $\gamma(0)=0$ such that the solution $u$ of \eqref{e:RDE_trav} with initial data $u(0)=u_0$ exists globally in time and satisfies
\begin{equation}\label{e:result3}
\left\|u\left(\cdot-\frac{1}{N}\gamma(t),t\right)-\phi\right\|_{H^K_{\rm per}(0,N)}\leq C E_0(1+t)^{-1/4}
\end{equation}
for all $t>0$. Further, the time-dependent modulation function $\gamma(t)$ satisfies
\[
\left|\gamma_t(t)\right|\leq CE_0(1+t)^{-3/2}
\]
and hence, in particular, there exists a $\gamma_\infty\in{\mathbb{R}}$ such that
\[
\left|\gamma(t)-\gamma_\infty\right|\leq CE_0(1+t)^{-1/2}
\]
for all $t>0$. Consequently,
\[
\left\|u\left(\cdot,t\right)-\phi\left(\cdot+\frac{1}{N}\gamma_\infty\right)\right\|_{H^K_{\rm per}(0,N)}\leq CE_0 (1+t)^{-1/4}
\]
for all $t>0$. \end{corollary} \begin{remark} Comparing Corollary \ref{C:main} with Proposition \ref{P:sub_stab}, we see that one necessarily has the relationship $\sigma_\infty(N) = \frac{1}{N}\gamma_\infty$, establishing a direct correspondence between the ($N$-dependent) asymptotic phase shifts. Further, we note that $\gamma_\infty$ is the same in both Theorem \ref{T:main} and Corollary \ref{C:main}. \end{remark} Our last result combines the results of Corollary \ref{C:main} with Proposition \ref{P:sub_stab} in order to obtain a nonlinear stability result allowing a uniform (in $N$) size of initial perturbations with (eventual) exponential rates of decay. \begin{corollary}\label{C:min_thm} Under the hypotheses of Theorem \ref{T:main}, there exists an $\varepsilon>0$ and a constant $C>0$ such that, for each $N\in{\mathbb{N}}$ and $\delta\in (0,\delta_N)$, with $\delta_N$ as in \eqref{spec_gap}, whenever $u_0\in L^1_{\rm per}(0,N)\cap H^K_{\rm per}(0,N)$ with $E_0<\varepsilon$, there exist a $T_\delta>0$ and a constant $M_\delta>0$ such that
\[
\left\|u(\cdot,t)-\phi\left(\cdot+\frac{1}{N}\gamma_\infty\right)\right\|_{H^1_{\rm per}(0,N)}\leq \left\{\begin{aligned} &CE_0(1+t)^{-1/4},~~{\rm for}~~0<t\leq T_\delta\\ &M_\delta E_0 e^{-\delta t},~~{\rm for}~~t>T_\delta. \end{aligned}\right.
\]
\end{corollary} The above corollary has a few important features to highlight. First, we emphasize that $\varepsilon$, the size of the initial perturbation above, is independent of both $N$ and the choice $\delta\in(0,\delta_N)$. In particular, this establishes a uniform size on the domain of attraction for perturbations to (eventually) exhibit exponential decay. This is in stark contrast to Proposition \ref{P:sub_stab}, which requires $\varepsilon_\delta\to 0$ as $\delta\to 0$.
Secondly, we note that the length of time one must wait to observe exponential decay, quantified by $T_\delta$ above, necessarily satisfies $T_\delta\to \infty$ as $\delta\to 0$; hence, it is not uniform in $N$. Nevertheless, Corollary \ref{C:min_thm} upgrades the long-time behavior of Proposition \ref{P:sub_stab} allowing for a uniform size of initial perturbations. Interestingly, Corollary \ref{C:min_thm} can be easily seen, at least at the linear level, directly from our forthcoming decomposition of the semigroup $e^{\mathcal{L}[\phi]t}$: see Remark \ref{R:riemann_sum1} in Section \ref{S:lin_stab} below.

The outline of the paper is as follows. In Section \ref{S:prelim} we review several preliminary results, including a review in Section \ref{S:bloch} of Floquet-Bloch theory in the context of $N$-periodic function spaces. This will provide us with a characterization of $N$-periodic eigenvalues of the $1$-periodic coefficient differential operator $\mathcal{L}[\phi]$ in terms of the associated Bloch operators. We further collect several properties of the Bloch operators and their associated semigroups. In Section \ref{S:spec_semigrp}, we establish basic decay properties of the Bloch semigroups arising as a result of the diffusive spectral stability assumption. In Section \ref{S:lin_stab}, we establish our key linear estimates by providing a delicate decomposition of the semigroup $e^{\mathcal{L}[\phi]t}$ acting on $L^2_{\rm per}(0,N)$, which allows us to identify polynomial decay rates on the linear evolution which are \emph{uniform in $N$}: see Proposition \ref{P:lin_est}. These linear estimates form the backbone for our nonlinear analysis, which is detailed in Section \ref{S:nlin_stab}.
In Section \ref{S:nlin_decomp}, we use intuition gained from the linear estimates of Section \ref{S:lin_stab} to introduce an appropriate nonlinear decomposition of a small $L^2_{\rm per}(0,N)$ neighborhood of the underlying diffusively stable $1$-periodic wave $\phi$, and we develop appropriate perturbation equations satisfied by the corresponding perturbation and modulation functions. In Section \ref{S:nlin_iteration}, we apply a nonlinear iteration scheme to the system of perturbation equations obtained in Section \ref{S:nlin_decomp} and present the proofs of Theorem \ref{T:main} and its corollaries stated above. Finally, proofs of some technical results from Section \ref{S:lin_stab} are provided in the Appendix. \\ \noindent {\bf Acknowledgments:} The work of MAJ was partially funded by the NSF under grant DMS-16-14785, as well as by the Simons Foundation Collaboration grant number 714021. The authors are also grateful to the referees for their many helpful suggestions. Finally, we thank Prof. Guido Schneider for initial discussions regarding Corollary \ref{C:min_thm}. \section{Preliminaries}\label{S:prelim} In this section, we review several preliminary results. First, to aid in our description of the spectrum of the linearization $\mathcal{L}[\phi]$, we review general results from Floquet-Bloch theory as applied to subharmonic perturbations. From this, we establish some elementary semigroup estimates for the associated Bloch operators. Throughout the remainder of the paper, for notational convenience, we set for each $N\in{\mathbb{N}}$ and $p\geq 1$
\[
L^p_N:=L^p_{\rm per}(0,N).
\]
\subsection{Floquet-Bloch Theory for Subharmonic Perturbations}\label{S:bloch} Motivated by Floquet-Bloch theory for linear differential operators with periodic coefficients acting on $L^2({\mathbb{R}})$ (see \cite{G93,JNRZ_Invent,RS4}, for example), we review a modification of this theory (restricted to the present reaction-diffusion context) for the study of subharmonic perturbations\footnote{See also \cite{HJP_1} for more information regarding this subharmonic extension.}. Suppose that $\phi$ is a $1$-periodic stationary solution of \eqref{e:RDE_trav}, and consider the linearized operator $\mathcal{L}[\phi]$. Since the coefficients of $\mathcal{L}[\phi]$ are $1$-periodic, Floquet theory implies that for each $\lambda\in{\mathbb{C}}$ any non-trivial solution of the ordinary differential equation
\[
\mathcal{L}[\phi]v=\lambda v
\]
cannot be integrable on ${\mathbb{R}}$ and that, at best, it can be a bounded function of the form
\begin{equation}\label{e:floquet_form}
v(x)=e^{i\xi x}w(x)
\end{equation}
for some $\xi\in[-\pi,\pi)$ and non-trivial function $w\in L^2_{\rm per}(0,1)$. For a given $N\in{\mathbb{N}}$, setting
\[
\Omega_N:=\left\{\xi\in[-\pi,\pi):e^{i\xi N}=1\right\}
\]
we see from \eqref{e:floquet_form} that the perturbation $v$ satisfies $N$-periodic boundary conditions if and only if $\xi\in\Omega_N$. In particular, it can be shown that $\lambda\in{\mathbb{C}}$ belongs to the $L^2_N$-spectrum of $\mathcal{L}[\phi]$ if and only if there exists a $\xi\in\Omega_N$ and a non-trivial $w\in L^2_{\rm per}(0,1)$ such that
\[
\lambda w=e^{-i\xi x}\mathcal{L}[\phi]e^{i\xi x}w=:\mathcal{L}_\xi[\phi]w.
\]
The operators $\mathcal{L}_\xi[\phi]$ are known as the Bloch operators associated to $\mathcal{L}[\phi]$, and the parameter $\xi$ is referred to as the Bloch frequency.
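To make the Bloch-operator construction concrete, consider the following constant-coefficient toy computation (our own illustration; it plays no role in the analysis): for $\mathcal{L}=\partial_x^2$, viewed as a differential operator with ($1$-periodic) constant coefficients, one has
\[
\mathcal{L}_\xi=e^{-i\xi x}\partial_x^2 e^{i\xi x}=\left(\partial_x+i\xi\right)^2,
\]
with $1$-periodic eigenfunctions $w_\ell(x)=e^{2\pi i\ell x}$ and eigenvalues $-(\xi+2\pi\ell)^2$ for $\ell\in{\mathbb{Z}}$. Writing $\xi_j=2\pi j/N$ and $m=j+N\ell$, the union over $\xi\in\Omega_N$ of these eigenvalue families is
\[
\bigcup_{\xi\in\Omega_N}\left\{-(\xi+2\pi\ell)^2:\ell\in{\mathbb{Z}}\right\}=\left\{-\left(\frac{2\pi m}{N}\right)^2:m\in{\mathbb{Z}}\right\},
\]
which is exactly the familiar Fourier spectrum of $\partial_x^2$ acting on $L^2_N$.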
Note that each $\mathcal{L}_\xi[\phi]$ acts on $L^2_{\rm per}(0,1)$ with densely defined and compactly embedded domain $H^1_{\rm per}(0,1)$, and hence its spectrum consists entirely of isolated eigenvalues with finite algebraic multiplicities which, furthermore, depend continuously on $\xi$. In fact, we have the spectral decomposition
\[
\sigma_{L^2_N}\left(\mathcal{L}[\phi]\right)=\bigcup_{\xi\in\Omega_N}\sigma_{L^2_{\rm per}(0,1)}\left(\mathcal{L}_\xi[\phi]\right).
\]
This characterizes the $N$-periodic spectrum of $\mathcal{L}[\phi]$ in terms of a union of $1$-periodic eigenvalues for the Bloch operators $\{\mathcal{L}_\xi[\phi]\}_{\xi\in\Omega_N}$. \begin{remark} For definiteness, we note that the set $\Omega_N$ may be written explicitly when $N$ is even by
\[
\Omega_N=\left\{\xi_j=\frac{2\pi j}{N}:j=-\frac{N}{2},~-\frac{N}{2}+1,\ldots,\frac{N}{2}-1\right\}
\]
and when $N$ is odd by
\[
\Omega_N=\left\{\xi_j=\frac{2\pi j}{N}:j=-\frac{N-1}{2},~-\frac{N-1}{2}+1,\ldots,\frac{N-1}{2}\right\}.
\]
In particular, observe that we have $0\in\Omega_N$ and $|\Omega_N|=N$ for all $N\in{\mathbb{N}}$ and that, furthermore, $\Delta\xi_j:=\xi_j-\xi_{j-1}=\frac{2\pi}{N}$ for each appropriate $j$. \end{remark} From the above, it is clearly desirable to have the ability to decompose arbitrary functions in $L^2_N$ into superpositions of functions of the form $e^{i\xi x}w(x)$ with $\xi\in\Omega_N$ and $w\in L^2_{\rm per}(0,1)$. This is achieved by noting that a given $g\in L^2_N$ admits a Fourier series representation
\[
g(x)=\frac{1}{N}\sum_{m\in{\mathbb{Z}}}e^{2\pi imx/N}\widehat{g}\left(2\pi m/N\right)
\]
where here $\widehat{g}$ denotes the Fourier transform of $g$ on the torus given by
\begin{equation}\label{e:fourier_def}
\widehat{g}(z):=\int_{-N/2}^{N/2} e^{-izy}g(y)dy.
\end{equation} Together with the identity (valid for any $f$ for which the sum converges) \[ \sum_{m\in{\mathbb{Z}}}f\left(2\pi m/N\right)=\sum_{\xi\in\Omega_N}\sum_{\ell\in{\mathbb{Z}}}f\left(\xi+2\pi\ell\right), \] it follows that $g$ may be represented as \[ g(x)=\frac{1}{N}\sum_{\xi\in\Omega_N}\sum_{\ell\in{\mathbb{Z}}}e^{i(\xi+2\pi\ell)x}\widehat{g}\left(\xi+2\pi\ell\right). \] In particular, defining for $\xi\in\Omega_N$ the $1$-periodic Bloch transform of a function $g\in L^2_N$ as \[ \mathcal{B}_1(g)(\xi,x):=\sum_{\ell\in{\mathbb{Z}}} e^{2\pi i\ell x}\widehat{g}(\xi+2\pi\ell), \] the above yields the inverse Bloch representation formula \[ g(x)=\frac{1}{N}\sum_{\xi\in\Omega_N}e^{i\xi x}\mathcal{B}_1(g)(\xi,x), \] which is valid for all $g\in L^2_N$. Note that the function $\mathcal{B}_1(g)(\xi,\cdot)$ is clearly $1$-periodic for each $\xi\in\Omega_N$, and hence the above representation formula decomposes arbitrary $N$-periodic functions in the desired fashion. Before proceeding, we note that, in fact, the $1$-periodic Bloch transform \[ \mathcal{B}_1:L^2_N\to \ell^2\left(\Omega_N:L^2_{\rm per}(0,1)\right) \] as defined above satisfies the subharmonic Parseval identity \begin{equation}\label{e:parseval_per} \left<f,g\right>_{L^2_N}=\frac{1}{N}\sum_{\xi\in\Omega_N}\left<\mathcal{B}_1(f)(\xi,\cdot),\mathcal{B}_1(g)(\xi,\cdot)\right>_{L^2(0,1)} \end{equation} valid for all $f,g\in L^2_N$. In particular, this yields the useful identity \[ \|g\|_{L^2_N}^2=\frac{1}{N}\sum_{\xi\in\Omega_N}\left\|\mathcal{B}_1(g)(\xi,\cdot)\right\|_{L^2(0,1)}^2 \] valid for all $g\in L^2_N$, establishing that (up to normalization) $\mathcal{B}_1$ is an isometry. Furthermore, we note that \[ \mathcal{B}_1\left(\mathcal{L}[\phi]v\right)(\xi,x)=\left(\mathcal{L}_\xi[\phi]\mathcal{B}_1(v)(\xi,\cdot)\right)(x)~~{\rm and}~~ \mathcal{L}[\phi]v(x)=\frac{1}{N}\sum_{\xi\in\Omega_N} e^{i\xi x}\mathcal{L}_\xi[\phi]\mathcal{B}_1(v)(\xi,x). 
\]
and hence we may view the Bloch operators $\mathcal{L}_\xi[\phi]$ as operator-valued symbols associated to $\mathcal{L}[\phi]$ under the action of the $1$-periodic Bloch transform $\mathcal{B}_1$. Since the operator $\mathcal{L}[\phi]$ and its corresponding Bloch operators $\mathcal{L}_\xi[\phi]$ are sectorial on $L^2_N$ and $L^2_{\rm per}(0,1)$, respectively, they generate analytic semigroups on their respective function spaces and, further, it is straightforward to check that the associated semigroups satisfy
\begin{equation}\label{e:per_semigrp}
\mathcal{B}_1\left(e^{\mathcal{L}[\phi]t}v\right)(\xi,x)=\left(e^{\mathcal{L}_\xi[\phi]t}\mathcal{B}_1(v)(\xi,\cdot)\right)(x)~~{\rm and}~~ e^{\mathcal{L}[\phi]t}v(x)=\frac{1}{N}\sum_{\xi\in\Omega_N} e^{i\xi x}e^{\mathcal{L}_\xi[\phi]t}\mathcal{B}_1(v)(\xi,x).
\end{equation}
Combined with \eqref{e:parseval_per}, this latter identity allows us to conclude information about the semigroup $e^{\mathcal{L}[\phi]t}$ acting on $L^2_N$ by synthesizing (over $\xi\in\Omega_N$) information about the Bloch semigroups $e^{\mathcal{L}_\xi[\phi]t}$ acting on $L^2_{\rm per}(0,1)$. This decomposition is key to our forthcoming linear analysis. Finally, we end by recalling the following useful identity. \begin{lemma}\label{L:per_factor_lemma} Let $N\in{\mathbb{N}}$. If $f\in L^2_{\rm per}(0,1)$ and $g\in L^2_N$, then
\[
\mathcal{B}_1(fg)(\xi,x)=f(x)\mathcal{B}_1(g)(\xi,x).
\]
In particular, for such $f$ and $g$ we have the identity
\[
\left<f,g\right>_{L^2_N}=\left<f,\mathcal{B}_1(g)(0,\cdot)\right>_{L^2(0,1)}.
\]
\end{lemma} The proof of Lemma \ref{L:per_factor_lemma} is straightforward and can be found in \cite{HJP_1}.
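The frequency-splitting identity used above to derive the inverse Bloch representation amounts to the unique decomposition $m=j+N\ell$ of integer frequencies, with $\xi_j=2\pi j/N\in\Omega_N$. A quick numerical sanity check with a Gaussian test function (our own illustration, not part of the paper) confirms both this identity and the stated properties of $\Omega_N$:

```python
import math

def f(x):
    # Rapidly decaying test function, so all sums below converge quickly.
    return math.exp(-x * x)

for N in (4, 5, 8):
    # Omega_N as in the text: xi_j = 2*pi*j/N, j = -floor(N/2), ..., N - floor(N/2) - 1.
    xis = [2 * math.pi * j / N for j in range(-(N // 2), N - (N // 2))]
    assert len(xis) == N and 0.0 in xis               # |Omega_N| = N and 0 in Omega_N
    assert all(-math.pi <= xi < math.pi for xi in xis)

    # Direct sum over all frequencies 2*pi*m/N (truncated; f is negligible beyond).
    direct = sum(f(2 * math.pi * m / N) for m in range(-50 * N, 50 * N + 1))
    # Split sum: each integer m decomposes uniquely as m = j + N*l.
    split = sum(f(xi + 2 * math.pi * l) for xi in xis for l in range(-50, 51))
    assert abs(direct - split) < 1e-12
```

The same reindexing is what converts the Fourier series of $g\in L^2_N$ into the sum over $\xi\in\Omega_N$ of $1$-periodic pieces.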
\subsection{Diffusive Spectral Stability \& Properties of Semigroups}\label{S:spec_semigrp} With the above characterization of the $L^2_N$-spectrum of the linearized operator $\mathcal{L}[\phi]$ about a $1$-periodic stationary solution $\phi$ of \eqref{e:RDE_trav}, we can now provide some immediate consequences of the diffusive spectral stability assumption in Definition \ref{D:specstab}. Specifically, in our present subharmonic context we note that if $\phi$ is such a diffusively spectrally stable stationary solution of \eqref{e:RDE_trav}, then for each $N\in{\mathbb{N}}$ there exists a $\delta_N>0$ such that
\[
\Re\left(\sigma_{L^2_N}\left(\mathcal{L}[\phi]\right)\setminus\{0\}\right)\leq-\delta_N,
\]
i.e., the non-zero $N$-periodic eigenvalues of $\mathcal{L}[\phi]$ are uniformly bounded away from the imaginary axis. In particular, by standard spectral perturbation theory, we immediately have that the following spectral properties hold. \begin{lemma}[Spectral Preparation]\label{L:specprep} Suppose that $\phi$ is a $1$-periodic stationary solution of \eqref{e:RDE_trav} which is diffusively spectrally stable. Then the following properties hold. \begin{itemize} \item[(i)] For any fixed $\xi_0\in(0,\pi)$, there exists a constant $\delta_0>0$ such that
\[
\Re\left(\sigma\left(\mathcal{L}_\xi[\phi]\right)\right)<-\delta_0
\]
for all $\xi\in[-\pi,\pi)$ with $|\xi|>\xi_0$.
\item[(ii)] There exist positive constants $\xi_1$ and $\delta_1$ such that for any $|\xi|<\xi_1$, the spectrum of $\mathcal{L}_\xi[\phi]$ decomposes into two disjoint subsets
\[
\sigma\left(\mathcal{L}_\xi[\phi]\right)=\sigma_-\left(\mathcal{L}_\xi[\phi]\right)\bigcup\sigma_0\left(\mathcal{L}_\xi[\phi]\right)
\]
with the following properties: \begin{itemize} \item[(a)] $\Re~\sigma_-\left(\mathcal{L}_\xi[\phi]\right)<-\delta_1$ and $\Re~\sigma_0\left(\mathcal{L}_\xi[\phi]\right)>-\delta_1$; \item[(b)] the set $\sigma_0\left(\mathcal{L}_\xi[\phi]\right)$ consists of a single eigenvalue $\lambda_c(\xi)$ which is analytic in $\xi$ and expands as
\[
\lambda_c(\xi)=ia\xi-d\xi^2+\mathcal{O}(\xi^3)
\]
for $|\xi|\ll 1$ and some constants $a\in{\mathbb{R}}$ and $d>0$; \item[(c)] the eigenfunction associated to $\lambda_c(\xi)$ is analytic near $\xi=0$ and expands as
\[
\Phi_\xi(x)=\phi'(x)+\mathcal{O}(\xi)
\]
for $|\xi|\ll 1$. \end{itemize} \end{itemize} \end{lemma} The proof of (i) follows immediately from the properties (i) and (ii) in Definition \ref{D:specstab}, while the second part follows since $\lambda=0$ is a simple eigenvalue of the co-periodic operator $\mathcal{L}_0[\phi]$ and the coefficients of $\mathcal{L}_\xi[\phi]$ depend analytically on $\xi$. With the above spectral preparation result in hand, we now record some key induced features of the associated semigroups. These estimates are immediate consequences of Lemma \ref{L:specprep} and the fact that the Bloch operators are sectorial when acting on $L^2_{\rm per}(0,1)$. \begin{proposition}\label{P:hfexp_decay_est} Suppose that $\phi$ is a $1$-periodic stationary solution of \eqref{e:RDE_trav} which is diffusively spectrally stable. Then the following properties hold.
\begin{itemize} \item[(i)] For any fixed $\xi_0\in(0,\pi)$, there exist positive constants $C_0$ and $d_0$ such that
\[
\left\|e^{\mathcal{L}_\xi[\phi]t}\right\|_{B(L^2_{\rm per}(0,1))}\leq C_0 e^{-d_0 t}
\]
valid for all $t\geq 0$ and all $\xi\in[-\pi,\pi)$ with $|\xi|>\xi_0$. \item[(ii)] With $\xi_1$ chosen as in Lemma \ref{L:specprep}, there exist positive constants $C_1$ and $d_1$ such that for any $|\xi|<\xi_1$, if $\Pi(\xi)$ denotes the (rank-one) spectral projection onto the eigenspace associated to $\lambda_c(\xi)$ given by Lemma \ref{L:specprep}(ii), then
\[
\left\|e^{\mathcal{L}_\xi[\phi]t}\left(1-\Pi(\xi)\right)\right\|_{B(L^2_{\rm per}(0,1))}\leq C_1 e^{-d_1 t}
\]
for all $t\geq 0$. \end{itemize} \end{proposition} Coupled with an appropriate decomposition of $e^{\mathcal{L}[\phi]t}$, the above linear estimates form the core of our forthcoming linear analysis (which, in turn, forms the backbone of our nonlinear iteration scheme). \section{Uniform Subharmonic Linear Estimates}\label{S:lin_stab} We begin our analysis by obtaining decay rates on the semigroup $e^{\mathcal{L}[\phi]t}$ acting on classes of subharmonic perturbations in $L^2_N$ which are uniform in $N$. This analysis is based on a delicate decomposition of the semigroup. In particular, we use \eqref{e:per_semigrp} to study the action of $e^{\mathcal{L}[\phi]t}$ on $L^2_N$ in terms of the associated Bloch operators, which is accomplished by separating the semigroup into appropriate critical frequency and non-critical frequency components. Note that, due to Lemma \ref{L:specprep}, we expect the ``critical frequency'' component to be dominated by the translational mode $\phi'$. This decomposition was recently carried out in detail (in a related context) in \cite{HJP_1}, and for completeness we review it here. Note the decomposition is heavily motivated by the corresponding decomposition used in the case of localized perturbations: see \cite{JNRZ_Invent,JNRZ_13_1}.
To begin, let $\xi_1\in(0,\pi)$ be defined as in Lemma \ref{L:specprep} and let $\rho$ be a smooth cutoff function satisfying $\rho(\xi)=1$ for $|\xi|<\frac{\xi_1}{2}$ and $\rho(\xi)=0$ for $|\xi|>\xi_1$. For a given $v\in L^2_N$, we use \eqref{e:per_semigrp} to decompose $e^{\mathcal{L}[\phi]t}$ into low-frequency and high-frequency components as
\begin{equation}\label{e:lf_hf_decomp}
\begin{aligned}
e^{\mathcal{L}[\phi]t}v(x)&=\frac{1}{N}\sum_{\xi\in\Omega_N}\rho(\xi)e^{i\xi x}e^{\mathcal{L}_\xi[\phi]t}\mathcal{B}_1(v)(\xi,x)+ \frac{1}{N}\sum_{\xi\in\Omega_N}\left(1-\rho(\xi)\right)e^{i\xi x}e^{\mathcal{L}_\xi[\phi]t}\mathcal{B}_1(v)(\xi,x)\\
&=S_{lf,N}(t)v(x) + S_{hf,N}(t)v(x).
\end{aligned}
\end{equation}
Using Proposition \ref{P:hfexp_decay_est} and the subharmonic Parseval identity \eqref{e:parseval_per}, it follows that there exist constants $C,\eta>0$, both independent of $N$, such that
\begin{align*}
\left\|S_{hf,N}(t)v\right\|_{L^2_N}^2&=\frac{1}{N}\sum_{\xi\in\Omega_N}\left\|(1-\rho(\xi))e^{\mathcal{L}_\xi[\phi]t}\mathcal{B}_1(v)(\xi,\cdot)\right\|_{L^2(0,1)}^2\\
&\leq\frac{1}{N}\sum_{\xi\in\Omega_N}(1-\rho(\xi))^2\left\|e^{\mathcal{L}_\xi[\phi]t}\right\|_{B(L^2(0,1))}^2\left\|\mathcal{B}_1(v)(\xi,\cdot)\right\|_{L^2(0,1)}^2\\
&\leq Ce^{-2\eta t}\left(\frac{1}{N}\sum_{\xi\in\Omega_N}\left\|\mathcal{B}_1(v)(\xi,\cdot)\right\|_{L^2(0,1)}^2\right),
\end{align*}
which, again using Parseval's identity \eqref{e:parseval_per}, yields the exponential decay estimate
\begin{equation}\label{e:exp_est1}
\left\|S_{hf,N}(t)v\right\|_{L^2_N}\leq Ce^{-\eta t}\|v\|_{L^2_N}.
\end{equation}
For the low-frequency component, for each $|\xi|<\xi_1$ define the rank-one spectral projection onto the critical mode of $\mathcal{L}_\xi[\phi]$ by
\begin{equation}\label{e:spec_proj}
\left\{\begin{aligned}
&\Pi(\xi):L^2_{\rm per}(0,1)\to{\rm ker}\left(\mathcal{L}_\xi[\phi]-\lambda_c(\xi)I\right)\\
&\Pi(\xi)g(x)=\left<\widetilde{\Phi}_\xi,g\right>_{L^2(0,1)}\Phi_\xi(x)
\end{aligned}\right.
\end{equation}
where here $\widetilde{\Phi}_\xi$ denotes the element of the kernel of the adjoint $\mathcal{L}_\xi[\phi]^\dag-\overline{\lambda_c(\xi)}I$ satisfying the normalization condition $\left<\widetilde{\Phi}_\xi,\Phi_\xi\right>_{L^2(0,1)}=1$. The low-frequency operator $S_{lf,N}$ can thus be further decomposed into the contribution from the critical mode and the contribution from low-frequency spectrum bounded away from $\lambda=0$ via
\begin{equation}\label{e:lf_c_decomp}
\begin{aligned}
S_{lf,N}(t)v(x)&=\frac{1}{N}\sum_{\xi\in\Omega_N}\rho(\xi)e^{i\xi x}e^{\mathcal{L}_\xi[\phi]t}\Pi(\xi)\mathcal{B}_1(v)(\xi,x) +\frac{1}{N}\sum_{\xi\in\Omega_N}\rho(\xi)e^{i\xi x}e^{\mathcal{L}_\xi[\phi]t}\left(1-\Pi(\xi)\right)\mathcal{B}_1(v)(\xi,x)\\
&=:S_{c,N}(t)v(x)+\widetilde{S}_{lf,N}(t)v(x).
\end{aligned}
\end{equation}
As with the exponential estimate \eqref{e:exp_est1}, Proposition \ref{P:hfexp_decay_est} implies, by possibly choosing $\eta>0$ smaller, that there exists a constant $C>0$ independent of $N$ such that
\begin{equation}\label{e:exp_est2}
\left\|\widetilde{S}_{lf,N}(t)v\right\|_{L^2_N}\leq Ce^{-\eta t}\|v\|_{L^2_N}.
\end{equation}
For the critical component $S_{c,N}$, note by Lemma \ref{L:specprep}(ii) that we can write
\begin{align*}
S_{c,N}(t)v(x)&=\frac{1}{N}e^{\mathcal{L}_0[\phi]t}\Pi(0)\mathcal{B}_1(v)(0,x) +\frac{1}{N}\sum_{\xi\in\Omega_N\setminus\{0\}}\rho(\xi)e^{i\xi x}e^{\mathcal{L}_\xi[\phi]t}\Pi(\xi)\mathcal{B}_1(v)(\xi,x)\\
&=\frac{1}{N}\phi'(x)\left<\widetilde{\Phi}_0,\mathcal{B}_1(v)(0,\cdot)\right>_{L^2(0,1)} +\frac{1}{N}\sum_{\xi\in\Omega_N\setminus\{0\}}\rho(\xi)e^{i\xi x}e^{\lambda_c(\xi)t}\Phi_\xi(x)\left<\widetilde{\Phi}_\xi,\mathcal{B}_1(v)(\xi,\cdot)\right>_{L^2(0,1)}
\end{align*}
and hence, recalling Lemma \ref{L:per_factor_lemma} and expanding $\Phi_\xi$,
\begin{align*}
S_{c,N}(t)v(x)&=\frac{1}{N}\phi'(x)\left<\widetilde{\Phi}_0,v\right>_{L^2_N} +\phi'(x)\frac{1}{N}\sum_{\xi\in\Omega_N\setminus\{0\}}\rho(\xi)e^{i\xi x}e^{\lambda_c(\xi)t}\left<\widetilde{\Phi}_\xi,\mathcal{B}_1(v)(\xi,\cdot)\right>_{L^2(0,1)}\\
&\quad + \frac{1}{N}\sum_{\xi\in\Omega_N\setminus\{0\}}\rho(\xi)e^{i\xi x}(i\xi)e^{\lambda_c(\xi)t} \left(\frac{\Phi_\xi(x)-\phi'(x)}{i\xi}\right)\left<\widetilde{\Phi}_\xi,\mathcal{B}_1(v)(\xi,\cdot)\right>_{L^2(0,1)}\\
&=:\frac{1}{N}\phi'(x)\left<\widetilde{\Phi}_0,v\right>_{L^2_N}+\phi'(x)s_{p,N}(t)v(x)+\widetilde{S}_{c,N}(t)v(x).
\end{align*}
Taken together, it follows that the linear solution operator $e^{\mathcal{L}[\phi]t}$ can be decomposed as
\begin{equation}\label{e:lin_decomp1}
e^{\mathcal{L}[\phi]t}v(x)=\frac{1}{N}\phi'(x)\left<\widetilde{\Phi}_0,v\right>_{L^2_N}+\phi'(x)s_{p,N}(t)v(x)+\widetilde{S}_N(t)v(x)
\end{equation}
where
\begin{equation}\label{e:sp}
s_{p,N}(t)v(x)=\frac{1}{N}\sum_{\xi\in\Omega_N\setminus\{0\}}\rho(\xi)e^{i\xi x}e^{\lambda_c(\xi)t}\left<\widetilde{\Phi}_\xi,\mathcal{B}_1(v)(\xi,\cdot)\right>_{L^2(0,1)}
\end{equation}
and
\[
\widetilde{S}_N(t)v(x)=S_{hf,N}(t)v(x)+\widetilde{S}_{lf,N}(t)v(x)+\widetilde{S}_{c,N}(t)v(x).
\]
Equipped with the above, we can establish our main set of linear estimates. \begin{proposition}[Linear Estimates]\label{P:lin_est} Suppose that $\phi$ is a $1$-periodic stationary solution of \eqref{e:RDE_trav} which is diffusively spectrally stable.
Given any $M\in{\mathbb{N}}$, there exists a constant $C>0$ such that for all $t\geq 0$, $N\in{\mathbb{N}}$ and all $0\leq l,m\leq M$ we have
\[
\left\|\partial_x^l\partial_t^m s_{p,N}(t) v\right\|_{L^2_N}\leq C (1+t)^{-1/4-(l+m)/2}\|v\|_{L^1_N}.
\]
Furthermore, there exist constants $C,\eta>0$ such that for all $t\geq 0$ and $N\in{\mathbb{N}}$ we have
\[
\left\|\widetilde{S}_N(t) v\right\|_{L^2_N}\leq C\left( (1+t)^{-3/4}\|v\|_{L^1_N}+e^{-\eta t}\left\| v\right\|_{L^2_N}\right).
\]
\end{proposition} \begin{remark} While the bounds above on the derivatives of $s_{p,N}(t)$ are largely unmotivated by our linear analysis, they will be essential in our forthcoming nonlinear theory. \end{remark} \begin{proof} First observe that, by definition of $\mathcal{B}_1$, we have
\begin{align*}
\left<\widetilde{\Phi}_\xi,\mathcal{B}_1(v)(\xi,\cdot)\right>_{L^2(0,1)}&=\int_0^1\overline{\widetilde{\Phi}_\xi(x)}\sum_{\ell\in{\mathbb{Z}}}e^{2\pi i\ell x}\widehat{v}(\xi+2\pi\ell)dx\\
&=\sum_{\ell\in{\mathbb{Z}}}\widehat{v}(\xi+2\pi\ell)\int_0^1\overline{\widetilde{\Phi}_\xi(x)} e^{2\pi i\ell x}dx\\
&=\sum_{\ell\in{\mathbb{Z}}}\widehat{v}(\xi+2\pi\ell)\overline{\widehat{\widetilde{\Phi}_\xi}(2\pi\ell)}
\end{align*}
and hence, using the fact that \eqref{e:fourier_def} implies $\|\widehat{v}\|_{L^\infty({\mathbb{R}})}\leq\|v\|_{L^1_N}$ along with the Cauchy-Schwarz inequality, it follows that
\begin{align*}
\rho(\xi)\left|\left<\widetilde{\Phi}_\xi,\mathcal{B}_1(v)(\xi,\cdot)\right>_{L^2(0,1)}\right|^2&\leq\rho(\xi)\|v\|_{L^1_N}^2 \left(\sum_{\ell\in{\mathbb{Z}}}(1+|\ell|^2)^{1/2}\left|\overline{\widehat{\widetilde{\Phi}_\xi}(2\pi\ell)}\right|(1+|\ell|^2)^{-1/2}\right)^2\\
&\leq C\|v\|_{L^1_N}^2\sup_{\xi\in[-\pi,\pi)}\left(\rho(\xi)\left\|\widetilde{\Phi}_\xi\right\|_{H^1_{\rm per}(0,1)}^2\right)
\end{align*}
valid for all $\xi\in\Omega_N$.
Using Lemma \ref{L:specprep}, it follows by Parseval's identity \eqref{e:parseval_per} that there exist constants $C,d>0$, independent of $N$, such that
\begin{align*}
\left\|\partial_x^l\partial_t^m s_{p,N}(t)v\right\|_{L^2_N}^2 &= \frac{1}{N}\sum_{\xi\in\Omega_N}\left\|\rho(\xi)(i\xi)^l\left(\lambda_c(\xi)\right)^me^{\lambda_c(\xi)t} \left<\widetilde{\Phi}_\xi,\mathcal{B}_1(v)(\xi,\cdot)\right>_{L^2(0,1)}\right\|_{L^2(0,1)}^2\\
&\leq C\|v\|_{L^1_N}^2\left(\frac{1}{N}\sum_{\xi\in\Omega_N}|\xi|^{2(l+m)}e^{-2d\xi^2 t}\right).
\end{align*}
By similar considerations, we find that
\[
\left\|\widetilde{S}_{N}(t)v\right\|_{L^2_N}^2\leq Ce^{-2\eta t}\|v\|_{L^2_N}^2+C\|v\|_{L^1_N}^2\left(\frac{1}{N}\sum_{\xi\in\Omega_N}|\xi|^2 e^{-2d\xi^2t}\right).
\]
It remains to provide uniform in $N$ decay rates on the finite sums
\begin{equation}\label{sum1}
\frac{1}{N}\sum_{\xi\in\Omega_N}|\xi|^{2(l+m)}e^{-2d\xi^2 t}~~{\rm and}~~\frac{1}{N}\sum_{\xi\in\Omega_N}|\xi|^2 e^{-2d\xi^2t}.
\end{equation}
To gain some intuition on how to uniformly bound these sums, notice that they can be interpreted as Riemann sum approximations (up to a harmless rescaling) of the integrals
\begin{equation}\label{int1}
\int_{-\pi}^{\pi} \xi^{2(l+m)}e^{-2d\xi^2 t}d\xi, \qquad \int_{-\pi}^{\pi} \xi^{2}e^{-2d\xi^2 t}d\xi,
\end{equation}
which, through an elementary scaling argument, exhibit $(1+t)^{-1/2-(l+m)}$ and $(1+t)^{-3/2}$ decay for large time, respectively. The proof that the Riemann sums are uniformly controlled by these decay rates is provided in Lemma \ref{L:sum_poly_bd} in the Appendix, which completes the proof. \end{proof} \begin{remark}\label{R:riemann_sum1} The result of Corollary \ref{C:min_thm} can be seen from the above analysis, at least at the linear level.
Indeed, following the methods in \cite[Section 5]{HJP_1} one sees that, for large $N$, the sums in \eqref{sum1} are good approximations of the respective integrals in \eqref{int1} for times up to $t=\mathcal{O}(N^2)$, corresponding to an observed polynomial decay of perturbations on such a timescale. For larger times, however, the exponential nature of the summands dominates and the sums decay monotonically to zero at exponential rates, corresponding to an exponential decay of perturbations on these longer timescales. \end{remark} Before continuing to our nonlinear analysis, we pause to interpret the above results. Suppose that $\phi$ is a $1$-periodic diffusively spectrally stable stationary solution of \eqref{e:RDE_trav}, and let $u(x,t)$ be a solution of \eqref{e:RDE_trav} with initial data $u(x,0)=\phi(x)+\varepsilon v(x)$ with $\varepsilon\ll 1$ and $v\in L^1_N\cap L^2_N$. From Proposition \ref{P:lin_est}, it follows that one may expect that the solution $u$ behaves for large time like
\begin{equation}\label{e:intuition}
\begin{aligned}
u(x,t)&\approx \phi(x)+\varepsilon e^{\mathcal{L}[\phi]t}v(x)\\
&\approx \phi(x)+\varepsilon \phi'(x)\left(\frac{1}{N}\left<\widetilde{\Phi}_0,v\right>_{L^2_N}+s_{p,N}(t)v(x)\right)\\
&\approx \phi\left(x+\varepsilon\left(\frac{1}{N}\left<\widetilde{\Phi}_0,v\right>_{L^2_N}+s_{p,N}(t)v(x)\right)\right),
\end{aligned}
\end{equation}
which is a space-time dependent phase modulation of the underlying periodic wave $\phi$. More precisely, note the phase modulation naturally decomposes into two parts: a spatially independent component coming from the projection of the perturbation onto the translational eigenvalue at the origin, and a space-time dependent component accounting for the dynamics associated to the accumulation of Bloch eigenvalues near the origin for large $N$. In the next section, we use this linear intuition to develop a nonlinear iteration scheme and complete the proof of Theorem \ref{T:main} and its corollaries.
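The uniform-in-$N$ behavior of the Riemann sums in \eqref{sum1}, including the crossover from polynomial to exponential decay described in Remark \ref{R:riemann_sum1}, is easy to verify numerically. The following sketch is our own illustration (with $d=1$, $l+m=1$, and the constant $C$ chosen by inspection), not part of the proof:

```python
import math

def riemann_sum(N, t, p, d=1.0):
    """(1/N) * sum over xi in Omega_N of |xi|^(2p) * exp(-2*d*xi^2*t)."""
    xis = [2 * math.pi * j / N for j in range(-(N // 2), N - (N // 2))]
    return sum(abs(xi) ** (2 * p) * math.exp(-2.0 * d * xi * xi * t) for xi in xis) / N

# Polynomial bound uniform in N: riemann_sum(N, t, 1) <= C * (1+t)^(-3/2)
# with a single constant C for every N and t tested below.
C = 5.0
for N in (4, 8, 32, 128):
    for t in (0.0, 1.0, 10.0, 100.0, 1000.0):
        assert riemann_sum(N, t, 1) * (1.0 + t) ** 1.5 <= C

# Around t ~ N^2 the smallest nonzero frequency |xi| = 2*pi/N dominates, and the
# sum already sits below the bare exponential exp(-2*(2*pi/N)^2 * t):
N, t = 8, 8 ** 2
assert riemann_sum(N, t, 1) < math.exp(-2.0 * (2.0 * math.pi / N) ** 2 * t)
```

Here the first loop mirrors the uniform polynomial bound of Lemma \ref{L:sum_poly_bd}, while the final assertion mirrors the eventual exponential decay on timescales $t\gtrsim N^2$.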
\section{Uniform Nonlinear Asymptotic Stability}\label{S:nlin_stab} In this section, we use the decomposition of the linearized solution operator $e^{\mathcal{L}[\phi]t}$ and the associated linear estimates in Proposition \ref{P:lin_est} to develop a nonlinear iteration scheme to complete the proof of Theorem \ref{T:main}. As discussed at the end of Section \ref{S:lin_stab}, the linear estimates in Proposition \ref{P:lin_est} suggest that if $\phi$ is a $1$-periodic diffusively spectrally stable stationary solution of \eqref{e:RDE_trav}, then $N$-periodic perturbations of $\phi$ should, for large time, behave essentially like a space-time modulated version of $\phi$. This suggests a nonlinear decomposition of $N$-periodic perturbations of $\phi$, which we develop in Section \ref{S:nlin_decomp} below. With this decomposition in hand, the proof of Theorem \ref{T:main} will be completed in Section \ref{S:nlin_iteration} through an appropriate nonlinear iteration scheme. \subsection{Nonlinear Decomposition and Perturbation Equations}\label{S:nlin_decomp} Suppose $\phi$ is a $1$-periodic diffusively spectrally stable stationary solution of \eqref{e:RDE_trav}. Motivated by the work in the previous section, we introduce a decomposition of nonlinear perturbations of the background wave $\phi$ which accounts for the critical phase-shift contribution $s_{p,N}(t)$ of the linear operator. Guided by \eqref{e:intuition}, we begin by letting $\widetilde{u}(x,t)$ be a solution of \eqref{e:RDE_trav} and define the modulated function
\begin{equation}\label{e:mod}
u(x,t):=\widetilde{u}\left(x-\frac{1}{N}\gamma(t)-\psi(x,t),t\right)
\end{equation}
where both $\gamma:{\mathbb{R}}_+\to{\mathbb{R}}$ and $\psi:{\mathbb{R}}\times{\mathbb{R}}_+\to{\mathbb{R}}$ are functions to be determined later.
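Heuristically, the ansatz \eqref{e:mod} connects directly with the linear intuition \eqref{e:intuition}. Indeed (in a purely formal, leading-order computation of our own, not a precise statement from the forthcoming analysis), setting $y=x-\frac{1}{N}\gamma(t)-\psi(x,t)$ and inverting to $x=y+\frac{1}{N}\gamma(t)+\psi(y,t)$ up to higher-order corrections, the requirement that $u$ remain close to $\phi$ becomes
\[
\widetilde{u}(y,t)\approx\phi\left(y+\frac{1}{N}\gamma(t)+\psi(y,t)\right)+\big(u-\phi\big)(y,t),
\]
i.e., the original solution $\widetilde{u}$ is, to leading order, a space-time phase modulation of $\phi$ plus a small residual.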
Taking $\widetilde{u}$ to be initially close to $\phi$ in some sense, we attempt to decompose $u$ as \begin{equation}\label{e:nlin_residual} u(x,t)=\phi(x)+v(x,t), \end{equation} where here $v$ denotes a nonlinear perturbation. Note that the form of the modulation in \eqref{e:mod} is a combination of (i) a time-dependent modulation, as one would utilize in the proof of Proposition \ref{P:sub_stab}, and (ii) a space-time dependent modulation, as is used in the study of localized perturbations of periodic waves \cite{JNRZ_13_1,JZ_11_1}. Consequently, the forthcoming nonlinear analysis is essentially a mixture of these two approaches. As a preliminary step, we derive equations that must be satisfied by the perturbation $v$ and the modulation functions $\gamma$ and $\psi$. To this end, we note that in \cite{JNRZ_13_1,JZ_11_1} it is shown through elementary, but tedious, manipulations that if $u(x,t)$ is as above then the triple $(v,\gamma,\psi)$ satisfies \begin{equation}\label{e:pert1} (k\d_t - k\mathcal{L}[\phi])\left(v+\frac{1}{N}\phi'\gamma +\phi'\psi\right) = k\widetilde{\mathcal{N}}, \quad\text{where}~ k\widetilde{\mathcal{N}} := \widetilde{\mathcal{Q}} + k\widetilde{\mathcal{R}}_x + k\widetilde{\mathcal{S}}_t + \widetilde{\mathcal{T}}, \end{equation} with \[ \widetilde{\mathcal{Q}}:= f(\phi+v) - f(\phi) - Df(\phi)v,\quad \widetilde{\mathcal{R}}:= -\psi_t v - \frac{1}{N}\gamma_t v + k \left(\frac{\psi_x}{1-\psi_x} v_x\right) + k \left(\frac{\psi_x^2}{1-\psi_x} \phi'\right), \] and \[ \widetilde{\mathcal{S}}:= \psi_x v,\quad \widetilde{\mathcal{T}}:= -\psi_x\left[f(\phi+v) - f(\phi)\right]. \] Rearranging slightly as in \cite{DS_18} to remove the temporal derivatives of the perturbation $v$ present in $\widetilde{\mathcal{N}}$ in \eqref{e:pert1} yields the following.
\begin{lemma}\label{L:nlin_pert} The nonlinear residual $v$ defined in \eqref{e:nlin_residual} and modulation functions $\gamma$ and $\psi$ in \eqref{e:mod} satisfy \begin{equation}\label{e:pert2} (k\d_t - k\mathcal{L}[\phi])\left((1-\psi_x)v+\frac{1}{N}\phi'\gamma +\phi'\psi\right) = k\mathcal{N}, \quad\text{where}~ k\mathcal{N} = \mathcal{Q} + k\mathcal{R}_x, \end{equation} where here \begin{equation} \mathcal{Q} = (1-\psi_x)\left[f(\phi+v) - f(\phi) - Df(\phi)v\right], \end{equation} and \begin{equation} \mathcal{R} = -\psi_t v - \frac{1}{N}\gamma_t v + c\psi_x v + k (\psi_x v)_x + k \left(\frac{\psi_x}{1-\psi_x} v_x\right) + k \left(\frac{\psi_x^2}{1-\psi_x} \phi'\right). \end{equation} \end{lemma} Our goal now is to obtain a closed nonlinear iteration scheme by integrating \eqref{e:pert2} and exploiting the decomposition of the linear solution operator $e^{\mathcal{L}[\phi]t}$ provided in \eqref{e:lin_decomp1}. To motivate this, we first provide an informal description of how to determine the modulation functions $\gamma$ and $\psi$ to separate out the principal nonlinear behavior. Using Duhamel's formula, we can write \eqref{e:pert2} as the implicit integral equation \[ (1-\psi_x(x,t))v(x,t)+\frac{1}{N}\phi'(x)\gamma(t) +\phi'(x)\psi(x,t) = e^{\mathcal{L}[\phi]t} v(x,0) + \int_0^t e^{\mathcal{L}[\phi](t-s)}\mathcal{N}(x,s)ds, \] with initial data $\gamma(0)=0$, $\psi(\cdot,0)=0$ and $v(\cdot,0)=\widetilde{u}(\cdot,0)-\phi(\cdot)$. Recalling that \eqref{e:lin_decomp1} implies the linear solution operator can be decomposed as \begin{equation}\label{e:lin_decomp} e^{\mathcal{L}[\phi]t}f(x) = \phi'(x)\underbrace{\left(\frac{1}{N} \LA\widetilde{\Phi}_0,f\RA_{L^2_N} + s_{p,N}(t)f(x)\right)}_{\text{phase modulation}} + \underbrace{\widetilde{S}(t)f(x)}_{\text{faster decaying residual}} \end{equation} it follows that we can remove the principal (i.e.
slowest decaying) part of the nonlinear perturbation by implicitly defining \begin{equation}\label{e:mod_slave1} \left\{\begin{aligned} &\gamma(t) \sim \LA\widetilde{\Phi}_0,v(0)\RA_{L^2_N} + \int_0^t \LA\widetilde{\Phi}_0,\mathcal{N}(s)\RA_{L^2_N} ds\\ &\psi(x,t) \sim s_{p,N}(t)v(0) + \int_0^t s_{p,N}(t-s)\mathcal{N}(s)ds, \end{aligned}\right. \end{equation} where here $\sim$ indicates equality for $t\geq 1$. This choice then yields the implicit description \begin{equation}\label{e:v1} v(x,t) \sim \psi_x(x,t)v(x,t) + \widetilde{S}(t)v(0) + \int_0^t \widetilde{S}(t-s)\mathcal{N}(s) ds \end{equation} involving only the faster decaying residual component of the linear solution operator. Note the above choices clearly cannot extend all the way to $t=0$ due to an incompatibility of these choices with the initial data on $(v,\gamma,\psi)$. Here, we choose to keep the above choices for all $t\geq 1$ while interpolating between the initial data and the right hand sides of \eqref{e:mod_slave1}-\eqref{e:v1} on the initial layer $0\leq t\leq 1$. Specifically, we let $\chi(t)$ be a smooth cutoff function that is zero for $t\leq 1/2$ and one for $t\geq 1$, and define the modulation functions $\gamma$ and $\psi$ implicitly for all $t\geq 0$ as \begin{equation}\label{e:mod_slave2} \left\{\begin{aligned} &\gamma(t) = \chi(t)\left[ \LA\widetilde{\Phi}_0,v(0)\RA_{L^2_N} + \int_0^t \LA\widetilde{\Phi}_0,\mathcal{N}(s)\RA_{L^2_N} ds\right]\\ &\psi(x,t) =\chi(t)\left[ s_{p,N}(t)v(0) + \int_0^t s_{p,N}(t-s)\mathcal{N}(s)ds\right], \end{aligned}\right. \end{equation} leaving the system \begin{equation}\label{e:v2} \begin{aligned} v(x,t) &= \left(1-\chi(t)\right)\left[e^{\mathcal{L}[\phi]t}v(x,0)+\int_0^t e^{\mathcal{L}[\phi](t-s)}\mathcal{N}(s)ds\right]\\ &\quad+\chi(t)\left(\psi_x(x,t)v(x,t)+ \widetilde{S}(t)v(0) + \int_0^t \widetilde{S}(t-s)\mathcal{N}(s) ds\right). 
\end{aligned} \end{equation} We note that from the differential equation \eqref{e:pert2}, along with the system of integral equations \eqref{e:mod_slave2}-\eqref{e:v2}, we readily obtain short-time existence and continuity with respect to $t$ of a solution $(v,\psi_t,\psi_x)\in H^K_N$ and $\gamma\in W^{1,\infty}(0,\infty)$ by a standard contraction mapping argument, treating \eqref{e:pert2} as a forced heat equation: see, for example, \cite{Hen}. Associated with this solution, we now aim to obtain $L^2$ estimates on $(v,\gamma_t,\psi_x,\psi_t)$ and some of their derivatives. Noting that the nonlinear residual $\mathcal{N}$ in \eqref{e:pert2} involves only derivatives of the modulation functions $\gamma$ and $\psi$, we may then expect to extract a closed system in $(v,\gamma_t,\psi_x,\psi_t)$, and some of their derivatives, and then recover $\gamma$ and $\psi$ through the slaved system \eqref{e:mod_slave2}. In particular, observe that using \eqref{e:v2} we see that control of $v$ in, say, $L^2_N$ requires (in part) control of $v$ in $H^2_N$. This loss of derivatives is compensated by the following result, established by energy estimates in \cite{JNRZ_13_1,JZ_11_1}, which uses the dissipative nature of the governing evolution equation to control higher derivatives of $v$ by lower ones, enabling us to close our nonlinear iteration. \begin{proposition}[Nonlinear Damping]\label{P:nonlin_damp} Suppose the nonlinear perturbation $v$ defined in \eqref{e:nlin_residual} satisfies $v(\cdot,0)\in H^K_N$, and suppose that for some $T>0$ the $H^K_N$ norm of $v$ and $\psi_t$, the $H^{K+1}_N$ norm of $\psi_x$, and the $L^\infty$ norms of $\gamma$ and $\gamma_t$ remain bounded by a sufficiently small constant for all $0\leq t\leq T$.
Then there exist positive constants $\theta,C>0$, both independent of $N$ and $T$, such that \[ \|v(t)\|_{H^K_N}^2 \lesssim e^{-\theta t}\|v(0)\|_{H^K_N}^2 + \int_0^t e^{-\theta(t-s)}\left(\left\|v(s)\right\|_{L^2_N}^2 + \left\|\psi_x(s)\right\|_{H^{K+1}_N}^2 + \left\|\psi_t(s)\right\|_{ H^{K}_N}^2 + \left|\gamma_t(s)\right|^2\right)ds \] for all $0\leq t\leq T$. \end{proposition} \begin{proof} The proof strategy is by now standard, and can be found, for example, in \cite{JNRZ_13_1,JZ_11_1}. For completeness, here we simply outline the main details. First, one rewrites \eqref{e:pert2} as the forced heat equation \begin{align*} (1-\psi_x)\left(kv_t-k^2v_{xx}\right)&=-k\left(\psi_t+\frac{1}{N}\gamma_t\right)\phi'+k^2\left(\frac{\psi_x}{1-\psi_x}~\phi'\right)_x -\psi_xf(\phi+v)+f(\phi+v)-f(\phi)\\ &\quad+kv_x\left(c-\psi_t-\frac{\gamma_t}{N}\right) +k^2\left[\left(\frac{1}{1-\psi_x}+1\right)\psi_xv_x\right]_x. \end{align*} Multiplying by $\sum_{j=0}^K(-1)^j\frac{\partial_x^{2j}v}{1-\psi_x}$, integrating over $[0,N]$, using integration by parts and rearranging yields a bound of the form\footnote{Below, the symbol $A\lesssim B$ means there exists a constant $C>0$, independent of $N$, such that $A\leq CB$.} \begin{align*} \partial_t\|v\|_{H^K_N}^2+2k\|v\|_{H^{K+1}_N}^2&\lesssim \varepsilon\|v\|_{H^{K+1}_N}^2+\|v\|_{L^2_N}^2+\frac{1}{\varepsilon}\left\|\frac{\psi_t}{1-\psi_x}~\phi'\right\|_{H^{K-1}_N}^2\\ &+\frac{|\gamma_t|^2}{N^2\varepsilon}\left\|\frac{1}{1-\psi_x}~\phi'\right\|_{H^{K-1}_N}^2+\frac{1}{\varepsilon}\left\|\frac{1}{1-\psi_x}\partial_x\left(\frac{\psi_x}{1-\psi_x}~\phi'\right)\right\|_{H^{K-1}_N}^2\\ &+\frac{1}{\varepsilon}\left\|\frac{\psi_x}{1-\psi_x}f(\phi+v)\right\|_{H^{K-1}_N}^2+\frac{1}{\varepsilon}\left\|\frac{1}{1-\psi_x}\left(f(\phi+v)-f(\phi)\right)\right\|_{H^{K-1}_N}^2\\ &+\frac{1}{\varepsilon}\left\|\frac{v_x}{1-\psi_x}\right\|_{H^{K-1}_N}^2+\frac{1}{\varepsilon}\left\|\frac{\psi_tv_x}{1-\psi_x}\right\|_{H^{K-1}_N}^2
+\frac{|\gamma_t|^2}{N^2\varepsilon}\left\|\frac{v_x}{1-\psi_x}\right\|_{H^{K-1}_N}^2\\ &+\frac{1}{\varepsilon}\left\|\frac{1}{1-\psi_x}\partial_x\left[\left(\frac{1}{1-\psi_x}+1\right)\psi_xv_x\right]\right\|_{H^{K-1}_N}^2, \end{align*} where here $\varepsilon>0$ is an arbitrary constant\footnote{Introduced by the application of the Cauchy inequality with $\varepsilon$ throughout.} independent of $N$. Using the Sobolev interpolation \[ \|g\|_{H^K_N}^2\leq \widetilde{C}^{-1}\|\partial_x^{K+1}g\|_{L^2_N}^2+\widetilde{C}\|g\|_{L^2_N}^2, \] valid for some constant $\widetilde{C}>0$ independent of $N$, now gives \[ \frac{d}{dt}\|v\|_{H^K_N}^2(t)\leq -\theta\|v(t)\|_{H^{K}_N}^2+C\left(\|v(t)\|_{L^2_N}^2+\|\psi_x\|_{H^{K+1}_N}^2+\|\psi_t\|_{H^K_N}^2+|\gamma_t(t)|^2\right). \] The proof is now complete by an application of Gronwall's inequality. \end{proof} \subsection{Nonlinear Iteration}\label{S:nlin_iteration} To complete the proof of Theorem \ref{T:main}, associated to the solution $(v,\gamma_t,\psi_x,\psi_t)$ of \eqref{e:mod_slave2}-\eqref{e:v2} we define, so long as it is finite, the function \[ \zeta(t) := \sup_{0\leq s\leq t}\left( \left\|v(s)\right\|_{H^K_N}^2 + \left\|\psi_x(s)\right\|_{H^{K+1}_N}^2 + \left\|\psi_t(s)\right\|_{ H^{K}_N}^2 + \left|\gamma_t(s)\right|^2\right)^{1/2}(1+s)^{3/4}. \] Combining the linear estimates in Proposition \ref{P:lin_est} with the damping estimate in Proposition \ref{P:nonlin_damp}, we now establish a key inequality for $\zeta$ which will yield global existence and stability of our solutions. \begin{proposition}\label{P:iteration} Under the assumptions of Theorem \ref{T:main}, there exist positive constants $C,\varepsilon>0$, both independent of $N$, such that if $v(\cdot,0)$ is such that \[ E_0:=\|v(\cdot,0)\|_{L^1_N\cap H^K_N}\leq \varepsilon\quad{\rm and}\quad\zeta(T)\leq \varepsilon \] for some $T>0$, then we have \[ \zeta(t) \leq C\left(E_0 + \zeta^2(t)\right) \] valid for all $0\leq t\leq T$.
\end{proposition} \begin{proof} Recalling Lemma \ref{L:nlin_pert} we readily see that there exists a constant $C>0$, independent of $N$, such that \[ \|\mathcal{Q}(t)\|_{L^1_N\cap H^1_N} \leq C \left(1+\|\psi_x(t)\|_{H^1_N}\right)\|v(t)\|_{H^1_N}^2 \] and \[ \|\mathcal{R}(t)\|_{L^1_N\cap H^1_N}\leq C \left(\|(v, v_x, \psi_x, \psi_{xx}, \psi_t)(t)\|_{H^1_N}^2 + |\gamma_t(t)|^2\right) \] so that, using the linear estimates in Proposition \ref{P:lin_est}, we have for so long as $\zeta(t)$ remains small that \[ \|\mathcal{Q}(t)\|_{L^1_N\cap H^1_N},~~ \|\mathcal{R}(t)\|_{L^1_N\cap H^1_N}\leq C \zeta^2(t)(1+t)^{-3/2} \] for some constant $C>0$ which is independent of $N$. Since $k\mathcal{N} = \mathcal{Q} + k\mathcal{R}_x$, it follows that there exists a constant $C>0$ independent of $N$ such that \begin{equation}\label{e:Nbd} \|\mathcal{N}(t)\|_{L^1_N\cap H^1_N} \leq C \left(\|(v, v_x, v_{xx}, \psi_x, \psi_{xx}, \psi_{xxx}, \psi_t, \psi_{tx})(t)\|_{H^1_N}^2 + |\gamma_t(t)|^2\right) \leq C \zeta^2(t)(1+t)^{-3/2} \end{equation} for so long as $\zeta(t)$ remains small. Applying the bounds in Proposition \ref{P:lin_est} to the implicit equation \eqref{e:v2}, it immediately follows that \begin{align*} \left\|v(t)\right\|_{L^2_N}&\leq \left\|v(\cdot,t)\psi_x(t)\right\|_{L^2_N}+CE_0(1+t)^{-3/4} +C\int_0^t(1+t-s)^{-3/4}\left\|\mathcal{N}(s)\right\|_{L^1_N\cap L^2_N}ds\\ &\leq \zeta(t)^2(1+t)^{-3/2}+CE_0(1+t)^{-3/4}+C\zeta(t)^2\int_0^t(1+t-s)^{-3/4}(1+s)^{-3/2}ds \\ &\leq C\left(E_0+\zeta(t)^2\right)(1+t)^{-3/4} \end{align*} for some constant $C>0$ independent of $N$. In particular, observe the loss of derivatives in the above estimate: control of the $L^2_N$ norm of $v(t)$ requires control of the $H^K_N$ norm of $v(t)$. This loss of derivatives may be compensated by the nonlinear damping estimate in Proposition \ref{P:nonlin_damp}, assuming we can obtain appropriate estimates on the modulation functions and their derivatives.
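In the estimate above (and repeatedly below) we used the elementary convolution bound
\[
\int_0^t (1+t-s)^{-3/4}(1+s)^{-3/2}\,ds \lesssim (1+t)^{-3/4},
\]
which follows by splitting the integral at $s=t/2$: on $[0,t/2]$ one has $(1+t-s)^{-3/4}\lesssim (1+t)^{-3/4}$ while $\int_0^\infty(1+s)^{-3/2}ds<\infty$, and on $[t/2,t]$ one has $(1+s)^{-3/2}\lesssim(1+t)^{-3/2}$ while $\int_{t/2}^t(1+t-s)^{-3/4}ds\lesssim(1+t)^{1/4}$, so the second contribution is of size $(1+t)^{-5/4}\lesssim(1+t)^{-3/4}$.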
To this end, we observe that by using \eqref{e:mod_slave2} for $0\leq \ell\leq K+1$ we have that \[ \d_x^\ell\psi_x(x,t) = \chi(t)\left(\d_x^{\ell+1}s_{p,N}(t)v(0) + \int_0^t \d_x^{\ell+1}s_{p,N}(t-s)\mathcal{N}(s)ds\right), \] and for $0\leq \ell \leq K$ \begin{align*} \d_x^\ell\psi_t(x,t) &= \chi(t)\left(\d_x^{\ell}\d_t [s_{p,N}](t)v(0) + \d_x^\ell s_{p,N}(0)\mathcal{N}(t) + \int_0^t \d_x^{\ell}\d_t[s_{p,N}](t-s)\mathcal{N}(s)ds\right)\\ &\quad +\chi'(t)\left(\d_x^\ell s_{p,N}(t)v(0)+\int_0^t\d_x^{\ell}s_{p,N}(t-s)\mathcal{N}(s)ds\right), \end{align*} and hence that \[ \left\|\psi_x\right\|_{H^{K+1}_N}, \left\|\psi_t\right\|_{H^{K}_N}\leq C\left(E_0+\zeta(t)^2\right)(1+t)^{-3/4}. \] Similarly, using \eqref{e:mod_slave2}(i) we find\footnote{Note here we use an $L^\infty$-$L^1$ bound to control the inner product. This is opposed to using Cauchy-Schwarz, which would contribute the growing factor $\|\widetilde{\Phi}_0\|_{L^2_N}=\mathcal{O}(N)$.} \[ |\gamma_t(t)| =\left|\LA \widetilde{\Phi}_0, \mathcal{N}(t)\RA_{L^2_N}\right|\leq C\|\mathcal{N}(t)\|_{L^1_N}\leq C\left(E_0+ \zeta^2(t)\right)(1+t)^{-3/2}. \] Using the damping result in Proposition \ref{P:nonlin_damp}, we conclude that \begin{equation}\label{e:vbd} \begin{aligned} \|v(t)\|_{H^K_N}^2 &\leq C E_0^2 e^{-\theta t} + C\left(E_0 + \zeta^2(t)\right)^2\int_0^t e^{-\theta(t-s)} (1+s)^{-3/2}ds\\ &\leq C E_0^2 e^{-\theta t} + C\left(E_0 + \zeta^2(t)\right)^2 (1+t)^{-3/2}\\ &\leq C\left(E_0 + \zeta^2(t)\right)^2 (1+t)^{-3/2}. \end{aligned} \end{equation} Since $\zeta(t)$ is a non-decreasing function, it follows that for a given $t\in(0,T)$ we have \[ \left( \left\|v(s)\right\|_{H^K_N}^2 + \left\|\psi_x(s)\right\|_{H^{K+1}_N}^2 + \left\|\psi_t(s)\right\|_{ H^{K}_N}^2 + \left|\gamma_t(s)\right|^2\right)^{1/2}(1+s)^{3/4} \leq C\left(E_0 + \zeta^2(t)\right) \] valid for all $s\in(0,t)$. Taking the supremum over $s\in(0,t)$ completes the proof.
\end{proof} The proof of Theorem \ref{T:main} now follows by continuous induction. Indeed, since $\zeta(t)$ is continuous so long as it remains small, Proposition \ref{P:iteration} implies that if $E_0<\frac{1}{4C^2}$ then $\zeta(t)\leq 2CE_0$ for all $t\geq 0$. Noting that $C>0$ is independent of $N$, this establishes the stability estimates \eqref{e:result1} from Theorem \ref{T:main} by taking \[ \widetilde{\psi}(x,t):=\frac{1}{N}\gamma(t)+\psi(x,t). \] Further, the stability estimate \eqref{e:result3} in Corollary \ref{C:main} follows by \eqref{e:vbd} and the triangle inequality since \begin{align*} \left\|u\left(\cdot-\frac{1}{N}\gamma(t),t\right)-\phi\right\|_{L^2_N}&\leq \|u_x\|_{L^\infty}\|\psi(\cdot,t)\|_{L^2_N}+CE_0(1+t)^{-3/4}\\ &\leq CE_0\left(1+t\right)^{-1/4}, \end{align*} as claimed. Further, note that since for $0<t<s$ we have \[ |\gamma(t)-\gamma(s)|\leq\int_t^s|\gamma_t(z)|dz\leq CE_0(1+t)^{-1/2} \] it follows that $\gamma(t)$ converges to some\footnote{Note since the modulation function $\gamma$ depends on $N$, so does the limiting phase shift $\gamma_\infty$.} $\gamma_\infty\in{\mathbb{R}}$ as $t\to\infty$ with rate \[ |\gamma(t)-\gamma_\infty|\leq\int_t^\infty|\gamma_t(z)|dz\leq CE_0(1+t)^{-1/2}, \] which, by the triangle inequality, establishes \eqref{e:result2}, thus completing the proof of Theorem \ref{T:main} as well as that of Corollary \ref{C:main}. In fact, notice that from \eqref{e:Nbd} we have \[ \left|\int_0^t\left<\widetilde{\Phi}_0,\mathcal{N}(s)\right>_{L^2_N}ds\right| \leq C\left\|\widetilde{\Phi}_0\right\|_{L^\infty(\mathbb{R})}\zeta^2(t)\int_0^t (1+s)^{-3/2}ds, \] which, since the above work shows that $\zeta(t)\leq C\varepsilon$ for some constant $C>0$, implies from \eqref{e:mod_slave2} that \[ \gamma_\infty = \left<\widetilde{\Phi}_0,v(0)\right>_{L^2_N} + \mathcal{O}(\varepsilon^2).
\] That is, the asymptotic phase shift in Theorem \ref{T:main} is $\mathcal{O}(\varepsilon^2)$ close to that suggested by the linear theory in Section \ref{S:lin_stab}. Finally, we combine Corollary \ref{C:main} with Proposition \ref{P:sub_stab} to establish Corollary \ref{C:min_thm}. To this end, let $\varepsilon>0$ and $C>0$ be as in Corollary \ref{C:main}. Fix $N\in{\mathbb{N}}$ and $\delta\in(0,\delta_N)$, with $\delta_N$ as in \eqref{spec_gap}, and let $\varepsilon_\delta>0$ be as in Proposition \ref{P:sub_stab}. If $u_0\in L^1_{\rm per}(0,N)\cap H^K_{\rm per}(0,N)$ with $E_0<\varepsilon$, then Corollary \ref{C:main} implies that \[ \left\|u\left(\cdot,t\right)-\phi\left(\cdot+\frac{1}{N}\gamma_\infty\right)\right\|_{H^1_{\rm per}(0,N)}\leq CE_0 (1+t)^{-1/4} \] for all $t>0$. In particular, there exists a time $T_\delta>0$ such that \[ \left\|u\left(\cdot,t\right)-\phi\left(\cdot+\frac{1}{N}\gamma_\infty\right)\right\|_{H^1_{\rm per}(0,N)}<\varepsilon_\delta \] for all $t\geq T_\delta$. By the translational invariance of \eqref{e:RDE_trav} it is clear that $\phi(\cdot+\frac{1}{N}\gamma_\infty)$ is a diffusively spectrally stable $1$-periodic solution of \eqref{e:rd}, and hence Proposition \ref{P:sub_stab} implies\footnote{Here, we are applying Proposition \ref{P:sub_stab} with initial data $u(\cdot,T_\delta)$.} there exists a constant $C_\delta>0$ such that \[ \left\|u\left(\cdot,t\right)-\phi\left(\cdot+\frac{1}{N}\gamma_\infty\right)\right\|_{H^1_{\rm per}(0,N)}\leq C_\delta\varepsilon_\delta e^{-\delta t} \] for all $t>T_\delta$. Taking $M_\delta=\frac{C_\delta\varepsilon_\delta}{E_0}$ completes the proof.
\section{Introduction} Recently a Landau-Ginzburg model of two-dimensional strings at self-dual radius (i.e., $c = 1$ topological matter coupled to two-dimensional gravity) has been proposed and studied by several groups \cite{bib:Ghoshal-Mukhi,bib:HOP,bib:Danielsson}. This model is in a sense a natural extrapolation of the topological $A_{k+1}$ model to $k = -3$, and seems to inherit the remarkable properties of the $A_{k+1}$ models such as: (i) an underlying structure of Lax equation \cite{bib:DVV}, (ii) a period integral representation of correlation functions \cite{bib:period-integral}, (iii) an algebraic structure of gravitational primaries and descendents \cite{bib:contact-algebra}, etc. Although the status of the so-called special (discrete) states \cite{bib:special-state} still remains obscure, the dynamics of massless tachyons with discrete momenta is shown to be correctly described in this new framework. The $c = 1$ model, however, differs from the $A_{k+1}$ (and some other $c < 1$) models in several essential aspects. This seems to be ultimately due to the difference of underlying integrable hierarchies. The $A_{k+1}$ models are special solutions of the dispersionless KP (or generalized KdV) hierarchy \cite{bib:Krichever,bib:Dubrovin,bib:TT-dKP}. Hanany et al. \cite{bib:HOP} suggested a similar link between the $c = 1$ model and the dispersionless Toda hierarchy \cite{bib:TT-dToda}. In this paper, we confirm the suggestion of Hanany et al. within the machinery of the dispersionless Toda hierarchy, and search for implications therefrom. Our basic observation is that the tachyon dynamics at self-dual radius is perfectly encoded into the structure of a special solution of this integrable hierarchy. In Section \ref{sec:notions}, we recall fundamental notions concerning the dispersionless Toda hierarchy, and in Sections \ref{sec:lambda}-\ref{sec:symmetries}, reformulate several results of our previous work \cite{bib:TT-dToda} in a more convenient form.
The aforementioned special solution is constructed in Section \ref{sec:RHproblem} by solving a Riemann-Hilbert problem. A set of $w_{1+\infty}$-constraints (recursion relations) characterizing tachyon correlation functions is derived from the Riemann-Hilbert problem in Section \ref{sec:constraints}. Since, as remarked by Hanany et al., those $w_{1+\infty}$-constraints determine tachyon correlation functions uniquely, we can conclude that our solution indeed describes the tachyon dynamics. In Section \ref{sec:wedge} we show the existence of a wedge algebra behind the Riemann-Hilbert problem, and propose a speculative interpretation of this algebra as generators of ``extra'' states and fields in the $c = 1$ model. Section \ref{sec:conclusion} is devoted to concluding remarks. \section{Fundamental notions in dispersionless Toda hierarchy \label{sec:notions}} The Lax formalism of the dispersionless Toda hierarchy is based on the two-dimensional Poisson bracket \begin{equation} \{ A(p,s), B(p,s) \} = p \frac{\partial A}{\partial p} \frac{\partial B}{\partial s} - \frac{\partial A}{\partial s} p \frac{\partial B}{\partial p} \end{equation} rather than usual commutators. Fundamental quantities (counterparts of Lax operators) are two Laurent series ${\cal L}$ and $\bar{\cL}$ of the form \begin{eqnarray} {\cal L} &=& p + \sum_{n=0}^\infty u_{n+1}(t,\bar{t},s) p^{-n}, \\ \bar{\cL}^{-1} &=& \bar{u}_0 p^{-1} + \sum_{n=0}^\infty \bar{u}_{n+1}(t,\bar{t},s) p^n, \label{eq:L} \end{eqnarray} where the coefficients depend on time variables of flows $t = (t_1,t_2,\ldots)$ and $\bar{t} = (\bar{t}_1,\bar{t}_2,\ldots)$ as well as the spatial coordinate $s$. (We have slightly changed notations in the previous work \cite{bib:TT-dToda}.)
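For orientation, we note that this bracket is the quasi-classical ($\hbar\to0$) counterpart of the commutator of difference operators, with $p$ playing the role of the shift operator $e^{\hbar\partial_s}$. For instance,
\begin{equation*}
\{ p, s \} = p \frac{\partial p}{\partial p} \frac{\partial s}{\partial s} - \frac{\partial p}{\partial s}\, p \frac{\partial s}{\partial p} = p,
\end{equation*}
mirroring the operator identity $[e^{\hbar\partial_s}, s] = \hbar\, e^{\hbar\partial_s}$ after division by $\hbar$.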
Lax equations of these ``Lax functions'' are written \begin{eqnarray} \dfrac{\partial {\cal L}}{\partial t_n} = \{ {\cal B}_n, {\cal L} \}, &\quad& \dfrac{\partial {\cal L}}{\partial \bar{t}_n} = \{ \bar{\cB}_n, {\cal L} \}, \nonumber \\ \dfrac{\partial \bar{\cL}}{\partial t_n} = \{ {\cal B}_n, \bar{\cL} \}, &\quad& \dfrac{\partial \bar{\cL}}{\partial \bar{t}_n} = \{ \bar{\cB}_n, \bar{\cL} \}, \label{eq:LaxL} \end{eqnarray} where ${\cal B}_n$ and $\bar{\cB}_n$ are given by \begin{eqnarray} && {\cal B}_n = ( {\cal L}^n )_{\ge 0}, \quad \bar{\cB}_n = ( \bar{\cL}^{-n} )_{\le -1}, \\ && (\quad)_{\ge 0}:\ \mbox{projection onto}\ p^0,p^1,\ldots, \nonumber\\ && (\quad)_{\le -1}:\ \mbox{projection onto}\ p^{-1},p^{-2},\ldots. \nonumber \end{eqnarray} Furthermore, given such a pair ${\cal L}$ and $\bar{\cL}$, one can find another pair of Laurent series \begin{eqnarray} {\cal M} &=& \sum_{n=1}^\infty n t_n {\cal L}^n + s + \sum_{n=1}^\infty v_n(t,\bar{t},s){\cal L}^{-n}, \nonumber\\ \bar{\cM} &=& - \sum_{n=1}^\infty n \bar{t}_n \bar{\cL}^{-n} + s + \sum_{n=1}^\infty \bar{v}_n(t,\bar{t},s) \bar{\cL}^n \label{eq:M} \end{eqnarray} that satisfy the Lax equations \begin{eqnarray} \dfrac{\partial {\cal M}}{\partial t_n} = \{ {\cal B}_n, {\cal M} \}, && \dfrac{\partial {\cal M}}{\partial \bar{t}_n} = \{ \bar{\cB}_n, {\cal M} \}, \nonumber \\ \dfrac{\partial \bar{\cM}}{\partial t_n} = \{ {\cal B}_n, \bar{\cM} \}, && \dfrac{\partial \bar{\cM}}{\partial \bar{t}_n} = \{ \bar{\cB}_n, \bar{\cM} \} \label{eq:LaxM} \end{eqnarray} and the canonical Poisson relations \begin{equation} \{ {\cal L}, {\cal M} \} = {\cal L}, \quad \{ \bar{\cL},\bar{\cM} \} = \bar{\cL}. \label{eq:canonical} \end{equation} It is rather these ``extra'' Lax functions that play a central role in our approach to two-dimensional strings. Before going forward, a few comments on formal residue calculus are in order.
We consider residues as being defined for 1-forms as: \begin{equation} \;\mathop{\mbox{\rm res}} \sum a_n z^n dz = a_{-1}. \end{equation} Residues of more general 1-forms are to be evaluated by the standard rule of exterior differential calculus: \begin{equation} \;\mathop{\mbox{\rm res}} f(z)dg(z) = \;\mathop{\mbox{\rm res}} f(z) \frac{dg(z)}{dz} dz. \end{equation} Residues thus defined are invariant under coordinate transformations $z \to w = h(z)$ sending $\infty\to\infty$ or $0\to0$. We can now define four fundamental potentials $\phi$, $F$, ${\cal S}$ and $\bar{\cS}$ as follows. The first potential $\phi = \phi(t,\bar{t},s)$ is defined by the equation \begin{equation} d\phi = \sum_{n=1}^\infty \;\mathop{\mbox{\rm res}}( {\cal L}^n d\log p) dt_n -\sum_{n=1}^\infty \;\mathop{\mbox{\rm res}}( \bar{\cL}^{-n} d\log p) d\bar{t}_n + \log \bar{u}_0 ds, \label{eq:dphi} \end{equation} where ``$d$'' means total differentiation in $(t,\bar{t},s)$, and of course $d\log p = dp/p$. The right hand side is a closed form so long as ${\cal L}$ and $\bar{\cL}$ are subject to Lax equations (\ref{eq:LaxL}). This potential $\phi$ satisfies the second-order equation \begin{equation} \dfrac{\partial^2 \phi}{\partial t_1 \partial \bar{t}_1} + \frac{\partial}{\partial s} \exp\left( \frac{\partial\phi}{\partial s} \right) = 0. \end{equation} This is the well-known dispersionless (or long-wave) limit of the two-dimensional Toda field equation. The second potential $F = F(t,\bar{t},s)$ is defined by the equation \begin{equation} dF = \sum_{n=1}^\infty v_n dt_n - \sum_{n=1}^\infty \bar{v}_n d\bar{t}_n + \phi ds. \label{eq:dF} \end{equation} Again, the right hand side is a closed form so long as ${\cal L}$, ${\cal M}$, $\bar{\cL}$ and $\bar{\cM}$ are subject to Lax equations (\ref{eq:LaxL},\ref{eq:LaxM},\ref{eq:canonical}).
This potential $F$ plays the role of a ``generating function'' --- all other quantities $u_n$, $\bar{u}_n$, $v_n$, $\bar{v}_n$ and $\phi$ can be reproduced from $F$ by differentiation with respect to $t$, $\bar{t}$ and $s$. This is obviously reminiscent of the role of partition functions with external sources in usual field theories. In our earlier work \cite[1991]{bib:TT-dToda}, $F$ was defined as the logarithm of the ``tau function'' of the dispersionless Toda hierarchy, but it was later recognized that $F$ is also connected with the tau function $\tau(\hbar,t,\bar{t},s)$ of the full Toda hierarchy by $\hbar$-expansion \cite[1993]{bib:TT-dToda}: \begin{equation} \log\tau(\hbar,t,\bar{t},s) = \hbar^{-2} F(t,\bar{t},s) + O(\hbar^{-1}). \end{equation} The last two potentials ${\cal S} = {\cal S}(t,\bar{t},s,p)$ and $\bar{\cS} = \bar{\cS}(t,\bar{t},s,p)$ can be defined rather directly as: \begin{eqnarray} {\cal S} &=& \sum_{n=1}^\infty t_n {\cal L}^n + s \log{\cal L} - \sum_{n=1}^\infty \frac{1}{n} \dfrac{\partial F}{\partial t_n} {\cal L}^{-n}, \nonumber \\ \bar{\cS} &=& \sum_{n=1}^\infty \bar{t}_n \bar{\cL}^{-n} + s\log\bar{\cL} + \phi - \sum_{n=1}^\infty \frac{1}{n} \dfrac{\partial F}{\partial \bar{t}_n} \bar{\cL}^n. \label{eq:S} \end{eqnarray} We call ${\cal S}$ and $\bar{\cS}$ ``potentials'' because they can also be characterized as: \begin{eqnarray} d{\cal S} &=& {\cal M} d\log{\cal L} + \log p ds + \sum_{n=1}^\infty {\cal B}_n dt_n + \sum_{n=1}^\infty \bar{\cB}_n d\bar{t}_n, \nonumber \\ d\bar{\cS} &=& \bar{\cM} d\log\bar{\cL} + \log p ds + \sum_{n=1}^\infty {\cal B}_n dt_n + \sum_{n=1}^\infty \bar{\cB}_n d\bar{t}_n, \label{eq:dS} \end{eqnarray} where ``$d$'' now means total differentiation in $(t,\bar{t},s)$ and $p$.
Immediate consequences of (\ref{eq:dS}) are the following expressions for ${\cal B}_n$ and $\bar{\cB}_n$: \begin{eqnarray} {\cal B}_m &=& {\cal L}^m - \sum_{n=1}^\infty \frac{1}{n} \dfrac{\partial^2 F}{\partial t_m \partial t_n} {\cal L}^{-n} \nonumber \\ &=& \dfrac{\partial^2 F}{\partial t_m \partial s} - \sum_{n=1}^\infty \frac{1}{n} \dfrac{\partial^2 F}{\partial t_m \partial \bar{t}_n} \bar{\cL}^n, \nonumber \\ \bar{\cB}_m &=& - \sum_{n=1}^\infty \frac{1}{n} \dfrac{\partial^2 F}{\partial \bar{t}_m \partial t_n} {\cal L}^{-n} \nonumber \\ &=& \bar{\cL}^{-m} + \dfrac{\partial^2 F}{\partial \bar{t}_m \partial s} - \sum_{n=1}^\infty \frac{1}{n} \dfrac{\partial^2 F}{\partial \bar{t}_m \partial \bar{t}_n} \bar{\cL}^n. \label{eq:BbyL} \end{eqnarray} \section{Spectral parameters $\lambda$ and $\bar{\lambda}$ \label{sec:lambda}} We now introduce two new variables $\lambda$ and $\bar{\lambda}$, and reformulate the setting of the previous section by replacing \begin{equation} {\cal L} \to \lambda, \quad \bar{\cL} \to \bar{\lambda}. \label{eq:L->lambda} \end{equation} By this substitution, ${\cal S}$ and $\bar{\cS}$ are replaced by \begin{eqnarray} {\cal S}({\cal L}\to\lambda) &=& \sum_{n=1}^\infty t_n \lambda^n + s \log\lambda - \sum_{n=1}^\infty \frac{1}{n} \dfrac{\partial F}{\partial t_n} \lambda^{-n}, \nonumber \\ \bar{\cS}(\bar{\cL}\to\bar{\lambda}) &=& \sum_{n=1}^\infty \bar{t}_n \bar{\lambda}^{-n} + s\log\bar{\lambda} + \phi - \sum_{n=1}^\infty \frac{1}{n} \dfrac{\partial F}{\partial \bar{t}_n} \bar{\lambda}^n.
\label{eq:S(lambda)} \end{eqnarray} In the language of the full Toda hierarchy, these quantities are just the leading terms in the $\hbar$-expansion of the logarithms of the two Baker-Akhiezer functions $\Psi(\hbar,t,\bar{t},s,\lambda)$ and $\bar{\Psi}(\hbar,t,\bar{t},s,\bar{\lambda})$ \cite{bib:TT-dToda}: \begin{eqnarray} \Psi(\hbar,t,\bar{t},s,\lambda) &=& \exp[ \hbar^{-1} {\cal S}({\cal L}\to\lambda) + O(\hbar^0)], \nonumber \\ \bar{\Psi}(\hbar,t,\bar{t},s,\bar{\lambda}) &=& \exp[ \hbar^{-1} \bar{\cS}(\bar{\cL}\to\bar{\lambda}) + O(\hbar^0)]. \end{eqnarray} The new variables $\lambda$ and $\bar{\lambda}$ are thus nothing but the spectral parameters of the full Toda hierarchy. In the usual setting, actually, one does not have to distinguish between $\lambda$ and $\bar{\lambda}$; in the present setting, they correspond to the two Lax functions ${\cal L}$ and $\bar{\cL}$. Furthermore, in our interpretation of the Landau-Ginzburg formulation, they do arise in a different form as we shall see later. These are the main reasons that we use the two different spectral parameters. Similarly, ${\cal M}$ and $\bar{\cM}$ are replaced by \begin{eqnarray} {\cal M}({\cal L}\to\lambda) &=& \sum_{n=1}^\infty n t_n \lambda^n + s + \sum_{n=1}^\infty \dfrac{\partial F}{\partial t_n} \lambda^{-n}, \nonumber \\ \bar{\cM}(\bar{\cL}\to\bar{\lambda}) &=& - \sum_{n=1}^\infty n \bar{t}_n \bar{\lambda}^{-n} + s - \sum_{n=1}^\infty \dfrac{\partial F}{\partial \bar{t}_n} \bar{\lambda}^n, \label{eq:M(lambda)} \end{eqnarray} where we have rewritten $v_n$ and $\bar{v}_n$ into derivatives of $F$. By comparing (\ref{eq:M(lambda)}) with (\ref{eq:S(lambda)}), one can readily find that \begin{eqnarray} {\cal M}({\cal L}\to\lambda) &=& \lambda\dfrac{\partial}{\partial\lambda} {\cal S}({\cal L}\to\lambda), \nonumber \\ \bar{\cM}(\bar{\cL}\to\bar{\lambda}) &=& \bar{\lambda}\dfrac{\partial}{\partial\bar{\lambda}} \bar{\cS}(\bar{\cL}\to\bar{\lambda}).
\label{eq:M(lambda)byS(lambda)} \end{eqnarray} These equations can be derived from (\ref{eq:dS}), too. Lastly, applying the same substitution rule to (\ref{eq:BbyL}), we can define four quantities ${\cal B}_n({\cal L}\to\lambda)$, ${\cal B}_n(\bar{\cL}\to\bar{\lambda})$, $\bar{\cB}_n({\cal L}\to\lambda)$, $\bar{\cB}_n(\bar{\cL}\to\bar{\lambda})$. Eqs. (\ref{eq:dS}) imply that these quantities, too, can be written as derivatives of ${\cal S}({\cal L}\to\lambda)$ and $\bar{\cS}(\bar{\cL}\to\bar{\lambda})$: \begin{eqnarray} {\cal B}_n({\cal L}\to\lambda) = \dfrac{\partial}{\partial t_n} {\cal S}(\lambda), && \bar{\cB}_n({\cal L}\to\lambda) = \dfrac{\partial}{\partial \bar{t}_n} {\cal S}(\lambda), \nonumber\\ {\cal B}_n(\bar{\cL}\to\bar{\lambda}) = \dfrac{\partial}{\partial t_n} \bar{\cS}(\bar{\lambda}), && \bar{\cB}_n(\bar{\cL}\to\bar{\lambda}) = \dfrac{\partial}{\partial \bar{t}_n} \bar{\cS}(\bar{\lambda}). \label{eq:B(lambda)byS(lambda)} \end{eqnarray} An immediate consequence of (\ref{eq:M(lambda)byS(lambda)}) and (\ref{eq:B(lambda)byS(lambda)}) is the following set of identities: \begin{eqnarray} \dfrac{\partial}{\partial\lambda} {\cal B}_n({\cal L}\to\lambda) &=& \dfrac{\partial}{\partial t_n} {\cal M}({\cal L}\to\lambda) \lambda^{-1}, \nonumber \\ \dfrac{\partial}{\partial\lambda} \bar{\cB}_n({\cal L}\to\lambda) &=& \dfrac{\partial}{\partial \bar{t}_n} {\cal M}({\cal L}\to\lambda) \lambda^{-1}, \nonumber \\ \dfrac{\partial}{\partial (\bar{\lambda}^{-1})} {\cal B}_n(\bar{\cL}\to\bar{\lambda}) &=& - \dfrac{\partial}{\partial t_n} \bar{\cM}(\bar{\cL}\to\bar{\lambda}) \bar{\lambda}, \nonumber \\ \dfrac{\partial}{\partial (\bar{\lambda}^{-1})} \bar{\cB}_n(\bar{\cL}\to\bar{\lambda}) &=& - \dfrac{\partial}{\partial \bar{t}_n} \bar{\cM}(\bar{\cL}\to\bar{\lambda}) \bar{\lambda}. \label{eq:dB(lambda)dM(lambda)} \end{eqnarray} We shall show later that these quantities are fundamental ingredients of the Landau-Ginzburg formulation of two-dimensional strings.
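Note that the identities (\ref{eq:dB(lambda)dM(lambda)}) amount to the equality of mixed partial derivatives of the potentials. For the first one, combining (\ref{eq:B(lambda)byS(lambda)}) with (\ref{eq:M(lambda)byS(lambda)}) gives
\[
\dfrac{\partial}{\partial\lambda} {\cal B}_n({\cal L}\to\lambda)
= \dfrac{\partial}{\partial\lambda} \dfrac{\partial}{\partial t_n} {\cal S}({\cal L}\to\lambda)
= \dfrac{\partial}{\partial t_n} \dfrac{\partial}{\partial\lambda} {\cal S}({\cal L}\to\lambda)
= \dfrac{\partial}{\partial t_n} {\cal M}({\cal L}\to\lambda)\, \lambda^{-1},
\]
since $\lambda$ and $t_n$ are independent variables; the remaining three follow in the same way, using $\partial/\partial(\bar{\lambda}^{-1}) = -\bar{\lambda}^2\,\partial/\partial\bar{\lambda}$ for the barred identities.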
\section{Symmetries of dispersionless Toda hierarchy \label{sec:symmetries}} Given two functions ${\cal A} = {\cal A}({\cal L},{\cal M})$ and $\bar{\cA} = \bar{\cA}(\bar{\cL},\bar{\cM})$, one can construct an infinitesimal symmetry $\delta_{{\cal A},\bar{\cA}}$ of the dispersionless Toda hierarchy \cite{bib:TT-dToda}. More precisely, ${\cal A}$ and $\bar{\cA}$ are assumed to be ``good'' functions, such as polynomials of $({\cal L},{\cal L}^{-1},{\cal M})$ and $(\bar{\cL},\bar{\cL}^{-1},\bar{\cM})$, respectively, with constant coefficients. We here explain how these symmetries are actually defined, and present several formulas that we shall use crucially in the subsequent sections. Let us consider the ring ${\cal R}$ generated by $t$, $\bar{t}$, $s$, $F$ and all derivatives of $F$. In this setting, $F$ and its derivatives have to be considered abstract ``symbols'' rather than actual functions of $(t,\bar{t},s)$. By a ``derivation'' we mean a linear map $\delta: {\cal R} \to {\cal R}$ satisfying the Leibniz rule $\delta(ab) = \delta(a) b + a \delta(b)$. The maps $\partial/\partial t_n$, $\partial/\partial\bar{t}_n$ and $\partial/\partial s$ can be defined as derivations on ${\cal R}$ in an obvious manner: \begin{eqnarray} && \dfrac{\partial}{\partial t_n} F = v_n, \quad \dfrac{\partial}{\partial \bar{t}_n} F = - \bar{v}_n, \quad \dfrac{\partial}{\partial s} F = \phi, \nonumber\\ && \dfrac{\partial}{\partial t_n} t_m = \delta_{nm}, \quad \ldots \mbox{etc} \ldots. \end{eqnarray} Differential equations satisfied by $F$ and its derivatives (which include the differential equations of $v_n$, $\bar{v}_n$ and $\phi$, too) are thus encoded into these differential-algebraic structures of ${\cal R}$.
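As a toy illustration of this differential-algebraic setup (not the actual ring ${\cal R}$, just a minimal symbolic sketch, assuming sympy is available), one can treat $F$ as an unspecified function of a finite set of generators, so that $\partial F/\partial t_1$ remains an unevaluated symbol playing the role of $v_1$, and check the Leibniz rule for $\partial/\partial t_1$:

```python
# Minimal illustration: F is an abstract (undefined) function, so dF/dt1
# stays an unevaluated Derivative, playing the role of the symbol v_1.
import sympy as sp

t1, tb1, s = sp.symbols('t1 tbar1 s')
F = sp.Function('F')(t1, tb1, s)

def d(a):
    """The derivation d/dt1 acting on elements of the toy ring."""
    return sp.diff(a, t1)

# d(F) is not evaluated further: it is the abstract symbol v_1
assert d(F) == sp.Derivative(F, t1)

# Leibniz rule: d(ab) = d(a) b + a d(b)
a, b = F, t1 * F
assert sp.simplify(d(a * b) - (d(a) * b + a * d(b))) == 0
```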
The symmetry $\delta_{{\cal A},\bar{\cA}}$ is defined to be an additional derivation of ${\cal R}$ with the following properties \cite{bib:TT-dToda}: \begin{itemize} \item The action of $\delta_{{\cal A},\bar{\cA}}$ on $F$ is given by \begin{equation} \delta_{{\cal A},\bar{\cA}} F = - \;\mathop{\mbox{\rm res}}\left( \int_0^{{\cal M}({\cal L}\to\lambda)} {\cal A}(\lambda,\mu)d\mu \right) d\lambda + \;\mathop{\mbox{\rm res}}\left( \int_0^{\bar{\cM}(\bar{\cL}\to\bar{\lambda})} \bar{\cA}(\bar{\lambda},\bar{\mu})d\bar{\mu} \right) d\bar{\lambda}. \label{eq:deltaF} \end{equation} \item $\delta_{{\cal A},\bar{\cA}}$ acts trivially on $t$, $\bar{t}$ and $s$ as: \begin{equation} \delta_{{\cal A},\bar{\cA}} t_n = 0, \quad \delta_{{\cal A},\bar{\cA}} \bar{t}_n = 0, \quad \delta_{{\cal A},\bar{\cA}} s = 0. \end{equation} \item $\delta_{{\cal A},\bar{\cA}}$ commutes with $\partial/\partial t_n$, $\partial/\partial \bar{t}_n$ and $\partial/\partial s$: \begin{equation} \left[ \delta_{{\cal A},\bar{\cA}}, \frac{\partial}{\partial t_n} \right] = \left[ \delta_{{\cal A},\bar{\cA}}, \frac{\partial}{\partial \bar{t}_n} \right] = \left[ \delta_{{\cal A},\bar{\cA}}, \frac{\partial}{\partial s} \right] = 0. \end{equation} \end{itemize} The last property implies, in particular, that $\delta_{{\cal A},\bar{\cA}}$ commutes with all flows of the dispersionless Toda hierarchy, a condition characterizing a symmetry! 
Furthermore, these symmetries satisfy the following commutation relations \cite{bib:TT-dToda}: \begin{equation} [\delta_{{\cal A},\bar{\cA}}, \delta_{{\cal B},\bar{\cB}} ] = \delta_{\{{\cal A},{\cal B}\},\{\bar{\cA},\bar{\cB}\}} + \;\mathop{\mbox{\rm res}}\left( {\cal A}(\lambda,0)d{\cal B}(\lambda,0) -\bar{\cA}(\bar{\lambda},0)d\bar{\cB}(\bar{\lambda},0) \right) \partial_F, \label{eq:CR} \end{equation} where $\partial_F$ is yet another derivation on ${\cal R}$ defined by \begin{equation} \partial_F F = 1, \quad \partial_F (\mbox{any other generator of}\ {\cal R}) = 0, \end{equation} which accordingly commutes with all other derivations $\partial/\partial t_n$, $\partial/\partial \bar{t}_n$, $\partial/\partial s$ and $\delta_{{\cal A},\bar{\cA}}$. The underlying Lie algebra is thus a central extension of $w_{1+\infty} \oplus w_{1+\infty}$; note that $w_{1+\infty}$ is now realized as the Lie algebra of Poisson brackets. The action of $\delta_{{\cal A},\bar{\cA}}$ on other fundamental quantities such as $v_n$, $\bar{v}_n$ and $\phi$ can be read off from the above construction, because they are all derivatives of $F$. For $v_n$, $\bar{v}_n$ and $\phi$, we have the following formulas (and, actually, the above formula for $F$ was first discovered by ``integrating'' these formulas \cite{bib:TT-dToda}): \begin{eqnarray} \delta_{{\cal A},\bar{\cA}} v_n &=& \;\mathop{\mbox{\rm res}} \left( - {\cal A}({\cal L},{\cal M}) + \bar{\cA}(\bar{\cL},\bar{\cM}) \right) d{\cal B}_n, \nonumber \\ \delta_{{\cal A},\bar{\cA}} \bar{v}_n &=& \;\mathop{\mbox{\rm res}} \left( + {\cal A}({\cal L},{\cal M}) - \bar{\cA}(\bar{\cL},\bar{\cM}) \right) d\bar{\cB}_n, \nonumber \\ \delta_{{\cal A},\bar{\cA}} \phi &=& \;\mathop{\mbox{\rm res}} \left( - {\cal A}({\cal L},{\cal M}) + \bar{\cA}(\bar{\cL},\bar{\cM}) \right) d\log p.
\label{eq:deltav} \end{eqnarray} Furthermore, since ${\cal M}({\cal L}\to\lambda)$ and $\bar{\cM}(\bar{\cL}\to\bar{\lambda})$ are generating functions of $v_n$ and $\bar{v}_n$, one should be able to rewrite the first two of (\ref{eq:deltav}) in terms of these generating functions. This indeed results in the following formulas: \begin{eqnarray} \delta_{{\cal A},\bar{\cA}} {\cal M}({\cal L}\to\lambda) &=& \lambda\frac{\partial}{\partial\lambda} \left[ \left( {\cal A}({\cal L},{\cal M}) - \bar{\cA}(\bar{\cL},\bar{\cM}) \right)_{\le -1}({\cal L} \to \lambda) \right] , \nonumber \\ \delta_{{\cal A},\bar{\cA}} \bar{\cM}(\bar{\cL}\to\bar{\lambda}) &=& \bar{\lambda}\frac{\partial}{\partial\bar{\lambda}} \left[ \left( - {\cal A}({\cal L},{\cal M}) + \bar{\cA}(\bar{\cL},\bar{\cM}) \right)_{\ge 0}(\bar{\cL} \to \bar{\lambda}) \right] , \label{eq:deltaM(lambda)} \end{eqnarray} where $\delta_{{\cal A},\bar{\cA}}$ is understood to act trivially on $\lambda$ and $\bar{\lambda}$ (i.e., $\delta_{{\cal A},\bar{\cA}}\lambda = 0$ and $\delta_{{\cal A},\bar{\cA}}\bar{\lambda} = 0$); inside ``$[\quad]$'' on the right hand side, we first take the projection with respect to powers of $p$, then reexpand the results into powers of ${\cal L}$ and $\bar{\cL}$ instead of $p$, and finally replace them by $\lambda$ and $\bar{\lambda}$. \section{Riemann-Hilbert problem \label{sec:RHproblem}} We are now in a position to apply the general machinery of the preceding sections to two-dimensional string theory. In this section, we solve a Riemann-Hilbert problem to construct a special solution of the dispersionless Toda hierarchy. In the next section, we prove that it indeed describes the tachyon dynamics at self-dual radius by showing that its $F$ potential satisfies $w_{1+\infty}$-constraints of tachyon correlation functions. 
In general, Riemann-Hilbert problems for solving the dispersionless Toda hierarchy can be written as \begin{equation} \bar{\cL} = f({\cal L},{\cal M}), \quad \bar{\cM} = g({\cal L},{\cal M}), \label{eq:generalRH} \end{equation} where ${\cal L}$, ${\cal M}$, $\bar{\cL}$ and $\bar{\cM}$ are required to be Laurent series in $p$ of the form assumed in (\ref{eq:L},\ref{eq:M}); $f = f(\lambda,\mu)$ and $g = g(\lambda,\mu)$ (``Riemann-Hilbert data'') are functions satisfying the area-preserving condition \begin{equation} \lambda \frac{\partial f}{\partial \lambda} \frac{\partial g}{\partial \mu} - \frac{\partial f}{\partial \mu} \lambda \frac{\partial g}{\partial \lambda} = f \end{equation} (which means that the map $(\log\lambda,\mu) \to (\log f, g)$ is area-preserving in the ordinary sense) and some additional conditions on their analyticity. A general theorem \cite{bib:TT-dToda} ensures that if (\ref{eq:generalRH}) has a unique solution, then ${\cal L}$, ${\cal M}$, $\bar{\cL}$ and $\bar{\cM}$ satisfy all relevant equations (\ref{eq:LaxL},\ref{eq:LaxM},\ref{eq:canonical}) of the Lax formalism. Theoretically, one can thus obtain all solutions of the dispersionless Toda hierarchy. Practically, explicit solutions of such a Riemann-Hilbert problem are rarely available. Note that (\ref{eq:generalRH}) is just a compact expression of an infinite number of highly nonlinear relations between the two sets of variables $(u_n,v_n)$ and $(\bar{u}_n,\bar{v}_n)$ (in which $t$, $\bar{t}$ and $s$ enter as parameters); solving these equations looks as difficult as solving the hierarchy directly! Fortunately, the Riemann-Hilbert problem we consider below is relatively easy to handle. The Riemann-Hilbert problem to be considered is the following: \begin{equation} {\cal L} = \bar{\cM}\bar{\cL}, \quad \bar{\cL}^{-1} = {\cal M} {\cal L}^{-1}. \label{eq:RH} \end{equation} Apparently this does not take the form of (\ref{eq:generalRH}), but it can readily be rewritten in that form.
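Concretely, eliminating $\bar{\cM}$ between the two relations gives $\bar{\cL} = {\cal M}^{-1}{\cal L}$ and $\bar{\cM} = {\cal M}$, i.e., Riemann-Hilbert data $f(\lambda,\mu) = \lambda/\mu$, $g(\lambda,\mu) = \mu$. The area-preserving condition for this data can be verified symbolically (a sympy sketch, assuming sympy is available):

```python
# Check that the Riemann-Hilbert data f(lam, mu) = lam/mu, g(lam, mu) = mu
# (obtained by eliminating Mbar from L = Mbar*Lbar, Lbar^{-1} = M*L^{-1})
# satisfies the area-preserving condition
#   lam * df/dlam * dg/dmu - df/dmu * lam * dg/dlam = f.
import sympy as sp

lam, mu = sp.symbols('lam mu', positive=True)
f = lam / mu   # Lbar expressed through (L, M)
g = mu         # Mbar expressed through (L, M)

lhs = lam * sp.diff(f, lam) * sp.diff(g, mu) \
    - sp.diff(f, mu) * lam * sp.diff(g, lam)
assert sp.simplify(lhs - f) == 0
```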
This non-standard (but more symmetric) expression is rather suited for recognizing a wedge algebra structure. The area-preserving condition, too, can be easily checked. This Riemann-Hilbert problem can be solved by almost the same method as used for the $A_{k+1}$ models \cite{bib:TT-dKP}. Actually, the details of the calculations are rather similar to the case of the $D_\ell$ models \cite{bib:T-DLG}; the integrable hierarchy underlying those models, too, has four Lax functions, and its Riemann-Hilbert problem takes the same form as (\ref{eq:generalRH}). Solving (\ref{eq:RH}) consists of several steps. The first step is to split each equation of (\ref{eq:RH}) into two pieces by applying $(\quad)_{\ge 0}$ and $(\quad)_{\le -1}$. This gives the following four equations: \begin{eqnarray} ({\cal L})_{\ge 0} &=& - \sum_{k=2}^\infty k \bar{t}_k (\bar{\cL}^{-k+1})_{\ge 0} - \bar{t}_1 + s \bar{\cL} + \sum_{n=1}^\infty \bar{v}_n \bar{\cL}^{n+1}, \label{eq:1a} \\ ({\cal L})_{\le -1} &=& - \sum_{k=2}^\infty k \bar{t}_k (\bar{\cL}^{-k+1})_{\le -1}, \label{eq:1b} \\ (\bar{\cL}^{-1})_{\ge 0} &=& \sum_{k=2}^\infty k t_k ({\cal L}^{k-1})_{\ge 0} + t_1, \label{eq:2a} \\ (\bar{\cL}^{-1})_{\le -1} &=& \sum_{k=2}^\infty k t_k ({\cal L}^{k-1})_{\le -1} + s {\cal L}^{-1} + \sum_{n=1}^\infty v_n {\cal L}^{-n-1}. \label{eq:2b} \end{eqnarray} The second step is to decompose each equation into an infinite number of equations not including $p$, by taking the residue pairing of both hand sides with suitable 1-forms.
For instance, by taking the residue pairing of both hand sides of (\ref{eq:1a},\ref{eq:1b}) with (i) $\bar{u}_0 p^{-1} d\log p$, (ii) $p^{n-1}d\log p$, (iii) $\bar{\cL}^{-n-1} d\log\bar{\cL}$, respectively, we can obtain the equations \begin{eqnarray} \bar{u}_0 &=& - \sum_{k=2}^\infty k\bar{t}_k \bar{u}_0 \;\mathop{\mbox{\rm res}}[ \bar{\cL}^{-k+1}p^{-1}d\log p ] + s, \label{eq:3'} \\ u_n &=& - n\bar{t}_n \bar{u}_0{}^{n-1} - \sum_{k=n+1}^\infty k\bar{t}_k \;\mathop{\mbox{\rm res}}[ \bar{\cL}^{-k+1} p^{n-1} d\log p ], \label{eq:45} \\ \bar{v}_n &=& \;\mathop{\mbox{\rm res}}[ ({\cal L})_{\ge 0} \bar{\cL}^{-n-1} d\log\bar{\cL} ] \nonumber \\ && + \sum_{k=2}^\infty k\bar{t}_k \;\mathop{\mbox{\rm res}}[ (\bar{\cL}^{-k+1})_{\ge 0} \bar{\cL}^{-n-1}d\log\bar{\cL} ] \label{eq:8} \end{eqnarray} for $n = 1,2,\ldots$. Here trivial equations of the form $0 = 0$ have been omitted. It should be noted that this process is reversible, because the 1-forms (i)-(iii) used in the residue pairing form a complete set. Similarly, from (\ref{eq:2a},\ref{eq:2b}), we obtain another infinite set of equations \begin{eqnarray} \bar{u}_0 &=& \sum_{k=2}^\infty kt_k \;\mathop{\mbox{\rm res}}[{\cal L}^{k-1} d\log p] + s \label{eq:3} \\ \bar{u}_n &=& nt_n + \sum_{k=n+1}^\infty kt_k \;\mathop{\mbox{\rm res}}[ {\cal L}^{k-1} p^{-n+1} d\log p ], \label{eq:67} \\ v_n &=& \;\mathop{\mbox{\rm res}}[ (\bar{\cL}^{-1})_{\le -1} {\cal L}^{n+1} d\log{\cal L} ] \nonumber \\ && - \sum_{k=2}^\infty kt_k \;\mathop{\mbox{\rm res}}[ ({\cal L}^{k-1})_{\le -1} {\cal L}^{n+1} d\log{\cal L} ] \label{eq:9} \end{eqnarray} for $n = 1,2,\ldots$. This process, too, is reversible. Therefore we now have only to solve these equations for $u_n$, $v_n$, $\bar{u}_n$ and $\bar{v}_n$. The third and final step is to solve these equations by Taylor expansion. Eqs. (\ref{eq:3'},\ref{eq:45},\ref{eq:3},\ref{eq:67}) include only $u$'s and $\bar{u}$'s. 
By expanding these unknown functions into Taylor series of $(t,\bar{t})$ at $(t,\bar{t})=(0,0)$, one can convert these equations into (very complicated) recursion relations for the Taylor coefficients. By the standard power counting method, one can show that these recursion relations uniquely determine $u$'s and $\bar{u}$'s as: \begin{eqnarray} \bar{u}_0 &=& s + \ \mbox{higher order terms}, \nonumber\\ u_n &=& - n \bar{t}_n s^{n-1} + \ \mbox{higher order terms}, \nonumber\\ \bar{u}_n &=& n t_n + \ \mbox{higher order terms} \quad (n \ge 1). \end{eqnarray} Once $u$'s and $\bar{u}$'s are thus determined, the remaining two equations (\ref{eq:8},\ref{eq:9}) give $v_n$ and $\bar{v}_n$ explicitly. Thus our Riemann-Hilbert problem turns out to have a unique solution. The solutions $u_n$, $\bar{u}_n$, $v_n$ and $\bar{v}_n$ of the above equations turn out to have good scaling properties. Note that each equation of (\ref{eq:RH}) is invariant under the following formal rescaling of the variables included therein: \begin{eqnarray} t_n \to c^{-n}t_n, && \bar{t}_n \to c^n \bar{t}_n, \nonumber\\ s \to s, && p \to c^{-1} p, \nonumber\\ u_n \to c^n u_n, && \bar{u}_n \to c^{-n} \bar{u}_n, \nonumber\\ v_n \to c^n v_n, && \bar{v}_n \to c^{-n} \bar{v}_n. \end{eqnarray} Since the Riemann-Hilbert problem has a unique solution, this means that $u_n$, $\bar{u}_n$, $v_n$ and $\bar{v}_n$ indeed have the above scaling property as functions of $(t,\bar{t},s)$. In other words, if we define the weight (U(1)-charge) of $t,\bar{t},s$ as \begin{equation} \mbox{wt}(t_n) = -n, \quad \mbox{wt}(\bar{t}_n) = n, \quad \mbox{wt}(s) = 0, \end{equation} then $u_n$, $\bar{u}_n$, $v_n$ and $\bar{v}_n$ become quasi-homogeneous functions of degree $n$, $-n$, $n$ and $-n$, respectively. Accordingly, the functions $\phi$ and $F$, which are defined by (\ref{eq:dphi},\ref{eq:dF}), become quasi-homogeneous functions of degree $0$.
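Since $\mbox{wt}(s) = 0$, the degree-zero quasi-homogeneity of $F$ can equivalently be stated as an Euler-type identity (a direct restatement of the scaling property, obtained by differentiating $F(c^{-n}t_n, c^n\bar{t}_n, s) = F(t_n,\bar{t}_n,s)$ with respect to $c$ at $c=1$):
\begin{equation}
\sum_{n=1}^\infty \left( - n t_n \dfrac{\partial F}{\partial t_n}
+ n \bar{t}_n \dfrac{\partial F}{\partial \bar{t}_n} \right) = 0.
\end{equation}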
Three remarks are now in order: First, we have in fact two equations (\ref{eq:3'}) and (\ref{eq:3}) that include $\bar{u}_0$ as a main term; apparently this is redundant. Actually, one may select either of them arbitrarily, and solve it along with (\ref{eq:45},\ref{eq:67}). This eventually leads to the same result, as one can verify by returning to (\ref{eq:1a},\ref{eq:1b},\ref{eq:2a},\ref{eq:2b}) and reexamining the derivation of the above equations therefrom. Second, in the final step of the above consideration, we have Taylor-expanded all unknown functions at $(t,\bar{t})=(0,0)$, but $s$ is left free. Namely, we do not need a Taylor expansion in $s$, and can set it to any constant value. This is also reflected in the fact that the weight (U(1)-charge) of $s$ is zero. This is a desirable property, because $s$ is interpreted as the cosmological constant of two-dimensional strings, and an advantage of the Landau-Ginzburg formulation lies in the fact that it describes the theory with non-zero cosmological constant. Third, we have not specified any explicit expressions of $u_n$, $v_n$, $\bar{u}_n$ and $\bar{v}_n$; they should be very complicated, and we actually do not need such explicit formulas. We just have to prove that the Riemann-Hilbert problem has a unique solution. The general machinery of the dispersionless Toda hierarchy can work only after this fact is confirmed. Once the existence of such a solution is proven, one can derive $w_{1+\infty}$-constraints on the $F$ potential therefrom, and identify it with the generating function of tachyon correlation functions, as we shall show in the next section. All relevant information on the tachyon dynamics is now encoded into the $F$ potential.
\section{Constraints on the $F$ potential \label{sec:constraints}} Let us now derive the $w_{1+\infty}$-constraints on $F$. To this end, we start from the relations \begin{equation} {\cal L}^n = \bar{\cM}^n \bar{\cL}^n, \quad \bar{\cL}^{-n} = {\cal M}^n {\cal L}^{-n}, \quad n = 1,2, \ldots, \label{eq:RH(n)} \end{equation} which are an obvious consequence of (\ref{eq:RH}). Just as we derived (\ref{eq:3'}) etc. in the previous section, we now take the residue pairing of both hand sides of (\ref{eq:RH(n)}) with $d{\cal B}_m$, $d\bar{\cB}_m$ and $d\log p$ ($m = 1,2,\ldots$). This results in the following relations: \begin{eqnarray} \;\mathop{\mbox{\rm res}}[ {\cal L}^n d{\cal B}_m ] &=& \;\mathop{\mbox{\rm res}}[ \bar{\cM}^n \bar{\cL}^n d{\cal B}_m ], \nonumber\\ \;\mathop{\mbox{\rm res}}[ {\cal L}^n d\bar{\cB}_m ] &=& \;\mathop{\mbox{\rm res}}[ \bar{\cM}^n \bar{\cL}^n d\bar{\cB}_m ], \nonumber\\ \;\mathop{\mbox{\rm res}}[ {\cal L}^n d\log p] &=& \;\mathop{\mbox{\rm res}}[ \bar{\cM}^n \bar{\cL}^n d\log p ], \nonumber\\ \;\mathop{\mbox{\rm res}}[ \bar{\cL}^{-n} d{\cal B}_m ] &=& \;\mathop{\mbox{\rm res}}[ {\cal M}^n {\cal L}^{-n} d{\cal B}_m ], \nonumber\\ \;\mathop{\mbox{\rm res}}[ \bar{\cL}^{-n} d\bar{\cB}_m ] &=& \;\mathop{\mbox{\rm res}}[ {\cal M}^n {\cal L}^{-n} d\bar{\cB}_m ], \nonumber\\ \;\mathop{\mbox{\rm res}}[ \bar{\cL}^{-n} d\log p] &=& \;\mathop{\mbox{\rm res}}[ {\cal M}^n {\cal L}^{-n} d\log p ]. \label{eq:resRH(n)} \end{eqnarray} Note that these relations conversely imply (\ref{eq:RH(n)}), because this residue pairing is complete (i.e., $\;\mathop{\mbox{\rm res}}[fd{\cal B}_m] = \;\mathop{\mbox{\rm res}}[fd\bar{\cB}_m] = \;\mathop{\mbox{\rm res}}[fd\log p] = 0$ for all $m=1,2,\ldots$ if and only if $f = 0$).
We can now apply (\ref{eq:deltav}) to each equation of (\ref{eq:resRH(n)}) to rewrite them as: \begin{eqnarray} && \dfrac{\partial}{\partial t_m}\delta_{{\cal L}^n,\bar{\cM}^n \bar{\cL}^n}F = \dfrac{\partial}{\partial \bar{t}_m}\delta_{{\cal L}^n,\bar{\cM}^n \bar{\cL}^n}F = \dfrac{\partial}{\partial s}\delta_{{\cal L}^n,\bar{\cM}^n \bar{\cL}^n}F = 0, \nonumber\\ && \dfrac{\partial}{\partial t_m}\delta_{{\cal M}^n {\cal L}^{-n},\bar{\cL}^{-n}}F = \dfrac{\partial}{\partial \bar{t}_m}\delta_{{\cal M}^n {\cal L}^{-n},\bar{\cL}^{-n}}F = \dfrac{\partial}{\partial s}\delta_{{\cal M}^n {\cal L}^{-n},\bar{\cL}^{-n}}F = 0. \end{eqnarray} These equations show that $\delta_{{\cal L}^n,\bar{\cM}^n \bar{\cL}^n}F$ and $\delta_{{\cal M}^n {\cal L}^{-n},\bar{\cL}^{-n}}F$ are constant. Actually, these constants must vanish: if one recalls the aforementioned scaling properties of $v_n$, $\bar{v}_n$ and $\phi$, and applies them to the general formula (\ref{eq:deltaF}), one sees that $\delta_{{\cal L}^n,\bar{\cM}^n \bar{\cL}^n}F$ and $\delta_{{\cal M}^n {\cal L}^{-n},\bar{\cL}^{-n}}F$ are quasi-homogeneous of degree $-1$. This means that the constant values must be zero. Thus we can conclude that $F$ satisfies the equations \begin{equation} \delta_{{\cal L}^n,\bar{\cM}^n \bar{\cL}^n}F = 0, \quad \delta_{{\cal M}^n {\cal L}^{-n},\bar{\cL}^{-n}}F = 0, \quad n = 1,2,\ldots. \label{eq:w-constraints} \end{equation} Furthermore, by carefully examining the above derivation, one can see that it is reversible; Eqs. (\ref{eq:RH(n)}) (and therefore the original Riemann-Hilbert problem) can conversely be derived from (\ref{eq:w-constraints}). Eqs. (\ref{eq:w-constraints}) are actually just a disguised form of the $w_{1+\infty}$-constraints of Hanany et al.
By the general formula (\ref{eq:deltaF}), one can rewrite (\ref{eq:w-constraints}) into a more explicit form: \begin{eqnarray} v_n - \frac{1}{n+1} \;\mathop{\mbox{\rm res}}[ \bar{\cM}^{n+1}\bar{\cL}^n d\log\bar{\cL}] &=& 0, \nonumber\\ \bar{v}_n + \frac{1}{n+1} \;\mathop{\mbox{\rm res}}[ {\cal M}^{n+1} {\cal L}^{-n} d\log{\cal L}] &=& 0. \end{eqnarray} One can then substitute $v_n = \partial F/\partial t_n$ and $\bar{v}_n = - \partial F/\partial \bar{t}_n$ to write the left hand sides in terms of derivatives of $F$. Furthermore, one can introduce a new variable $X$ and, as in (\ref{eq:L->lambda}), rewrite the residues in terms of $X$ by replacing ${\cal L} \to X$, $\bar{\cL} \to X^{-1}$. Thus, eventually, (\ref{eq:w-constraints}) turn into the following form: \begin{eqnarray} \dfrac{\partial F}{\partial t_n} - \frac{1}{n+1} \;\mathop{\mbox{\rm res}} \left[ \left( \frac{\bar{\cM}(\bar{\cL}\to X^{-1})}{X} \right)^{n+1} dX \right] &=& 0, \nonumber \\ \dfrac{\partial F}{\partial \bar{t}_n} + \frac{1}{n+1} \;\mathop{\mbox{\rm res}} \left[ \left( \frac{{\cal M}({\cal L}\to X)}{X} \right)^{n+1} dX \right] &=& 0, \label{eq:HOPconstraints} \end{eqnarray} which become exactly the $w_{1+\infty}$-constraints of Hanany et al. if we interpret their two Landau-Ginzburg potentials $W(X)$, $\bar{W}(X)$ and the tachyon correlation functions $<\!< T_n >\!>$ as: \begin{eqnarray} W(X) = - \frac{{\cal M}({\cal L}\to X)}{X}, && \bar{W}(X) = - \frac{\bar{\cM}(\bar{\cL}\to X^{-1})}{X}, \label{eq:LGpotential} \\ \left<\!\left< T_n \right>\!\right> = \frac{1}{n} \dfrac{\partial F}{\partial t_n}, && \left<\!\left< T_0 \right>\!\right> = \dfrac{\partial F}{\partial s}, \nonumber\\ \left<\!\left< T_{-n} \right>\!\right> = - \frac{1}{n} \dfrac{\partial F}{\partial\bar{t}_n}, && (n = 1,2,\ldots). \label{eq:Tcorrelator} \end{eqnarray} The extra numerical factors on the right hand side emerge because our $(t,\bar{t},s)$ are slightly different from the background sources of Hanany et al.
Our results agree with theirs if we interpret the correlator $<\!< {\cal O} >\!>$ as: \begin{equation} \left<\!\left< {\cal O} \right>\!\right> = \left< {\cal O} \exp( \sum_{n=1}^\infty n t_n T_n + s T_0 - \sum_{n=1}^\infty n \bar{t}_n T_{-n} ) \right>. \end{equation} Actually, in place of (\ref{eq:RH(n)}), one can consider even more general combinations of the fundamental Riemann-Hilbert relation: \begin{equation} {\cal M}^k {\cal L}^{n-k} = \bar{\cM}^n \bar{\cL}^{n-k}, \quad k,n = 0,1,\ldots. \label{eq:RH(kn)} \end{equation} Then, by the same reasoning as above, the following constraints can be obtained: \begin{equation} \delta_{{\cal M}^k {\cal L}^{n-k},\bar{\cM}^n \bar{\cL}^{n-k}} F = 0. \label{eq:constraints(kn)} \end{equation} In terms of the Landau-Ginzburg potentials, more explicitly, these constraints can be written \begin{equation} \frac{1}{k+1} \;\mathop{\mbox{\rm res}} \left[ \left( - W(X) \right)^{k+1} X^n dX \right] = \frac{1}{n+1} \;\mathop{\mbox{\rm res}} \left[ \left( - \bar{W}(X) \right)^{n+1} X^k dX \right]. \end{equation} Of course, as also noted by Hanany et al., their $w_{1+\infty}$-constraints are in themselves powerful enough to determine the tachyon correlation functions completely. In this respect, the above constraints are redundant. These extra constraints, however, turn out to stem from underlying higher symmetries, as we shall discuss in the next section. \section{States and fields generated by wedge algebra \label{sec:wedge}} We first note that both hand sides of (\ref{eq:RH(kn)}) are generators of a wedge algebra. To clarify this fact, we introduce nonnegative half-integer indices $(j,m)$ in the ``wedge'' $|m| \le j$ by the usual convention \begin{equation} k = j-m, \quad n = j+m, \end{equation} and write both hand sides of (\ref{eq:RH(kn)}) as $w_{jm}$: \begin{equation} w_{jm} = {\cal L}^n ({\cal M}{\cal L}^{-1})^k = (\bar{\cM}\bar{\cL})^n \bar{\cL}^{-k}.
\end{equation} Since \begin{equation} \{ {\cal L}, {\cal M}{\cal L}^{-1} \} = \{ \bar{\cM}\bar{\cL}, \bar{\cL}^{-1} \} = 1, \end{equation} the $w_{jm}$ indeed form a wedge algebra with respect to the Poisson bracket. In the following, we propose a speculative interpretation of this wedge algebra as generators of ``extra'' states and fields of two-dimensional strings. Let us show how such ``states'' emerge in our framework. Let $W_{jm}$ denote the following symmetries of the dispersionless Toda hierarchy: \begin{equation} W_{jm} = \delta_{{\cal L}^n ({\cal M}{\cal L}^{-1})^k,0} = - \delta_{0,(\bar{\cM}\bar{\cL})^n \bar{\cL}^{-k}}. \end{equation} These symmetries are understood to act on the ring ${\cal R}$ of Section \ref{sec:symmetries}. The two expressions on the right hand side give the same symmetry because of (\ref{eq:constraints(kn)}). Furthermore, by (\ref{eq:CR}), the $W_{jm}$ obey the same commutation relations as the Poisson commutation relations of $w_{jm}$; the central terms disappear, as usual, on a wedge. The action of those sitting on the ``edge'' of the wedge, $(j,m) = (n/2,\pm n/2)$, generates the tachyon correlation functions: \begin{eqnarray} && W_{n/2,n/2} F = - \frac{\partial F}{\partial t_n} = - n \left<\!\left< T_n \right>\!\right>, \nonumber\\ && W_{n/2,-n/2} F = \frac{\partial F}{\partial \bar{t}_n} = - n \left<\!\left< T_{-n} \right>\!\right>. \end{eqnarray} In view of this, we propose to consider the action of the other $W$'s, too, as the insertion of a ``state'' $W_{jm}$ into the correlator: \begin{equation} W_{j_1,m_1}\cdots W_{j_r,m_r} F = \left<\!\left< W_{j_1,m_1} \cdots W_{j_r,m_r} \right>\!\right>. \end{equation} Commutation relations (\ref{eq:CR}) of our symmetries will then reproduce the $w_{1+\infty}$ Ward identities in the matrix model approach \cite{bib:matrix-model} (now in the presence of tachyon backgrounds). What about ``fields''? A set of fields $\phi_n(X)$ and $\bar{\phi}_n(X)$ are introduced by Hanany et al.
\cite{bib:HOP} as $c=1$ analogues of $c<1$ chiral ring generators and gravitational descendants. In our interpretation of $(t,\bar{t})$ as background sources, $\phi_n(X)$ are given by \begin{equation} \phi_n(X) = - \frac{1}{n}\frac{\partial W(X)}{\partial t_n}, \quad \phi_{-n}(X) = \frac{1}{n}\frac{\partial W(X)}{\partial \bar{t}_n} \quad (n = 1,2,\ldots), \end{equation} and $\bar{\phi}_n(X)$ by similar derivatives of $\bar{W}(X)$. Since the Landau-Ginzburg potentials are written in terms of ${\cal M}({\cal L}\to X)$ and $\bar{\cM}(\bar{\cL}\to X^{-1})$ as shown in (\ref{eq:LGpotential}), these ``fields'' are exactly the same quantities as those emerging on the right hand side of (\ref{eq:dB(lambda)dM(lambda)}), i.e., derivatives of the flow generators ${\cal B}_n$ and $\bar{\cB}_n$ with respect to the Landau-Ginzburg field variable $X$. Note that this is parallel to the construction of chiral ring generators in the $A_{k+1}$ models \cite{bib:DVV,bib:Krichever,bib:Dubrovin}. These ``fields'' are Landau-Ginzburg counterparts of the tachyon ``states'' $W_{n/2,\pm n/2}$. To find other ``fields'', let us note that $\phi_n(X)$ can also be written \begin{equation} \phi_n(X) = X^{n-1} + \frac{1}{n}\delta_{{\cal L}^n,0} W(X), \quad \phi_{-n}(X) = - \frac{1}{n}\delta_{0,\bar{\cL}^{-n}} W(X). \end{equation} Here we have used (\ref{eq:deltaM(lambda)}), recalling the correspondence (\ref{eq:LGpotential}) between the Landau-Ginzburg potential and the Lax functions. The somewhat strange extra term $X^{n-1}$ is due to the presence of tachyon backgrounds. Since the symmetries on the right hand side are just $W_{n/2,\pm n/2}$, we are naturally led to conjecture that the ``fields'' $\Phi_{jm}(X)$ corresponding to the ``states'' $W_{jm}$ are given by \begin{equation} \Phi_{jm}(X) = W_{jm} W(X). \end{equation} Similarly, the action of $W_{jm}$ on $\bar{W}(X)$ will give another set of extra ``fields'' $\bar{\Phi}_{jm}(X)$.
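As a consistency check, the origin of the extra term $X^{n-1}$ can be seen directly from (\ref{eq:M(lambda)}) and (\ref{eq:LGpotential}): differentiating the explicit $t_n$-dependence of ${\cal M}({\cal L}\to X)$ with $X$ held fixed gives
\begin{equation}
\phi_n(X) = \frac{1}{nX} \dfrac{\partial {\cal M}({\cal L}\to X)}{\partial t_n}
= X^{n-1} + \frac{1}{n} \sum_{m=1}^\infty
\dfrac{\partial^2 F}{\partial t_n \partial t_m} X^{-m-1}.
\end{equation}
The first term comes from the background source term $n t_n X^n$ in (\ref{eq:M(lambda)}), while the second collects the response of $F$ to the background.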
In principle, one can find an explicit form of these ``extra fields'' from (\ref{eq:deltaM(lambda)}), though it will become considerably complicated in general. To push this speculation further, we will have to examine whether the period integral representation of tachyon correlation functions and the contact algebra of $\phi_n(X)$ and $\bar{\phi}_n(X)$ \cite{bib:Ghoshal-Mukhi,bib:HOP,bib:Danielsson} can be extended to our $\Phi_{jm}(X)$ and $\bar{\Phi}_{jm}(X)$. \section{Conclusion \label{sec:conclusion}} Inspired by the suggestion of Hanany et al., we have considered the integrable structure of two-dimensional string theory at the self-dual compactification radius. Our main conclusion is that the dispersionless Toda hierarchy is a very convenient framework for studying the tachyon sector of this theory. We have been able to identify a special solution of this integrable hierarchy in which the full data of the tachyon dynamics are encoded. The $w_{1+\infty}$-constraints of tachyon correlation functions can indeed be reproduced from the construction (the Riemann-Hilbert problem) of this solution. The Landau-Ginzburg formulation, too, turns out to be closely related to the Lax formalism of the dispersionless Toda hierarchy. Furthermore, we have pointed out the existence of a wedge algebra structure behind the Riemann-Hilbert problem, and proposed a speculative interpretation of this algebra as generators of ``extra'' states and fields in this model of two-dimensional strings. The last issue deserves to be pursued in more detail. We conclude this paper with several remarks. 1) In the context of two-dimensional gravity, the dispersionless Toda hierarchy is the zero-genus limit of the full Toda hierarchy. A full-genus analysis in the language of the Toda hierarchy has been done by Dijkgraaf et al. \cite{bib:DMP}. It should be possible to extend the results of this paper to that case.
2) As already mentioned, the integrable hierarchy underlying the topological $D_\ell$ models \cite{bib:T-DLG} resembles the dispersionless Toda hierarchy. This hierarchy is related to the Drinfeld-Sokolov hierarchy of $D$-type. It is intriguing that Danielsson \cite{bib:Danielsson} pointed out a link between a deformed Landau-Ginzburg model and the Drinfeld-Sokolov hierarchy of $D$-type. 3) Our method for solving a Riemann-Hilbert problem can be extended to more general cases such as: \begin{equation} {\cal L}^N = \bar{\cM} \bar{\cL}^{\bar{N}} / \bar{N}, \quad \bar{\cL}^{-\bar{N}} = {\cal M} {\cal L}^{-N} / N, \label{eq:NNbar} \end{equation} where $N$ and $\bar{N}$ are nonzero integers. In this paper, we have considered the simplest case, $N=\bar{N}=1$; other cases, too, may have interesting physical interpretations. For instance, the work of Dijkgraaf et al. \cite{bib:DMP} implicitly shows that if the compactification radius ($\beta$ in their notation) is a positive integer, the dynamics of tachyons in the zero-genus limit can be described by the solution of (\ref{eq:NNbar}) with $N=\bar{N}=\beta$. Thus we can deal with a discrete series of theories at non-self-dual ($\beta > 1$) radii in much the same way; a full-genus analysis will become possible in the full Toda hierarchy. 4) Discrete states and quadratic Ward identities in the free field approach \cite{bib:free-field} are still beyond our scope. Our approach via the dispersionless Toda hierarchy is at best an effective theory in the tachyon sector, though it nevertheless reproduces the wedge algebra symmetries acting on tachyon states. Presumably, a suitable integrable extension of the dispersionless (or full) Toda hierarchy will provide a framework for dealing with this issue. \begin{acknowledgement} The author is very grateful to Hiroaki Kanno and Takashi Takebe for many useful comments.
This work is partially supported by the Grant-in-Aid for Scientific Research, the Ministry of Education, Science and Culture, Japan. \end{acknowledgement} \begin{note-added} Hiroaki Kanno informed the author that Tohru Eguchi independently arrived at the same Riemann-Hilbert relation as ours (\ref{eq:RH}). \end{note-added}
\subsection{Pseudo code of Counter Application} \label{subsec:countercode} OP\_LT\_SEED is the number of operations per transaction, T\_OBJ\_SEED is the number of transaction objects in the system, TRANS\_LT is the total number of transactions to be executed in the system, and READ\_PER is the percentage of read operations, which is used to define the various workloads. \label{apn:conters} \begin{algorithm} \caption{$main()$: The main procedure invoked by the counter application} \label{algo:main} \begin{algorithmic}[1] \State \Comment{To log abort counts by each thread} \State $abort\_count[$NUMTHREADS$]$ \State \Comment{To log average time taken by each transaction to commit} \State $time\_taken[$NUMTHREADS$]$ \State \Comment{To log the time of the longest running transaction by each thread, i.e., the worst case time} \State $worst\_time[$NUMTHREADS$]$ \For{(i = 0 : NUMTHREADS)} \State pthread\_create(\&threads[i], NULL, testFunc\_helper,(void$\ast$)args) \EndFor \For{(i = 0 : NUMTHREADS)} \State pthread\_join(threads[i], \&status) \EndFor \State $max\_worst\_time = 0.0$ \State $total\_abort\_count = 0$ \State $average\_time\_taken = 0$ \For{(i = 0 : NUMTHREADS)} \If{($max\_worst\_time < worst\_time[i]$)} \State $max\_worst\_time = worst\_time[i]$ \EndIf \State $total\_abort\_count += abort\_count[i]$ \State $average\_time\_taken += time\_taken[i]$ \EndFor \end{algorithmic} \end{algorithm} \vspace{1mm} \begin{algorithm} [H] \caption{$testFunc\_helper()$: Function invoked by each thread} \label{algo:testFunc} \begin{algorithmic}[1] \State $transaction\_count = 0$ \While{(TRANS\_LT)} \State\Comment{Log the time at the start of every transaction} \State $begin\_time = time\_request()$ \State\Comment{Invoke the test function to execute a transaction} \State $abort\_count[thread\_id] = test\_function()$ \State $transaction\_count++$ \algstore{myalg} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic} \algrestore{myalg} \State\Comment{Log the time at the end
of every transaction} \State $end\_time = time\_request()$ \State $time\_taken[thread\_id] += (end\_time - begin\_time)$ \If{($worst\_time[thread\_id] < (end\_time - begin\_time)$)} \State $worst\_time[thread\_id] = (end\_time - begin\_time)$ \EndIf \State TRANS\_LT -= 1 \EndWhile \State $time\_taken[thread\_id]$ /= $transaction\_count$ \end{algorithmic} \end{algorithm} \vspace{1mm} \begin{algorithm} [H] \caption{$test\_function()$: Main test function which executes a transaction} \label{algo:testFunx} \begin{algorithmic}[1] \State Transaction $\ast$T = new Transaction; \State $T\rightarrow g\_its$ = NIL \State $local\_abort\_count$ = 0 \State label: \While{(true)} \If{($T\rightarrow g\_its$ != $NIL$)} \State $its = T\rightarrow g\_its$ \State $T = lib\rightarrow stm$-$begin(its)$ \Else \State $T = lib\rightarrow stm$-$begin(T\rightarrow g\_its)$ \EndIf \ForAll{(OP\_LT\_SEED)} \State $t\_obj = rand()\%T\_OBJ\_SEED$ \State $randVal = rand()\%OP\_SEED$ \If{($randVal <= READ\_PER$)} \State $stm$-$read(t\_obj, value)$ \If{(value == $ABORTED$)} \State $local\_abort\_count$++ \State goto label \EndIf \Else \State $stm$-$write(t\_obj, value)$ \EndIf \EndFor \If{($lib\rightarrow stm$-$tryC() == ABORTED$)} \State $local\_abort\_count$++ \State continue \EndIf \State break \EndWhile \end{algorithmic} \end{algorithm} \subsection{Data Structures and Pseudocode of \ksftm} \label{sec:code} The STM system consists of the following methods: $\init(), \begt(), read(i, x), write_i(i, x, v)$ and $\tryc(i)$. We assume that all the \tobj{s} are ordered as $x_1, x_2, ...x_n$ and belong to the set $\mathcal{T}$. We describe the data-structures used by the algorithm. We start with structures that are local to each transaction. Each transaction $T_i$ maintains a $\rset{i}$ and $\wset{i}$. In addition it maintains the following structures: (1) $\ct_i$: This is the commit-time value given to $T_i$ when it terminates; it is assigned in the \tryc{} \mth.
(2) A series of lists: \srl, \lrl, \allrl, \pvl, \nvl, \relll, \abl. The meaning of these lists will become clear from the description of the pseudocode. In addition to these local structures, the following shared global structures are maintained that are shared across transactions (and hence, threads). We name all the shared variables starting with `G'. \begin{itemize} \item $\gtcnt$ (counter): This is a numerical-valued counter that is incremented when a transaction begins and terminates. \end{itemize} \noindent For each transaction $T_i$ we maintain the following shared time-stamps: \begin{itemize} \item $\glock_i$: A lock for accessing all the shared variables of $T_i$. \item $\gits_i$ (initial timestamp): It is a time-stamp assigned to $T_i$ when it was invoked for the first time without any aborts. The current value of $\gtcnt$ is atomically assigned to it and then incremented. If $T_i$ is aborted and restarts later then the application assigns it the same \gits. \item $\gcts_i$ (current timestamp): It is a time-stamp assigned when $T_i$ is invoked again at a later time after an abort. Like \gits, the current value of $\gtcnt$ is atomically assigned to it and then incremented. When $T_i$ is created for the first time, its \gcts{} is the same as its \gits. \item $\gwts_i$ (working timestamp): It is the time-stamp that $T_i$ works with. It is either greater than or equal to $T_i$'s \gcts. It is computed as follows: $\gwts_i = \gcts_i + C * (\gcts_i - \gits_i)$. \item $\gval_i$: This is a boolean variable which is initially true. If it becomes false then $T_i$ has to be aborted. \item $\gstat_i$: This is a variable which states the current status of $T_i$. It has three states: \texttt{live}, \texttt{committed} or \texttt{aborted}. \item $\tltl_i, \tutl_i$ (transaction lower \& upper time limits): These are the time-limits described in the previous section used to keep the transaction \wts and \rt orders in sync.
$\tltl_i$ is the \gcts{} of $T_i$ when the transaction begins and is a non-decreasing value. It continues to increase (or remains the same) as $T_i$ reads \tobj{s} and later terminates. $\tutl_i$ on the other hand is a non-increasing value starting with $\infty$ when $T_i$ is created. It reduces (or remains the same) as $T_i$ reads \tobj{s} and later terminates. If $T_i$ commits then both $\tltl_i$ \& $\tutl_i$ are made equal. \end{itemize} Two transactions having the same \its are said to be \inc{s}. No two transactions can have the same \cts. For simplicity, we assume that no two transactions have the same \wts as well. In case two transactions have the same \wts, one can use the tuple $\langle$\wts, \cts$\rangle$ instead of \wts. But we ignore such cases. For each \tobj{} $x$ in $\mathcal{T}$, we maintain: \begin{itemize} \item $x.\vl$ (version list): It is a list consisting of version tuples or \emph{\vtup} of the form $\langle \ts, val, \rl, \vt \rangle$. The details of the tuple are explained below. \item $\ts$ (timestamp): Here $\ts$ is the $\gwts_i$ of a committed transaction $T_i$ that has created this version. \item $val$: The value of this version. \item $\rl$ (readList): $\rl$ is the read list, consisting of all the transactions that have read this version. Each entry in this list is of the form $\langle rts \rangle$ where $rts$ is the $\gwts_j$ of a transaction $T_j$ that read this version. \item $\vt$ (version real-time timestamp): It is the \tutl{} value (which is the same as \tltl) of the transaction $T_i$ that created this version, at the time of commit of $T_i$. \end{itemize} \begin{algorithm}[H] \caption{STM $\init()$: Invoked at the start of the STM system.
Initializes all the \tobj{s} used by the STM System} \label{algo:init} \begin{algorithmic}[1] \State $\gtcnt$ = 1; \Comment{Global Transaction Counter} \ForAll {$x$ in $\mathcal{T}$} \Comment{All the \tobj{s} used by the STM System} \State /* $T_0$ is creating the first version of $x$: $\ts= 0, val = 0, \rl = nil, \vt = 0$ */ \State add $\langle 0, 0, nil, 0 \rangle$ to $x.\vl$; \label{lin:t0-init1} \EndFor; \end{algorithmic} \end{algorithm} \begin{algorithm} [H] \caption{STM $\begt(its)$: Invoked by a thread to start a new transaction $T_i$. Thread can pass a parameter $its$ which is the initial timestamp when this transaction was invoked for the first time. If this is the first invocation then $its$ is $nil$. It returns the tuple $\langle id, \gwts, \gcts \rangle$} \label{algo:begin} \begin{algorithmic}[1] \State $i$ = unique-id; \Comment{A unique id to identify this transaction. It could be the same as \gcts} \State \Comment{Initialize transaction-specific local \& global variables} \If {($its == nil$)} \State $\gits_i = \gwts_i = \gcts_i = \gtcnt.get\&Inc()$; \Comment{$\gtcnt.get\&Inc()$ returns the current value of \gtcnt and atomically increments it} \label{lin:ti-ts-init} \Else \State $\gits_i = its$; \State $\gcts_i = \gtcnt.get\&Inc()$; \State $\gwts_i = \gcts_i + C * (\gcts_i - \gits_i)$; \Comment{$C$ is any constant greater than or equal to 1} \EndIf \State $\tltl_i = \gcts_i$; $\tutl_i = \ct_i = \infty$; \label{lin:lts-init} \State $\gstat_i$ = \texttt{live}; $\gval_i = T$; \State $rset\xspace_i = wset\xspace_i = nil$; \State return $\langle i, \gwts_i, \gcts_i\rangle$ \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{STM $read(i, x)$: Invoked by a transaction $T_i$ to read \tobj{} $x$.
It returns either the value of $x$ or $\mathcal{A}$} \label{algo:read} \begin{algorithmic}[1] \If {($x \in wset\xspace_i$)} \Comment{Check if the \tobj{} $x$ is in $wset\xspace_i$} \State return $wset\xspace_i[x].val$; \ElsIf{($x \in rset\xspace_i$)} \Comment{Check if the \tobj{} $x$ is in $rset\xspace_i$} \State return $rset\xspace_i[x].val$; \Else \Comment{\tobj{} $x$ is not in $rset\xspace_i$ and $wset\xspace_i$} \State lock $x$; lock $\glock_i$; \If {$(\gval_i == F)$} return abort(i); \label{lin:rd-chk} \EndIf \State /* \findls: From $x.\vl$, returns the largest \ts value less than $\gwts_i$. If no such version exists, it returns $nil$ */ \State $curVer = \findls(\gwts_i,x)$; \label{lin:rd-curver10} \If {$(curVer == nil)$} return abort(i); \Comment{Proceed only if $curVer$ is not nil} \label{lin:rd-cvnil} \EndIf \State /* \findsl: From $x.\vl$, returns the smallest \ts value greater than $\gwts_i$. If no such version exists, it returns $nil$ */ \State $nextVer = \findsl(\gwts_i,x)$; \algstore{myalg} \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \begin{algorithmic} \algrestore{myalg} \If {$(nextVer \neq nil)$} \State \Comment{Ensure that $\tutl_i$ remains smaller than $nextVer$'s \vltl} \State $\tutl_i = min(\tutl_i, x[nextVer].\vt-1)$; \label{lin:rd-ul-dec} \EndIf \State \Comment{$\tltl_i$ should be greater than $x[curVer].\vltl$} \State $\tltl_i = max(\tltl_i, x[curVer].\vt + 1)$; \label{lin:rd-tltl-inc} \If {($\tltl_i > \tutl_i$)} \Comment{If the limits have crossed each other, then $T_i$ is aborted} \State return abort(i); \label{lin:rd-lts-cross} \EndIf \State $val = x[curVer].val$; add $\langle x, val \rangle$ to $rset\xspace_i$; \State add $T_i$ to $x[curVer].rl$; \State unlock $\glock_i$; unlock $x$; \State return $val$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{STM $write_i(x,val)$: A Transaction $T_i$ writes into local memory} \label{algo:write} \begin{algorithmic}[1] \State Append the $d\_tuple \langle x,val \rangle$
to $wset\xspace_i$. \State return $ok$; \end{algorithmic} \end{algorithm} \begin{algorithm} [H] \caption{STM $\tryc()$: Returns $\mathcal{C}$ on commit, else returns Abort ($\mathcal{A}$)} \label{algo:tryc} \begin{algorithmic}[1] \State \Comment{The following check is an optimization which needs to be performed again later} \State lock $\glock_i$; \If {$(\gval_i == F)$} return abort(i); \label{lin:init-tc-chk} \EndIf \State unlock $\glock_i$; \State \Comment{Initialize smaller read list (\srl), larger read list (\lrl), all read list (\allrl) to nil} \State $\srl = \lrl = \allrl = nil$; \label{lin:init-rls} \State \Comment{Initialize previous version list (\pvl), next version list (\nvl) to nil} \State $\pvl = \nvl = nil$; \label{lin:init-vls} \ForAll {$x \in wset\xspace_i$} \State lock $x$ in pre-defined order; \label{lin:lockxs} \State /* \findls: returns the version of $x$ with the largest \ts less than $\gwts_i$. If no such version exists, it returns $nil$. */ \State $\prevv = \findls(\gwts_i, x)$; \Comment{\prevv: largest version smaller than $\gwts_i$} \If {$(\prevv == nil)$} \Comment{There exists no version with \ts value less than $\gwts_i$} \State lock $\glock_i$; return abort(i); \label{lin:prev-nil} \EndIf \State $\pvl = \pvl \cup \prevv$; \Comment{\pvl stores the previous version in sorted order} \State $\allrl = \allrl \cup x[\prevv].rl$; \Comment{Store the read-list of the previous version} \State \Comment{\textbf{\getl}: obtain the list of reading transactions of $x[\prevv].rl$ whose $\gwts$ is greater than $\gwts_i$} \State $\lrl = \lrl \cup \getl(\gwts_i, $ \Statex $x[\prevv].rl)$; \label{lin:lar-coll} \State \Comment{\textbf{\getsm}: obtain the list of reading transactions of $x[\prevv].rl$ whose $\gwts$ is smaller than $\gwts_i$} \State $\srl = \srl \cup \getsm(\gwts_i, $ \Statex $x[\prevv].rl)$; \label{lin:lar-sml} \algstore{myalg} \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \begin{algorithmic} \algrestore{myalg} \State /* \findsl: returns the version with the
smallest \ts value greater than $\gwts_i$. If no such version exists, it returns $nil$. */ \State $\nextv = \findsl(\gwts_i, x)$; \Comment{\nextv: smallest version larger than $\gwts_i$} \label{lin:get-nextv} \If {$(\nextv \neq nil)$} \State $\nvl = \nvl \cup \nextv$; \Comment{\nvl stores the next version in sorted order} \label{lin:nvl-coll} \EndIf \EndFor \Comment{$x \in wset\xspace_i$} \State $\relll = \allrl \cup T_i$; \Comment{Initialize relevant Lock List (\relll)} \ForAll {($T_k \in \relll$)} \State lock $\glock_k$ in pre-defined order; \Comment{Note: Since $T_i$ is also in $\relll$, $\glock_i$ is also locked} \label{lin:lockall} \EndFor \State \Comment{Verify if $\gval_i$ is false} \If {$(\gval_i == F)$} return abort(i); \label{lin:mid-tc-chk} \EndIf \State $\abl = nil$ \Comment{Initialize abort read list (\abl)} \State \Comment{Among the transactions $T_k$ in $\lrl$, either $T_k$ or $T_i$ has to be aborted} \ForAll {$(T_k \in \lrl)$} \If {$(\isab(T_k))$} \State \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted} \State continue; \EndIf \If {$(\gits_i < \gits_k) \land (\gstat_k == \texttt{live})$} \State \Comment{Transaction $T_k$ has lower priority and is not yet committed.
So it needs to be aborted} \State $\abl = \abl \cup T_k$; \Comment{Store $T_k$ in \abl} \label{lin:addAbl-lar} \Else \Comment{Transaction $T_i$ has to be aborted} \State return abort(i); \label{lin:its-chk1} \EndIf \EndFor \State \Comment{Ensure that $\tltl_i$ is greater than \vltl of the versions in $\pvl$} \ForAll {$(ver \in \pvl)$} \State $x$ = \tobj of $ver$; \State $\tltl_i = max(\tltl_i, x[ver].\vt + 1)$; \label{lin:tryc-tltl-inc} \EndFor \State \Comment{Ensure that $\tutl_i$ is less than \vltl of versions in $\nvl$} \ForAll {$(ver \in \nvl)$} \State $x$ = \tobj of $ver$; \State $\tutl_i = min(\tutl_i, x[ver].\vt - 1)$; \label{lin:tryc-ul-dec} \EndFor \State \Comment{Store the current value of the global counter as commit time and increment it} \State $\ct_i = \gtcnt.add\&Get(\incv)$; \Comment{$\incv$ can be any constant $\geq$ 1} \label{lin:tryc-cmt-mod} \State $\tutl_i = min(\tutl_i, \ct_i)$; \Comment{Ensure that $\tutl_i$ is less than or equal to $\ct$} \label{lin:tryc-ul-cmt} \State \Comment{Abort $T_i$ if its limits have crossed} \If {$(\tltl_i > \tutl_i)$} return abort(i); \label{lin:tc-lts-cross} \EndIf \algstore{myalg} \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \begin{algorithmic} \algrestore{myalg} \ForAll {$(T_k \in \srl)$} \If {$(\isab(T_k))$} \State continue; \EndIf \If {$(\tltl_k \geq \tutl_i)$} \label{lin:tk-check} \Comment{Ensure that the limits do not cross for both $T_i$ \& $T_k$} \If {$(\gstat_k == \texttt{live})$} \Comment{Check if $T_k$ is live} \If {$(\gits_i < \gits_k)$} \State \Comment{Transaction $T_k$ has lower priority and is not yet committed. So it needs to be aborted} \State $\abl = \abl \cup T_k$; \Comment{Store $T_k$ in \abl} \label{lin:addAbl-sml} \Else \Comment{Transaction $T_i$ has to be aborted} \State return abort(i); \label{lin:its|lar-sml} \EndIf \Comment{$(\gits_i < \gits_k)$} \label{lin:its-ik} \Else \Comment{($T_k$ is committed.
Hence, $T_i$ has to be aborted)} \State return abort(i); \label{lin:its-chk2} \EndIf \Comment{$(\gstat_k == live)$} \EndIf \Comment{$(\tltl_k \geq \tutl_i)$} \EndFor \Comment{$T_k \in \srl$} \State \Comment{After this point $T_i$ can't abort.} \State $\tltl_i = \tutl_i$; \label{lin:ti-updt} \State \Comment{Since $T_i$ can't abort, we can update $T_k$'s \tutl} \ForAll {$(T_k \in \srl)$} \If {$(\isab(T_k))$} \State continue; \EndIf \State /* The following line ensures that $\tltl_k \leq \tutl_k < \tltl_i$. Note that this does not cause the limits of $T_k$ to cross each other because of the check in \Lineref{tk-check}.*/ \State $\tutl_k = min(\tutl_k, \tltl_i - 1)$; \label{lin:tk-updt} \EndFor \ForAll {$T_k \in \abl$} \Comment{Abort all the transactions in \abl since $T_i$ can't abort} \State $\gval_k = F$; \label{lin:gval-set} \EndFor \State \Comment{Having completed all the checks, $T_i$ can be committed} \ForAll {$(x \in wset\xspace_i)$} \State /* Create new v\_tuple: $\ts, val, \rl, \vt$ for $x$ */ \State $newTuple = \langle \gwts_i, wset\xspace_i[x].val, nil, \tltl_i \rangle$; \label{lin:new-tup} \If {($|x.vl| > k$)} \State replace the oldest tuple in $x.\vl$ with $newTuple$; \Comment{$x.\vl$ is ordered by $\ts$} \Else \State add $newTuple$ to $x.vl$ in sorted order; \EndIf \EndFor \Comment{$x \in wset\xspace_i$} \State $\gstat_i$ = \texttt{commit}; \State unlock all variables; \State return $\mathcal{C}$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{$\isab(T_k)$: Verifies if $T_k$ is already aborted or its \gval flag is set to false implying that $T_k$ will be aborted soon} \label{algo:isab} \begin{algorithmic}[1] \If {$(\gval_k == F) \lor (\gstat_k == \texttt{abort}) \lor (T_k \in \abl)$} \State return $T$; \Else \State return $F$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{$abort(i)$: Invoked by various STM methods to abort transaction $T_i$.
It returns $\mathcal{A}$} \label{algo:abort} \begin{algorithmic}[1] \State $\gval_i = F$; $\gstat_i$ = \texttt{abort}; \State unlock all variables locked by $T_i$; \State return $\mathcal{A}$; \end{algorithmic} \end{algorithm} \subsubsection{Data Structures and Pseudocode of \sftm} \label{apn:SFTM} We start with data-structures that are local to each transaction. For each transaction $T_i$: \begin{itemize} \item $rset\xspace_i$ (read-set): It is a list of data tuples ($d\_tuples$) of the form $\langle x, val \rangle$, where $x$ is the \tobj{} and $val$ is the value read by the transaction $T_i$. We refer to a tuple in $T_i$'s read-set by $rset\xspace_i[x]$. \item $wset\xspace_i$ (write-set): It is a list of ($d\_tuples$) of the form $\langle x, val \rangle$, where $x$ is the \tobj{} to which transaction $T_i$ writes the value $val$. Similarly, we refer to a tuple in $T_i$'s write-set by $wset\xspace_i[x]$. \end{itemize} In addition to these local structures, the following shared global structures are maintained that are shared across transactions (and hence, threads). We name all the shared variables starting with `G'. \begin{itemize} \item $\gtcnt$ (counter): This is a numerical-valued counter that is incremented when a transaction begins. \end{itemize} \noindent For each transaction $T_i$ we maintain the following shared time-stamps: \begin{itemize} \item $\glock_i$: A lock for accessing all the shared variables of $T_i$. \item $\gits_i$ (initial timestamp): It is a time-stamp assigned to $T_i$ when it was invoked for the first time. \item $\gcts_i$ (current timestamp): It is a time-stamp assigned when $T_i$ is invoked again at a later time. When $T_i$ is created for the first time, its $\gcts$ is the same as its $\gits$. \item $\gval_i$: This is a boolean variable which is initially true ($T$). If it becomes false ($F$) then $T_i$ has to be aborted. \item $\gstat_i$: This is a variable which states the current status of $T_i$.
It has three states: \texttt{live}, \texttt{commit} or \texttt{abort}. \end{itemize} \noindent For each data item $x$ in history $H$, we maintain: \begin{itemize} \item $x.val$ (value): It is the latest value written by a successfully committed transaction. \item $x.rl$ (readList): It is the read list consisting of all the transactions that have read $x$. \end{itemize} \begin{algorithm} [H] \caption{STM $\init()$: Invoked at the start of the STM system. Initializes all the data items used by the STM System} \label{alg:init} \begin{algorithmic}[1] \State $\gtcnt$ = 1; \ForAll {data item $x$ used by the STM System} \State add $\langle 0, nil \rangle$ to $x.val$;\Comment{ $T_0$ is initializing $x$} \label{lin:t0-init} \EndFor; \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:begin} \caption{STM $\begt(its)$: Invoked by a thread to start a new transaction $T_i$. Thread can pass a parameter $its$ which is the initial timestamp when this transaction was invoked for the first time. If this is the first invocation then $its$ is $nil$. It returns the tuple $\langle id, \gcts \rangle$} \begin{algorithmic}[1] \State $i$ = unique-id; \Comment{A unique id to identify this transaction. It could be the same as $\gcts$.} \If {($its == nil$)} \State $\gits_i = \gcts_i = \gtcnt.get\&Inc()$; \State \Comment{$\gtcnt.get\&Inc()$ returns the current value of $\gtcnt$ and atomically increments it by 1.} \Else \State $\gits_i = its$; \State $\gcts_i = \gtcnt.get\&Inc()$; \EndIf \State $rset\xspace_i = wset\xspace_i = null$; \State $\gstat_i$ = \texttt{live}; \State $\gval_i = T$; \State return $\langle i, \gcts_i\rangle$ \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:read} \caption{STM $read(i, x)$: Invoked by a transaction $T_i$ to read $x$.
It returns either the value of $x$ or $\mathcal{A}$} \begin{algorithmic}[1] \If {($x \in wset\xspace_i$)} \Comment{Check if $x$ is in $wset\xspace_i$} \State return $wset\xspace_i[x].val$; \ElsIf {($x \in rset\xspace_i$)} \Comment{Check if $x$ is in $rset\xspace_i$} \State return $rset\xspace_i[x].val$; \Else \Comment{$x$ is not in $rset\xspace_i$ and $wset\xspace_i$} \State lock $x$; \State lock $\glock_i$; \If {$(\gval_i == F)$} \State return $abort(i)$; \label{lin:rabort} \EndIf \cmnt { \State /* \findsl: From $x.\vl$, returns the smallest \ts value greater than $\gwts_i$. If no such version exists, it returns $nil$ */ \State $nextVer = \findsl(\gwts_i,x)$; \If {$(nextVer \neq nil)$} \State \Comment{Ensure that $\tutl_i$ remains smaller than $nextVer$'s \vltl} \State $\tutl_i = min(\tutl_i, x[nextVer].vltl-1)$; \EndIf \State \Comment{$\tltl_i$ should be greater than $x[curVer].\vltl$} \State $\tltl_i = max(\tltl_i, x[curVer].\vltl + 1)$; \If {($\tltl_i > \tutl_i$)} \Comment{If the limits have crossed each other, then $T_i$ is aborted} \State return abort(i); \EndIf } \State $val = x.val$; \State add $T_i$ to $x.rl$; \State unlock $\glock_i$; \State unlock $x$; \State return $val$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:write} \caption{STM $write_i(x,val)$: A Transaction $T_i$ writes into local memory} \begin{algorithmic}[1] \State Append the $d\_tuple \langle x,val \rangle$ to $wset\xspace_i$.\Comment{If the same data item exists then overwrite the tuple} \State return $ok$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:llts} \caption{STM $findLLTS(TSet)$: Find the lowest $its$ value among all the live transactions in $TSet$.} \begin{algorithmic}[1] \State $min\_its$ = $\infty$ \ForAll {( $T_j \in TSet$)} \If {(($\gits_j$ $< min\_its)$ \&\& $(\gstat_j == \texttt{live})$)} \State $min\_its$ = $\gits_j$; \EndIf \EndFor \State return $min\_its$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:tryc}
\caption{STM $\tryc()$: Returns $\mathcal{C}$ on commit, else returns Abort ($\mathcal{A}$)} \begin{algorithmic}[1] \State lock $\glock_i$ \If {$(\gval_i == F)$} return $abort(i)$; \EndIf \State $TSet = null$; \Comment{$TSet$ stores transaction Ids} \ForAll {($x \in wset\xspace_i$)} \State lock $x$ in pre-defined order; \ForAll {($T_j \in x.rl$)} \State $TSet$ = $TSet$ $\cup$ \{$T_j$\} \EndFor \EndFor \Comment{$x \in wset\xspace_i$} \State $TSet$ = $TSet$ $\cup$ \{$T_i$\} \Comment{Add current transaction $T_i$ into $TSet$} \ForAll {( $T_k \in TSet$)} \State lock $\glock_k$ in pre-defined order; \Comment{Note: Since $T_i$ is also in $TSet$, $\glock_i$ is also locked} \EndFor \If {$(\gval_i == F)$} return $abort(i)$; \Else \If {($\gits_i == findLLTS(TSet$))} \Comment{Check if $T_i$ has the lowest $its$ among all \texttt{live} transactions in $TSet$} \ForAll {($T_j \in TSet$)} \Comment{ ($T_i \neq T_j$)} \State $\gval_j = F$ \State unlock $\glock_j$; \EndFor \Else \State return $abort(i)$; \EndIf \EndIf \cmnt { \algstore{tryc-break} \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:tryc-cont} \caption{STM $\tryc()$: Continued} \begin{algorithmic}[1] \algrestore{tryc-break} \State \Comment{Ensure that $\vutl_i$ is less than \vltl of versions in $\nvl$} \ForAll {$(ver \in \nvl)$} \State $x$ = \tobj of $ver$; \State $\tutl_i = min(\tutl_i, x[ver].\vltl - 1)$; \EndFor } \ForAll {($x \in wset\xspace_i$)} \State replace the old value in $x.val$ with $wset\xspace_i[x].val$; \State $x.rl$ = null; \EndFor \State $\gstat_i$ = \texttt{commit}; \State unlock all variables locked by $T_i$; \State return $\mathcal{C}$; \end{algorithmic} \end{algorithm} \cmnt { \begin{algorithm} \label{alg:lowPri} \caption{$\lowp(T_k, T_i)$: Verifies if $T_k$ has lower priority than $T_i$ and is not already committed} \begin{algorithmic}[1] \If {$(\gits_i < \gits_k) \land (\gstat_k == \texttt{live})$} \State return $T$; \Else \State return $F$; \EndIf \end{algorithmic} \end{algorithm}
\begin{algorithm} \label{alg:\lowp} \caption{$\lowp(T_k, T_i)$: Aborts the lower-priority transaction among $T_k$ and $T_i$} \begin{algorithmic}[1] \If {$(\gits_i < \gits_k)$} \State $\abl = \abl \cup T_k$; \Comment{Store lower priority $T_k$ in \abl} \Else \State return abort(i); \Comment{Abort $T_i$} \EndIf \end{algorithmic} \end{algorithm} } \begin{algorithm}[H] \label{alg:abort} \caption{$abort(i)$: Invoked by various STM methods to abort transaction $T_i$. It returns $\mathcal{A}$} \begin{algorithmic}[1] \State $\gval_i = F$; \State $\gstat_i$ = \texttt{abort}; \State unlock all variables locked by $T_i$; \State return $\mathcal{A}$; \end{algorithmic} \end{algorithm} \section{Experimental Evaluation} \label{apn:ap-exp} \vspace{-.2cm} In this section we evaluate the performance of \ksftm and \sftm (single-version starvation freedom algorithm) on an Intel(R) Xeon(R) CPU E5-2690 v4 at 2.60GHz (56 cores). We have compared our algorithm \ksftm with the existing \sftm. In order to analyze the performance of \ksftm, we have considered multiple parameters and conclude that the optimal value of $K$ is 5 (described in \apnref{ap-opk}), the optimal value of $C$ is 0.1 (described in \apnref{ap-opc}) and the optimal read-write ratio is 50\% read and 50\% write (described in \apnref{ap-rwr}). \subsection{Optimal value of $K$} \label{apn:ap-opk} We have considered 20 \tobj{s} and 64 threads; each thread operates on 5000 transactions and each transaction has 30 operations (50\% read and 50\% write). We have considered the value of $C$ as 0.1 while varying the value of $K$ from 5 to 30. \figref{ap-optkval} depicts that the optimal value of $K$ is 5.
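The parameter $K$ studied here bounds the number of versions maintained per \tobj. The following Python sketch (illustrative only; the class and method names are ours, not part of the \ksftm implementation) mirrors the replacement rule in the \tryc{} pseudocode: once a \tobj{} already holds $K$ versions, the oldest one is evicted to make room for the new version, so a reader whose \wts precedes all surviving versions finds no version and must abort.

```python
# Illustrative sketch (not the KSFTM implementation): a t-object keeping
# at most K versions, replacing the oldest when the bound is reached,
# as in the tryC pseudocode ("replace the oldest tuple in x.vl").
import bisect

class TObject:
    def __init__(self, k):
        self.k = k
        self.versions = []  # list of (ts, val), kept sorted by ts

    def add_version(self, ts, val):
        if len(self.versions) >= self.k:
            self.versions.pop(0)          # evict the oldest version
        bisect.insort(self.versions, (ts, val))

    def find_lts(self, wts):
        """findLSTS: version with the largest ts strictly below wts, or None."""
        cand = [v for v in self.versions if v[0] < wts]
        return cand[-1] if cand else None

x = TObject(k=3)
for ts in (1, 2, 3, 4):       # four committed writers
    x.add_version(ts, ts * 10)

assert len(x.versions) == 3        # oldest version (ts=1) was evicted
assert x.find_lts(4) == (3, 30)    # reader with wts=4 sees version ts=3
assert x.find_lts(2) is None       # too-old readers find no version -> abort
```

A smaller $K$ lowers memory and version-search overhead but forces more old readers to abort; the experiments above settle on $K = 5$ as the best trade-off for this workload.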
\vspace{5mm} \begin{figure}[H] \centering \subfloat[Commit time per transaction by varying $K$ \label{fig:ap-optkval}] {\includegraphics[width=0.40\textwidth]{figs/optimalK.png}} \subfloat[ Commit time per transaction by varying $C$ \label{fig:ap-optcval}] {\includegraphics[width=0.40\textwidth]{figs/optimalC.png}} \caption[The average and standard deviation of critical parameters]{Optimal values of parameters for \ksftm, while commit time per transaction for \sftm is 999.463ms} \label{fig:ap-eval} \end{figure} \subsection{Optimal value of $C$} \vspace{-3mm} \label{apn:ap-opc} For calculating the optimal value of $C$, we have considered $K$ as 5, \tobj{s} count as 20 and number of threads as 64; each thread executes 5000 transactions and each transaction has 30 operations (50\% read and 50\% write). \figref{ap-optcval} represents the execution under \ksftm with values of $C$ (0, 0.1, 0.2, 0.4, 0.6, 0.8 and 1). The experiment shows that $C = 0.1$ performs best among all; hence the optimal value of $C$ is 0.1. \subsection{Read-write ratio} \label{apn:ap-rwr} For obtaining the best read-write ratio, we have considered $K$ as 5 (optimal value of \textit{K}), $C$ as 0.1 (optimal value of \textit{C}), \tobj{s} count as 20 and number of threads as 64; each thread operates on 5000 transactions and each transaction has 30 operations, while varying the read percentage from 10 to 90.
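The read-write mixes used in these runs follow the counter application's $test\_function$: each of the OP\_LT\_SEED operations of a transaction is chosen to be a read with probability READ\_PER percent, otherwise a write. A minimal Python sketch of this workload choice (constants as in the experiments; function names are ours, for illustration only):

```python
# Illustrative sketch of the counter application's workload mix: each of
# OP_LT_SEED operations per transaction is a read with probability
# READ_PER percent, otherwise a write (mirrors test_function's branch).
import random

OP_LT_SEED = 30    # operations per transaction (as in the experiments)
READ_PER = 50      # percentage of read operations

def transaction_ops(rng):
    ops = []
    for _ in range(OP_LT_SEED):
        if rng.randrange(100) < READ_PER:
            ops.append("read")
        else:
            ops.append("write")
    return ops

rng = random.Random(42)     # seeded for reproducibility
ops = transaction_ops(rng)
assert len(ops) == OP_LT_SEED
assert set(ops) <= {"read", "write"}
```

Varying READ\_PER from 10 to 90 yields the read-dominated and write-dominated workloads compared in the figures below.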
\vspace{5mm} \begin{figure}[H] \centering \hfill \subfloat[ Commit time per transaction by varying read percentage\label{fig:ap-readper}] {\includegraphics[width=0.29\textwidth]{figs/readper.png}} \subfloat[Read abort by varying read percentage \label{fig:ap-readabort}] {\includegraphics[width=0.31\textwidth]{figs/Readabort.png}} \subfloat[ Write abort by varying read percentage\label{fig:ap-writeabort}] {\includegraphics[width=0.32\textwidth]{figs/Writeabort.png}} \caption[Abort count] {Commit time per transaction, read and tryC abort count with varying read percentage} \label{fig:ap-rweval1} \end{figure} \figref{ap-readper} shows that \ksftm outperforms \sftm for read percentages ranging from 20 to 80; with a read percentage of 50, \ksftm gives more than twice the speedup of \sftm. This implies that \ksftm favors not only read transactions but also write transactions. \sftm performs better than \ksftm for a read percentage of 90, because \ksftm takes time searching for the correct version to read from and adding itself into the corresponding version's read list at the appropriate place; thus \ksftm incurs versioning overhead. \figref{ap-readabort} and \figref{ap-writeabort} illustrate the $read$ and $tryC$ abort counts for \ksftm and \sftm. \ksftm has a higher $read$ abort count than \sftm, while \sftm has a higher $tryC$ abort count than \ksftm. This shows that \ksftm aborts transactions early rather than allowing them to get aborted at the commit stage. These early aborts are one of the reasons why \ksftm outperforms \sftm. \subsection{Execution under varying number of transactions} \figref{ap-threadcount} illustrates the time taken per thread in milliseconds by the \ksftm and \sftm algorithms while varying the number of transactions from 1000 to 5000. Specifically, we have considered 64 threads with read percentage 50, the optimal value of $K$, i.e., 5, and of $C$, i.e., 0.1. Our experiment shows that \ksftm provides more than a 2-fold speedup over \sftm with 5000 transactions.
\begin{figure}[H] \centering \subfloat[ \ksftm vs \sftm \label{fig:ap-threadcount}] {\includegraphics[width=0.45\textwidth]{figs/speedup.png}} \caption[Speedup with varying number of transactions] {Speedup with varying number of transactions} \label{fig:ap-rweval} \end{figure} \section{Graph Characterization of Local Opacity \& \ksftm Correctness} \label{sec:ap-gphchar} To prove correctness of STM systems, it is useful to consider graph characterizations of histories. In this section, we describe the graph characterization developed by Kumar et al.\ \cite{Kumar+:MVTO:ICDCN:2014} for proving \opty, which is based on the characterization by Bernstein and Goodman \cite{BernGood:1983:MCC:TDS}. We extend this characterization for \lo. Consider a history $H$ which consists of multiple versions for each \tobj. The graph characterization uses the notion of \textit{version order}. Given $H$ and a \tobj{} $x$, we define a version order for $x$ as any (non-reflexive) total order on all the versions of $x$ ever created by committed transactions in $H$. It must be noted that the version order may or may not be the same as the actual order in which the versions of $x$ are generated in $H$. A version order of $H$, denoted as $\ll_H$, is the union of the version orders of all the \tobj{s} in $H$. Consider the history $H2: r_1(x, 0) r_2(x, 0) r_1(y, 0) r_3(z, 0) w_1(x, 5) w_3(y, 15) w_2(y, 10) w_1(z, 10) \\ c_1 c_2 r_4(x, 5) r_4(y, 10) w_3(z, 15) c_3 r_4(z, 10)$. Using the notation that a committed transaction $T_i$ writing to $x$ creates a version $x_i$, a possible version order $\ll_{H2}$ for $H2$ is: $\langle x_0 \ll x_1 \rangle, \langle y_0 \ll y_2 \ll y_3 \rangle, \langle z_0 \ll z_1 \ll z_3 \rangle $. We define the graph characterization based on a given version order. Consider a history $H$ and a version order $\ll$. We then define a graph (called opacity graph) on $H$ using $\ll$, denoted as $\opg{H}{\ll} = (V, E)$.
The vertex set $V$ consists of a vertex for each transaction $T_i$ in $\overline{H}$. The edges of the graph are of three kinds and are defined as follows: \begin{enumerate} \item \textit{\rt}(real-time) edges: If $T_i$ commits before $T_j$ starts in $H$, then there is an edge from $v_i$ to $v_j$. This set of edges is referred to as $\rtx(H)$. \item \textit{\rf}(reads-from) edges: If $T_j$ reads $x$ from $T_i$ in $H$, then there is an edge from $v_i$ to $v_j$. Note that in order for this to happen, $T_i$ must have committed before $T_j$ and $c_i <_H r_j(x)$. This set of edges is referred to as $\rf(H)$. \item \textit{\mv}(multiversion) edges: The \mv{} edges capture the multiversion relations and are based on the version order. Consider a successful read \op{} $r_k(x,v)$ and the write \op{} $w_j(x,v)$ belonging to transaction $T_j$ such that $r_k(x,v)$ reads $x$ from $w_j(x,v)$ (it must be noted that $T_j$ is a committed transaction and $c_j <_H r_k$). Consider a committed transaction $T_i$ which writes to $x$, $w_i(x, u)$ where $u \neq v$. Thus the versions created, $x_i, x_j$, are related by $\ll$. Then, if $x_i \ll x_j$ we add an edge from $v_i$ to $v_j$. Otherwise ($x_j \ll x_i$), we add an edge from $v_k$ to $v_i$. This set of edges is referred to as $\mv(H, \ll)$. \end{enumerate} Using this construction, the $\opg{H2}{\ll_{H2}}$ for history $H2$ and $\ll_{H2}$ is shown in \figref{opg}. The edges are annotated. The only \mv{} edge, from $T4$ to $T3$, is because of \tobj{s} $y, z$: $T4$ reads value 10 for $z$ from $T1$ whereas $T3$ also writes 15 to $z$ and commits before $r_4(z)$. \begin{figure}[tbph] \centerline{\scalebox{0.7}{\input{figs/ex2.pdf_t}}} \captionsetup{justification=centering} \caption{$\opg{H2}{\ll_{H2}}$} \label{fig:opg} \end{figure} Kumar et al.\ \cite{Kumar+:MVTO:ICDCN:2014} showed that if a version order $\ll$ exists for a history $H$ such that $\opg{H}{\ll_H}$ is acyclic, then $H$ is \opq. This is captured in the following result.
\begin{result} \label{res:opg} A \valid{} history $H$ is opaque iff there exists a version order $\ll_H$ such that $\opg{H}{\ll_H}$ is acyclic. \end{result} \noindent This result can be easily extended to prove \lo as follows. \begin{theorem} \label{thm:log} A \valid{} history $H$ is \lopq iff for each sub-history $sh$ in $\shset{H}$ there exists a version order $\ll_{sh}$ such that $\opg{sh}{\ll_{sh}}$ is acyclic. Formally, $\langle (H \text{ is \lopq}) \Leftrightarrow (\forall sh \in \shset{H}, \exists \ll_{sh}: \opg{sh}{\ll_{sh}} \text{ is acyclic}) \rangle$. \end{theorem} \begin{proof} To prove this theorem, we have to show that each sub-history $sh$ in $\shset{H}$ is \valid. Then the rest follows from \resref{opg}. Now consider a sub-history $sh$. Consider any read \op $r_i(x, v)$ of a transaction $T_i$. It is clear that $T_i$ must have read a version of $x$ created by a previously committed transaction. From the construction of $sh$, we get that all the transactions that committed before $r_i$ are also in $sh$. Hence $sh$ is also \valid. Now, proving $sh$ to be \opq iff there exists a version order $\ll_{sh}$ such that $\opg{sh}{\ll_{sh}}$ is acyclic follows from \resref{opg}. \end{proof} \ignore { Using this theorem, we can give the proof sketch of \ksftm algorithms.
Here for simplicity, we assume that the history generated is sequential. \begin{theorem} \label{thm:ap1-ksftm-lo} Any history generated by \ksftm{} is \lopq. \end{theorem} \begin{proof} For proving this, we consider a sequential history $H$ generated by \ksftm. We define the version order $\ll_{\vt}$: for two versions $v_i, v_j$ it is defined as $(v_i \ll_{\vt} v_j) \equiv (v_i.\vt < v_j.\vt)$ \noindent Using this version order $\ll_{\vt}$, we can show that all the sub-histories in $\shset{H}$ are acyclic. \end{proof} } \section{Proof of Liveness} \label{sec:ap-liveness} \paragraph{Proof Notations:} Let $\gen{\ksftm}$ consist of all the histories accepted by the \ksftm{} algorithm. In the following sub-sections, we only consider histories that are generated by \ksftm unless explicitly stated otherwise. For simplicity, we only consider sequential histories in our discussion below. Consider a transaction $T_i$ in a history $H$ generated by \ksftm. Once it executes the \begt \mth, its \its, \cts, \wts values do not change. Thus, we denote them as $\tits{i}, \tcts{i}, \twts{i}$ respectively for $T_i$. In case the context of the history $H$ in which the transaction is executing is important, we denote these variables as $\htits{i}{H}, \htcts{i}{H}, \htwts{i}{H}$ respectively. The other variables that a transaction maintains are: \ltl, \utl, \lock, \val, \stat. These values change as the execution proceeds. Hence, we denote them as: $\htltl{i}{H}, \htutl{i}{H}, \htlock{i}{H}, \htval{i}{H}, \htstat{i}{H}$. These represent the values of \ltl, \utl, \lock, \val, \stat after the execution of the last event in $H$. Depending on the context, we sometimes ignore $H$ and denote them only as: $\tlock{i}, \tval{i}, \tstat{i}, \ttltl{i}, \ttutl{i}$. We approximate the system time with the value of $\tcntr$. We denote the \syst of history $H$ as the value of $\tcntr$ immediately after the last event of $H$. Further, we also assume that the value of $C$ is 1 in our arguments.
But, it can be seen that the proof will work for any value greater than 1 as well. The application invokes transactions in such a way that if the current transaction $T_i$ aborts, it invokes a new transaction $T_j$ with the same \its. We say that $T_i$ is an \emph{\inc} of $T_j$ in a history $H$ if $\htits{i}{H} = \htits{j}{H}$. Thus the multiple \inc{s} of a transaction $T_i$ get invoked by the application until an \inc finally commits. To capture this notion of multiple transactions with the same \its, we define the \emph{\incset} (incarnation set) of $T_i$ in $H$ as the set of all the transactions in $H$ which have the same \its as $T_i$ and includes $T_i$ as well. Formally, \begin{equation*} \incs{i}{H} = \{T_j|(T_i = T_j) \lor (\htits{i}{H} = \htits{j}{H})\} \end{equation*} Note that from this definition of \incset, we implicitly get that $T_i$ and all the transactions in its \incset of $H$ also belong to $H$. Formally, $\incs{i}{H} \subseteq \txns{H}$. The application invokes different incarnations of a transaction $T_i$ in such a way that as long as an \inc is live, it does not invoke the next \inc. It invokes the next \inc after the current \inc has been aborted. Once an \inc of $T_i$ has committed, it cannot have any future \inc{s}. Thus, the application views all the \inc{s} of a transaction as a single \emph{\aptr}. We assign \emph{\incn{s}} to all the transactions that have the same \its. We say that a transaction $T_i$ starts \emph{afresh}, if $\inum{i}$ is 1. We say that $T_i$ is the \ninc of $T_j$ if $T_j$ and $T_i$ have the same \its and $T_i$'s \incn is $T_j$'s \incn + 1. Formally, $\langle (\nexti{i} = T_j) \equiv (\tits{i} = \tits{j}) \land (\inum{i} = \inum{j} + 1)\rangle$ As mentioned, the objective of the application is to ensure that every \aptr eventually commits. Thus, the application views the entire \incset as a single \aptr (with all the transactions in the \incset having the same \its).
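The incarnation discipline described above can be sketched as a toy model; the names (\texttt{begin}, \texttt{tcntr}) and the counter encoding are our own illustration and not the \ksftm implementation, with $C = 1$ as assumed in the text and $\wts = \cts + C \cdot (\cts - \its)$.

```python
# Hypothetical sketch: every retry of an application transaction keeps its
# original its, draws a fresh (strictly larger) cts from a global counter,
# and recomputes wts = cts + C * (cts - its).
C = 1
tcntr = 0                            # stand-in for the global counter

def begin(its=None):
    """Start afresh (its=None) or reincarnate an aborted transaction."""
    global tcntr
    tcntr += 1                       # atomic read-and-increment, simplified
    cts = tcntr
    if its is None:
        its = cts                    # a fresh transaction has its = cts
    wts = cts + C * (cts - its)
    return {'its': its, 'cts': cts, 'wts': wts}

t1 = begin()                         # incarnation number 1: starts afresh
t2 = begin(its=t1['its'])            # invoked only after t1 aborts
t3 = begin(its=t2['its'])            # same its, strictly larger cts
# Across incarnations the its is fixed while cts, and hence wts, keep growing,
# so a long-retrying application transaction eventually has the largest wts.
```

This growth of \wts across retries is exactly what the later lemmas on \cts and \wts exploit.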
We can say that an \aptr has committed if, in the corresponding \incset, a transaction eventually commits. For $T_i$ in a history $H$, we denote this by a boolean value \incct (incarnation set committed) which implies that either $T_i$ or an \inc of $T_i$ has committed. Formally, we define it as $\inct{i}{H}$ \begin{equation*} \inct{i}{H} = \begin{cases} True & (\exists T_j: (T_j \in \incs{i}{H}) \land (T_j \in \comm{H})) \\ False & \text{otherwise} \end{cases} \end{equation*} \noindent From the definition of \incct, we get the following observations \& lemmas about a transaction $T_i$ \begin{observation} \label{obs:inct-term} Consider a transaction $T_i$ in a history $H$ with its \incct being true in $H$. Then $T_i$ is terminated (either committed or aborted) in $H$. Formally, $\langle H, T_i: (T_i \in \txns{H}) \land (\inct{i}{H}) \implies (T_i \in \termed{H}) \rangle$. \end{observation} \begin{observation} \label{obs:inct-fut} Consider a transaction $T_i$ in a history $H1$ with its \incct being true in $H1$. Let $H2$ be an extension of $H1$ with a transaction $T_j$ in it. Suppose $T_j$ is an \inc of $T_i$. Then $T_j$'s \incct is true in $H2$. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (\inct{i}{H1}) \land (T_j \in \txns{H2}) \land (T_i \in \incs{j}{H2})\implies (\inct{j}{H2}) \rangle$. \end{observation} \begin{lemma} \label{lem:inct-diff} Consider a history $H1$ with a strict extension $H2$. Let $T_i$ \& $T_j$ be two transactions in $H1$ \& $H2$ respectively. Let $T_j$ not be in $H1$. Suppose $T_i$'s \incct is true. Then the \its of $T_i$ cannot be the same as the \its of $T_j$. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubset H2) \land (\inct{i}{H1}) \land (T_j \in \txns{H2}) \land (T_j \notin \txns{H1}) \implies (\htits{i}{H1} \neq \htits{j}{H2}) \rangle$. \end{lemma} \begin{proof} Here, we have that $T_i$'s \incct is true in $H1$. Suppose $T_j$ is an \inc of $T_i$, i.e., their \its{s} are the same. We are given that $T_j$ is not in $H1$.
This implies that $T_j$ must have started after the last event of $H1$. We are also given that $T_i$'s \incct is true in $H1$. This implies that an \inc of $T_i$ or $T_i$ itself has committed in $H1$. After this commit, the application will not invoke another transaction with the same \its as $T_i$. Thus, there cannot be a transaction after the last event of $H1$ and in any extension of $H1$ with the same \its as $T_i$. Hence, $\htits{i}{H1}$ cannot be the same as $\htits{j}{H2}$. \end{proof} Now we show the liveness with the following observations, lemmas \& theorems. We start with two observations about histories of which one is an extension of the other. The following states that for any history, there exists an extension. In other words, we assume that the STM system runs forever and does not terminate. This is required for showing that every transaction eventually commits. \begin{observation} \label{obs:hist-future} Consider a history $H1$ in \gen{\ksftm}. Then there is a history $H2$ in \gen{\ksftm} such that $H2$ is a strict extension of $H1$. Formally, $\langle \forall H1: (H1 \in \gen{\ksftm}) \implies (\exists H2: (H2 \in \gen{\ksftm}) \land (H1 \sqsubset H2)) \rangle$. \end{observation} \noindent The following observation is about the transactions in a history and any of its extensions. \begin{observation} \label{obs:hist-subset} Given two histories $H1$ \& $H2$ such that $H2$ is an extension of $H1$. Then, the set of transactions in $H1$ is a subset of the set of transactions in $H2$. Formally, $\langle \forall H1, H2: (H1 \sqsubseteq H2) \implies (\txns{H1} \subseteq \txns{H2}) \rangle$. \end{observation} In order for a transaction $T_i$ to commit in a history $H$, it has to compete with all the live transactions and all the aborted transactions that can become live again as a different \inc. Once a transaction $T_j$ aborts, another \inc of $T_j$ can start and become live again. Thus $T_i$ will have to compete with this \inc of $T_j$ later.
Thus, we have the following observation about aborted \& committed transactions. \begin{observation} \label{obs:abort-retry} Consider an aborted transaction $T_i$ in a history $H1$. Then there is an extension of $H1$, $H2$, in which an \inc of $T_i$, $T_j$, is live and has $\tcts{j}$ greater than $\tcts{i}$. Formally, $\langle H1, T_i: (T_i \in \aborted{H1}) \implies(\exists T_j, H2: (H1 \sqsubseteq H2) \land (T_j \in \live{H2}) \land (\htits{i}{H2} = \htits{j}{H2}) \land (\htcts{i}{H2} < \htcts{j}{H2})) \rangle$. \end{observation} \begin{observation} \label{obs:cmt-noinc} Consider a committed transaction $T_i$ in a history $H1$. Then there is no extension of $H1$ in which an \inc of $T_i$, $T_j$, is live. Formally, $\langle H1, T_i: (T_i \in \comm{H1}) \implies(\nexists T_j, H2: (H1 \sqsubseteq H2) \land (T_j \in \live{H2}) \land (\htits{i}{H2} = \htits{j}{H2})) \rangle$. \end{observation} \begin{lemma} \label{lem:cts-wts} Consider a history $H1$ and its extension $H2$. Let $T_i, T_j$ be in $H1, H2$ respectively such that they are \inc{s} of each other. If the \wts of $T_i$ is less than the \wts of $T_j$ then the \cts of $T_i$ is less than the \cts of $T_j$. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubset H2) \land (T_i \in \txns{H1}) \land (T_j \in \txns{H2}) \land (T_i \in \incs{j}{H2}) \land (\htwts{i}{H1} < \htwts{j}{H2})\implies (\htcts{i}{H1} < \htcts{j}{H2}) \rangle$ \end{lemma} \begin{proof} Here we are given that \begin{equation} \label{eq:wts-ij} \htwts{i}{H1} < \htwts{j}{H2} \end{equation} The definition of the \wts of $T_i$ is: $\htwts{i}{H1} = \htcts{i}{H1} + C * (\htcts{i}{H1} - \htits{i}{H1})$. Combining this with \eqnref{wts-ij}, we get that $(C + 1) * \htcts{i}{H1} - C * \htits{i}{H1} < (C + 1) * \htcts{j}{H2} - C * \htits{j}{H2} \xrightarrow[\htits{i}{H1} = \htits{j}{H2}]{T_i \in \incs{j}{H2}} \htcts{i}{H1} < \htcts{j}{H2}$.
\end{proof} \begin{lemma} \label{lem:wts-great} Consider a live transaction $T_i$ in a history $H1$ with its $\twts{i}$ less than a constant $\alpha$. Then there is a strict extension of $H1$, $H2$, in which an \inc of $T_i$, $T_j$, is live with \wts greater than $\alpha$. Formally, $\langle H1, T_i: (T_i \in \live{H1}) \land (\htwts{i}{H1} < \alpha) \implies(\exists T_j, H2: (H1 \sqsubseteq H2) \land (T_i \in \incs{j}{H2}) \land ((T_j \in \comm{H2}) \lor ((T_j \in \live{H2}) \land (\htwts{j}{H2} > \alpha)))) \rangle$. \end{lemma} \begin{proof} The proof follows from the behavior of an \aptr. The application keeps invoking a transaction with the same \its until it commits. Thus the transaction $T_i$ which is live in $H1$ will eventually terminate with an abort or commit. If it commits, $H2$ could be any history after the commit of $T_i$. On the other hand, if $T_i$ is aborted, as seen in \obsref{abort-retry}, it will be invoked again or reincarnated with another \cts and \wts. It can be seen that the \cts is always increasing. As a result, the \wts is also increasing. Thus eventually the \wts will become greater than $\alpha$. Hence, we have that either an \inc of $T_i$ will get committed or will eventually have \wts greater than $\alpha$. \end{proof} \noindent Next we have a lemma about the \cts of a transaction and the \syst of a history. \begin{lemma} \label{lem:cts-syst} Consider a transaction $T_i$ in a history $H$. Then, we have that the \cts of $T_i$ will be less than or equal to the \syst of $H$. Formally, $\langle T_i, H: (T_i \in \txns{H}) \implies (\htcts{i}{H} \leq \hsyst{H}) \rangle$. \end{lemma} \begin{proof} We get this lemma by observing that the \mth{s} of the STM system that increment $\tcntr$ are \begt and \tryc. It can be seen that the \cts of $T_i$ gets assigned in the \begt \mth. So if the last \mth of $H$ is the \begt of $T_i$ then we get that the \cts of $T_i$ is the same as the \syst of $H$.
On the other hand, if some other \mth got executed in $H$ after the \begt of $T_i$ then we have that the \cts of $T_i$ is less than the \syst of $H$. Thus, combining both the cases, we get that the \cts of $T_i$ is less than or equal to the \syst of $H$, i.e., $(\htcts{i}{H} \leq \hsyst{H})$ \end{proof} \noindent From this lemma, we get the following corollary, which is the converse of the lemma statement. \begin{corollary} \label{cor:cts-syst} Consider a transaction $T_i$ which is not in a history $H1$ but is in a strict extension of $H1$, $H2$. Then, we have that the \cts of $T_i$ is greater than the \syst of $H1$. Formally, $\langle T_i, H1, H2: (H1 \sqsubset H2) \land (T_i \notin \txns{H1}) \land (T_i \in \txns{H2}) \implies (\htcts{i}{H2} > \hsyst{H1}) \rangle$. \end{corollary} \noindent Now, we have a lemma about the \mth{s} of \ksftm completing in finite time. \begin{lemma} \label{lem:mth-fdm} If all the locks are fair and the underlying system scheduler is fair then all the \mth{s} of \ksftm will eventually complete. \end{lemma} \begin{proof} It can be seen that in any \mth, whenever a transaction $T_i$ obtains multiple locks, it obtains locks in the same order: first lock the relevant \tobj{s} in a pre-defined order and then lock the relevant \glock{s}, again in a pre-defined order. Since all the locks are obtained in the same order, it can be seen that the \mth{s} of \ksftm will not deadlock. It can also be seen that none of the \mth{s} have any unbounded while loops. All the loops in the \tryc \mth iterate through all the \tobj{s} in the write-set of $T_i$. Moreover, since we assume that the underlying scheduler is fair, we can see that no thread gets swapped out infinitely. Finally, since we assume that all the locks are fair, it can be seen that all the \mth{s} terminate in finite time. \end{proof} \begin{theorem} \label{thm:trans-com|abt} Every transaction either commits or aborts in finite time. \end{theorem} \begin{proof} This theorem follows directly from \lemref{mth-fdm}.
Since every \mth of \ksftm will eventually complete, all the transactions will either commit or abort in finite time. \end{proof} \noindent From this theorem, we get the following corollary which states that the maximum \emph{lifetime} of any transaction is $L$. \begin{corollary} \label{cor:cts-L} Any transaction $T_i$ in a history $H$ will either commit or abort before the \syst of $H$ crosses $\tcts{i} + L$. \end{corollary} \noindent The following lemma connects the \wts and \its of two transactions, $T_i, T_j$. \begin{lemma} \label{lem:wts-its} Consider a history $H$ with two transactions $T_i, T_j$. Let $T_i$ be in $\live{H}$. Suppose $T_j$'s \wts is greater than or equal to $T_i$'s \wts. Then the \its of $T_j$ is at most $\tits{i} + 2L$. Formally, $\langle H, T_i, T_j : (\{ T_i, T_j\} \subseteq \txns{H}) \land ( T_i \in \live{H}) \land (\htwts{j}{H} \geq \htwts{i}{H}) \Longrightarrow (\htits{i}{H} + 2L \geq \htits{j}{H}) \rangle$. \end{lemma} \begin{proof} Since $T_i$ is live in $H$, from \corref{cts-L}, we get that it terminates before the system time $\tcntr$ becomes $\tcts{i} + L$. Thus, the \syst of history $H$ did not progress beyond $\tcts{i} + L$. Hence, any other transaction $T_j$ (which is either live or terminated) in $H$ must have started before the \syst crossed $\tcts{i} + L$. Formally, $\langle \tcts{j} \leq \tcts{i} + L \rangle$. Note that we have defined the \wts of a transaction $T_j$ as: $\twts{j} = (\tcts{j} + C * (\tcts{j} - \tits{j}))$. Now, let us consider the difference of the \wts{s} of both the transactions.
\noindent \begin{math} \twts{j} - \twts{i} = (\tcts{j} + C * (\tcts{j} - \tits{j})) - (\tcts{i} + C * (\tcts{i} - \tits{i})) \\ = (C + 1)(\tcts{j} - \tcts{i}) - C(\tits{j} - \tits{i}) \\ \leq (C + 1)L - C(\tits{j} - \tits{i}) \qquad [\because \tcts{j} \leq \tcts{i} + L] \\ = 2*L + \tits{i} - \tits{j} \qquad [\because C = 1] \\ \end{math} \noindent Thus, we have that: $ \langle (\tits{i} + 2L - \tits{j}) \geq (\twts{j} - \twts{i}) \rangle$. This gives us that \\ $((\twts{j} - \twts{i}) \geq 0) \Longrightarrow ((\tits{i} + 2L - \tits{j}) \geq 0)$. \noindent From the above implication we get that, $(\twts{j} \geq \twts{i}) \Longrightarrow (\tits{i} + 2L \geq \tits{j})$. \end{proof} It can be seen that the \ksftm algorithm gives preference to transactions with lower \its to commit. To understand this notion of preference, we define a few notions of enablement of a transaction $T_i$ in a history $H$. We start with the definition of \emph{\itsen} as: \begin{definition} \label{defn:itsen} We say $T_i$ is \emph{\itsen} in $H$ if all transactions $T_j$ in $H$ with \its lower than the \its of $T_i$ have their \incct true. Formally, \begin{equation*} \itsenb{i}{H} = \begin{cases} True & (T_i \in \live{H}) \land (\forall T_j \in \txns{H} : (\htits{j}{H} < \htits{i}{H}) \implies (\inct{j}{H})) \\ False & \text{otherwise} \end{cases} \end{equation*} \end{definition} \noindent The following lemma states that once a transaction $T_i$ becomes \itsen, it continues to remain so until it terminates. \begin{lemma} \label{lem:itsen-future} Consider two histories $H1$ and $H2$ with $H2$ being an extension of $H1$. Let a transaction $T_i$ be live in both of them. Suppose $T_i$ is \itsen in $H1$. Then $T_i$ is \itsen in $H2$ as well. Formally, $\langle H1, H2, T_i: (H1 \sqsubseteq H2) \land (T_i \in \live{H1}) \land (T_i \in \live{H2}) \land (\itsenb{i}{H1}) \implies (\itsenb{i}{H2}) \rangle$.
\end{lemma} \begin{proof} When $T_i$ begins in a history $H3$, let the set of transactions with \its less than $\tits{i}$ be $smIts$. Then in any extension of $H3$, $H4$, the set of transactions with \its less than $\tits{i}$ remains $smIts$. Suppose $H1, H2$ are extensions of $H3$. Thus in $H1, H2$ the set of transactions with \its less than $\tits{i}$ will be $smIts$. Hence, if $T_i$ is \itsen in $H1$ then all the transactions $T_j$ in $smIts$ have $\inct{j}{H1}$ true. It can be seen that this continues to remain true in $H2$. Hence in $H2$, $T_i$ is also \itsen, which proves the lemma. \end{proof} The following lemma deals with a committed transaction $T_i$ and any transaction $T_j$ that terminates later. In the following lemma, $\incv$ is any constant greater than or equal to 1. \begin{lemma} \label{lem:tryci-j} Consider a history $H$ with two transactions $T_i, T_j$ in it. Suppose transaction $T_i$ commits before $T_j$ terminates (either by commit or abort) in $H$. Then $\ct_i$ is less than $\ct_j$ by at least $\incv$. Formally, $\langle H, \{T_i, T_j\} \subseteq \txns{H}: (\tryc_i <_H \term_j) \implies (\ct_i + \incv \leq \ct_j)\rangle$. \end{lemma} \begin{proof} When $T_i$ commits, let the value of the global $\tcntr$ be $\alpha$. It can be seen that in the \begt \mth, $\ct_j$ gets initialized to $\infty$. The only place where $\ct_j$ gets modified is at \Lineref{tryc-cmt-mod} of \tryc. Thus if $T_j$ gets aborted before executing the \tryc \mth or before this line of \tryc, we have that $\ct_j$ remains at $\infty$. Hence in this case we have that $\langle \ct_i + \incv < \ct_j \rangle$. If $T_j$ terminates after executing \Lineref{tryc-cmt-mod} of the \tryc \mth then $\ct_j$ is assigned a value, say $\beta$. It can be seen that $\beta$ will be greater than $\alpha$ by at least $\incv$ due to the execution of this line. Thus, we have that $\langle \alpha + \incv \leq \beta \rangle$ \end{proof} \noindent The following lemma connects the \tltl and \ct of a transaction $T_i$.
\begin{lemma} \label{lem:ti|tltl-comt} Consider a history $H$ with a transaction $T_i$ in it. Then in $H$, $\ttltl{i}$ will be less than or equal to $\ct_i$. Formally, $\langle H, T_i \in \txns{H}: (\htltl{i}{H} \leq H.\ct_i) \rangle$. \end{lemma} \begin{proof} Consider the transaction $T_i$. In the \begt \mth, $\ct_i$ gets initialized to $\infty$. The only place where $\ct_i$ gets modified is at \Lineref{tryc-cmt-mod} of \tryc. Thus if $T_i$ gets aborted before this line or if $T_i$ is live, we have that $(\ttltl{i} \leq \ct_i)$. On executing \Lineref{tryc-cmt-mod}, $\ct_i$ gets assigned some finite value and it does not change after that. It can be seen that $\ttltl{i}$ gets initialized to $\tcts{i}$ in \Lineref{ti-ts-init} of the \begt \mth. In that line, $\tcts{i}$ reads $\tcntr$ and increments it atomically. Then in \Lineref{tryc-cmt-mod}, $\ct_i$ gets assigned the value of $\tcntr$ after incrementing it. Thus, we clearly get that $\tcts{i} (= \ttltl{i}\text{ initially}) < \ct_i$. Then $\ttltl{i}$ gets updated on \Lineref{rd-tltl-inc} of read, \Lineref{tryc-tltl-inc} and \Lineref{ti-updt} of the \tryc \mth{s}. Let us analyze them case by case, assuming that $\ttltl{i}$ was last updated in each of these \mth{s} before the termination of $T_i$: \begin{enumerate} \item \label{case:read} \Lineref{rd-tltl-inc} of the read \mth: Suppose this is the last line where $\ttltl{i}$ was updated. Here $\ttltl{i}$ gets assigned to 1 + the \vt of the previously committed version, which say was created by a transaction $T_j$. Thus, we have the following equation, \begin{equation} \label{eq:tltl-vt} \ttltl{i} = 1 + x[j].\vt \end{equation} It can be seen that $x[j].\vt$ is the same as $\ttltl{j}$ when $T_j$ executed \Lineref{new-tup} of \tryc. Further, $\ttltl{j}$ in turn is the same as $\ttutl{j}$ due to \Lineref{ti-updt} of \tryc. From \Lineref{tryc-ul-cmt}, it can be seen that $\ttutl{j}$ is less than or equal to $\ct_j$ when $T_j$ committed.
Thus we have that \begin{equation} \label{eq:tltl-ct} x[j].\vt = \ttltl{j} = \ttutl{j} \leq \ct_j \end{equation} It is clear from the above discussion that $T_j$ executed the \tryc \mth before $T_i$ terminated (i.e. $\tryc_j <_{H1} \term_i$). From \eqnref{tltl-vt} and \eqnref{tltl-ct}, we get \\ \begin{math} \ttltl{i} \leq 1 + \ct_j \xrightarrow[]{\incv \geq 1} \ttltl{i} \leq \incv + \ct_j \xrightarrow[]{\lemref{tryci-j}} \ttltl{i} \leq \ct_i \end{math} \item \label{case:tryc-short} \Lineref{tryc-tltl-inc} of the \tryc \mth: The reasoning in this case is very similar to the above case. \item \label{case:tryc-long} \Lineref{ti-updt} of the \tryc \mth: In this line, $\ttltl{i}$ is made equal to $\ttutl{i}$. Further, in \Lineref{tryc-ul-cmt}, $\ttutl{i}$ is made less than or equal to $\ct_{i}$. Thus, combining these, we get that $\ttltl{i} \leq \ct_{i}$. It can be seen that the reasoning here is similar in part to \csref{read}. \end{enumerate} Hence, in all the three cases we get that $\langle \ttltl{i} \leq \ct_i \rangle$. \end{proof} \noindent The following lemma connects the \tutl, \ct of a transaction $T_i$ with the \wts of a transaction $T_j$ that has already committed. \begin{lemma} \label{lem:ti|tutl-comt} Consider a history $H$ with a transaction $T_i$ in it. Suppose $\ttutl{i}$ is less than $\ct_i$. Then, there is a committed transaction $T_j$ in $H$ such that $\twts{j}$ is greater than $\twts{i}$. Formally, $\langle H \in \gen{\ksftm}, T_i \in \txns{H}: (\htutl{i}{H} < H.\ct_i) \implies (\exists T_j \in \comm{H}: \htwts{j}{H} > \htwts{i}{H}) \rangle$. \end{lemma} \begin{proof} It can be seen that $\tutl_i$ is initialized in the \begt \mth to $\infty$. $\ttutl{i}$ is updated in \Lineref{rd-ul-dec} of the read \mth, \Lineref{tryc-ul-dec} \& \Lineref{tryc-ul-cmt} of the \tryc \mth. If $T_i$ executes \Lineref{rd-ul-dec} of the read \mth and/or \Lineref{tryc-ul-dec} of the \tryc \mth then $\ttutl{i}$ gets decremented to some value less than $\infty$, say $\alpha$.
Further, it can be seen that in both these lines the value of $\ttutl{i}$ is possibly decremented from $\infty$ because of $nextVer$ (or $ver$), a version of $x$ whose \ts is greater than $T_i$'s \wts. This implies that some transaction $T_j$, which is committed in $H$, must have created $nextVer$ (or $ver$) and $\twts{j} > \twts{i}$. Next, let us analyze the value of $\alpha$. It can be seen that $\alpha = x[nextVer/ver].vrt - 1$ where $nextVer/ver$ was created by $T_j$. Further, we can see that when $T_j$ executed \tryc, we have that $x[nextVer].vrt = \ttltl{j}$ (from \Lineref{new-tup}). From \lemref{ti|tltl-comt}, we get that $\ttltl{j} \leq \ct_j$. This implies that $\alpha < \ct_j$. Now, we have that $T_j$ has already committed before the termination of $T_i$. Thus from \lemref{tryci-j}, we get that $\ct_j < \ct_i$. Hence, we have that, \begin{equation} \label{eq:alph-ct} \alpha < \ct_i \end{equation} Now let us consider \Lineref{tryc-ul-cmt} executed by $T_i$ which causes $\ttutl{i}$ to change. This line will get executed only after both \Lineref{rd-ul-dec} of the read \mth and \Lineref{tryc-ul-dec} of the \tryc \mth. This is because every transaction executes the \tryc \mth only after the read \mth. Further, within the \tryc \mth, \Lineref{tryc-ul-cmt} follows \Lineref{tryc-ul-dec}. There are two sub-cases depending on the value of $\ttutl{i}$ before the execution of \Lineref{tryc-ul-cmt}: (i) If $\ttutl{i}$ was $\infty$ and then gets decremented to $\ct_i$ upon executing this line, then we get $\ct_i = \ttutl{i}$, which contradicts the assumption that $\ttutl{i} < \ct_i$. Hence, we can ignore this case. (ii) Suppose the value of $\ttutl{i}$ before executing \Lineref{tryc-ul-cmt} was $\alpha$. Then from \eqnref{alph-ct} we get that $\ttutl{i}$ remains at $\alpha$ on execution of \Lineref{tryc-ul-cmt}. This implies that a transaction $T_j$ committed such that $\twts{j} > \twts{i}$. \end{proof} \noindent The following lemma connects the \tltl of a committed transaction $T_j$ and the \ct of a transaction $T_i$ that commits later.
\begin{lemma} \label{lem:tltlj-comti} Consider a history $H1$ with transactions $T_i, T_j$ in it. Suppose $T_j$ is committed and $T_i$ is live in $H1$. Then in any extension of $H1$, say $H2$, $\ttltl{j}$ is less than $\ct_i$. Formally, $\langle H1, H2 \in \gen{\ksftm}, \{T_i, T_j\} \subseteq \txns{H1}: (H1 \sqsubseteq H2) \land (T_j \in \comm{H1}) \land (T_i \in \live{H1}) \implies (\htltl{j}{H2} < H2.\ct_i) \rangle$. \end{lemma} \begin{proof} As observed in the proof of \lemref{ti|tltl-comt}, if $T_i$ is live or aborted in $H2$, then its \ct is $\infty$. In both these cases, the result follows. If $T_i$ is committed in $H2$ then one can see that the \ct of $T_i$ is not $\infty$. In this case, it can be seen that $T_j$ committed before $T_i$. Hence, we have that $\ct_j < \ct_i$. From \lemref{ti|tltl-comt}, we get that $\ttltl{j} \leq \ct_j$. This implies that $\ttltl{j} < \ct_i$. \end{proof} \noindent In the following sequence of lemmas, we identify the conditions under which a transaction will commit. \begin{lemma} \label{lem:its-wts} Consider two histories $H1, H3$ such that $H3$ is a strict extension of $H1$. Let $T_i$ be a transaction in $\live{H1}$ such that $T_i$ is \itsen in $H1$ and its $\gval_i$ flag is true in $H1$. Suppose $T_i$ is aborted in $H3$. Then there is a history $H2$ which is an extension of $H1$ (and could be the same as $H1$) such that (1) transaction $T_i$ is live in $H2$; (2) there is a transaction $T_j$ in ${H2}$; (3) $\htwts{j}{H2}$ is greater than $\htwts{i}{H2}$; (4) $T_j$ is committed in $H3$. Formally, $ \langle H1, H3, T_i: (H1 \sqsubset H3) \land (T_i \in \live{H1}) \land (\htval{i}{H1} = True) \land (\itsenb{i}{H1}) \land (T_i \in \aborted{H3}) \implies (\exists H2, T_j: (H1 \sqsubseteq H2 \sqsubset H3) \land (T_i \in \live{H2}) \land (T_j \in \txns{H2}) \land (\htwts{i}{H2} < \htwts{j}{H2}) \land (T_j \in \comm{H3})) \rangle$.
\end{lemma} \begin{proof} To show this lemma, w.l.o.g.\ we assume that $T_i$, on executing either read or \tryc in $H2$ (which could be the same as $H1$), gets aborted, resulting in $H3$. Thus, we have that $T_i$ is live in $H2$. Here $T_i$ is \itsen in $H1$. From \lemref{itsen-future}, we get that $T_i$ is \itsen in $H2$ as well. Let us sequentially consider all the lines where $T_i$ could abort. In $H2$, $T_i$ executes one of the following lines and is aborted in $H3$. We start with the \tryc method. \begin{enumerate} \item STM \tryc: \begin{enumerate} \item \Lineref{init-tc-chk} \label{case:init-tc-chk}: This line invokes the abort() method on $T_i$ which releases all the locks and returns $\mathcal{A}$ to the invoking thread. Here $T_i$ is aborted because its \val flag is set to false by some other transaction, say $T_j$, in its \tryc algorithm. This can occur in Lines: \ref{lin:addAbl-lar}, \ref{lin:addAbl-sml} where $T_i$ is added to $T_j$'s \abl set. Later in \Lineref{gval-set}, $T_i$'s \val flag is set to false. Note that $T_i$'s \val is true (after the execution of the last event) in $H1$. Thus, $T_i$'s \val flag must have been set to false in an extension of $H1$, which we again denote as $H2$. This can happen only if, in both the above cases, $T_j$ is live in $H2$ and its \its is less than $T_i$'s \its. But we have that $T_i$ is \itsen in $H2$. As a result, it has the smallest \its among all live and aborted transactions of $H2$. Hence, there cannot exist such a $T_j$ which is live with $\htits{j}{H2} < \htits{i}{H2}$. Thus, this case is not possible. \item \Lineref{prev-nil}: This line is executed in $H2$ if there exists no version of $x$ whose \ts is less than $T_i$'s \wts. This implies that all the versions of $x$ have \ts{s} greater than $\twts{i}$. Thus the transactions that created these versions have \wts greater than $\twts{i}$ and have already committed in $H2$. Let $T_j$ create one such version.
Hence, we have that $\langle (T_j \in \comm{H2}) \implies (T_j \in \comm{H3}) \rangle$ since $H3$ is an extension of $H2$. \item \Lineref{mid-tc-chk} \label{case:mid-tc-chk}: This case is similar to \csref{init-tc-chk}, i.e., \Lineref{init-tc-chk}. \item \Lineref{its-chk1} \label{case:its-chk1}: In this line, $T_i$ is aborted as some other transaction $T_j$ in $T_i$'s \lrl has committed. Any transaction in $T_i$'s \lrl has \wts greater than $T_i$'s \wts. This implies that $T_j$ is already committed in $H2$ and hence committed in $H3$ as well. \item \Lineref{tc-lts-cross} \label{case:tc-lts-cross}: In this line, $T_i$ is aborted because its lower limit has crossed its upper limit. First, let us consider $\ttutl{i}$. It is initialized in \begt \mth to $\infty$. As long as it is $\infty$, these limits cannot cross each other. Later, $\ttutl{i}$ is updated in \Lineref{rd-ul-dec} of read \mth, \Lineref{tryc-ul-dec} \& \Lineref{tryc-ul-cmt} of \tryc \mth. Suppose $\ttutl{i}$ gets decremented to some value $\alpha$ by one of these lines. Now there are two cases here: (1) Suppose $\ttutl{i}$ gets decremented to $\ct_i$ due to \Lineref{tryc-ul-cmt} of \tryc \mth. Then from \lemref{ti|tltl-comt}, we have $\ttltl{i} \leq \ct_i = \ttutl{i}$. Thus in this case, $T_i$ will not abort. (2) $\ttutl{i}$ gets decremented to $\alpha$ which is less than $\ct_i$. Then from \lemref{ti|tutl-comt}, we get that there is a committed transaction $T_j$ in $\comm{H2}$ such that $\twts{j} > \twts{i}$. This implies that $T_j$ is in $\comm{H3}$. \ignore{ It can be seen that if $T_i$ executes \Lineref{rd-ul-dec} of read \mth and/or \Lineref{tryc-ul-dec} of \tryc \mth then $\ttutl{i}$ gets decremented to some value less than $\infty$, say $\alpha$. Further, it can be seen that in both these lines the value of $\ttutl{i}$ is possibly decremented from $\infty$ because of $nextVer$ (or $ver$), a version of $x$ who \ts is greater than $T_i$. 
This implies that some transaction $T_j$ which is committed in $H$ must have created $nextVer$ ($ver$) and $\twts{j} > \twts{i}$. Next, let us analyze the value of $\alpha$. It can be seen that $\alpha = x[nextVer/ver].vrt - 1$ where $nextVer/ver$ was created by $T_j$. Further, we can see when $T_j$ executed \tryc, we have that $x[nextVer].vrt = \ttltl{j}$ (from \Lineref{new-tup}). From \lemref{ti|tltl-comt}, we get that $\ttltl{j} \leq \ct_j$. This implies that $\alpha < \ct_j$. Now, we can see that $T_j$ has already committed before the termination of $T_i$. Thus from \lemref{tryci-j}, we get that $\ct_j < \ct_i$. Hence, we have that $\alpha < \ct_i$. It is clear that before executing this line \Lineref{tc-lts-cross}, $T_i$ executed \Lineref{tryc-ul-cmt}. Now there are two sub-cases depending on the value of $\ttutl{i}$ before the execution of \Lineref{tryc-ul-cmt}: (i) If $\ttutl{i}$ was $\infty$ then it get decremented to $\ct_i$ upon executing this line. Then again from \lemref{ti|tltl-comt}, we have $\ttltl{i} \leq \ct_i = \ttutl{i}$. Thus in this case, $T_i$ will not abort. (ii) Suppose the value of $\ttutl{i}$ before executing \Lineref{tryc-ul-cmt} was $\alpha$. Then from the above discussion we get that $\ttutl{i}$ remains at $\alpha$. This implies that a transaction $T_j$ committed such that $\twts{j} > \twts{i}$. Thus if $\ttltl{i}$ turned out to be greater than $\ttutl{i}$ causing $T_i$ to abort, we still have that the lemma is true. } \item \Lineref{its|lar-sml}: This case is similar to \csref{init-tc-chk}, i.e., \Lineref{init-tc-chk}. \item \Lineref{its-chk2} \label{case:its-chk2}: In this case, $T_k$ is in $T_i$'s \srl and is committed in $H1$. And, from this case, we have that \begin{equation} \label{eq:tltl-k_i} \htutl{i}{H2} \leq \htltl{k}{H2} \end{equation} From the assumption of this case, we have that $T_k$ commits before $T_i$. Thus, from \lemref{tltlj-comti}, we get that $\ct_k < \ct_i$. 
From \lemref{ti|tltl-comt}, we have that $\ttltl{k} \leq \ct_k$. Thus, we get that $\ttltl{k} < \ct_i$. Combining this with the inequality of this case, \eqnref{tltl-k_i}, we get that $\ttutl{i} < \ct_i$. Combining this inequality with \lemref{ti|tutl-comt}, we get that there is a transaction $T_j$ in $\comm{H2}$ with $\htwts{j}{H2} > \htwts{i}{H2}$. This implies that $T_j$ is in $\comm{H3}$ as well. \end{enumerate} \item STM read: \begin{enumerate} \item \Lineref{rd-chk}: This case is similar to \csref{init-tc-chk}, i.e., \Lineref{init-tc-chk}. \item \Lineref{rd-lts-cross}: The reasoning here is similar to \csref{tc-lts-cross}, i.e., \Lineref{tc-lts-cross}. \end{enumerate} \end{enumerate} \end{proof} The interesting aspect of the above lemma is that it gives us an insight as to when a transaction $T_i$ will commit. If an \itsen transaction $T_i$ aborts, then it is because another transaction $T_j$ with \wts higher than that of $T_i$ has committed. To precisely capture this, we define two more notions of a transaction being enabled: \emph{\cdsen} and \emph{\finen}. To define these notions of enabled, we in turn define a few other auxiliary notions. We start with \emph{\affset}, \begin{equation*} \haffset{i}{H} = \{T_j|(T_j \in \txns{H}) \land (\htits{j}{H} < \htits{i}{H} + 2*L)\} \end{equation*} From the description of the \ksftm algorithm and \lemref{wts-its}, it can be seen that a transaction $T_i$'s commit can depend on the commit of transactions (or their \inc{s}) which have their \its less than the \its of $T_i$ + $2*L$, i.e., on $T_i$'s \affset. We capture this notion of dependency for a transaction $T_i$ in a history $H$ as the \emph{commit dependent set} or \emph{\cdset}: the set of all transactions $T_j$ in $T_i$'s \affset that do not have any \inc that is committed yet, i.e., that do not yet have their \incct flag set to true.
Formally, \begin{equation*} \hcds{i}{H} = \{T_j| (T_j \in \haffset{i}{H}) \land (\neg\inct{j}{H}) \} \end{equation*} \noindent Based on this definition of \cdset, we next define the notion of \cdsen. \begin{definition} \label{defn:cdsen} We say that transaction $T_i$ is \emph{\cdsen} if the following conditions hold true: (1) $T_i$ is live in $H$; (2) \cts of $T_i$ is greater than or equal to \its of $T_i$ + $2*L$; (3) \cdset of $T_i$ is empty, i.e., all transactions $T_j$ in $H$ with \its lower than the \its of $T_i$ + $2*L$ have their \incct set to true. Formally, \begin{equation*} \cdsenb{i}{H} = \begin{cases} True & (T_i \in \live{H}) \land (\htcts{i}{H} \geq \htits{i}{H} + 2*L) \land (\hcds{i}{H} = \phi) \\ False & \text{otherwise} \end{cases} \end{equation*} \end{definition} \noindent The meaning and usefulness of these definitions will become clear in the course of the proof. In fact, we later show that once the transaction $T_i$ is \cdsen, it will eventually commit. We will start with a few lemmas about these definitions. \begin{lemma} \label{lem:its-enb} Consider a transaction $T_i$ in a history $H$. If $T_i$ is \cdsen then $T_i$ is also \itsen. Formally, $\langle H, T_i: (T_i \in \txns{H}) \land (\cdsenb{i}{H}) \implies (\itsenb{i}{H}) \rangle$. \end{lemma} \begin{proof} If $T_i$ is \cdsen in $H$ then it implies that $T_i$ is live in $H$. From the definition of \cdsen, we get that $\hcds{i}{H}$ is $\phi$, implying that any transaction $T_j$ with $\tits{j}$ less than $\tits{i} + 2*L$ has its \incct flag set to true in $H$. Hence, for any transaction $T_k$ having $\tits{k}$ less than $\tits{i}$, $\inct{k}{H}$ is also true. This shows that $T_i$ is \itsen in $H$. \end{proof} \ignore{ \begin{lemma} \label{lem:cds-h1} Consider a transaction $T_i$ which is \cdsen in a history $H1$. Let $T_j$ be a transaction in \affset of $T_i$ in $H1$. Consider an extension of $H1$, $H2$ with a transaction $T_k$ in it such that $T_k$ is an \inc of $T_j$.
Then $T_k$ is also in the set of transactions of $H1$. Formally, $\langle H1, H2, T_i, T_j, T_k: (H1 \sqsubseteq H2) \land (\cdsenb{i}{H1}) \land (T_j \in \haffset{i}{H1}) \land (T_k \in \incs{j}{H2}) \implies (T_k \in \txns{H1}) \rangle$ \end{lemma} \begin{proof} Once $T_i$ becomes \cdsen, all the transactions in its \affset have an \inc that is committed. Hence, as per our model the corresponding \aptr has committed and the application does not invoke another transaction with the same \its. Thus from \obsref{cmt-noinc}, we get that no new \inc of $T_j$ will get invoked by the application in any future extension of $H1$. This implies that the \inc of $T_j$ in $H2$, $T_k$, must have already been invoked before $T_i$ became \enbd. Since $T_i$ is \enbd in $H1$, we get that $T_k$ must also be in the set of transactions of $H1$, i.e., $(T_k \in \txns{H1})$. \end{proof} } \begin{lemma} \label{lem:cds-tk-h1} Consider a transaction $T_i$ which is \cdsen in a history $H1$. Consider an extension of $H1$, $H2$ with a transaction $T_j$ in it such that $T_i$ is an \inc of $T_j$. Let $T_k$ be a transaction in the \affset of $T_j$ in $H2$. Then $T_k$ is also in the set of transactions of $H1$. Formally, $\langle H1, H2, T_i, T_j, T_k: (H1 \sqsubseteq H2) \land (\cdsenb{i}{H1}) \land (T_i \in \incs{j}{H2}) \land (T_k \in \haffset{j}{H2}) \implies (T_k \in \txns{H1}) \rangle$ \end{lemma} \begin{proof} Since $T_i$ is \cdsen in $H1$, we get (from the definition of \cdsen) that \begin{equation} \label{eq:ti-cts-its} \htcts{i}{H1} \geq \htits{i}{H1} + 2*L \end{equation} Here, we have that $T_k$ is in $\haffset{j}{H2}$. Thus from the definition of \affset, we get that \begin{equation} \label{eq:tk-tj-aff} \htits{k}{H2} < \htits{j}{H2} + 2*L \end{equation} Since $T_i$ and $T_j$ are \inc{s} of each other, their \its are the same.
Combining this with \eqnref{tk-tj-aff}, we get that \begin{equation} \label{eq:tk-ti-h12} \htits{k}{H2} < \htits{i}{H1} + 2*L \end{equation} We now prove this through contradiction. Suppose $T_k$ is not in $\txns{H1}$. Then there are two cases: \begin{itemize} \item No \inc of $T_k$ is in $H1$: This implies that $T_k$ starts afresh after $H1$. Since $T_k$ is not in $H1$, from \corref{cts-syst} we get that $\htcts{k}{H2} > \hsyst{H1} \xrightarrow [\htcts{k}{H2} = \htits{k}{H2}] {T_k \text{ starts afresh}}\htits{k}{H2} > \hsyst{H1} \xrightarrow [\hsyst{H1} \geq \htcts{i}{H1}]{(T_i \in H1) \land \lemref{cts-syst}} \htits{k}{H2} > \htcts{i}{H1} \xrightarrow {\eqnref{ti-cts-its}} \htits{k}{H2} > \htits{i}{H1} + 2*L \xrightarrow {\htits{i}{H1} = \htits{j}{H2}} \htits{k}{H2} > \htits{j}{H2} + 2*L$ But this result contradicts \eqnref{tk-tj-aff}. Hence, this case is not possible. \item There is an \inc of $T_k$, $T_l$, in $H1$: In this case, we have that \begin{equation} \label{eq:tl-h1} \htits{l}{H1} = \htits{k}{H2} \end{equation} Now combining this result with \eqnref{tk-ti-h12}, we get that $\htits{l}{H1} < \htits{i}{H1} + 2*L$. This implies that $T_l$ is in \affset of $T_i$ in $H1$. Since $T_i$ is \cdsen, we get that $T_l$'s \incct must be true. We also have that $T_k$ is not in $H1$ but is in $H2$, where $H2$ is an extension of $H1$. Since $H2$ has more events than $H1$, we get that $H2$ is a strict extension of $H1$. Thus, we have that $(H1 \sqsubset H2) \land (\inct{l}{H1}) \land (T_k \in \txns{H2}) \land (T_k \notin \txns{H1})$. Combining these with \lemref{inct-diff}, we get that $(\htits{l}{H1} \neq \htits{k}{H2})$. But this result contradicts \eqnref{tl-h1}. Hence, this case is also not possible. \end{itemize} Thus, from both the cases, we get that $T_k$ must be in $H1$. Hence proved. \end{proof} \begin{lemma} \label{lem:aff-tkinc-h1} Consider two histories $H1, H2$ where $H2$ is an extension of $H1$.
Let $T_i, T_j, T_k$ be three transactions such that $T_i$ is in $\txns{H1}$ while $T_j, T_k$ are in $\txns{H2}$. Suppose we have that (1) $\tcts{i}$ is greater than $\tits{i} + 2*L$ in $H1$; (2) $T_i$ is an \inc of $T_j$; (3) $T_k$ is in \affset of $T_j$ in $H2$. Then an \inc of $T_k$, say $T_l$ (which could be the same as $T_k$), is in $\txns{H1}$. Formally, $\langle H1, H2, T_i, T_j, T_k: (H1 \sqsubseteq H2) \land (T_i \in \txns{H1}) \land (\{T_j, T_k\} \in \txns{H2}) \land (\htcts{i}{H1} > \htits{i}{H1} + 2*L) \land (T_i \in \incs{j}{H2}) \land (T_k \in \haffset{j}{H2}) \implies (\exists T_l: (T_l \in \incs{k}{H2}) \land (T_l \in \txns{H1})) \rangle$ \end{lemma} \begin{proof} \noindent This proof is similar to the proof of \lemref{cds-tk-h1}. We are given that \begin{equation} \label{eq:given-ti-ctsits} \htcts{i}{H1} \geq \htits{i}{H1} + 2*L \end{equation} We now prove this through contradiction. Suppose no \inc of $T_k$ is in $\txns{H1}$. This implies that $T_k$ must have started afresh in some history $H3$ which is an extension of $H1$. Also note that $H3$ could be the same as $H2$ or a prefix of it, i.e., $H3 \sqsubseteq H2$. Thus, we have that \noindent \begin{math} \htits{k}{H3} > \hsyst{H1} \xrightarrow{\lemref{cts-syst}} \htits{k}{H3} > \htcts{i}{H1} \xrightarrow{\eqnref{given-ti-ctsits}} \htits{k}{H3} > \htits{i}{H1} + 2*L \xrightarrow{\htits{i}{H1} = \htits{j}{H2}} \htits{k}{H3} > \htits{j}{H2} + 2*L \xrightarrow[\obsref{hist-subset}]{H3 \sqsubseteq H2} \htits{k}{H2} > \htits{j}{H2} + 2*L \xrightarrow[definition]{\affset} T_k \notin \haffset{j}{H2} \end{math} But we are given that $T_k$ is in \affset of $T_j$ in $H2$. Hence, it is not possible that $T_k$ started afresh after $H1$. Thus, $T_k$ must have an \inc in $H1$. \end{proof} \begin{lemma} \label{lem:aff-same} Consider a transaction $T_i$ which is \cdsen in a history $H1$. Consider an extension of $H1$, $H2$ with a transaction $T_j$ in it such that $T_j$ is an \inc of $T_i$ in $H2$.
Then the \affset of $T_i$ in $H1$ is the same as the \affset of $T_j$ in $H2$. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (\cdsenb{i}{H1}) \land (T_j \in \txns{H2}) \land (T_i \in \incs{j}{H2}) \implies ((\haffset{i}{H1} = \haffset{j}{H2})) \rangle$ \end{lemma} \begin{proof} From the definition of \cdsen, we get that $T_i$ is in $\txns{H1}$. Now to prove that the \affset{s} are the same, we have to show that $(\haffset{i}{H1} \subseteq \haffset{j}{H2})$ and $(\haffset{j}{H2} \subseteq \haffset{i}{H1})$. We show them one by one: \paragraph{$(\haffset{i}{H1} \subseteq \haffset{j}{H2})$:} Consider a transaction $T_k$ in $\haffset{i}{H1}$. We have to show that $T_k$ is also in $\haffset{j}{H2}$. From the definition of \affset, we get that \begin{equation} \label{eq:tk-h1} T_k \in \txns{H1} \end{equation} \noindent Combining \eqnref{tk-h1} with \obsref{hist-subset}, we get that \begin{equation} \label{eq:tk-h2} T_k \in \txns{H2} \end{equation} \noindent From the definition of \its, we get that \begin{equation} \label{eq:its-h1-h2} \htits{k}{H1} = \htits{k}{H2} \end{equation} \noindent Since $T_i, T_j$ are \inc{s}, we have that \begin{equation} \label{eq:its-ij} \htits{i}{H1} = \htits{j}{H2} \end{equation} \noindent From the definition of \affset, we get that, \\ $\htits{k}{H1} < \htits{i}{H1} + 2*L \xrightarrow{\eqnref{its-h1-h2}} \htits{k}{H2} < \htits{i}{H1} + 2*L \xrightarrow{\eqnref{its-ij}} \htits{k}{H2} < \htits{j}{H2} + 2*L$ \noindent Combining this result with \eqnref{tk-h2}, we get that $T_k \in \haffset{j}{H2}$. \paragraph{$(\haffset{j}{H2} \subseteq \haffset{i}{H1})$:} Consider a transaction $T_k$ in $\haffset{j}{H2}$. We have to show that $T_k$ is also in $\haffset{i}{H1}$. From the definition of \affset, we get that $T_k \in \txns{H2}$. Here, we have that $(H1 \sqsubseteq H2) \land (\cdsenb{i}{H1}) \land (T_i \in \incs{j}{H2}) \land (T_k \in \haffset{j}{H2})$. Thus from \lemref{cds-tk-h1}, we get that $T_k \in \txns{H1}$.
Now, this case is similar to the above case. It can be seen that Equations \ref{eq:tk-h1}, \ref{eq:tk-h2}, \ref{eq:its-h1-h2}, \ref{eq:its-ij} hold in this case as well. Since $T_k$ is in $\haffset{j}{H2}$, we get that \\ $\htits{k}{H2} < \htits{j}{H2} + 2*L \xrightarrow{\eqnref{its-h1-h2}} \htits{k}{H1} < \htits{j}{H2} + 2*L \xrightarrow{\eqnref{its-ij}} \htits{k}{H1} < \htits{i}{H1} + 2*L $ \noindent Combining this result with \eqnref{tk-h1}, we get that $T_k \in \haffset{i}{H1}$. \end{proof} \noindent Next, we explore how a transaction, once it becomes \cdsen, remains \cdsen in future histories. \begin{lemma} \label{lem:cds-fut} Consider two histories $H1$ and $H2$ with $H2$ being an extension of $H1$. Let $T_i$ and $T_j$ be two transactions which are live in $H1$ and $H2$ respectively. Let $T_i$ be an \inc of $T_j$ and let $\tcts{i}$ be less than $\tcts{j}$. Suppose $T_i$ is \cdsen in $H1$. Then $T_j$ is \cdsen in $H2$ as well. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (T_i \in \live{H1}) \land (T_j \in \live{H2}) \land (T_i \in \incs{j}{H2}) \land (\htcts{i}{H1} < \htcts{j}{H2}) \land (\cdsenb{i}{H1}) \implies (\cdsenb{j}{H2}) \rangle$. \end{lemma} \begin{proof} We have that $T_i$ is live in $H1$ and $T_j$ is live in $H2$. Since $T_i$ is \cdsen in $H1$, we get (from the definition of \cdsen) that \begin{equation} \label{eq:cts-its} \htcts{i}{H1} \geq \htits{i}{H1} + 2*L \end{equation} We are given that $\tcts{i}$ is less than $\tcts{j}$ and $T_i, T_j$ are incarnations of each other. Hence, we have that \ignore{ \begin{align*} \htcts{j}{H2} & > \htcts{i}{H1} \\ \htcts{j}{H2} & > \htits{i}{H1} + 2*L & [\text{From \eqnref{cts-its}}] \\ \htcts{j}{H2} & > \htits{j}{H2} + 2*L & [\tits{i} = \tits{j}] \\ \end{align*} } \begin{align*} \htcts{j}{H2} & > \htcts{i}{H1} \\ & > \htits{i}{H1} + 2*L & [\text{From \eqnref{cts-its}}] \\ & > \htits{j}{H2} + 2*L & [\tits{i} = \tits{j}] \\ \end{align*} Thus we get that $\tcts{j} > \tits{j} + 2*L$.
We have that $T_j$ is live in $H2$. In order to show that $T_j$ is \cdsen in $H2$, it only remains to show that \cdset of $T_j$ in $H2$ is empty, i.e., $\hcds{j}{H2} = \phi$. The \cdset becomes empty when all the transactions of $T_j$'s \affset in $H2$ have their \incct as true in $H2$. Since $T_j$ is live in $H2$, we get that $T_j$ is in $\txns{H2}$. Here, we have that $(H1 \sqsubseteq H2) \land (T_j \in \txns{H2}) \land (T_i \in \incs{j}{H2}) \land (\cdsenb{i}{H1})$. Combining this with \lemref{aff-same}, we get that $\haffset{i}{H1} = \haffset{j}{H2}$. Now, consider a transaction $T_k$ in $ \haffset{j}{H2}$. From the above result, we get that $T_k$ is also in $\haffset{i}{H1}$. Since $T_i$ is \cdsen in $H1$, i.e., $\cdsenb{i}{H1}$ is true, we get that $\inct{k}{H1}$ is true. Combining this with \obsref{inct-fut}, we get that $T_k$ must have its \incct as true in $H2$ as well, i.e. $\inct{k}{H2}$. This implies that all the transactions in $T_j$'s \affset have their \incct flags as true in $H2$. Hence the $\hcds{j}{H2}$ is empty. As a result, $T_j$ is \cdsen in $H2$, i.e., $\cdsenb{j}{H2}$. \end{proof} \ignore{ \begin{proof} We have that $T_i$ is live in $H1$ and $T_j$ is live in $H2$. Since $T_i$ is \cdsen in $H1$, we get (from the definition of \cdsen) that \begin{equation} \label{eq:cts-its1} \htcts{i}{H1} \geq \htits{i}{H2} + 2*L \end{equation} We are given that $\tcts{i}$ is less than $\tcts{j}$ and $T_i, T_j$ are incarnations of each other. Hence, we have that \begin{align*} \htcts{j}{H2} & > \htcts{i}{H1} \\ & > \htits{i}{H1} + 2*L & [\text{From \eqnref{cts-its}}] \\ & > \htits{j}{H2} + 2*L & [\tits{i} = \tits{j}] \\ \end{align*} Thus we get that $\tcts{j} > \tits{j} + 2*L$. We have that $T_j$ is live in $H2$. Now, suppose $T_j$ is not \cdsen in $H2$. This can happen only if there is a transaction $T_k$ such that $\tits{k}$ is less than $\tits{i} + 2*L$ but \incct of $T_k$ is not true in $H2$. 
Formally, \begin{equation} \label{eq:its-ki1} (\htits{k}{H2} < \htits{j}{H2} + 2*L) \land (\neg \incs{k}{H2}) \end{equation} Since $T_i$ is \cdsen in $H1$, we get that for all transactions $T_k$, such that $\tits{k} < \tits{i} + 2*L$, \incct of $T_j$ has to be true in $H1$. Combining this \eqnref{its-ki}, we get that $T_k$ or any \inc of $T_k$ cannot be in $H1$. Thus, we get that $T_k$ is not in $\txns{H1}$. This implies that $T_k$ must have started afresh in some history after $H1$. We get that, \begin{align*} \htits{k}{H2} & \geq \hsyst{H1} & [\text{Since $T_k$ starts afresh after $H1$}] \\ & > \htcts{i}{H1} & [\text{From \obsref{cts-syst}}] \\ & \geq \htits{i}{H1} + 2*L & [\text{From \eqnref{cts-its}}] \\ & = \htits{j}{H2} + 2*L & [\text{As $T_i, T_j$ are \inc{s} of each other}] \\ \end{align*} Thus, we get that $\htits{k}{H2} > \htits{j}{H2} + 2*L$. But this contradicts with \eqnref{its-ki}. Hence, we have that there cannot exist a transaction $T_k$ such that $\tits{k}$ is less than $\tits{i} + 2*L$ and \incct of $T_k$ is false in $H2$. This implies that $T_i$ must be \cdsen in $H2$ as well. \end{proof} } Having defined the properties related to \cdsen, we start defining notions for \finen. Next, we define \emph{\maxwts} for a transaction $T_i$ in $H$, which is the largest \wts over all \inc{s} $T_j$ in $T_i$'s \incset. Formally, \begin{equation*} \hmaxwts{i}{H} = max\{\htwts{j}{H}|(T_j \in \incs{i}{H})\} \end{equation*} \noindent From this definition of \maxwts, we get the following simple observation. \begin{observation} \label{obs:max-wts} For any transaction $T_i$ in $H$, we have that $\twts{i}$ is less than or equal to $\hmaxwts{i}{H}$. Formally, $\htwts{i}{H} \leq \hmaxwts{i}{H}$. \end{observation} Next, we combine the notions of \affset and \maxwts to define \emph{\affwts}. It is the maximum of the \maxwts of all the transactions in $T_i$'s \affset.
Formally, \begin{equation*} \haffwts{i}{H} = max\{\hmaxwts{j}{H}|(T_j \in \haffset{i}{H})\} \end{equation*} \noindent Having defined the notion of \affwts, we get the following lemma relating the \affset and \affwts of two transactions. \begin{lemma} \label{lem:affwts-same} Consider two histories $H1$ and $H2$ with $H2$ being an extension of $H1$. Let $T_i$ and $T_j$ be two transactions which are live in $H1$ and $H2$ respectively. Suppose the \affset of $T_i$ in $H1$ is the same as the \affset of $T_j$ in $H2$. Then the \affwts of $T_i$ in $H1$ is the same as the \affwts of $T_j$ in $H2$. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (T_i \in \txns{H1}) \land (T_j \in \txns{H2}) \land (\haffset{i}{H1} = \haffset{j}{H2}) \implies (\haffwts{i}{H1} = \haffwts{j}{H2}) \rangle$. \end{lemma} \begin{proof} From the definition of \affwts, we get the following equations \begin{equation} \label{eq:h1-ti-affwts} \haffwts{i}{H1} = max\{\hmaxwts{k}{H1}|(T_k \in \haffset{i}{H1})\} \end{equation} \begin{equation} \label{eq:h2-tj-affwts} \haffwts{j}{H2} = max\{\hmaxwts{l}{H2}|(T_l \in \haffset{j}{H2})\} \end{equation} From these definitions, let us suppose that $\haffwts{i}{H1}$ is $\hmaxwts{p}{H1}$ for some transaction $T_p$ in $\haffset{i}{H1}$. Similarly, suppose that $\haffwts{j}{H2}$ is $\hmaxwts{q}{H2}$ for some transaction $T_q$ in $\haffset{j}{H2}$. Here, we are given that $\haffset{i}{H1} = \haffset{j}{H2}$. Hence, we get that $T_p$ is also in $\haffset{j}{H2}$. Similarly, $T_q$ is in $\haffset{i}{H1}$ as well. Thus from Equations \eqref{eq:h1-ti-affwts} \& \eqref{eq:h2-tj-affwts}, we get that \begin{equation} \label{eq:ti-tp-max} \hmaxwts{p}{H1} \geq \hmaxwts{q}{H2} \end{equation} \begin{equation} \label{eq:tj-tq-max} \hmaxwts{q}{H2} \geq \hmaxwts{p}{H1} \end{equation} Combining both these equations, we get that $\hmaxwts{p}{H1} = \hmaxwts{q}{H2}$, which in turn implies that $\haffwts{i}{H1} = \haffwts{j}{H2}$.
\end{proof} \noindent Finally, using the notions of \affwts and \cdsen, we define the notion of \emph{\finen}. \begin{definition} \label{defn:finen} We say that transaction $T_i$ is \emph{\finen} if the following conditions hold true: (1) $T_i$ is live in $H$; (2) $T_i$ is \cdsen in $H$; (3) $\htwts{i}{H}$ is greater than $\haffwts{i}{H}$. Formally, \begin{equation*} \finenb{i}{H} = \begin{cases} True & (T_i \in \live{H}) \land (\cdsenb{i}{H}) \land (\htwts{i}{H} > \haffwts{i}{H}) \\ False & \text{otherwise} \end{cases} \end{equation*} \end{definition} It can be seen from this definition that a transaction that is \finen is also \cdsen. We now show that, just like \itsen and \cdsen, once a transaction is \finen, it remains \finen until it terminates. The following lemma captures this. \ignore{ \begin{lemma} \label{lem:fin-sam-fut} Consider two histories $H1$ and $H2$ with $H2$ being an extension of $H1$. Let $T_i$ be a transaction live in $H1$ and $H2$. Suppose $T_i$ is \finen in $H1$. Then $T_i$ is \finen in $H2$ as well. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (T_i \in \{\live{H1} \cup \live{H2}\}) \land (\finenb{i}{H1}) \implies (\finenb{i}{H2}) \rangle$. \end{lemma} \todo{Proof to be added here} } \begin{lemma} \label{lem:fin-fut} Consider two histories $H1$ and $H2$ with $H2$ being an extension of $H1$. Let $T_i$ and $T_j$ be two transactions which are live in $H1$ and $H2$ respectively. Suppose $T_i$ is \finen in $H1$. Let $T_i$ be an \inc of $T_j$ and let $\tcts{i}$ be less than $\tcts{j}$. Then $T_j$ is \finen in $H2$ as well. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (T_i \in \live{H1}) \land (T_j \in \live{H2}) \land (T_i \in \incs{j}{H2}) \land (\htcts{i}{H1} < \htcts{j}{H2}) \land (\finenb{i}{H1}) \implies (\finenb{j}{H2}) \rangle$. \end{lemma} \begin{proof} Here we are given that $T_j$ is live in $H2$. Since $T_i$ is \finen in $H1$, we get that it is \cdsen in $H1$ as well.
Combining this with the conditions given in the lemma statement, we have that, \\ \begin{equation} \label{eq:fin-given} \begin{split} \langle (H1 \sqsubseteq H2) \land (T_i \in \live{H1}) \land (T_j \in \live{H2}) \land (T_i \in \incs{j}{H2}) \land (\htcts{i}{H1} < \htcts{j}{H2}) \\ \land (\cdsenb{i}{H1}) \rangle \end{split} \end{equation} Combining \eqnref{fin-given} with \lemref{cds-fut}, we get that $T_j$ is \cdsen in $H2$, i.e., $\cdsenb{j}{H2}$. Now, in order to show that $T_j$ is \finen in $H2$, it remains for us to show that $\htwts{j}{H2} > \haffwts{j}{H2}$. We are given that $T_j$ is live in $H2$, which in turn implies that $T_j$ is in $\txns{H2}$. Thus, substituting this in \eqnref{fin-given}, we get the following \begin{equation} \label{eq:mod-given} \begin{split} \langle (H1 \sqsubseteq H2) \land (T_j \in \txns{H2}) \land (T_i \in \incs{j}{H2}) \land (\htcts{i}{H1} < \htcts{j}{H2}) \\ \land (\cdsenb{i}{H1}) \rangle \end{split} \end{equation} \noindent Combining \eqnref{mod-given} with \lemref{aff-same}, we get that $\haffset{i}{H1} = \haffset{j}{H2}$. Combining this with \lemref{affwts-same}, we get that \begin{equation} \label{eq:affs-eq} \haffwts{i}{H1} = \haffwts{j}{H2} \end{equation} \noindent We are given that $\htcts{i}{H1} < \htcts{j}{H2}$. Combining this with the definition of \wts, we get \begin{equation} \label{eq:titj-wts} \htwts{i}{H1} < \htwts{j}{H2} \end{equation} \noindent Since $T_i$ is \finen in $H1$, we have that \\ $\htwts{i}{H1} > \haffwts{i}{H1} \xrightarrow{\eqnref{titj-wts}} \htwts{j}{H2} > \haffwts{i}{H1} \xrightarrow{\eqnref{affs-eq}} \htwts{j}{H2} > \haffwts{j}{H2}$ \end{proof} \noindent Now, we show that a transaction that is \finen will eventually commit. \begin{lemma} \label{lem:enbd-ct} Consider a live transaction $T_i$ in a history $H1$. Suppose $T_i$ is \finen in $H1$ and $\tval{i}$ is true in $H1$. Then there exists an extension of $H1$, $H3$, in which $T_i$ is committed.
Formally, $\langle H1, T_i: (T_i \in \live{H1}) \land (\htval{i}{H1}) \land (\finenb{i}{H1}) \implies (\exists H3: (H1 \sqsubset H3) \land (T_i \in \comm{H3})) \rangle$. \end{lemma} \begin{proof} Consider an extension of $H1$, a history $H3$, whose \syst is greater than $\tcts{i} + L$. We will prove this lemma using contradiction. Suppose $T_i$ is aborted in $H3$. Now consider $T_i$ in $H1$: $T_i$ is live; its \val flag is true; and it is \finen. From the definition of \finen, we get that it is also \cdsen. From \lemref{its-enb}, we get that $T_i$ is \itsen in $H1$. Thus from \lemref{its-wts}, we get that there exists an extension of $H1$, $H2$ such that (1) Transaction $T_i$ is live in $H2$; (2) there is a transaction $T_j$ in $\txns{H2}$; (3) $\htwts{j}{H2}$ is greater than $\htwts{i}{H2}$; (4) $T_j$ is committed in $H3$. Formally, \begin{equation} \label{eq:its-wts-ant} \begin{split} \langle (\exists H2, T_j: (H1 \sqsubseteq H2 \sqsubset H3) \land (T_i \in \live{H2}) \land (T_j \in \txns{H2}) \land (\htwts{i}{H2} < \htwts{j}{H2}) \\ \land (T_j \in \comm{H3})) \rangle \end{split} \end{equation} Here, we have that $H2$ is an extension of $H1$ with $T_i$ being live in both of them and $T_i$ is \finen in $H1$. Thus from \lemref{fin-fut}, we get that $T_i$ is \finen in $H2$ as well. Now, let us consider $T_j$ in $H2$. From \eqnref{its-wts-ant}, we get that $(\htwts{i}{H2} < \htwts{j}{H2})$. Combining this with the observation that $T_i$ is live in $H2$ and with \lemref{wts-its}, we get that $(\htits{j}{H2} \leq \htits{i}{H2} + 2*L)$. This implies that $T_j$ is in \affset of $T_i$ in $H2$, i.e., $(T_j \in \haffset{i}{H2})$. From the definition of \affwts, we get that \begin{equation} \label{eq:max-affwts} (\haffwts{i}{H2} \geq \hmaxwts{j}{H2}) \end{equation} Since $T_i$ is \finen in $H2$, we get that $\twts{i}$ is greater than the \affwts of $T_i$ in $H2$.
\begin{equation} \label{eq:wts-affwts} (\htwts{i}{H2} > \haffwts{i}{H2}) \end{equation} Now combining Equations \ref{eq:max-affwts}, \ref{eq:wts-affwts} and \obsref{max-wts}, we get, \begin{align*} \htwts{i}{H2} & > \haffwts{i}{H2} & [\text{From \eqnref{wts-affwts}}] \\ & \geq \hmaxwts{j}{H2} & [\text{From \eqnref{max-affwts}}] \\ & \geq \htwts{j}{H2} & [\text{From \obsref{max-wts}}] \end{align*} Thus, we get that $\htwts{i}{H2} > \htwts{j}{H2}$. But this contradicts \eqnref{its-wts-ant}. Hence, our assumption that $T_i$ gets aborted in $H3$ after becoming \finen is not possible. Thus $T_i$ has to commit in $H3$. \end{proof} \noindent Next, we show that once a transaction $T_i$ becomes \itsen, it will eventually become \finen as well and then get committed. We show that this change happens in a sequence of steps. We first show that a transaction $T_i$ which is \itsen becomes \cdsen (or gets committed). We next show that $T_i$ which is \cdsen becomes \finen or gets committed. On becoming \finen, we have already shown that $T_i$ will eventually commit. Now, we show that a transaction that is \itsen will become \cdsen or committed. To show this, we introduce a few more notations and definitions. We start with the notion of \emph{\depits (dependent-its)}, which is the set of \its{s} that a transaction $T_i$ depends on to commit. It is the set of \its of all the transactions in $T_i$'s \cdset in a history $H$. Formally, \begin{equation*} \hdep{i}{H} = \{\htits{j}{H}|T_j \in \hcds{i}{H}\} \end{equation*} \noindent We have the following lemma on the \depits of a transaction $T_i$ and its future \inc $T_j$, which states that the \depits of $T_i$ either shrinks or remains the same. \begin{lemma} \label{lem:depits-fut} Consider two histories $H1$ and $H2$ with $H2$ being an extension of $H1$. Let $T_i$ and $T_j$ be two transactions which are live in $H1$ and $H2$ respectively and $T_i$ is an \inc of $T_j$. In addition, we also have that $\tcts{i}$ is greater than or equal to $\tits{i} + 2*L$ in $H1$. Then, we get that $\hdep{j}{H2}$ is a subset of $\hdep{i}{H1}$.
Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (T_i \in \live{H1}) \land (T_j \in \live{H2}) \land (T_i \in \incs{j}{H2}) \land (\htcts{i}{H1} \geq \htits{i}{H1} + 2*L) \implies (\hdep{j}{H2} \subseteq \hdep{i}{H1}) \rangle$. \end{lemma} \begin{proof} Suppose $\hdep{j}{H2}$ is not a subset of $\hdep{i}{H1}$. This implies that there is a transaction $T_k$ such that $\htits{k}{H2} \in \hdep{j}{H2}$ but $\htits{k}{H1} \notin \hdep{i}{H1}$. This implies that $T_k$ starts afresh after $H1$ in some history, say $H3$, such that $H1 \sqsubset H3 \sqsubseteq H2$. Hence, from \corref{cts-syst} we get the following \noindent \begin{math} \htits{k}{H3} > \hsyst{H1} \xrightarrow{\lemref{cts-syst}} \htits{k}{H3} > \htcts{i}{H1} \implies \htits{k}{H3} > \htits{i}{H1} + 2*L \xrightarrow{\htits{i}{H1} = \htits{j}{H2}} \htits{k}{H3} > \htits{j}{H2} + 2*L \xrightarrow[definitions]{\affset, \depits} \htits{k}{H2} \notin \hdep{j}{H2} \end{math} We started with $\tits{k}$ in $\hdep{j}{H2}$ and ended with $\tits{k}$ not in $\hdep{j}{H2}$. Thus, we have a contradiction. Hence, the lemma follows. \end{proof} \noindent Next, we denote the set of transactions in $T_i$'s \affset that have a committed \inc in $H$ (i.e., whose \incct flag is true) as \emph{\cis (commit independent set)}. Formally, \begin{equation*} \hcis{i}{H} = \{T_j| (T_j \in \haffset{i}{H}) \land (\inct{j}{H}) \} \end{equation*} \noindent In other words, we have that $\hcis{i}{H} = \haffset{i}{H} - \hcds{i}{H}$. Finally, using the notion of \cis, we denote the maximum of the \maxwts of all the transactions in $T_i$'s \cis as \emph{\pawts} (partly affecting \wts). It turns out that the value of \pawts affects the commit of $T_i$, which we show in the course of the proof. Formally, \pawts is defined as \begin{equation*} \hpawts{i}{H} = max\{\hmaxwts{j}{H}|(T_j \in \hcis{i}{H})\} \end{equation*} \noindent Having defined the required notations, we are now ready to show that an \itsen transaction will eventually become \cdsen.
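Before using these sets in the lemmas that follow, it may help to instantiate them on a small hypothetical history; the numeric values below are illustrative assumptions only, not drawn from any particular execution of \ksftm.

```latex
% Hypothetical illustration (all values assumed for exposition).
% Let L = 10 and let a history H contain transactions T_1, T_2, T_3
% with \its values 5, 15 and 40 respectively. For T_2, the relevant
% bound is \its of T_2 + 2*L = 15 + 20 = 35, so:
\begin{equation*}
  \haffset{2}{H} = \{T_1, T_2\}, \quad \text{since } \htits{1}{H} = 5 < 35
  \text{ and } \htits{2}{H} = 15 < 35, \text{ but } \htits{3}{H} = 40 \not< 35.
\end{equation*}
% If T_1 already has a committed \inc (\inct{1}{H} is true) while T_2
% does not, then the definitions above give:
\begin{equation*}
  \hcds{2}{H} = \{T_2\}, \qquad \hdep{2}{H} = \{15\}, \qquad \hcis{2}{H} = \{T_1\}.
\end{equation*}
```

Note that \depits records only the \its values of the transactions still blocking the commit; its size is exactly the quantity on which the induction in the next lemma is carried out.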
\begin{lemma} \label{lem:its-cds} Consider a transaction $T_i$ which is live in a history $H1$ and whose $\tcts{i}$ is greater than or equal to $\tits{i} + 2*L$. If $T_i$ is \itsen in $H1$ then there is an extension of $H1$, $H2$, in which an \inc of $T_i$, $T_j$ (which could be the same as $T_i$), is either committed or \cdsen. Formally, $\langle H1, T_i: (T_i \in \live{H1}) \land (\htcts{i}{H1} \geq \htits{i}{H1} + 2*L) \land (\itsenb{i}{H1}) \implies (\exists H2, T_j: (H1 \sqsubset H2) \land (T_j \in \incs{i}{H2}) \land ((T_j \in \comm{H2}) \lor (\cdsenb{j}{H2}))) \rangle$. \end{lemma} \begin{proof} We prove this by induction on the size of $\hdep{i}{H1}$. For showing this, we define a boolean function $P(k)$ as follows: \begin{math} P(k) = \begin{cases} True & \langle H1, T_i: (T_i \in \live{H1}) \land (\htcts{i}{H1} \geq \htits{i}{H1} + 2*L) \land (\itsenb{i}{H1}) \land \\ & (k \geq |\hdep{i}{H1}|) \implies (\exists H2, T_j: (H1 \sqsubset H2) \land (T_j \in \incs{i}{H2}) \land \\ & ((T_j \in \comm{H2}) \lor (\cdsenb{j}{H2}))) \rangle \\ False & \text{otherwise} \end{cases} \end{math} As can be seen, here $P(k)$ means that if (1) $T_i$ is live in $H1$; (2) $\tcts{i}$ is greater than or equal to $\tits{i} + 2*L$; (3) $T_i$ is \itsen in $H1$; (4) the size of $\hdep{i}{H1}$ is less than or equal to $k$; then there exists a history $H2$ with a transaction $T_j$ in it which is an \inc of $T_i$ such that $T_j$ is either committed or \cdsen in $H2$. We show that $P(k)$ is true for all non-negative integer values of $k$ using induction. \vspace{1mm} \noindent \textbf{Base Case - $P(0)$:} Here, from the definition of $P(0)$, we get that $|\hdep{i}{H1}| = 0$. This in turn implies that $\hcds{i}{H1}$ is empty. Further, we are already given that $T_i$ is live in $H1$ and $\htcts{i}{H1} \geq \htits{i}{H1} + 2*L$. Hence, all these imply that $T_i$ is \cdsen in $H1$.
\vspace{1mm} \noindent \textbf{Induction case - To prove $P(k+1)$ given that $P(k)$ is true:} If $|\hdep{i}{H1}| \leq k$, then from the induction hypothesis $P(k)$, we get that there is an extension $H2$ with an \inc of $T_i$, $T_j$, that is either committed or \cdsen in $H2$. Hence, we consider the case when \begin{equation} \label{eq:hdep-assn} |\hdep{i}{H1}| = k + 1 \end{equation} Let $\alpha$ be $\hpawts{i}{H1}$. Suppose $\htwts{i}{H1} < \alpha$. Then from \lemref{wts-great}, we get that there is an extension of $H1$, say $H3$, in which an \inc of $T_i$, $T_l$ (which could be the same as $T_i$), is committed or is live in $H3$ and has \wts greater than $\alpha$. If $T_l$ is committed then $P(k+1)$ is trivially true. So we consider the latter case in which $T_l$ is live in $H3$. In case $\htwts{i}{H1} \geq \alpha$, the analysis below follows with $T_l$ replaced by $T_i$. Next, suppose $T_l$ is aborted in an extension of $H3$, $H5$. Then from \lemref{its-wts}, we get that there exists an extension of $H3$, $H4$, in which (1) $T_l$ is live; (2) there is a transaction $T_m$ in $\txns{H4}$; (3) $\htwts{m}{H4} > \htwts{l}{H4}$; (4) $T_m$ is committed in $H5$. Combining the above derived conditions (1), (2), (3) with \lemref{ti|tltl-comt} we get that in $H4$, \begin{equation} \label{eq:ml-tits} \htits{m}{H4} \leq \htits{l}{H4} + 2*L \end{equation} \eqnref{ml-tits} implies that $T_m$ is in $T_l$'s \affset. Here, we have that $T_l$ is an \inc of $T_i$ and we are given that $\htcts{i}{H1} \geq \htits{i}{H1} + 2*L$. Thus from \lemref{aff-tkinc-h1}, we get that there exists an \inc of $T_m$, $T_n$, in $H1$. Combining \eqnref{ml-tits} with the observations (a) $T_n, T_m$ are \inc{s}; (b) $T_l, T_i$ are \inc{s}; (c) $T_i, T_n$ are in $\txns{H1}$, we get that $\htits{n}{H1} \leq \htits{i}{H1} + 2*L$. This implies that $T_n$ is in $\haffset{i}{H1}$. Since $T_n$ is not committed in $H1$ (otherwise, it would not be possible for $T_m$ to be an \inc of $T_n$), we get that $T_n$ is in $\hcds{i}{H1}$.
Hence, we get that $\htits{m}{H4} = \htits{n}{H1}$ is in $\hdep{i}{H1}$. From \eqnref{hdep-assn}, we have that $|\hdep{i}{H1}|$ is $k+1$. From \lemref{depits-fut}, we get that $\hdep{i}{H4}$ is a subset of $\hdep{i}{H1}$. Further, we have that transaction $T_m$ has committed. Thus $\htits{m}{H4}$, which was in $\hdep{i}{H1}$, is no longer in $\hdep{i}{H4}$. This implies that $\hdep{i}{H4}$ is a strict subset of $\hdep{i}{H1}$ and hence $|\hdep{i}{H4}| \leq k$. \noindent Since $T_i$ and $T_l$ are \inc{s}, we get that $\hdep{i}{H4} = \hdep{l}{H4}$. Thus, we get that \begin{equation} \label{eqn:hdep-ilh4} |\hdep{i}{H4}| \leq k \implies |\hdep{l}{H4}| \leq k \end{equation} \noindent Further, we have that $T_l$ is a later \inc of $T_i$. So, we get that \begin{equation} \label{eqn:cts-its} \htcts{l}{H4} > \htcts{i}{H4} \xrightarrow{given} \htcts{l}{H4} > \htits{i}{H4} + 2*L \xrightarrow{\htits{i}{H4} = \htits{l}{H4}} \htcts{l}{H4} > \htits{l}{H4} + 2*L \end{equation} We also have that $T_l$ is live in $H4$. Combining this with Equations \ref{eqn:hdep-ilh4}, \ref{eqn:cts-its} and the induction hypothesis that $P(k)$ is true, we get that there exists a history extension of $H4$, $H6$, in which an \inc of $T_l$ (and hence of $T_i$), $T_p$, is either committed or \cdsen. This proves $P(k+1)$ and hence the lemma. \end{proof} \begin{lemma} \label{lem:cds-fin} Consider a transaction $T_i$ in a history $H1$. If $T_i$ is \cdsen in $H1$ then there is an extension of $H1$, $H2$, in which an \inc of $T_i$, $T_j$ (which could be the same as $T_i$), is either committed or \finen. Formally, $\langle H1, T_i: (T_i \in \live{H1}) \land (\cdsenb{i}{H1}) \implies (\exists H2, T_j: (H1 \sqsubset H2) \land (T_j \in \incs{i}{H2}) \land ((T_j \in \comm{H2}) \lor (\finenb{j}{H2}))) \rangle$. \end{lemma} \begin{proof} In $H1$, suppose $\haffwts{i}{H1}$ is $\alpha$. From \lemref{wts-great}, we get that there is an extension of $H1$, $H2$, with a transaction $T_j$ which is an \inc of $T_i$.
Here there are two cases: (1) Either $T_j$ is committed in $H2$, which trivially proves the lemma; (2) Otherwise, $\twts{j}$ is greater than $\alpha$. \noindent In the second case, we get that \begin{equation} \label{eq:ext} \begin{split} (T_i \in \live{H1}) \land (T_j \in \live{H2}) \land (\cdsenb{i}{H1}) \land (T_j \in \incs{i}{H2}) \land \\ (\htwts{i}{H1} < \htwts{j}{H2}) \end{split} \end{equation} \noindent Combining the above result with \lemref{cts-wts}, we get that $\htcts{i}{H1} < \htcts{j}{H2}$. Thus the modified equation is \begin{equation} \label{eq:new-ext} \begin{split} (T_i \in \live{H1}) \land (T_j \in \live{H2}) \land (\cdsenb{i}{H1}) \land (T_j \in \incs{i}{H2}) \land \\ (\htcts{i}{H1} < \htcts{j}{H2}) \end{split} \end{equation} \noindent Next combining \eqnref{new-ext} with \lemref{aff-same}, we get that \begin{equation} \label{eq:affs-ij} \haffset{i}{H1} = \haffset{j}{H2} \end{equation} \noindent Similarly, combining \eqnref{new-ext} with \lemref{cds-fut} we get that $T_j$ is \cdsen in $H2$ as well. Formally, \begin{equation} \label{eq:th-cdsen} \cdsenb{j}{H2} \end{equation} Now combining \eqnref{affs-ij} with \lemref{affwts-same}, we get that \begin{equation} \label{eq:affwts-same} \haffwts{i}{H1} = \haffwts{j}{H2} \end{equation} From our initial assumption we have that $\haffwts{i}{H1}$ is $\alpha$. From \eqnref{affwts-same}, we get that $\haffwts{j}{H2} = \alpha$. Further, we had earlier also seen that $\htwts{j}{H2}$ is greater than $\alpha$. Hence, we have that $\htwts{j}{H2} > \haffwts{j}{H2}$. \noindent Combining the above result with \eqnref{th-cdsen}, $\cdsenb{j}{H2}$, we get that $T_j$ is \finen, i.e., $\finenb{j}{H2}$. \end{proof} \noindent Next, we show that every live transaction eventually becomes \itsen. \begin{lemma} \label{lem:live-its} Consider a history $H1$ and let $T_i$ be a transaction in $\live{H1}$.
Then there is an extension of $H1$, $H2$, in which an \inc of $T_i$, $T_j$ (which could be the same as $T_i$), is either committed or is \itsen. Formally, $\langle H1, T_i: (T_i\in \live{H1}) \implies (\exists T_j, H2: (H1 \sqsubset H2) \land (T_j \in \incs{i}{H2}) \land ((T_j \in \comm{H2}) \lor (\itsenb{j}{H2}))) \rangle$. \end{lemma} \begin{proof} \noindent We prove this lemma by inducting on \its. \vspace{1mm} \noindent \textbf{Base Case - $\tits{i} = 1$:} In this case, $T_i$ is the first transaction to be created. There are no transactions with smaller \its. Thus $T_i$ is trivially \itsen. \vspace{1mm} \noindent \textbf{Induction Case:} Here we assume that the lemma holds for any transaction $T_j$ with $\tits{j} \leq k$. \end{proof} Combining these lemmas gives us the result that for every live transaction $T_i$ there is an incarnation $T_j$ (which could be the same as $T_i$) that will commit. This implies that every \aptr eventually commits. The following theorem captures this notion. \begin{theorem} \label{thm:hwtm-com} Consider a history $H1$ and let $T_i$ be a transaction in $\live{H1}$. Then there is an extension of $H1$, $H2$, in which an \inc of $T_i$, $T_j$, is committed. Formally, $\langle H1, T_i: (T_i\in \live{H1}) \implies (\exists T_j, H2: (H1 \sqsubset H2) \land (T_j \in \incs{i}{H2}) \land (T_j \in \comm{H2})) \rangle$. \end{theorem} \begin{proof} Here we show the states that a transaction $T_i$ (or one of its \inc{s}) undergoes before it commits. In all these transitions, it is possible that an \inc of $T_i$ can commit. But to show the worst case, we assume that no \inc of $T_i$ commits. Continuing with this argument, we show that finally an \inc of $T_i$ commits. Consider a live transaction $T_i$ in $H1$. Then from \lemref{live-its}, we get that there is a history $H2$, which is an extension of $H1$, in which $T_j$, an \inc of $T_i$, is either committed or \itsen.
If $T_j$ is \itsen in $H2$, then from \lemref{its-cds}, we get that $T_k$, an \inc of $T_j$, will be \cdsen in an extension of $H2$, $H3$ (assuming that $T_k$ is not committed in $H3$). From \lemref{cds-fin}, we get that there is an extension of $H3$, $H4$, in which an \inc of $T_k$, $T_l$, will be \finen, assuming that it is not committed in $H4$. Finally, from \lemref{enbd-ct}, we get that there is an extension of $H4$ in which $T_m$, an \inc of $T_l$, will be committed. This proves our theorem. \end{proof} \section{PCode of \sftm} \label{sec:pcode-sftm} \textbf{Data Structure:} We start with the data structures that are local to each transaction. For each transaction $T_i$: \begin{itemize} \item $rset\xspace_i$ (read-set): It is a list of data tuples ($d\_tuples$) of the form $\langle x, val \rangle$, where $x$ is the \tobj{} and $val$ is the value read by the transaction $T_i$. We refer to a tuple in $T_i$'s read-set by $rset\xspace_i[x]$. \item $wset\xspace_i$ (write-set): It is a list of $d\_tuples$ of the form $\langle x, val \rangle$, where $x$ is the \tobj{} to which transaction $T_i$ writes the value $val$. Similarly, we refer to a tuple in $T_i$'s write-set by $wset\xspace_i[x]$. \end{itemize} In addition to these local structures, the following global structures are maintained, which are shared across transactions (and hence, threads). We name all the shared variables starting with `G'. \begin{itemize} \item $\gtcnt$ (counter): This is a numerical counter that is incremented when a transaction begins. \end{itemize} \noindent For each transaction $T_i$ we maintain the following shared time-stamps: \begin{itemize} \item $\glock_i$: A lock for accessing all the shared variables of $T_i$. \item $\gits_i$ (initial timestamp): It is the time-stamp assigned to $T_i$ when it was invoked for the first time. \item $\gcts_i$ (current timestamp): It is the time-stamp assigned to $T_i$ when it is invoked again at a later time.
When $T_i$ is created for the first time, its \gcts{} is the same as its \its. \item $\gval_i$: This is a boolean variable which is initially true. If it becomes false, then $T_i$ has to be aborted. \item $\gstat_i$: This is a variable which holds the current status of $T_i$. It has three states: \texttt{live}, \texttt{committed} or \texttt{aborted}. \end{itemize} \noindent For each data item $x$ in history $H$, we maintain: \begin{itemize} \item $x.val$ (value): It is the value written to $x$ by the closest previously committed transaction. \item $\rl$ (readList): It is the read list, consisting of all the transactions that have read $x$. \end{itemize} \begin{algorithm} [H] \caption{STM $\init()$: Invoked at the start of the STM system. Initializes all the data items used by the STM System} \label{alg:init} \begin{algorithmic}[1] \State $\gtcnt$ = 1; \ForAll {data item $x$ used by the STM System} \State add $\langle 0, nil \rangle$ to $x.val$;\Comment{ $T_0$ is initializing $x$} \label{lin:t0-init-SFTM} \EndFor; \end{algorithmic} \end{algorithm} \begin{algorithm} [H] \label{alg:begin} \caption{STM $\begt(its)$: Invoked by a thread to start a new transaction $T_i$. The thread can pass a parameter $its$, which is the initial timestamp when this transaction was invoked for the first time. If this is the first invocation then $its$ is $nil$. It returns the tuple $\langle id, \gcts \rangle$} \begin{algorithmic}[1] \State $i$ = unique-id; \Comment{A unique id to identify this transaction.
It could be the same as \gcts} \If {($its == nil$)} \State $\gits_i = \gcts_i = \gtcnt.get\&Inc()$; \State \Comment{$\gtcnt.get\&Inc()$ returns the current value of \gtcnt and atomically increments it} \Else \State $\gits_i = its$; \State $\gcts_i = \gtcnt.get\&Inc()$; \EndIf \State $rset\xspace_i = wset\xspace_i = null$; \State $\gstat_i$ = \texttt{live}; \State $\gval_i = T$; \State return $\langle i, \gcts_i\rangle$ \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:read} \caption{STM $read(i, x)$: Invoked by a transaction $T_i$ to read $x$. It returns either the value of $x$ or $\mathcal{A}$} \begin{algorithmic}[1] \If {($x \in rset\xspace_i$)} \Comment{Check if $x$ is in $rset\xspace_i$} \State return $rset\xspace_i[x].val$; \ElsIf {($x \in wset\xspace_i$)} \Comment{Check if $x$ is in $wset\xspace_i$} \State return $wset\xspace_i[x].val$; \Else \Comment{$x$ is not in $rset\xspace_i$ and $wset\xspace_i$} \State lock $x$; lock $\glock_i$; \If {$(\gval_i == F)$} \State return abort(i); \label{lin:rabort-SFTM} \EndIf \State \Comment{ Find the available value from $x.val$; returns the value } \State $curVer = findAvailVal(\gcts_i,x)$; \cmnt { \State /* \findsl: From $x.\vl$, returns the smallest \ts value greater than $\gwts_i$.
If no such version exists, it returns $nil$ */ \State $nextVer = \findsl(\gwts_i,x)$; \If {$(nextVer \neq nil)$} \State \Comment{Ensure that $\tutl_i$ remains smaller than $nextVer$'s \vltl} \State $\tutl_i = min(\tutl_i, x[nextVer].vltl-1)$; \EndIf \State \Comment{$\tltl_i$ should be greater than $x[curVer].\vltl$} \State $\tltl_i = max(\tltl_i, x[curVer].\vltl + 1)$; \If {($\tltl_i > \tutl_i$)} \Comment{If the limits have crossed each other, then $T_i$ is aborted} \State return abort(i); \EndIf } \State $val = x[curVer].v$; add $\langle x, val \rangle$ to $rset\xspace_i$; \State add $T_i$ to $x[curVer].rl$; \State unlock $\glock_i$; \State unlock $x$; \State return $val$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:write} \caption{STM $write_i(x,val)$: A transaction $T_i$ writes into local memory} \begin{algorithmic}[1] \State Append the $d\_tuple \langle x,val \rangle$ to $wset\xspace_i$. \State return $ok$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:tryc} \caption{STM $\tryc()$: Returns $ok$ on commit, else returns Abort} \begin{algorithmic}[1] \State \Comment{The following check is an optimization which needs to be performed again later} \State Set$<$int$>$ TSet $\leftarrow$ $\phi$ {} \Comment{TSet stores transaction Ids} \ForAll {$x \in wset\xspace_i$} \State lock $x$ in pre-defined order; \For{$<$each transaction $t_j$ of [$x$].rl$>$} \State TSet = TSet $\cup$ \{$t_j$\} \EndFor \EndFor \Comment{$x \in wset\xspace_i$} \State TSet = TSet $\cup$ \{$t_i$\} \State lock $\glock_i$; \If {$(\gval_i == F)$} return abort(i); \Else \State Find LTS in TSet {} \Comment{lowest time stamp} \If {(TS($t_i$) == LTS)} \For{$<$each transaction $t_j$ of TSet $\setminus$ \{$t_i$\}$>$} \State $G\_valid_j$ $\leftarrow$ false \State unlock $\glock_j$; \EndFor \Else \State return abort(i); \EndIf \EndIf \cmnt { \algstore{tryc-break} \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:tryc-cont} \caption{STM $\tryc()$: Continued} \begin{algorithmic}[1]
\algrestore{tryc-break} \State \Comment{Ensure that $\vutl_i$ is less than \vltl of versions in $\nvl$} \ForAll {$(ver \in \nvl)$} \State $x$ = \tobj of $ver$; \State $\tutl_i = min(\tutl_i, x[ver].\vltl - 1)$; \EndFor } \State \Comment{Store the current value of the global counter as commit time and increment it} \State $\ct = \gtcnt.get\&Inc()$; \algstore{myalgtc} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \algrestore{myalgtc} \ForAll {$x \in wset\xspace_i$} \State replace the old value in $x.val$ with $wset\xspace_i[x].val$; \EndFor \State $\gstat_i$ = \texttt{commit}; \State unlock all variables; \State return $\mathcal{C}$; \end{algorithmic} \end{algorithm} \cmnt { \begin{algorithm} \label{alg:lowPri} \caption{$\lowp(T_k, T_i)$: Verifies if $T_k$ has lower priority than $T_i$ and is not already committed} \begin{algorithmic}[1] \If {$(\gits_i < \gits_k) \land (\gstat_k == \texttt{live})$} \State return $T$; \Else \State return $F$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:\lowp} \caption{$\lowp(T_k, T_i)$: Aborts lower priority transaction among $T_k$ and $T_i$} \begin{algorithmic}[1] \If {$(\gits_i < \gits_k)$} \State $\abl = \abl \cup T_k$; \Comment{Store lower priority $T_k$ in \abl} \Else \State return abort(i); \Comment{Abort $T_i$} \EndIf \end{algorithmic} \end{algorithm} } \begin{algorithm}[H] \label{alg:abort} \caption{$abort(i)$: Invoked by various STM methods to abort transaction $T_i$. It returns $\mathcal{A}$} \begin{algorithmic}[1] \State $\gval_i = F$; $\gstat_i$ = \texttt{abort}; \State unlock all variables locked by $T_i$; \State return $\mathcal{A}$; \end{algorithmic} \end{algorithm} \section{PCode of \kstm} \label{sec:pcode-kstm} \begin{algorithm}[H] \caption{STM $\init()$: Invoked at the start of the STM system.
Initializes all the \tobj{s} used by the STM System} \label{alg:init-KSTM} \begin{algorithmic}[1] \State $\gtcnt$ = 1; \ForAll {$x$ in $\mathcal{T}$} \Comment{All the \tobj{s} used by the STM System} \State add $\langle 0, 0, nil \rangle$ to $x.\vl$; \Comment { $T_0$ is initializing $x$} \label{lin:t0-init-KSTM} \EndFor; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-begin} \caption{STM $\begt(its)$: Invoked by a thread to start a new transaction $T_i$. The thread can pass a parameter $its$, which is the initial timestamp when this transaction was invoked for the first time. If this is the first invocation then $its$ is $nil$. It returns the tuple $\langle id, \gcts \rangle$} \begin{algorithmic}[1] \State $i$ = unique-id; \Comment{A unique id to identify this transaction. It could be the same as \gcts} \State \Comment{Initialize transaction specific local \& global variables} \If {($its == nil$)} \State \Comment{$\gtcnt.get\&Inc()$ returns the current value of \gtcnt and atomically increments it} \State $\gits_i = \gcts_i = \gtcnt.get\&Inc()$; \Else \State $\gits_i = its$; \State $\gcts_i = \gtcnt.get\&Inc()$; \EndIf \State $rset\xspace_i = wset\xspace_i = null$; \State $\gstat_i$ = \texttt{live}; \State $\gval_i = T$; \State return $\langle i, \gcts_i\rangle$ \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-read} \caption{STM $read(i, x)$: Invoked by a transaction $T_i$ to read \tobj{} $x$.
It returns either the value of $x$ or $\mathcal{A}$} \begin{algorithmic}[1] \If {($x \in rset\xspace_i$)} \Comment{Check if the \tobj{} $x$ is in $rset\xspace_i$} \State return $rset\xspace_i[x].val$; \ElsIf {($x \in wset\xspace_i$)} \Comment{Check if the \tobj{} $x$ is in $wset\xspace_i$} \State return $wset\xspace_i[x].val$; \Else \Comment{\tobj{} $x$ is not in $rset\xspace_i$ and $wset\xspace_i$} \State lock $x$; lock $\glock_i$; \If {$(\gval_i == F)$} return abort(i); \label{lin:rabort-KSTM} \EndIf \State \Comment{ \findls: From $x.\vl$, returns the largest \ts value less than $\gcts_i$. If no such version exists, it returns $nil$ } \State $curVer = \findls(\gcts_i,x)$; \If {$(curVer == nil)$} return abort(i); \Comment{Proceed only if $curVer$ is not nil} \EndIf \cmnt { \State /* \findsl: From $x.\vl$, returns the smallest \ts value greater than $\gwts_i$. If no such version exists, it returns $nil$ */ \State $nextVer = \findsl(\gwts_i,x)$; \If {$(nextVer \neq nil)$} \State \Comment{Ensure that $\tutl_i$ remains smaller than $nextVer$'s \vltl} \State $\tutl_i = min(\tutl_i, x[nextVer].vltl-1)$; \EndIf \State \Comment{$\tltl_i$ should be greater than $x[curVer].\vltl$} \State $\tltl_i = max(\tltl_i, x[curVer].\vltl + 1)$; \If {($\tltl_i > \tutl_i$)} \Comment{If the limits have crossed each other, then $T_i$ is aborted} \State return abort(i); \EndIf } \State $val = x[curVer].v$; add $\langle x, val \rangle$ to $rset\xspace_i$; \State add $T_i$ to $x[curVer].rl$; \State unlock $\glock_i$; unlock $x$; \State return $val$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-write} \caption{STM $write_i(x,val)$: A Transaction $T_i$ writes into local memory} \begin{algorithmic}[1] \State Append the $d\_tuple \langle x,val \rangle$ to $wset\xspace_i$. 
\State return $ok$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-tryc} \caption{STM $\tryc()$: Returns $ok$ on commit else return Abort} \begin{algorithmic}[1] \State \Comment{The following check is an optimization which needs to be performed again later} \State lock $\glock_i$; \If {$(\gval_i == F)$} \State return abort(i); \EndIf \State unlock $\glock_i$; \State $\lrl = \allrl = nil$; \Comment{Initialize larger read list (\lrl), all read list (\allrl) to nil} \ForAll {$x \in wset\xspace_i$} \State lock $x$ in pre-defined order; \State \Comment{ \findls: returns the version with the largest \ts value less than $\gcts_i$. If no such version exists, it returns $nil$. } \State $\prevv = \findls(\gcts_i, x)$; \Comment{\prevv: largest version smaller than $\gcts_i$} \If {$(\prevv == nil)$} \Comment{There exists no version with \ts value less than $\gcts_i$} \State lock $\glock_i$; return abort(i); \EndIf \State \Comment{\textbf{\getl}: obtain the list of reading transactions of $x[\prevv].rl$ whose $\gcts$ is greater than $\gcts_i$} \State $\lrl = \lrl \cup \getl(\gcts_i, x[\prevv].rl)$; \EndFor \Comment{$x \in wset\xspace_i$} \State $\relll = \lrl \cup T_i$; \Comment{Initialize relevant Lock List (\relll)} \ForAll {($T_k \in \relll$)} \State lock $\glock_k$ in pre-defined order; \Comment{Note: Since $T_i$ is also in $\relll$, $\glock_i$ is also locked} \EndFor \State \Comment{Verify if $\gval_i$ is false} \algstore{myalgtc} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \algrestore{myalgtc} \If {$(\gval_i == F)$} \State return abort(i); \EndIf \State $\abl = nil$ \Comment{Initialize abort read list (\abl)} \State \Comment{Among the transactions in $T_k$ in $\lrl$, either $T_k$ or $T_i$ has to be aborted} \ForAll {$(T_k \in \lrl)$} \If {$(\isab(T_k))$} \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted} \State continue; \EndIf \If {$(\gcts_i < \gcts_k) \land (\gstat_k == 
\texttt{live})$} \State \Comment{Transaction $T_k$ has lower priority and is not yet committed. So it needs to be aborted} \State $\abl = \abl \cup T_k$; \Comment{Store $T_k$ in \abl} \cmnt{ \algstore{sweta} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \algrestore{sweta} } \Else \Comment{Transaction $T_i$ has to be aborted} \State return abort(i); \EndIf \EndFor \cmnt { \algstore{tryc-break} \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:ap-tryc-cont} \caption{STM $\tryc()$: Continued} \begin{algorithmic}[1] \algrestore{tryc-break} \State \Comment{Ensure that $\vutl_i$ is less than \vltl of versions in $\nvl$} \ForAll {$(ver \in \nvl)$} \State $x$ = \tobj of $ver$; \State $\tutl_i = min(\tutl_i, x[ver].\vltl - 1)$; \EndFor } \State \Comment{Store the current value of the global counter as commit time and increment it} \State $\ct = \gtcnt.get\&Inc()$; \cmnt{ \ForAll {$(T_k \in \srl)$} \Comment{Iterate through $\srl$ to see if $T_k$ or $T_i$ has to aborted} \If {$(\isab(T_k))$} \State \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted} \State continue; \EndIf \If {$(\tltl_k \geq \tutl_i)$} \label{lin:tk-check} \Comment{Ensure that the limits do not cross for both $T_i$ \& $T_k$} \If {$(\gstat_k == live)$} \Comment{Check if $T_k$ is live} \If {$(\gits_i < \gits_k)$} \State \Comment{Transaction $T_k$ has lower priority and is not yet committed. So it needs to be aborted} \State $\abl = \abl \cup T_k$; \Comment{Store $T_k$ in \abl} \Else \Comment{Transaction $T_i$ has to be aborted} \State return abort(i); \EndIf \Comment{$(\gits_i < \gits_k)$} \Else \Comment{($T_k$ is committed. 
Hence, $T_i$ has to be aborted)} \State return abort(i); \EndIf \Comment{$(\gstat_k == live)$} \EndIf \Comment{$(\tltl_k \geq \tutl_i)$} \EndFor {$(T_k \in \srl)$} \State \Comment{At this point $T_i$ can't abort.} \State $\tltl_i = \tutl_i$; \label{lin:ti-updt} \State \Comment{Since $T_i$ can't abort, we can update $T_k$'s \tutl} \ForAll {$(T_k \in \srl)$} \If {$(\isab(T_k))$} \State \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted} \State continue; \EndIf \State /* The following line ensure that $\tltl_k \leq \tutl_k < \tltl_i$. Note that this does not cause the limits of $T_k$ to cross each other because of the check in \lineref{tk-check}.*/ \State $\tutl_k = min(\tutl_k, \tltl_i - 1)$; \EndFor } \ForAll {$T_k \in \abl$} \Comment{Abort all the transactions in \abl} \State $\gval_k = F$; \EndFor \cmnt { \algstore{tryc-break2} \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:tryc-cont2} \caption{STM $\tryc()$: Continued Again} \begin{algorithmic}[1] \algrestore{tryc-break2} } \State \Comment{Having completed all the checks, $T_i$ can be committed} \ForAll {$(x \in wset\xspace_i)$} \State $newTuple = \langle \gcts_i, wset\xspace_i[x].val, nil \rangle$; \Comment { Create new v\_tuple: \gcts, val, \rl for $x$} \If {($|x.vl| > k$)} \State replace the oldest tuple in $x.\vl$ with $newTuple$; \Comment{$x.\vl$ is ordered by time stamp} \Else \State add a $newTuple$ to $x.vl$ in sorted order; \EndIf \EndFor \Comment{$x \in wset\xspace_i$} \State $\gstat_i$ = \texttt{commit}; \State unlock all variables; \State return $\mathcal{C}$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:isab} \caption{$\isab(T_k)$: Verifies if $T_i$ is already aborted or its \gval flag is set to false implying that $T_i$ will be aborted soon} \begin{algorithmic}[1] \If {$(\gval_k == F) \lor (\gstat_k == \texttt{abort}) \lor (T_k \in \abl)$} \State return $T$; \Else \State return $F$; \EndIf \end{algorithmic} 
\end{algorithm} \cmnt { \begin{algorithm} \label{alg:ap-lowPri} \caption{$\lowp(T_k, T_i)$: Verifies if $T_k$ has lower priority than $T_i$ and is not already committed} \begin{algorithmic}[1] \If {$(\gits_i < \gits_k) \land (\gstat_k == \texttt{live})$} \State return $T$; \Else \State return $F$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:\lowp} \caption{$\lowp(T_k, T_i)$: Aborts lower priority transaction among $T_k$ and $T_i$} \begin{algorithmic}[1] \If {$(\gits_i < \gits_k)$} \State $\abl = \abl \cup T_k$; \Comment{Store lower priority $T_k$ in \abl} \Else \State return abort(i); \Comment{Abort $T_i$} \EndIf \end{algorithmic} \end{algorithm} } \begin{algorithm}[H] \label{alg:ap-abort} \caption{$abort(i)$: Invoked by various STM methods to abort transaction $T_i$. It returns $\mathcal{A}$} \begin{algorithmic}[1] \State $\gval_i = F$; $\gstat_i$ = \texttt{abort}; \State unlock all variables locked by $T_i$; \State return $\mathcal{A}$; \end{algorithmic} \end{algorithm} \section{Some Preliminary Results} \label{sec:ap-results} The graphs below were produced using a linked-list application to compare the performance of KSTM for different values of $k$. In the application chosen below, 90\% of the operations were lookups, and the remaining operations were inserts and deletes in a 9:1 ratio. Varying numbers of threads were generated, and each thread in turn generated 100 transactions.
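The role of $k$ in these experiments can be illustrated with a small sketch of the bounded version list used by \kstm (illustrative Python, not the actual implementation; class and method names are our assumptions): each \tobj{} keeps at most $k$ versions ordered by timestamp, the oldest version is evicted on overflow, and a read returns the version with the largest timestamp below the reader's \cts, aborting if no such version exists.

```python
import bisect

class BoundedVersionList:
    """At most k versions per t-object, kept sorted by timestamp;
    adding a version beyond the bound evicts the oldest one."""
    def __init__(self, k):
        self.k = k
        self.ts = [0]        # T0 initializes the object at timestamp 0
        self.vals = [None]

    def add_version(self, ts, val):
        i = bisect.bisect(self.ts, ts)   # keep both lists sorted by timestamp
        self.ts.insert(i, ts)
        self.vals.insert(i, val)
        if len(self.ts) > self.k:        # evict the oldest version
            self.ts.pop(0)
            self.vals.pop(0)

    def find_ls(self, cts):
        """Largest timestamp strictly less than cts (as in findls);
        returns None if no such version survives, forcing a read abort."""
        i = bisect.bisect_left(self.ts, cts)
        return None if i == 0 else (self.ts[i - 1], self.vals[i - 1])
```

With a larger $k$, a reader with a small \cts is more likely to still find a version below its timestamp, which is consistent with the fewer aborts (and higher operations/sec) observed for $k = 10, 20$ than for $k = 1$.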
\pgfplotsset{scaled x ticks=false} \begin{center} \begin{tikzpicture} \begin{axis}[ xlabel = Number of Transactions, ylabel=Operations/sec, enlargelimits=0.05, xtick = data, xticklabels={5000,7500,10000}, width=.47\textwidth ] \addplot coordinates {(5000,4.335) (7500,1.9351425 ) (10000, 1.06684) }; \addplot coordinates { (5000, 5.47465) (7500, 2.31684 ) (10000, 1.249) }; \addplot coordinates { (5000, 4.26628) (7500,2.15325) (10000, 1.31769) }; \legend{k=1, k=10,k=20} \end{axis} \end{tikzpicture} \end{center} As per the results obtained, the multi-version STM performs better than the single-version STM. This is because the multiple versions used in KSTM decrease the number of aborts per transaction, thereby effectively increasing the operations/sec performed. \begin{center} \begin{tikzpicture} \begin{axis}[ xlabel = Number of Transactions, ylabel= Commit Time(microseconds), enlargelimits=0.05, xtick = data, xticklabels={5000,7500,10000}, width=.47\textwidth ] \addplot coordinates {(5000,230679) (7500, 516758) (10000, 937346)}; \addplot coordinates { (5000,182660) (7500, 431622) (10000, 800638)}; \addplot coordinates {(5000,234396) (7500,464414) (10000,758903)}; \legend{k=1, k=10,k=20} \end{axis} \end{tikzpicture} \end{center} The commit time (time taken per transaction to commit) observed for KSTM ($k = 10$ here) is the least, since it is inversely proportional to the operations/sec. As the number of transactions increases, transactions need more versions to read from to attain higher concurrency, leading to fewer aborts. In the application chosen below, 50\% of the operations were lookups, and the remaining operations were inserts into and deletes from the linked list in a 9:1 ratio. This kind of setup has more read-write conflicts between the transactions involved when compared to the previous setup.
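The 50\% lookup mix can be sketched as a tiny operation generator (illustrative Python; a plausible way to draw the lookup / 9:1 insert-delete mix, not the actual benchmark harness):

```python
import random

def gen_op(rng, lookup_frac=0.5):
    """Draw one linked-list operation: a lookup with probability
    lookup_frac; the remainder splits 9:1 between inserts and deletes."""
    r = rng.random()
    if r < lookup_frac:
        return "lookup"
    rest = (r - lookup_frac) / (1.0 - lookup_frac)  # renormalize remainder
    return "insert" if rest < 0.9 else "delete"
```

Setting \texttt{lookup\_frac=0.9} gives the first workload above; \texttt{lookup\_frac=0.5} gives this second, more write-heavy one.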
\begin{center} \begin{tikzpicture} \begin{axis}[ xlabel = Number of Transactions, ylabel=Operations/sec, enlargelimits=0.05, xtick = data, xticklabels={5000,7500,10000}, width=.47\textwidth ] \addplot coordinates {(5000,1.97486) (7500, 0.737251) (10000, 0.498075)}; \addplot coordinates { (5000,1.84234) (7500, 0.838329) (10000, 0.492734) }; \addplot coordinates {(5000,2.07114) (7500,0.944222 ) (10000,0.488506 )}; \legend{k=1, k=10,k=20} \end{axis} \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture} \begin{axis}[ xlabel = Number of Transactions, ylabel= Commit Time(microseconds), enlargelimits=0.05, xtick = data, xticklabels={5000,7500,10000}, width=.47\textwidth ] \addplot coordinates {(5000,506365) (7500, 1.36E+06) (10000, 2.01E+06)}; \addplot coordinates { (5000,542787) (7500, 1.19E+06) (10000, 2.03E+06)}; \addplot coordinates {(5000,482826) (7500,1.06E+06) (10000,2.05E+06)}; \legend{k=1, k=10,k=20} \end{axis} \end{tikzpicture} \end{center} As per the graph, $k = 20$ gives the best operations/sec and the least commit time. Hence, having multiple versions (KSTM) performs better than the single-version STM in this setup too. \section{Proof of safety} \begin{lemma} \label{lem:tltl-edge} Consider a history $H$ in $\gen{\ksftm}$ with two transactions $T_i$ and $T_j$ such that both their G\_valid flags are true. If there is an edge from $T_i$ to $T_j$, then $\tltl_i < \tltl_j$. \end{lemma} \begin{proof} There are three types of possible edges in MVSG. \begin{enumerate} \item Real-time edge: Since transactions $T_i$ and $T_j$ are in real-time order, $\ct_i < \gcts_j$. As we know from \lemref{ti|tltl-comt}, $(\tltl_i \leq \ct_i)$. So, $(\tltl_i < \gcts_j)$.\\ We know from the STM $\begt(its)$ method that $\tltl_j = \gcts_j$.\\ Hence, $\tltl_i$ $<$ $\tltl_j$.
\item Read-from edge: Since transaction $T_i$ has committed and $T_j$ is reading from $T_i$, from \Lineref{new-tup} of $\tryc(T_i)$, $\tltl_i = \vltl_i$,\\ and from \Lineref{rd-tltl-inc} of STM $read(j, x)$, $\tltl_j = max(\tltl_j,\\ x[curVer]. \vltl + 1)$ $\Rightarrow$ $(\tltl_j > \vltl_i)$ $\Rightarrow$ $(\tltl_j > \tltl_i)$. \\ Hence, $\tltl_i$ $<$ $\tltl_j$. \item Version-order edge: Consider a triplet $w_j(x_j) r_k(x_j) w_i(x_i)$ in which there are two possibilities of version order: \begin{enumerate} \item i $\ll$ j $\Longrightarrow$ $\gwts_i < \gwts_j$ \\ There are two possibilities of commit order: \begin{enumerate} \item $\ct_i <_H \ct_j$: Since $T_i$ has committed before $T_j$, $\tltl_i = \vltl_i$. From \Lineref{tryc-tltl-inc} of $\tryc(T_j)$, $\vltl_i < \tltl_j$.\\ Hence, $\tltl_i$ $<$ $\tltl_j$. \item $\ct_j <_H \ct_i$: Since $T_j$ has committed before $T_i$, $\tltl_j = \vltl_j$. From \Lineref{tryc-ul-dec} of $\tryc(T_i)$, $\tutl_i < \vltl_j$. As we have assumed $\gval_i$ is true, $T_i$ will definitely execute \Lineref{ti-updt} of $\tryc(T_i)$, i.e., $\tltl_i = \tutl_i$.\\ Hence, $\tltl_i$ $<$ $\tltl_j$. \end{enumerate} \item j $\ll$ i $\Longrightarrow$ $\gwts_j < \gwts_i$ \\ Again, there are two possibilities of commit order: \begin{enumerate} \item $\ct_j <_H \ct_i$: Since $T_j$ has committed before $T_i$ and $T_k$ read from $T_j$, there can be two possibilities for $\gwts_k$. \begin{enumerate} \item $\gwts_k > \gwts_i$: This means that $T_k$ is in the largeRL of $T_i$. From \Lineref{addAbl-lar} to \Lineref{its-chk1} of $\tryc(i)$, the $\gval$ flag of either transaction $T_k$ or $T_i$ is set to false. If $T_i$ returns abort then this case is not considered in \lemref{tltl-edge}. Otherwise, as $T_j$ has already committed and $T_i$ will later execute \Lineref{new-tup} of $\tryc(T_i)$, we get $\tltl_j < \tltl_i$.\\ \item $\gwts_k < \gwts_i$: This means that $T_k$ is in the smallRL of $T_i$.
From \Lineref{rd-ul-dec} of $read(k, x)$, $\tutl_k < \vltl_i$, and from \Lineref{rd-tltl-inc} of $read(k, x)$, $\tltl_k > \vltl_j$. Here $T_j$ has already committed, so $\tltl_j = \vltl_j$. As we have assumed that $\gval_i$ is true, $T_i$ will definitely execute \Lineref{new-tup} of $\tryc(T_i)$, so $\tltl_i = \vltl_i$.\\ So, $\tutl_k < \tltl_i $ and $\tltl_k > \tltl_j$. Since the $\gval_k$ flag is true, $\tltl_k < \tutl_k$.\\ Hence, $\tltl_j < \tltl_k < \tutl_k < \tltl_i$.\\ Therefore, $\tltl_j < \tltl_k < \tltl_i$. \end{enumerate} \item $\ct_i <_H \ct_j$: Since $T_i$ committed before $T_j$, $\tltl_i = \vltl_i$. From \Lineref{tryc-ul-dec} of $\tryc(T_j)$, $\tutl_j < \vltl_i$, i.e., $\tutl_j < \tltl_i$. Here $T_k$ read from $T_j$. So, from \Lineref{rd-ul-dec} of $read(k, x)$, $\tutl_k < \vltl_i$ $\rightarrow$ $\tutl_k < \tltl_i$, and from \Lineref{rd-tltl-inc} of $read(k, x)$, $\tltl_k > \vltl_j$. As we have assumed that $\gval_j$ is true, $T_j$ will definitely execute \Lineref{new-tup} of $\tryc(T_j)$, so $\tltl_j = \vltl_j$.\\ Hence, $\tltl_j < \tltl_k < \tutl_k < \tltl_i$.\\ Therefore, $\tltl_j < \tltl_k < \tltl_i$. \end{enumerate} \end{enumerate} \cmnt{Due to acquiring the lock on each dataitem before creating a version So, let say $T_i$ created a version then release it and return commit. For both the transactions G\_valid flags are true. As we know from \lemref{tltl commit} $(\tltl_i \leq \ct_i)$. After creating a version by $T_i$, transaction $T_j$ wants to create a version of the same dataitem then definitely, $(\ct_i < \ct_j)$. Again from \lemref{tltl commit} on $T_j$, $(\tltl_j \leq \ct_j)$. \\ So, $\tltl_i$ $<$ $\tltl_j$. } \end{enumerate} \end{proof} \cmnt{ \begin{theorem} \label{thm:trans-com|abt} Transaction with lowest $\its$ value will eventually have the highest $\gwts$ value. \end{theorem} } \begin{theorem} Any history H gen(KSFTM) is local opaque iff, for a given version order $\ll_H$, MVSG(H,$\ll$) is acyclic.
\end{theorem} \begin{proof} We prove this by contradiction, so assume that MVSG(H,$\ll$) has a cycle. From \lemref{tltl-edge}, for any two transactions $T_i$ and $T_j$ whose G\_valid flags are both true, if there is an edge from $T_i$ to $T_j$ then $\tltl_i$ $<$ $\tltl_j$. By transitivity, for $k$ transactions $T_1, T_2, T_3, \ldots, T_k$ whose G\_valid flags are all true, a path $T_1$ $\rightarrow$ $T_2$ $\rightarrow$ $T_3$ $\rightarrow \ldots \rightarrow$ $T_k$ implies $\tltl_1$ $<$ $\tltl_2$ $<$ $\tltl_3$ $<$ $\ldots$ $<$ $\tltl_k$.\\ Now, by our assumption MVSG(H,$\ll$) has a cycle $T_1$ $\rightarrow$ $T_2$ $\rightarrow$ $T_3$ $\rightarrow \ldots \rightarrow$ $T_k$ $\rightarrow$ $T_1$, which implies $\tltl_1$ $<$ $\tltl_2$ $<$ $\tltl_3$ $<$ $\ldots$ $<$ $\tltl_k$ $<$ $\tltl_1$.\\ Hence $\tltl_1$ $<$ $\tltl_1$, which is impossible; so our assumption is wrong.\\ Therefore, MVSG(H,$\ll$) produced by KSFTM is acyclic. \end{proof} \textbf{\textit{M\_Order$_H$:}} This denotes the method order of a history H, in which the methods of transactions are intervals (consisting of the invocation and response of a method) rather than atomic points. Because a method is an interval, methods of different transactions can overlap. To prove the correctness \textit{(local opacity)} of our algorithm, we need to order the overlapping methods. Suppose two transactions $T_i$ and $T_j$ access a common resource (t-objects/$\glock$) or $\gtcnt$ through operations $op_i$ and $op_j$, respectively. If res($op_i$) $<_H$ inv($op_j$), then $op_i$ and $op_j$ are in real-time order in H, and the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. If the operations overlap and either access common t-objects or share $\glock$: \begin{enumerate} \item $read_i(x)$ and $read_j(x)$: If $read_i(x)$ acquires the lock on x before $read_j(x)$, then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$.
\item $read_i(x)$ and $\tryc_j()$: If they access common t-objects and, say, $read_i(x)$ acquires the lock on x before $\tryc_j()$, then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. If instead they do not access common t-objects but share $\glock$ and, say, $read_i(x)$ acquires the lock on $\glock_i$ before $\tryc_j()$ acquires the lock on $\relll$ (which consists of $\glock_i$ and $\glock_j$), then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. \item $\tryc_i()$ and $\tryc_j()$: If they access common t-objects and, say, $\tryc_i()$ acquires the lock on x before $\tryc_j()$, then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. If instead they do not access common t-objects but share $\glock$ and, say, $\tryc_i()$ acquires the lock on $\relll_i$ before $\tryc_j()$, then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. \end{enumerate} If the operations overlap and access different t-objects but share the $\gtcnt$ counter: \begin{enumerate} \item $\begt_i$ and $\begt_j$: Both $\begt$ operations access the shared counter variable $\gtcnt$. If $\begt_i$ executes $\gtcnt.get\&Inc()$ before $\begt_j$, then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. \item $\begt_i$ and $\tryc(j)$: If $\begt_i$ executes $\gtcnt.get\&Inc()$ before $\tryc(j)$, then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. \end{enumerate} \textit{Linearization:} The histories generated by STMs are generally not sequential because the operations of the transactions overlap. The correctness of STMs is defined on sequential histories; in order to show that the histories generated by our algorithm are correct, we have to consider an equivalent sequential history.
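The theorem above reduces local opacity to checking that MVSG(H,$\ll$) has no cycle. As a quick illustration of such an acyclicity check (this is not the paper's implementation; the transaction names and edge lists are hypothetical), a topological-sort-based test can be sketched as follows:

```python
# Sketch: checking acyclicity of a multi-version serialization graph (MVSG)
# via Kahn's topological sort. Transactions are vertices; real-time,
# read-from, and version-order edges are supplied as (Ti, Tj) pairs.
from collections import defaultdict, deque

def mvsg_is_acyclic(transactions, edges):
    indeg = {t: 0 for t in transactions}
    adj = defaultdict(list)
    for ti, tj in edges:
        adj[ti].append(tj)
        indeg[tj] += 1
    queue = deque(t for t, d in indeg.items() if d == 0)
    drained = 0
    while queue:
        t = queue.popleft()
        drained += 1
        for nxt in adj[t]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    # All vertices drained iff the graph has no cycle.
    return drained == len(transactions)

# T1 -> T2 -> T3 is serializable; adding T3 -> T1 creates exactly the kind
# of cycle the proof rules out (it would force tltl_1 < tltl_1).
print(mvsg_is_acyclic(["T1", "T2", "T3"], [("T1", "T2"), ("T2", "T3")]))
print(mvsg_is_acyclic(["T1", "T2", "T3"],
                      [("T1", "T2"), ("T2", "T3"), ("T3", "T1")]))
```

The cyclic case corresponds to the contradiction $\tltl_1 < \tltl_1$ derived in the proof.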
We have enough information to order the overlapping methods; after this ordering, the operations form an equivalent sequential history, and this total order of the operations is called a linearization of the history.\\ \textit{Operation graph (OPG):} Consider each operation as a vertex, with edges as below: \begin{enumerate} \item Real-time edge: If the response of operation $op_i$ happens before the invocation of operation $op_j$, i.e., rsp($op_i$) $<_H$ inv($op_j$), then there exists a real-time edge $op_i$ $\rightarrow$ $op_j$. \item Conflict edge: It is based on $L\_Order_H$, which depends on three conflicts: \begin{enumerate} \item Common \textit{t-object}: If two operations $op_i$ and $op_j$ overlap and access a common \textit{t-object x}, and $op_i$ acquires the lock on x first, then $L\_Order.op_i$(x) $<_H$ $L\_Order.op_j$(x); so the conflict edge is $op_i$ $\rightarrow$ $op_j$. \item Common $\gval$ flag: If two operations $op_i$ and $op_j$ overlap but access a common $\gval$ flag instead of a \textit{t-object}, and $op_i$ acquires the lock on $\gval_i$ first, then $L\_Order.op_i$(x) $<_H$ $L\_Order.op_j$(x); so the conflict edge is $op_i$ $\rightarrow$ $op_j$. \end{enumerate} \item Common $\gtcnt$ counter: If two operations $op_i$ and $op_j$ overlap but access the common $\gtcnt$ counter instead of a \textit{t-object}, and $op_i$ accesses the $\gtcnt$ counter before $op_j$, then $L\_Order.op_i$(x) $<_H$ $L\_Order.op_j$(x); so the conflict edge is $op_i$ $\rightarrow$ $op_j$. \end{enumerate} \cmnt{ \begin{lemma} Any history H gen(KSFTM) follows strict partial order of all the locks in H ($lockOrder_H$) so, operation graph (OPG(H)) is acyclic. i.e. \end{lemma} \begin{enumerate} \item \textrm{If ($p_i$, $p_j$) is an edge in OPG, then $Fpu_i$($\alpha$) $<$ $Lpl_j$($\alpha$) for some data item $\alpha$ and two operations $p_i$($\alpha$), $p_j$($\alpha$) in conflict.} \item \textrm{If ($p_1$,$p_2$, ...
,$p_n$) is a path in OPG, n $\geq$ 1, then $Fpu_1$($\alpha$) $<$ $Lpl_n$($\gamma$) for two data items $\alpha$ and $\gamma$ as well as operations $p_1$($\alpha$) and $p_n$($\gamma$).} \item \textrm{OPG is acyclic.} \end{enumerate} \begin{proof} \textrm{We assume variables $\alpha$, $\beta$ and $\gamma$ can be data item or G\_Valid.} \begin{enumerate} \item \textrm{If ($p_i$, $p_j$) is an edge in operation graph OPG, then OPG comprises two steps $p_i$($\alpha$) and $p_j$($\alpha$) in conflict such that $p_i$($\alpha$) $<$ $p_j$($\alpha$). According to (If $o_i$(x)(o $\in$\{r, w\}) occurs in graph, then so do $ol_i$($\alpha$) and $ou_i$($\alpha$) with the sequencing $ol_i$($\alpha$) $<$ $o_i$($\alpha$) $<$ $ou_i$($\alpha$).), this implies $pl_i$($\alpha$) $<$ $p_i$($\alpha$) $<$ $pu_i$($\alpha$) and $pl_j$($\alpha$) $<$ $p_j$($\alpha$) $<$ $pu_j$($\alpha$). According to (If some steps $p_i$($\alpha$) and $p_j$($\alpha$) from graph are in conflict, then either $Fpu_i$($\alpha$) $<$ $Lpl_j$($\alpha$) or $Fpu_j$($\alpha$) $<$ $Lpl_i$($\alpha$) holds.), we moreover find (a) $Fpu_i$($\alpha$) $<$ $Lpl_j$($\alpha$) or (b) $Fpu_j$($\alpha$) $<$ $Lpl_i$($\alpha$). Case (b) means $Lpl_j$($\alpha$) $<$ $p_j$($\alpha$) $<$ $pu_j$($\alpha$) $<$ $pl_i$($\alpha$) $<$ $p_i$($\alpha$) $<$ $Fpu_i$($\alpha$) and hence $p_j$($\alpha$) $<$ $p_i$($\alpha$), a contradiction to $p_i$($\alpha$) $<$ $p_j$($\alpha$). Thus, $Fpu_i$($\alpha$) $<$ $Lpl_j$($\alpha$).} \item \textrm{In this case we are considering transitive order. In order to prove it, we are taking induction on n (1): If ($p_1$, $p_2$) is an edge in OPG, there is a conflict between $p_1$ and $p_2$. Thus, $Fpu_1$($\alpha$) $<$ $Lpl_2$($\alpha$), i.e., $p_1$ unlocks $\alpha$ before $p_2$ locks $\alpha$. In other words, when $p_2$ sets a lock, $p_1$ has already released one. Now we are assuming it is true for n transactions on the path ($p_1$,$p_2$, ... ,$p_n$) in OPG and we need to prove, it holds for path n+1. 
The inductive assumption now tells us that there are data items $\alpha$ and $\beta$ such that $Fpu_1$($\alpha$) $<$ $Lpl_n$($\beta$) in S . Since ($p_n$, $p_{n+1}$) is an edge in OPG, it follows from (1) above that for operations $p_n$($\gamma$) and $p_{n+1}$($\gamma$) in conflict we have $Fpu_n$($\gamma$) $<$ $Lpl_{n+1}$($\gamma$). According to (If $p_i$($\alpha$) and $p_i$($\gamma$) are in graph, then $pl_i$($\alpha$) $<$ $pu_i$($\gamma$), i.e., every lock operation occurs before every unlock operation of the same transaction), this implies $pl_n$($\beta$) $<$ $pu_n$($\gamma$) and hence $Fpu_1$($\alpha$) $<$ $Lpl_{n+1}$($\gamma$).} \item \textrm{Proof by contradiction: Assuming OPG is cyclic. So there exists a cycle ($p_1$,$p_2$, ... ,$p_n$, $p_1$), n $\geq$ 1. By using (2), $Fpu_1$($\alpha$) $<$ $Lpl_1$(($\gamma$) for operations $p_1$($\alpha$), $p_1$($\gamma$), a contradiction to the KSFTM (If $p_i$($\alpha$) and $p_i$(($\gamma$) are in OPG, then $pl_i$($\alpha$) $<$ $pu_i$(($\gamma$), i.e., every lock operation occurs before every unlock operation of the same transaction).} \end{enumerate} \end{proof} } \begin{lemma} All the locks in a history H gen(KSFTM) ($L\_Order_H$) follow a strict partial order, so the operation graph OPG(H) is acyclic. If ($op_i$$\rightarrow$$op_j$) is in OPG, then at least one of the following definitely holds: ($Fpu_i$($\alpha$) $<$ $Lpl\_op_j$($\alpha$)) $\cup$ ($access.\gtcnt_i$ $<$ $access.\gtcnt_j$) $\cup$ ($Fpu\_op_i$($\alpha$) $<$ $access.\gtcnt_j$) $\cup$ ($access.\gtcnt_i$ $<$ $Lpl\_op_j$($\alpha$)). Here, $\alpha$ can be either a t-object or $\gval$. \end{lemma} \begin{proof} We proceed by induction: assume there exists a path from $op_1$ to $op_n$ and an edge from $op_n$ to $op_{n+1}$. As described above, while constructing OPG(H) we need to consider three types of edges.
We consider them one by one: \begin{enumerate} \item Real-time edge from $op_n$ to $op_{n+1}$: \begin{enumerate} \item $op_{n+1}$ is a locking method: Here we consider all the possible paths from $op_1$ to $op_n$: \begin{enumerate} \item ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)): Here, ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ So, ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)) $<$ ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)). \item ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)): Here, ($access.\gtcnt_n$ $<$ $Ll\_op_{n+1}$($\alpha$)). Recall that any method that both locks and accesses the common counter locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.,\\ ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)). \item ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n}$): Here, ($access.\gtcnt_{n}$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ So, ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n}$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ Hence, ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n+1}$($\alpha$)). \item ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n}$): Here, ($access.\gtcnt_{n}$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ So, ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n}$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)). \item ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)): Here, ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ So, ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)) $<$ ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ Hence, ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n+1}$($\alpha$)). \item ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)): Here, ($access.\gtcnt_n$ $<$ $Ll\_op_{n+1}$($\alpha$)).
Recall that any method that both locks and accesses the common counter locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.,\\ ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n+1}$($\alpha$)). \end{enumerate} \item $op_{n+1}$ is a non-locking method: Again, we consider all the possible paths from $op_1$ to $op_n$: \begin{enumerate} \item ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)): Here, ($access.\gtcnt_n$) $<$ ($access.\gtcnt_{n+1}$).\\ Recall that any method that both locks and accesses the common counter locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.,\\ ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n+1}$). \item ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)): Here, ($Fu\_op_n$($\alpha$) $<$ ($access.\gtcnt_{n+1}$). \\ So, ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)) $<$ ($Fu\_op_n$($\alpha$) $<$ ($access.\gtcnt_{n+1}$).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n+1}$)). \item ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n}$): Here, ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$).\\ So, ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$).\\ Hence, ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n+1}$).
\item ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n}$): Here, ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$).\\ So, ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n+1}$). \item ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)): Here, ($access.\gtcnt_n$) $<$ ($access.\gtcnt_{n+1}$).\\ Recall that any method that both locks and accesses the common counter locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.,\\ ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n+1}$). \item ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)): Here, ($Fu\_op_n$($\alpha$) $<$ ($access.\gtcnt_{n+1}$). \\ So, ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)) $<$ ($Fu\_op_n$($\alpha$) $<$ ($access.\gtcnt_{n+1}$). \\ Hence, ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n+1}$). \end{enumerate} \end{enumerate} \item Conflict edge from $op_n$ to $op_{n+1}$: \begin{enumerate} \item ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)): Here, ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)). See case 1.(a).i. \item ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n}$): Here, ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)). Recall that any method that both locks and accesses the common counter locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.,\\ ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n+1}$($\alpha$)). \item ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n}$): Here, ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)).
Recall that any method that both locks and accesses the common counter locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.,\\ ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)). \item ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)): Here, ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ See case 1.(a).v. \end{enumerate} \item Common-counter edge from $op_n$ to $op_{n+1}$: \begin{enumerate} \item ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)): Here, ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$). Recall that any method that both locks and accesses the common counter locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.,\\ ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n+1}$). \item ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n}$): Here, ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$). See case 1.(b).iii. \item ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n}$): Here, ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$). See case 1.(b).iv. \item ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)): Here, ($access.\gtcnt_n$) $<$ ($access.\gtcnt_{n+1}$). See case 1.(b).v. \end{enumerate} \end{enumerate} \cmnt{ then $Fpu_i$($\alpha$) $<$ $Lpl_j$($\alpha$). Conflict edge is based on $M\_Order_H$.\\ We are proving by contradiction while assuming OPG(H) is cyclic. So, $op_1$ $\rightarrow$ $op_2$ $\rightarrow$ $op_3$ $\rightarrow$....$\rightarrow$ $op_k$ $\rightarrow$ $op_1$ that implies either ($Fpu_1$($\alpha$) $<$ $Lpl_2$($\alpha$) $<$ $Fpu_2$($\alpha$) $<$ $Lpl_3$($\alpha$) $<$ .... $<$ $Fpu_{k-1}$($\alpha$) $<$ $Lpl_k$($\alpha$) $<$ $Fpu_k$($\alpha$) $<$ $Lpl_1$($\alpha$)) or ($access.\gtcnt_1$ $<$ $access.\gtcnt_2$ $<$ $access.\gtcnt_3$ $<$ .... $<$ $access.\gtcnt_{k-1}$ $<$ $access.\gtcnt_k$ $<$ $access.\gtcnt_1$).
Hence from above assumption, either ($Fpu_1$($\alpha$) $<$ $Lpl_1$($\alpha$)) or ($access.\gtcnt_1$ $<$ $access.\gtcnt_1$). But ($Fpu_1$($\alpha$) $<$ $Lpl_1$($\alpha$)) is impossible because methods are follwing 2PL order of locking and ($access.\gtcnt_1$ $<$ $access.\gtcnt_1$) is never be true because of same method $op_1$. Hence, all the above cases are impossible. So, our assumption is wrong. } Therefore, OPG(H, $M\_Order$) produced by KSFTM is acyclic. \end{proof} \begin{lemma} \label{lem:val-hs} Any history H gen(KSFTM) with a linearization $\alpha$ that respects $M\_Order_H$ is valid, i.e., (H, $\alpha$) is valid. \end{lemma} \begin{proof} From the definition of a \textit{valid history}: if every read operation of H reads from a previously committed transaction $T_j$, then H is valid.\\ To prove that H is valid, we analyze read(i,x). From \Lineref{rd-curver10}, it returns the version with the largest \ts value less than $\gwts_i$ that has already been committed, and returns its value successfully. If such a version, created by a transaction $T_j$, is found, then $T_i$ reads from $T_j$. Otherwise, if there is no version whose \wts is less than $T_i$'s \wts, then $T_i$ returns abort.\\ Now consider the base case: read(i,x) belongs to the first transaction $T_1$ and no transaction has created a version yet. As we have assumed, there always exists a transaction $T_0$ that has, by default, created a version of every t-object. Hence, $T_1$ reads from the committed transaction $T_0$.\\ So, every read returns the version with the largest \ts value less than $\gwts_i$ that has already been committed. Hence, (H, $\alpha$) is valid. \end{proof} \begin{lemma} \label{lem:rt-hs} For any history H gen(KSFTM) with linearizations $\alpha$ and $\beta$ that both respect $M\_Order_H$, i.e., $M\_Order_H \subseteq \alpha$ and $M\_Order_H \subseteq \beta$, we have $\prec_{(H, {\alpha})} ^{RT}$= $\prec_{(H, {\beta})}^{RT}$.
\end{lemma} \begin{proof} Consider a history H gen(KSFTM) in which two transactions $T_i$ and $T_j$ are in real-time order, which respects $M\_Order_H$, i.e., $\tryc_i$ $<$ $\begt_j$. As $\alpha$ and $\beta$ are linearizations of H, $\tryc_i$ $<_{(H,{\alpha})}$ $\begt_j$ and $\tryc_i$ $<_{(H,{\beta})}$ $\begt_j$. Hence, in both linearizations $T_i$ commits before the begin of $T_j$. So, $\prec_{(H, {\alpha})} ^{RT}$= $\prec_{(H, {\beta})}^{RT}$. \end{proof} \begin{lemma} For any history H gen(KSFTM) with linearizations $\alpha$ and $\beta$ that both respect $M\_Order_H$, i.e., $M\_Order_H \subseteq \alpha$ and $M\_Order_H \subseteq \beta$, $(H, {\alpha})$ is local opaque iff $(H, {\beta})$ is local opaque. \end{lemma} \begin{proof} As $\alpha$ and $\beta$ are linearizations of a history H gen(KSFTM), from \lemref{val-hs} (H, $\alpha$) and (H, $\beta$) are valid histories. Now assume (H, $\alpha$) is local opaque; we need to show that (H, $\beta$) is also local opaque. Since (H, $\alpha$) is local opaque, there exists a legal t-sequential history S (with respect to each aborted transaction and the last committed transaction, while considering only committed transactions) which is equivalent to ($\overline{H}$, $\alpha$). As $\beta$ is a linearization of H, ($\overline{H}$, $\beta$) is equivalent to some legal t-sequential history S. From the definition of local opacity, $\prec_{(H, {\alpha})} ^{RT} \subseteq \prec_S^{RT}$. From \lemref{rt-hs}, $\prec_{(H, {\alpha})} ^{RT}$= $\prec_{(H, {\beta})}^{RT}$, which implies $\prec_{(H, {\beta})}^{RT} \subseteq \prec_S^{RT}$. Hence, $(H, {\beta})$ is local opaque.\\ For the other direction, assume (H, $\beta$) is local opaque; we need to show that (H, $\alpha$) is also local opaque. The same argument applies with $\alpha$ and $\beta$ exchanged.\\ Hence, $(H, {\alpha})$ is local opaque iff $(H, {\beta})$ is local opaque.
\end{proof} \begin{theorem} \label{thm:ap-ksftm-lo} Any history generated by \ksftm{} is \lopq. \end{theorem} \begin{proof} For proving this, we consider a sequential history $H$ generated by \ksftm. We define the version order $\ll_{\vt}$: for two versions $v_i, v_j$ it is defined as $(v_i \ll_{\vt} v_j) \equiv (v_i.\vt < v_j.\vt)$ \noindent Using this version order $\ll_{\vt}$, we can show that all the sub-histories in $\shset{H}$ are acyclic. \end{proof} \noindent Since the histories generated by \ksftm are \lopq, we get that they are also \stsble. \begin{corollary} \label{thm:ap-ksftm-stsble} Any history generated by \ksftm{} is \stsble. \end{corollary} \ignore{ \begin{lemma} Any history H gen(KSFTM) is deadlock-free. \end{lemma} \begin{proof} In our algorithm, each transaction $T_i$ is following lock order in every method ($read(x,i)$ and $tryc()$) that are locking t-object first then $\glock$. Since transaction $T_i$ is acquiring locks on t-objects in predefined order at \Lineref{lockxs} of $\tryc()$ and it is also following predefined locking order of all conflicting $\glock$ including itself at \Lineref{lockall} of $\tryc()$. Hence, history H gen(KSFTM) is deadlock-free. \end{proof} \begin{lemma} \label{lem:its-finitewts} Consider two histories $H1, H2$ in $\gen{\ksftm}$ such that $H2$ is an extension of $H1$. Let $T_i$ and $T_j$ are transactions in $\live{H1}$ such that $T_i$ has the lowest $\its$ among all the transactions in \textbf{live} and \textbf{$\rab$} set and $\gval_i$ flag is true in $H1$. Suppose $T_i$ is aborted in $H2$. Then the number of transaction $T_j$ in $\txns{H1}$ with higher $\twts{j}$ than $\twts{i}$ is finite in $H2$. 
Formally, $ \langle H1, H2, T_i: ((\{H1, H2\} \subset \gen{\ksftm}) \land (H1 \sqsubset H2) \land (T_i \in \live{H1}) \land (\tits{i} \text{ is the smallest among } ((\live{H1} \land \rab(H1)) ) \land (\htval{i}{H1} = T) \land (T_i \in \aborted{H2})) \implies (\exists T_j \in \txns{H1}: (\htwts{i}{H1} < \htwts{j}{H1}) \land$ (such $T_j$ are finite in {H2})) $\rangle$. \end{lemma} \begin{proof} As we observed from \lemref{wts-its}, $T_i$ Will terminate within 2L range. So everytime ($\tits{i} + 2L \geq \tits{j}$) transactions with higher $twts$ cause $T_i$ to abort. Let say, there are m such transactions within 2L range called $T_j$. Then in worst case, $T_i$ will abort maximum m times because on every reties atleast one of transaction from $T_j$ will definitely commit and cause $T_i$ to abort. On abort, when $T_i$ retries again it retains same $its$ but higher $wts$. So, after committing all such $T_j$, $T_i$ will be the only transaction with lowest $its$ among all the transactions in \textbf{live} and \textbf{$\rab$} set (from lemma assumption) and highest $wts$ among all the transactions in \textbf{live}, \textbf{committed} and \textbf{$\rab$} set. Hence, the maximum such $T_j$ with higher $wts$ than $wts_i$ is finite that causes $T_i$ to abort. So, the number of such $T_j$ in $\txns{H1}$ with higher $\twts{j}$ than $\twts{i}$ is finite in $H2$. \end{proof} \begin{lemma} \label{lem:its-hwts} Consider two histories $H1, H2$ in $\gen{\ksftm}$ such that $H2$ is an extension of $H1$. Let $T_i$ be a transaction in $\live{H1}$ such that $T_i$ has the lowest \its among all the transactions in \textbf{live} and \textbf{$\rab$} set and $\gval_i$ flag is true in $H1$. Suppose $T_i$ is having highest $wts$ in $H2$. Then $T_i$ will definitely commit in $H2$. So, KSFTM ensures starvation-freedom. 
Formally, $ \langle H1, H2, T_i: ((\{H1, H2\} \subset \gen{\ksftm}) \land (H1 \sqsubset H2) \land (T_i \in \live{H1}) \land (\tits{i} \text{ is the smallest among } ((\live{H1} \land \rab(H1)) ) \land (\htval{i}{H1} = T) \land highest(\htwts{i}{H1})$ $\implies$ ($\exists$ $T_i$ is committed in {H2})) $\rangle$. \end{lemma} \begin{proof} From \lemref{its-finitewts}, transaction $T_i$ having lowest $its$ among all the transactions in \textbf{live} and \textbf{$\rab$} set. Then the number of transaction $T_j$ in $\txns{H1}$ with higher $\twts{j}$ than $\twts{i}$ is finite in $H2$. So, for each transaction $T_i$ there eventually exists a global state in which it has the lowestest $its$ and highest $wts$. In that state and in all other future global states (in which $T_i$ is still live), $T_i$ can not be aborted. So, $T_i$ will definitely commit in $H2$. Hence, KSFTM ensures starvation-freedom. \end{proof} } \section{Discussion and Conclusion} \label{sec:conc} In this paper, we propose a $K$-version \emph{\stf} STM system, \emph{\ksftm}. The algorithm ensures that if an \emph{aborted} transaction is retried successively, then it will eventually commit. The algorithm maintains $K$ versions, where $K$ can range from one to infinity. For correctness, we show that \ksftm{} satisfies strict-serializability \cite{Papad:1979:JACM} and local opacity \cite{KuzSat:NI:ICDCN:2014, KuzSat:NI:TCS:2016}. To the best of our knowledge, this is the first work to explore \emph{\stfdm} with \mvstm{s}. Our experiments show that \ksftm performs better than single-version STMs (ESTM, NOrec STM) under high contention, and also better than the single-version \emph{\stf} STM \svsftm developed based on the principle of priority. On the other hand, its performance is comparable to or slightly worse than the multi-version STM \pkto (by around 2\%). This is the cost of the overhead required to achieve \emph{\stfdm}, which we believe is a marginal price.
In this document, we have not considered a transactional solution based on two-phase locking (2PL) and its multi-version variants \cite{WeiVoss:2002:Morg}. With a carefully designed 2PL solution, one can ensure that none of the transactions abort \cite{WeiVoss:2002:Morg}. But this requires advance knowledge of the code of the transactions, which may not always be available to the STM library. Without such knowledge, it is possible that a 2PL solution deadlocks and causes further aborts, which will raise the issue of \emph{\stfdm} again. \noindent Since we have considered \stsble as one of the \emph{correctness-criteria}, this algorithm can be extended to databases as well. In fact, to the best of our knowledge, there has been no prior work on \emph{\stfdm} in the context of database concurrency control. \section{The Working of \ksftm Algorithm} \label{sec:idea} In this section, we propose \emph{K-version \stf STM} or \emph{\ksftm} for a given parameter $K$. Here $K$ is the number of versions of each \tobj and can range from 1 to $\infty$. When $K$ is 1, it boils down to a single-version \emph{\stf} STM. If $K$ is $\infty$, then \ksftm uses unbounded versions and needs a separate garbage-collection mechanism to delete old versions, like other \mvstm{s} proposed in the literature \cite{Kumar+:MVTO:ICDCN:2014,LuScott:GMV:DISC:2013}. We denote \ksftm using unbounded versions as \emph{\mvsftm} and \mvsftm with garbage collection as \emph{\mvsftmgc}. Next, we describe some \emph{\stfdm} preliminaries in \subsecref{prelim} to explain the working of the \ksftm algorithm. To explain the intuition behind the \ksftm algorithm, we start with a modification of the \mvto \cite{BernGood:1983:MCC:TDS,Kumar+:MVTO:ICDCN:2014} algorithm in \subsecref{mvto}. We then make a sequence of modifications to it to arrive at the \ksftm algorithm.
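The multi-version intuition inherited from \mvto can be sketched as follows. This is only an illustrative sketch, not KSFTM itself; the names (read\_version, versions, rts) are invented for the example.

```python
# Sketch: an MVTO-style read over a version list. Each version is a pair
# (wts, value); a reader with timestamp rts returns the version with the
# largest wts below rts, or aborts if every retained version is too new.
def read_version(versions, rts):
    older = [v for v in versions if v[0] < rts]
    if not older:
        return None  # abort: no suitable version retained
    return max(older, key=lambda v: v[0])

# With K = 1 only the newest version (wts = 10) survives, so a reader with
# rts = 7 must abort; with K = 3 it can still read the wts = 5 version.
k1_versions = [(10, "c")]
k3_versions = [(2, "a"), (5, "b"), (10, "c")]
print(read_version(k1_versions, 7))  # None -> abort
print(read_version(k3_versions, 7))  # (5, 'b')
```

This illustrates why increasing $K$ reduces aborts of transactions with older timestamps, which is the starting point for the modifications described next.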
\subsection{\emph{Starvation-Freedom} Preliminaries} \label{subsec:prelim} In this section, we start with the definition of \emph{\stfdm}. Then we describe how transactions are invoked by the application. Next, we describe the data structures used by the algorithms. \begin{definition} \label{defn:stf} \textbf{Starvation-Freedom:} An STM system is said to be \stf if, whenever a thread invoking a non-parasitic transaction $T_i$ gets the opportunity to retry $T_i$ on every abort (due to the presence of a fair scheduler), $T_i$ will eventually commit. \end{definition} As explained by Herlihy \& Shavit \cite{HerlihyShavit:Progress:Opodis:2011}, a fair scheduler implies that no thread is forever delayed or crashed. Hence, with a fair scheduler, we get that if a thread acquires locks then it will eventually release them. Thus a thread cannot block other threads from progressing. \ignore{ \noindent \textbf{\stfdm:} A STM system is said to be \stf if a thread invoking a transaction $T_i$ gets the opportunity to retry $T_i$ on every abort due to the presence of a fair scheduler then $T_i$ will eventually commit (as described in \secref{intro}). As explained by Herlihy \& Shavit \cite{HerlihyShavit:AMP:Book:2012}, a fair scheduler implies that no thread is forever delayed or crashed. Hence with this assumption of a fair scheduler, we get that if in the course of execution a thread acquires locks then it will eventually release the locks. Since it is not forever delayed, it cannot block out other threads from progressing. } \vspace{1mm} \noindent \textbf{Assumption about the Scheduler:} For the \stf algorithm \ksftm (described in \subsecref{ksftm}) to work correctly, we make the following assumption about the fair scheduler: \begin{assumption} \label{asm:bdtm} \textbf{Bounded-Termination}: For any transaction $T_i$, invoked by a thread $Th_x$, the fair system scheduler\xspace ensures, in the absence of deadlocks, that $Th_x$ is given sufficient time on a CPU (and memory, etc.)
such that $T_i$ terminates (either commits or aborts) in bounded time. \end{assumption} While the bound for each transaction may be different, we use $L$ to denote the maximum bound. In other words, in time $L$, every transaction will either abort or commit due to the absence of deadlocks. There are different ways to satisfy the scheduler requirement. For example, a round-robin scheduler which provides each thread equal amount of time in any window satisfies this requirement as long as the number of threads is bounded. In a system with two threads, even if a scheduler provides one thread 1\% of CPU and another thread 99\% of the CPU, it satisfies the above requirement. On the other hand, a scheduler that schedules the threads as `$T_1, T_2, T_1, T_2, T_2, T_1, T_2$, $T_2, T_2, T_2$, $T_1, T_2, T_2$, $T_2, T_2, T_2, T_2, T_2, T_2, T_1, T_2 (16 times) $' does not satisfy the above requirement. This is due to the fact that over time, thread 1 gets infinitesimally smaller portion of the CPU and, hence, the time required for it to complete (commit or abort) will continue to increase over time. In our algorithm, we will ensure that it is deadlock free using standard techniques from the literature. In other words, each thread is in a position to make progress. We assume that the scheduler provides sufficient CPU time to complete (either commit or abort) within a bounded time. As explained by Herlihy \& Shavit \cite{HerlihyShavit:Progress:Opodis:2011}, a fair scheduler implies that no thread is forever delayed or crashed. Hence with a fair scheduler, we get that if a thread acquires locks then it will eventually release the locks. Thus a thread cannot block out other threads from progressing. \ignore{ \noindent \textbf{\stfdm:} A STM system is said to be \emph{\stf} if a thread invoking a transaction $T_i$ gets the opportunity to retry $T_i$ on every abort due to the presence of a fair scheduler then $T_i$ will eventually commit (as described in \secref{intro}). 
As explained by Herlihy \& Shavit \cite{HerlihyShavit:AMP:Book:2012}, a fair scheduler implies that no thread is forever delayed or crashed. Hence with this assumption of a fair scheduler, we get that if in the course of execution a thread acquires locks then it will eventually release the locks. Since it is not forever delayed, it cannot block out other threads from progressing. } \noindent \textbf{Transaction Invocation:} Transactions are invoked by threads. Suppose a thread $Th_x$ invokes a transaction $T_i$. If $T_i$ gets \emph{aborted}, $Th_x$ will reissue it as a new incarnation of $T_i$, say $T_j$. The thread $Th_x$ will continue to invoke new \inc{s} of $T_i$ until an \inc commits. When the thread $Th_x$ invokes a transaction, say $T_i$, for the first time, the STM system assigns $T_i$ a unique timestamp called the \emph{current timestamp or \cts}. If $T_i$ aborts and retries as $T_j$, its \cts will change. However, in this case, the thread $Th_x$ will also pass the \cts value of the first incarnation ($T_i$) to $T_j$. By this, $Th_x$ informs the STM system that $T_j$ is not a new invocation but an \inc of $T_i$. We denote the \cts of $T_i$ (the first \inc) as the \emph{Initial Timestamp or \its} for all the \inc{s} of $T_i$. Thus, the invoking thread $Th_x$ passes $\tcts{i}$ to all the \inc{s} of $T_i$ (including $T_j$), and hence for $T_j$, $\tits{j} = \tcts{i}$. The transaction $T_j$ is associated with the timestamps $\langle \tits{j}, \tcts{j} \rangle$. For $T_i$, which is the initial \inc, its \its and \cts are the same, i.e., $\tits{i} = \tcts{i}$. For simplicity, we use the notation that for transaction $T_j$, $j$ is its \cts, i.e., $\tcts{j} = j$.
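The \its/\cts bookkeeping described above can be sketched in a few lines of Python. This is only a toy, single-threaded illustration; the names (\texttt{Transaction}, \texttt{run\_with\_retries}) are ours and do not appear in the algorithms of this paper.

```python
import itertools

# Global counter playing the role of the atomic timestamp counter (toy sketch).
_counter = itertools.count(1)

class Transaction:
    """One incarnation of a transaction, tagged with <its, cts>."""
    def __init__(self, its=None):
        self.cts = next(_counter)                    # fresh current timestamp
        self.its = self.cts if its is None else its  # its = cts of first incarnation

def run_with_retries(body):
    """Keep invoking new incarnations until one commits; all share the same ITS."""
    its = None
    while True:
        t = Transaction(its)
        its = t.its              # pass the first incarnation's cts to every retry
        if body(t):              # body returns True when the incarnation commits
            return t
```

Note how every retry gets a fresh, larger \cts while the \its stays pinned to the first incarnation's \cts, which is exactly what lets the system recognize long-waiting transactions.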
\begin{figure}[H] \centerline{\scalebox{0.50}{\input{figs/vlrl.pdf_t}}} \captionsetup{justification=centering} \caption{Data Structures for Maintaining Versions} \label{fig:tobj} \end{figure} We also assume that, in the absence of other concurrent conflicting transactions, every transaction will commit. In other words, if a transaction executes in a system where no other concurrent conflicting transaction is present, then it will not self-abort. If transactions can self-abort, then providing \emph{\stfdm} is impossible. \noindent \textbf{Common Data Structures and STM Methods:} Here we describe the common data structures used by all the algorithms proposed in this section. For each \tobj, the algorithms maintain multiple versions in a $version$-$list$ (or \emph{\vlist}).
Similar to versions in \mvto \cite{Kumar+:MVTO:ICDCN:2014}, each version of a \tobj{} is a tuple denoted as \emph{\vtup} and consists of three fields: (1) the timestamp (or $ts$) of the transaction that created this version, which is normally its \cts; (2) the value (or $val$) of the version; (3) a list, called \rlist{} (or $rl$), consisting of the ids (these can be \cts{s} as well) of the transactions that read from this version. The \rlist of a version is initially empty. \figref{tobj} illustrates this structure. For a \tobj $x$, we use the notation $x[t]$ to access the version with timestamp $t$. Depending on the algorithm considered, the fields of this structure may change. The algorithms have access to a global atomic counter, $\gtcnt$, used for generating timestamps in the various transactional \mth{s}. We assume that the STM system exports the following \mth{s} for a transaction $T_i$: (1) $\begt(t)$, where $t$ is provided by the invoking thread, $Th_x$; from our earlier assumption, it is the \cts of the first \inc. In case $Th_x$ is invoking this transaction for the first time, $t$ is $null$. This \mth returns a unique timestamp to $Th_x$, which is the \cts/id of the transaction. (2) $\tread_i(x)$ tries to read \tobj $x$. It returns either a value $v$ or $\mathcal{A}$. (3) $\twrite_i(x,v)$ updates a \tobj $x$ with value $v$ locally. It returns $ok$. (4) $\tryc_i()$ tries to commit the transaction and returns $\mathcal{C}$ if it succeeds. Otherwise, it returns $\mathcal{A}$. \ignore{ In addition to these global structures, for each transaction $T_i$, \pkto maintains structure that are local to $T_i$: \begin{itemize} \item $rset\xspace_i$(read-set): It is a list of data tuples or \emph{\dtup} of the form $\langle x, val \rangle$, where $x$ is the t-object and $v$ is the value read by the transaction $T_i$. We refer to a tuple in $T_i$'s read-set by $rset\xspace_i[x]$.
\item $wset\xspace_i$(write-set): It is a list of \dtup{s} where $x$ is the \tobj{} to which transaction $T_i$ writes the value $val$. Similarly, we refer to a tuple in $T_i$'s write-set by $wset\xspace_i[x]$. \end{itemize} \noindent Next, we describe the \mth{s} exported by \pkto. Here in this discussion, we use the notation that a transaction $T_i$ has its \cts as $i$. } \noindent \textbf{Correctness Criteria:} For ease of exposition, we initially consider \stsbty as the \emph{\cc} to illustrate the correctness of the algorithms. But \stsbty does not consider the correctness of \emph{aborted} transactions and is, as a result, not a suitable \emph{\cc} for STMs. Finally, we show that the proposed STM algorithm \ksftm satisfies \lopty, a \emph{\cc} for STMs (described in \secref{model}). We denote the set of histories generated by an STM algorithm, say $A$, as $\gen{A}$. \input{SVSFTM} \cmnt{ \vspace{1mm} \noindent \textbf{Common Data-Structures \& STM Methods:} Here we describe the common data-structures used by all the algorithms proposed in this section. For each \tobj, the algorithm(s) maintains multiple version as a linked-list, \emph{\vlist}. Similar to versions in \mvto \cite{Kumar+:MVTO:ICDCN:2014}, each version of a \tobj{} is a tuple denoted as \emph{\vtup} and consists of three fields: (1) timestamp, $ts$ of the transaction that created this version which normally is the \cts; (2) the value of the version; (3) a list, called \rlist{}, consisting of transactions ids (could be \cts as well) that read from this version. The \rlist of a version is initially empty. \figref{tobj} illustrates this structure. For a \tobj $x$, we use the notation $x[t]$ to access the version with timestamp $t$. Depending on the algorithm considered, the fields change of this structure.
\begin{figure} [h] \centerline{\scalebox{0.50}{\input{figs/kstm-ver.pdf_t}}} \captionsetup{justification=centering} \caption{Data Structures for Maintaining Versions} \label{fig:tobj} \end{figure} The algorithms have access to a global atomic counter, $\gtcnt$ used for generating timestamps in the various transactional \mth{s}. We assume that the STM system exports the following \mth{s} for a transaction $T_i$: (1) $\begt(t)$ where $t$ is provided by the invoking thread. From our earlier assumption, it is the \cts of the first \inc. This \mth returns a unique timestamp to the invoking thread which is the \cts/id of the transaction. (2) $\tread_i(x)$ tries to read \tobj $x$. It returns either value $v$ or $\mathcal{A}$. (3) $\twrite_i(x,v)$ operation that tries to update a \tobj $x$ with value $v$. It returns $ok$. (4) $\tryc_i()$ that tries to commit the transaction and returns $ok$ if it succeeds. Otherwise, it returns $\mathcal{A}$. (5) $\trya()$ that aborts the transaction and returns $\mathcal{A}$. \ignore{ In addition to these global structures, for each transaction $T_i$, \pmvto maintains structure that are local to $T_i$: \begin{itemize} \item $rset\xspace_i$(read-set): It is a list of data tuples or \emph{\dtup} of the form $\langle x, val \rangle$, where $x$ is the t-object and $v$ is the value read by the transaction $T_i$. We refer to a tuple in $T_i$'s read-set by $rset\xspace_i[x]$. \item $wset\xspace_i$(write-set): It is a list of \dtup{s} where $x$ is the \tobj{} to which transaction $T_i$ writes the value $val$. Similarly, we refer to a tuple in $T_i$'s write-set by $wset\xspace_i[x]$. \end{itemize} \noindent Next, we describe the \mth{s} exported by \pmvto. Here in this discussion, we use the notation that a transaction $T_i$ has its \cts as $i$. } \vspace{1mm} \noindent \textbf{Simplifying Assumptions:} We next describe the main idea behind the starvation-free STM algorithm \ksftm through a sequence of algorithms. 
For ease of exposition, we make two simplifying assumptions (1) We assume that in the absence of other concurrent conflicting transactions, every transaction will commit. In other words, if a transaction is executed in a system by itself, it will not self-abort. (2) We initially consider \stsbty as \cc to illustrate the correctness of the algorithms. But \stsbty does not consider the correctness of aborted transactions and as a result not a suitable \cc for STMs. Finally, we show that the proposed STM algorithm \ksftm satisfies \lopty, a \cc for STMs. \noindent We denote the set of histories generated by an STM algorithm, say $A$, as $\gen{A}$. \cmnt { A STM system exports the functions:\begtrans{}, $\read_i, \writei_i$ and $\tryci_i$. A thread invokes a transaction with a \begtrans{} function which returns a unique transaction id which is the timestamp of the transactions. This timestamp is numerically greater than the timestamps of all the transactions invoked so far. This thread invokes future functions of the transaction using this timestamp. We use the notation $T_i$ to denote a transaction where $i$ is the timestamp of $T_i$. } \subsection{Priority-based MVTO Algorithm} \label{subsec:mvto} In this subsection, we describe a modification to the multi-version timestamp ordering (\mvto) algorithm \cite{BernGood:1983:MCC:TDS,Kumar+:MVTO:ICDCN:2014} to ensure that it provides preference to transactions that have low \its, i.e., transactions that have been in the system for a longer time. We denote the basic algorithm which maintains unbounded versions as \emph{Priority-based MVTO} or \emph{\pmvto} (akin to the original \mvto). We denote the variant of \pmvto that maintains $K$ versions as \emph{\pkto} and the unbounded versions variant with garbage collection as \emph{\pmvtogc}. In this sub-section, we specifically describe \pkto. But most of these properties apply to \pmvto and \pmvtogc as well. 
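The preference that \pkto gives to low-\its transactions can be summarized by the following sketch. This is our own toy encoding (transactions as dictionaries, function name ours), not the paper's pseudocode; it only captures the victim-selection rule applied when a committing writer $T_i$ conflicts with a reader $T_k$ of the version being overwritten.

```python
# Toy sketch of pkto's conflict-resolution preference: the transaction that
# has been in the system longer (smaller ITS) is favored over the younger one.
def resolve_conflict(t_i, t_k):
    """Return which transaction must abort: 'T_i' (the writer) or 'T_k' (the reader)."""
    if t_k["status"] == "committed":
        return "T_i"                 # too late: the reader already committed
    # both live: the one with the larger (younger) ITS is aborted
    return "T_i" if t_k["its"] < t_i["its"] else "T_k"
```

Because a starving transaction keeps its original \its across retries, its \its eventually becomes the smallest among live transactions, so this rule increasingly decides conflicts in its favor.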
\vspace{1mm} \noindent \textbf{$\begt(t)$:} A unique timestamp $ts$ is allocated to $T_i$, which is its \cts ($i$ by our notation). The timestamp $ts$ is generated by atomically incrementing the global counter $\gtcnt$. If the input $t$ is null, then $\tcts{i} = \tits{i} = ts$, as this is the first \inc of this transaction. Otherwise, the non-null value of $t$ is assigned as $\tits{i}$. \noindent \textbf{$\tread(x)$:} Transaction $T_i$ reads from a version of $x$ in the shared memory (if $x$ does not exist in $T_i$'s local buffer) with timestamp $j$ such that $j$ is the largest timestamp less than $i$ among the versions of $x$, i.e., there exists no version of $x$ with timestamp $k$ such that $j<k<i$. After reading this version of $x$, $T_i$ is added to $x[j]$'s \rlist. If no such version exists, then $T_i$ is \emph{aborted}. \noindent \textbf{$\twrite(x,v)$:} $T_i$ stores this write of value $v$ to $x$ locally in its $wset\xspace_i$. If $T_i$ ever reads $x$ again, this value will be returned. \noindent \textbf{$\tryc:$} This \op{} consists of three steps. In \stref{val}, it checks whether $T_i$ can be \emph{committed}. In \stref{updt}, it performs the necessary tasks to mark $T_i$ as a \emph{committed} transaction, and in \stref{commit}, $T_i$ commits. \begin{enumerate} \item Before $T_i$ can commit, it needs to verify that any version it creates does not violate consistency. Suppose $T_i$ creates a new version of $x$ with timestamp $i$. Let $j$ be the largest timestamp smaller than $i$ for which a version of $x$ exists. Let this version be $x[j]$. Now, $T_i$ needs to make sure that any transaction that has read $x[j]$ is not affected by the new version created by $T_i$. There are two possibilities of concern: \label{step:val} \begin{enumerate} \item Let $T_k$ be some transaction that has read $x[j]$ and $k > i$ ($k$ = \cts of $T_k$).
In this scenario, the value read by $T_k$ would be incorrect (w.r.t.\ \stsbty) if $T_i$ were allowed to create a new version. In this case, we say that the transactions $T_i$ and $T_k$ are in \emph{conflict}. So, we do the following: \\(i) if $T_k$ has already \emph{committed}, then $T_i$ is \emph{aborted}; \\(ii) if $T_k$ is live and $\tits{k}$ is less than $\tits{i}$, then again $T_i$ is \emph{aborted}; \\(iii) if $T_k$ is still live with $\tits{i}$ less than $\tits{k}$, then $T_k$ is \emph{aborted}. \label{step:verify} \item The previous version $x[j]$ does not exist. This happens when the previous version $x[j]$ has been overwritten. In this case, $T_i$ is \emph{aborted}, since \pkto does not know whether $T_i$ conflicts with any other transaction $T_k$ that has read the previous version. \label{step:notfound} \end{enumerate} \item After \stref{val}, we have verified that it is safe for $T_i$ to commit. Now, we have to create a version of each \tobj $x$ in the $wset\xspace$ of $T_i$. This is achieved as follows: \label{step:updt} \begin{enumerate} \item $T_i$ creates a $\vtup$ $\langle i, \wset{i}.x.v, null \rangle$. In this tuple, $i$ (the \cts of $T_i$) is the timestamp of the new version; $\wset{i}.x.v$ is the value of $x$ in $T_i$'s $wset\xspace$; and the \rlist of the $\vtup$ is $null$. \item If the total number of versions of $x$ is already $K$, then $T_i$ replaces the version of $x$ with the smallest timestamp by the $\vtup$ $\langle i, \wset{i}.x.v, null \rangle$. Otherwise, the $\vtup$ is added to $x$'s $\vlist$. \end{enumerate} \item Transaction $T_i$ is then \emph{committed}. \label{step:commit} \end{enumerate} The algorithm described here is only the main idea. The actual implementation will use locks to ensure that each of these \mth{s} is \lble \cite{HerlWing:1990:TPLS}. It can be seen that \pkto gives preference to the transaction having the lower \its in \stref{verify}. Transactions having lower \its have been in the system for a longer time.
Hence, \pkto gives preference to them. \subsection{Pseudocode of \pkto} \label{apn:pcode} \begin{algorithm}[H] \caption{STM $\init()$: Invoked at the start of the STM system. Initializes all the \tobj{s} used by the STM System} \label{alg:init} \begin{algorithmic}[1] \State $\gtcnt$ = 1; \ForAll {$x$ in $\mathcal{T}$} \Comment{All the \tobj{s} used by the STM System} \State add $\langle 0, 0, nil \rangle$ to $x.\vl$; \Comment { $T_0$ is initializing $x$} \label{lin:t0-init} \EndFor; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-begin} \caption{STM $\begt(its)$: Invoked by a thread to start a new transaction $T_i$. Thread can pass a parameter $its$ which is the initial timestamp when this transaction was invoked for the first time. If this is the first invocation then $its$ is $nil$. It returns the tuple $\langle id, \gcts \rangle$} \begin{algorithmic}[1] \State $i$ = unique-id; \Comment{An unique id to identify this transaction. It could be same as \gcts} \State \Comment{Initialize transaction specific local and global variables} \If {($its == nil$)} \State \Comment{$\gtcnt.get\&Inc()$ returns the current value of \gtcnt and atomically increments it} \State $\gits_i = \gcts_i = \gtcnt.get\&Inc()$; \Else \State $\gits_i = its$; \State $\gcts_i = \gtcnt.get\&Inc()$; \EndIf \State $rset\xspace_i = wset\xspace_i = null$; \State $\gstat_i$ = \texttt{live}; \State $\gval_i = T$; \State return $\langle i, \gcts_i\rangle$ \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-read} \caption{STM $read(i, x)$: Invoked by a transaction $T_i$ to read \tobj{} $x$. 
It returns either the value of $x$ or $\mathcal{A}$} \begin{algorithmic}[1] \If {($x \in rset\xspace_i$)} \Comment{Check if the \tobj{} $x$ is in $rset\xspace_i$} \State return $rset\xspace_i[x].val$; \ElsIf {($x \in wset\xspace_i$)} \Comment{Check if the \tobj{} $x$ is in $wset\xspace_i$} \State return $wset\xspace_i[x].val$; \Else \Comment{\tobj{} $x$ is not in $rset\xspace_i$ and $wset\xspace_i$} \State lock $x$; lock $\glock_i$; \If {$(\gval_i == F)$} return abort(i); \label{lin:rabort} \EndIf \State \Comment{ \findls: From $x.\vl$, returns the largest \ts value less than $\gcts_i$. If no such version exists, it returns $nil$ } \State $curVer = \findls(\gcts_i,x)$; \If {$(curVer == nil)$} return abort(i); \Comment{Proceed only if $curVer$ is not nil} \EndIf \cmnt { \State /* \findsl: From $x.\vl$, returns the smallest \ts value greater than $\gwts_i$. If no such version exists, it returns $nil$ */ \State $nextVer = \findsl(\gwts_i,x)$; \If {$(nextVer \neq nil)$} \State \Comment{Ensure that $\tutl_i$ remains smaller than $nextVer$'s \vltl} \State $\tutl_i = min(\tutl_i, x[nextVer].vltl-1)$; \EndIf \State \Comment{$\tltl_i$ should be greater than $x[curVer].\vltl$} \State $\tltl_i = max(\tltl_i, x[curVer].\vltl + 1)$; \If {($\tltl_i > \tutl_i$)} \Comment{If the limits have crossed each other, then $T_i$ is aborted} \State return abort(i); \EndIf } \State $val = x[curVer].v$; add $\langle x, val \rangle$ to $rset\xspace_i$; \State add $T_i$ to $x[curVer].rl$; \State unlock $\glock_i$; unlock $x$; \State return $val$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-write} \caption{STM $write_i(x,val)$: A Transaction $T_i$ writes into local memory} \begin{algorithmic}[1] \State Append the $d\_tuple \langle x,val \rangle$ to $wset\xspace_i$. 
\State return $ok$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-tryc} \caption{STM $\tryc()$: Returns $ok$ on commit else return Abort} \begin{algorithmic}[1] \State \Comment{The following check is an optimization which needs to be performed again later} \State lock $\glock_i$; \If {$(\gval_i == F)$} \State return abort(i); \EndIf \State unlock $\glock_i$; \State $\lrl = \allrl = nil$; \Comment{Initialize larger read list (\lrl), all read list (\allrl) to nil} \ForAll {$x \in wset\xspace_i$} \State lock $x$ in pre-defined order; \State \Comment{ \findls: returns the version with the largest \ts value less than $\gcts_i$. If no such version exists, it returns $nil$. } \State $\prevv = \findls(\gcts_i, x)$; \Comment{\prevv: largest version smaller than $\gcts_i$} \If {$(\prevv == nil)$} \Comment{There exists no version with \ts value less than $\gcts_i$} \State lock $\glock_i$; return abort(i); \EndIf \State \Comment{\textbf{\getl}: obtain the list of reading transactions of $x[\prevv].rl$ whose $\gcts$ is greater than $\gcts_i$} \State $\lrl = \lrl \cup \getl(\gcts_i, x[\prevv].rl)$; \EndFor \Comment{$x \in wset\xspace_i$} \State $\relll = \lrl \cup T_i$; \Comment{Initialize relevant Lock List (\relll)} \ForAll {($T_k \in \relll$)} \State lock $\glock_k$ in pre-defined order; \Comment{Note: Since $T_i$ is also in $\relll$, $\glock_i$ is also locked} \EndFor \State \Comment{Verify if $\gval_i$ is false} \If {$(\gval_i == F)$} \State return abort(i); \EndIf \State $\abl = nil$ \Comment{Initialize abort read list (\abl)} \State \Comment{Among the transactions in $T_k$ in $\lrl$, either $T_k$ or $T_i$ has to be aborted} \ForAll {$(T_k \in \lrl)$} \If {$(\isab(T_k))$} \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted} \State continue; \EndIf \If {$(\gits_i < \gits_k) \land (\gstat_k == \texttt{live})$} \State \Comment{Transaction $T_k$ has lower priority and is not yet committed. 
So it needs to be aborted} \State $\abl = \abl \cup T_k$; \Comment{Store $T_k$ in \abl} \Else \Comment{Transaction $T_i$ has to be aborted} \State return abort(i); \EndIf \EndFor \algstore{myalgtc} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \algrestore{myalgtc} \cmnt { \algstore{tryc-break} \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:ap-tryc-cont} \caption{STM $\tryc()$: Continued} \begin{algorithmic}[1] \algrestore{tryc-break} \State \Comment{Ensure that $\vutl_i$ is less than \vltl of versions in $\nvl$} \ForAll {$(ver \in \nvl)$} \State $x$ = \tobj of $ver$; \State $\tutl_i = min(\tutl_i, x[ver].\vltl - 1)$; \EndFor } \State \Comment{Store the current value of the global counter as commit time and increment it} \State $\ct = \gtcnt.get\&Inc()$; \cmnt{ \ForAll {$(T_k \in \srl)$} \Comment{Iterate through $\srl$ to see if $T_k$ or $T_i$ has to aborted} \If {$(\isab(T_k))$} \State \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted} \State continue; \EndIf \If {$(\tltl_k \geq \tutl_i)$} \label{lin:tk-check} \Comment{Ensure that the limits do not cross for both $T_i$ and $T_k$} \If {$(\gstat_k == live)$} \Comment{Check if $T_k$ is live} \If {$(\gits_i < \gits_k)$} \State \Comment{Transaction $T_k$ has lower priority and is not yet committed. So it needs to be aborted} \State $\abl = \abl \cup T_k$; \Comment{Store $T_k$ in \abl} \Else \Comment{Transaction $T_i$ has to be aborted} \State return abort(i); \EndIf \Comment{$(\gits_i < \gits_k)$} \Else \Comment{($T_k$ is committed. 
Hence, $T_i$ has to be aborted)} \State return abort(i); \EndIf \Comment{$(\gstat_k == live)$} \EndIf \Comment{$(\tltl_k \geq \tutl_i)$} \EndFor {$(T_k \in \srl)$} \State \Comment{At this point $T_i$ can't abort.} \State $\tltl_i = \tutl_i$; \label{lin:ti-updt} \State \Comment{Since $T_i$ can't abort, we can update $T_k$'s \tutl} \ForAll {$(T_k \in \srl)$} \If {$(\isab(T_k))$} \State \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted} \State continue; \EndIf \State /* The following line ensure that $\tltl_k \leq \tutl_k < \tltl_i$. Note that this does not cause the limits of $T_k$ to cross each other because of the check in \Lineref{tk-check}.*/ \State $\tutl_k = min(\tutl_k, \tltl_i - 1)$; \EndFor } \ForAll {$T_k \in \abl$} \Comment{Abort all the transactions in \abl} \State $\gval_k = F$; \EndFor \cmnt { \algstore{tryc-break2} \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:tryc-cont2} \caption{STM $\tryc()$: Continued Again} \begin{algorithmic}[1] \algrestore{tryc-break2} } \State \Comment{Having completed all the checks, $T_i$ can be committed} \ForAll {$(x \in wset\xspace_i)$} \State $newTuple = \langle \gcts_i, wset\xspace_i[x].val, nil \rangle$; \Comment { Create new v\_tuple: \gcts, val, \rl for $x$} \If {($|x.vl| > k$)} \State replace the oldest tuple in $x.\vl$ with $newTuple$; \Comment{$x.\vl$ is ordered by timestamp} \Else \State add a $newTuple$ to $x.vl$ in sorted order; \EndIf \EndFor \Comment{$x \in wset\xspace_i$} \State $\gstat_i$ = \texttt{commit}; \State unlock all variables; \State return $\mathcal{C}$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:isab} \caption{$\isab(T_k)$: Verifies if $T_i$ is already aborted or its \gval flag is set to false implying that $T_i$ will be aborted soon} \begin{algorithmic}[1] \If {$(\gval_k == F) \lor (\gstat_k == \texttt{abort}) \lor (T_k \in \abl)$} \State return $T$; \Else \State return $F$; \EndIf \end{algorithmic} 
\end{algorithm} \begin{algorithm}[H] \label{alg:ap-abort} \caption{$abort(i)$: Invoked by various STM methods to abort transaction $T_i$. It returns $\mathcal{A}$} \begin{algorithmic}[1] \State $\gval_i = F$; $\gstat_i$ = \texttt{abort}; \State unlock all variables locked by $T_i$; \State return $\mathcal{A}$; \end{algorithmic} \end{algorithm} We have the following property on the correctness of \pkto. \begin{property} \label{prop:pmvto-correct} Any history generated by \pkto is strict-serializable. \end{property} Consider a history $H$ generated by \pkto. Let the \emph{committed} \ssch of $H$ be $CSH = \shist{\comm{H}}{H}$. It can be shown that $CSH$ is \opq, with the equivalent serialized history $SH'$ being one in which all the transactions of $CSH$ are ordered by their \cts{s}. Hence, $H$ is \stsble. \noindent \textbf{Possibility of Starvation in \pkto:} As discussed above, \pkto gives priority to transactions having lower \its. But a transaction $T_i$ having the lowest \its could still abort for one of the following reasons: (1) Upon executing the $\tread(x)$ \mth, it does not find any version of $x$ to read from; this can happen if all the versions of $x$ present have a timestamp greater than $\tcts{i}$. (2) While executing \stref{verify}(i) of the $\tryc$ \mth, $T_i$ wishes to create a version of $x$ with timestamp $i$, but some other transaction, say $T_k$, has read from a version with timestamp $j$ where $j<i<k$. In this case, $T_i$ has to abort if $T_k$ has already \emph{committed}. This issue is not restricted only to \pkto.
It can occur in \pmvto (and \pmvtogc) due to point (2) described above. \begin{figure}[H] \centering \scalebox{0.5}{\input{figs/pktod.pdf_t}} \captionsetup{justification=centering} \caption{Pictorial representation of execution under \pkto} \label{fig:kstmvl} \end{figure} We illustrate this problem in \pkto with \figref{kstmvl}. Here transaction $T_{26}$, whose \its 26 is the lowest among all the live transactions, starves due to \stref{verify}(i) of the $\tryc$. The \emph{first time}, $T_{26}$ gets \emph{aborted} because the higher-timestamp transaction $T_{29}$ in the \rlist of $x[25]$ has \emph{committed}. We have denoted this by a `(C)' next to the version. The \emph{second time}, $T_{26}$ retries with the same \its 26 but a new \cts 33. Now when $T_{33}$ attempts to commit, suppose another transaction $T_{34}$ in the \rlist of $x[25]$ has already \emph{committed}. This causes $T_{33}$ (an incarnation of $T_{26}$) to abort again. Such a scenario can repeat again and again, causing no \inc of $T_{26}$ to ever commit and thus leading to its starvation. \noindent \textbf{Garbage Collection in \mvsftmgc and \pmvtogc:} Maintaining multiple versions to increase performance and decrease the number of aborts leads to the creation of many versions that are no longer of any use and hence occupy space. Such garbage versions need to be reclaimed, so we add a garbage-collection scheme for these unwanted versions. This technique conserves memory and, in turn, improves performance, since transactions no longer need to traverse garbage versions unnecessarily. We use a global list, i.e., one shared across all transactions, that keeps track of all the live transactions in the system; we call this list \livel. Each transaction creates its entry in \livel at the beginning of its life cycle. Under the optimistic approach of STMs, each transaction performs its updates to the shared memory in the $\tryc$ phase.
In this phase, each transaction performs some validations, and if all the validations succeed, the transaction makes its changes, i.e., creates versions of the corresponding \tobj{s} in the shared memory. While creating a version, every transaction checks, using the \livel{} data structure, whether it is the live transaction with the least timestamp present in the system. If so, the transaction deletes all the versions of that \tobj{} and creates one of its own. Otherwise, the transaction performs no garbage collection, deletes no version, and moves on to creating a new version of the next \tobj{} in its write set, if any. \figref{ksftm} and \figref{pkto} show that both \mvsftmgc and \pmvtogc perform better than \mvsftm and \pmvto, respectively, across all workloads. \input{sfalgo} \section{Introduction} \label{sec:intro} STMs \cite{HerlMoss:1993:SigArch,ShavTou:1995:PODC} are a convenient programming interface for a programmer to access shared memory without worrying about consistency issues. STMs often use an optimistic approach for the concurrent execution of \emph{transactions} (pieces of code invoked by threads). In an optimistic execution, each transaction reads from the shared memory, but all write updates are performed on local memory.
On completion, the STM system \textit{validates} the reads and writes of the transaction. If any inconsistency is found, the transaction is \emph{aborted}, and its local writes are discarded. Otherwise, the transaction is committed, and its local writes are transferred to the shared memory. A transaction that has begun but has not yet committed/aborted is referred to as \emph{live}. A typical STM is a library which exports the following methods: \emph{\begt}, which begins a transaction; \textit{\tread}, which reads a \emph{transactional object} or \emph{\tobj}; \textit{\twrite}, which writes to a \emph{\tobj}; and \textit{\tryc}, which tries to commit the transaction. Typical code using STMs is shown in \algoref{sfdemo}, which illustrates how the insert method of a concurrent linked-list library is implemented using transactions. \ignore{ \color{blue} \color{red} A typical code using STMs is as shown in \algoref{sfdemo}. It shows the overview of a concurrent \emph{insert} \mth which inserts an element $e$ into a linked-list $LL$. It consists of a loop where the thread creates a transaction. This transaction executes the code to insert an element $e$ in a linked-list $LL$ using $\tread$ and $\twrite$ operations. (The result of $\twrite$ operation are stored locally.) At the end of the transaction, the thread calls \textit{\tryc}. At this point, the STM checks if the given transaction can be committed while satisfying the required safety properties (e.g., \slbty \cite{Papad:1979:JACM}, \opty \cite{GuerKap:2008:PPoPP}). If yes, then the transaction is committed. At this time, any updates done by the transaction are reflected in the shared memory. Otherwise, it is aborted. In this case, all the updates made by the transaction are discarded. If the given transaction is aborted, then the invoking thread may retry that transaction again like \linereff{retry} in \algoref{sfdemo}.
\color{black} } \noindent \textbf{Correctness:} Several \emph{\ccs} have been proposed for STMs, such as opacity \cite{GuerKap:2008:PPoPP} and local opacity \cite{KuzSat:NI:ICDCN:2014,KuzSat:NI:TCS:2016}. All these \emph{\ccs} require that all the transactions, including aborted ones, appear to execute sequentially in an order that agrees with the order of non-overlapping transactions. Unlike the \ccs for traditional databases, such as serializability and strict-serializability \cite{Papad:1979:JACM}, the \ccs for STMs ensure that even aborted transactions read correct values. This ensures that programmers do not see, in the application, undesirable side-effects of concurrent executions, such as divide-by-zero errors, infinite loops, or crashes, caused by the reads of transactions that get aborted later. This additional requirement on aborted transactions is fundamental to STMs and differentiates them from databases, as observed by Guerraoui \& Kapalka \cite{GuerKap:2008:PPoPP}. Thus in this paper, we focus on optimistic executions with the \emph{\cc} being \emph{\lopty} \cite{KuzSat:NI:TCS:2016}. \ignore{ \todo{This para could be dropped. We have already said optimistic execution before} To ensure correctness such as \opty, most STMs execute optimistically. With this approach, a transaction only reads values written by other committed transactions. \textbf{To achieve this, all writes are written to local memory first. They are added to the shared memory only when the transaction commits.} This (combined with required validation) can, in turn, ensure that the reads of the transaction are consistent as required by the \emph{\cc}.
Thus in this paper, we focus only on optimistic executions with the \emph{\cc} being \emph{\lopty} \cite{KuzSat:NI:TCS:2016} (explained in \secref{model}). \color{black} } \vspace{1mm} \noindent \textbf{Starvation Freedom:} In the execution shown in \algoref{sfdemo}, there is a possibility that the transaction which a thread tries to execute gets aborted again and again. Every time, it executes the transaction, say $T_i$, $T_i$ conflicts with some other transaction and hence gets aborted. In other words, the thread is effectively starving because it is not able to commit $T_i$ successfully. A well known blocking progress condition associated with concurrent programming is \stfdm \cite[chap 2]{HerlihyShavit:AMP:Book:2012}, \cite{HerlihyShavit:Progress:Opodis:2011}. In the context of STMs, \stfdm ensures that every aborted transaction that is retried infinitely often eventually commits. It can be defined as: an STM system is said to be \emph{\stf} if a thread invoking a transaction $T_i$ gets the opportunity to retry $T_i$ on every abort (due to the presence of a fair underlying scheduler with bounded termination) and $T_i$ is not \emph{parasitic}, i.e., $T_i$ will try to commit given a chance then $T_i$ will eventually commit. Parasitic transactions \cite{Bushkov+:Live-TM:PODC:2012} will not commit even when given a chance to commit possibly because they are caught in an infinite loop or some other error. \setlength{\intextsep}{0pt} \vspace{1mm} \setlength{\textfloatsep}{0pt} \begin{algorithm} \caption{Insert($LL, e$): Invoked by a thread to insert an element $e$ into a linked-list $LL$. This method is implemented using transactions.} \label{algo:sfdemo} \begin{algorithmic}[1] \State $retry$ = 0; \While {$(true)$} \label{lin:wstart1} \State $id$ = \begt($retry$); \label{lin:beg-illus} \State ... \State ... \State $v$ = $\tread(id, x)$; \Comment{reads the value of $x$ as $v$} \State ... \State ... 
\State $\twrite(id, x, v')$; \Comment{writes a value $v'$ to $x$}
\State ...
\State ...
\State $ret$ = $\tryc(id)$; \Comment{$\tryc$ can return $commit$ or $abort$}
\If {($ret == commit$)}
\State break;
\Else
\State $retry$++; \label{lin:retry}
\EndIf
\EndWhile \label{lin:wend1}
\end{algorithmic}
\end{algorithm}
\emph{Wait-freedom} is another interesting progress condition for STMs, in which every transaction commits regardless of the nature of the concurrent transactions and the underlying scheduler \cite{HerlihyShavit:Progress:Opodis:2011}. But it was shown by Bushkov et al. \cite{Bushkov+:Live-TM:PODC:2012} that it is not possible to achieve \emph{wait-freedom} in dynamic STMs, in which the data sets of transactions are not known in advance. So in this paper, we explore the weaker progress condition of \emph{\stfdm} for transactional memories, while assuming that the data sets of the transactions are \textit{not} known in advance.

\vspace{1mm}
\noindent
\textbf{Related work on \stf STMs:} Starvation freedom in STMs has been explored by a few researchers in the literature, such as Gramoli et al. \cite{Gramoli+:TM2C:Eurosys:2012}, Waliullah and Stenstrom \cite{WaliSten:Starve:CC:2009}, and Spear et al. \cite{Spear+:CCM:PPoPP:2009}. Most of these systems work by assigning priorities to transactions. In case of a conflict between two transactions, the transaction with the lower priority is aborted. They ensure that every aborted transaction, on being retried a sufficient number of times, eventually has the highest priority and hence commits. We denote such an algorithm as a \emph{single-version \stf STM} or \emph{\svsftm}. Although \svsftm guarantees \stfdm, it can still abort many transactions spuriously.
Consider the case where a transaction $T_i$ has the highest priority. Hence, as per \svsftm, $T_i$ cannot be aborted. But if $T_i$ is slow (for some reason), it can cause several other conflicting transactions to abort and hence bring down the efficiency and progress of the entire system.

\vspace{1mm}
\figref{svsf-draw} illustrates this problem. Consider the execution: $r_1(x,0) r_1(y,0) w_2(x,10) w_2(z,10) w_3(y,15) w_1(z,7)$. It has three transactions $T_1$, $T_2$ and $T_3$. Let $T_1$ have the highest priority. After reading $y$, suppose $T_1$ becomes slow. Next, $T_2$ and $T_3$ want to write to $x, z$ and $y$, respectively, and \emph{commit}. But the write \op{s} of $T_2$ and $T_3$ are in conflict with $T_1$'s \op{s}. Since $T_1$ has higher priority and has not committed yet, $T_2$ and $T_3$ have to \emph{abort}. If these transactions are retried and again conflict with $T_1$ (while it is still live), they have to \emph{abort} again. Thus, any transaction that has lower priority than $T_1$ and conflicts with it has to abort. It is as if $T_1$ has locked the \tobj{s} $x, y$ and does not allow any other transaction that writes to these \tobj{s} to \emph{commit}.

\vspace{1mm}
\begin{figure}
\center
\scalebox{0.45}{\import{figs/}{dsftm.pdf_t}}
\captionsetup{justification=centering}
\caption{Limitation of Single-version Starvation-Free Algorithm}
\label{fig:svsf-draw}
\end{figure}

\vspace{1mm}
\noindent
\textbf{Multi-version \stf STM:} A key limitation of single-version STMs is limited concurrency. As shown above, it is possible for one long transaction to conflict with several transactions, causing them to abort. This limitation can be overcome by using multi-version STMs, in which we store multiple versions of each data item (either unbounded versions with garbage collection, or bounded versions where the oldest version is replaced when the number of versions exceeds the bound). Several multi-version STMs have been proposed in the literature \cite{Kumar+:MVTO:ICDCN:2014,LuScott:GMV:DISC:2013,Fern+:LSMVSTM:PPoPP:2011,Perel+:2011:SMV:DISC} that provide increased concurrency, but none of them provides \stfdm. Furthermore, achieving \stfdm while using only bounded versions is especially challenging, given that a transaction may rely on the oldest version, which gets removed. In that case, it would be necessary to abort that transaction, making it harder to achieve \stfdm.

Recall the typical code using STMs shown in \algoref{sfdemo}. It shows the overview of a concurrent \emph{insert} \mth{} which inserts an element $e$ into a linked-list $LL$. It consists of a loop in which the thread creates a transaction. This transaction executes the code to insert an element $e$ into the linked-list $LL$ using $\tread$ and $\twrite$ operations (the results of $\twrite$ operations are stored locally). At the end of the transaction, the thread calls \textit{\tryc}.
At this point, the STM checks whether the given transaction can be \emph{committed} while satisfying the required safety properties (e.g., \slbty \cite{Papad:1979:JACM}, \opty \cite{GuerKap:2008:PPoPP}). If yes, the transaction is \emph{committed}, and any updates made by it are reflected in the shared memory. Otherwise, it is aborted, and all the updates made by it are discarded. If the transaction is aborted, the invoking thread may retry it, as in \linereff{retry} of \algoref{sfdemo}. The advantage of multi-version STMs is that they allow greater concurrency by allowing more transactions to commit. Consider the execution shown in \figref{svsf-draw}, and suppose it uses multiple versions for each \tobj. Then it is possible for all three transactions to commit. Transactions $T_2$ and $T_3$ create a new version of each \tobj they write ($x$, $z$ and $y$) and return commit. Since multiple versions are being used, $T_1$ need not abort either: it reads the initial values of $x$ and $y$, creates a new version of $z$, and returns commit. So, by maintaining multiple versions, all the transactions $T_1$, $T_2$ and $T_3$ can commit, with an equivalent serial history of $T_1 T_2 T_3$ or $T_1 T_3 T_2$. Thus, multiple versions can help achieve \stfdm without sacrificing concurrency. This motivated us to develop a multi-version \stf STM system. Although multi-version STMs provide greater concurrency, they suffer from the cost of garbage collection. One way to avoid this is to use bounded multi-version STMs, where the number of versions is bounded to be at most $K$. Thus, when the $(K+1)^{th}$ version is created, the oldest version is removed. Bounding the number of versions, however, can hinder starvation freedom: a transaction needing to read a version that has been removed must be aborted.
This paper addresses this gap by developing a starvation-free algorithm for bounded \mvstm{s}. Our approach differs from the approach used in \svsftm to provide starvation freedom in single-version STMs (the policy of aborting lower-priority transactions in case of conflict), as that policy does not work for \mvstm{s}.
As part of the derivation of our final \stf algorithm, we consider an algorithm (\pkto) that follows this approach and show that it is insufficient to provide starvation freedom.

\vspace{1mm}
\noindent
\textbf{Contributions of the paper:}
\begin{itemize}
\item We propose a multi-version \stf STM system called \emph{K-version \stf STM} or \emph{\ksftm}, for a given parameter $K$. Here, $K$ is the number of versions of each \tobj and can range from 1 to $\infty$. To the best of our knowledge, this is the first \stf \mvstm. We develop the \ksftm algorithm in a step-wise manner, starting from \mvto \cite{Kumar+:MVTO:ICDCN:2014}, as follows:
\begin{itemize}
\item First, in \subsecref{mvto}, we use the standard idea of giving higher priority to older transactions. Specifically, we propose a priority-based $K$-version STM algorithm, \emph{Priority-based $K$-version MVTO} or \emph{\pkto}. This algorithm guarantees the safety properties of \stsbty and \lopty. However, it is not \stf.
\item We analyze \pkto to identify the characteristics that help prevent a transaction from getting aborted forever. This analysis leads us to the development of \emph{\stf K-version TO} or \emph{\sfkv} (\subsecref{sfmvto}), a multi-version starvation-free STM obtained by revising \pkto. But \sfkv does not satisfy correctness, i.e., \stsbty and \lopty.
\item Finally, we extend \sfkv to develop \ksftm (\subsecref{ksftm}), which preserves \stfdm, strict serializability, and \lopty. Our algorithm works on the assumption that any transaction that is not deadlocked terminates (commits or aborts) in bounded time.
\end{itemize}
\item Our experiments (\secref{exp}) show that \ksftm gives an average speedup on the worst-case time to commit of a transaction by a factor of 1.22, 1.89, 23.26 and 13.12 over \pkto, \svsftm, NOrec STM \cite{Dalessandro:2010:NSS:1693453.1693464} and ESTM \cite{Felber:2017:jpdc}, respectively, for a counter application.
\ksftm performs 1.5 and 1.44 times better than \pkto and \svsftm, respectively, but 1.09 times worse than NOrec for the low-contention KMEANS application of the STAMP \cite{stamp123} benchmark, whereas \ksftm performs 1.14, 1.4 and 2.63 times better than \pkto, \svsftm and NOrec, respectively, for the LABYRINTH application of the STAMP benchmark, which has high contention with long-running transactions.
\end{itemize}

\section{System Model and Preliminaries}
\label{sec:model}
Following~\cite{tm-book,KuzSat:NI:TCS:2016}, we assume a system of $n$ processes/threads, $p_1,\ldots,p_n$, that access a collection of \emph{transactional objects} (or \emph{{\tobj}s}) via atomic \emph{transactions}. Each transaction has a unique identifier. Within a transaction, processes can perform \emph{transactional operations or \mth{s}}: $\begt{}$, which begins a transaction; the \textit{\twrite}$(x,v)$ operation, which updates a \tobj $x$ with value $v$ in its local memory; the \textit{\tread}$(x)$ operation, which tries to read $x$; \textit{\tryc}$()$, which tries to commit the transaction and returns $commit$ if it succeeds; and \textit{\trya}$()$, which aborts the transaction and returns $\mathcal{A}$. For the sake of presentation simplicity, we assume that the values taken as arguments by \textit{\twrite} operations are unique. The operations \textit{\tread} and \textit{\tryc}$()$ may return $\mathcal{A}$, in which case we say that the operations \emph{forcefully abort}. Otherwise, we say that the operations have executed \emph{successfully}. Each operation is equipped with a unique transaction identifier. A transaction $T_i$ starts with its first operation and completes when any of its operations returns $\mathcal{A}$ or $\mathcal{C}$. We denote any \op{} that returns $\mathcal{A}$ or $\mathcal{C}$ as a \emph{\termop}. Hence, the \op{s} $\tryc$$()$ and $\trya$$()$ are \termop{s}. A transaction does not invoke any further \op{s} after a \termop.
For a transaction $T_k$, we denote all the \tobj{s} accessed by its read \op{s} as $rset\xspace_k$ and the \tobj{s} accessed by its write operations as $wset\xspace_k$. We denote all the \op{s} of a transaction $T_k$ as $\evts{T_k}$ or $evts_k$.

\noindent
\textbf{History:} A \emph{history} is a sequence of \emph{events}, i.e., a sequence of invocations and responses of transactional operations. The collection of events is denoted as $\evts{H}$. For simplicity, we only consider \emph{sequential} histories here: the invocation of each transactional operation is immediately followed by a matching response. Therefore, we treat each transactional operation as one atomic event, and let $<_H$ denote the total order on the transactional operations incurred by $H$. With this assumption, the only relevant events of a transaction $T_k$ are of the types: $r_k(x,v)$, $r_k(x,\mathcal{A})$, $w_k(x, v)$, $\tryc_k(\mathcal{C})$ (or $c_k$ for short), $\tryc_k(\mathcal{A})$, $\trya_k(\mathcal{A})$ (or $a_k$ for short). We identify a history $H$ as the tuple $\langle \evts{H},<_H \rangle$. Let $H|T$ denote the history consisting of the events of $T$ in $H$, and $H|p_i$ denote the history consisting of the events of $p_i$ in $H$. We only consider \emph{well-formed} histories here, i.e., no transaction of a process begins before the previous transaction invocation has completed (either $commits$ or $aborts$). We also assume that every history has an initial \emph{committed} transaction $T_0$ that initializes all the t-objects with value $0$. The set of transactions that appear in $H$ is denoted by $\txns{H}$. The set of \emph{committed} (resp., \emph{aborted}) transactions in $H$ is denoted by $\comm{H}$ (resp., $\aborted{H}$). The set of \emph{incomplete} or \emph{live} transactions in $H$ is denoted by $\incomp{H} = \live{H} = (\txns{H}-\comm{H}-\aborted{H})$.
For a history $H$, we construct the \emph{completion} of $H$, denoted $\overline{H}$, by inserting $\trya_k(\mathcal{A})$ immediately after the last event of every transaction $T_k\in \live{H}$. The exception is a transaction $T_i$ whose $\tryc_i$ has successfully released the lock on its first \tobj: in that case, the updates made by $T_i$ are consistent, so $T_i$ immediately returns commit.

\noindent
\textbf{Transaction orders:} For two transactions $T_k,T_m \in \txns{H}$, we say that $T_k$ \emph{precedes} $T_m$ in the \emph{real-time order} of $H$, denoted $T_k\prec_H^{RT} T_m$, if $T_k$ is complete in $H$ and the last event of $T_k$ precedes the first event of $T_m$ in $H$. If neither $T_k \prec_H^{RT} T_m$ nor $T_m \prec_H^{RT} T_k$, then $T_k$ and $T_m$ \emph{overlap} in $H$. We say that a history is \emph{\tseq} if all its transactions are ordered by this real-time order. Note that, from our earlier assumption, all the transactions of a single process are ordered by real-time.

\noindent
\textbf{Sub-history:} A \textit{sub-history} ($SH$) of a history ($H$), denoted by the tuple $\langle \evts{SH},$ $<_{SH}\rangle$, is defined as follows: (1) $<_{SH} \subseteq <_{H}$; (2) $\evts{SH} \subseteq \evts{H}$; (3) if an event of a transaction $T_k\in\txns{H}$ is in $SH$, then all the events of $T_k$ in $H$ are also in $SH$. For a history $H$, let $R$ be a subset of $\txns{H}$. Then $\shist{R}{H}$ denotes the \ssch{} of $H$ that is formed from the \op{s} in $R$.
\noindent
\textbf{Valid and legal history:} A successful read $r_k(x, v)$ (i.e., $v \neq \mathcal{A}$) in a history $H$ is said to be \emph{\valid} if there exists a transaction $T_j$ that wrote $v$ to $x$ and \emph{committed} before $r_k(x,v)$. Formally, $\langle r_k(x, v)$ is \valid{} $\Leftrightarrow \exists T_j: (c_j <_{H} r_k(x, v)) \land (w_j(x, v) \in \evts{T_j}) \land (v \neq \mathcal{A}) \rangle$. The history $H$ is \valid{} if all its successful read \op{s} are \valid. We define $r_k(x, v)$'s \textit{\lastw{}} as the latest commit event $c_i$ preceding $r_k(x, v)$ in $H$ such that $x\in wset_i$ ($T_i$ can also be $T_0$). A successful read \op{} $r_k(x, v)$ is said to be \emph{\legal{}} if the transaction containing $r_k$'s \lastw{} also writes $v$ onto $x$: $\langle r_k(x, v)$ \text{is \legal{}} $\Leftrightarrow (v \neq \mathcal{A}) \land (\lwrite{r_k(x, v)}{H} = c_i) \land (w_i(x,v) \in \evts{T_i})\rangle$. The history $H$ is \legal{} if all its successful read \op{s} are \legal. From these definitions, it follows that if $H$ is \legal{} then it is also \valid.

\noindent
\textbf{Opacity and Strict Serializability:} We say that two histories $H$ and $H'$ are \emph{equivalent} if they have the same set of events. A history $H$ is said to be \textit{opaque} \cite{GuerKap:2008:PPoPP,tm-book} if it is \valid{} and there exists a t-sequential legal history $S$ such that (1) $S$ is equivalent to $\overline{H}$ and (2) $S$ respects $\prec_{H}^{RT}$, i.e., $\prec_{H}^{RT} \subset \prec_{S}^{RT}$. By requiring $S$ to be equivalent to $\overline{H}$, opacity treats all the incomplete transactions as aborted. We call $S$ an (opaque) \emph{serialization} of $H$. Along the same lines, a \valid{} history $H$ is said to be \textit{strictly serializable} if $\shist{\comm{H}}{H}$ is opaque. Unlike opacity, strict serializability does not include aborted or incomplete transactions in the global serialization order.
An opaque history $H$ is also strictly serializable: a serialization of $\shist{\comm{H}}{H}$ is simply the subsequence of a serialization of $H$ that only contains the transactions in $\comm{H}$. Serializability is a commonly used criterion in databases. But it is not suitable for STMs, as it does not consider the correctness of \emph{aborted} transactions, as shown by Guerraoui \& Kapalka \cite{GuerKap:2008:PPoPP}. Opacity, on the other hand, considers the correctness of \emph{aborted} transactions as well. Similarly, \lopty (described below) is another \cc for STMs, but it is not as restrictive as \opty.

\noindent
\textbf{Local opacity:} For a history $H$, we define a set of sub-histories, denoted $\shset{H}$, as follows: (1) for each aborted transaction $T_i$, we consider a $\subhist$ consisting of the \op{s} of all previously \emph{committed} transactions together with all successful \op{s} of $T_i$ (i.e., \op{s} which did not return $\mathcal{A}$), placing a commit immediately after the last successful operation of $T_i$; (2) for the last \emph{committed} transaction $T_l$, we consider a $\subhist$ consisting of all the previously \emph{committed} transactions including $T_l$. A history $H$ is said to be \emph{\lopq} \cite{KuzSat:NI:ICDCN:2014,KuzSat:NI:TCS:2016} if all the sub-histories in \shset{H} are opaque. Note that in the construction of the sub-history of an aborted transaction $T_i$, the $\subhist$ contains \op{s} from only one aborted transaction, namely $T_i$ itself, and from no other live/aborted transactions. Similarly, the sub-history of the last \emph{committed} transaction $T_l$ has no \op{s} of aborted or live transactions. Thus, under \lopty, no aborted or live transaction can cause another transaction to abort. It was shown that \lopty \cite{KuzSat:NI:ICDCN:2014,KuzSat:NI:TCS:2016} allows greater concurrency than \opty. Any history that is \opq is also \lopq, but not necessarily vice versa.
On the other hand, a history that is \lopq is also \stsble, but the converse need not be true.

\noindent
\textbf{Graph Characterization of Local Opacity:} To prove the correctness of STM systems, it is useful to consider a graph characterization of histories. Here, we describe the graph characterization developed by Kumar et al.\ \cite{Kumar+:MVTO:ICDCN:2014} for proving \opty, which is based on the characterization by Bernstein and Goodman \cite{BernGood:1983:MCC:TDS}. We extend this characterization to \lo. Consider a history $H$ which consists of multiple versions for each \tobj. The graph characterization uses the notion of a \textit{version order}. Given $H$ and a \tobj{} $x$, we define a version order for $x$ as any (non-reflexive) total order on all the versions of $x$ ever created by committed transactions in $H$. Note that the version order may or may not be the same as the actual order in which the versions of $x$ are generated in $H$. A version order of $H$, denoted $\ll_H$, is the union of the version orders of all the \tobj{s} in $H$. Consider the history $H2: r_1(x, 0) r_2(x, 0) r_1(y, 0) r_3(z, 0) w_1(x, 5) w_3(y, 15) w_2(y, 10) w_1(z, 10) c_1 c_2 r_4(x, 5) r_4(y, 10) w_3(z, 15) c_3 r_4(z, 10)$. Using the notation that a committed transaction $T_i$ writing to $x$ creates a version $x_i$, a possible version order $\ll_{H2}$ for $H2$ is: $\langle x_0 \ll x_1 \rangle, \langle y_0 \ll y_2 \ll y_3 \rangle, \langle z_0 \ll z_1 \ll z_3 \rangle$. We define the graph characterization based on a given version order. Consider a history $H$ and a version order $\ll$. We then define a graph (called the opacity graph) on $H$ using $\ll$, denoted $\opg{H}{\ll} = (V, E)$. The vertex set $V$ consists of a vertex for each transaction $T_i$ in $\overline{H}$.
The edges of the graph are of three kinds and are defined as follows:
\begin{enumerate}
\item \textit{\rt}(real-time) edges: If $T_i$ commits before $T_j$ starts in $H$, then there is an edge from $v_i$ to $v_j$. This set of edges is referred to as $\rtx(H)$.
\item \textit{\rf}(reads-from) edges: If $T_j$ reads $x$ from $T_i$ in $H$, then there is an edge from $v_i$ to $v_j$. Note that, in order for this to happen, $T_i$ must have committed before $T_j$'s read, i.e., $c_i <_H r_j(x)$. This set of edges is referred to as $\rf(H)$.
\item \textit{\mv}(multiversion) edges: The \mv{} edges capture the multiversion relations and are based on the version order. Consider a successful read \op{} $r_k(x,v)$ and the write \op{} $w_j(x,v)$ belonging to transaction $T_j$ such that $r_k(x,v)$ reads $x$ from $w_j(x,v)$ (note that $T_j$ is a committed transaction and $c_j <_H r_k$). Consider a committed transaction $T_i$ which also writes to $x$, $w_i(x, u)$ with $u \neq v$. The versions created, $x_i$ and $x_j$, are thus related by $\ll$. If $x_i \ll x_j$, we add an edge from $v_i$ to $v_j$; otherwise ($x_j \ll x_i$), we add an edge from $v_k$ to $v_i$. This set of edges is referred to as $\mv(H, \ll)$.
\end{enumerate}
Using this construction, $\opg{H2}{\ll_{H2}}$ for history $H2$ and $\ll_{H2}$ is shown in \figref{opg}. The edges are annotated. The only \mv{} edge, from $T_4$ to $T_3$, is due to the \tobj{s} $y$ and $z$: for instance, $T_4$ reads the value 10 for $z$ from $T_1$, whereas $T_3$ also writes 15 to $z$ and commits before $r_4(z)$.
\begin{figure}[tbph]
\centerline{\scalebox{0.7}{\input{figs/ex2.pdf_t}}}
\captionsetup{justification=centering}
\caption{$\opg{H2}{\ll_{H2}}$}
\label{fig:opg}
\end{figure}
Kumar et al.\ \cite{Kumar+:MVTO:ICDCN:2014} showed that if there exists a version order $\ll$ for a history $H$ such that $\opg{H}{\ll_H}$ is acyclic, then $H$ is \opq. This is captured in the following result.
\begin{result}
\label{res:opg}
A \valid{} history $H$ is opaque iff there exists a version order $\ll_H$ such that $\opg{H}{\ll_H}$ is acyclic.
\end{result}

\noindent This result can be extended to prove \lo as follows.

\begin{theorem}
\label{thm:log}
A \valid{} history $H$ is \lopq iff for each sub-history $sh$ in $\shset{H}$ there exists a version order $\ll_{sh}$ such that $\opg{sh}{\ll_{sh}}$ is acyclic. Formally, $\langle (H \text{ is \lopq}) \Leftrightarrow (\forall sh \in \shset{H}, \exists \ll_{sh}: \opg{sh}{\ll_{sh}} \text{ is acyclic}) \rangle$.
\end{theorem}

\begin{proof}
To prove this theorem, we have to show that each sub-history $sh$ in $\shset{H}$ is \valid; the rest then follows from \resref{opg}. Consider a sub-history $sh$ and any read \op{} $r_i(x, v)$ of a transaction $T_i$ in it. Clearly, $T_i$ must have read a version of $x$ created by a previously committed transaction. From the construction of $sh$, all the transactions that committed before $r_i$ are also in $sh$. Hence $sh$ is \valid. Proving that $sh$ is \opq iff there exists a version order $\ll_{sh}$ such that $\opg{sh}{\ll_{sh}}$ is acyclic now follows from \resref{opg}.
\end{proof}

\section{Experimental Evaluation}
\label{sec:exp}
To evaluate the performance of \ksftm against state-of-the-art STMs, we implemented the algorithms \pkto and \svsftm \cite{Gramoli+:TM2C:Eurosys:2012, WaliSten:Starve:CC:2009, Spear+:CCM:PPoPP:2009} along with \ksftm in C++\footnote{Code is available here: https://github.com/PDCRL/KSFTM}. We used the available C++ implementations of NOrec STM \cite{Dalessandro:2010:NSS:1693453.1693464} and ESTM \cite{Felber:2017:jpdc}. Although only \ksftm and \svsftm provide \stfdm, we compared against the other STMs as well to see their performance in practice.

\noindent
\textbf{Experimental system:} The experimental system is a 2-socket Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz with 14 cores per socket and 2 hyper-threads (HTs) per core, for a total of 56 threads. Each core has a private 32KB L1 cache and a 256KB L2 cache. The machine has 32GB of RAM and runs Ubuntu 16.04.2 LTS. In our implementation, all threads have the same base priority, and we use the default Linux scheduling algorithm. This satisfies the bounded\text{-}termination\xspace assumption (\asmref{bdtm}) on the scheduler. We ensured that there are no parasitic transactions \cite{BushkovandGuerraoui:2015:COST} in our experiments.

\noindent
\textbf{Methodology:} We considered two different applications: \textbf{(1)} Counter application: each thread invokes a single transaction which performs 10 read/write \op{s} on randomly chosen \tobj{s}. A thread continues to invoke a transaction until it successfully commits. To obtain high contention, we used a large number of threads, ranging from 50 to 250, where each thread performs its read/write operations over a set of 5 \tobj{s}. We performed our tests on three workloads: (W1) Li - Lookup intensive: 90\% read, 10\% write; (W2) Mi - Mid intensive: 50\% read, 50\% write; and (W3) Ui - Update intensive: 10\% read, 90\% write.
This application is very flexible, as it allows us to examine performance by tweaking different parameters (refer to \subsecref{countercode} for details). \textbf{(2)} Two benchmarks from the STAMP suite \cite{stamp123} - (a) KMEANS, which has low contention with short running transactions; we set the number of data points to 2048, with 16 dimensions and 5 clusters in total. (b) LABYRINTH, which has high contention with long running transactions; we used a grid of size 64x64x3 with 48 paths to route. To study starvation in the various algorithms, we considered \emph{\wct}, the maximum time taken by any transaction in a given experiment to commit, measured from its first invocation. This includes the time taken by all the aborted \inc{s} of the transaction as well. To reduce the effect of outliers, we took the average \wct over ten runs as the final result for each application. \begin{figure} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{figs/wcts1.pdf} \vspace{-.2cm} \caption{Performance analysis on workload $W1$, $W2$, $W3$}\label{fig:wcts} \end{figure} \noindent \textbf{Results Analysis:} \figref{wcts} illustrates the \wct analysis of \ksftm against the above mentioned STMs for the counter application under workloads $W1$, $W2$ and $W3$, varying the number of threads from 50 to 250. For \ksftm and \pkto, we chose $K$ as 5 and $C$ as 0.1, since the best results were obtained with these parameters. \ksftm performs the best on all three workloads, giving an average speedup on \wct by factors of 1.22, 1.89, 23.26 and 13.12 over \pkto, \svsftm, NOrec STM and ESTM, respectively. \figref{stamp}(a) shows the \wct analysis for KMEANS, while \figref{stamp}(b) shows it for LABYRINTH. In this analysis we did not consider ESTM, as the STAMP-integrated code for ESTM is not publicly available. For KMEANS, \ksftm performs 1.5 and 1.44 times better than \pkto and \svsftm, while NOrec performs 1.09 times better than \ksftm. This is because KMEANS has short running transactions with low contention, so the commit time of the transactions is also low. For LABYRINTH, on the other hand, \ksftm again performs the best: it is 1.14, 1.4 and 2.63 times better than \pkto, \svsftm and NOrec, respectively. This is because LABYRINTH has high contention with long running transactions, which results in longer commit times. \figref{stamp}(c) shows the stability of the \ksftm algorithm over time for the counter application. Here we fixed the number of threads to 32, $K$ as 5, $C$ as 0.1, and the number of \tobj{s} as 1000, along with a 5-second warm-up period on the $W1$ workload. Each thread invokes transactions until its time bound of 60 seconds expires. We measured the number of transactions committed over time in increments of 5 seconds. The experiment shows that \ksftm is stable over time, which supports the claim that \ksftm's performance will be sustained over longer durations.
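The \wct metric described in the methodology can be computed from per-transaction timing logs; a minimal sketch, assuming a hypothetical log format mapping each transaction to its first-invocation and final-commit times:

```python
def wct_of_run(log):
    """Worst-case time of one run: the largest (commit - first_invocation)
    over all transactions. Aborted incarnations fall inside that interval,
    so they are counted automatically.
    `log` maps a transaction id to (first_invocation_time, commit_time)."""
    return max(commit - first for first, commit in log.values())

def avg_wct(runs):
    """Average the per-run wct over several runs (ten in the experiments
    above) to damp the effect of outliers."""
    return sum(wct_of_run(r) for r in runs) / len(runs)
```

For example, a run in which transaction 2 is first invoked at time 1 and only commits at time 9 has a \wct of 8, regardless of how many of its incarnations aborted in between.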
\vspace{1mm} \begin{figure} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{figs/stamp1.pdf} \vspace{-.2cm} \caption{Performance analysis on KMEANS, LABYRINTH and KSFTM's Stability }\label{fig:stamp} \end{figure} Maintaining multiple versions to increase performance and decrease the number of aborts leads to the creation of many versions that are no longer of any use and hence occupy space. Such garbage versions need to be reclaimed, so we devised a garbage-collection scheme for these unwanted versions. This technique conserves memory and, in turn, improves performance, since transactions no longer traverse garbage versions unnecessarily. We use a global list, shared across all transactions, that keeps track of all the live transactions in the system; we call this list \livel. Each transaction creates its entry in \livel at the beginning of its life cycle. Under the optimistic approach of STM, each transaction performs its updates on the shared memory in the $\tryc$ phase. In this phase, a transaction performs some validations and, if all of them succeed, creates versions of the corresponding \tobj{s} in the shared memory.
While creating a version, each transaction checks, using the \livel{} data structure, whether it is the live transaction with the least timestamp in the system. If so, the transaction deletes all the existing versions of that \tobj{} and creates one of its own; otherwise, it performs no garbage collection and proceeds to create a new version of the next \tobj{} in its write set, if any. \figref{ksftm} compares the three variants of \ksftm (\mvsftm, \mvsftmgc, and \ksftm) and \figref{pkto} the three variants of \pkto (\pmvto, \pmvtogc, and \pkto) on workloads $W1$, $W2$ and $W3$. \ksftm outperforms \mvsftm and \mvsftmgc by factors of 2.1 and 1.5, respectively. Similarly, \pkto outperforms \pmvto and \pmvtogc by factors of 2 and 1.35. These results show that maintaining a finite number of versions per \tobj performs better than maintaining infinite versions, even with garbage collection over the infinite versions. \begin{figure} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{figs/ksftm.pdf} \caption{Time comparison among variants of \ksftm}\label{fig:ksftm} \end{figure} \begin{figure} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{figs/pkto.pdf} \caption{Time comparison among variants of \pkto}\label{fig:pkto} \end{figure} \begin{figure} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{figs/aborts.pdf} \caption{Abort Count on workload $W1, W2, W3$}\label{fig:aborts} \end{figure} \begin{figure}[H] \captionsetup{justification=centering} \includegraphics[width=\linewidth]{figs/optimalC.pdf} \caption{Best value of K and optimal value of $C$ for \ksftm}\label{fig:optimalC} \end{figure} \noindent \textbf{Comparison on the basis of
Abort count:} \figref{aborts} shows the abort-count comparison of \ksftm with \pkto, ESTM, NOrec, MVTO, and \svsftm across workloads $W1$, $W2$, and $W3$. The numbers of aborts in ESTM and NOrec are high compared to all the other STM algorithms, while the remaining algorithms (\ksftm, \pkto, MVTO, \svsftm) differ only marginally among themselves. \vspace{.4cm} \noindent \textbf{Best value of $K$ and optimal value of constant \textit{C}:} To identify the best value of $K$ for \ksftm, we ran our experiment varying $K$ while keeping the number of threads at 64 on workload $W1$, and obtained the optimal value of $K$ in \ksftm as 5, as shown in \figref{optimalC}.(a) for the counter application. Similarly, we obtained the best value of $K$ as 5 for \pkto with the same parameters. $C$ is a constant used to calculate the $WTS$ of a transaction, i.e., $\twts{i} = \tcts{i} + C * (\tcts{i} - \tits{i})$, where $C$ is any constant greater than 0. Running our experiments on workload $W1$ with 64 threads and the other parameters as defined in the methodology of \secref{exp}, we obtained the best value of $C$ as 0.1 for the counter application. Experimental results are shown in \figref{optimalC} (b). \input{ap-counters.tex} \subsection{Modifying \pkto to Obtain \sfkv: Trading Correctness for \emph{Starvation-Freedom}} \label{subsec:sfmvto} Our goal is to revise the \pkto algorithm to ensure that \emph{\stfdm} is satisfied. Specifically, we want the transaction with the lowest \its to eventually commit. Once this happens, the next non-committed transaction with the lowest \its will commit. Thus, by induction, every transaction will eventually commit. \noindent \textbf{Key Insights For Eliminating Starvation in \pkto:} To identify the necessary revision, we first focus on the effect of this algorithm on two transactions, say $T_{50}$ and $T_{60}$, with \cts values 50 and 60, respectively. Furthermore, for the sake of discussion, assume that these transactions only read and write \tobj $x$, and that the latest version of $x$ has $ts$ $40$. Each transaction first reads $x$ and then writes $x$ (as part of the $\tryc{}$ operation). We use $r_{50}$ and $r_{60}$ to denote their read operations and $w_{50}$ and $w_{60}$ to denote their $\tryc$ \op{s}. Here, a read operation will not fail, as a previous version is present. Now, there are six possible permutations of these statements.
We identify these permutations, and the action to be taken for each, in Table \ref{tbl:sfillus}. In all these permutations, the read \op{s} of a transaction come before its write \op{s}, since writes to the shared memory occur only in the $\tryc$ \op (due to optimistic execution), which is the final \op of a transaction. \begin{table}[ht] \centering \begin{tabular}{|c|l|l|} \hline \multicolumn{1}{|c|}{S. No} & \multicolumn{1}{c|}{Sequence} & \multicolumn{1}{c|}{Action} \\ \hline 1. & $r_{50}, w_{50}, r_{60}, w_{60}$ & $T_{60}$ reads the version written by $T_{50}$. No conflict. \\ \hline 2. & $r_{50}, r_{60}, w_{50}, w_{60}$ & Conflict detected at $w_{50}$. Either abort $T_{50}$ or $T_{60}$. \\ \hline 3. & $r_{50}, r_{60}, w_{60}, w_{50}$ & Conflict detected at $w_{50}$. Hence, abort $T_{50}$. \\ \hline 4. & $r_{60}, r_{50}, w_{60}, w_{50}$ & Conflict detected at $w_{50}$. Hence, abort $T_{50}$. \\ \hline 5. & $r_{60}, r_{50}, w_{50}, w_{60}$ & Conflict detected at $w_{50}$. Either abort $T_{50}$ or $T_{60}$. \\ \hline 6. & $r_{60}, w_{60}, r_{50}, w_{50}$ & Conflict detected at $w_{50}$. Hence, abort $T_{50}$.\\ \hline \end{tabular} \caption{Permutations of operations} \label{tbl:sfillus} \end{table} From this table, it can be seen that when a conflict is detected, in some cases algorithm \pkto \textit{must} abort $T_{50}$. When both transactions are live, \pkto has the option of aborting either transaction, depending on their \its. If $T_{60}$ has the lower \its, then in no case is \pkto required to abort $T_{60}$. In other words, it is possible to ensure that the transaction with the lowest \its and the highest \cts is never aborted. Although in this example we considered only one \tobj, this logic extends to cases with multiple \op{s} and \tobj{s}. Next, consider \stref{notfound} of the \pkto algorithm. Suppose a transaction $T_i$ wants to read a \tobj but does not find a version with a timestamp smaller than its own. In this case, $T_i$ has to abort. But if $T_i$ has the highest \cts, then it will certainly find a version to read from, because the timestamp of a version corresponds to the timestamp of the transaction that created it.
If $T_i$ has the highest \cts value, then all versions of all the \tobj{s} have a timestamp smaller than the \cts of $T_i$. This reinforces the above observation that a transaction with the lowest \its and highest \cts is not aborted. To summarize, algorithm $\pkto$ has an in-built mechanism to protect transactions with the lowest \its and highest \cts value. However, this is different from what we need: we want to protect a transaction $T_i$ with the lowest $\its$ value. One way to ensure this is to observe that if the transaction $T_i$ with the lowest \its keeps getting aborted, it will eventually attain the highest \cts. Once this happens, \pkto ensures that $T_i$ cannot be aborted any further. In this way, we can ensure the liveness of all transactions. \noindent \textbf{The working of the \emph{\stf} algorithm:} To realize this idea and achieve \emph{\stfdm}, we consider another variation of \mvto, \emph{Starvation-Free MVTO} or \emph{\sfmv}. We specifically consider \sfmv with $K$ versions, denoted \emph{\sfkv}. Instead of using the current time $\tcts{i}$, a transaction $T_i$ uses a potentially higher timestamp, its \emph{Working Timestamp - \wts} or $\twts{i}$. Specifically, it adds $C * (\tcts{i} - \tits{i})$ to $\tcts{i}$, i.e., \begin{equation} \label{eq:wtsf} \twts{i} = \tcts{i} + C * (\tcts{i} - \tits{i}); \end{equation} where $C$ is any constant greater than 0. In other words, when the transaction $T_i$ is issued for the first time, $\twts{i}$ is the same as $\tcts{i}(= \tits{i})$. However, as the transaction keeps getting aborted, the drift between $\tcts{i}$ and $\twts{i}$ increases.
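The working-timestamp formula above can be illustrated numerically; a small sketch (with the default $C = 0.1$ used in our experiments; the specific timestamps are hypothetical) showing how retries inflate \wts until a starving transaction overtakes newer ones:

```python
def wts(cts, its, C=0.1):
    """Working timestamp: wts = cts + C * (cts - its), with C > 0.
    On the first incarnation cts == its, so wts == cts; on every retry
    the transaction gets a fresh, larger cts while its stays fixed,
    so the inflation term C * (cts - its) grows."""
    assert C > 0 and cts >= its
    return cts + C * (cts - its)

# First incarnation: no inflation.
first = wts(50, 50)        # equals cts = 50
# After repeated aborts, cts has advanced to 200 while its stays 50:
starving = wts(200, 50)    # 200 + 0.1 * (200 - 50) = 215.0
# A brand-new transaction arriving at time 210:
fresh = wts(210, 210)      # equals its cts = 210
```

Here the starving transaction's \wts (215.0) exceeds the newcomer's (210) even though the newcomer has the larger \cts, which is exactly how \sfkv lets the oldest transaction win.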
The value of $\twts{i}$ increases with each retry. Furthermore, in the \sfkv algorithm, \cts is replaced with \wts in the $\tread$, $\twrite$ and $\tryc$ \op{s} of \pkto. In \sfkv, a transaction $T_i$ uses $\twts{i}$ to read a version in $\tread$. Similarly, $T_i$ uses $\twts{i}$ in $\tryc$ to find the appropriate previous version (in \stref{notfound}) and to verify whether $T_i$ has to be aborted (in \stref{verify}). Along the same lines, once $T_i$ decides to commit and creates new versions of $x$, the timestamp of $x$ will be the same as $\twts{i}$ (in \stref{commit}). Thus the timestamp of every version in $\vlist$ will be the \wts of the transaction that created it. \noindent Now, we have the following property about the \sfkv algorithm. \begin{property} \label{prop:sfmv-live} \sfkv algorithm ensures \stfdm. \end{property} While the proof of this property is somewhat involved, the key idea is that the transaction with the lowest \its value, say $T_{low}$, will eventually have a higher \wts value than all the other transactions in the system. Moreover, after a certain duration, any \textit{new} transaction arriving in the system (i.e., one whose $\its$ value is sufficiently higher than that of $T_{low}$) will have a lower $\wts$ value than $T_{low}$. This ensures that $T_{low}$ will not be aborted. In fact, this property can be shown to hold for \sfmv as well. \noindent \textbf{The drawback of \sfkv:} Although \sfkv satisfies starvation-freedom, it unfortunately does not satisfy strict-serializability; specifically, it violates the real-time requirement. \pkto uses \cts for its working while \sfkv uses \wts. The \cts is close to the \rt execution of transactions, whereas the \wts of a transaction $T_i$ is artificially inflated based on its \its and might be much larger than its \cts.
\begin{figure} \centerline{ \scalebox{0.6}{\input{figs/sfmv-correct.pdf_t}}} \captionsetup{justification=centering} \caption{Correctness of \sfkv Algorithm} \label{fig:sfmv-correct} \end{figure} We illustrate this with an example. Consider the history $H1$ shown in \figref{sfmv-correct}: $r_1(x,0) r_2(y,0) w_1(x, 10)\\ C_1 w_2(x, 20) C_2 r_3(x, 10) r_3(z, 25) C_3$, with \cts 50, 60 and 80 and \wts 50, 100 and 80 for $T_1, T_2, T_3$, respectively. Here $T_1, T_2$ are ordered before $T_3$ in \rt, with $T_1 \prec_{H1}^{RT} T_3$ and $T_2 \prec_{H1}^{RT} T_3$, although $T_2$ has a higher \wts than $T_3$. As per the \sfkv algorithm, $T_3$ reads $x$ from $T_1$, since $T_1$ has the largest \wts (50) smaller than $T_3$'s \wts (80). It can be verified that \sfkv can generate such a history. But this history is not \stsble: the only serial order that is equivalent to $H1$ and \legal is $T_1 T_3 T_2$, which violates the \rt order, as $T_3$ is serialized before $T_2$ while in $H1$, $T_2$ completes before $T_3$ has begun. Since $H1$ is not \stsble, it is not \lopq either. Naturally, this drawback extends to \sfmv as well. \subsection{Design of \ksftm: Regaining Correctness while Preserving \emph{Starvation-Freedom}} \label{subsec:ksftm} In this section, we discuss how the principles of \pkto and \sfkv can be combined to obtain \ksftm, which provides both correctness (strict-serializability and \lopq) and \emph{\stfdm}. To this end, we first recall why the initial algorithm, \pkto, satisfies strict-serializability: it uses \cts, which is closely associated with real time, to create the ordering among committed transactions. In contrast, \sfkv uses \wts, which may not correspond to real time, as \wts may be significantly larger than \cts, as shown by $H1$ in \figref{sfmv-correct}.
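That $H1$ admits exactly one legal serial order, and that this order violates real time, can be checked mechanically. A sketch (the read of $z$ is omitted, since no transaction in $H1$ writes $z$; initial values are taken to be 0):

```python
from itertools import permutations

# Each transaction: (reads, writes). `reads` maps object -> value observed,
# `writes` maps object -> value installed. Initial values are 0.
H1 = {
    "T1": ({"x": 0}, {"x": 10}),
    "T2": ({"y": 0}, {"x": 20}),
    "T3": ({"x": 10}, {}),
}

def legal_serial_orders(txns):
    """Return the serial orders in which every read observes the most
    recent preceding write (i.e., the legal serializations)."""
    legal = []
    for order in permutations(txns):
        state, ok = {}, True
        for t in order:
            reads, writes = txns[t]
            if any(state.get(obj, 0) != val for obj, val in reads.items()):
                ok = False  # a read does not match the latest write
                break
            state.update(writes)
        if ok:
            legal.append(order)
    return legal
```

Running \texttt{legal\_serial\_orders(H1)} yields only the order $T_1 T_3 T_2$, in which $T_3$ precedes $T_2$ even though $T_2$ completes before $T_3$ begins in $H1$; hence $H1$ is not \stsble.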
One straightforward way to modify \sfkv is to delay a committing transaction $T_i$ with \wts value $\twts{i}$ until the real time (\gtcnt) catches up to $\twts{i}$. This would ensure that the \wts value also matches real time, thereby guaranteeing \stsbty. However, this is unacceptable, as in practice it would require transaction $T_i$ to lock all the variables it plans to update and wait, adversely affecting the performance of the STM system. We can instead allow a transaction $T_i$ to commit before real time has caught up with $\twts{i}$, provided this does not violate the \rt ordering. Thus, to ensure that the notion of \rt order is respected by transactions in the course of their execution in \sfkv, we add extra time constraints, using the idea of timestamp ranges. This notion of timestamp ranges was first used by Riegel et al. \cite{Riegel+:LSA:DISC:2006} in the context of multi-version STMs. Several other researchers have used this idea since then, such as Guerraoui et al. \cite{Guer+:disc:2008}, Crain et al. \cite{Crain+:RI_VWC:ICA3PP:2011}, and Aydonat \& Abdelrahman \cite{AydAbd:RCC:TPDS:2012}. Thus, in addition to \its, \cts and \wts, each transaction $T_i$ maintains a timestamp range: \emph{Transaction Lower Timestamp Limit} or $\ttltl{i}$, and \emph{Transaction Upper Timestamp Limit} or $\ttutl{i}$. When a transaction $T_i$ begins, $\ttltl{i}$ is assigned $\tcts{i}$ and $\ttutl{i}$ is assigned the largest possible value, which we denote as infinity. When $T_i$ executes a \mth $m$ in which it reads a version of a \tobj $x$ or creates a new version of $x$ in $\tryc$, $\ttltl{i}$ is incremented while $\ttutl{i}$ is decremented \footnote{Technically $\infty$, which is assigned to $\ttutl{i}$, cannot be decremented. But here, as mentioned earlier, we use $\infty$ to denote the largest possible value that can be represented in a system.}. We need to serialize all the transactions based on their \wts while maintaining their \rt order. On executing $m$, $T_i$ is ordered w.r.t.\ the other transactions that have created a version of $x$, based on increasing order of \wts. For every transaction $T_j$ that has also created a version of $x$ and whose $\twts{j}$ is less than $\twts{i}$, $\ttltl{i}$ is incremented such that $\ttutl{j}$ is less than $\ttltl{i}$; note that all such $T_j$ are serialized before $T_i$. Similarly, for every transaction $T_k$ that has created a version of $x$ and whose $\twts{k}$ is greater than $\twts{i}$, $\ttutl{i}$ is decremented such that it becomes less than $\ttltl{k}$; all such $T_k$ are serialized after $T_i$. Note that in the above discussion, $T_i$ need not have created a version of $x$; it could also have read the version of $x$ created by $T_j$. After the increments of $\ttltl{i}$ and the decrements of $\ttutl{i}$, if $\ttltl{i}$ turns out to be greater than $\ttutl{i}$, then $T_i$ is aborted.
Intuitively, this implies that $T_i$'s \wts and \rt orders are out of \emph{sync} and cannot be reconciled. Finally, when a transaction $T_i$ commits: (1) $T_i$ records its commit time (or $\ct_i$) by getting the current value of \gtcnt and incrementing it by $\incv$, which is any value greater than or equal to 1. Then $\ttutl{i}$ is set to $\ct_i$ if it is not already less than it. Now suppose $T_i$ occurs in \rt before some other transaction $T_k$ but does not have any conflict with it. This step ensures that $\ttutl{i}$ remains less than $\ttltl{k}$ (which is initialized with $\tcts{k}$); (2) Ensure that $\ttltl{i}$ is still less than $\ttutl{i}$. Otherwise, $T_i$ is aborted. We illustrate this technique with the history $H1$ shown in \figref{sfmv-correct}. When $T_1$ starts, $\tcts{1} = 50, \ttltl{1} = 50, \ttutl{1}=\infty$. Now when $T_1$ commits, suppose $\gtcnt$ is 70. Hence, $\ttutl{1}$ reduces to 70. Next, when $T_2$ commits, suppose $\ttutl{2}$ reduces to 75 (the current value of $\gtcnt$). As $T_1, T_2$ have accessed a common \tobj $x$ in a conflicting manner, $\ttltl{2}$ is incremented to a value greater than $\ttutl{1}$, say 71. Next, when $T_3$ begins, $\ttltl{3}$ is assigned $\tcts{3}$, which is 80, and $\ttutl{3}$ is initialized to $\infty$. When $T_3$ reads 10 from $T_1$, i.e., $r_3(x, 10)$, $\ttutl{3}$ is reduced to a value less than $\ttltl{2} (= 71)$, say 70. But $\ttltl{3}$ is already at 80. Hence, the limits of $T_3$ have crossed, causing $T_3$ to abort. The resulting history, consisting of only the committed transactions $T_1 T_2$, is \stsble. Based on this idea, we next develop a variation of \sfkv, the \emph{K-version Starvation-Free STM System} or \emph{\ksftm}. To explain this algorithm, we first describe the structure of the versions of a \tobj used. It is a slight variation of the \tobj used in the \pkto algorithm.
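The limit bookkeeping on $H1$ can be replayed with the following sketch (a simplified Python model; it hard-codes the commit times used in the example above and ignores locking, so it is an illustration rather than the algorithm itself):

```python
INF = float('inf')

class Txn:
    """Minimal model of a transaction's timestamp limits."""
    def __init__(self, cts):
        self.cts = cts
        self.tltl = cts    # lower limit starts at cts
        self.tutl = INF    # upper limit starts unbounded
        self.aborted = False

    def crossed(self):
        return self.tltl > self.tutl

# Replay of history H1 (cts values 50, 60, 80).
t1, t2, t3 = Txn(50), Txn(60), Txn(80)

t1.tutl = min(t1.tutl, 70)           # T1 commits when gtCntr = 70
t2.tutl = min(t2.tutl, 75)           # T2 commits when gtCntr = 75
t2.tltl = max(t2.tltl, t1.tutl + 1)  # conflict on x orders T1 before T2

# T3 reads x from T1, so T3 must precede T2: tutl3 < tltl2.
t3.tutl = min(t3.tutl, t2.tltl - 1)
if t3.crossed():
    t3.aborted = True                # limits crossed: wts and rt orders clash
```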
It consists of: (1) a timestamp, $ts$, which is the \wts of the transaction that created this version (and not the \cts as in \pkto); (2) the value of the version; (3) a list, called \rlist{}, consisting of the ids (these could be \cts values) of the transactions that read from this version; (4) the version \rt timestamp or \vt, which is the \utl of the transaction that created this version. Thus a version carries both the \wts and the \utl of the transaction that created it. Now, we describe the main idea behind the $\begt$, $\tread$, $\twrite$ and $\tryc{}$ \op{s} of a transaction $T_i$; the algorithm is an extension of \pkto. Note that as per our notation, $i$ represents the \cts of $T_i$. \noindent \textbf{$\begt(t)$:} A unique timestamp $ts$ is allocated to $T_i$, which is its \cts ($i$ from our assumption) and is generated by atomically incrementing the global counter $\gtcnt$. If the input $t$ is null, then $\tcts{i} = \tits{i} = ts$, as this is the first \inc of this transaction. Otherwise, the non-null value of $t$ is assigned to $\tits{i}$. Then, \wts is computed by \eqnref{wtsf}. Finally, \ltl and \utl are initialized: $\ttltl{i} = \tcts{i}$, $\ttutl{i} = \infty$. \noindent \textbf{$\tread(x)$:} Transaction $T_i$ reads from the version of $x$ with timestamp $j$ such that $j$ is the largest timestamp less than $\twts{i}$ (among the versions of $x$), i.e., there exists no version $k$ such that $j<k<\twts{i}$. If no such $j$ exists, then $T_i$ is aborted. Otherwise, after reading this version of $x$, $T_i$ is recorded in $x[j]$'s \rlist. Then we modify \ltl, \utl as follows: \begin{enumerate} \item The version $x[j]$ was created by a transaction with \wts $j$, which is less than $\twts{i}$. Hence, $\ttltl{i} = max(\ttltl{i}, x[j].$\vt$ + 1)$. \item Let $p$ be the timestamp of the smallest version larger than $\twts{i}$. Then $\ttutl{i} = min(\ttutl{i}, x[p].\vt - 1)$. \item After these steps, abort $T_i$ if \ltl and \utl have crossed, i.e., $\ttltl{i} > \ttutl{i}$.
\end{enumerate} \noindent \textbf{$\twrite(x,v)$:} $T_i$ stores this write of value $v$ to $x$ locally in its $wset\xspace_i$. \noindent \textbf{$\tryc:$} This \op{} consists of multiple steps: \begin{enumerate} \item Before $T_i$ can commit, we need to verify that any version it creates is updated consistently. $T_i$ creates a new version with timestamp $\twts{i}$. Hence, we must ensure that any transaction that read a previous version is unaffected by this new version. Additionally, creating this version requires an update of the \ltl and \utl of $T_i$ and of other transactions whose read-write sets overlap with that of $T_i$. Thus, $T_i$ first validates each \tobj{} $x$ in its $wset\xspace{}$ as follows: \label{step:kverify} \begin{enumerate} \item $T_i$ finds the version of $x$ with timestamp $j$ such that $j$ is the largest timestamp less than $\twts{i}$ (as in $\tread$). If there exists no version of $x$ with a timestamp less than $\twts{i}$, then $T_i$ is aborted. This is similar to \stref{notfound} of the $\tryc$ of the \pkto algorithm. \label{step:k-look} \item Among all the transactions that have previously read from $j$, suppose there is a transaction $T_k$ such that $j<\twts{i}<\twts{k}$. Then (i) if $T_k$ has already committed, $T_i$ is aborted; (ii) if $T_k$ is live and $\tits{k}$ is less than $\tits{i}$, then again $T_i$ is aborted; (iii) if $T_k$ is still live with $\tits{i}$ less than $\tits{k}$, then $T_k$ is aborted. This step is similar to \stref{verify} of the $\tryc$ of the \pkto algorithm. \label{step:k-verify} \item Next, we must ensure that $T_i$'s \ltl and \utl are updated correctly w.r.t.\ other concurrently executing transactions. To achieve this, we adjust \ltl, \utl as follows: (i) Let $j$ be the $ts$ of the largest version smaller than $\twts{i}$. Then $\ttltl{i} = max(\ttltl{i}, x[j].\vt + 1)$. Next, for each reading transaction $T_r$ in $x[j].\rlist$, we again set $\ttltl{i} = max(\ttltl{i}, \ttutl{r} + 1)$.
(ii) Similarly, let $p$ be the $ts$ of the smallest version larger than $\twts{i}$. Then, $\ttutl{i} = min(\ttutl{i}, x[p].\vt - 1)$. (Note that we do not have to check the transactions in the \rlist of $x[p]$, as those transactions will have \ltl higher than $x[p].\vt$ due to $\tread$.) (iii) Finally, we get the commit time of this transaction from \gtcnt: $\ct_i = \gtcnt.add\&Get(\incv)$, where $\incv$ is any constant $\geq 1$. Then, $\ttutl{i} = min(\ttutl{i}, \ct_i)$. After performing these updates, abort $T_i$ if \ltl and \utl have crossed, i.e., $\ttltl{i} > \ttutl{i}$. \label{step:ktk-upd} \end{enumerate} \item After performing the tests of \stref{kverify} over each \tobj{} $x$ in $T_i$'s $wset\xspace$, if $T_i$ has not yet been aborted, we proceed as follows: for each $x$ in $wset\xspace_i$, create a \vtup $\langle \twts{i}, wset\xspace_i.x.v, null,\\ \ttutl{i} \rangle$. In this tuple, $\twts{i}$ is the timestamp of the new version; $wset\xspace_i.x.v$ is the value of $x$ in $T_i$'s $wset\xspace$; the \rlist of the $\vtup$ is $null$; $\vt$ is $\ttutl{i}$ (in fact, it can be any value between $\ttltl{i}$ and $\ttutl{i}$). Update the $\vlist$ of each \tobj $x$ similar to \stref{updt} of the $\tryc$ of \pkto. \item Transaction $T_i$ is then committed. \label{step:kcommit} \end{enumerate} \noindent \stref{ktk-upd}.(iii) of $\tryc$ ensures that the \rt order is maintained between transactions that are not in conflict.
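The validation of \stref{k-verify} above can be sketched as follows (a simplified Python model of that step only; it omits locks and the \ltl/\utl adjustments, and all names are ours):

```python
def kverify(readers, wts_i, its_i):
    """Validate T_i against the readers of the previous version j
    (j < wts_i).  Each reader is a dict with wts, its and status.
    Returns a decision plus the victims T_i is allowed to abort."""
    victims = []
    for tk in readers:
        if tk['wts'] <= wts_i:
            continue                      # no j < wts_i < wts_k conflict
        if tk['status'] == 'committed':   # case (i): T_i must abort
            return 'abort_self', []
        if tk['its'] < its_i:             # case (ii): live T_k has priority
            return 'abort_self', []
        victims.append(tk)                # case (iii): T_i aborts T_k
    return 'ok', victims

readers = [{'wts': 90, 'its': 7, 'status': 'live'},
           {'wts': 60, 'its': 1, 'status': 'committed'}]
decision, victims = kverify(readers, wts_i=80, its_i=5)
```

Note how the \its comparison in cases (ii) and (iii) is what gives older (lower-\its) incarnations priority, the ingredient behind \stfdm.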
It can be seen that locks have to be used to ensure that all these \mth{s} execute in a \lble manner (i.e., atomically). \input{ap-dsksftm} We obtain the following properties of \ksftm. For simplicity, we assumed $C$ and $\incv$ to be 0.1 and 1, respectively, in our analysis. But the proof and the analysis hold for any value greater than 0. \begin{theorem} \label{thm:ksftm-lo} Any history generated by \ksftm is strict-serializable and \lopq. \end{theorem} \begin{theorem} \label{thm:ksftm-live} \ksftm algorithm ensures \stfdm. \end{theorem} \noindent As explained in the description of \propref{sfmv-live}, the proof of this property is somewhat involved. As expected, this proof can be extended to \mvsftm as well. \noindent \textbf{Garbage Collection:} Having described the \emph{\stf} algorithm, we now describe how garbage collection can be performed on the unbounded variant, \mvsftm, to achieve \mvsftmgc. This is achieved by deleting every non-latest version (i.e., a version for which there exists another version with a greater $ts$) of each \tobj whose timestamp $ts$ is less than the \cts of the smallest live transaction. It must be noted that \mvsftm (\ksftm) works with \wts, which is greater than or equal to \cts for any transaction. Interestingly, the same garbage collection principle can be applied to \pmvto to achieve \pmvtogc. To identify the transaction with the smallest \cts among live transactions, we maintain a set of all the live transactions, \livel. When a transaction $T_i$ begins, its \cts is added to this \livel. When $T_i$ terminates (either commits or aborts), $T_i$ is deleted from this \livel. \subsection{Motivation for Starvation Freedom in Multi-Version Systems } \label{sec:sftm} In this section, we first describe the starvation-freedom solution used in the single-version setting, i.e., the \sftm algorithm, and then its drawback.
\subsubsection{Illustration of \sftm} \label{subsec:sftm-main} The forward-oriented optimistic concurrency control protocol (\focc) is a commonly used optimistic algorithm in databases \cite[Chap 4]{WeiVoss:2002:Morg}. In fact, several STM systems are also based on this idea. In a typical STM system (as in database optimistic concurrency control algorithms), a transaction's execution can be divided into two phases: a \emph{\rwph} and a \emph{\tryph} (also referred to as the validation phase in databases). The various algorithms differ in how the \tryph{} executes. Let the write-set or wset\xspace{} and read-set or rset\xspace{} of a transaction $t_i$ denote the sets of \tobj{s} written \& read by $t_i$. In \focc{}, a transaction $t_i$ in its \tryph{} is validated against all live transactions that are in their \rwph{} as follows: $\langle wset\xspace(t_i) \cap (\forall t_j: rset\xspace^{n}(t_j)) = \Phi \rangle$. This implies that the wset\xspace{} of $t_i$ cannot have any conflict with the current rset\xspace{} of any transaction $t_j$ in its \rwph. Here $rset\xspace^{n}(t_j)$ denotes the rset\xspace{} of $t_j$ up to the point of validation of $t_i$. If there is a conflict, then either $t_i$ or $t_j$ (all transactions conflicting with $t_i$) is aborted. A commonly used approach in databases is to abort $t_i$, the validating transaction. In \sftm{}, we use \emph{\ts{s}} which are monotonically increasing. We implement the \ts{s} using atomic counters. Each transaction $t_i$ has two time-stamps: (i) \emph{current time-stamp or \cts}: this is a unique \ts{} allotted to $t_i$ when it begins; (ii) \emph{initial time-stamp or \its}: this is the same as the \cts{} when a transaction $t_i$ starts for the first time. When $t_i$ aborts and re-starts later, it gets a new \cts{} but retains its first \cts{} as its \its. The value of \its{} is thus retained across aborts.
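The \focc{} validation condition can be sketched as follows (an illustrative Python model; the names are ours):

```python
def focc_validate(wset_i, live_rsets):
    """FOCC forward validation: the committing transaction's write-set
    must be disjoint from the current read-set of every transaction
    still in its read-write phase."""
    return all(wset_i.isdisjoint(rset) for rset in live_rsets)

# t_i wants to commit a write to x while a live t_j has so far read {x, y}:
conflict = not focc_validate({'x'}, [{'x', 'y'}])  # someone must abort
ok = focc_validate({'z'}, [{'x', 'y'}])            # disjoint: t_i commits
```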
For achieving starvation freedom, \sftm{} uses \its{} with a modification to \focc{} as follows: a transaction $t_i$ in its \tryph{} is validated against all other conflicting transactions, say $t_j$, which are in their \rwph. The \its{} of $t_i$ is compared with the \its{} of any such transaction $t_j$. If the \its{} of $t_i$ is smaller than the \its{} of all such $t_j$, then all such $t_j$ are aborted while $t_i$ is committed. Otherwise, $t_i$ is aborted. We show that \sftm{} satisfies \opty{} and \stf. \begin{theorem} \label{thm:sftm-correct} Any history generated by \sftm{} is \opq. \end{theorem} \begin{theorem} \label{thm:sftm-stf} \sftm{} ensures \stfdm. \end{theorem} We prove the correctness by showing that the conflict graph \cite[Chap 3]{WeiVoss:2002:Morg}, \cite{KuzSat:NI:ICDCN:2014} of any history generated by \sftm{} is acyclic. We show \stfdm{} by showing that for each transaction $t_i$ there eventually exists a global state in which it has the smallest \its. \figref{sftmex} shows a sample execution of \sftm, comparing the execution of \focc{} with that of \sftm. The execution on the left corresponds to \focc, while the execution on the right is that of \sftm{} for the same input. It can be seen that each transaction has two \ts{s} in \sftm, corresponding to \cts{} and \its{} respectively. Thus, transaction $T_{1,1}$ implies that its \cts{} and \its{} are $1$. In this execution, transaction $T_3$ executes the read \op{} $r_3(z)$ and is aborted due to a conflict with $T_2$. The same happens with $T_{3,3}$. Transaction $T_5$ is the re-execution of $T_3$. With \focc{}, $T_5$ again aborts due to a conflict with $T_4$. In the case of \sftm{}, $T_{5,3}$, which is the re-execution of $T_{3,3}$, has the same \its{} $3$. Hence, when $T_{4,4}$ validates in \sftm, it aborts as $T_{5,3}$ has a lower \its. Later, $T_{5,3}$ commits.
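The modified validation rule can be sketched as follows (an illustrative Python model; it returns only the decision and abstracts away the actual abort mechanics):

```python
def sftm_validate(its_i, conflicting_its):
    """Starvation-free rule: the validating transaction wins only if its
    its is smaller than that of every conflicting live transaction."""
    if all(its_i < its_j for its_j in conflicting_its):
        return 'abort_others'   # t_i commits; conflicting t_j are aborted
    return 'abort_self'         # some conflicting t_j has priority

# T_{4,4} validates while the conflicting T_{5,3} (its = 3) is live:
first = sftm_validate(4, [3])   # T_{4,4} aborts, T_{5,3} survives
# Later T_{5,3} validates against a younger conflicting transaction:
second = sftm_validate(3, [6])  # T_{5,3} wins and commits
```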
It can be seen that \its{s} prioritize the transactions under conflict: the transaction with the lower \its{} is given higher priority. \begin{figure}[tbph] \centerline{\scalebox{0.5}{\input{figs/sftm.pdf_t}}} \caption{Sample execution of \sftm} \label{fig:sftmex} \end{figure} \subsubsection{Drawback of \sftm} \label{subsec:sftm-drawback} \figref{stmvl1} represents the history H: $r_1(x,0) r_1(y,0) w_2(x,10) w_3(y,15) a_2 a_3 c_1$. It has three transactions, $T_1$, $T_2$ and $T_3$. $T_1$ has the lowest timestamp and becomes slow after performing its reads. $T_2$ and $T_3$ want to write to $x$ and $y$ respectively, but when they enter the validation phase, they conflict with the as yet uncommitted reads $r_1(x)$ and $r_1(y)$, and so $T_2$ and $T_3$ get aborted. However, when multiple versions are used, both $T_2$ and $T_3$ can commit, and $T_1$ can still read from $T_0$. The equivalent serial history is $T_1 T_2 T_3$. \begin{figure}[H] \center \scalebox{0.55}{\input{figs/abc.pdf_t}} \caption{Pictorial representation of execution under SFTM} \label{fig:stmvl1} \end{figure} \input{ap-dssftm} \section{Proof of Liveness} \label{sec:ap-liveness} \paragraph{Proof Notations:} Let $\gen{\ksftm}$ consist of all the histories accepted by the \ksftm{} algorithm. In the following sub-section, we only consider histories that are generated by \ksftm unless explicitly stated otherwise. For simplicity, we only consider sequential histories in our discussion below. Consider a transaction $T_i$ in a history $H$ generated by \ksftm. Once it executes the \begt \mth, its \its, \cts, \wts values do not change. Thus, we denote them as $\tits{i}, \tcts{i}, \twts{i}$ respectively for $T_i$. In case the context of the history $H$ in which the transaction executes is important, we denote these variables as $\htits{i}{H}, \htcts{i}{H}, \htwts{i}{H}$ respectively. The other variables that a transaction maintains are: \ltl, \utl, \lock, \val, \stat. These values change as the execution proceeds.
Hence, we denote them as: $\htltl{i}{H}, \htutl{i}{H}, \htlock{i}{H}, \htval{i}{H}, \htstat{i}{H}$. These represent the values of \ltl, \utl, \lock, \val, \stat after the execution of the last event in $H$. Depending on the context, we sometimes ignore $H$ and denote them only as: $\tlock{i}, \tval{i}, \tstat{i}, \ttltl{i}, \ttutl{i}$. We approximate the system time with the value of $\tcntr$. We denote the \syst of a history $H$ as the value of $\tcntr$ immediately after the last event of $H$. Further, we also assume that the value of $C$ is 1 in our arguments. But it can be seen that the proof works for any value greater than 1 as well. The application invokes transactions in such a way that if the current transaction $T_i$ aborts, it invokes a new transaction $T_j$ with the same \its. We say that $T_i$ is an \emph{\inc} of $T_j$ in a history $H$ if $\htits{i}{H} = \htits{j}{H}$. Thus, multiple \inc{s} of a transaction $T_i$ get invoked by the application until an \inc finally commits. To capture this notion of multiple transactions with the same \its, we define the \emph{\incset} (incarnation set) of $T_i$ in $H$ as the set of all the transactions in $H$ which have the same \its as $T_i$, including $T_i$ itself. Formally, \begin{equation*} \incs{i}{H} = \{T_j|(T_i = T_j) \lor (\htits{i}{H} = \htits{j}{H})\} \end{equation*} Note that from this definition of \incset, we implicitly get that $T_i$ and all the transactions in its \incset of $H$ also belong to $H$. Formally, $\incs{i}{H} \subseteq \txns{H}$. The application invokes different incarnations of a transaction $T_i$ in such a way that as long as an \inc is live, it does not invoke the next \inc. It invokes the next \inc only after the current \inc has been aborted. Once an \inc of $T_i$ has committed, it cannot have any future \inc{s}. Thus, the application views all the \inc{s} of a transaction as a single \emph{\aptr}. We assign \emph{\incn{s}} to all the transactions that have the same \its.
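Since \cts strictly increases across \inc{s} while \its stays fixed, the \wts (defined by \eqnref{wtsf} as $\tcts{i} + C (\tcts{i} - \tits{i})$) grows without bound across reincarnations. A small sketch with $C = 1$ and illustrative \cts values makes this concrete (the numbers are ours, chosen only for illustration):

```python
C = 1  # the constant from the wts definition (any value > 0 works)

def wts(cts, its):
    """wts = cts + C * (cts - its): grows with every reincarnation,
    because cts increases while its stays fixed."""
    return cts + C * (cts - its)

# An application transaction with its = 5, reincarnated at cts 5, 9, 14:
samples = [wts(cts, 5) for cts in (5, 9, 14)]
# The sequence is strictly increasing and the gap over cts widens,
# so wts eventually exceeds any fixed bound -- the key to starvation-freedom.
```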
We say that a transaction $T_i$ starts \emph{afresh} if $\inum{i}$ is 1. We say that $T_i$ is the \ninc of $T_j$ if $T_j$ and $T_i$ have the same \its and $T_i$'s \incn is $T_j$'s \incn + 1. Formally, $\langle (\nexti{j} = T_i) \equiv (\tits{i} = \tits{j}) \land (\inum{i} = \inum{j} + 1)\rangle$. As mentioned, the objective of the application is to ensure that every \aptr eventually commits. Thus, the application views the entire \incset as a single \aptr (with all the transactions in the \incset having the same \its). We say that an \aptr has committed if a transaction in the corresponding \incset eventually commits. For $T_i$ in a history $H$, we denote this by a boolean value \incct (incarnation set committed), which implies that either $T_i$ or an \inc of $T_i$ has committed. Formally, we define $\inct{i}{H}$ as \begin{equation*} \inct{i}{H} = \begin{cases} True & (\exists T_j: (T_j \in \incs{i}{H}) \land (T_j \in \comm{H})) \\ False & \text{otherwise} \end{cases} \end{equation*} \noindent From the definition of \incct, we get the following observations \& lemmas about a transaction $T_i$. \begin{observation} \label{obs:inct-term} Consider a transaction $T_i$ in a history $H$ with its \incct being true in $H$. Then $T_i$ is terminated (either committed or aborted) in $H$. Formally, $\langle H, T_i: (T_i \in \txns{H}) \land (\inct{i}{H}) \implies (T_i \in \termed{H}) \rangle$. \end{observation} \begin{observation} \label{obs:inct-fut} Consider a transaction $T_i$ in a history $H1$ with its \incct being true in $H1$. Let $H2$ be an extension of $H1$ with a transaction $T_j$ in it. Suppose $T_j$ is an \inc of $T_i$. Then $T_j$'s \incct is true in $H2$. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (\inct{i}{H1}) \land (T_j \in \txns{H2}) \land (T_i \in \incs{j}{H2})\implies (\inct{j}{H2}) \rangle$. \end{observation} \begin{lemma} \label{lem:inct-diff} Consider a history $H1$ with a strict extension $H2$.
Let $T_i$ \& $T_j$ be two transactions in $H1$ \& $H2$ respectively. Let $T_j$ not be in $H1$. Suppose $T_i$'s \incct is true. Then the \its of $T_i$ cannot be the same as the \its of $T_j$. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubset H2) \land (\inct{i}{H1}) \land (T_j \in \txns{H2}) \land (T_j \notin \txns{H1}) \implies (\htits{i}{H1} \neq \htits{j}{H2}) \rangle$. \end{lemma} \begin{proof} Here, we have that $T_i$'s \incct is true in $H1$. Suppose $T_j$ is an \inc of $T_i$, i.e., their \its{s} are the same. We are given that $T_j$ is not in $H1$. This implies that $T_j$ must have started after the last event of $H1$. We are also given that $T_i$'s \incct is true in $H1$. This implies that an \inc of $T_i$, or $T_i$ itself, has committed in $H1$. After this commit, the application will not invoke another transaction with the same \its as $T_i$. Thus, there cannot be a transaction that starts after the last event of $H1$, in any extension of $H1$, with the same \its as $T_i$. Hence, $\htits{i}{H1}$ cannot be the same as $\htits{j}{H2}$. \end{proof} Now we show liveness with the following observations, lemmas \& theorems. We start with two observations about histories of which one is an extension of the other. The following observation states that for any history, there exists an extension. In other words, we assume that the STM system runs forever and does not terminate. This is required for showing that every transaction eventually commits. \begin{observation} \label{obs:hist-future} Consider a history $H1$ in \gen{\ksftm}. Then there is a history $H2$ in \gen{\ksftm} such that $H2$ is a strict extension of $H1$. Formally, $\langle \forall H1: (H1 \in \gen{\ksftm}) \implies (\exists H2: (H2 \in \gen{\ksftm}) \land (H1 \sqsubset H2)) \rangle$. \end{observation} \noindent The following observation is about the transactions in a history and any of its extensions. \begin{observation} \label{obs:hist-subset} Given two histories $H1$ \& $H2$ such that $H2$ is an extension of $H1$.
Then, the set of transactions in $H1$ is a subset of the set of transactions in $H2$. Formally, $\langle \forall H1, H2: (H1 \sqsubseteq H2) \implies (\txns{H1} \subseteq \txns{H2}) \rangle$. \end{observation} In order for a transaction $T_i$ to commit in a history $H$, it has to compete with all the live transactions and all the aborted transactions that can become live again as different \inc{s}. Once a transaction $T_j$ aborts, another \inc of $T_j$ can start and become live again. Thus, $T_i$ will have to compete with this \inc of $T_j$ later. Thus, we have the following observations about aborted \& committed transactions. \begin{observation} \label{obs:abort-retry} Consider an aborted transaction $T_i$ in a history $H1$. Then there is an extension $H2$ of $H1$ in which an \inc of $T_i$, say $T_j$, is live and has $\tcts{j}$ greater than $\tcts{i}$. Formally, $\langle H1, T_i: (T_i \in \aborted{H1}) \implies(\exists T_j, H2: (H1 \sqsubseteq H2) \land (T_j \in \live{H2}) \land (\htits{i}{H2} = \htits{j}{H2}) \land (\htcts{i}{H2} < \htcts{j}{H2})) \rangle$. \end{observation} \begin{observation} \label{obs:cmt-noinc} Consider a committed transaction $T_i$ in a history $H1$. Then there is no extension of $H1$ in which an \inc of $T_i$, say $T_j$, is live. Formally, $\langle H1, T_i: (T_i \in \comm{H1}) \implies(\nexists T_j, H2: (H1 \sqsubseteq H2) \land (T_j \in \live{H2}) \land (\htits{i}{H2} = \htits{j}{H2})) \rangle$. \end{observation} \begin{lemma} \label{lem:cts-wts} Consider a history $H1$ and its extension $H2$. Let $T_i, T_j$ be in $H1, H2$ respectively such that they are \inc{s} of each other. If the \wts of $T_i$ is less than the \wts of $T_j$, then the \cts of $T_i$ is less than the \cts of $T_j$.
Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubset H2) \land (T_i \in \txns{H1}) \land (T_j \in \txns{H2}) \land (T_i \in \incs{j}{H2}) \land (\htwts{i}{H1} < \htwts{j}{H2})\implies (\htcts{i}{H1} < \htcts{j}{H2}) \rangle$ \end{lemma} \begin{proof} Here we are given that \begin{equation} \label{eq:wts-ij} \htwts{i}{H1} < \htwts{j}{H2} \end{equation} The definition of the \wts of $T_i$ is: $\htwts{i}{H1} = \htcts{i}{H1} + C * (\htcts{i}{H1} - \htits{i}{H1})$. Combining this with \eqnref{wts-ij}, we get that $(C + 1) * \htcts{i}{H1} - C * \htits{i}{H1} < (C + 1) * \htcts{j}{H2} - C * \htits{j}{H2} \xrightarrow[\htits{i}{H1} = \htits{j}{H2}]{T_i \in \incs{j}{H2}} \htcts{i}{H1} < \htcts{j}{H2}$. \end{proof} \begin{lemma} \label{lem:wts-great} Consider a live transaction $T_i$ in a history $H1$ with its $\twts{i}$ less than a constant $\alpha$. Then there is a strict extension $H2$ of $H1$ in which an \inc of $T_i$, say $T_j$, is either committed or live with \wts greater than $\alpha$. Formally, $\langle H1, T_i: (T_i \in \live{H1}) \land (\htwts{i}{H1} < \alpha) \implies(\exists T_j, H2: (H1 \sqsubseteq H2) \land (T_i \in \incs{j}{H2}) \land ((T_j \in \comm{H2}) \lor ((T_j \in \live{H2}) \land (\htwts{j}{H2} > \alpha)))) \rangle$. \end{lemma} \begin{proof} The proof follows from the behavior of an \aptr. The application keeps invoking a transaction with the same \its until it commits. Thus, the transaction $T_i$, which is live in $H1$, will eventually terminate with an abort or a commit. If it commits, $H2$ could be any history after the commit of $T_i$. On the other hand, if $T_i$ is aborted, as seen in \obsref{abort-retry}, it will be invoked again, i.e., reincarnated with another \cts and \wts. It can be seen that \cts is always increasing. As a result, the \wts is also increasing. Thus, eventually the \wts will become greater than $\alpha$. Hence, we have that an \inc of $T_i$ will either get committed or will eventually have \wts greater than $\alpha$.
\end{proof} \noindent Next, we have a lemma about the \cts of a transaction and the \syst of a history. \begin{lemma} \label{lem:cts-syst} Consider a transaction $T_i$ in a history $H$. Then, the \cts of $T_i$ will be less than or equal to the \syst of $H$. Formally, $\langle T_i, H: (T_i \in \txns{H}) \implies (\htcts{i}{H} \leq \hsyst{H}) \rangle$. \end{lemma} \begin{proof} We get this lemma by observing the \mth{s} of the STM system that increment the $\tcntr$, which are \begt and \tryc. It can be seen that the \cts of $T_i$ gets assigned in the \begt \mth. So if the last \mth of $H$ is the \begt of $T_i$, then we get that the \cts of $T_i$ is the same as the \syst of $H$. On the other hand, if some other \mth got executed in $H$ after the \begt of $T_i$, then we have that the \cts of $T_i$ is less than the \syst of $H$. Thus, combining both the cases, we get that the \cts of $T_i$ is less than or equal to the \syst of $H$, i.e., $(\htcts{i}{H} \leq \hsyst{H})$. \end{proof} \noindent From this lemma, we get the following corollary, which is the converse of the lemma statement. \begin{corollary} \label{cor:cts-syst} Consider a transaction $T_i$ which is not in a history $H1$ but is in a strict extension $H2$ of $H1$. Then, the \cts of $T_i$ is greater than the \syst of $H1$. Formally, $\langle T_i, H1, H2: (H1 \sqsubset H2) \land (T_i \notin \txns{H1}) \land (T_i \in \txns{H2}) \implies (\htcts{i}{H2} > \hsyst{H1}) \rangle$. \end{corollary} \noindent Now, we have a lemma about the \mth{s} of \ksftm completing in finite time. \begin{lemma} \label{lem:mth-fdm} If all the locks are fair and the underlying system scheduler is fair, then all the \mth{s} of \ksftm will eventually complete. \end{lemma} \begin{proof} It can be seen that in any \mth, whenever a transaction $T_i$ obtains multiple locks, it obtains the locks in the same order: it first locks the relevant \tobj{s} in a pre-defined order and then locks the relevant \glock{s}, again in a pre-defined order.
Since all the locks are obtained in the same order, it can be seen that the \mth{s} of \ksftm will not deadlock. It can also be seen that none of the \mth{s} have any unbounded while loops. All the loops in the \tryc \mth iterate through all the \tobj{s} in the write-set of $T_i$. Moreover, since we assume that the underlying scheduler is fair, we can see that no thread gets swapped out infinitely. Finally, since we assume that all the locks are fair, it can be seen that all the \mth{s} terminate in finite time. \end{proof} \begin{theorem} \label{thm:trans-com|abt} Every transaction either commits or aborts in finite time. \end{theorem} \begin{proof} This theorem follows directly from \lemref{mth-fdm}. Since every \mth of \ksftm will eventually complete, all the transactions will either commit or abort in finite time. \end{proof} \noindent From this theorem, we get the following corollary, which states that the maximum \emph{lifetime} of any transaction is $L$. \begin{corollary} \label{cor:cts-L} Any transaction $T_i$ in a history $H$ will either commit or abort before the \syst of $H$ crosses $\tcts{i} + L$. \end{corollary} \noindent The following lemma connects the \wts and \its of two transactions, $T_i, T_j$. \begin{lemma} \label{lem:wts-its} Consider a history $H$ with two transactions $T_i, T_j$. Let $T_i$ be in $\live{H}$. Suppose $T_j$'s \wts is greater than or equal to $T_i$'s \wts. Then the \its of $T_j$ is at most $\tits{i} + 2L$. Formally, $\langle H, T_i, T_j : (\{ T_i, T_j\} \subseteq \txns{H}) \land ( T_i \in \live{H}) \land (\htwts{j}{H} \geq \htwts{i}{H}) \Longrightarrow (\htits{i}{H} + 2L \geq \htits{j}{H}) \rangle$. \end{lemma} \begin{proof} Since $T_i$ is live in $H$, from \corref{cts-L}, we get that it terminates before the system time $\tcntr$ becomes $\tcts{i} + L$. Thus, the \syst of history $H$ has not progressed beyond $\tcts{i} + L$.
Hence, any other transaction $T_j$ (whether live or terminated) in $H$ must have started before the \syst has crossed $\tcts{i} + L$. Formally, $\langle \tcts{j} \leq \tcts{i} + L \rangle$. Note that we have defined the \wts of a transaction $T_j$ as: $\twts{j} = (\tcts{j} + C * (\tcts{j} - \tits{j}))$. Now, let us consider the difference of the \wts{s} of the two transactions. \noindent \begin{math} \twts{j} - \twts{i} = (\tcts{j} + C * (\tcts{j} - \tits{j})) - (\tcts{i} + C * (\tcts{i} - \tits{i})) \\ = (C + 1)(\tcts{j} - \tcts{i}) - C(\tits{j} - \tits{i}) \\ \leq (C + 1)L - C(\tits{j} - \tits{i}) \qquad [\because \tcts{j} \leq \tcts{i} + L] \\ = 2L + \tits{i} - \tits{j} \qquad [\because C = 1] \\ \end{math} \noindent Thus, we have that $ \langle (\tits{i} + 2L - \tits{j}) \geq (\twts{j} - \twts{i}) \rangle$. This gives us that \\ $((\twts{j} - \twts{i}) \geq 0) \Longrightarrow ((\tits{i} + 2L - \tits{j}) \geq 0)$. \noindent From the above implication, we get that $(\twts{j} \geq \twts{i}) \Longrightarrow (\tits{i} + 2L \geq \tits{j})$. \end{proof} It can be seen that the \ksftm algorithm gives preference to transactions with lower \its to commit. To understand this notion of preference, we define a few notions of enablement of a transaction $T_i$ in a history $H$. We start with the definition of \emph{\itsen}: \begin{definition} \label{defn:itsen} We say $T_i$ is \emph{\itsen} in $H$ if all transactions $T_j$ with \its lower than the \its of $T_i$ in $H$ have \incct true. Formally, \begin{equation*} \itsenb{i}{H} = \begin{cases} True & (T_i \in \live{H}) \land (\forall T_j \in \txns{H} : (\htits{j}{H} < \htits{i}{H}) \implies (\inct{j}{H})) \\ False & \text{otherwise} \end{cases} \end{equation*} \end{definition} \noindent The following lemma states that once a transaction $T_i$ becomes \itsen, it continues to remain so until it terminates. \begin{lemma} \label{lem:itsen-future} Consider two histories $H1$ and $H2$ with $H2$ being an extension of $H1$.
Let a transaction $T_i$ be live in both of them. Suppose $T_i$ is \itsen in $H1$. Then $T_i$ is \itsen in $H2$ as well. Formally, $\langle H1, H2, T_i: (H1 \sqsubseteq H2) \land (T_i \in \live{H1}) \land (T_i \in \live{H2}) \land (\itsenb{i}{H1}) \implies (\itsenb{i}{H2}) \rangle$. \end{lemma} \begin{proof} When $T_i$ begins in a history $H3$, let the set of transactions with \its less than $\tits{i}$ be $smIts$. Then in any extension $H4$ of $H3$, the set of transactions with \its less than $\tits{i}$ remains $smIts$. Suppose $H1, H2$ are extensions of $H3$. Thus in $H1$ and $H2$, the set of transactions with \its less than $\tits{i}$ will be $smIts$. Hence, if $T_i$ is \itsen in $H1$ then all the transactions $T_j$ in $smIts$ satisfy $\inct{j}{H1}$. It can be seen that this continues to remain true in $H2$. Hence $T_i$ is also \itsen in $H2$, which proves the lemma. \end{proof} The following lemma deals with a committed transaction $T_i$ and any transaction $T_j$ that terminates later. In the following lemma, $\incv$ is any constant greater than or equal to 1. \begin{lemma} \label{lem:tryci-j} Consider a history $H$ with two transactions $T_i, T_j$ in it. Suppose transaction $T_i$ commits before $T_j$ terminates (either by commit or abort) in $H$. Then $\ct_i$ is less than $\ct_j$ by at least $\incv$. Formally, $\langle H, \{T_i, T_j\} \in \txns{H}: (\tryc_i <_H \term_j) \implies (\ct_i + \incv \leq \ct_j)\rangle$. \end{lemma} \begin{proof} When $T_i$ commits, let the value of the global $\tcntr$ be $\alpha$. It can be seen that in the \begt \mth, $\ct_j$ gets initialized to $\infty$. The only place where $\ct_j$ gets modified is at \Lineref{tryc-cmt-mod} of \tryc. Thus if $T_j$ gets aborted before executing the \tryc \mth or before this line of \tryc, we have that $\ct_j$ remains at $\infty$. Hence in this case we have that $\langle \ct_i + \incv < \ct_j \rangle$.
If $T_j$ terminates after executing \Lineref{tryc-cmt-mod} of the \tryc \mth then $\ct_j$ is assigned a value, say $\beta$. It can be seen that $\beta$ will be greater than $\alpha$ by at least $\incv$ due to the execution of this line. Thus, we have that $\langle \alpha + \incv \leq \beta \rangle$. \end{proof} \noindent The following lemma connects the \tltl and \ct of a transaction $T_i$. \begin{lemma} \label{lem:ti|tltl-comt} Consider a history $H$ with a transaction $T_i$ in it. Then in $H$, $\ttltl{i}$ will be less than or equal to $\ct_i$. Formally, $\langle H, \{T_i\} \in \txns{H}: (\htltl{i}{H} \leq H.\ct_i) \rangle$. \end{lemma} \begin{proof} Consider the transaction $T_i$. In the \begt \mth, $\ct_i$ gets initialized to $\infty$. The only place where $\ct_i$ gets modified is at \Lineref{tryc-cmt-mod} of \tryc. Thus if $T_i$ gets aborted before this line or if $T_i$ is live, we have that $(\ttltl{i} \leq \ct_i)$. On executing \Lineref{tryc-cmt-mod}, $\ct_i$ gets assigned some finite value and it does not change after that. It can be seen that $\ttltl{i}$ gets initialized to $\tcts{i}$ in \Lineref{ti-ts-init} of the \begt \mth. In that line, $\tcts{i}$ reads $\tcntr$ and increments it atomically. Then in \Lineref{tryc-cmt-mod}, $\ct_i$ gets assigned the value of $\tcntr$ after incrementing it. Thus, we clearly get that $\tcts{i} (= \ttltl{i}\text{ initially}) < \ct_i$. Then $\ttltl{i}$ gets updated on \Lineref{rd-tltl-inc} of read, \Lineref{tryc-tltl-inc} and \Lineref{ti-updt} of \tryc \mth{s}. Let us analyze them case by case, assuming that $\ttltl{i}$ was last updated in each of these \mth{s} before the termination of $T_i$: \begin{enumerate} \item \label{case:read} \Lineref{rd-tltl-inc} of read \mth: Suppose this is the last line where $\ttltl{i}$ was updated. Here $\ttltl{i}$ gets assigned 1 + \vt of the previously committed version which, say, was created by a transaction $T_j$.
Thus, we have the following equation, \begin{equation} \label{eq:tltl-vt} \ttltl{i} = 1 + x[j].\vt \end{equation} It can be seen that $x[j].\vt$ is the same as $\ttltl{j}$ when $T_j$ executed \Lineref{new-tup} of \tryc. Further, $\ttltl{j}$ in turn is the same as $\ttutl{j}$ due to \Lineref{ti-updt} of \tryc. From \Lineref{tryc-ul-cmt}, it can be seen that $\ttutl{j}$ is less than or equal to $\ct_j$ when $T_j$ committed. Thus we have that \begin{equation} \label{eq:tltl-ct} x[j].\vt = \ttltl{j} = \ttutl{j} \leq \ct_j \end{equation} It is clear from the above discussion that $T_j$ executed the \tryc \mth before $T_i$ terminated (i.e., $\tryc_j <_{H} \term_i$). From \eqnref{tltl-vt} and \eqnref{tltl-ct}, we get \\ \begin{math} \ttltl{i} \leq 1 + \ct_j \xrightarrow[]{\incv \geq 1} \ttltl{i} \leq \incv + \ct_j \xrightarrow[]{\lemref{tryci-j}} \ttltl{i} \leq \ct_i \end{math} \item \label{case:tryc-short} \Lineref{tryc-tltl-inc} of \tryc \mth: The reasoning in this case is very similar to the above case. \item \label{case:tryc-long} \Lineref{ti-updt} of \tryc \mth: In this line, $\ttltl{i}$ is made equal to $\ttutl{i}$. Further, in \Lineref{tryc-ul-cmt}, $\ttutl{i}$ is made less than or equal to $\ct_{i}$. Thus combining these, we get that $\ttltl{i} \leq \ct_{i}$. It can be seen that the reasoning here is similar in part to \csref{read}. \end{enumerate} Hence, in all three cases we get that $\langle \ttltl{i} \leq \ct_i \rangle$. \end{proof} \noindent The following lemma connects the \tutl and \ct of a transaction $T_i$ with the \wts of a transaction $T_j$ that has already committed. \begin{lemma} \label{lem:ti|tutl-comt} Consider a history $H$ with a transaction $T_i$ in it. Suppose $\ttutl{i}$ is less than $\ct_i$. Then, there is a committed transaction $T_j$ in $H$ such that $\twts{j}$ is greater than $\twts{i}$. Formally, $\langle H \in \gen{\ksftm}, \{T_i\} \in \txns{H}: (\htutl{i}{H} < H.\ct_i) \implies (\exists T_j \in \comm{H}: \htwts{j}{H} > \htwts{i}{H}) \rangle$.
\end{lemma} \begin{proof} It can be seen that $\tutl_i$ is initialized in the \begt \mth to $\infty$. $\ttutl{i}$ is updated in \Lineref{rd-ul-dec} of the read \mth, and \Lineref{tryc-ul-dec} \& \Lineref{tryc-ul-cmt} of the \tryc \mth. If $T_i$ executes \Lineref{rd-ul-dec} of the read \mth and/or \Lineref{tryc-ul-dec} of the \tryc \mth then $\ttutl{i}$ gets decremented to some value less than $\infty$, say $\alpha$. Further, it can be seen that in both these lines the value of $\ttutl{i}$ is possibly decremented from $\infty$ because of $nextVer$ (or $ver$), a version of $x$ whose \ts is greater than $T_i$'s \wts. This implies that some transaction $T_j$, which is committed in $H$, must have created $nextVer$ (or $ver$) and $\twts{j} > \twts{i}$. Next, let us analyze the value of $\alpha$. It can be seen that $\alpha = x[nextVer/ver].vrt - 1$ where $nextVer/ver$ was created by $T_j$. Further, we can see that when $T_j$ executed \tryc, we have that $x[nextVer].vrt = \ttltl{j}$ (from \Lineref{new-tup}). From \lemref{ti|tltl-comt}, we get that $\ttltl{j} \leq \ct_j$. This implies that $\alpha < \ct_j$. Now, we have that $T_j$ has already committed before the termination of $T_i$. Thus from \lemref{tryci-j}, we get that $\ct_j < \ct_i$. Hence, we have that, \begin{equation} \label{eq:alph-ct} \alpha < \ct_i \end{equation} Now let us consider \Lineref{tryc-ul-cmt} executed by $T_i$ which causes $\ttutl{i}$ to change. This line will get executed only after both \Lineref{rd-ul-dec} of the read \mth and \Lineref{tryc-ul-dec} of the \tryc \mth. This is because every transaction executes the \tryc \mth only after the read \mth. Further, within the \tryc \mth, \Lineref{tryc-ul-cmt} follows \Lineref{tryc-ul-dec}. There are two sub-cases depending on the value of $\ttutl{i}$ before the execution of \Lineref{tryc-ul-cmt}: (i) If $\ttutl{i}$ was $\infty$ and then gets decremented to $\ct_i$ upon executing this line, then we get $\ct_i = \ttutl{i}$. From \eqnref{alph-ct}, we can ignore this case.
(ii) Suppose the value of $\ttutl{i}$ before executing \Lineref{tryc-ul-cmt} was $\alpha$. Then from \eqnref{alph-ct} we get that $\ttutl{i}$ remains at $\alpha$ on execution of \Lineref{tryc-ul-cmt}. This implies that a transaction $T_j$ committed such that $\twts{j} > \twts{i}$. \end{proof} \noindent The following lemma connects the \tltl of a committed transaction $T_j$ and the \ct of a transaction $T_i$ that commits later. \begin{lemma} \label{lem:tltlj-comti} Consider a history $H1$ with transactions $T_i, T_j$ in it. Suppose $T_j$ is committed and $T_i$ is live in $H1$. Then in any extension of $H1$, say $H2$, $\ttltl{j}$ is less than $\ct_i$. Formally, $\langle {H1, H2} \in \gen{\ksftm}, \{T_i, T_j\} \subseteq \txns{H1, H2}: (H1 \sqsubseteq H2) \land (T_j \in \comm{H1}) \land (T_i \in \live{H1}) \implies (\htltl{j}{H2} < H2.\ct_i) \rangle$. \end{lemma} \begin{proof} As observed in the previous proof of \lemref{ti|tltl-comt}, if $T_i$ is live or aborted in $H2$, then its \ct is $\infty$. In both these cases, the result follows. If $T_i$ is committed in $H2$ then one can see that the \ct of $T_i$ is not $\infty$. In this case, it can be seen that $T_j$ committed before $T_i$. Hence, we have that $\ct_j < \ct_i$. From \lemref{ti|tltl-comt}, we get that $\ttltl{j} \leq \ct_j$. This implies that $\ttltl{j} < \ct_i$. \end{proof} \noindent In the following sequence of lemmas, we identify the conditions under which a transaction will commit. \begin{lemma} \label{lem:its-wts} Consider two histories $H1, H3$ such that $H3$ is a strict extension of $H1$. Let $T_i$ be a transaction in $\live{H1}$ such that $T_i$ is \itsen in $H1$ and the $\gval_i$ flag is true in $H1$. Suppose $T_i$ is aborted in $H3$. Then there is a history $H2$ which is an extension of $H1$ (and could be the same as $H1$) such that (1) transaction $T_i$ is live in $H2$; (2) there is a transaction $T_j$ in ${H2}$; (3) $\htwts{j}{H2}$ is greater than $\htwts{i}{H2}$; (4) $T_j$ is committed in $H3$.
Formally, $ \langle H1, H3, T_i: (H1 \sqsubset H3) \land (T_i \in \live{H1}) \land (\htval{i}{H1} = True) \land (\itsenb{i}{H1}) \land (T_i \in \aborted{H3}) \implies (\exists H2, T_j: (H1 \sqsubseteq H2 \sqsubset H3) \land (T_i \in \live{H2}) \land (T_j \in \txns{H2}) \land (\htwts{i}{H2} < \htwts{j}{H2}) \land (T_j \in \comm{H3})) \rangle$. \end{lemma} \begin{proof} To show this lemma, w.l.o.g., we assume that $T_i$ on executing either read or \tryc in $H2$ (which could be the same as $H1$) gets aborted, resulting in $H3$. Thus, we have that $T_i$ is live in $H2$. Here $T_i$ is \itsen in $H1$. From \lemref{itsen-future}, we get that $T_i$ is \itsen in $H2$ as well. Let us sequentially consider all the lines where $T_i$ could abort. In $H2$, $T_i$ executes one of the following lines and is aborted in $H3$. We start with the \tryc method. \begin{enumerate} \item STM \tryc: \begin{enumerate} \item \Lineref{init-tc-chk} \label{case:init-tc-chk}: This line invokes the abort() method on $T_i$ which releases all the locks and returns $\mathcal{A}$ to the invoking thread. Here $T_i$ is aborted because its \val flag is set to false by some other transaction, say $T_j$, in its \tryc algorithm. This can occur in Lines \ref{lin:addAbl-lar} and \ref{lin:addAbl-sml}, where $T_i$ is added to $T_j$'s \abl set. Later in \Lineref{gval-set}, $T_i$'s \val flag is set to false. Note that $T_i$'s \val is true (after the execution of the last event) in $H1$. Thus, $T_i$'s \val flag must have been set to false in an extension of $H1$, which we again denote as $H2$. This can happen only if, in both the above cases, $T_j$ is live in $H2$ and its \its is less than $T_i$'s \its. But we have that $T_i$ is \itsen in $H2$. As a result, it has the smallest \its among all live and aborted transactions of $H2$. Hence, there cannot exist such a $T_j$ which is live and $\htits{j}{H2} < \htits{i}{H2}$. Thus, this case is not possible.
\item \Lineref{prev-nil}: This line is executed in $H2$ if there exists no version of $x$ whose \ts is less than $T_i$'s \wts. This implies that all the versions of $x$ have \ts{s} greater than $\twts{i}$. Thus the transactions that created these versions have \wts greater than $\twts{i}$ and have already committed in $H2$. Let $T_j$ create one such version. Hence, we have that $\langle (T_j \in \comm{H2}) \implies (T_j \in \comm{H3}) \rangle$ since $H3$ is an extension of $H2$. \item \Lineref{mid-tc-chk} \label{case:mid-tc-chk}: This case is similar to \csref{init-tc-chk}, i.e., \Lineref{init-tc-chk}. \item \Lineref{its-chk1} \label{case:its-chk1}: In this line, $T_i$ is aborted as some other transaction $T_j$ in $T_i$'s \lrl has committed. Any transaction in $T_i$'s \lrl has \wts greater than $T_i$'s \wts. This implies that $T_j$ is already committed in $H2$ and hence committed in $H3$ as well. \item \Lineref{tc-lts-cross} \label{case:tc-lts-cross}: In this line, $T_i$ is aborted because its lower limit has crossed its upper limit. First, let us consider $\ttutl{i}$. It is initialized in \begt \mth to $\infty$. As long as it is $\infty$, these limits cannot cross each other. Later, $\ttutl{i}$ is updated in \Lineref{rd-ul-dec} of read \mth, \Lineref{tryc-ul-dec} \& \Lineref{tryc-ul-cmt} of \tryc \mth. Suppose $\ttutl{i}$ gets decremented to some value $\alpha$ by one of these lines. Now there are two cases here: (1) Suppose $\ttutl{i}$ gets decremented to $\ct_i$ due to \Lineref{tryc-ul-cmt} of \tryc \mth. Then from \lemref{ti|tltl-comt}, we have $\ttltl{i} \leq \ct_i = \ttutl{i}$. Thus in this case, $T_i$ will not abort. (2) $\ttutl{i}$ gets decremented to $\alpha$ which is less than $\ct_i$. Then from \lemref{ti|tutl-comt}, we get that there is a committed transaction $T_j$ in $\comm{H2}$ such that $\twts{j} > \twts{i}$. This implies that $T_j$ is in $\comm{H3}$. 
\item \Lineref{its|lar-sml}: This case is similar to \csref{init-tc-chk}, i.e., \Lineref{init-tc-chk}.
\item \Lineref{its-chk2} \label{case:its-chk2}: In this case, $T_k$ is in $T_i$'s \srl and is committed in $H1$. From this case, we have that \begin{equation} \label{eq:tltl-k_i} \htutl{i}{H2} \leq \htltl{k}{H2} \end{equation} From the assumption of this case, we have that $T_k$ commits before $T_i$. Thus, from \lemref{tltlj-comti}, we get that $\ct_k < \ct_i$. From \lemref{ti|tltl-comt}, we have that $\ttltl{k} \leq \ct_k$. Thus, we get that $\ttltl{k} < \ct_i$. Combining this with the inequality of this case, \eqnref{tltl-k_i}, we get that $\ttutl{i} < \ct_i$. Combining this inequality with \lemref{ti|tutl-comt}, we get that there is a transaction $T_j$ in $\comm{H2}$ with $\htwts{j}{H2} > \htwts{i}{H2}$. This implies that $T_j$ is in $\comm{H3}$ as well. \end{enumerate} \item STM read: \begin{enumerate} \item \Lineref{rd-chk}: This case is similar to \csref{init-tc-chk}, i.e., \Lineref{init-tc-chk}. \item \Lineref{rd-lts-cross}: The reasoning here is similar to \csref{tc-lts-cross}, i.e., \Lineref{tc-lts-cross}. \end{enumerate} \end{enumerate} \end{proof} The interesting aspect of the above lemma is that it gives us an insight into when a transaction $T_i$ will commit. If an \itsen transaction $T_i$ aborts, then it is because another transaction $T_j$ with \wts higher than that of $T_i$ has committed. To precisely capture this, we define two more notions of a transaction being enabled: \emph{\cdsen} and \emph{\finen}. To define these notions of enablement, we in turn define a few other auxiliary notions. We start with \emph{\affset}, \begin{equation*} \haffset{i}{H} = \{T_j|(T_j \in \txns{H}) \land (\htits{j}{H} < \htits{i}{H} + 2*L)\} \end{equation*} From the description of the \ksftm algorithm and \lemref{wts-its}, it can be seen that a transaction $T_i$'s commit can depend on the committing of transactions (or their \inc{s}) which have their \its less than the \its of $T_i$ + $2*L$; this set is $T_i$'s \affset.
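\noindent To make the definition of \affset concrete, consider the following small example; the numeric values here are assumed purely for illustration and are not part of the algorithm's description. Let $L = 10$ and let a history $H$ contain three transactions with $\htits{i}{H} = 12$, $\htits{j}{H} = 5$ and $\htits{k}{H} = 40$. Then $\haffset{i}{H} = \{T_i, T_j\}$, since $12 < 12 + 2*L = 32$ and $5 < 32$, whereas $T_k \notin \haffset{i}{H}$ because $40 \geq 32$. Note that a transaction always belongs to its own \affset.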
We capture this notion of dependency for a transaction $T_i$ in a history $H$ as the \emph{commit dependent set} or \emph{\cdset}: the set of all transactions $T_j$ in $T_i$'s \affset that do not have any \inc that is committed yet, i.e., that do not yet have their \incct flag set to true. Formally, \begin{equation*} \hcds{i}{H} = \{T_j| (T_j \in \haffset{i}{H}) \land (\neg\inct{j}{H}) \} \end{equation*} \noindent Based on this definition of \cdset, we next define the notion of \cdsen. \begin{definition} \label{defn:cdsen} We say that transaction $T_i$ is \emph{\cdsen} if the following conditions hold true: (1) $T_i$ is live in $H$; (2) the \cts of $T_i$ is greater than or equal to the \its of $T_i$ + $2*L$; (3) the \cdset of $T_i$ is empty, i.e., all transactions $T_j$ in $H$ with \its lower than the \its of $T_i$ + $2*L$ have their \incct as true. Formally, \begin{equation*} \cdsenb{i}{H} = \begin{cases} True & (T_i \in \live{H}) \land (\htcts{i}{H} \geq \htits{i}{H} + 2*L) \land (\hcds{i}{H} = \phi) \\ False & \text{otherwise} \end{cases} \end{equation*} \end{definition} \noindent The meaning and usefulness of these definitions will become clear in the course of the proof. In fact, we later show that once the transaction $T_i$ is \cdsen, it will eventually commit. We will start with a few lemmas about these definitions. \begin{lemma} \label{lem:its-enb} Consider a transaction $T_i$ in a history $H$. If $T_i$ is \cdsen then $T_i$ is also \itsen. Formally, $\langle H, T_i: (T_i \in \txns{H}) \land (\cdsenb{i}{H}) \implies (\itsenb{i}{H}) \rangle$. \end{lemma} \begin{proof} If $T_i$ is \cdsen in $H$ then it implies that $T_i$ is live in $H$. From the definition of \cdsen, we get that $\hcds{i}{H}$ is $\phi$, implying that any transaction $T_j$ with $\tits{j}$ less than $\tits{i} + 2*L$ has its \incct flag as true in $H$. Hence, for any transaction $T_k$ having $\tits{k}$ less than $\tits{i}$, $\inct{k}{H}$ is also true. This shows that $T_i$ is \itsen in $H$.
\end{proof} \begin{lemma} \label{lem:cds-tk-h1} Consider a transaction $T_i$ which is \cdsen in a history $H1$. Consider an extension $H2$ of $H1$ with a transaction $T_j$ in it such that $T_i$ is an \inc of $T_j$. Let $T_k$ be a transaction in the \affset of $T_j$ in $H2$. Then $T_k$ is also in the set of transactions of $H1$. Formally, $\langle H1, H2, T_i, T_j, T_k: (H1 \sqsubseteq H2) \land (\cdsenb{i}{H1}) \land (T_i \in \incs{j}{H2}) \land (T_k \in \haffset{j}{H2}) \implies (T_k \in \txns{H1}) \rangle$. \end{lemma} \begin{proof} Since $T_i$ is \cdsen in $H1$, we get (from the definition of \cdsen) that \begin{equation} \label{eq:ti-cts-its} \htcts{i}{H1} \geq \htits{i}{H1} + 2*L \end{equation} Here, we have that $T_k$ is in $\haffset{j}{H2}$.
Thus from the definition of \affset, we get that \begin{equation} \label{eq:tk-tj-aff} \htits{k}{H2} < \htits{j}{H2} + 2*L \end{equation} Since $T_i$ and $T_j$ are \inc{s} of each other, their \its are the same. Combining this with \eqnref{tk-tj-aff}, we get that \begin{equation} \label{eq:tk-ti-h12} \htits{k}{H2} < \htits{i}{H1} + 2*L \end{equation} We now show this proof through contradiction. Suppose $T_k$ is not in $\txns{H1}$. Then there are two cases: \begin{itemize} \item No \inc of $T_k$ is in $H1$: This implies that $T_k$ starts afresh after $H1$. Since $T_k$ is not in $H1$, from \corref{cts-syst} we get that $\htcts{k}{H2} > \hsyst{H1} \xrightarrow [\htcts{k}{H2} = \htits{k}{H2}] {T_k \text{ starts afresh}}\htits{k}{H2} > \hsyst{H1} \xrightarrow [\hsyst{H1} \geq \htcts{i}{H1}]{(T_i \in H1) \land \lemref{cts-syst}} \htits{k}{H2} > \htcts{i}{H1} \xrightarrow {\eqnref{ti-cts-its}} \htits{k}{H2} > \htits{i}{H1} + 2*L \xrightarrow {\htits{i}{H1} = \htits{j}{H2}} \htits{k}{H2} > \htits{j}{H2} + 2*L$. But this result contradicts \eqnref{tk-tj-aff}. Hence, this case is not possible. \item There is an \inc of $T_k$, $T_l$, in $H1$: In this case, we have that \begin{equation} \label{eq:tl-h1} \htits{l}{H1} = \htits{k}{H2} \end{equation} Now combining this result with \eqnref{tk-ti-h12}, we get that $\htits{l}{H1} < \htits{i}{H1} + 2*L$. This implies that $T_l$ is in the \affset of $T_i$ in $H1$. Since $T_i$ is \cdsen, we get that $T_l$'s \incct must be true. We also have that $T_k$ is not in $H1$ but in $H2$, where $H2$ is an extension of $H1$. Since $H2$ has some events more than $H1$, we get that $H2$ is a strict extension of $H1$. Thus, we have that $(H1 \sqsubset H2) \land (\inct{l}{H1}) \land (T_k \in \txns{H2}) \land (T_k \notin \txns{H1})$. Combining these with \lemref{inct-diff}, we get that $(\htits{l}{H1} \neq \htits{k}{H2})$. But this result contradicts \eqnref{tl-h1}. Hence, this case is also not possible.
\end{itemize} Thus from both the cases we get that $T_k$ should be in $H1$. Hence proved. \end{proof} \begin{lemma} \label{lem:aff-tkinc-h1} Consider two histories $H1, H2$ where $H2$ is an extension of $H1$. Let $T_i, T_j, T_k$ be three transactions such that $T_i$ is in $\txns{H1}$ while $T_j, T_k$ are in $\txns{H2}$. Suppose we have that (1) $\tcts{i}$ is greater than $\tits{i} + 2*L$ in $H1$; (2) $T_i$ is an \inc of $T_j$; (3) $T_k$ is in the \affset of $T_j$ in $H2$. Then an \inc of $T_k$, say $T_l$ (which could be the same as $T_k$), is in $\txns{H1}$. Formally, $\langle H1, H2, T_i, T_j, T_k: (H1 \sqsubseteq H2) \land (T_i \in \txns{H1}) \land (\{T_j, T_k\} \in \txns{H2}) \land (\htcts{i}{H1} > \htits{i}{H1} + 2*L) \land (T_i \in \incs{j}{H2}) \land (T_k \in \haffset{j}{H2}) \implies (\exists T_l: (T_l \in \incs{k}{H2}) \land (T_l \in \txns{H1})) \rangle$ \end{lemma} \begin{proof} \noindent This proof is similar to the proof of \lemref{cds-tk-h1}. We are given that \begin{equation} \label{eq:given-ti-ctsits} \htcts{i}{H1} > \htits{i}{H1} + 2*L \end{equation} We now show this proof through contradiction. Suppose no \inc of $T_k$ is in $\txns{H1}$. This implies that $T_k$ must have started afresh in some history $H3$ which is an extension of $H1$. Also note that $H3$ could be the same as $H2$ or a prefix of it, i.e., $H3 \sqsubseteq H2$. Thus, we have that \noindent \begin{math} \htits{k}{H3} > \hsyst{H1} \xrightarrow{\lemref{cts-syst}} \htits{k}{H3} > \htcts{i}{H1} \xrightarrow{\eqnref{given-ti-ctsits}} \htits{k}{H3} > \htits{i}{H1} + 2*L \xrightarrow{\htits{i}{H1} = \htits{j}{H2}} \htits{k}{H3} > \htits{j}{H2} + 2*L \xrightarrow[\obsref{hist-subset}]{H3 \sqsubseteq H2} \htits{k}{H2} > \htits{j}{H2} + 2*L \xrightarrow[definition]{\affset} T_k \notin \haffset{j}{H2} \end{math} But we are given that $T_k$ is in the \affset of $T_j$ in $H2$. Hence, it is not possible that $T_k$ started afresh after $H1$. Thus, $T_k$ must have an \inc in $H1$.
\end{proof} \begin{lemma} \label{lem:aff-same} Consider a transaction $T_i$ which is \cdsen in a history $H1$. Consider an extension of $H1$, $H2$, with a transaction $T_j$ in it such that $T_j$ is an \inc of $T_i$ in $H2$. Then the \affset of $T_i$ in $H1$ is the same as the \affset of $T_j$ in $H2$. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (\cdsenb{i}{H1}) \land (T_j \in \txns{H2}) \land (T_i \in \incs{j}{H2}) \implies ((\haffset{i}{H1} = \haffset{j}{H2})) \rangle$ \end{lemma} \begin{proof} From the definition of \cdsen, we get that $T_i$ is in $\txns{H1}$. Now to prove that the \affset{s} are the same, we have to show that $(\haffset{i}{H1} \subseteq \haffset{j}{H2})$ and $(\haffset{j}{H2} \subseteq \haffset{i}{H1})$. We show them one by one: \paragraph{$(\haffset{i}{H1} \subseteq \haffset{j}{H2})$:} Consider a transaction $T_k$ in $\haffset{i}{H1}$. We have to show that $T_k$ is also in $\haffset{j}{H2}$. From the definition of \affset, we get that \begin{equation} \label{eq:tk-h1} T_k \in \txns{H1} \end{equation} \noindent Combining \eqnref{tk-h1} with \obsref{hist-subset}, we get that \begin{equation} \label{eq:tk-h2} T_k \in \txns{H2} \end{equation} \noindent From the definition of \its, we get that \begin{equation} \label{eq:its-h1-h2} \htits{k}{H1} = \htits{k}{H2} \end{equation} \noindent Since $T_i, T_j$ are \inc{s}, we have that \begin{equation} \label{eq:its-ij} \htits{i}{H1} = \htits{j}{H2} \end{equation} \noindent From the definition of \affset, we get that \\ $\htits{k}{H1} < \htits{i}{H1} + 2*L \xrightarrow{\eqnref{its-h1-h2}} \htits{k}{H2} < \htits{i}{H1} + 2*L \xrightarrow{\eqnref{its-ij}} \htits{k}{H2} < \htits{j}{H2} + 2*L$ \noindent Combining this result with \eqnref{tk-h2}, we get that $T_k \in \haffset{j}{H2}$. \paragraph{$(\haffset{j}{H2} \subseteq \haffset{i}{H1})$:} Consider a transaction $T_k$ in $\haffset{j}{H2}$. We have to show that $T_k$ is also in $\haffset{i}{H1}$.
From the definition of \affset, we get that $T_k \in \txns{H2}$. Here, we have that $(H1 \sqsubseteq H2) \land (\cdsenb{i}{H1}) \land (T_i \in \incs{j}{H2}) \land (T_k \in \haffset{j}{H2})$. Thus from \lemref{cds-tk-h1}, we get that $T_k \in \txns{H1}$. Now, this case is similar to the above case. It can be seen that Equations \ref{eq:tk-h1}, \ref{eq:tk-h2}, \ref{eq:its-h1-h2}, \ref{eq:its-ij} hold good in this case as well. Since $T_k$ is in $\haffset{j}{H2}$, we get that \\ $\htits{k}{H2} < \htits{j}{H2} + 2*L \xrightarrow{\eqnref{its-h1-h2}} \htits{k}{H1} < \htits{j}{H2} + 2*L \xrightarrow{\eqnref{its-ij}} \htits{k}{H1} < \htits{i}{H1} + 2*L $ \noindent Combining this result with \eqnref{tk-h1}, we get that $T_k \in \haffset{i}{H1}$. \end{proof} \noindent Next, we explore how a \cdsen transaction remains \cdsen in future histories once it becomes so. \begin{lemma} \label{lem:cds-fut} Consider two histories $H1$ and $H2$ with $H2$ being an extension of $H1$. Let $T_i$ and $T_j$ be two transactions which are live in $H1$ and $H2$ respectively. Let $T_i$ be an \inc of $T_j$ with $\tcts{i}$ less than $\tcts{j}$. Suppose $T_i$ is \cdsen in $H1$. Then $T_j$ is \cdsen in $H2$ as well. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (T_i \in \live{H1}) \land (T_j \in \live{H2}) \land (T_i \in \incs{j}{H2}) \land (\htcts{i}{H1} < \htcts{j}{H2}) \land (\cdsenb{i}{H1}) \implies (\cdsenb{j}{H2}) \rangle$. \end{lemma} \begin{proof} We have that $T_i$ is live in $H1$ and $T_j$ is live in $H2$. Since $T_i$ is \cdsen in $H1$, we get (from the definition of \cdsen) that \begin{equation} \label{eq:cts-its} \htcts{i}{H1} \geq \htits{i}{H1} + 2*L \end{equation} We are given that $\tcts{i}$ is less than $\tcts{j}$ and $T_i, T_j$ are incarnations of each other.
Hence, we have that \begin{align*} \htcts{j}{H2} & > \htcts{i}{H1} \\ & > \htits{i}{H1} + 2*L & [\text{From \eqnref{cts-its}}] \\ & > \htits{j}{H2} + 2*L & [\tits{i} = \tits{j}] \\ \end{align*} Thus we get that $\tcts{j} > \tits{j} + 2*L$. We have that $T_j$ is live in $H2$. In order to show that $T_j$ is \cdsen in $H2$, it only remains to show that the \cdset of $T_j$ in $H2$ is empty, i.e., $\hcds{j}{H2} = \phi$. The \cdset becomes empty when all the transactions of $T_j$'s \affset in $H2$ have their \incct as true in $H2$. Since $T_j$ is live in $H2$, we get that $T_j$ is in $\txns{H2}$. Here, we have that $(H1 \sqsubseteq H2) \land (T_j \in \txns{H2}) \land (T_i \in \incs{j}{H2}) \land (\cdsenb{i}{H1})$. Combining this with \lemref{aff-same}, we get that $\haffset{i}{H1} = \haffset{j}{H2}$. Now, consider a transaction $T_k$ in $\haffset{j}{H2}$. From the above result, we get that $T_k$ is also in $\haffset{i}{H1}$. Since $T_i$ is \cdsen in $H1$, i.e., $\cdsenb{i}{H1}$ is true, we get that $\inct{k}{H1}$ is true. Combining this with \obsref{inct-fut}, we get that $T_k$ must have its \incct as true in $H2$ as well, i.e., $\inct{k}{H2}$. This implies that all the transactions in $T_j$'s \affset have their \incct flags as true in $H2$. Hence $\hcds{j}{H2}$ is empty. As a result, $T_j$ is \cdsen in $H2$, i.e., $\cdsenb{j}{H2}$. \end{proof} Having defined the properties related to \cdsen, we now define the notions leading to \finen. Next, we define \emph{\maxwts} for a transaction $T_i$ in $H$, which is the largest \wts among the transactions in $T_i$'s \incset. Formally, \begin{equation*} \hmaxwts{i}{H} = max\{\htwts{j}{H}|(T_j \in \incs{i}{H})\} \end{equation*} \noindent From this definition of \maxwts, we get the following simple observation.
\begin{observation} \label{obs:max-wts} For any transaction $T_i$ in $H$, we have that $\twts{i}$ is less than or equal to $\hmaxwts{i}{H}$. Formally, $\htwts{i}{H} \leq \hmaxwts{i}{H}$. \end{observation} Next, we combine the notions of \affset and \maxwts to define \emph{\affwts}. It is the maximum of the \maxwts of all the transactions in its \affset. Formally, \begin{equation*} \haffwts{i}{H} = max\{\hmaxwts{j}{H}|(T_j \in \haffset{i}{H})\} \end{equation*} \noindent Having defined the notion of \affwts, we get the following lemma relating the \affset and \affwts of two transactions. \begin{lemma} \label{lem:affwts-same} Consider two histories $H1$ and $H2$ with $H2$ being an extension of $H1$. Let $T_i$ and $T_j$ be two transactions which are live in $H1$ and $H2$ respectively. Suppose the \affset of $T_i$ in $H1$ is the same as the \affset of $T_j$ in $H2$. Then the \affwts of $T_i$ in $H1$ is the same as the \affwts of $T_j$ in $H2$. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (T_i \in \txns{H1}) \land (T_j \in \txns{H2}) \land (\haffset{i}{H1} = \haffset{j}{H2}) \implies (\haffwts{i}{H1} = \haffwts{j}{H2}) \rangle$. \end{lemma} \begin{proof} From the definition of \affwts, we get the following equations \begin{equation} \label{eq:h1-ti-affwts} \haffwts{i}{H1} = max\{\hmaxwts{k}{H1}|(T_k \in \haffset{i}{H1})\} \end{equation} \begin{equation} \label{eq:h2-tj-affwts} \haffwts{j}{H2} = max\{\hmaxwts{l}{H2}|(T_l \in \haffset{j}{H2})\} \end{equation} From these definitions, let us suppose that $\haffwts{i}{H1}$ is $\hmaxwts{p}{H1}$ for some transaction $T_p$ in $\haffset{i}{H1}$. Similarly, suppose that $\haffwts{j}{H2}$ is $\hmaxwts{q}{H2}$ for some transaction $T_q$ in $\haffset{j}{H2}$. Here, we are given that $\haffset{i}{H1} = \haffset{j}{H2}$. Hence, we get that $T_p$ is also in $\haffset{j}{H2}$. Similarly, $T_q$ is in $\haffset{i}{H1}$ as well.
Thus from Equations \eqref{eq:h1-ti-affwts} \& \eqref{eq:h2-tj-affwts}, we get that \begin{equation} \label{eq:ti-tp-max} \hmaxwts{p}{H1} \geq \hmaxwts{q}{H2} \end{equation} \begin{equation} \label{eq:tj-tq-max} \hmaxwts{q}{H2} \geq \hmaxwts{p}{H1} \end{equation} Combining both these equations, we get that $\hmaxwts{p}{H1} = \hmaxwts{q}{H2}$ which in turn implies that $\haffwts{i}{H1} = \haffwts{j}{H2}$. \end{proof} \noindent Finally, using the notions of \affwts and \cdsen, we define the notion of \emph{\finen}. \begin{definition} \label{defn:finen} We say that transaction $T_i$ is \emph{\finen} if the following conditions hold true: (1) $T_i$ is live in $H$; (2) $T_i$ is \cdsen in $H$; (3) $\htwts{i}{H}$ is greater than $\haffwts{i}{H}$. Formally, \begin{equation*} \finenb{i}{H} = \begin{cases} True & (T_i \in \live{H}) \land (\cdsenb{i}{H}) \land (\htwts{i}{H} > \haffwts{i}{H}) \\ False & \text{otherwise} \end{cases} \end{equation*} \end{definition} It can be seen from this definition that a transaction which is \finen is also \cdsen. We now show that, just like \itsen and \cdsen, once a transaction is \finen, it remains \finen until it terminates. The following lemma captures this. \ignore{ \begin{lemma} \label{lem:fin-sam-fut} Consider two histories $H1$ and $H2$ with $H2$ being an extension of $H1$. Let $T_i$ be a transaction live in $H1$ and $H2$. Suppose $T_i$ is \finen in $H1$. Then $T_i$ is \finen in $H2$ as well. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (T_i \in \{\live{H1} \cup \live{H2}\}) \land (\finenb{i}{H1}) \implies (\finenb{i}{H2}) \rangle$. \end{lemma} \todo{Proof to be added here} } \begin{lemma} \label{lem:fin-fut} Consider two histories $H1$ and $H2$ with $H2$ being an extension of $H1$. Let $T_i$ and $T_j$ be two transactions which are live in $H1$ and $H2$ respectively. Suppose $T_i$ is \finen in $H1$. Let $T_i$ be an \inc of $T_j$ and $\tcts{i}$ is less than $\tcts{j}$. Then $T_j$ is \finen in $H2$ as well.
Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (T_i \in \live{H1}) \land (T_j \in \live{H2}) \land (T_i \in \incs{j}{H2}) \land (\htcts{i}{H1} < \htcts{j}{H2}) \land (\finenb{i}{H1}) \implies (\finenb{j}{H2}) \rangle$. \end{lemma} \begin{proof} Here we are given that $T_j$ is live in $H2$. Since $T_i$ is \finen in $H1$, we get that it is \cdsen in $H1$ as well. Combining this with the conditions given in the lemma statement, we have that \begin{equation} \label{eq:fin-given} \begin{split} \langle (H1 \sqsubseteq H2) \land (T_i \in \live{H1}) \land (T_j \in \live{H2}) \land (T_i \in \incs{j}{H2}) \land (\htcts{i}{H1} < \htcts{j}{H2}) \\ \land (\cdsenb{i}{H1}) \rangle \end{split} \end{equation} Combining \eqnref{fin-given} with \lemref{cds-fut}, we get that $T_j$ is \cdsen in $H2$, i.e., $\cdsenb{j}{H2}$. Now, in order to show that $T_j$ is \finen in $H2$ it remains for us to show that $\htwts{j}{H2} > \haffwts{j}{H2}$. We are given that $T_j$ is live in $H2$, which in turn implies that $T_j$ is in $\txns{H2}$. Thus, relaxing \eqnref{fin-given} accordingly, we get the following \begin{equation} \label{eq:mod-given} \begin{split} \langle (H1 \sqsubseteq H2) \land (T_j \in \txns{H2}) \land (T_i \in \incs{j}{H2}) \land (\htcts{i}{H1} < \htcts{j}{H2}) \\ \land (\cdsenb{i}{H1}) \rangle \end{split} \end{equation} \noindent Combining \eqnref{mod-given} with \lemref{aff-same}, we get that $\haffset{i}{H1} = \haffset{j}{H2}$. Then, from \lemref{affwts-same}, we get that \begin{equation} \label{eq:affs-eq} \haffwts{i}{H1} = \haffwts{j}{H2} \end{equation} \noindent We are given that $\htcts{i}{H1} < \htcts{j}{H2}$.
Combining this with the definition of \wts, we get \begin{equation} \label{eq:titj-wts} \htwts{i}{H1} < \htwts{j}{H2} \end{equation} \noindent Since $T_i$ is \finen in $H1$, we have that \\ $\htwts{i}{H1} > \haffwts{i}{H1} \xrightarrow{\eqnref{titj-wts}} \htwts{j}{H2} > \haffwts{i}{H1} \xrightarrow{\eqnref{affs-eq}} \htwts{j}{H2} > \\ \haffwts{j}{H2}$ \end{proof} \noindent Now, we show that a transaction that is \finen will eventually commit. \begin{lemma} \label{lem:enbd-ct} Consider a live transaction $T_i$ in a history $H1$. Suppose $T_i$ is \finen in $H1$ and $\tval{i}$ is true in $H1$. Then there exists an extension of $H1$, $H3$, in which $T_i$ is committed. Formally, $\langle H1, T_i: (T_i \in \live{H1}) \land (\htval{i}{H1}) \land (\finenb{i}{H1}) \implies (\exists H3: (H1 \sqsubset H3) \land (T_i \in \comm{H3})) \rangle$. \end{lemma} \begin{proof} Consider an extension of $H1$, $H3$, such that its \syst is greater than $\tcts{i} + L$. We will prove this lemma using contradiction. Suppose $T_i$ is aborted in $H3$. Now consider $T_i$ in $H1$: $T_i$ is live; its \val flag is true; and it is \finen. From the definition of \finen, we get that it is also \cdsen. From \lemref{its-enb}, we get that $T_i$ is \itsen in $H1$. Thus from \lemref{its-wts}, we get that there exists an extension of $H1$, $H2$, such that (1) Transaction $T_i$ is live in $H2$; (2) there is a transaction $T_j$ in $\txns{H2}$; (3) $\htwts{j}{H2}$ is greater than $\htwts{i}{H2}$; (4) $T_j$ is committed in $H3$. Formally, \begin{equation} \label{eq:its-wts-ant} \begin{split} \langle (\exists H2, T_j: (H1 \sqsubseteq H2 \sqsubset H3) \land (T_i \in \live{H2}) \land (T_j \in \txns{H2}) \land (\htwts{i}{H2} < \htwts{j}{H2}) \\ \land (T_j \in \comm{H3})) \rangle \end{split} \end{equation} Here, we have that $H2$ is an extension of $H1$ with $T_i$ being live in both of them and $T_i$ being \finen in $H1$. Thus from \lemref{fin-fut}, we get that $T_i$ is \finen in $H2$ as well. Now, let us consider $T_j$ in $H2$.
From \eqnref{its-wts-ant}, we get that $(\htwts{i}{H2} < \htwts{j}{H2})$. Combining this with the fact that $T_i$ is live in $H2$ and \lemref{wts-its}, we get that $(\htits{j}{H2} \leq \htits{i}{H2} + 2*L)$. This implies that $T_j$ is in the \affset of $T_i$ in $H2$, i.e., $(T_j \in \haffset{i}{H2})$. From the definition of \affwts, we get that \begin{equation} \label{eq:max-affwts} (\haffwts{i}{H2} \geq \hmaxwts{j}{H2}) \end{equation} Since $T_i$ is \finen in $H2$, we get that $\twts{i}$ is greater than the \affwts of $T_i$ in $H2$. \begin{equation} \label{eq:wts-affwts} (\htwts{i}{H2} > \haffwts{i}{H2}) \end{equation} Now combining Equations \ref{eq:max-affwts}, \ref{eq:wts-affwts} we get, \begin{align*} \htwts{i}{H2} & > \haffwts{i}{H2} & [\text{From \eqnref{wts-affwts}}] \\ & \geq \hmaxwts{j}{H2} & [\text{From \eqnref{max-affwts}}] \\ & \geq \htwts{j}{H2} & [\text{From \obsref{max-wts}}] \end{align*} Thus, we get that $\htwts{i}{H2} > \htwts{j}{H2}$. But this contradicts \eqnref{its-wts-ant}. Hence, our assumption that $T_i$ gets aborted in $H3$ after becoming \finen cannot hold. Thus $T_i$ has to commit in $H3$. \end{proof} \noindent Next we show that once a transaction $T_i$ becomes \itsen, it will eventually become \finen as well and then commit. We show that this change happens in a sequence of steps. We first show that a transaction $T_i$ which is \itsen becomes \cdsen (or gets committed). We next show that $T_i$ which is \cdsen becomes \finen or gets committed. On becoming \finen, we have already shown that $T_i$ will eventually commit. Now, we show that a transaction that is \itsen will become \cdsen or committed. To show this, we introduce a few more notations and definitions. We start with the notion of \emph{\depits (dependent-its)} which is the set of \its{s} that a transaction $T_i$ depends on to commit. It is the set of the \its of all the transactions in $T_i$'s \cdset in a history $H$.
Formally, \begin{equation*} \hdep{i}{H} = \{\htits{j}{H}|T_j \in \hcds{i}{H}\} \end{equation*} \noindent We have the following lemma on the \depits of a transaction $T_i$ and its future \inc $T_j$, which states that the \depits of $T_i$ either shrinks or remains the same. \begin{lemma} \label{lem:depits-fut} Consider two histories $H1$ and $H2$ with $H2$ being an extension of $H1$. Let $T_i$ and $T_j$ be two transactions which are live in $H1$ and $H2$ respectively and $T_i$ is an \inc of $T_j$. In addition, we also have that $\tcts{i}$ is greater than or equal to $\tits{i} + 2*L$ in $H1$. Then, we get that $\hdep{j}{H2}$ is a subset of $\hdep{i}{H1}$. Formally, $\langle H1, H2, T_i, T_j: (H1 \sqsubseteq H2) \land (T_i \in \live{H1}) \land (T_j \in \live{H2}) \land (T_i \in \incs{j}{H2}) \land (\htcts{i}{H1} \geq \htits{i}{H1} + 2*L) \implies (\hdep{j}{H2} \subseteq \hdep{i}{H1}) \rangle$. \end{lemma} \begin{proof} Suppose $\hdep{j}{H2}$ is not a subset of $\hdep{i}{H1}$. This implies that there is a transaction $T_k$ such that $\htits{k}{H2} \in \hdep{j}{H2}$ but $\htits{k}{H1} \notin \hdep{i}{H1}$. This implies that $T_k$ starts afresh after $H1$ in some history, say $H3$, such that $H1 \sqsubset H3 \sqsubseteq H2$. Hence, we get the following \noindent \begin{math} \htits{k}{H3} > \hsyst{H1} \xrightarrow{\obsref{cts-syst}} \htits{k}{H3} > \htcts{i}{H1} \implies \htits{k}{H3} > \htits{i}{H1} + 2*L \xrightarrow{\htits{i}{H1} = \htits{j}{H2}} \htits{k}{H3} > \htits{j}{H2} + 2*L \xrightarrow[definitions]{\affset, \depits} \htits{k}{H2} \notin \hdep{j}{H2} \end{math} We started with $\tits{k}$ in $\hdep{j}{H2}$ and ended with $\tits{k}$ not in $\hdep{j}{H2}$. Thus, we have a contradiction. Hence, the lemma follows. \end{proof} \noindent Next we denote the set of committed transactions in $T_i$'s \affset in $H$ as \emph{\cis (commit independent set)}.
Formally, \begin{equation*} \hcis{i}{H} = \{T_j| (T_j \in \haffset{i}{H}) \land (\inct{j}{H}) \} \end{equation*} \noindent In other words, we have that $\hcis{i}{H} = \haffset{i}{H} - \hcds{i}{H}$. Finally, using the notion of \cis, we denote the maximum of the \maxwts of all the transactions in $T_i$'s \cis as \emph{\pawts} (partly affecting \wts). It turns out that the value of \pawts affects the commit of $T_i$, which we show in the course of the proof. Formally, \pawts is defined as \begin{equation*} \hpawts{i}{H} = max\{\hmaxwts{j}{H}|(T_j \in \hcis{i}{H})\} \end{equation*} \noindent Having defined the required notations, we are now ready to show that an \itsen transaction will eventually become \cdsen. \begin{lemma} \label{lem:its-cds} Consider a transaction $T_i$ which is live in a history $H1$ and whose $\tcts{i}$ is greater than or equal to $\tits{i} + 2*L$. If $T_i$ is \itsen in $H1$ then there is an extension of $H1$, $H2$, in which an \inc of $T_i$, $T_j$ (which could be the same as $T_i$), is either committed or \cdsen. Formally, $\langle H1, T_i: (T_i \in \live{H1}) \land (\htcts{i}{H1} \geq \htits{i}{H1} + 2*L) \land (\itsenb{i}{H1}) \implies (\exists H2, T_j: (H1 \sqsubset H2) \land (T_j \in \incs{i}{H2}) \land ((T_j \in \comm{H2}) \lor (\cdsenb{j}{H2}))) \rangle$. \end{lemma} \begin{proof} We prove this by inducting on the size of $\hdep{i}{H1}$, $n$.
For showing this, we define a boolean function $P(k)$ as follows: \begin{math} P(k) = \begin{cases} True & \langle H1, T_i: (T_i \in \live{H1}) \land (\htcts{i}{H1} \geq \htits{i}{H1} + 2*L) \land (\itsenb{i}{H1}) \land \\ & (k \geq |\hdep{i}{H1}|) \implies (\exists H2, T_j: (H1 \sqsubset H2) \land (T_j \in \incs{i}{H2}) \land \\ & ((T_j \in \comm{H2}) \lor (\cdsenb{j}{H2}))) \rangle \\ False & \text{otherwise} \end{cases} \end{math} As can be seen, here $P(k)$ means that if (1) $T_i$ is live in $H1$; (2) $\tcts{i}$ is greater than or equal to $\tits{i} + 2*L$; (3) $T_i$ is \itsen in $H1$; (4) the size of $\hdep{i}{H1}$ is less than or equal to $k$; then there exists a history $H2$ with a transaction $T_j$ in it which is an \inc of $T_i$ such that $T_j$ is either committed or \cdsen in $H2$. We show $P(k)$ is true for all (integer) values of $k$ using induction. \vspace{1mm} \noindent \textbf{Base Case - $P(0)$:} Here, from the definition of $P(0)$, we get that $|\hdep{i}{H1}| = 0$. This in turn implies that $\hcds{i}{H1}$ is empty. Further, we are already given that $T_i$ is live in $H1$ and $\htcts{i}{H1} \geq \htits{i}{H1} + 2*L$. Hence, all these imply that $T_i$ is \cdsen in $H1$, and so $P(0)$ holds. \vspace{1mm} \noindent \textbf{Induction case - To prove $P(k+1)$ given that $P(k)$ is true:} If $|\hdep{i}{H1}| \leq k$, from the induction hypothesis $P(k)$, we get that an \inc of $T_i$, $T_j$, is either committed or \cdsen in an extension $H2$ of $H1$. Hence, we consider the case when \begin{equation} \label{eq:hdep-assn} |\hdep{i}{H1}| = k + 1 \end{equation} Let $\alpha$ be $\hpawts{i}{H1}$. Suppose $\htwts{i}{H1} < \alpha$. Then from \lemref{wts-great}, we get that there is an extension of $H1$, say $H3$, in which an \inc of $T_i$, $T_l$ (which could be the same as $T_i$) is committed or is live in $H3$ and has \wts greater than $\alpha$. If $T_l$ is committed then $P(k+1)$ is trivially true. So we consider the latter case in which $T_l$ is live in $H3$.
In case $\htwts{i}{H1} \geq \alpha$, the analysis below follows with $T_l$ replaced by $T_i$. Next, suppose $T_l$ is aborted in an extension of $H3$, $H5$. Then from \lemref{its-wts}, we get that there exists an extension of $H3$, $H4$, in which (1) $T_l$ is live; (2) there is a transaction $T_m$ in $\txns{H4}$; (3) $\htwts{m}{H4} > \htwts{l}{H4}$; (4) $T_m$ is committed in $H5$. Combining the above derived conditions (1), (2), (3) with \lemref{ti|tltl-comt}, we get that in $H4$, \begin{equation} \label{eq:ml-tits} \htits{m}{H4} \leq \htits{l}{H4} + 2*L \end{equation} \eqnref{ml-tits} implies that $T_m$ is in $T_l$'s \affset. Here, we have that $T_l$ is an \inc of $T_i$ and we are given that $\htcts{i}{H1} \geq \htits{i}{H1} + 2*L$. Thus from \lemref{aff-tkinc-h1}, we get that there exists an \inc of $T_m$, $T_n$, in $H1$. Combining \eqnref{ml-tits} with the observations (a) $T_n, T_m$ are \inc{s}; (b) $T_l, T_i$ are \inc{s}; (c) $T_i, T_n$ are in $\txns{H1}$, we get that $\htits{n}{H1} \leq \htits{i}{H1} + 2*L$. This implies that $T_n$ is in $\haffset{i}{H1}$. Since $T_n$ is not committed in $H1$ (otherwise, it is not possible for $T_m$ to be an \inc of $T_n$), we get that $T_n$ is in $\hcds{i}{H1}$. Hence, we get that $\htits{m}{H4} = \htits{n}{H1}$ is in $\hdep{i}{H1}$. From \eqnref{hdep-assn}, we have that $|\hdep{i}{H1}|$ is $k+1$. From \lemref{depits-fut}, we get that $\hdep{i}{H4}$ is a subset of $\hdep{i}{H1}$. Further, we have that transaction $T_m$ has committed. Thus $\htits{m}{H4}$, which was in $\hdep{i}{H1}$, is no longer in $\hdep{i}{H4}$. This implies that $\hdep{i}{H4}$ is a strict subset of $\hdep{i}{H1}$ and hence $|\hdep{i}{H4}| \leq k$. \noindent Since $T_i$ and $T_l$ are \inc{s}, we get that $\hdep{i}{H4} = \hdep{l}{H4}$. Thus, we get that \begin{equation} \label{eqn:hdep-ilh4} |\hdep{i}{H4}| \leq k \implies |\hdep{l}{H4}| \leq k \end{equation} \noindent Further, we have that $T_l$ is a later \inc of $T_i$.
So, we get that \begin{equation} \label{eqn:cts-its} \htcts{l}{H4} > \htcts{i}{H4} \xrightarrow{given} \htcts{l}{H4} > \htits{i}{H4} + 2*L \xrightarrow{\htits{i}{H4} = \htits{l}{H4}} \htcts{l}{H4} > \htits{l}{H4} + 2*L \end{equation} We also have that $T_l$ is live in $H4$. Combining this with Equations \ref{eqn:hdep-ilh4}, \ref{eqn:cts-its} and the induction hypothesis that $P(k)$ is true, we get that there exists an extension of $H4$, $H6$, in which an \inc of $T_l$ (and hence of $T_i$), $T_p$, is either committed or \cdsen. This proves the lemma. \end{proof} \begin{lemma} \label{lem:cds-fin} Consider a transaction $T_i$ in a history $H1$. If $T_i$ is \cdsen in $H1$ then there is an extension of $H1$, $H2$, in which an \inc of $T_i$, $T_j$ (which could be the same as $T_i$), is either committed or \finen. Formally, $\langle H1, T_i: (T_i \in \live{H1}) \land (\cdsenb{i}{H1}) \implies (\exists H2, T_j: (H1 \sqsubset H2) \land (T_j \in \incs{i}{H2}) \land ((T_j \in \comm{H2}) \lor (\finenb{j}{H2}))) \rangle$. \end{lemma} \begin{proof} In $H1$, suppose $\haffwts{i}{H1}$ is $\alpha$. From \lemref{wts-great}, we get that there is an extension of $H1$, $H2$, with a transaction $T_j$ which is an \inc of $T_i$. Here there are two cases: (1) $T_j$ is committed in $H2$, which trivially proves the lemma; (2) otherwise, $\twts{j}$ is greater than $\alpha$. \noindent In the second case, we get that \begin{equation} \label{eq:ext} \begin{split} (T_i \in \live{H1}) \land (T_j \in \live{H2}) \land (\cdsenb{i}{H1}) \land (T_j \in \incs{i}{H2}) \land \\ (\htwts{i}{H1} < \htwts{j}{H2}) \end{split} \end{equation} \noindent Combining the above result with \lemref{cts-wts}, we get that $\htcts{i}{H1} < \htcts{j}{H2}$.
Thus the modified equation is \begin{equation} \label{eq:new-ext} \begin{split} (T_i \in \live{H1}) \land (T_j \in \live{H2}) \land (\cdsenb{i}{H1}) \land (T_j \in \incs{i}{H2}) \land \\ (\htcts{i}{H1} < \htcts{j}{H2}) \end{split} \end{equation} \noindent Next combining \eqnref{new-ext} with \lemref{aff-same}, we get that \begin{equation} \label{eq:affs-ij} \haffset{i}{H1} = \haffset{j}{H2} \end{equation} \noindent Similarly, combining \eqnref{new-ext} with \lemref{cds-fut} we get that $T_j$ is \cdsen in $H2$ as well. Formally, \begin{equation} \label{eq:th-cdsen} \cdsenb{j}{H2} \end{equation} Now combining \eqnref{affs-ij} with \lemref{affwts-same}, we get that \begin{equation} \label{eq:affwts-same} \haffwts{i}{H1} = \haffwts{j}{H2} \end{equation} From our initial assumption we have that $\haffwts{i}{H1}$ is $\alpha$. From \eqnref{affwts-same}, we get that $\haffwts{j}{H2} = \alpha$. Further, we earlier saw that $\htwts{j}{H2}$ is greater than $\alpha$. Hence, we have that $\htwts{j}{H2} > \haffwts{j}{H2}$. \noindent Combining the above result with \eqnref{th-cdsen}, $\cdsenb{j}{H2}$, we get that $T_j$ is \finen, i.e., $\finenb{j}{H2}$. \end{proof} \noindent Next, we show that every live transaction eventually becomes \itsen. \begin{lemma} \label{lem:live-its} Consider a history $H1$ and a transaction $T_i$ in $\live{H1}$. Then there is an extension of $H1$, $H2$, in which an \inc of $T_i$, $T_j$ (which could be the same as $T_i$) is either committed or is \itsen. Formally, $\langle H1, T_i: (T_i \in \live{H1}) \implies (\exists T_j, H2: (H1 \sqsubset H2) \land (T_j \in \incs{i}{H2}) \land ((T_j \in \comm{H2}) \lor (\itsenb{j}{H2}))) \rangle$. \end{lemma} \begin{proof} \noindent We prove this lemma by inducting on \its. \vspace{1mm} \noindent \textbf{Base Case - $\tits{i} = 1$:} In this case, $T_i$ is the first transaction to be created. There are no transactions with smaller \its. Thus $T_i$ is trivially \itsen.
\vspace{1mm} \noindent \textbf{Induction Case:} Here we assume that the lemma is true for any transaction with $\tits{i} \leq k$. \end{proof} Combining these lemmas gives us the result that for every live transaction $T_i$ there is an incarnation $T_j$ (which could be the same as $T_i$) that will commit. This implies that every \aptr eventually commits. The following theorem captures this notion. \begin{theorem} \label{thm:hwtm-com} Consider a history $H1$ and a transaction $T_i$ in $\live{H1}$. Then there is an extension of $H1$, $H2$, in which an \inc of $T_i$, $T_j$, is committed. Formally, $\langle H1, T_i: (T_i \in \live{H1}) \implies (\exists T_j, H2: (H1 \sqsubset H2) \land (T_j \in \incs{i}{H2}) \land (T_j \in \comm{H2})) \rangle$. \end{theorem} \begin{proof} Here we show the states that a transaction $T_i$ (or one of its \inc{s}) undergoes before it commits. In all these transitions, it is possible that an \inc of $T_i$ can commit. But to cover the worst case, we assume that no \inc of $T_i$ commits in the intermediate steps. Continuing with this argument, we show that finally an \inc of $T_i$ commits. Consider a live transaction $T_i$ in $H1$. Then from \lemref{live-its}, we get that there is a history $H2$, which is an extension of $H1$, in which $T_j$, an \inc of $T_i$, is either committed or \itsen. If $T_j$ is \itsen in $H2$, then from \lemref{its-cds}, we get that $T_k$, an \inc of $T_j$, will be \cdsen in an extension of $H2$, $H3$ (assuming that $T_k$ is not committed in $H3$). From \lemref{cds-fin}, we get that there is an extension of $H3$, $H4$, in which an \inc of $T_k$, $T_l$, will be \finen, assuming that it is not committed in $H4$. Finally, from \lemref{enbd-ct}, we get that there is an extension of $H4$ in which $T_m$, an \inc of $T_l$, will be committed. This proves our theorem.
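To make the role of the growing \wts concrete, the following C++ sketch (illustrative only; the names \texttt{Incarnation} and \texttt{wtsSequence} are ours and not part of the \ksftm implementation) simulates successive incarnations of a single transaction using the relation $\gwts = \gcts + C \cdot (\gcts - \gits)$ from the \begt \mth{} below: since every re-incarnation obtains a strictly larger \cts while \its stays fixed, the \wts of the incarnations strictly increases and hence eventually exceeds any fixed bound $\alpha$, which is the fact the chain of lemmas above builds on.

```cpp
#include <cassert>
#include <vector>

// Illustrative model only (not the ksftm implementation): an incarnation
// keeps the its of the first incarnation and obtains a fresh, larger cts
// from a global counter; its working timestamp is wts = cts + C*(cts - its).
struct Incarnation {
    long its;  // initial timestamp, shared by all incarnations
    long cts;  // current timestamp, fresh for each incarnation
    long wts(long C) const { return cts + C * (cts - its); }
};

// Simulate a given number of restarts of one transaction whose incarnations
// obtain cts values its, its + step, its + 2*step, ...; return the sequence
// of working timestamps, one per incarnation.
std::vector<long> wtsSequence(long its, int restarts, long step, long C) {
    std::vector<long> seq;
    long counter = its;  // first incarnation: cts == its, so wts == its
    for (int i = 0; i <= restarts; ++i) {
        seq.push_back(Incarnation{its, counter}.wts(C));
        counter += step;  // global counter advances before the next restart
    }
    return seq;
}
```

With $C = 1$ and a counter that advances by 3 between restarts, the sequence for $\its = 1$ is $1, 7, 13, 19, \ldots$: strictly increasing, so any fixed $\alpha$ is eventually exceeded.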
\end{proof} \subsection{Pseudo code of Counter Application} \label{subsec:countercode} OP\_LT\_SEED is defined as the number of operations per transaction, T\_OBJ\_SEED is defined as the number of transaction objects in the system, TRANS\_LT defines the total number of transactions to be executed in the system, and READ\_PER is the percentage of read operations, which is used to define the various workloads. \label{apn:conters} \begin{algorithm} \caption{$main()$: The main procedure invoked by the counter application} \label{algo:main} \begin{algorithmic}[1] \State \Comment{To log abort counts by each thread} \State $abort\_count[$NUMTHREADS$]$ \State \Comment{To log average time taken by each transaction to commit} \State $time\_taken[$NUMTHREADS$]$ \State \Comment{To log the time of the longest running transaction by each thread, i.e., the worst case time} \State $worst\_time[$NUMTHREADS$]$ \For{(i = 0 : NUMTHREADS)} \State pthread\_create(\&threads[i], NULL, testFunc\_helper, (void$\ast$)args) \EndFor \For{(i = 0 : NUMTHREADS)} \State pthread\_join(threads[i], \&status) \EndFor \State $max\_worst\_time = 0.0$ \State $total\_abort\_count = 0$ \State $average\_time\_taken = 0$ \For{(i = 0 : NUMTHREADS)} \If{($max\_worst\_time < worst\_time[i]$)} \State $max\_worst\_time = worst\_time[i]$ \EndIf \State $total\_abort\_count += abort\_count[i]$ \State $average\_time\_taken += time\_taken[i]$ \EndFor \end{algorithmic} \end{algorithm} \vspace{1mm} \begin{algorithm} [H] \caption{$testFunc\_helper()$: Function invoked by the threads} \label{algo:testFunc} \begin{algorithmic}[1] \State $transaction\_count = 0$ \While{(TRANS\_LT)} \State\Comment{Log the time at the start of every transaction} \State $begin\_time = time\_request()$ \State\Comment{Invoke the test function to execute a transaction} \State $abort\_count[thread\_id] = test\_function()$ \State $transaction\_count++$ \algstore{myalg} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic} \algrestore{myalg} \State\Comment{Log the time
at the end of every transaction} \State $end\_time = time\_request()$ \State $time\_taken[thread\_id] += (end\_time - begin\_time)$ \If{($worst\_time[thread\_id] < (end\_time - begin\_time)$)} \State $worst\_time[thread\_id] = (end\_time - begin\_time)$ \EndIf \State TRANS\_LT -= 1 \EndWhile \State $time\_taken[thread\_id]$ /= $transaction\_count$ \end{algorithmic} \end{algorithm} \vspace{1mm} \begin{algorithm} [H] \caption{$test\_function()$: The main test function which executes a transaction} \label{algo:testFunx} \begin{algorithmic}[1] \State Transaction $\ast$T = new Transaction; \State $T\rightarrow g\_its$ = NIL \State $local\_abort\_count$ = 0 \State label: \While{(true)} \If{($T\rightarrow g\_its$ != $NIL$)} \State $its = T\rightarrow g\_its$ \State $T = lib\rightarrow stm$-$begin(its)$ \Else \State $T = lib\rightarrow stm$-$begin(T\rightarrow g\_its)$ \EndIf \ForAll{(OP\_LT\_SEED)} \State $t\_obj = rand()\%T\_OBJ\_SEED$ \State $randVal = rand()\%OP\_SEED$ \If{($randVal <= READ\_PER $)} \State $stm$-$read(t\_obj, value)$ \If{(value == $ABORTED$)} \State $local\_abort\_count$++ \State goto label \EndIf \Else \State $stm$-$write(t\_obj, value)$ \EndIf \EndFor \If{($lib\rightarrow stm$-$tryC() == ABORTED$)} \State $local\_abort\_count$++ \State continue \EndIf \State break \EndWhile \end{algorithmic} \end{algorithm} \subsection{Data Structures and Pseudocode of \ksftm} \label{sec:code} The STM system consists of the following methods: $\init(), \begt(), read(i, x), write_i(i, x, v)$ and $\tryc(i)$. We assume that all the \tobj{s} are ordered as $x_1, x_2, ...x_n$ and belong to the set $\mathcal{T}$. We describe the data structures used by the algorithm. We start with the structures that are local to each transaction. Each transaction $T_i$ maintains a $\rset{i}$ and a $\wset{i}$. In addition, it maintains the following structures: (1) $\ct_i$: This is the value given to $T_i$ when it terminates; it is assigned in the \tryc \mth.
(2) A series of lists: \srl, \lrl, \allrl, \pvl, \nvl, \relll, \abl. The meaning of these lists will become clear with the description of the pseudocode. In addition to these local structures, the following global structures are maintained that are shared across transactions (and hence, threads). We name all the shared variables starting with `G'. \begin{itemize} \item $\gtcnt$ (counter): This is a numerical counter that is incremented when a transaction begins and when it terminates. \end{itemize} \noindent For each transaction $T_i$ we maintain the following shared time-stamps: \begin{itemize} \item $\glock_i$: A lock for accessing all the shared variables of $T_i$. \item $\gits_i$ (initial timestamp): It is a time-stamp assigned to $T_i$ when it was invoked for the first time without any aborts. The current value of $\gtcnt$ is atomically assigned to it and then incremented. If $T_i$ is aborted and restarts later then the application assigns it the same \gits. \item $\gcts_i$ (current timestamp): It is a time-stamp assigned when $T_i$ is invoked again at a later time after an abort. Like \gits, the current value of $\gtcnt$ is atomically assigned to it and then incremented. When $T_i$ is created for the first time, its \gcts{} is the same as its \gits. \item $\gwts_i$ (working timestamp): It is the time-stamp that $T_i$ works with. It is greater than or equal to $T_i$'s \gcts. It is computed as follows: $\gwts_i = \gcts_i + C * (\gcts_i - \gits_i)$. \item $\gval_i$: This is a boolean variable which is initially true. If it becomes false then $T_i$ has to be aborted. \item $\gstat_i$: This is a variable which states the current status of $T_i$. It has three states: \texttt{live}, \texttt{committed} or \texttt{aborted}. \item $\tltl_i, \tutl_i$ (transaction lower \& upper time limits): These are the time-limits described in the previous section used to keep the transaction \wts and \rt orders in sync.
$\tltl_i$ is set to the \gcts{} of $T_i$ when the transaction begins and is a non-decreasing value. It continues to increase (or remains the same) as $T_i$ reads \tobj{s} and later terminates. $\tutl_i$, on the other hand, is a non-increasing value starting with $\infty$ when $T_i$ is created. It reduces (or remains the same) as $T_i$ reads \tobj{s} and later terminates. If $T_i$ commits then both $\tltl_i$ \& $\tutl_i$ are made equal. \end{itemize} Two transactions having the same \its are said to be \inc{s}. No two transactions can have the same \cts. For simplicity, we assume that no two transactions have the same \wts as well. In case two transactions have the same \wts, one can use the tuple $\langle$\wts, \cts$\rangle$ instead of \wts. But we ignore such cases. For each \tobj $x$ in $\mathcal{T}$, we maintain: \begin{itemize} \item $x.\vl$ (version list): It is a list consisting of version tuples or \emph{\vtup} of the form $\langle \ts, val, \rl, \vt \rangle$. The details of the tuple are explained below. \item $\ts$ (timestamp): Here $\ts$ is the $\gwts_i$ of a committed transaction $T_i$ that has created this version. \item $val$: The value of this version. \item $\rl$ (readList): $\rl$ is the read list consisting of all the transactions that have read this version. Each entry in this list is of the form $\langle rts \rangle$ where $rts$ is the $\gwts_j$ of a transaction $T_j$ that read this version. \item $\vt$ (version real-time timestamp): It is the \tutl value (which is the same as \tltl) of the transaction $T_i$ that created this version, recorded at the time of commit of $T_i$. \end{itemize} \begin{algorithm}[H] \caption{STM $\init()$: Invoked at the start of the STM system.
Initializes all the \tobj{s} used by the STM System} \label{algo:init} \begin{algorithmic}[1] \State $\gtcnt$ = 1; \Comment{Global Transaction Counter} \ForAll {$x$ in $\mathcal{T}$} \Comment{All the \tobj{s} used by the STM System} \State /* $T_0$ is creating the first version of $x$: $\ts= 0, val = 0, \rl = nil, \vt = 0$ */ \State add $\langle 0, 0, nil, 0 \rangle$ to $x.\vl$; \label{lin:t0-init1} \EndFor; \end{algorithmic} \end{algorithm} \begin{algorithm} [H] \caption{STM $\begt(its)$: Invoked by a thread to start a new transaction $T_i$. The thread can pass a parameter $its$, which is the initial timestamp from when this transaction was invoked for the first time. If this is the first invocation then $its$ is $nil$. It returns the tuple $\langle id, \gwts, \gcts \rangle$} \label{algo:begin} \begin{algorithmic}[1] \State $i$ = unique-id; \Comment{A unique id to identify this transaction. It could be same as \gcts} \State \Comment{Initialize transaction specific local \& global variables} \If {($its == nil$)} \State $\gits_i = \gwts_i = \gcts_i = \gtcnt.get\&Inc()$; \Comment{$\gtcnt.get\&Inc()$ returns the current value of \gtcnt and atomically increments it} \label{lin:ti-ts-init} \Else \State $\gits_i = its$; \State $\gcts_i = \gtcnt.get\&Inc()$; \State $\gwts_i = \gcts_i + C * (\gcts_i - \gits_i)$; \Comment{$C$ is any constant greater than or equal to 1} \EndIf \State $\tltl_i = \gcts_i$; $\tutl_i = \ct_i = \infty$; \label{lin:lts-init} \State $\gstat_i$ = \texttt{live}; $\gval_i = T$; \State $rset\xspace_i = wset\xspace_i = nil$; \State return $\langle i, \gwts_i, \gcts_i\rangle$ \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{STM $read(i, x)$: Invoked by a transaction $T_i$ to read \tobj{} $x$.
It returns either the value of $x$ or $\mathcal{A}$} \label{algo:read} \begin{algorithmic}[1] \If {($x \in wset\xspace_i$)} \Comment{Check if the \tobj{} $x$ is in $wset\xspace_i$} \State return $wset\xspace_i[x].val$; \ElsIf{($x \in rset\xspace_i$)} \Comment{Check if the \tobj{} $x$ is in $rset\xspace_i$} \State return $rset\xspace_i[x].val$; \Else \Comment{\tobj{} $x$ is not in $rset\xspace_i$ and $wset\xspace_i$} \State lock $x$; lock $\glock_i$; \If {$(\gval_i == F)$} return abort(i); \label{lin:rd-chk} \EndIf \State /* \findls: From $x.\vl$, returns the version with the largest \ts value less than $\gwts_i$. If no such version exists, it returns $nil$ */ \State $curVer = \findls(\gwts_i,x)$; \label{lin:rd-curver10} \If {$(curVer == nil)$} return abort(i); \Comment{Proceed only if $curVer$ is not nil} \label{lin:rd-cvnil} \EndIf \State /* \findsl: From $x.\vl$, returns the version with the smallest \ts value greater than $\gwts_i$. If no such version exists, it returns $nil$ */ \State $nextVer = \findsl(\gwts_i,x)$; \algstore{myalg} \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \begin{algorithmic} \algrestore{myalg} \If {$(nextVer \neq nil)$} \State \Comment{Ensure that $\tutl_i$ remains smaller than $nextVer$'s \vltl} \State $\tutl_i = min(\tutl_i, x[nextVer].\vt-1)$; \label{lin:rd-ul-dec} \EndIf \State \Comment{$\tltl_i$ should be greater than $x[curVer].\vltl$} \State $\tltl_i = max(\tltl_i, x[curVer].\vt + 1)$; \label{lin:rd-tltl-inc} \If {($\tltl_i > \tutl_i$)} \Comment{If the limits have crossed each other, then $T_i$ is aborted} \State return abort(i); \label{lin:rd-lts-cross} \EndIf \State $val = x[curVer].val$; add $\langle x, val \rangle$ to $rset\xspace_i$; \State add $T_i$ to $x[curVer].rl$; \State unlock $\glock_i$; unlock $x$; \State return $val$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{STM $write_i(x,val)$: A Transaction $T_i$ writes into local memory} \label{algo:write} \begin{algorithmic}[1] \State Append the $d\_tuple \langle x,val \rangle$
to $wset\xspace_i$. \State return $ok$; \end{algorithmic} \end{algorithm} \begin{algorithm} [H] \caption{STM $\tryc()$: Returns $ok$ on commit; otherwise returns $\mathcal{A}$} \label{algo:tryc} \begin{algorithmic}[1] \State \Comment{The following check is an optimization which needs to be performed again later} \State lock $\glock_i$; \If {$(\gval_i == F)$} return abort(i); \label{lin:init-tc-chk} \EndIf \State unlock $\glock_i$; \State \Comment{Initialize smaller read list (\srl), larger read list (\lrl), all read list (\allrl) to nil} \State $\srl = \lrl = \allrl = nil$; \label{lin:init-rls} \State \Comment{Initialize previous version list (\pvl), next version list (\nvl) to nil} \State $\pvl = \nvl = nil$; \label{lin:init-vls} \ForAll {$x \in wset\xspace_i$} \State lock $x$ in pre-defined order; \label{lin:lockxs} \State /* \findls: returns the version of $x$ with the largest \ts less than $\gwts_i$. If no such version exists, it returns $nil$. */ \State $\prevv = \findls(\gwts_i, x)$; \Comment{\prevv: largest version smaller than $\gwts_i$} \If {$(\prevv == nil)$} \Comment{There exists no version with \ts value less than $\gwts_i$} \State lock $\glock_i$; return abort(i); \label{lin:prev-nil} \EndIf \State $\pvl = \pvl \cup \prevv$; \Comment{\pvl stores the previous versions in sorted order} \State $\allrl = \allrl \cup x[\prevv].rl$; \Comment{Store the read-list of the previous version} \State \Comment{\textbf{\getl}: obtain the list of reading transactions of $x[\prevv].rl$ whose $\gwts$ is greater than $\gwts_i$} \State $\lrl = \lrl \cup \getl(\gwts_i, $ \Statex $x[\prevv].rl)$; \label{lin:lar-coll} \State \Comment{\textbf{\getsm}: obtain the list of reading transactions of $x[\prevv].rl$ whose $\gwts$ is smaller than $\gwts_i$} \State $\srl = \srl \cup \getsm(\gwts_i, $ \Statex $x[\prevv].rl)$; \label{lin:lar-sml} \algstore{myalg} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[H] \algrestore{myalg} \State /* \findsl: returns the version with the
smallest \ts value greater than $\gwts_i$. If no such version exists, it returns $nil$. */ \State $\nextv = \findsl(\gwts_i, x)$; \Comment{\nextv: smallest version larger than $\gwts_i$} \label{lin:get-nextv} \If {$(\nextv \neq nil)$} \State $\nvl = \nvl \cup \nextv$; \Comment{\nvl stores the next versions in sorted order} \label{lin:nvl-coll} \EndIf \EndFor \Comment{$x \in wset\xspace_i$} \State $\relll = \allrl \cup T_i$; \Comment{Initialize relevant lock list (\relll)} \ForAll {($T_k \in \relll$)} \State lock $\glock_k$ in pre-defined order; \Comment{Note: Since $T_i$ is also in $\relll$, $\glock_i$ is also locked} \label{lin:lockall} \EndFor \State \Comment{Verify if $\gval_i$ is false} \If {$(\gval_i == F)$} return abort(i); \label{lin:mid-tc-chk} \EndIf \State $\abl = nil$ \Comment{Initialize abort read list (\abl)} \State \Comment{For each transaction $T_k$ in $\lrl$, either $T_k$ or $T_i$ has to be aborted} \ForAll {$(T_k \in \lrl)$} \If {$(\isab(T_k))$} \State \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted} \State continue; \EndIf \If {$(\gits_i < \gits_k) \land (\gstat_k == \texttt{live})$} \State \Comment{Transaction $T_k$ has lower priority and is not yet committed.
So it needs to be aborted} \State $\abl = \abl \cup T_k$; \Comment{Store $T_k$ in \abl} \label{lin:addAbl-lar} \Else \Comment{Transaction $T_i$ has to be aborted} \State return abort(i); \label{lin:its-chk1} \EndIf \EndFor \State \Comment{Ensure that $\tltl_i$ is greater than the \vltl of the versions in $\pvl$} \ForAll {$(ver \in \pvl)$} \State $x$ = \tobj of $ver$; \State $\tltl_i = max(\tltl_i, x[ver].\vt + 1)$; \label{lin:tryc-tltl-inc} \EndFor \State \Comment{Ensure that $\tutl_i$ is less than the \vltl of the versions in $\nvl$} \ForAll {$(ver \in \nvl)$} \State $x$ = \tobj of $ver$; \State $\tutl_i = min(\tutl_i, x[ver].\vltl - 1)$; \label{lin:tryc-ul-dec} \EndFor \State \Comment{Store the current value of the global counter as commit time and increment it} \State $\ct_i = \gtcnt.add\&Get(\incv)$; \Comment{$\incv$ can be any constant $\geq$ 1} \label{lin:tryc-cmt-mod} \State $\tutl_i = min(\tutl_i, \ct_i)$; \Comment{Ensure that $\tutl_i$ is less than or equal to $\ct_i$} \label{lin:tryc-ul-cmt} \State \Comment{Abort $T_i$ if its limits have crossed} \If {$(\tltl_i > \tutl_i)$} return abort(i); \label{lin:tc-lts-cross} \EndIf \algstore{myalg} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[H] \algrestore{myalg} \ForAll {$(T_k \in \srl)$} \If {$(\isab(T_k))$} \State continue; \EndIf \If {$(\tltl_k \geq \tutl_i)$} \label{lin:tk-check} \Comment{Ensure that the limits do not cross for both $T_i$ \& $T_k$} \If {$(\gstat_k == live)$} \Comment{Check if $T_k$ is live} \If {$(\gits_i < \gits_k)$} \State \Comment{Transaction $T_k$ has lower priority and is not yet committed. So it needs to be aborted} \State $\abl = \abl \cup T_k$; \Comment{Store $T_k$ in \abl} \label{lin:addAbl-sml} \Else \Comment{Transaction $T_i$ has to be aborted} \State return abort(i); \label{lin:its|lar-sml} \EndIf \Comment{$(\gits_i < \gits_k)$} \label{lin:its-ik} \Else \Comment{($T_k$ is committed.
Hence, $T_i$ has to be aborted)} \State return abort(i); \label{lin:its-chk2} \EndIf \Comment{$(\gstat_k == live)$} \EndIf \Comment{$(\tltl_k \geq \tutl_i)$} \EndFor \Comment{$(T_k \in \srl)$} \State \Comment{After this point $T_i$ can't abort.} \State $\tltl_i = \tutl_i$; \label{lin:ti-updt} \State \Comment{Since $T_i$ can't abort, we can update $T_k$'s \tutl} \ForAll {$(T_k \in \srl)$} \If {$(\isab(T_k))$} \State continue; \EndIf \State /* The following line ensures that $\tltl_k \leq \tutl_k < \tltl_i$. Note that this does not cause the limits of $T_k$ to cross each other because of the check in \Lineref{tk-check}.*/ \State $\tutl_k = min(\tutl_k, \tltl_i - 1)$; \label{lin:tk-updt} \EndFor \ForAll {$T_k \in \abl$} \Comment{Abort all the transactions in \abl since $T_i$ can't abort} \State $\gval_k = F$; \label{lin:gval-set} \EndFor \State \Comment{Having completed all the checks, $T_i$ can be committed} \ForAll {$(x \in wset\xspace_i)$} \State /* Create new v\_tuple: $\ts, val, \rl, \vt$ for $x$ */ \State $newTuple = \langle \gwts_i, wset\xspace_i[x].val, nil, \tltl_i \rangle$; \label{lin:new-tup} \If {($|x.vl| > k$)} \State replace the oldest tuple in $x.\vl$ with $newTuple$; \Comment{$x.\vl$ is ordered by $\ts$} \Else \State add $newTuple$ to $x.vl$ in sorted order; \EndIf \EndFor \Comment{$x \in wset\xspace_i$} \State $\gstat_i$ = \texttt{commit}; \State unlock all variables; \State return $\mathcal{C}$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{$\isab(T_k)$: Verifies if $T_k$ is already aborted or its \gval flag is set to false, implying that $T_k$ will be aborted soon} \label{algo:isab} \begin{algorithmic}[1] \If {$(\gval_k == F) \lor (\gstat_k == \texttt{abort}) \lor (T_k \in \abl)$} \State return $T$; \Else \State return $F$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{$abort(i)$: Invoked by various STM methods to abort transaction $T_i$.
It returns $\mathcal{A}$} \label{algo:abort} \begin{algorithmic}[1] \State $\gval_i = F$; $\gstat_i$ = \texttt{abort}; \State unlock all variables locked by $T_i$; \State return $\mathcal{A}$; \end{algorithmic} \end{algorithm} \section{Proof of safety} \begin{lemma} \label{lem:tltl-edge} Consider a history $H$ in $\gen{\ksftm}$ with two transactions $T_i$ and $T_j$ such that both their G\_valid flags are true. If there is an edge from $T_i$ $\rightarrow$ $T_j$, then $\tltl_i$ $<$ $\tltl_j$. \end{lemma} \begin{proof} There are three possible types of edges in the MVSG. \begin{enumerate} \item Real-time edge: Since transactions $T_i$ and $T_j$ are in real-time order, $\ct_i$ $<$ $\gcts_j$. As we know from \lemref{ti|tltl-comt}, $(\tltl_i \leq \ct_i)$. So, $(\tltl_i \leq \gcts_j)$.\\ We know from the STM $\begt(its)$ method that $\tltl_j = \gcts_j$.\\ Hence, $\tltl_i$ $<$ $\tltl_j$. \item Read-from edge: Since transaction $T_i$ has committed and $T_j$ reads from $T_i$, from \Lineref{new-tup} of $\tryc(T_i)$, $\tltl_i = \vltl_i$,\\ and from \Lineref{rd-tltl-inc} of STM $read(j, x)$, $\tltl_j = max(\tltl_j,\\ x[curVer]. \vltl + 1)$ $\Rightarrow$ $(\tltl_j > \vltl_i)$ $\Rightarrow$ $(\tltl_j > \tltl_i)$ \\ Hence, $\tltl_i$ $<$ $\tltl_j$. \item Version-order edge: Consider a triplet $w_j(x_j) r_k(x_j) w_i(x_i)$ for which there are two possibilities of version order: \begin{enumerate} \item i $\ll$ j $\Longrightarrow$ $\gwts_i < \gwts_j$ \\ There are two possibilities of commit order: \begin{enumerate} \item $\ct_i <_H \ct_j$: Since $T_i$ committed before $T_j$, $\tltl_i = \vltl_i$. From \Lineref{tryc-tltl-inc} of $\tryc(T_j)$, $\vltl_i < \tltl_j$.\\ Hence, $\tltl_i$ $<$ $\tltl_j$. \item $\ct_j <_H \ct_i$: Since $T_j$ committed before $T_i$, $\tltl_j = \vltl_j$. From \Lineref{tryc-ul-dec} of $\tryc(T_i)$, $\tutl_i < \vltl_j$. As we have assumed $\gval_i$ is true, $T_i$ will definitely execute \Lineref{ti-updt} of $\tryc(T_i)$, i.e.
$\tltl_i = \tutl_i$.\\ Hence, $\tltl_i$ $<$ $\tltl_j$. \end{enumerate} \item j $\ll$ i $\Longrightarrow$ $\gwts_j < \gwts_i$ \\ Again, there are two possibilities of commit order: \begin{enumerate} \item $\ct_j <_H \ct_i$: Since $T_j$ committed before $T_i$ and $T_k$ reads from $T_j$, there are two possibilities for $\gwts_k$: \begin{enumerate} \item $\gwts_k > \gwts_i$: That means $T_k$ is in the largeRL of $T_i$. From \Lineref{addAbl-lar} to \Lineref{its-chk1} of $\tryc(i)$, the $\gval$ flag of either transaction $T_k$ or $T_i$ is set to false. If $T_i$ returns abort, then this case is not considered in \lemref{tltl-edge}. Otherwise, since $T_j$ has already committed and $T_i$ will later execute \Lineref{new-tup} of $\tryc(T_i)$, we get $\tltl_j < \tltl_i$.\\ \item $\gwts_k < \gwts_i$: That means $T_k$ is in the smallRL of $T_i$. From \Lineref{rd-ul-dec} of $read(k, x)$, $\tutl_k < \vltl_i$, and from \Lineref{rd-tltl-inc} of $read(k, x)$, $\tltl_k > \vltl_j$. Here, $T_j$ has already committed, so $\tltl_j = \vltl_j$. As we have assumed $\gval_i$ is true, $T_i$ will definitely execute \Lineref{new-tup} of $\tryc(T_i)$, i.e., $\tltl_i = \vltl_i$.\\ So, $\tutl_k < \tltl_i $ and $\tltl_k > \tltl_j$. Since the $\gval_k$ flag is true, $\tltl_k < \tutl_k$.\\ Hence, $\tltl_j < \tltl_k < \tutl_k < \tltl_i$.\\ Therefore, $\tltl_j < \tltl_k < \tltl_i$. \end{enumerate} \item $\ct_i <_H \ct_j$: Since $T_i$ committed before $T_j$, $\tltl_i = \vltl_i$. From \Lineref{tryc-ul-dec} of $\tryc(T_j)$, $\tutl_j < \vltl_i$, i.e., $\tutl_j < \tltl_i$. Here, $T_k$ reads from $T_j$. So, from \Lineref{rd-ul-dec} of $read(k, x)$, $\tutl_k < \vltl_i$ $\rightarrow$ $\tutl_k < \tltl_i$, and from \Lineref{rd-tltl-inc} of $read(k, x)$, $\tltl_k > \vltl_j$.
As we have assumed $\gval_j$ is true, $T_j$ will definitely execute \Lineref{new-tup} of $\tryc(T_j)$, i.e., $\tltl_j = \vltl_j$.\\ Hence, $\tltl_j < \tltl_k < \tutl_k < \tltl_i$.\\ Therefore, $\tltl_j < \tltl_k < \tltl_i$. \end{enumerate} \end{enumerate} \cmnt{Due to acquiring the lock on each dataitem before creating a version So, let say $T_i$ created a version then release it and return commit. For both the transactions G\_valid flags are true. As we know from \lemref{tltl commit} $(\tltl_i \leq \ct_i)$. After creating a version by $T_i$, transaction $T_j$ wants to create a version of the same dataitem then definitely, $(\ct_i < \ct_j)$. Again from \lemref{tltl commit} on $T_j$, $(\tltl_j \leq \ct_j)$. \\ So, $\tltl_i$ $<$ $\tltl_j$. } \end{enumerate} \end{proof} \cmnt{ \begin{theorem} \label{thm:trans-com|abt} Transaction with lowest $\its$ value will eventually have the highest $\gwts$ value. \end{theorem} } \begin{theorem} Any history H in gen(KSFTM) is local opaque iff, for a given version order $\ll_H$, MVSG(H,$\ll$) is acyclic. \end{theorem} \begin{proof} We prove this by contradiction, so assume MVSG(H,$\ll$) has a cycle. From \lemref{tltl-edge}, for any two transactions $T_i$ and $T_j$ whose G\_valid flags are both true, if there is an edge from $T_i$ $\rightarrow$ $T_j$ then $\tltl_i$ $<$ $\tltl_j$. Considering the transitive case for k transactions $T_1, T_2, T_3...T_k$ whose G\_valid flags are all true: if there is a path $T_1$ $\rightarrow$ $T_2$ $\rightarrow$ $T_3$ $\rightarrow$....$\rightarrow$ $T_k$, then $\tltl_1 $ $<$ $\tltl_2$ $<$ $\tltl_3$ $<$ ....$<$ $\tltl_k$.\\ Now, by our assumption MVSG(H,$\ll$) has a cycle, so $T_1$ $\rightarrow$ $T_2$ $\rightarrow$ $T_3$ $\rightarrow$....$\rightarrow$ $T_k$ $\rightarrow$ $T_1$, which implies $\tltl_1 $ $<$ $\tltl_2$ $<$ $\tltl_3$ $<$ ....$<$ $\tltl_k$ $<$ $\tltl_1$.\\ Hence, $\tltl_1$ $<$ $\tltl_1$, but this is impossible.
So, our assumption is wrong.\\ Therefore, MVSG(H,$\ll$) produced by KSFTM is acyclic. \end{proof} \textbf{\textit{M\_Order$_H$:}} This denotes the method order of a history H, in which the methods of transactions are intervals (consisting of the invocation and response of a method) rather than atomic points. Since methods are intervals, methods of different transactions can overlap. To prove the correctness \textit{(local opacity)} of our algorithm, we need to order the overlapping methods. Say there are two transactions $T_i$ and $T_j$ accessing either common (t-objects/$\glock$) or $\gtcnt$ through operations $op_i$ and $op_j$ respectively. If res($op_i$) $<_H$ inv($op_j$), then $op_i$ and $op_j$ are in real-time order in H. So, the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. If the operations overlap and either access common t-objects or share $\glock$: \begin{enumerate} \item $read_i(x)$ and $read_j(x)$: If $read_i(x)$ acquires the lock on x before $read_j(x)$, then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. \item $read_i(x)$ and $\tryc_j()$: If they access common t-objects and, say, $read_i(x)$ acquires the lock on x before $\tryc_j()$, then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. If they do not access common t-objects but share $\glock$ and, say, $read_i(x)$ acquires the lock on $\glock_i$ before $\tryc_j()$ acquires the locks on $\relll$ (which contains $\glock_i$ and $\glock_j$), then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. \item $\tryc_i()$ and $\tryc_j()$: If they access common t-objects and, say, $\tryc_i()$ acquires the lock on x before $\tryc_j()$, then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. If they do not access common t-objects but share $\glock$ and, say, $\tryc_i()$ acquires the locks on $\relll_i$ before $\tryc_j()$, then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$.
\end{enumerate} If the operations overlap and access different t-objects but share the $\gtcnt$ counter: \begin{enumerate} \item $\begt_i$ and $\begt_j$: Both $\begt$ operations access the shared counter variable $\gtcnt$. If $\begt_i$ executes $\gtcnt.get\&Inc()$ before $\begt_j$, then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. \item $\begt_i$ and $\tryc(j)$: If $\begt_i$ executes $\gtcnt.get\&Inc()$ before $\tryc(j)$, then the \textit{M\_Order$_H$} is $op_i \rightarrow op_j$. \end{enumerate} \textit{Linearization:} The histories generated by STMs are generally not sequential because operations of different transactions overlap. The correctness of STMs is defined on sequential histories; in order to show that a history generated by our algorithm is correct, we have to consider an equivalent sequential history. We have enough information to order the overlapping methods; after ordering them we obtain an equivalent sequential history, and this total order of the operations is called a linearization of the history.\\ \textit{Operation graph (OPG):} Consider each operation as a vertex and add edges as below: \begin{enumerate} \item Real-time edge: If the response of operation $op_i$ happens before the invocation of operation $op_j$, i.e., res($op_i$) $<_H$ inv($op_j$), then there exists a real-time edge $op_i$ $\rightarrow$ $op_j$. \item Conflict edge: It is based on $L\_Order_H$, which depends on three conflicts: \begin{enumerate} \item Common \textit{t-object}: If two operations $op_i$ and $op_j$ overlap and access a common \textit{t-object x}, and, say, $op_i$ acquires the lock on x first, then $L\_Order.op_i$(x) $<_H$ $L\_Order.op_j$(x), so the conflict edge is $op_i$ $\rightarrow$ $op_j$. \item Common $\gval$ flag: If two operations $op_i$ and $op_j$ overlap but access a common $\gval$ flag instead of a \textit{t-object}, and, say, $op_i$ acquires the lock on $\gval_i$ first, then $L\_Order.op_i$(x) $<_H$ $L\_Order.op_j$(x), so the conflict edge is $op_i$ $\rightarrow$ $op_j$.
\end{enumerate} \item Common $\gtcnt$ counter: If two operation $op_i$ and $op_j$ are overlapping but accessing common $\gtcnt$ counter instead of \textit{t-object}. Let say $op_i$ access $\gtcnt$ counter before $op_j$ then $L\_Order.op_i$(x) $<_H$ $L\_Order.op_j$(x) so, conflict edge is $op_i$ $\rightarrow$ $op_j$. \end{enumerate} \cmnt{ \begin{lemma} Any history H gen(KSFTM) follows strict partial order of all the locks in H ($lockOrder_H$) so, operation graph (OPG(H)) is acyclic. i.e. \end{lemma} \begin{enumerate} \item \textrm{If ($p_i$, $p_j$) is an edge in OPG, then $Fpu_i$($\alpha$) $<$ $Lpl_j$($\alpha$) for some data item $\alpha$ and two operations $p_i$($\alpha$), $p_j$($\alpha$) in conflict.} \item \textrm{If ($p_1$,$p_2$, ... ,$p_n$) is a path in OPG, n $\geq$ 1, then $Fpu_1$($\alpha$) $<$ $Lpl_n$($\gamma$) for two data items $\alpha$ and $\gamma$ as well as operations $p_1$($\alpha$) and $p_n$($\gamma$).} \item \textrm{OPG is acyclic.} \end{enumerate} \begin{proof} \textrm{We assume variables $\alpha$, $\beta$ and $\gamma$ can be data item or G\_Valid.} \begin{enumerate} \item \textrm{If ($p_i$, $p_j$) is an edge in operation graph OPG, then OPG comprises two steps $p_i$($\alpha$) and $p_j$($\alpha$) in conflict such that $p_i$($\alpha$) $<$ $p_j$($\alpha$). According to (If $o_i$(x)(o $\in$\{r, w\}) occurs in graph, then so do $ol_i$($\alpha$) and $ou_i$($\alpha$) with the sequencing $ol_i$($\alpha$) $<$ $o_i$($\alpha$) $<$ $ou_i$($\alpha$).), this implies $pl_i$($\alpha$) $<$ $p_i$($\alpha$) $<$ $pu_i$($\alpha$) and $pl_j$($\alpha$) $<$ $p_j$($\alpha$) $<$ $pu_j$($\alpha$). According to (If some steps $p_i$($\alpha$) and $p_j$($\alpha$) from graph are in conflict, then either $Fpu_i$($\alpha$) $<$ $Lpl_j$($\alpha$) or $Fpu_j$($\alpha$) $<$ $Lpl_i$($\alpha$) holds.), we moreover find (a) $Fpu_i$($\alpha$) $<$ $Lpl_j$($\alpha$) or (b) $Fpu_j$($\alpha$) $<$ $Lpl_i$($\alpha$). 
Case (b) means $Lpl_j$($\alpha$) $<$ $p_j$($\alpha$) $<$ $pu_j$($\alpha$) $<$ $pl_i$($\alpha$) $<$ $p_i$($\alpha$) $<$ $Fpu_i$($\alpha$) and hence $p_j$($\alpha$) $<$ $p_i$($\alpha$), a contradiction to $p_i$($\alpha$) $<$ $p_j$($\alpha$). Thus, $Fpu_i$($\alpha$) $<$ $Lpl_j$($\alpha$).} \item \textrm{In this case we are considering transitive order. In order to prove it, we are taking induction on n (1): If ($p_1$, $p_2$) is an edge in OPG, there is a conflict between $p_1$ and $p_2$. Thus, $Fpu_1$($\alpha$) $<$ $Lpl_2$($\alpha$), i.e., $p_1$ unlocks $\alpha$ before $p_2$ locks $\alpha$. In other words, when $p_2$ sets a lock, $p_1$ has already released one. Now we are assuming it is true for n transactions on the path ($p_1$,$p_2$, ... ,$p_n$) in OPG and we need to prove, it holds for path n+1. The inductive assumption now tells us that there are data items $\alpha$ and $\beta$ such that $Fpu_1$($\alpha$) $<$ $Lpl_n$($\beta$) in S . Since ($p_n$, $p_{n+1}$) is an edge in OPG, it follows from (1) above that for operations $p_n$($\gamma$) and $p_{n+1}$($\gamma$) in conflict we have $Fpu_n$($\gamma$) $<$ $Lpl_{n+1}$($\gamma$). According to (If $p_i$($\alpha$) and $p_i$($\gamma$) are in graph, then $pl_i$($\alpha$) $<$ $pu_i$($\gamma$), i.e., every lock operation occurs before every unlock operation of the same transaction), this implies $pl_n$($\beta$) $<$ $pu_n$($\gamma$) and hence $Fpu_1$($\alpha$) $<$ $Lpl_{n+1}$($\gamma$).} \item \textrm{Proof by contradiction: Assuming OPG is cyclic. So there exists a cycle ($p_1$,$p_2$, ... ,$p_n$, $p_1$), n $\geq$ 1. 
By using (2), $Fpu_1$($\alpha$) $<$ $Lpl_1$(($\gamma$) for operations $p_1$($\alpha$), $p_1$($\gamma$), a contradiction to the KSFTM (If $p_i$($\alpha$) and $p_i$(($\gamma$) are in OPG, then $pl_i$($\alpha$) $<$ $pu_i$(($\gamma$), i.e., every lock operation occurs before every unlock operation of the same transaction).} \end{enumerate} \end{proof} } \begin{lemma} All the locks in a history H ($L\_Order_H$) in gen(KSFTM) follow a strict partial order, so the operation graph OPG(H) is acyclic. If ($op_i$$\rightarrow$$op_j$) is in OPG, then at least one of the following definitely holds: ($Fpu_i$($\alpha$) $<$ $Lpl\_op_j$($\alpha$)) $\cup$ ($access.\gtcnt_i$ $<$ $access.\gtcnt_j$) $\cup$ ($Fpu\_op_i$($\alpha$) $<$ $access.\gtcnt_j$) $\cup$ ($access.\gtcnt_i$ $<$ $Lpl\_op_j$($\alpha$)). Here, $\alpha$ can be either a t-object or $\gval$. \end{lemma} \begin{proof} We prove this by induction. Assume there exists a path from $op_1$ to $op_n$ and an edge from $op_n$ to $op_{n+1}$. As described, while constructing OPG(H) we need to consider three types of edges. We consider them one by one: \begin{enumerate} \item Real-time edge from $op_n$ to $op_{n+1}$: \begin{enumerate} \item $op_{n+1}$ is a locking method: Here we consider all the possible paths from $op_1$ to $op_n$: \begin{enumerate} \item ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)): Here, ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ So, ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)) $<$ ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$))\\ Hence, ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)) \item ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)): Here, ($access.\gtcnt_n$ $<$ $Ll\_op_{n+1}$($\alpha$)).
As we know, if a method both locks and accesses the common counter, it locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.\\ So, ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)) \item ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n}$): Here, ($access.\gtcnt_{n}$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ So, ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n}$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ Hence, ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n+1}$($\alpha$)). \item ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n}$): Here, ($access.\gtcnt_{n}$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ So, ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n}$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)) \item ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)): Here, ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ So, ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)) $<$ ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ Hence, ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n+1}$($\alpha$)). \item ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)): Here, ($access.\gtcnt_n$ $<$ $Ll\_op_{n+1}$($\alpha$)). As we know, if a method both locks and accesses the common counter, it locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.\\ So, ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n+1}$($\alpha$)).
\end{enumerate} \item $op_{n+1}$ is a non-locking method: Again, we consider all the possible paths from $op_1$ to $op_n$: \begin{enumerate} \item ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)): Here, ($access.\gtcnt_n$) $<$ ($access.\gtcnt_{n+1}$).\\ As we know, if a method both locks and accesses the common counter, it locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.\\ So, ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n+1}$) \item ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)): Here, ($Fu\_op_n$($\alpha$) $<$ ($access.\gtcnt_{n+1}$). \\ So, ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)) $<$ ($Fu\_op_n$($\alpha$) $<$ ($access.\gtcnt_{n+1}$)\\ Hence, ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n+1}$)) \item ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n}$): Here, ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$).\\ So, ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$).\\ Hence, ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n+1}$). \item ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n}$): Here, ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$).\\ So, ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n+1}$) \item ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)): Here, ($access.\gtcnt_n$) $<$ ($access.\gtcnt_{n+1}$).\\ As we know, if a method both locks and accesses the common counter, it locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.\\ So, ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n+1}$). \item ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)): Here, ($Fu\_op_n$($\alpha$) $<$ ($access.\gtcnt_{n+1}$). \\ So, ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)) $<$ ($Fu\_op_n$($\alpha$) $<$ ($access.\gtcnt_{n+1}$).
\\ Hence, ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n+1}$). \end{enumerate} \end{enumerate} \item Conflict edge from $op_n$ to $op_{n+1}$: \begin{enumerate} \item ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)): Here, ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)). Ref 1.(a).i. \item ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n}$): Here, ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)). As we know, if a method both locks and accesses the common counter, it locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.\\ So, ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n+1}$($\alpha$)). \item ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n}$): Here, ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)). As we know, if a method both locks and accesses the common counter, it locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.\\ So, ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)). \item ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)): Here, ($Fu\_op_n$($\alpha$) $<$ $Ll\_op_{n+1}$($\alpha$)).\\ Ref 1.(a).v. \end{enumerate} \item Common counter edge from $op_n$ to $op_{n+1}$: \begin{enumerate} \item ($Fu\_op_1$($\alpha$) $<$ $Ll\_op_n$($\alpha$)): Here, ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$). As we know, if a method both locks and accesses the common counter, it locks the t-object first, then accesses the counter, and then unlocks the t-object, i.e.\\ So, ($Ll\_op_n$($\alpha$)) $<$ ($access.\gtcnt_n$) $<$ ($Fu\_op_n$($\alpha$)).\\ Hence, ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n+1}$). \item ($access.\gtcnt_{1}$) $<$ ($access.\gtcnt_{n}$): Here, ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$). Ref 1.(b).iii. \item ($Fu\_op_1$($\alpha$) $<$ ($access.\gtcnt_{n}$): Here, ($access.\gtcnt_{n}$) $<$ ($access.\gtcnt_{n+1}$).
Ref 1.(b).iv. \item ($access.\gtcnt_{1}$) $<$ $Ll\_op_{n}$($\alpha$)): Here, ($access.\gtcnt_n$) $<$ ($access.\gtcnt_{n+1}$). Ref 1.(b).v \end{enumerate} \end{enumerate} \cmnt{ then $Fpu_i$($\alpha$) $<$ $Lpl_j$($\alpha$). Conflict edge is based on $M\_Order_H$.\\ We are proving by contradiction while assuming OPG(H) is cyclic. So, $op_1$ $\rightarrow$ $op_2$ $\rightarrow$ $op_3$ $\rightarrow$....$\rightarrow$ $op_k$ $\rightarrow$ $op_1$ that implies either ($Fpu_1$($\alpha$) $<$ $Lpl_2$($\alpha$) $<$ $Fpu_2$($\alpha$) $<$ $Lpl_3$($\alpha$) $<$ .... $<$ $Fpu_{k-1}$($\alpha$) $<$ $Lpl_k$($\alpha$) $<$ $Fpu_k$($\alpha$) $<$ $Lpl_1$($\alpha$)) or ($access.\gtcnt_1$ $<$ $access.\gtcnt_2$ $<$ $access.\gtcnt_3$ $<$ .... $<$ $access.\gtcnt_{k-1}$ $<$ $access.\gtcnt_k$ $<$ $access.\gtcnt_1$). Hence from above assumption, either ($Fpu_1$($\alpha$) $<$ $Lpl_1$($\alpha$)) or ($access.\gtcnt_1$ $<$ $access.\gtcnt_1$). But ($Fpu_1$($\alpha$) $<$ $Lpl_1$($\alpha$)) is impossible because methods are follwing 2PL order of locking and ($access.\gtcnt_1$ $<$ $access.\gtcnt_1$) is never be true because of same method $op_1$. Hence, all the above cases are impossible. So, our assumption is wrong. } Therefore, OPG(H, $M\_Order$) produced by KSFTM is acyclic. \end{proof} \begin{lemma} \label{lem:val-hs} Any history H in gen(KSFTM) with a linearization $\alpha$ that respects $M\_Order_H$ is such that (H, $\alpha$) is valid. \end{lemma} \begin{proof} From the definition of a \textit{valid history}: if every read operation of H reads from a previously committed transaction $T_j$, then H is valid.\\ In order to prove that H is valid, we analyze read(i,x). From \Lineref{rd-curver10}, it returns the version with the largest \ts value less than $\gwts_i$, which has already been committed, and returns its value successfully. If such a version, created by a transaction $T_j$, is found, then $T_i$ reads from $T_j$.
Otherwise, if there is no version whose \wts is less than $T_i$'s \wts, then $T_i$ returns abort.\\ Now, consider the base case in which read(i,x) belongs to the first transaction $T_1$ and no transaction has yet created a version; as we have assumed, there always exists $T_0$, which by default has created a version of every t-object. Hence, $T_1$ reads from the committed transaction $T_0$.\\ So, every read reads from the version with the largest \ts value less than $\gwts_i$ that has already been committed. Hence, (H, $\alpha$) is valid. \end{proof} \begin{lemma} \label{lem:rt-hs} Any history H in gen(KSFTM) with linearizations $\alpha$ and $\beta$ such that both respect $M\_Order_H$, i.e., $M\_Order_H \subseteq \alpha$ and $M\_Order_H \subseteq \beta$, satisfies $\prec_{(H, {\alpha})} ^{RT}$= $\prec_{(H, {\beta})}^{RT}$. \end{lemma} \begin{proof} Consider a history H in gen(KSFTM) such that two transactions $T_i$ and $T_j$ are in real-time order, which respects $M\_Order_H$, i.e., $\tryc_i$ $<$ $\begt_j$. As $\alpha$ and $\beta$ are linearizations of H, $\tryc_i$ $<_{(H,{\alpha})}$ $\begt_j$ and $\tryc_i$ $<_{(H,{\beta})}$ $\begt_j$. Hence, under both linearizations, $T_i$ commits before $T_j$ begins. So, $\prec_{(H, {\alpha})} ^{RT}$= $\prec_{(H, {\beta})}^{RT}$. \end{proof} \begin{lemma} Any history H in gen(KSFTM) with linearizations $\alpha$ and $\beta$ such that both respect $M\_Order_H$, i.e., $M\_Order_H \subseteq \alpha$ and $M\_Order_H \subseteq \beta$, satisfies: $(H, {\alpha})$ is local opaque iff $(H, {\beta})$ is local opaque. \end{lemma} \begin{proof} As $\alpha$ and $\beta$ are linearizations of history H in gen(KSFTM), from \lemref{val-hs}, (H, $\alpha$) and (H, $\beta$) are valid histories. Now assume (H, $\alpha$) is local opaque; we need to show that (H, $\beta$) is also local opaque.
Since (H, $\alpha$) is local opaque, there exists a legal t-sequential history S (with respect to each aborted transaction and the last committed transaction, while considering only committed transactions) which is equivalent to ($\overline{H}$, $\alpha$). As $\beta$ is a linearization of H, ($\overline{H}$, $\beta$) is equivalent to the same legal t-sequential history S. From the definition of local opacity, $\prec_{(H, {\alpha})} ^{RT} \subseteq \prec_S^{RT}$. From \lemref{rt-hs}, $\prec_{(H, {\alpha})} ^{RT}$= $\prec_{(H, {\beta})}^{RT}$, which implies $\prec_{(H, {\beta})}^{RT} \subseteq \prec_S^{RT}$. Hence, $(H, {\beta})$ is local opaque.\\ Now consider the other direction, in which (H, $\beta$) is local opaque and we need to show that (H, $\alpha$) is also local opaque. We can prove it by the same argument as above, exchanging $\alpha$ and $\beta$.\\ Hence, $(H, {\alpha})$ is local opaque iff $(H, {\beta})$ is local opaque. \end{proof} \begin{theorem} \label{thm:ap-ksftm-lo} Any history generated by \ksftm{} is \lopq. \end{theorem} \begin{proof} To prove this, we consider a sequential history $H$ generated by \ksftm. We define the version order $\ll_{\vt}$: for two versions $v_i, v_j$ it is defined as $(v_i \ll_{\vt} v_j) \equiv (v_i.\vt < v_j.\vt)$ \noindent Using this version order $\ll_{\vt}$, we can show that all the sub-histories in $\shset{H}$ are acyclic. \end{proof} \noindent Since the histories generated by \ksftm are \lopq, they are also \stsble. \begin{corollary} \label{thm:ap-ksftm-stsble} Any history generated by \ksftm{} is \stsble. \end{corollary} \ignore{ \begin{lemma} Any history H gen(KSFTM) is deadlock-free. \end{lemma} \begin{proof} In our algorithm, each transaction $T_i$ is following lock order in every method ($read(x,i)$ and $tryc()$) that are locking t-object first then $\glock$.
Since transaction $T_i$ acquires locks on t-objects in a pre-defined order at \Lineref{lockxs} of $\tryc()$, and it also follows a pre-defined locking order over all conflicting $\glock$s, including its own, at \Lineref{lockall} of $\tryc()$, any history H in gen(KSFTM) is deadlock-free. \end{proof} \begin{lemma} \label{lem:its-finitewts} Consider two histories $H1, H2$ in $\gen{\ksftm}$ such that $H2$ is an extension of $H1$. Let $T_i$ and $T_j$ be transactions in $\live{H1}$ such that $T_i$ has the lowest $\its$ among all the transactions in the \textbf{live} and \textbf{$\rab$} sets and the $\gval_i$ flag is true in $H1$. Suppose $T_i$ is aborted in $H2$. Then the number of transactions $T_j$ in $\txns{H1}$ with $\twts{j}$ higher than $\twts{i}$ is finite in $H2$. Formally, $ \langle H1, H2, T_i: ((\{H1, H2\} \subset \gen{\ksftm}) \land (H1 \sqsubset H2) \land (T_i \in \live{H1}) \land (\tits{i} \text{ is the smallest among } ((\live{H1} \land \rab(H1)) ) \land (\htval{i}{H1} = T) \land (T_i \in \aborted{H2})) \implies (\exists T_j \in \txns{H1}: (\htwts{i}{H1} < \htwts{j}{H1}) \land$ (such $T_j$ are finite in {H2})) $\rangle$. \end{lemma} \begin{proof} As we observed in \lemref{wts-its}, $T_i$ will terminate within the 2L range. So, every time ($\tits{i} + 2L \geq \tits{j}$), transactions with higher $twts$ can cause $T_i$ to abort. Say there are m such transactions within the 2L range, denoted $T_j$. Then, in the worst case, $T_i$ will abort at most m times, because on every retry at least one transaction from $T_j$ will definitely commit and cause $T_i$ to abort. On abort, when $T_i$ retries, it retains the same $its$ but gets a higher $wts$. So, after all such $T_j$ have committed, $T_i$ will be the only transaction with the lowest $its$ among all the transactions in the \textbf{live} and \textbf{$\rab$} sets (from the lemma assumption) and the highest $wts$ among all the transactions in the \textbf{live}, \textbf{committed} and \textbf{$\rab$} sets.
Hence, the number of such $T_j$ with $wts$ higher than $wts_i$ that can cause $T_i$ to abort is finite. So, the number of such $T_j$ in $\txns{H1}$ with $\twts{j}$ higher than $\twts{i}$ is finite in $H2$. \end{proof} \begin{lemma} \label{lem:its-hwts} Consider two histories $H1, H2$ in $\gen{\ksftm}$ such that $H2$ is an extension of $H1$. Let $T_i$ be a transaction in $\live{H1}$ such that $T_i$ has the lowest \its among all the transactions in the \textbf{live} and \textbf{$\rab$} sets and the $\gval_i$ flag is true in $H1$. Suppose $T_i$ has the highest $wts$ in $H2$. Then $T_i$ will definitely commit in $H2$. So, KSFTM ensures starvation-freedom. Formally, $ \langle H1, H2, T_i: ((\{H1, H2\} \subset \gen{\ksftm}) \land (H1 \sqsubset H2) \land (T_i \in \live{H1}) \land (\tits{i} \text{ is the smallest among } ((\live{H1} \land \rab(H1)) ) \land (\htval{i}{H1} = T) \land highest(\htwts{i}{H1})$ $\implies$ ($\exists$ $T_i$ is committed in {H2})) $\rangle$. \end{lemma} \begin{proof} From \lemref{its-finitewts}, transaction $T_i$ has the lowest $its$ among all the transactions in the \textbf{live} and \textbf{$\rab$} sets, and the number of transactions $T_j$ in $\txns{H1}$ with $\twts{j}$ higher than $\twts{i}$ is finite in $H2$. So, for each transaction $T_i$ there eventually exists a global state in which it has the lowest $its$ and the highest $wts$. In that state, and in all future global states in which $T_i$ is still live, $T_i$ cannot be aborted. So, $T_i$ will definitely commit in $H2$. Hence, KSFTM ensures starvation-freedom. \end{proof} } \section{Graph Characterization of Local Opacity \& \ksftm Correctness} \label{sec:ap-gphchar} To prove the correctness of STM systems, it is useful to consider a graph characterization of histories. In this section, we describe the graph characterization developed by Kumar et al.\ \cite{Kumar+:MVTO:ICDCN:2014} for proving \opty, which is based on the characterization by Bernstein and Goodman \cite{BernGood:1983:MCC:TDS}.
We extend this characterization for \lo. Consider a history $H$ which consists of multiple versions for each \tobj. The graph characterization uses the notion of \textit{version order}. Given $H$ and a \tobj{} $x$, we define a version order for $x$ as any (non-reflexive) total order on all the versions of $x$ ever created by committed transactions in $H$. It must be noted that the version order may or may not be the same as the actual order in which the versions of $x$ are generated in $H$. A version order of $H$, denoted as $\ll_H$, is the union of the version orders of all the \tobj{s} in $H$. Consider the history $H2: r_1(x, 0) r_2(x, 0) r_1(y, 0) r_3(z, 0) w_1(x, 5) w_3(y, 15) w_2(y, 10) w_1(z, 10) \\ c_1 c_2 r_4(x, 5) r_4(y, 10) w_3(z, 15) c_3 r_4(z, 10)$. Using the notation that a committed transaction $T_i$ writing to $x$ creates a version $x_i$, a possible version order $\ll_{H2}$ for $H2$ is: $\langle x_0 \ll x_1 \rangle, \langle y_0 \ll y_2 \ll y_3 \rangle, \langle z_0 \ll z_1 \ll z_3 \rangle $. We define the graph characterization based on a given version order. Consider a history $H$ and a version order $\ll$. We then define a graph (called the opacity graph) on $H$ using $\ll$, denoted as $\opg{H}{\ll} = (V, E)$. The vertex set $V$ consists of a vertex for each transaction $T_i$ in $\overline{H}$. The edges of the graph are of three kinds, defined as follows: \begin{enumerate} \item \textit{\rt}(real-time) edges: If $T_i$ commits before $T_j$ starts in $H$, then there is an edge from $v_i$ to $v_j$. This set of edges is referred to as $\rtx(H)$. \item \textit{\rf}(reads-from) edges: If $T_j$ reads $x$ from $T_i$ in $H$, then there is an edge from $v_i$ to $v_j$. Note that in order for this to happen, $T_i$ must have committed before $T_j$'s read, i.e., $c_i <_H r_j(x)$. This set of edges is referred to as $\rf(H)$. \item \textit{\mv}(multiversion) edges: The \mv{} edges capture the multiversion relations and are based on the version order.
Consider a successful read \op{} $r_k(x,v)$ and the write \op{} $w_j(x,v)$ belonging to transaction $T_j$ such that $r_k(x,v)$ reads $x$ from $w_j(x,v)$ (it must be noted that $T_j$ is a committed transaction and $c_j <_H r_k$). Consider a committed transaction $T_i$ which writes to $x$, $w_i(x, u)$ where $u \neq v$. Thus the versions created, $x_i$ and $x_j$, are related by $\ll$. Then, if $x_i \ll x_j$, we add an edge from $v_i$ to $v_j$. Otherwise ($x_j \ll x_i$), we add an edge from $v_k$ to $v_i$. This set of edges is referred to as $\mv(H, \ll)$. \end{enumerate} Using this construction, the graph $\opg{H2}{\ll_{H2}}$ for history $H2$ and $\ll_{H2}$ is shown in \figref{opg}. The edges are annotated. The only \mv{} edge, from $T_4$ to $T_3$, is due to the \tobj{s} $y$ and $z$: for instance, $T_4$ reads value 10 for $z$ from $T_1$, whereas $T_3$ also writes 15 to $z$ and commits before $r_4(z)$. \begin{figure}[tbph] \centerline{\scalebox{0.7}{\input{figs/ex2.pdf_t}}} \captionsetup{justification=centering} \caption{$\opg{H2}{\ll_{H2}}$} \label{fig:opg} \end{figure} Kumar et al.\ \cite{Kumar+:MVTO:ICDCN:2014} showed that if there exists a version order $\ll$ for a history $H$ such that $\opg{H}{\ll_H}$ is acyclic, then $H$ is \opq. This is captured in the following result. \ignore{ \begin{result} \label{res:main-opg} A \valid{} history $H$ is opaque iff there exists a version order $\ll_H$ such that $\opg{H}{\ll_H}$ is acyclic. \end{result} \noindent This result can be extended to characterize \lo using graphs with the following theorem. The proof is in Appendix \thmref{log}. \begin{theorem} \label{thm:main-log} A \valid{} history $H$ is \lopq iff for each sub-history $sh$ in $\shset{H}$ there exists a version order $\ll_{sh}$ such that $\opg{sh}{\ll_{sh}}$ is acyclic. Formally, $\langle (H \text{ is \lopq}) \Leftrightarrow (\forall sh \in \shset{H}, \exists \ll_{sh}: \opg{sh}{\ll_{sh}} \text{ is acyclic}) \rangle$.
\end{theorem} } \begin{result} \label{res:opg} A \valid{} history $H$ is opaque iff there exists a version order $\ll_H$ such that $\opg{H}{\ll_H}$ is acyclic. \end{result} \noindent This result can easily be extended to prove \lo as follows. \begin{theorem} \label{thm:log} A \valid{} history $H$ is \lopq iff for each sub-history $sh$ in $\shset{H}$ there exists a version order $\ll_{sh}$ such that $\opg{sh}{\ll_{sh}}$ is acyclic. Formally, $\langle (H \text{ is \lopq}) \Leftrightarrow (\forall sh \in \shset{H}, \exists \ll_{sh}: \opg{sh}{\ll_{sh}} \text{ is acyclic}) \rangle$. \end{theorem} \begin{proof} To prove this theorem, we have to show that each sub-history $sh$ in $\shset{H}$ is \valid. Then the rest follows from \resref{opg}. Now consider a sub-history $sh$ and any read \op $r_i(x, v)$ of a transaction $T_i$. It is clear that $T_i$ must have read a version of $x$ created by a previously committed transaction. From the construction of $sh$, we get that all the transactions that committed before $r_i$ are also in $sh$. Hence $sh$ is also \valid. Now, proving that $sh$ is \opq iff there exists a version order $\ll_{sh}$ such that $\opg{sh}{\ll_{sh}}$ is acyclic follows from \resref{opg}. \end{proof} \ignore { Using this theorem, we can give the proof sketch of \ksftm algorithms. Here for simplicity, we assume that the history generated is sequential. \begin{theorem} \label{thm:ap1-ksftm-lo} Any history generated by \ksftm{} is \lopq. \end{theorem} \begin{proof} For proving this, we consider a sequential history $H$ generated by \ksftm. We define the version order $\ll_{\vt}$: for two versions $v_i, v_j$ it is defined as $(v_i \ll_{\vt} v_j) \equiv (v_i.\vt < v_j.\vt)$ \noindent Using this version order $\ll_{\vt}$, we can show that all the sub-histories in $\shset{H}$ are acyclic. \end{proof} } \subsubsection{Data Structures and Pseudocode of \sftm} \label{apn:SFTM} We start with data structures that are local to each transaction.
For each transaction $T_i$: \begin{itemize} \item $rset\xspace_i$(read-set): It is a list of data tuples ($d\_tuples$) of the form $\langle x, val \rangle$, where $x$ is the t-object and $val$ is the value read by the transaction $T_i$. We refer to a tuple in $T_i$'s read-set by $rset\xspace_i[x]$. \item $wset\xspace_i$(write-set): It is a list of ($d\_tuples$) of the form $\langle x, val \rangle$, where $x$ is the \tobj{} to which transaction $T_i$ writes the value $val$. Similarly, we refer to a tuple in $T_i$'s write-set by $wset\xspace_i[x]$. \end{itemize} In addition to these local structures, the following global structures are maintained, which are shared across transactions (and hence, threads). We name all the shared variables starting with `G'. \begin{itemize} \item $\gtcnt$ (counter): This is a numerical counter that is incremented when a transaction begins. \end{itemize} \noindent For each transaction $T_i$ we maintain the following shared time-stamps: \begin{itemize} \item $\glock_i$: A lock for accessing all the shared variables of $T_i$. \item $\gits_i$ (initial timestamp): It is a time-stamp assigned to $T_i$ when it was invoked for the first time. \item $\gcts_i$ (current timestamp): It is the time-stamp assigned to $T_i$ when it is invoked again at a later time. When $T_i$ is created for the first time, its $\gcts$ is the same as its $its$. \item $\gval_i$: This is a boolean variable which is initially true ($T$). If it becomes false ($F$), then $T_i$ has to be aborted. \item $\gstat_i$: This is a variable which states the current status of $T_i$. It has three states: \texttt{live}, \texttt{commit} or \texttt{abort}. \end{itemize} \noindent For each data item $x$ in history $H$, we maintain: \begin{itemize} \item $x.val$ (value): It is the most recent value successfully written by a committed transaction. \item $x.rl$ (readList): It is a read list consisting of all the transactions that have read $x$.
\end{itemize} \begin{algorithm} [H] \caption{STM $\init()$: Invoked at the start of the STM system. Initializes all the data items used by the STM System} \label{alg:init} \begin{algorithmic}[1] \State $\gtcnt$ = 1; \ForAll {data item $x$ used by the STM System} \State add $\langle 0, nil \rangle$ to $x.val$;\Comment{ $T_0$ is initializing $x$} \label{lin:t0-init} \EndFor; \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:begin} \caption{STM $\begt(its)$: Invoked by a thread to start a new transaction $T_i$. The thread can pass a parameter $its$, which is the initial timestamp from when this transaction was invoked for the first time. If this is the first invocation, then $its$ is $nil$. It returns the tuple $\langle id, \gcts \rangle$} \begin{algorithmic}[1] \State $i$ = unique-id; \Comment{A unique id to identify this transaction. It could be the same as $\gcts$.} \If {($its == nil$)} \State $\gits_i = \gcts_i = \gtcnt.get\&Inc()$; \State \Comment{$\gtcnt.get\&Inc()$ returns the current value of $\gtcnt$ and atomically increments it by 1.} \Else \State $\gits_i = its$; \State $\gcts_i = \gtcnt.get\&Inc()$; \EndIf \State $rset\xspace_i = wset\xspace_i = null$; \State $\gstat_i$ = \texttt{live}; \State $\gval_i = T$; \State return $\langle i, \gcts_i\rangle$ \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:read} \caption{STM $read(i, x)$: Invoked by a transaction $T_i$ to read $x$.
It returns either the value of $x$ or $\mathcal{A}$} \begin{algorithmic}[1] \If {($x \in wset\xspace_i$)} \Comment{Check if $x$ is in $wset\xspace_i$} \State return $wset\xspace_i[x].val$; \ElsIf {($x \in rset\xspace_i$)} \Comment{Check if $x$ is in $rset\xspace_i$} \State return $rset\xspace_i[x].val$; \Else \Comment{$x$ is not in $rset\xspace_i$ and $wset\xspace_i$} \State lock $x$; \State lock $\glock_i$; \If {$(\gval_i == F)$} \State return $abort(i)$; \label{lin:rabort} \EndIf \cmnt { \State /* \findsl: From $x.\vl$, returns the smallest \ts value greater than $\gwts_i$. If no such version exists, it returns $nil$ */ \State $nextVer = \findsl(\gwts_i,x)$; \If {$(nextVer \neq nil)$} \State \Comment{Ensure that $\tutl_i$ remains smaller than $nextVer$'s \vltl} \State $\tutl_i = min(\tutl_i, x[nextVer].vltl-1)$; \EndIf \State \Comment{$\tltl_i$ should be greater than $x[curVer].\vltl$} \State $\tltl_i = max(\tltl_i, x[curVer].\vltl + 1)$; \If {($\tltl_i > \tutl_i$)} \Comment{If the limits have crossed each other, then $T_i$ is aborted} \State return abort(i); \EndIf } \State $val = x.val$; \State add $T_i$ to $x.rl$; \State unlock $\glock_i$; \State unlock $x$; \State return $val$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:write} \caption{STM $write_i(x,val)$: A transaction $T_i$ writes into local memory} \begin{algorithmic}[1] \State Append the $d\_tuple \langle x,val \rangle$ to $wset\xspace_i$.\Comment{If the same data item exists, overwrite the tuple} \State return $ok$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:llts} \caption{STM $findLLTS(TSet)$: Find the lowest $its$ value among all the live transactions in $TSet$.} \begin{algorithmic}[1] \State $min\_its$ = $\infty$ \ForAll {( $T_j \in TSet$)} \If {($(\gits_j < min\_its)$ \&\& $(\gstat_j == \texttt{live})$)} \State $min\_its$ = $\gits_j$; \EndIf \EndFor \State return $min\_its$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:tryc}
\caption{STM $\tryc()$: Returns $\mathcal{C}$ on commit; otherwise returns Abort ($\mathcal{A}$)} \begin{algorithmic}[1] \State lock $\glock_i$ \If {$(\gval_i == F)$} return $abort(i)$; \EndIf \State $TSet = null$ {} \Comment{$TSet$ stores transaction ids} \ForAll {($x \in wset\xspace_i$)} \State lock $x$ in pre-defined order; \ForAll {($T_j \in x.rl$)} \State $TSet$ = $TSet$ $\cup$ \{$T_j$\} \EndFor \EndFor \Comment{$x \in wset\xspace_i$} \State $TSet$ = $TSet$ $\cup$ \{$T_i$\} \Comment{Add the current transaction $T_i$ to $TSet$} \ForAll {( $T_k \in TSet$)} \State lock $\glock_k$ in pre-defined order; \Comment{Note: Since $T_i$ is also in $TSet$, $\glock_i$ is also locked} \EndFor \If {$(\gval_i == F)$} return $abort(i)$; \Else \If {($\gits_i == findLLTS(TSet)$)} \Comment{Check if $T_i$ has the lowest $its$ among all \texttt{live} transactions in $TSet$} \ForAll {($T_j \in TSet$)} \Comment{ ($T_i \neq T_j$)} \State $\gval_j = F$; \State unlock $\glock_j$; \EndFor \Else \State return $abort(i)$; \EndIf \EndIf \cmnt { \algstore{tryc-break} \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:tryc-cont} \caption{STM $\tryc()$: Continued} \begin{algorithmic}[1] \algrestore{tryc-break} \State \Comment{Ensure that $\vutl_i$ is less than \vltl of versions in $\nvl$} \ForAll {$(ver \in \nvl)$} \State $x$ = \tobj of $ver$; \State $\tutl_i = min(\tutl_i, x[ver].\vltl - 1)$; \EndFor } \ForAll {($x \in wset\xspace_i$)} \State replace the old value in $x.val$ with $newValue$; \State $x.rl$ = null; \EndFor \State $\gstat_i$ = \texttt{commit}; \State unlock all variables locked by $T_i$; \State return $\mathcal{C}$; \end{algorithmic} \end{algorithm} \cmnt { \begin{algorithm} \label{alg:lowPri} \caption{$\lowp(T_k, T_i)$: Verifies if $T_k$ has lower priority than $T_i$ and is not already committed} \begin{algorithmic}[1] \If {$(\gits_i < \gits_k) \land (\gstat_k == \texttt{live})$} \State return $T$; \Else \State return $F$; \EndIf \end{algorithmic} \end{algorithm}
\begin{algorithm} \label{alg:\lowp} \caption{$\lowp(T_k, T_i)$: Aborts lower priority transaction among $T_k$ and $T_i$} \begin{algorithmic}[1] \If {$(\gits_i < \gits_k)$} \State $\abl = \abl \cup T_k$; \Comment{Store lower priority $T_k$ in \abl} \Else \State return abort(i); \Comment{Abort $T_i$} \EndIf \end{algorithmic} \end{algorithm} } \begin{algorithm}[H] \label{alg:abort} \caption{$abort(i)$: Invoked by various STM methods to abort transaction $T_i$. It returns $\mathcal{A}$} \begin{algorithmic}[1] \State $\gval_i = F$; \State $\gstat_i$ = \texttt{abort}; \State unlock all variables locked by $T_i$; \State return $\mathcal{A}$; \end{algorithmic} \end{algorithm} \section{PCode of \sftm} \label{sec:pcode-sftm} \textbf{Data Structures:} We start with data structures that are local to each transaction. For each transaction $T_i$: \begin{itemize} \item $rset\xspace_i$(read-set): It is a list of data tuples ($d\_tuples$) of the form $\langle x, val \rangle$, where $x$ is the t-object and $val$ is the value read by the transaction $T_i$. We refer to a tuple in $T_i$'s read-set by $rset\xspace_i[x]$. \item $wset\xspace_i$(write-set): It is a list of ($d\_tuples$) of the form $\langle x, val \rangle$, where $x$ is the \tobj{} to which transaction $T_i$ writes the value $val$. Similarly, we refer to a tuple in $T_i$'s write-set by $wset\xspace_i[x]$. \end{itemize} In addition to these local structures, the following global structures are maintained, which are shared across transactions (and hence, threads). We name all the shared variables starting with `G'. \begin{itemize} \item $\gtcnt$ (counter): This is a numerical counter that is incremented when a transaction begins. \end{itemize} \noindent For each transaction $T_i$ we maintain the following shared time-stamps: \begin{itemize} \item $\glock_i$: A lock for accessing all the shared variables of $T_i$.
\item $\gits_i$ (initial timestamp): It is a time-stamp assigned to $T_i$ when it was invoked for the first time. \item $\gcts_i$ (current timestamp): It is the time-stamp assigned to $T_i$ when it is invoked again at a later time. When $T_i$ is created for the first time, its \gcts{} is the same as its \its. \item $\gval_i$: This is a boolean variable which is initially true. If it becomes false, then $T_i$ has to be aborted. \item $\gstat_i$: This is a variable which states the current status of $T_i$. It has three states: \texttt{live}, \texttt{committed} or \texttt{aborted}. \end{itemize} \noindent For each data item $x$ in history $H$, we maintain: \begin{itemize} \item $x.val$ (value): It is the most recent value successfully written by a committed transaction. \item $\rl$ (readList): $rl$ is a read list consisting of all the transactions that have read $x$. \end{itemize} \begin{algorithm} [H] \caption{STM $\init()$: Invoked at the start of the STM system. Initializes all the data items used by the STM System} \label{alg:init} \begin{algorithmic}[1] \State $\gtcnt$ = 1; \ForAll {data item $x$ used by the STM System} \State add $\langle 0, nil \rangle$ to $x.val$;\Comment{ $T_0$ is initializing $x$} \label{lin:t0-init-SFTM} \EndFor; \end{algorithmic} \end{algorithm} \begin{algorithm} [H] \label{alg:begin} \caption{STM $\begt(its)$: Invoked by a thread to start a new transaction $T_i$. The thread can pass a parameter $its$, which is the initial timestamp from when this transaction was invoked for the first time. If this is the first invocation, then $its$ is $nil$. It returns the tuple $\langle id, \gcts \rangle$} \begin{algorithmic}[1] \State $i$ = unique-id; \Comment{A unique id to identify this transaction.
It could be same as \gcts} \If {($its == nil$)} \State $\gits_i = \gcts_i = \gtcnt.get\&Inc()$; \State \Comment{$\gtcnt.get\&Inc()$ returns the current value of \gtcnt and atomically increments it} \Else \State $\gits_i = its$; \State $\gcts_i = \gtcnt.get\&Inc()$; \EndIf \State $rset\xspace_i = wset\xspace_i = null$; \State $\gstat_i$ = \texttt{live}; \State $\gval_i = T$; \State return $\langle i, \gcts_i\rangle$ \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:read} \caption{STM $read(i, x)$: Invoked by a transaction $T_i$ to read $x$. It returns either the value of $x$ or $\mathcal{A}$} \begin{algorithmic}[1] \If {($x \in rset\xspace_i$)} \Comment{Check if $x$ is in $rset\xspace_i$} \State return $rset\xspace_i[x].val$; \ElsIf {($x \in wset\xspace_i$)} \Comment{Check if $x$ is in $wset\xspace_i$} \State return $wset\xspace_i[x].val$; \Else \Comment{$x$ is not in $rset\xspace_i$ and $wset\xspace_i$} \State lock $x$; lock $\glock_i$; \If {$(\gval_i == F)$} \State return abort(i); \label{lin:rabort-SFTM} \EndIf \State \Comment{ Find available value from $x.val$, returns the value } \State $curVer = findavilval(\gcts_i,x)$; \cmnt { \State /* \findsl: From $x.\vl$, returns the smallest \ts value greater than $\gwts_i$. 
If no such version exists, it returns $nil$ */ \State $nextVer = \findsl(\gwts_i,x)$; \If {$(nextVer \neq nil)$} \State \Comment{Ensure that $\tutl_i$ remains smaller than $nextVer$'s \vltl} \State $\tutl_i = min(\tutl_i, x[nextVer].vltl-1)$; \EndIf \State \Comment{$\tltl_i$ should be greater than $x[curVer].\vltl$} \State $\tltl_i = max(\tltl_i, x[curVer].\vltl + 1)$; \If {($\tltl_i > \tutl_i$)} \Comment{If the limits have crossed each other, then $T_i$ is aborted} \State return abort(i); \EndIf } \State $val = x[curVer].v$; add $\langle x, val \rangle$ to $rset\xspace_i$; \State add $T_i$ to $x[curVer].rl$; \State unlock $\glock_i$; \State unlock $x$; \State return $val$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:write} \caption{STM $write_i(x,val)$: A Transaction $T_i$ writes into local memory} \begin{algorithmic}[1] \State Append the $d\_tuple \langle x,val \rangle$ to $wset\xspace_i$. \State return $ok$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:tryc} \caption{STM $\tryc()$: Returns $ok$ on commit else return Abort} \begin{algorithmic}[1] \State \Comment{The following check is an optimization which needs to be performed again later} \State Set$<$int$>$ TSet $\leftarrow$ $\phi$ {} \Comment{TSet storing transaction Ids} \ForAll {$x \in wset\xspace_i$} \State lock $x$ in pre-defined order; \For{$<$each transaction $t_j$ of [$x$].rl$>$} \State TSet = [$x$].rl \EndFor \State TSet = TSet $\cup$ \{$t_i$\} \EndFor \Comment{$x \in wset\xspace_i$} \State lock $\glock_i$; \If {$(\gval_i == F)$} return abort(i); \Else \State Find LTS in TSet {} \Comment{lowest time stamp} \If {(TS($t_i$) == LTS)} \For{$<$each transaction $t_j$ of [$x$].rl$>$} \State $G\_valid_j$ $\leftarrow$ false \State unlock $\glock_j$; \EndFor \Else \State return abort(i); \EndIf \EndIf \cmnt { \algstore{tryc-break} \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:tryc-cont} \caption{STM $\tryc()$: Continued} \begin{algorithmic}[1] 
\algrestore{tryc-break} \State \Comment{Ensure that $\vutl_i$ is less than \vltl of versions in $\nvl$} \ForAll {$(ver \in \nvl)$} \State $x$ = \tobj of $ver$; \State $\tutl_i = min(\tutl_i, x[ver].\vltl - 1)$; \EndFor } \State \Comment{Store the current value of the global counter as commit time and increment it} \State $\ct = \gtcnt.get\&Inc()$; \algstore{myalgtc} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \algrestore{myalgtc} \ForAll {$x \in wset\xspace_i$} \State replace the old value in $x.\vl$ with $newValue$; \EndFor \State $\gstat_i$ = \texttt{commit}; \State unlock all variables; \State return $\mathcal{C}$; \end{algorithmic} \end{algorithm} \cmnt { \begin{algorithm} \label{alg:lowPri} \caption{$\lowp(T_k, T_i)$: Verifies if $T_k$ has lower priority than $T_i$ and is not already committed} \begin{algorithmic}[1] \If {$(\gits_i < \gits_k) \land (\gstat_k == \texttt{live})$} \State return $T$; \Else \State return $F$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:\lowp} \caption{$\lowp(T_k, T_i)$: Aborts lower priority transaction among $T_k$ and $T_i$} \begin{algorithmic}[1] \If {$(\gits_i < \gits_k)$} \State $\abl = \abl \cup T_k$; \Comment{Store lower priority $T_k$ in \abl} \Else \State return abort(i); \Comment{Abort $T_i$} \EndIf \end{algorithmic} \end{algorithm} } \begin{algorithm}[H] \label{alg:abort} \caption{$abort(i)$: Invoked by various STM methods to abort transaction $T_i$. It returns $\mathcal{A}$} \begin{algorithmic}[1] \State $\gval_i = F$; $\gstat_i$ = \texttt{abort}; \State unlock all variables locked by $T_i$; \State return $\mathcal{A}$; \end{algorithmic} \end{algorithm} \section{Pcode of \kstm} \label{sec:pcode-kstm} \begin{algorithm}[H] \caption{STM $\init()$: Invoked at the start of the STM system. 
Initializes all the \tobj{s} used by the STM System} \label{alg:init-KSTM} \begin{algorithmic}[1] \State $\gtcnt$ = 1; \ForAll {$x$ in $\mathcal{T}$} \Comment{All the \tobj{s} used by the STM System} \State add $\langle 0, 0, nil \rangle$ to $x.\vl$; \Comment { $T_0$ is initializing $x$} \label{lin:t0-init-KSTM} \EndFor; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-begin} \caption{STM $\begt(its)$: Invoked by a thread to start a new transaction $T_i$. The thread can pass a parameter $its$, which is the initial timestamp from when this transaction was invoked for the first time. If this is the first invocation, then $its$ is $nil$. It returns the tuple $\langle id, \gcts \rangle$} \begin{algorithmic}[1] \State $i$ = unique-id; \Comment{A unique id to identify this transaction. It could be the same as \gcts} \State \Comment{Initialize transaction-specific local \& global variables} \If {($its == nil$)} \State \Comment{$\gtcnt.get\&Inc()$ returns the current value of \gtcnt and atomically increments it} \State $\gits_i = \gcts_i = \gtcnt.get\&Inc()$; \Else \State $\gits_i = its$; \State $\gcts_i = \gtcnt.get\&Inc()$; \EndIf \State $rset\xspace_i = wset\xspace_i = null$; \State $\gstat_i$ = \texttt{live}; \State $\gval_i = T$; \State return $\langle i, \gcts_i\rangle$ \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-read} \caption{STM $read(i, x)$: Invoked by a transaction $T_i$ to read \tobj{} $x$.
It returns either the value of $x$ or $\mathcal{A}$} \begin{algorithmic}[1] \If {($x \in rset\xspace_i$)} \Comment{Check if the \tobj{} $x$ is in $rset\xspace_i$} \State return $rset\xspace_i[x].val$; \ElsIf {($x \in wset\xspace_i$)} \Comment{Check if the \tobj{} $x$ is in $wset\xspace_i$} \State return $wset\xspace_i[x].val$; \Else \Comment{\tobj{} $x$ is not in $rset\xspace_i$ and $wset\xspace_i$} \State lock $x$; lock $\glock_i$; \If {$(\gval_i == F)$} return abort(i); \label{lin:rabort-KSTM} \EndIf \State \Comment{ \findls: From $x.\vl$, returns the largest \ts value less than $\gcts_i$. If no such version exists, it returns $nil$ } \State $curVer = \findls(\gcts_i,x)$; \If {$(curVer == nil)$} return abort(i); \Comment{Proceed only if $curVer$ is not nil} \EndIf \cmnt { \State /* \findsl: From $x.\vl$, returns the smallest \ts value greater than $\gwts_i$. If no such version exists, it returns $nil$ */ \State $nextVer = \findsl(\gwts_i,x)$; \If {$(nextVer \neq nil)$} \State \Comment{Ensure that $\tutl_i$ remains smaller than $nextVer$'s \vltl} \State $\tutl_i = min(\tutl_i, x[nextVer].vltl-1)$; \EndIf \State \Comment{$\tltl_i$ should be greater than $x[curVer].\vltl$} \State $\tltl_i = max(\tltl_i, x[curVer].\vltl + 1)$; \If {($\tltl_i > \tutl_i$)} \Comment{If the limits have crossed each other, then $T_i$ is aborted} \State return abort(i); \EndIf } \State $val = x[curVer].v$; add $\langle x, val \rangle$ to $rset\xspace_i$; \State add $T_i$ to $x[curVer].rl$; \State unlock $\glock_i$; unlock $x$; \State return $val$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-write} \caption{STM $write_i(x,val)$: A Transaction $T_i$ writes into local memory} \begin{algorithmic}[1] \State Append the $d\_tuple \langle x,val \rangle$ to $wset\xspace_i$. 
\State return $ok$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-tryc} \caption{STM $\tryc()$: Returns $ok$ on commit else return Abort} \begin{algorithmic}[1] \State \Comment{The following check is an optimization which needs to be performed again later} \State lock $\glock_i$; \If {$(\gval_i == F)$} \State return abort(i); \EndIf \State unlock $\glock_i$; \State $\lrl = \allrl = nil$; \Comment{Initialize larger read list (\lrl), all read list (\allrl) to nil} \ForAll {$x \in wset\xspace_i$} \State lock $x$ in pre-defined order; \State \Comment{ \findls: returns the version with the largest \ts value less than $\gcts_i$. If no such version exists, it returns $nil$. } \State $\prevv = \findls(\gcts_i, x)$; \Comment{\prevv: largest version smaller than $\gcts_i$} \If {$(\prevv == nil)$} \Comment{There exists no version with \ts value less than $\gcts_i$} \State lock $\glock_i$; return abort(i); \EndIf \State \Comment{\textbf{\getl}: obtain the list of reading transactions of $x[\prevv].rl$ whose $\gcts$ is greater than $\gcts_i$} \State $\lrl = \lrl \cup \getl(\gcts_i, x[\prevv].rl)$; \EndFor \Comment{$x \in wset\xspace_i$} \State $\relll = \lrl \cup T_i$; \Comment{Initialize relevant Lock List (\relll)} \ForAll {($T_k \in \relll$)} \State lock $\glock_k$ in pre-defined order; \Comment{Note: Since $T_i$ is also in $\relll$, $\glock_i$ is also locked} \EndFor \State \Comment{Verify if $\gval_i$ is false} \algstore{myalgtc} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \algrestore{myalgtc} \If {$(\gval_i == F)$} \State return abort(i); \EndIf \State $\abl = nil$ \Comment{Initialize abort read list (\abl)} \State \Comment{Among the transactions in $T_k$ in $\lrl$, either $T_k$ or $T_i$ has to be aborted} \ForAll {$(T_k \in \lrl)$} \If {$(\isab(T_k))$} \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted} \State continue; \EndIf \If {$(\gcts_i < \gcts_k) \land (\gstat_k == 
\texttt{live})$} \State \Comment{Transaction $T_k$ has lower priority and is not yet committed. So it needs to be aborted} \State $\abl = \abl \cup T_k$; \Comment{Store $T_k$ in \abl} \cmnt{ \algstore{sweta} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \algrestore{sweta} } \Else \Comment{Transaction $T_i$ has to be aborted} \State return abort(i); \EndIf \EndFor \cmnt { \algstore{tryc-break} \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:ap-tryc-cont} \caption{STM $\tryc()$: Continued} \begin{algorithmic}[1] \algrestore{tryc-break} \State \Comment{Ensure that $\vutl_i$ is less than \vltl of versions in $\nvl$} \ForAll {$(ver \in \nvl)$} \State $x$ = \tobj of $ver$; \State $\tutl_i = min(\tutl_i, x[ver].\vltl - 1)$; \EndFor } \State \Comment{Store the current value of the global counter as commit time and increment it} \State $\ct = \gtcnt.get\&Inc()$; \cmnt{ \ForAll {$(T_k \in \srl)$} \Comment{Iterate through $\srl$ to see if $T_k$ or $T_i$ has to aborted} \If {$(\isab(T_k))$} \State \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted} \State continue; \EndIf \If {$(\tltl_k \geq \tutl_i)$} \label{lin:tk-check} \Comment{Ensure that the limits do not cross for both $T_i$ \& $T_k$} \If {$(\gstat_k == live)$} \Comment{Check if $T_k$ is live} \If {$(\gits_i < \gits_k)$} \State \Comment{Transaction $T_k$ has lower priority and is not yet committed. So it needs to be aborted} \State $\abl = \abl \cup T_k$; \Comment{Store $T_k$ in \abl} \Else \Comment{Transaction $T_i$ has to be aborted} \State return abort(i); \EndIf \Comment{$(\gits_i < \gits_k)$} \Else \Comment{($T_k$ is committed. 
Hence, $T_i$ has to be aborted)}
\State return abort(i);
\EndIf \Comment{$(\gstat_k == live)$}
\EndIf \Comment{$(\tltl_k \geq \tutl_i)$}
\EndFor {$(T_k \in \srl)$}
\State \Comment{At this point $T_i$ can't abort.}
\State $\tltl_i = \tutl_i$; \label{lin:ti-updt}
\State \Comment{Since $T_i$ can't abort, we can update $T_k$'s \tutl}
\ForAll {$(T_k \in \srl)$}
\If {$(\isab(T_k))$}
\State \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted}
\State continue;
\EndIf
\State /* The following line ensures that $\tltl_k \leq \tutl_k < \tltl_i$. Note that this does not cause the limits of $T_k$ to cross each other because of the check in \lineref{tk-check}.*/
\State $\tutl_k = min(\tutl_k, \tltl_i - 1)$;
\EndFor }
\ForAll {$T_k \in \abl$} \Comment{Abort all the transactions in \abl}
\State $\gval_k = F$;
\EndFor
\cmnt { \algstore{tryc-break2} \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:tryc-cont2} \caption{STM $\tryc()$: Continued Again} \begin{algorithmic}[1] \algrestore{tryc-break2} }
\State \Comment{Having completed all the checks, $T_i$ can be committed}
\ForAll {$(x \in wset\xspace_i)$}
\State $newTuple = \langle \gcts_i, wset\xspace_i[x].val, nil \rangle$; \Comment { Create new v\_tuple: \gcts, val, \rl for $x$}
\If {($|x.vl| > k$)}
\State replace the oldest tuple in $x.\vl$ with $newTuple$; \Comment{$x.\vl$ is ordered by timestamp}
\Else
\State add $newTuple$ to $x.vl$ in sorted order;
\EndIf
\EndFor \Comment{$x \in wset\xspace_i$}
\State $\gstat_i$ = \texttt{commit};
\State unlock all variables;
\State return $\mathcal{C}$;
\end{algorithmic} \end{algorithm}
\begin{algorithm}[H] \label{alg:isab} \caption{$\isab(T_k)$: Verifies if $T_k$ is already aborted or its \gval flag is set to false, implying that $T_k$ will be aborted soon}
\begin{algorithmic}[1]
\If {$(\gval_k == F) \lor (\gstat_k == \texttt{abort}) \lor (T_k \in \abl)$}
\State return $T$;
\Else
\State return $F$;
\EndIf
\end{algorithmic}
\end{algorithm}
\cmnt {
\begin{algorithm} \label{alg:ap-lowPri} \caption{$\lowp(T_k, T_i)$: Verifies if $T_k$ has lower priority than $T_i$ and is not already committed}
\begin{algorithmic}[1]
\If {$(\gits_i < \gits_k) \land (\gstat_k == \texttt{live})$}
\State return $T$;
\Else
\State return $F$;
\EndIf
\end{algorithmic} \end{algorithm}
\begin{algorithm}[H] \label{alg:\lowp} \caption{$\lowp(T_k, T_i)$: Aborts the lower priority transaction among $T_k$ and $T_i$}
\begin{algorithmic}[1]
\If {$(\gits_i < \gits_k)$}
\State $\abl = \abl \cup T_k$; \Comment{Store lower priority $T_k$ in \abl}
\Else
\State return abort(i); \Comment{Abort $T_i$}
\EndIf
\end{algorithmic} \end{algorithm}
}
\begin{algorithm}[H] \label{alg:ap-abort} \caption{$abort(i)$: Invoked by various STM methods to abort transaction $T_i$. It returns $\mathcal{A}$}
\begin{algorithmic}[1]
\State $\gval_i = F$; $\gstat_i$ = \texttt{abort};
\State unlock all variables locked by $T_i$;
\State return $\mathcal{A}$;
\end{algorithmic} \end{algorithm}

\section{Some Preliminary Results}
\label{sec:ap-results}
The graphs below were produced using a linked-list application to compare the performance of KSTM with different values of $k$. In the application chosen below, 90\% of the operations were lookups, and the remaining operations were inserts and deletes in a 9:1 ratio. A varying number of threads was generated, and each thread in turn generated 100 transactions.
\pgfplotsset{scaled x ticks=false}
\begin{center}
\begin{tikzpicture}
\begin{axis}[ xlabel = Number of Transactions, ylabel=Operations/sec, enlargelimits=0.05, xtick = data, xticklabels={5000,7500,10000}, width=.47\textwidth ]
\addplot coordinates {(5000,4.335) (7500,1.9351425) (10000, 1.06684) };
\addplot coordinates { (5000, 5.47465) (7500, 2.31684) (10000, 1.249) };
\addplot coordinates { (5000, 4.26628) (7500,2.15325) (10000, 1.31769) };
\legend{k=1, k=10, k=20}
\end{axis}
\end{tikzpicture}
\end{center}
As per the results obtained, multiversion STM performs better than single-version STM. This is because the multiple versions used in KSTM decrease the number of aborts per transaction, thereby effectively increasing the operations/sec performed.
\begin{center}
\begin{tikzpicture}
\begin{axis}[ xlabel = Number of Transactions, ylabel= Commit Time (microseconds), enlargelimits=0.05, xtick = data, xticklabels={5000,7500,10000}, width=.47\textwidth ]
\addplot coordinates {(5000,230679) (7500, 516758) (10000, 937346)};
\addplot coordinates { (5000,182660) (7500, 431622) (10000, 800638)};
\addplot coordinates {(5000,234396) (7500,464414) (10000,758903)};
\legend{k=1, k=10, k=20}
\end{axis}
\end{tikzpicture}
\end{center}
The commit time (time taken per transaction to commit) observed under KSTM (k = 10 here) is the least, since it is inversely proportional to the operations/sec. As the number of transactions increases, they need more versions to read from to attain higher concurrency, leading to lower abort counts.

In the application chosen below, 50\% of the operations were lookups, and the remaining operations were inserts and deletes into the linked list in a 9:1 ratio. This kind of setup has more read-write conflicts between the transactions involved compared to the previous setup.
\begin{center}
\begin{tikzpicture}
\begin{axis}[ xlabel = Number of Transactions, ylabel=Operations/sec, enlargelimits=0.05, xtick = data, xticklabels={5000,7500,10000}, width=.47\textwidth ]
\addplot coordinates {(5000,1.97486) (7500, 0.737251) (10000, 0.498075)};
\addplot coordinates { (5000,1.84234) (7500, 0.838329) (10000, 0.492734) };
\addplot coordinates {(5000,2.07114) (7500,0.944222) (10000,0.488506)};
\legend{k=1, k=10, k=20}
\end{axis}
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}
\begin{axis}[ xlabel = Number of Transactions, ylabel= Commit Time (microseconds), enlargelimits=0.05, xtick = data, xticklabels={5000,7500,10000}, width=.47\textwidth ]
\addplot coordinates {(5000,506365) (7500, 1.36E+06) (10000, 2.01E+06)};
\addplot coordinates { (5000,542787) (7500, 1.19E+06) (10000, 2.03E+06)};
\addplot coordinates {(5000,482826) (7500,1.06E+06) (10000,2.05E+06)};
\legend{k=1, k=10, k=20}
\end{axis}
\end{tikzpicture}
\end{center}
As per the graph, k = 20 gives the best operations/sec and the least commit time. Hence, having multiple versions (KSTM) performs better than single-version STM in this setup too.

\subsection{Modifying \pkto to Obtain \sfkv: Trading Correctness for \emph{Starvation-Freedom}}
\label{subsec:sfmvto}
Our goal is to revise the \pkto algorithm to ensure that \emph{\stfdm} is satisfied. Specifically, we want the transaction with the lowest \its to eventually commit. Once this happens, the next non-committed transaction with the lowest \its will commit. Thus, by induction, we can see that every transaction will eventually commit.

\noindent \textbf{Key Insights For Eliminating Starvation in \pkto:} To identify the necessary revision, we first focus on the effect of this algorithm on two transactions, say $T_{50}$ and $T_{60}$, with their \cts values being 50 and 60 respectively.
Furthermore, for the sake of discussion, assume that these transactions only read and write \tobj $x$. Also, assume that the latest version of $x$ has $ts$ $40$. Each transaction first reads $x$ and then writes $x$ (as part of the $\tryc{}$ operation). We use $r_{50}$ and $r_{60}$ to denote their read operations, while $w_{50}$ and $w_{60}$ denote their $\tryc$ \op{s}. Here, a read operation will not fail as there is a previous version present. Now, there are six possible permutations of these statements. We identify these permutations and the action that should be taken for each permutation in Table \ref{tbl:sfillus}. In all these permutations, the read \op{s} of a transaction come before the write \op{s}, as the writes to the shared memory occur only in the $\tryc$ \op (due to optimistic execution), which is the final \op of a transaction.

\begin{table}[ht]
\centering
\begin{tabular}{|c|l|l|}
\hline
\multicolumn{1}{|c|}{S. No} & \multicolumn{1}{c|}{Sequence} & \multicolumn{1}{c|}{Action} \\ \hline
1. & $r_{50}, w_{50}, r_{60}, w_{60}$ & $T_{60}$ reads the version written by $T_{50}$. No conflict. \\ \hline
2. & $r_{50}, r_{60}, w_{50}, w_{60}$ & Conflict detected at $w_{50}$. Either abort $T_{50}$ or $T_{60}$. \\ \hline
3. & $r_{50}, r_{60}, w_{60}, w_{50}$ & Conflict detected at $w_{50}$. Hence, abort $T_{50}$. \\ \hline
4. & $r_{60}, r_{50}, w_{60}, w_{50}$ & Conflict detected at $w_{50}$. Hence, abort $T_{50}$. \\ \hline
5. & $r_{60}, r_{50}, w_{50}, w_{60}$ & Conflict detected at $w_{50}$. Either abort $T_{50}$ or $T_{60}$. \\ \hline
6. & $r_{60}, w_{60}, r_{50}, w_{50}$ & Conflict detected at $w_{50}$. Hence, abort $T_{50}$.\\ \hline
\end{tabular}
\caption{Permutations of operations}
\label{tbl:sfillus}
\end{table}
From this table, it can be seen that when a conflict is detected, in some cases algorithm \pkto \textit{must} abort $T_{50}$. In case both transactions are live, \pkto has the option of aborting either transaction depending on their \its. If $T_{60}$ has the lower \its, then \pkto is never required to abort $T_{60}$.
In other words, it is possible to ensure that the transaction with the lowest \its and the highest \cts is never aborted. Although in this example we considered only one \tobj, this logic can be extended to cases having multiple \op{s} and \tobj{s}.

Next, consider \stref{notfound} of the \pkto algorithm. Suppose a transaction $T_i$ wants to read a \tobj but does not find a version with a timestamp smaller than $i$. In this case, $T_i$ has to abort. But if $T_i$ has the highest \cts, then it will certainly find a version to read from. This is because the timestamp of a version corresponds to the timestamp of the transaction that created it. If $T_i$ has the highest \cts value, then all versions of all the \tobj{s} have a timestamp smaller than the \cts of $T_i$. This reinforces the above observation that a transaction with the lowest \its and the highest \cts is not aborted.

To summarize the discussion, algorithm $\pkto$ has an in-built mechanism to protect transactions with the lowest \its and the highest \cts value. However, this is different from what we need. Specifically, we want to protect a transaction $T_i$ with the lowest $\its$ value. One way to ensure this is the following: if the transaction $T_i$ with the lowest \its keeps getting aborted, it will eventually attain the highest \cts. Once this happens, \pkto ensures that $T_i$ cannot be aborted further. In this way, we can ensure the liveness of all transactions.

\noindent \textbf{The working of the \emph{\stf} algorithm:} To realize this idea and achieve \emph{\stfdm}, we consider another variation of \mvto, \emph{Starvation-Free MVTO} or \emph{\sfmv}.
We specifically consider \sfmv with $K$ versions, denoted as \emph{\sfkv}. A transaction $T_i$, instead of using the current time as $\tcts{i}$, uses a potentially higher timestamp, the \emph{Working Timestamp} (\wts) or $\twts{i}$. Specifically, it adds $C * (\tcts{i} - \tits{i})$ to $\tcts{i}$, i.e.,
\begin{equation}
\label{eq:wtsf}
\twts{i} = \tcts{i} + C * (\tcts{i} - \tits{i});
\end{equation}
where $C$ is any constant greater than 0. In other words, when the transaction $T_i$ is issued for the first time, $\twts{i}$ is the same as $\tcts{i}(= \tits{i})$. However, as the transaction keeps getting aborted, the drift between $\tcts{i}$ and $\twts{i}$ increases. The value of $\twts{i}$ increases with each retry.

Furthermore, in the \sfkv algorithm, \cts is replaced with \wts in the $\tread$, $\twrite$ and $\tryc$ \op{s} of \pkto. In \sfkv, a transaction $T_i$ uses $\twts{i}$ to read a version in $\tread$. Similarly, $T_i$ uses $\twts{i}$ in $\tryc$ to find the appropriate previous version (in \stref{notfound}) and to verify if $T_i$ has to be aborted (in \stref{verify}). Along the same lines, once $T_i$ decides to commit and create new versions of $x$, the timestamp of each new version will be the same as $\twts{i}$ (in \stref{commit}). Thus the timestamp of every version in $\vlist$ will be the \wts of the transaction that created it.

\noindent Now, we have the following property about the \sfkv algorithm.
\begin{property}
\label{prop:sfmv-live}
\sfkv algorithm ensures \stfdm.
\end{property}
While the proof of this property is somewhat involved, the key idea is that the transaction with the lowest \its value, say $T_{low}$, will eventually have a higher \wts value than all the other transactions in the system. Moreover, after a certain duration, any \textit{new} transaction arriving in the system (i.e., one whose $\its$ value is sufficiently higher than that of $T_{low}$) will have a lower $\wts$ value than $T_{low}$. This will ensure that $T_{low}$ will not be aborted.
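As an illustration of \eqnref{wtsf}, the following minimal Python sketch (our own; the function and the numeric values are purely illustrative and not part of any \sfkv implementation) shows how the \wts of a repeatedly aborted transaction overtakes that of a younger one:

```python
# Sketch of the working-timestamp rule of Eq. (wtsf):
#   wts_i = cts_i + C * (cts_i - its_i), with constant C > 0.
# The names wts/cts/its mirror the paper; the function itself is ours.

def wts(cts, its, C=0.1):
    """Working timestamp, inflated by the drift between cts and its."""
    return cts + C * (cts - its)

# A transaction first issued at time 10 and currently retried at time 100
# overtakes a fresh transaction issued at time 105:
old = wts(cts=100, its=10)   # drift 90 -> wts 109.0
new = wts(cts=105, its=105)  # drift 0  -> wts 105.0
```

Here the long-retried transaction, despite its smaller \cts, obtains the larger \wts; this growing drift is exactly what lets a starving transaction eventually win.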
In fact, this property can be shown to hold for \sfmv as well.

\noindent \textbf{The drawback of \sfkv:} Although \sfkv satisfies starvation-freedom, it unfortunately does not satisfy strict-serializability. Specifically, it violates the real-time requirement. \pkto uses \cts for its working while \sfkv uses \wts. It can be seen that \cts is close to the \rt execution of transactions, whereas the \wts of a transaction $T_i$ is artificially inflated based on its \its and might be much larger than its \cts.
\begin{figure}
\centerline{ \scalebox{0.6}{\input{figs/sfmv-correct.pdf_t}}}
\captionsetup{justification=centering}
\caption{Correctness of \sfkv Algorithm}
\label{fig:sfmv-correct}
\end{figure}
We illustrate this with an example. Consider the history $H1$ as shown in \figref{sfmv-correct}: $r_1(x,0) r_2(y,0) w_1(x, 10)\\ C_1 w_2(x, 20) C_2 r_3(x, 10) r_3(z, 25) C_3$ with \cts as 50, 60 and 80 and \wts as 50, 100 and 80 for $T_1, T_2, T_3$ respectively. Here $T_1, T_2$ are ordered before $T_3$ in \rt with $T_1 \prec_{H1}^{RT} T_3$ and $T_2 \prec_{H1}^{RT} T_3$, although $T_2$ has a higher \wts than $T_3$. Here, as per the \sfkv algorithm, $T_3$ reads $x$ from $T_1$ since $T_1$ has the largest \wts (50) smaller than $T_3$'s \wts (80). It can be verified that it is possible for \sfkv to generate such a history. But this history is not \stsble. The only possible serial order equivalent to $H1$ and \legal is $T_1 T_3 T_2$. But this violates \rt order, as $T_3$ is serialized before $T_2$ while in $H1$, $T_2$ completes before $T_3$ has begun. Since $H1$ is not \stsble, it is not \lopq either. Naturally, this drawback extends to \sfmv as well.

\subsection{Design of \ksftm: Regaining Correctness while Preserving \emph{Starvation-Freedom}}
\label{subsec:ksftm}
In this section, we discuss how the principles of \pkto and \sfkv can be combined to obtain \ksftm, which provides both correctness (strict-serializability and \lopq) and \emph{\stfdm}.
To achieve this, we first understand why the initial algorithm, \pkto, satisfies strict-serializability. This is because \cts was used to create the ordering among committed transactions, and \cts is closely associated with real time. In contrast, \sfkv uses \wts, which may not correspond to real time, as \wts may be significantly larger than \cts, as shown by $H1$ in \figref{sfmv-correct}.

One straightforward way to modify \sfkv is to delay a committing transaction, say $T_i$ with \wts value $\twts{i}$, until the real time (\gtcnt) catches up to $\twts{i}$. This would ensure that the value of \wts also matches the real time, thereby guaranteeing \stsbty. However, this is unacceptable, as in practice it would require transaction $T_i$ to lock all the variables it plans to update and wait. This will adversely affect the performance of the STM system. We can allow the transaction $T_i$ to commit before its $\twts{i}$ has caught up with the actual time if doing so does not violate the \rt ordering. Thus, to ensure that the notion of \rt order is respected by transactions in the course of their execution in \sfkv, we add extra time constraints. We use the idea of timestamp ranges. This notion of timestamp ranges was first used by Riegel et al. \cite{Riegel+:LSA:DISC:2006} in the context of multi-version STMs. Several other researchers have used this idea since then, such as Guerraoui et al. \cite{Guer+:disc:2008}, Crain et al. \cite{Crain+:RI_VWC:ICA3PP:2011}, and Aydonat \& Abdelrahman \cite{AydAbd:RCC:TPDS:2012}.

Thus, in addition to \its, \cts and \wts, each transaction $T_i$ maintains a timestamp range: \emph{Transaction Lower Timestamp Limit} or $\ttltl{i}$, and \emph{Transaction Upper Timestamp Limit} or $\ttutl{i}$. When a transaction $T_i$ begins, $\ttltl{i}$ is assigned $\tcts{i}$ and $\ttutl{i}$ is assigned the largest possible value, which we denote as infinity. When $T_i$ executes a \mth $m$ in which it reads a version of a \tobj $x$ or creates a new version of $x$ in $\tryc$, $\ttltl{i}$ is incremented while $\ttutl{i}$ gets decremented \footnote{Technically $\infty$, which is assigned to $\ttutl{i}$, cannot be decremented. But here, as mentioned earlier, we use $\infty$ to denote the largest possible value that can be represented in a system.}. We need to serialize all the transactions based on their \wts while maintaining their \rt order. On executing $m$, $T_i$ is ordered w.r.t. other transactions that have created a version of $x$ based on increasing order of \wts. For all transactions $T_j$ which have also created a version of $x$ and whose $\twts{j}$ is less than $\twts{i}$, $\ttltl{i}$ is incremented such that $\ttutl{j}$ is less than $\ttltl{i}$. Note that all such $T_j$ are serialized before $T_i$. Similarly, for any transaction $T_k$ which has created a version of $x$ and whose $\twts{k}$ is greater than $\twts{i}$, $\ttutl{i}$ is decremented such that it becomes less than $\ttltl{k}$. Again, note that all such $T_k$ are serialized after $T_i$.
Note that in the above discussion, $T_i$ need not have created a version of $x$. It could also have read the version of $x$ created by $T_j$. After the increments of $\ttltl{i}$ and the decrements of $\ttutl{i}$, if $\ttltl{i}$ turns out to be greater than $\ttutl{i}$, then $T_i$ is aborted. Intuitively, this implies that $T_i$'s \wts and \rt orders are out of \emph{sync} and cannot be reconciled.

Finally, when a transaction $T_i$ commits: (1) $T_i$ records its commit time (or $\ct_i$) by getting the current value of \gtcnt and incrementing it by $\incv$, which is any value greater than or equal to 1. Then $\ttutl{i}$ is set to $\ct_i$ if it is not already less than it. Now suppose $T_i$ occurs in \rt before some other transaction $T_k$ but does not have any conflict with it. This step ensures that $\ttutl{i}$ remains less than $\ttltl{k}$ (which is initialized with $\tcts{k}$); (2) Ensure that $\ttltl{i}$ is still less than $\ttutl{i}$. Otherwise, $T_i$ is aborted.

We illustrate this technique with the history $H1$ shown in \figref{sfmv-correct}. When $T_1$ starts, $\tcts{1} = 50, \ttltl{1} = 50, \ttutl{1}=\infty$. Now when $T_1$ commits, suppose $\gtcnt$ is 70. Hence, $\ttutl{1}$ reduces to 70. Next, when $T_2$ commits, suppose $\ttutl{2}$ reduces to 75 (the current value of $\gtcnt$). As $T_1, T_2$ have accessed a common \tobj $x$ in a conflicting manner, $\ttltl{2}$ is incremented to a value greater than $\ttutl{1}$, say 71. Next, when $T_3$ begins, $\ttltl{3}$ is assigned $\tcts{3}$, which is 80, and $\ttutl{3}$ is initialized to $\infty$. When $T_3$ reads 10 from $T_1$, which is $r_3(x, 10)$, $\ttutl{3}$ is reduced to a value less than $\ttltl{2} (= 71)$, say 70. But $\ttltl{3}$ is already at 80. Hence, the limits of $T_3$ have crossed, causing $T_3$ to abort. The resulting history, consisting of only the committed transactions $T_1 T_2$, is \stsble.
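The above walkthrough can be replayed with a small sketch. The following Python fragment is purely illustrative (the class and its fields are ours, not part of any published \ksftm code); it only mimics the \ltl/\utl bookkeeping described in the example:

```python
# Sketch of the ltl/utl interval bookkeeping described above.
# A transaction stays serializable only while ltl <= utl.

INF = float('inf')

class Txn:
    def __init__(self, cts):
        self.ltl = cts   # lower timestamp limit, initialized to cts
        self.utl = INF   # upper timestamp limit, initialized to "infinity"

    def crossed(self):
        return self.ltl > self.utl

# Replaying the walkthrough on history H1:
t1 = Txn(cts=50)
t1.utl = min(t1.utl, 70)          # T1 commits when gtCntr is 70
t2 = Txn(cts=60)
t2.utl = min(t2.utl, 75)          # T2 commits when gtCntr is 75
t2.ltl = max(t2.ltl, t1.utl + 1)  # conflict on x: T2 after T1 -> ltl = 71
t3 = Txn(cts=80)
t3.utl = min(t3.utl, t2.ltl - 1)  # r3(x, 10) puts T3 before T2 -> utl = 70
# Now ltl(T3) = 80 > utl(T3) = 70: the limits have crossed, so T3 aborts.
```

Only $T_3$'s interval becomes empty, matching the conclusion that $T_1 T_2$ is the committed, \stsble history.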
Based on this idea, we next develop a variation of \sfkv, the \emph{K-version Starvation-Free STM System} or \emph{\ksftm}. To explain this algorithm, we first describe the structure of the version of a \tobj used. It is a slight variation of the \tobj used in the \pkto algorithm. It consists of: (1) a timestamp, $ts$, which is the \wts of the transaction that created this version (and not the \cts as in \pkto); (2) the value of the version; (3) a list, called \rlist{}, consisting of the ids (these could be \cts values as well) of the transactions that read from this version; (4) the version \rt timestamp or \vt, which is the \utl of the transaction that created this version. Thus a version carries the \wts and \utl of the transaction that created it.

Now, we describe the main idea behind the $\begt$, $\tread$, $\twrite$ and $\tryc{}$ \op{s} of a transaction $T_i$, which is an extension of \pkto. Note that as per our notation, $i$ represents the \cts of $T_i$.

\noindent \textbf{$\begt(t)$:} A unique timestamp $ts$, generated by atomically incrementing the global counter $\gtcnt$, is allocated to $T_i$ as its \cts ($i$ by our assumption). If the input $t$ is null, then $\tcts{i} = \tits{i} = ts$, as this is the first \inc of this transaction. Otherwise, the non-null value of $t$ is assigned to $\tits{i}$. Then, \wts is computed by \eqnref{wtsf}. Finally, \ltl and \utl are initialized: $\ttltl{i} = \tcts{i}$, $\ttutl{i} = \infty$.

\noindent \textbf{$\tread(x)$:} Transaction $T_i$ reads from a version of $x$ with timestamp $j$ such that $j$ is the largest timestamp less than $\twts{i}$ (among the versions of $x$), i.e., there exists no version $k$ such that $j<k<\twts{i}$. If no such $j$ exists, then $T_i$ is aborted. Otherwise, after reading this version of $x$, $T_i$ is stored in $j$'s $rl$. Then we modify \ltl, \utl as follows:
\begin{enumerate}
\item The version $x[j]$ is created by a transaction with $\twts{j}$, which is less than $\twts{i}$.
Hence, $\ttltl{i} = max(\ttltl{i}, x[j].$\vt$ + 1)$.
\item Let $p$ be the timestamp of the smallest version larger than $i$. Then $\ttutl{i} = min(\ttutl{i}, x[p].\vt - 1)$.
\item After these steps, abort $T_i$ if \ltl and \utl have crossed, i.e., $\ttltl{i} > \ttutl{i}$.
\end{enumerate}

\noindent \textbf{$\twrite(x,v)$:} $T_i$ stores this write to $x$ locally in its $wset\xspace_i$.

\noindent \textbf{$\tryc:$} This \op{} consists of multiple steps:
\begin{enumerate}
\item Before $T_i$ can commit, we need to verify that any version it creates is updated consistently. $T_i$ creates a new version with timestamp $\twts{i}$. Hence, we must ensure that any transaction that read a previous version is unaffected by this new version. Additionally, creating this version would require an update of the \ltl and \utl of $T_i$ and of other transactions whose read-write sets overlap with that of $T_i$. Thus, $T_i$ first validates each \tobj{} $x$ in its $wset\xspace{}$ as follows: \label{step:kverify}
\begin{enumerate}
\item $T_i$ finds a version of $x$ with timestamp $j$ such that $j$ is the largest timestamp less than $\twts{i}$ (like in $\tread$). If there exists no version of $x$ with a timestamp less than $\twts{i}$, then $T_i$ is aborted. This is similar to \stref{notfound} of the $\tryc$ of the \pkto algorithm. \label{step:k-look}
\item Among all the transactions that have previously read from $j$, suppose there is a transaction $T_k$ such that $j<\twts{i}<\twts{k}$. Then (i) if $T_k$ has already committed, then $T_i$ is aborted; (ii) if $T_k$ is live and $\tits{k}$ is less than $\tits{i}$, then again $T_i$ is aborted; (iii) if $T_k$ is still live with $\tits{i}$ less than $\tits{k}$, then $T_k$ is aborted. This step is similar to \stref{verify} of the $\tryc$ of the \pkto algorithm. \label{step:k-verify}
\item Next, we must ensure that $T_i$'s \ltl and \utl are updated correctly w.r.t. other concurrently executing transactions.
To achieve this, we adjust \ltl, \utl as follows: (i) Let $j$ be the $ts$ of the largest version smaller than $\twts{i}$. Then $\ttltl{i} = max(\ttltl{i}, x[j].\vt + 1)$. Next, for each reading transaction $T_r$ in $x[j].\rlist$, we again set $\ttltl{i} = max(\ttltl{i}, \ttutl{r} + 1)$. (ii) Similarly, let $p$ be the $ts$ of the smallest version larger than $\twts{i}$. Then, $\ttutl{i} = min(\ttutl{i}, x[p].\vt - 1)$. (Note that we do not have to check the transactions in the \rlist of $x[p]$, as those transactions will have \ltl higher than $x[p].\vt$ due to $\tread$.) (iii) Finally, we get the commit time of this transaction from \gtcnt: $\ct_i = \gtcnt.add\&Get(\incv)$, where $\incv$ is any constant $\geq 1$. Then, $\ttutl{i} = min(\ttutl{i}, \ct_i)$. After performing these updates, abort $T_i$ if \ltl and \utl have crossed, i.e., $\ttltl{i} > \ttutl{i}$. \label{step:ktk-upd}
\end{enumerate}
\item After performing the tests of \stref{kverify} over each \tobj{} $x$ in $T_i$'s $wset\xspace$, if $T_i$ has not yet been aborted, we proceed as follows: for each $x$ in $wset\xspace_i$, create a \vtup $\langle \twts{i}, wset\xspace_i.x.v, null,\\ \ttutl{i} \rangle$. In this tuple, $\twts{i}$ is the timestamp of the new version; $wset\xspace_i.x.v$ is the value of $x$ in $T_i$'s $wset\xspace$; the \rlist of the $\vtup$ is $null$; $\vt$ is $\ttutl{i}$ (actually, it can be any value between $\ttltl{i}$ and $\ttutl{i}$). Update the $\vlist$ of each \tobj $x$ similar to \stref{updt} of the $\tryc$ of \pkto.
\item Transaction $T_i$ is then committed. \label{step:kcommit}
\end{enumerate}

\noindent \stref{ktk-upd}.(iii) of $\tryc$ ensures that the \rt order between transactions that are not in conflict is preserved. It can be seen that locks have to be used to ensure that all these \mth{s} execute in a \lble manner (i.e., atomically).

\input{ap-dsksftm}

We obtain the following properties of \ksftm. For simplicity, we assumed $C$ and $\incv$ to be 0.1 and 1 respectively in our analysis. But the proof and the analysis hold for any values greater than 0.

\begin{theorem}
\label{thm:ksftm-lo}
Any history generated by \ksftm is strict-serializable and \lopq.
\end{theorem}

\begin{theorem}
\label{thm:ksftm-live}
\ksftm algorithm ensures \stfdm.
\end{theorem}

\noindent As explained in the description of \propref{sfmv-live}, the proof of this property is somewhat involved. As expected, this proof can be extended to \mvsftm as well.

\noindent \textbf{Garbage Collection:} Having described the \emph{\stf} algorithm, we now describe how garbage collection can be performed on the unbounded variant, \mvsftm, to achieve \mvsftmgc. This is achieved by deleting every non-latest version (i.e., one for which a version with greater $ts$ exists) of each \tobj whose timestamp $ts$ is less than the \cts of the smallest live transaction. It must be noted that \mvsftm (\ksftm) works with \wts, which is greater than or equal to \cts for any transaction. Interestingly, the same garbage-collection principle can be applied to \pmvto to achieve \pmvtogc. To identify the transaction with the smallest \cts among live transactions, we maintain a set of all the live transactions, \livel. When a transaction $T_i$ begins, its \cts is added to this \livel. When $T_i$ terminates (either commits or aborts), $T_i$ is deleted from this \livel.
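As a hedged illustration of this garbage-collection rule (the data layout and function below are our own sketch, not the paper's implementation), the reclaimable versions can be computed as follows:

```python
# Sketch of the GC rule: reclaim every non-latest version whose timestamp
# is below the cts of the oldest live transaction.
# The list-of-(ts, value) layout and the function name are ours.

def gc(version_list, live_cts):
    """Keep the latest version unconditionally; keep an older version
    only if some live transaction may still read it (ts >= min live cts)."""
    if not version_list or not live_cts:
        return version_list
    min_live = min(live_cts)
    latest_ts = max(ts for ts, _ in version_list)
    return [(ts, v) for ts, v in version_list
            if ts == latest_ts or ts >= min_live]

versions = [(10, 'a'), (40, 'b'), (55, 'c'), (90, 'd')]
live = {50, 60}  # cts values in the live-list; the smallest is 50
# (10, 'a') and (40, 'b') are non-latest with ts < 50 -> reclaimable.
```

The latest version always survives, since a future transaction with a sufficiently large \wts may still read it.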
\section{Experimental Evaluation}
\label{apn:ap-exp}
\vspace{-.2cm}
In this section, we evaluate the performance of \ksftm and \sftm (a single-version starvation-free algorithm) on an Intel(R) Xeon(R) CPU E5-2690 v4 at 2.60GHz (56 cores). We have compared our algorithm, \ksftm, with the existing \sftm. In order to analyze the performance of \ksftm, we have considered multiple parameters and conclude with the optimal value of $K$ as 5 (described in \apnref{ap-opk}), the optimal value of $C$ as 0.1 (described in \apnref{ap-opc}), and the optimal read-write ratio as 50\% reads and 50\% writes (described in \apnref{ap-rwr}).

\subsection{Optimal value of $K$}
\label{apn:ap-opk}
We have considered 20 \tobj{s} and 64 threads; each thread executes 5000 transactions, and each transaction has 30 operations (50\% reads and 50\% writes). We have considered the value of $C$ as 0.1 while varying the value of $K$ from 5 to 30. \figref{ap-optkval} shows that the optimal value of $K$ is 5.
\vspace{5mm}
\begin{figure}[H]
\centering
\subfloat[Commit time per transaction by varying $K$ \label{fig:ap-optkval}] {\includegraphics[width=0.40\textwidth]{figs/optimalK.png}}
\subfloat[ Commit time per transaction by varying $C$ \label{fig:ap-optcval}] {\includegraphics[width=0.40\textwidth]{figs/optimalC.png}}
\caption[The average and standard deviation of critical parameters]{Optimal values of parameters for \ksftm, while the commit time per transaction for \sftm is 999.463ms}
\label{fig:ap-eval}
\end{figure}

\subsection{Optimal value of $C$}
\vspace{-3mm}
\label{apn:ap-opc}
For calculating the optimal value of $C$, we have considered $K$ as 5, a \tobj{} count of 20, and 64 threads; each thread executes 5000 transactions, and each transaction has 30 operations (50\% reads and 50\% writes). \figref{ap-optcval} shows the execution under \ksftm for values of $C$ of 0, 0.1, 0.2, 0.4, 0.6, 0.8 and 1. The experiment shows that $C = 0.1$ is the best amongst all, so the optimal value of $C$ is 0.1.
\subsection{Read-write ratio} \label{apn:ap-rwr} To obtain the best read-write ratio, we fix $K$ at 5 (the optimal value of \textit{K}) and $C$ at 0.1 (the optimal value of \textit{C}), and consider 20 \tobj{s} and 64 threads, with each thread executing 5000 transactions of 30 operations each, while varying the read percentage from 10 to 90. \vspace{5mm} \begin{figure}[H] \centering \hfill \subfloat[Commit time per transaction by varying read percentage\label{fig:ap-readper}] {\includegraphics[width=0.29\textwidth]{figs/readper.png}} \subfloat[Read abort count by varying read percentage\label{fig:ap-readabort}] {\includegraphics[width=0.31\textwidth]{figs/Readabort.png}} \subfloat[Write abort count by varying read percentage\label{fig:ap-writeabort}] {\includegraphics[width=0.32\textwidth]{figs/Writeabort.png}} \caption[Abort count] {Commit time per transaction, read and tryC abort count with varying read percentage} \label{fig:ap-rweval1} \end{figure} \figref{ap-readper} shows that \ksftm outperforms \sftm for read percentages ranging from 20 to 80; at a read percentage of 50, \ksftm achieves more than twice the speedup over \sftm. This implies that \ksftm favors not only read transactions but also write transactions. \sftm performs better than \ksftm at a read percentage of 90 because \ksftm spends time searching for the correct version to read from and inserting itself at the appropriate place in that version's read list; thus \ksftm does incur a version-maintenance overhead. \figref{ap-readabort} and \figref{ap-writeabort} illustrate the $read$ and $tryC$ abort counts for \ksftm and \sftm. \ksftm has a higher $read$ abort count than \sftm, while \sftm has a higher $tryC$ abort count than \ksftm. This shows that \ksftm aborts transactions early rather than letting them reach the committing stage before aborting. These early aborts are one of the reasons why \ksftm outperforms \sftm.
\subsection{Execution under varying number of transactions} \figref{ap-threadcount} illustrates the time taken per thread in milliseconds by the \ksftm and \sftm algorithms while varying the number of transactions from 1000 to 5000. Specifically, we consider 64 threads with a read percentage of 50 and the optimal values of $K$ (i.e., 5) and $C$ (i.e., 0.1). Our experiment shows that \ksftm provides more than a 2-fold speedup over \sftm with 5000 transactions. \begin{figure}[H] \centering \subfloat[ \ksftm vs \sftm \label{fig:ap-threadcount}] {\includegraphics[width=0.45\textwidth]{figs/speedup.png}} \caption[Speedup with varying number of transactions] {Speedup with varying number of transactions} \label{fig:ap-rweval} \end{figure} \section{System Model and Preliminaries} \label{sec:model} Following~\cite{tm-book,KuzSat:NI:TCS:2016}, we assume a system of $n$ processes/threads, $p_1,\ldots,p_n$, that access a collection of \emph{transactional objects} (or \emph{{\tobj}s}) via atomic \emph{transactions}. Each transaction has a unique identifier. Within a transaction, processes can perform \emph{transactional operations or \mth{s}}: $\begt{}$, which begins a transaction; \textit{\twrite}$(x,v)$, which updates a \tobj $x$ with value $v$ in its local memory; \textit{\tread}$(x)$, which tries to read $x$; \textit{\tryc}$()$, which tries to commit the transaction and returns $commit$ if it succeeds; and \textit{\trya}$()$, which aborts the transaction and returns $\mathcal{A}$. For the sake of presentation simplicity, we assume that the values taken as arguments by \textit{\twrite} operations are unique. Operations \textit{\tread} and \textit{\tryc}$()$ may return $\mathcal{A}$, in which case we say that the operations \emph{forcefully abort}. Otherwise, we say that the operations have executed \emph{successfully}. Each operation is equipped with a unique transaction identifier.
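The interface just described can be sketched as a minimal, single-threaded mock in Python. The class and method names below are our own illustration, not the paper's implementation; no concurrency control, validation, or multi-versioning is modeled.

```python
# Minimal single-threaded sketch of the transactional interface above.
# Hypothetical names; a real STM would validate reads/writes in try_commit
# and may return 'abort' (the A response) from t_read and try_commit.

class MockSTM:
    def __init__(self):
        self.shared = {}            # t-objects in shared memory (default value 0)
        self.local = {}             # tid -> local write buffer
        self.next_id = 0

    def stm_begin(self):
        self.next_id += 1           # unique transaction identifier
        self.local[self.next_id] = {}
        return self.next_id

    def t_read(self, tid, x):
        buf = self.local[tid]       # read own uncommitted write if present
        return buf[x] if x in buf else self.shared.get(x, 0)

    def t_write(self, tid, x, v):
        self.local[tid][x] = v      # updates go to local memory first

    def try_commit(self, tid):
        self.shared.update(self.local.pop(tid))
        return "commit"

    def try_abort(self, tid):
        self.local.pop(tid, None)   # discard local writes
        return "abort"

stm = MockSTM()
t = stm.stm_begin()
stm.t_write(t, "x", 5)
assert stm.t_read(t, "x") == 5              # sees its own local write
assert stm.try_commit(t) == "commit"
assert stm.shared["x"] == 5                 # transferred to shared memory
```

Note how local writes reach shared memory only at commit, matching the optimistic execution model assumed throughout.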
A transaction $T_i$ starts with its first operation and completes when any of its operations returns $\mathcal{A}$ or $\mathcal{C}$. We refer to any \op{} that returns $\mathcal{A}$ or $\mathcal{C}$ as a \emph{\termop}. Hence, the \op{s} $\tryc$$()$ and $\trya$$()$ are \termop{s}. A transaction does not invoke any further \op{s} after a \termop. For a transaction $T_k$, we denote all the \tobj{s} accessed by its read \op{s} as $rset\xspace_k$ and the \tobj{s} accessed by its write operations as $wset\xspace_k$. We denote all the \op{s} of a transaction $T_k$ as $\evts{T_k}$ or $evts_k$. \noindent \textbf{History:} A \emph{history} is a sequence of \emph{events}, i.e., a sequence of invocations and responses of transactional operations. The collection of events is denoted as $\evts{H}$. For simplicity, we only consider \emph{sequential} histories here: the invocation of each transactional operation is immediately followed by a matching response. Therefore, we treat each transactional operation as one atomic event, and let $<_H$ denote the total order on the transactional operations induced by $H$. With this assumption, the only relevant events of a transaction $T_k$ are of the types: $r_k(x,v)$, $r_k(x,\mathcal{A})$, $w_k(x, v)$, $\tryc_k(\mathcal{C})$ (or $c_k$ for short), $\tryc_k(\mathcal{A})$, $\trya_k(\mathcal{A})$ (or $a_k$ for short). We identify a history $H$ as the tuple $\langle \evts{H},<_H \rangle$. Let $H|T$ denote the history consisting of the events of $T$ in $H$, and $H|p_i$ denote the history consisting of the events of $p_i$ in $H$. We only consider \emph{well-formed} histories here, i.e., no transaction of a process begins before the previous transaction of that process has completed (either $commits$ or $aborts$). We also assume that every history has an initial \emph{committed} transaction $T_0$ that initializes all the \tobj{s} with value $0$. The set of transactions that appear in $H$ is denoted by $\txns{H}$.
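Under the atomic-event view above, a sequential history is simply a list of events, and the derived sets $rset_k$, $wset_k$ and $\txns{H}$ reduce to filters over it. The tuple encoding below is our own shorthand, not the paper's notation.

```python
# Events of a sequential history, encoded as tuples (our own shorthand):
#   ("r", k, x, v)  for r_k(x, v),   ("w", k, x, v)  for w_k(x, v),
#   ("c", k)        for c_k,         ("a", k)        for a_k.

def rset(H, k):
    # t-objects accessed by T_k's read operations
    return {e[2] for e in H if e[0] == "r" and e[1] == k}

def wset(H, k):
    # t-objects accessed by T_k's write operations
    return {e[2] for e in H if e[0] == "w" and e[1] == k}

def txns(H):
    # transactions appearing in H
    return {e[1] for e in H}

# A small example history: r1(x,0) r2(x,0) r1(y,0) w1(x,5) c1
H = [("r", 1, "x", 0), ("r", 2, "x", 0), ("r", 1, "y", 0), ("w", 1, "x", 5), ("c", 1)]
assert rset(H, 1) == {"x", "y"}
assert wset(H, 1) == {"x"}
assert txns(H) == {1, 2}
```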
The set of \emph{committed} (resp., \emph{aborted}) transactions in $H$ is denoted by $\comm{H}$ (resp., $\aborted{H}$). The set of \emph{incomplete} or \emph{live} transactions in $H$ is denoted by $\incomp{H} = \live{H} = (\txns{H}-\comm{H}-\aborted{H})$. For a history $H$, we construct the \emph{completion} of $H$, denoted $\overline{H}$, by inserting $\trya_k(\mathcal{A})$ immediately after the last event of every transaction $T_k\in \live{H}$. However, for the $\tryc_i$ of a transaction $T_i$ that has successfully released the lock on its first \tobj, the updates made by $T_i$ are already consistent; hence, $T_i$ immediately returns commit. \noindent \textbf{Transaction orders:} For two transactions $T_k,T_m \in \txns{H}$, we say that $T_k$ \emph{precedes} $T_m$ in the \emph{real-time order} of $H$, denoted $T_k\prec_H^{RT} T_m$, if $T_k$ is complete in $H$ and the last event of $T_k$ precedes the first event of $T_m$ in $H$. If neither $T_k \prec_H^{RT} T_m$ nor $T_m \prec_H^{RT} T_k$, then $T_k$ and $T_m$ \emph{overlap} in $H$. We say that a history is \emph{\tseq} if all its transactions are ordered by this real-time order. Note that, by our earlier assumption, all the transactions of a single process are ordered by real-time. \ignore{ We say that $T_k, T_m$ are in conflict, if (1) (tryc-tryc) $\tryc_k(C)<_H \tryc_m(C)$ and $Wset(T_k) \cap Wset(T_m) \neq\emptyset$; (2) (tryc-r) $\tryc_k(C)<_H r_m(x,v)$, $x \in Wset(T_k)$ and $v \neq A$; (3) (r-tryc) $r_k(x,v)<_H \tryc_m(C)$, $x\in Wset(T_m)$ and $v \neq A$. Thus, it can be seen that the conflict order is defined only on \op{s} that have successfully executed. We denote the corresponding \op{s} as conflicting.
} \noindent \textbf{Sub-history:} A \textit{sub-history} ($SH$) of a history ($H$), denoted by the tuple $\langle \evts{SH},$ $<_{SH}\rangle$, is defined as follows: (1) $<_{SH} \subseteq <_{H}$; (2) $\evts{SH} \subseteq \evts{H}$; (3) if an event of a transaction $T_k\in\txns{H}$ is in $SH$, then all the events of $T_k$ in $H$ are also in $SH$. For a history $H$, let $R$ be a subset of $\txns{H}$. Then $\shist{R}{H}$ denotes the \ssch{} of $H$ that is formed from the \op{s} in $R$. \noindent \textbf{Valid and legal history:} A successful read $r_k(x, v)$ (i.e., $v \neq \mathcal{A}$) in a history $H$ is said to be \emph{\valid} if there exists a transaction $T_j$ that wrote $v$ to $x$ and \emph{committed} before $r_k(x,v)$. Formally, $\langle r_k(x, v)$ is \valid{} $\Leftrightarrow \exists T_j: (c_j <_{H} r_k(x, v)) \land (w_j(x, v) \in \evts{T_j}) \land (v \neq \mathcal{A}) \rangle$. The history $H$ is \valid{} if all its successful read \op{s} are \valid. We define $r_k(x, v)$'s \textit{\lastw{}} as the latest commit event $c_i$ preceding $r_k(x, v)$ in $H$ such that $x\in wset_i$ ($T_i$ can also be $T_0$). A successful read \op{} $r_k(x, v)$ is said to be \emph{\legal{}} if the transaction containing $r_k$'s \lastw{} also writes $v$ onto $x$: $\langle r_k(x, v)$ \text{is \legal{}} $\Leftrightarrow (v \neq \mathcal{A}) \land (\lwrite{r_k(x, v)}{H} = c_i) \land (w_i(x,v) \in \evts{T_i})\rangle$. The history $H$ is \legal{} if all its successful read \op{s} are \legal. From these definitions, it follows that if $H$ is \legal{}, then it is also \valid. \noindent \textbf{Opacity and Strict Serializability:} We say that two histories $H$ and $H'$ are \emph{equivalent} if they have the same set of events.
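The legality condition can be checked mechanically on a sequential history: replay the events, track the last committed write per \tobj (with $T_0$ contributing the initial value 0), and compare each successful read against it. A sketch, using an illustrative tuple encoding of events:

```python
# Sketch of the legality check: every successful read r_k(x, v) must return
# the value written by its lastWrite, i.e. the latest committed write to x
# preceding it (T_0 initializes every t-object to 0). Tuple encoding is ours.

def is_legal(H):
    committed = {}                        # t-object -> value of last committed write
    pending = {}                          # tid -> writes not yet committed
    for e in H:
        if e[0] == "w":
            _, tid, x, v = e
            pending.setdefault(tid, {})[x] = v
        elif e[0] == "c":
            committed.update(pending.pop(e[1], {}))
        elif e[0] == "r":
            _, tid, x, v = e
            if v != committed.get(x, 0):  # 0 is T_0's initial value
                return False
    return True

# r1(x,0) w1(x,5) c1 r2(x,5) is legal; r2 reading 0 after c1 would not be.
assert is_legal([("r", 1, "x", 0), ("w", 1, "x", 5), ("c", 1), ("r", 2, "x", 5)])
assert not is_legal([("r", 1, "x", 0), ("w", 1, "x", 5), ("c", 1), ("r", 2, "x", 0)])
```

Since every read is checked against a committed write (or $T_0$), a history accepted by this check is also valid, mirroring the observation that legality implies validity.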
Now, a history $H$ is said to be \textit{opaque} \cite{GuerKap:2008:PPoPP,tm-book} if it is \valid{} and there exists a t-sequential legal history $S$ such that (1) $S$ is equivalent to $\overline{H}$ and (2) $S$ respects $\prec_{H}^{RT}$, i.e., $\prec_{H}^{RT} \subset \prec_{S}^{RT}$. By requiring $S$ to be equivalent to $\overline{H}$, opacity treats all incomplete transactions as aborted. We call $S$ an (opaque) \emph{serialization} of $H$. Along the same lines, a \valid{} history $H$ is said to be \textit{strictly serializable} if $\shist{\comm{H}}{H}$ is opaque. Unlike opacity, strict serializability does not include aborted or incomplete transactions in the global serialization order. An opaque history $H$ is also strictly serializable: a serialization of $\shist{\comm{H}}{H}$ is simply the subsequence of a serialization of $H$ that only contains the transactions in $\comm{H}$. Serializability is a commonly used criterion in databases. But it is not suitable for STMs, as it does not consider the correctness of \emph{aborted} transactions, as shown by Guerraoui \& Kapalka \cite{GuerKap:2008:PPoPP}. Opacity, on the other hand, considers the correctness of \emph{aborted} transactions as well. Similarly, \lopty (described below) is another \cc for STMs, but it is not as restrictive as \opty. \noindent \textbf{Local opacity:} For a history $H$, we define a set of sub-histories, denoted $\shset{H}$, as follows: (1) for each aborted transaction $T_i$, we consider a $\subhist$ consisting of the \op{s} of all previously \emph{committed} transactions together with all successful \op{s} of $T_i$ (i.e., \op{s} which did not return $\mathcal{A}$), placing a commit immediately after the last successful operation of $T_i$; (2) for the last \emph{committed} transaction $T_l$, we consider all the previously \emph{committed} transactions including $T_l$. A history $H$ is said to be \emph{\lopq} \cite{KuzSat:NI:ICDCN:2014,KuzSat:NI:TCS:2016} if all the sub-histories in \shset{H} are opaque.
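The construction of $\shset{H}$ can be sketched directly from the two clauses above; the tuple encoding of events and the helper below are our own illustration:

```python
# Sketch of shset(H): one sub-history per aborted transaction (ops of all
# transactions committed before its abort, plus its own successful ops,
# with a commit appended), and one for the last committed transaction.
# Event tuples ("r",k,x,v), ("w",k,x,v), ("c",k), ("a",k) are our encoding.

def shset(H):
    subs = []
    for pos, e in enumerate(H):
        if e[0] != "a":
            continue
        tid = e[1]
        committed_before = {f[1] for f in H[:pos] if f[0] == "c"}
        sh = [f for i, f in enumerate(H)
              if f[1] in committed_before or (f[1] == tid and i < pos)]
        subs.append(sh + [("c", tid)])   # commit right after T_i's last successful op
    commits = [e[1] for e in H if e[0] == "c"]
    if commits:
        last = commits[-1]
        last_pos = H.index(("c", last))
        keep = {e[1] for e in H[: last_pos + 1] if e[0] == "c"}
        subs.append([e for e in H[: last_pos + 1] if e[1] in keep])
    return subs

H = [("w", 1, "x", 5), ("c", 1), ("r", 2, "x", 5), ("a", 2), ("w", 3, "x", 7), ("c", 3)]
subs = shset(H)
assert len(subs) == 2        # one for aborted T2, one for last committed T3
assert subs[0] == [("w", 1, "x", 5), ("c", 1), ("r", 2, "x", 5), ("c", 2)]
```

Each sub-history contains operations of at most one aborted transaction, which is exactly why, under local opacity, no aborted or live transaction can force another transaction to abort.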
Observe that in the construction of the sub-history of an aborted transaction $T_i$, the $\subhist$ contains \op{s} from only one aborted transaction, namely $T_i$ itself, and from no other live/aborted transactions. Similarly, the sub-history of the last \emph{committed} transaction $T_l$ has no \op{s} of aborted or live transactions. Thus, under \lopty, no aborted or live transaction can cause another transaction to abort. It was shown that \lopty \cite{KuzSat:NI:ICDCN:2014,KuzSat:NI:TCS:2016} allows greater concurrency than \opty. Any history that is \opq is also \lopq, but not necessarily vice versa. On the other hand, a history that is \lopq is also \stsble, but the converse need not hold.\\ \noindent \textbf{Graph Characterization of Local Opacity:} To prove the correctness of STM systems, it is useful to consider a graph characterization of histories. In this section, we describe the graph characterization developed by Kumar et al.\ \cite{Kumar+:MVTO:ICDCN:2014} for proving \opty, which is based on the characterization by Bernstein and Goodman \cite{BernGood:1983:MCC:TDS}. We extend this characterization to \lo. Consider a history $H$ which consists of multiple versions for each \tobj. The graph characterization uses the notion of a \textit{version order}. Given $H$ and a \tobj{} $x$, we define a version order for $x$ as any (non-reflexive) total order on all the versions of $x$ ever created by committed transactions in $H$. Note that the version order may or may not be the same as the actual order in which the versions of $x$ are generated in $H$. The version order of $H$, denoted $\ll_H$, is the union of the version orders of all the \tobj{s} in $H$. Consider the history $H2: r_1(x, 0) r_2(x, 0) r_1(y, 0) r_3(z, 0) w_1(x, 5) w_3(y, 15) w_2(y, 10) w_1(z, 10) c_1 c_2 r_4(x, 5) r_4(y, 10) w_3(z, 15) c_3 r_4(z, 10)$.
Using the notation that a committed transaction $T_i$ writing to $x$ creates a version $x_i$, a possible version order $\ll_{H2}$ for $H2$ is: $\langle x_0 \ll x_1 \rangle, \langle y_0 \ll y_2 \ll y_3 \rangle, \langle z_0 \ll z_1 \ll z_3 \rangle $. We define the graph characterization based on a given version order. Consider a history $H$ and a version order $\ll$. We then define a graph (called the opacity graph) on $H$ using $\ll$, denoted $\opg{H}{\ll} = (V, E)$. The vertex set $V$ consists of a vertex for each transaction $T_i$ in $\overline{H}$. The edges of the graph are of three kinds and are defined as follows: \begin{enumerate} \item \textit{\rt}(real-time) edges: If $T_i$ commits before $T_j$ starts in $H$, then there is an edge from $v_i$ to $v_j$. This set of edges is referred to as $\rtx(H)$. \item \textit{\rf}(reads-from) edges: If $T_j$ reads $x$ from $T_i$ in $H$, then there is an edge from $v_i$ to $v_j$. Note that for this to happen, $T_i$ must have committed before $T_j$'s read, i.e., $c_i <_H r_j(x)$. This set of edges is referred to as $\rf(H)$. \item \textit{\mv}(multiversion) edges: The \mv{} edges capture the multiversion relations and are based on the version order. Consider a successful read \op{} $r_k(x,v)$ and the write \op{} $w_j(x,v)$ belonging to transaction $T_j$ such that $r_k(x,v)$ reads $x$ from $w_j(x,v)$ (note that $T_j$ is a committed transaction and $c_j <_H r_k$). Consider a committed transaction $T_i$ which writes to $x$, $w_i(x, u)$ where $u \neq v$. Thus, the versions created, $x_i$ and $x_j$, are related by $\ll$. Then, if $x_i \ll x_j$, we add an edge from $v_i$ to $v_j$. Otherwise ($x_j \ll x_i$), we add an edge from $v_k$ to $v_i$. This set of edges is referred to as $\mv(H, \ll)$. \end{enumerate} Using this construction, the graph $\opg{H2}{\ll_{H2}}$ for history $H2$ and version order $\ll_{H2}$ is shown in \figref{opg}. The edges are annotated. The only \mv{} edge, from $T_4$ to $T_3$, is due to the \tobj{s} $y$ and $z$.
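The \mv{} edge rule can be made executable. The sketch below builds the \mv{} edges for $T_4$'s reads in $H2$ and runs a simple cycle check; the input encoding (a reads-from map and per-object version-order lists) is our own, and the \rt{} and \rf{} edges are omitted for brevity:

```python
# Sketch of the mv-edge rule above: for a read r_k(x, v) that reads from T_j,
# and any other committed writer T_i of x, add edge i -> j if x_i << x_j,
# else add edge k -> i. Inputs use our own encoding of H2, not the paper's.

def mv_edges(reads_from, version_order):
    edges = set()
    for (k, x), j in reads_from.items():
        order = version_order[x]             # writers of x in <<-order
        for i in order:
            if i == j:
                continue
            if order.index(i) < order.index(j):
                edges.add((i, j))            # x_i << x_j
            else:
                edges.add((k, i))            # x_j << x_i
    return edges

def acyclic(edges):
    succ = {}
    for a, b in edges:
        succ.setdefault(a, []).append(b)
        succ.setdefault(b, [])
    seen, stack = set(), set()
    def dfs(n):
        if n in stack:
            return False                     # back edge found: cycle
        if n in seen:
            return True
        seen.add(n); stack.add(n)
        ok = all(dfs(m) for m in succ[n])
        stack.discard(n)
        return ok
    return all(dfs(n) for n in succ)

# T4 reads x from T1, y from T2 and z from T1; version orders as in <<_{H2}.
reads_from = {(4, "x"): 1, (4, "y"): 2, (4, "z"): 1}
version_order = {"x": [0, 1], "y": [0, 2, 3], "z": [0, 1, 3]}
edges = mv_edges(reads_from, version_order)
assert (4, 3) in edges       # the mv edge from T4 to T3, due to y and z
assert acyclic(edges)
```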
$T_4$ reads value 10 for $z$ from $T_1$, whereas $T_3$ also writes 15 to $z$ and commits before $r_4(z)$. \begin{figure}[tbph] \centerline{\scalebox{0.7}{\input{figs/ex2.pdf_t}}} \captionsetup{justification=centering} \caption{$\opg{H2}{\ll_{H2}}$} \label{fig:opg} \end{figure} Kumar et al.\ \cite{Kumar+:MVTO:ICDCN:2014} showed that if a version order $\ll$ exists for a history $H$ such that $\opg{H}{\ll_H}$ is acyclic, then $H$ is \opq. This is captured in the following result. \ignore{ \begin{result} \label{res:main-opg} A \valid{} history $H$ is opaque iff there exists a version order $\ll_H$ such that $\opg{H}{\ll_H}$ is acyclic. \end{result} \noindent This result can be extended to characterize \lo using graphs with the following theorem. The proof is in Appendix \thmref{log}. \begin{theorem} \label{thm:main-log} A \valid{} history $H$ is \lopq iff for each sub-history $sh$ in $\shset{H}$ there exists a version order $\ll_{sh}$ such that $\opg{sh}{\ll_{sh}}$ is acyclic. Formally, $\langle (H \text{ is \lopq}) \Leftrightarrow (\forall sh \in \shset{H}, \exists \ll_{sh}: \opg{sh}{\ll_{sh}} \text{ is acyclic}) \rangle$. \end{theorem} } \begin{result} \label{res:opg} A \valid{} history $H$ is opaque iff there exists a version order $\ll_H$ such that $\opg{H}{\ll_H}$ is acyclic. \end{result} \noindent This result can be easily extended to prove \lo as follows. \begin{theorem} \label{thm:log} A \valid{} history $H$ is \lopq iff for each sub-history $sh$ in $\shset{H}$ there exists a version order $\ll_{sh}$ such that $\opg{sh}{\ll_{sh}}$ is acyclic. Formally, $\langle (H \text{ is \lopq}) \Leftrightarrow (\forall sh \in \shset{H}, \exists \ll_{sh}: \opg{sh}{\ll_{sh}} \text{ is acyclic}) \rangle$. \end{theorem} \begin{proof} To prove this theorem, we have to show that each sub-history $sh$ in $\shset{H}$ is \valid. Then the rest follows from \resref{opg}. Now consider a sub-history $sh$, and consider any read \op $r_i(x, v)$ of a transaction $T_i$.
It is clear that $T_i$ must have read a version of $x$ created by a previously committed transaction. From the construction of $sh$, we get that all the transactions that committed before $r_i$ are also in $sh$. Hence, $sh$ is also \valid. Proving that $sh$ is \opq iff there exists a version order $\ll_{sh}$ such that $\opg{sh}{\ll_{sh}}$ is acyclic then follows from \resref{opg}. \end{proof} \section{Introduction} \label{sec:intro} STMs \cite{HerlMoss:1993:SigArch,ShavTou:1995:PODC} are a convenient programming interface for a programmer to access shared memory without worrying about consistency issues. STMs often use an optimistic approach for the concurrent execution of \emph{transactions} (a piece of code invoked by a thread). In an optimistic execution, each transaction reads from the shared memory, but all write updates are performed on local memory. On completion, the STM system \textit{validates} the reads and writes of the transaction. If any inconsistency is found, the transaction is \emph{aborted}, and its local writes are discarded. Otherwise, the transaction is committed, and its local writes are transferred to the shared memory. A transaction that has begun but has not yet committed/aborted is referred to as \emph{live}. A typical STM is a library which exports the following methods: \emph{\begt}, which begins a transaction; \textit{\tread}, which reads a \emph{transactional object} or \emph{\tobj}; \textit{\twrite}, which writes to a \emph{\tobj}; and \textit{\tryc}, which tries to commit the transaction. Typical code using STMs is shown in \algoref{sfdemo}, which illustrates how the insert operation of a concurrent linked-list library is implemented using STMs. \ignore{ \color{blue} \color{red} A typical code using STMs is as shown in \algoref{sfdemo}. It shows the overview of a concurrent \emph{insert} \mth which inserts an element $e$ into a linked-list $LL$. It consists of a loop where the thread creates a transaction.
This transaction executes the code to insert an element $e$ in a linked-list $LL$ using $\tread$ and $\twrite$ operations. (The result of $\twrite$ operation are stored locally.) At the end of the transaction, the thread calls \textit{\tryc}. At this point, the STM checks if the given transaction can be committed while satisfying the required safety properties (e.g., \slbty \cite{Papad:1979:JACM}, \opty \cite{GuerKap:2008:PPoPP}). If yes, then the transaction is committed. At this time, any updates done by the transaction are reflected in the shared memory. Otherwise, it is aborted. In this case, all the updates made by the transaction are discarded. If the given transaction is aborted, then the invoking thread may retry that transaction again like \linereff{retry} in \algoref{sfdemo}. \color{black} } \noindent \textbf{Correctness:} \ignore{ \color{red} By committing/aborting the transactions, the STM system ensures atomicity and consistency of transactions. Thus, an important requirement of STMs is to precisely identify the criterion as to when a transaction should be \emph{aborted/committed}, referred to as \emph{\cc}. \color{black} } Several \emph{\ccs} have been proposed for STMs, such as opacity \cite{GuerKap:2008:PPoPP} and local opacity \cite{KuzSat:NI:ICDCN:2014,KuzSat:NI:TCS:2016}. All these \emph{\ccs} require that all transactions, including aborted ones, appear to execute sequentially in an order that agrees with the order of non-overlapping transactions. Unlike the \ccs for traditional databases, such as serializability and strict serializability \cite{Papad:1979:JACM}, the \ccs for STMs ensure that even aborted transactions read correct values. This ensures that programmers do not see any undesirable side effects in the application, such as divide-by-zero errors, infinite loops, or crashes, due to the reads of a transaction that later gets aborted in a concurrent execution.
This additional requirement on aborted transactions is a fundamental requirement of STMs, which differentiates STMs from databases, as observed by Guerraoui \& Kapalka \cite{GuerKap:2008:PPoPP}. Thus, in this paper, we focus on optimistic executions with \emph{\lopty} \cite{KuzSat:NI:TCS:2016} as the \emph{\cc}. \ignore{ \todo{This para could be dropped. We have already said optimistic execution before} To ensure correctness such as \opty, most STMs execute optimistically. With this approach, a transaction only reads values written by other committed transactions. \textbf{To achieve this, all writes are written to local memory first. They are added to the shared memory only when the transaction commits.} This (combined with required validation) can, in turn, ensure that the reads of the transaction are consistent as required by the \emph{\cc}. Thus in this paper, we focus only on optimistic executions with the \emph{\cc} being \emph{\lopty} \cite{KuzSat:NI:TCS:2016} (explained in \secref{model}). \color{black} } \vspace{1mm} \noindent \textbf{Starvation Freedom:} In the execution shown in \algoref{sfdemo}, there is a possibility that the transaction which a thread tries to execute gets aborted again and again: every time the thread executes the transaction, say $T_i$, $T_i$ conflicts with some other transaction and hence gets aborted. In other words, the thread is effectively starving because it is not able to commit $T_i$ successfully. A well-known blocking progress condition associated with concurrent programming is \stfdm \cite[chap 2]{HerlihyShavit:AMP:Book:2012}, \cite{HerlihyShavit:Progress:Opodis:2011}. In the context of STMs, \stfdm ensures that every aborted transaction that is retried infinitely often eventually commits.
It can be defined as follows: an STM system is said to be \emph{\stf} if, whenever a thread invoking a transaction $T_i$ gets the opportunity to retry $T_i$ on every abort (due to the presence of a fair underlying scheduler with bounded termination) and $T_i$ is not \emph{parasitic}, i.e., $T_i$ tries to commit when given a chance, then $T_i$ eventually commits. Parasitic transactions \cite{Bushkov+:Live-TM:PODC:2012} do not commit even when given a chance to commit, possibly because they are caught in an infinite loop or some other error. \setlength{\intextsep}{0pt} \vspace{1mm} \setlength{\textfloatsep}{0pt} \begin{algorithm} \caption{Insert($LL, e$): Invoked by a thread to insert an element $e$ into a linked-list $LL$. This method is implemented using transactions.} \label{algo:sfdemo} \begin{algorithmic}[1] \State $retry$ = 0; \While {$(true)$} \label{lin:wstart1} \State $id$ = \begt($retry$); \label{lin:beg-illus} \State ... \State ... \State $v$ = $\tread(id, x)$; \Comment{reads the value of $x$ as $v$} \State ... \State ... \State $\twrite(id, x, v')$; \Comment{writes a value $v'$ to $x$} \State ... \State ... \State $ret$ = $\tryc(id)$; \Comment{$\tryc$ can return $commit$ or $abort$} \If {($ret == commit$)} \State break; \Else \State $retry$++; \label{lin:retry} \EndIf \EndWhile \label{lin:wend1} \end{algorithmic} \end{algorithm} \emph{Wait-freedom} is another interesting progress condition for STMs, in which every transaction commits regardless of the nature of the concurrent transactions and the underlying scheduler \cite{HerlihyShavit:Progress:Opodis:2011}. However, it was shown by Bushkov et al.\ \cite{Bushkov+:Live-TM:PODC:2012} that it is not possible to achieve \emph{wait-freedom} in dynamic STMs in which the data sets of transactions are not known in advance. So, in this paper, we explore the weaker progress condition of \emph{\stfdm} for transactional memories while assuming that the data sets of the transactions are \textit{not} known in advance.
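The retry pattern of \algoref{sfdemo} is easy to exercise against a stand-in STM that aborts a fixed number of times before committing. The class and method names below are illustrative only; the retry count is passed to \begt so that a real STM could use it to raise the transaction's priority.

```python
# Runnable sketch of the retry loop above: keep re-invoking the transaction
# until tryC returns commit, incrementing `retry` on every abort.
# FlakySTM is a hypothetical stand-in, not the paper's STM.

class FlakySTM:
    """Aborts the first `fails` commit attempts, then commits."""
    def __init__(self, fails):
        self.fails = fails
        self.shared = {"x": 0}

    def stm_begin(self, retry):
        self.buf = {}                   # fresh local write buffer
        return retry                    # a real STM returns a fresh id; the
                                        # retry count would raise its priority

    def t_read(self, tid, x):
        return self.buf.get(x, self.shared[x])

    def t_write(self, tid, x, v):
        self.buf[x] = v

    def try_commit(self, tid):
        if self.fails > 0:
            self.fails -= 1
            return "abort"              # local writes in self.buf are dropped
        self.shared.update(self.buf)
        return "commit"

def insert(stm):
    retry = 0
    while True:                         # the while(true) loop of the algorithm
        tid = stm.stm_begin(retry)
        v = stm.t_read(tid, "x")
        stm.t_write(tid, "x", v + 1)
        if stm.try_commit(tid) == "commit":
            return retry                # aborts suffered before committing
        retry += 1

stm = FlakySTM(fails=3)
assert insert(stm) == 3                 # committed on the 4th attempt
assert stm.shared["x"] == 1
```

With an adversarial (rather than bounded) abort policy, this loop never returns, which is exactly the starvation scenario the paper sets out to rule out.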
\ignore { \color{red} With a \stf STM, the thread invoking insert in \algoref{sfdemo} will eventually be able to complete. Otherwise, every transaction invoked by the thread could potentially abort and the \mth will never complete. \color{black} } \vspace{1mm} \noindent \textbf{Related work on \stf STMs:} Starvation-freedom in STMs has been explored by a few researchers in the literature, such as Gramoli et al.\ \cite{Gramoli+:TM2C:Eurosys:2012}, Waliullah and Stenstrom \cite{WaliSten:Starve:CC:2009}, and Spear et al.\ \cite{Spear+:CCM:PPoPP:2009}. Most of these systems work by assigning priorities to transactions. In case of a conflict between two transactions, the transaction with the lower priority is aborted. They ensure that every aborted transaction, on being retried a sufficient number of times, will eventually have the highest priority and hence will commit. We denote such an algorithm as a \emph{single-version \stf STM} or \emph{\svsftm}. Although \svsftm guarantees \stfdm, it can still abort many transactions spuriously. Consider the case where a transaction $T_i$ has the highest priority. Hence, as per \svsftm, $T_i$ cannot be aborted. But if it is slow (for some reason), it can cause several other conflicting transactions to abort and hence bring down the efficiency and progress of the entire system. \vspace{1mm} \figref{svsf-draw} illustrates this problem. Consider the execution: $r_1(x,0) r_1(y,0) w_2(x,10) w_2(z,10) w_3(y,15) w_1(z,7)$. It has three transactions, $T_1$, $T_2$ and $T_3$. Suppose $T_1$ has the highest priority. After reading $y$, suppose $T_1$ becomes slow. Next, $T_2$ and $T_3$ want to write to $x, z$ and $y$, respectively, and \emph{commit}. But $T_2$'s and $T_3$'s write \op{s} are in conflict with $T_1$'s read operations. Since $T_1$ has higher priority and has not committed yet, $T_2$ and $T_3$ have to \emph{abort}. If these transactions are retried and again conflict with $T_1$ (while it is still live), they will have to \emph{abort} again.
Thus, any transaction with a priority lower than $T_1$'s that conflicts with it has to abort. It is as if $T_1$ has locked the \tobj{s} $x$ and $y$ and does not allow any other transaction to write to these \tobj{s} and \emph{commit}. \ignore{ \color{red} This text should be in another section: To illustrate the need for starvation freedom, consider a transaction, say $T1$, that reads all t-objects to obtain a consistent state of the system. This can be achieved by having the transaction read all t-objects with the highest timestamp that is less than $\cts$ value. However, such a transaction may abort because by the time the transaction reads the a $\tobj$, it may be deleted due to the fact that $K$ additional versions are created. Furthermore, transactions that deleted those versions were not aware of the interest of $T1$ in reading those \tobj{s}. However, in our protocol, if this transaction is aborted several times eventually $\wts_{T1}$ would be high thereby preventing it from being aborted due to unavailability of the version. \color{blue} } \vspace{1mm} \begin{figure} \center \scalebox{0.45}{\import{figs/}{dsftm.pdf_t}} \captionsetup{justification=centering} \caption{Limitation of Single-version Starvation Free Algorithm} \label{fig:svsf-draw} \end{figure} \vspace{1mm} \noindent \textbf{Multi-version \stf STM:} A key limitation of single-version STMs is limited concurrency. As shown above, it is possible that one long transaction conflicts with several transactions, causing them to abort. This limitation can be overcome by using multi-version STMs, which store multiple versions of each data item (either unbounded versions with garbage collection, or bounded versions where the oldest version is replaced when the number of versions exceeds the bound). Several multi-version STMs have been proposed in the literature \cite{Kumar+:MVTO:ICDCN:2014,LuScott:GMV:DISC:2013,Fern+:LSMVSTM:PPoPP:2011,Perel+:2011:SMV:DISC} that provide increased concurrency.
However, none of them provides \stfdm. Furthermore, achieving \stfdm while using only bounded versions is especially challenging, given that a transaction may rely on the oldest version, which may be removed. In that case, it would be necessary to abort that transaction, making it harder to achieve \stfdm. Recall the typical code using STMs shown in \algoref{sfdemo}: it gives an overview of a concurrent \emph{insert} \mth which inserts an element $e$ into a linked-list $LL$. It consists of a loop in which the thread creates a transaction. This transaction executes the code to insert an element $e$ into a linked-list $LL$ using $\tread$ and $\twrite$ operations. (The results of the $\twrite$ operations are stored locally.) At the end of the transaction, the thread calls \textit{\tryc}. At this point, the STM checks whether the given transaction can be \emph{committed} while satisfying the required safety properties (e.g., \slbty \cite{Papad:1979:JACM}, \opty \cite{GuerKap:2008:PPoPP}). If yes, then the transaction is \emph{committed}, and any updates made by the transaction are reflected in the shared memory. Otherwise, it is aborted, and all the updates made by the transaction are discarded. If the given transaction is aborted, then the invoking thread may retry it, as in \linereff{retry} of \algoref{sfdemo}. The advantage of multi-version STMs is that they allow greater concurrency by allowing more transactions to commit. Consider the execution shown in \figref{svsf-draw}, and suppose it used multiple versions for each \tobj. Then it is possible for all three transactions to commit: $T_2$ and $T_3$ create a new version of each of the \tobj{s} $x$, $z$ and $y$ and return commit. Since multiple versions are being used, $T_1$ need not abort either: $T_1$ reads the initial value of $z$ and returns commit.
So, by maintaining multiple versions, all the transactions $T_1$, $T_2$, and $T_3$ can commit, with the equivalent serial history being $T_1 T_2 T_3$ or $T_1 T_3 T_2$. Thus, multiple versions can help with \stfdm without sacrificing concurrency. This motivated us to develop a multi-version \stf STM system. Although multi-version STMs provide greater concurrency, they suffer from the cost of garbage collection. One way to avoid this is to use bounded multi-version STMs, where the number of versions is bounded to be at most $K$. Thus, when the $(K+1)^{th}$ version is created, the oldest version is removed. Bounding the number of versions can hinder starvation freedom: a transaction needing to read a version that has been removed must be aborted. \ignore{ \color{red} To address this limitation, in this paper, we focus on developing \emph{starvation-free} algorithms STMs using multiple versions. Many STMs have been proposed which uses the idea of multiple versions \cite{Kumar+:MVTO:ICDCN:2014,LuScott:GMV:DISC:2013,Fern+:LSMVSTM:PPoPP:2011,Perel+:2011:SMV:DISC}. Multi-version STMs (\mvstm{s}), by virtue of multiple versions, can ensure that more \mth{s} succeed \cite{Kumar+:MVTO:ICDCN:2014}. Hence, multi-version STMs can achieve greater concurrency. Suppose the execution shown in \figref{svsf-draw} uses multiple versions for each \tobj. Then both $T_2$ and $T_3$ create a new version corresponding to each \tobj $x$, $z$ and $y$ and return commit while not causing $T_1$ to abort as well. $T_1$ reads the initial value of $z$, and returns commit. So, by maintaining multiple versions all the transactions $T_1$, $T_2$, and $T_3$ can commit with equivalent serial history as $T_1 T_2 T_3$ or $T_1 T_3 T_2$. Thus multiple versions can help with \stfdm without sacrificing on concurrency. This motivated us to develop a multi-version \stf STM system. Although multi-version STMs provide greater concurrency, they suffer from the cost of garbage collection.
One way to avoid this is to use bounded-multi-version STMs, where the number of versions is bounded to be at most $K$. Thus, when $(K+1)^{th}$ version is created, the oldest version is removed. Bounding the number of versions can hinder with starvation freedom: a transaction needing to read a version that is currently removed must be aborted. \color{black} } This paper addresses this gap by developing a starvation-free algorithm for bounded \mvstm{s}. Our approach is different from the approach used in \svsftm to provide starvation-freedom in single version STMs (the policy of aborting lower priority transactions in case of conflict) as it does not work for \mvstm{s}. As part of the derivation of our final \stf algorithm, we consider an algorithm (\pkto) that considers this approach and show that it is insufficient to provide starvation freedom. \color{black} \color{black} \vspace{1mm} \noindent \textbf{Contributions of the paper:} \begin{itemize} \item We propose a multi-version \stf STM system as \emph{K-version \stf STM} or \emph{\ksftm} for a given parameter $K$. Here $K$ is the number of versions of each \tobj and can range from 1 to $\infty$. To the best of our knowledge, this is the first \stf \mvstm. We develop \ksftm algorithm in a step-wise manner starting from \mvto \cite{Kumar+:MVTO:ICDCN:2014} as follows: \begin{itemize} \item First, in \subsecref{mvto}, we use the standard idea to provide higher priority to older transactions. Specifically, we propose priority-based $K$-version STM algorithm \emph{Priority-based $K$-version MVTO} or \emph{\pkto}. This algorithm guarantees the safety properties of \stsbty and \lopty. However, it is not \stf. \item We analyze \pkto to identify the characteristics that will help us to achieve preventing a transaction from getting aborted forever. This analysis leads us to the development of \emph{\stf K-version TO} or \emph{\sfkv} (\subsecref{sfmvto}), a multi-version starvation-free STM obtained by revising \pkto. 
However, \sfkv does not satisfy the correctness criteria, i.e., \stsbty and \lopty.
\item Finally, we extend \sfkv to develop \ksftm (\subsecref{ksftm}), which preserves \stfdm while satisfying strict-serializability and \lopty. Our algorithm works on the assumption that any transaction that is not deadlocked terminates (commits or aborts) in a bounded time.
\end{itemize}
\item Our experiments (\secref{exp}) show that \ksftm gives an average speedup on the worst-case time to commit of a transaction by factors of 1.22, 1.89, 23.26 and 13.12 over \pkto, \svsftm, NOrec STM \cite{Dalessandro:2010:NSS:1693453.1693464} and ESTM \cite{Felber:2017:jpdc}, respectively, for a counter application. \ksftm performs 1.5 and 1.44 times better than \pkto and \svsftm, respectively, but 1.09 times worse than NOrec for the low-contention KMEANS application of the STAMP \cite{stamp123} benchmark, whereas \ksftm performs 1.14, 1.4 and 2.63 times better than \pkto, \svsftm and NOrec, respectively, for the LABYRINTH application of the STAMP benchmark, which has high contention with long-running transactions.
\end{itemize}

\section{The Working of \ksftm Algorithm}
\label{sec:idea}
In this section, we propose the \emph{K-version \stf STM} or \emph{\ksftm} for a given parameter $K$. Here $K$ is the number of versions of each \tobj and can range from 1 to $\infty$. When $K$ is 1, \ksftm boils down to a single-version \emph{\stf} STM. If $K$ is $\infty$, then \ksftm uses unbounded versions and needs a separate garbage-collection mechanism to delete old versions, like other \mvstm{s} proposed in the literature \cite{Kumar+:MVTO:ICDCN:2014,LuScott:GMV:DISC:2013}. We denote \ksftm using unbounded versions as \emph{\mvsftm} and \mvsftm with garbage collection as \emph{\mvsftmgc}. Next, we describe some \emph{\stfdm} preliminaries in \subsecref{prelim} to explain the working of the \ksftm algorithm.
To explain the intuition behind the \ksftm algorithm, we start with a modification of the \mvto \cite{BernGood:1983:MCC:TDS,Kumar+:MVTO:ICDCN:2014} algorithm in \subsecref{mvto}. We then make a sequence of modifications to it to arrive at the \ksftm algorithm.

\subsection{\emph{Starvation-Freedom} Preliminaries}
\label{subsec:prelim}
In this section, we start with the definition of \emph{\stfdm}. Then we describe the invocation of transactions by the application. Next, we describe the data structures used by the algorithms.

\begin{definition}
\label{defn:stf}
\textbf{Starvation-Freedom:} An STM system is said to be \stf if, under a fair scheduler that gives a thread invoking a non-parasitic transaction $T_i$ the opportunity to retry $T_i$ on every abort, $T_i$ will eventually commit.
\end{definition}

As explained by Herlihy \& Shavit \cite{HerlihyShavit:Progress:Opodis:2011}, a fair scheduler implies that no thread is forever delayed or crashed. Hence, with a fair scheduler, if a thread acquires locks then it will eventually release them. Thus a thread cannot block other threads from progressing.

\vspace{1mm}
\noindent
\textbf{Assumption about the Scheduler:} In order for the \stf algorithm \ksftm (described in \subsecref{ksftm}) to work correctly, we make the following assumption about the fair scheduler:

\begin{assumption}
\label{asm:bdtm}
\textbf{Bounded-Termination}: For any transaction $T_i$ invoked by a thread $Th_x$, the fair system scheduler\xspace ensures that, in the absence of deadlocks, $Th_x$ is given sufficient time on a CPU (and memory, etc.) such that $T_i$ terminates (either commits or aborts) in bounded time.
\end{assumption}

While the bound for each transaction may be different, we use $L$ to denote the maximum bound. In other words, in the absence of deadlocks, every transaction will either commit or abort within time $L$. There are different ways to satisfy this scheduler requirement. For example, a round-robin scheduler, which provides each thread an equal amount of time in any window, satisfies this requirement as long as the number of threads is bounded. In a system with two threads, even a scheduler that provides one thread 1\% of the CPU and the other thread 99\% of the CPU satisfies the above requirement. On the other hand, a scheduler that schedules the threads as `$T_1, T_2, T_1, T_2, T_2, T_1, T_2$, $T_2, T_2, T_2$, $T_1, T_2, T_2$, $T_2, T_2, T_2, T_2, T_2, T_2, T_1, T_2 (16 times) $' does not satisfy the above requirement: over time, thread 1 gets a vanishingly small portion of the CPU and, hence, the time required for it to complete (commit or abort) will continue to increase. In our algorithm, we ensure deadlock-freedom using standard techniques from the literature; in other words, each thread is always in a position to make progress. We assume that the scheduler provides each thread sufficient CPU time to complete (either commit or abort) within a bounded time.

\noindent \textbf{Transaction Invocation:} Transactions are invoked by threads. Suppose a thread $Th_x$ invokes a transaction $T_i$. If $T_i$ gets \emph{aborted}, $Th_x$ will reissue it as a new incarnation of $T_i$, say $T_j$. The thread $Th_x$ will continue to invoke new \inc{s} of $T_i$ until an \inc commits. When the thread $Th_x$ invokes a transaction, say $T_i$, for the first time, the STM system assigns $T_i$ a unique timestamp called the \emph{current timestamp} or \emph{\cts}. If $T_i$ aborts and retries again as $T_j$, then its \cts will change. However, in this case the thread $Th_x$ will also pass the \cts value of the first incarnation ($T_i$) to $T_j$. By this, $Th_x$ informs the STM system that $T_j$ is not a new invocation but an \inc of $T_i$. We denote the \cts of $T_i$ (the first \inc) as the \emph{Initial Timestamp} or \emph{\its} for all the \inc{s} of $T_i$. Thus, the invoking thread $Th_x$ passes $\tcts{i}$ to all the \inc{s} of $T_i$ (including $T_j$); hence, for $T_j$, $\tits{j} = \tcts{i}$. The transaction $T_j$ is associated with the timestamps $\langle \tits{j}, \tcts{j} \rangle$.
For $T_i$, which is the initial \inc, the \its and \cts are the same, i.e., $\tits{i} = \tcts{i}$. For simplicity, we use the notation that for a transaction $T_j$, $j$ is its \cts, i.e., $\tcts{j} = j$.

\begin{figure}[H]
\centerline{\scalebox{0.50}{\input{figs/vlrl.pdf_t}}}
\captionsetup{justification=centering}
\caption{Data Structures for Maintaining Versions}
\label{fig:tobj}
\end{figure}

We also assume that, in the absence of other concurrent conflicting transactions, every transaction will commit. In other words, if a transaction executes in a system where no other concurrent conflicting transactions are present, then it will not self-abort. If transactions can self-abort, then providing \emph{\stfdm} is impossible.

\noindent \textbf{Common Data Structures and STM Methods:} Here we describe the common data structures used by all the algorithms proposed in this section. For each \tobj, the algorithms maintain multiple versions in a $version$-$list$ (or \emph{\vlist}). Similar to versions in \mvto \cite{Kumar+:MVTO:ICDCN:2014}, each version of a \tobj{} is a tuple denoted as a \emph{\vtup} and consists of three fields: (1) the timestamp (or $ts$) of the transaction that created this version, which is normally its \cts; (2) the value (or $val$) of the version; (3) a list, called the \rlist{} (or $rl$), consisting of the ids (these can be \cts{s} as well) of the transactions that read from this version. The \rlist of a version is initially empty. \figref{tobj} illustrates this structure. For a \tobj $x$, we use the notation $x[t]$ to access the version with timestamp $t$. Depending on the algorithm considered, the fields of this structure change. The algorithms have access to a global atomic counter, $\gtcnt$, used for generating timestamps in the various transactional \mth{s}. We assume that the STM system exports the following \mth{s} for a transaction $T_i$: (1) $\begt(t)$, where $t$ is provided by the invoking thread $Th_x$; from our earlier assumption, it is the \cts of the first \inc. In case $Th_x$ is invoking this transaction for the first time, $t$ is $null$. This \mth returns a unique timestamp to $Th_x$, which is the \cts/id of the transaction. (2) $\tread_i(x)$, which tries to read \tobj $x$; it returns either a value $v$ or $\mathcal{A}$. (3) $\twrite_i(x,v)$, which updates a \tobj $x$ with value $v$ locally; it returns $ok$. (4) $\tryc_i()$, which tries to commit the transaction and returns $\mathcal{C}$ if it succeeds; otherwise, it returns $\mathcal{A}$.
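As an illustrative rendering of \figref{tobj} and of the timestamp handling of $\begt(t)$ (this is not our implementation; the field and function names below are of our choosing), the \vtup structure and the \its/\cts assignment can be sketched as:

```python
import itertools

# Global atomic counter playing the role of tCntr in this single-threaded sketch.
t_cntr = itertools.count(1)

class Version:
    """One v_tuple of a t-object: <ts, val, rl>."""
    def __init__(self, ts, val):
        self.ts = ts      # cts of the transaction that created this version
        self.val = val    # the value of the version
        self.rl = []      # read-list: ids of transactions that read this version

def begin_tran(its=None):
    """Sketch of begin_tran(t): returns (its, cts). If its is None, this is
    the first incarnation, so its == cts; otherwise its is inherited from the
    first incarnation while a fresh cts is drawn from the counter."""
    cts = next(t_cntr)
    return (cts if its is None else its, cts)

x0 = Version(0, 0)               # T_0 initializes x with version <0, 0, []>
its1, cts1 = begin_tran()        # first incarnation: its == cts
its2, cts2 = begin_tran(its1)    # retry after abort: same its, fresh larger cts
print(its1 == cts1, its2 == its1, cts2 > cts1)
```

The key point of the sketch is that every incarnation keeps the \its of the first incarnation while drawing a fresh, strictly larger \cts.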
\ignore{ In addition to these global structures, for each transaction $T_i$, \pkto maintains structure that are local to $T_i$: \begin{itemize} \item $rset\xspace_i$(read-set): It is a list of data tuples or \emph{\dtup} of the form $\langle x, val \rangle$, where $x$ is the t-object and $v$ is the value read by the transaction $T_i$. We refer to a tuple in $T_i$'s read-set by $rset\xspace_i[x]$. \item $wset\xspace_i$(write-set): It is a list of \dtup{s} where $x$ is the \tobj{} to which transaction $T_i$ writes the value $val$. Similarly, we refer to a tuple in $T_i$'s write-set by $wset\xspace_i[x]$. \end{itemize} \noindent Next, we describe the \mth{s} exported by \pkto. Here in this discussion, we use the notation that a transaction $T_i$ has its \cts as $i$. } \noindent \textbf{Correctness Criteria:} For ease of exposition, we initially consider \stsbty as the \emph{\cc} to illustrate the correctness of the algorithms. But \stsbty does not consider the correctness of \emph{aborted} transactions and as a result is not a suitable \emph{\cc} for STMs. Finally, we show that the proposed STM algorithm \ksftm satisfies \lopty, a \emph{\cc} for STMs (described in \secref{model}). We denote the set of histories generated by an STM algorithm, say $A$, as $\gen{A}$. \input{SVSFTM} \cmnt{ \vspace{1mm} \noindent \textbf{Common Data-Structures \& STM Methods:} Here we describe the common data-structures used by all the algorithms proposed in this section. For each \tobj, the algorithm(s) maintains multiple version as a linked-list, \emph{\vlist}. Similar to versions in \mvto \cite{Kumar+:MVTO:ICDCN:2014}, each version of a \tobj{} is a tuple denoted as \emph{\vtup} and consists of three fields: (1) timestamp, $ts$ of the transaction that created this version which normally is the \cts; (2) the value of the version; (3) a list, called \rlist{}, consisting of transactions ids (could be \cts as well) that read from this version. The \rlist of a version is initially empty.
\figref{tobj} illustrates this structure. For a \tobj $x$, we use the notation $x[t]$ to access the version with timestamp $t$. Depending on the algorithm considered, the fields change of this structure. \begin{figure} [h] \centerline{\scalebox{0.50}{\input{figs/kstm-ver.pdf_t}}} \captionsetup{justification=centering} \caption{Data Structures for Maintaining Versions} \label{fig:tobj} \end{figure} The algorithms have access to a global atomic counter, $\gtcnt$ used for generating timestamps in the various transactional \mth{s}. We assume that the STM system exports the following \mth{s} for a transaction $T_i$: (1) $\begt(t)$ where $t$ is provided by the invoking thread. From our earlier assumption, it is the \cts of the first \inc. This \mth returns a unique timestamp to the invoking thread which is the \cts/id of the transaction. (2) $\tread_i(x)$ tries to read \tobj $x$. It returns either value $v$ or $\mathcal{A}$. (3) $\twrite_i(x,v)$ operation that tries to update a \tobj $x$ with value $v$. It returns $ok$. (4) $\tryc_i()$ that tries to commit the transaction and returns $ok$ if it succeeds. Otherwise, it returns $\mathcal{A}$. (5) $\trya()$ that aborts the transaction and returns $\mathcal{A}$. \ignore{ In addition to these global structures, for each transaction $T_i$, \pmvto maintains structure that are local to $T_i$: \begin{itemize} \item $rset\xspace_i$(read-set): It is a list of data tuples or \emph{\dtup} of the form $\langle x, val \rangle$, where $x$ is the t-object and $v$ is the value read by the transaction $T_i$. We refer to a tuple in $T_i$'s read-set by $rset\xspace_i[x]$. \item $wset\xspace_i$(write-set): It is a list of \dtup{s} where $x$ is the \tobj{} to which transaction $T_i$ writes the value $val$. Similarly, we refer to a tuple in $T_i$'s write-set by $wset\xspace_i[x]$. \end{itemize} \noindent Next, we describe the \mth{s} exported by \pmvto. Here in this discussion, we use the notation that a transaction $T_i$ has its \cts as $i$. 
} \vspace{1mm} \noindent \textbf{Simplifying Assumptions:} We next describe the main idea behind the starvation-free STM algorithm \ksftm through a sequence of algorithms. For ease of exposition, we make two simplifying assumptions: (1) We assume that in the absence of other concurrent conflicting transactions, every transaction will commit. In other words, if a transaction is executed in a system by itself, it will not self-abort. (2) We initially consider \stsbty as the \cc to illustrate the correctness of the algorithms. But \stsbty does not consider the correctness of aborted transactions and as a result is not a suitable \cc for STMs. Finally, we show that the proposed STM algorithm \ksftm satisfies \lopty, a \cc for STMs. \noindent We denote the set of histories generated by an STM algorithm, say $A$, as $\gen{A}$. \cmnt { A STM system exports the functions:\begtrans{}, $\read_i, \writei_i$ and $\tryci_i$. A thread invokes a transaction with a \begtrans{} function which returns a unique transaction id which is the timestamp of the transactions. This timestamp is numerically greater than the timestamps of all the transactions invoked so far. This thread invokes future functions of the transaction using this timestamp. We use the notation $T_i$ to denote a transaction where $i$ is the timestamp of $T_i$. } \subsection{Priority-based MVTO Algorithm} \label{subsec:mvto} In this subsection, we describe a modification to the multi-version timestamp ordering (\mvto) algorithm \cite{BernGood:1983:MCC:TDS,Kumar+:MVTO:ICDCN:2014} to ensure that it gives preference to transactions that have a low \its, i.e., transactions that have been in the system for a longer time. We denote the basic algorithm, which maintains unbounded versions, as \emph{Priority-based MVTO} or \emph{\pmvto} (akin to the original \mvto). We denote the variant of \pmvto that maintains $K$ versions as \emph{\pkto} and the unbounded-version variant with garbage collection as \emph{\pmvtogc}.
In this subsection, we specifically describe \pkto, but most of the description applies to \pmvto and \pmvtogc as well.

\vspace{1mm}
\noindent \textbf{$\begt(t)$:} A unique timestamp $ts$ is allocated to $T_i$, which is its \cts ($i$ by our notation). The timestamp $ts$ is generated by atomically incrementing the global counter $\gtcnt$. If the input $t$ is null, then $\tcts{i} = \tits{i} = ts$, as this is the first \inc of this transaction. Otherwise, the non-null value of $t$ is assigned as $\tits{i}$.

\noindent \textbf{$\tread(x)$:} Transaction $T_i$ reads from a version of $x$ in the shared memory (if $x$ does not exist in $T_i$'s local buffer) with timestamp $j$ such that $j$ is the largest timestamp less than $i$ (among the versions of $x$), i.e., there exists no version of $x$ with timestamp $k$ such that $j<k<i$. After reading this version of $x$, $T_i$ is added to $x[j]$'s \rlist. If no such version exists, then $T_i$ is \emph{aborted}.

\noindent \textbf{$\twrite(x,v)$:} $T_i$ stores this write of value $v$ to $x$ locally in its $wset\xspace_i$. If $T_i$ ever reads $x$ again, this value will be returned.

\noindent \textbf{$\tryc:$} This \op{} consists of three steps. In \stref{val}, it checks whether $T_i$ can be \emph{committed}. In \stref{updt}, it performs the necessary tasks to mark $T_i$ as a \emph{committed} transaction, and in \stref{commit}, $T_i$ returns commit. \begin{enumerate} \item Before $T_i$ can commit, it needs to verify that any version it creates does not violate consistency. Suppose $T_i$ creates a new version of $x$ with timestamp $i$. Let $j$ be the largest timestamp smaller than $i$ for which a version of $x$ exists. Let this version be $x[j]$. Now, $T_i$ needs to make sure that any transaction that has read $x[j]$ is not affected by the new version created by $T_i$.
There are two possibilities of concern: \label{step:val} \begin{enumerate} \item Let $T_k$ be some transaction that has read $x[j]$ and $k > i$ ($k$ = \cts of $T_k$). In this scenario, the value read by $T_k$ would be incorrect (w.r.t.\ \stsbty) if $T_i$ were allowed to create a new version. In this case, we say that the transactions $T_i$ and $T_k$ are in \emph{conflict}. So, we do the following: \\(i) if $T_k$ has already \emph{committed}, then $T_i$ is \emph{aborted}; \\(ii) if $T_k$ is live and $\tits{k}$ is less than $\tits{i}$, then again $T_i$ is \emph{aborted}; \\(iii) if $T_k$ is still live and $\tits{i}$ is less than $\tits{k}$, then $T_k$ is \emph{aborted}. \label{step:verify} \item The previous version $x[j]$ does not exist. This happens when the previous version $x[j]$ has been overwritten. In this case, $T_i$ is \emph{aborted}, since \pkto does not know if $T_i$ conflicts with any other transaction $T_k$ that has read the previous version. \label{step:notfound} \end{enumerate} \item After \stref{val}, we have verified that it is ok for $T_i$ to commit. Now, we have to create a version of each \tobj $x$ in the $wset\xspace$ of $T_i$. This is achieved as follows: \label{step:updt} \begin{enumerate} \item $T_i$ creates a $\vtup$ $\langle i, \wset{i}.x.v, null \rangle$. In this tuple, $i$ (the \cts of $T_i$) is the timestamp of the new version; $\wset{i}.x.v$ is the value of $x$ in $T_i$'s $wset\xspace$; and the \rlist of the $\vtup$ is $null$. \item If the total number of versions of $x$ is already $K$, then among all the versions of $x$, $T_i$ replaces the version with the smallest timestamp with the $\vtup$ $\langle i, \wset{i}.x.v, null \rangle$; otherwise, the $\vtup$ is added to $x$'s $\vlist$. \end{enumerate} \item Transaction $T_i$ is then \emph{committed}. \label{step:commit} \end{enumerate} The algorithm described here is only the main idea. The actual implementation uses locks to ensure that each of these \mth{s} is \lble \cite{HerlWing:1990:TPLS}.
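To make the conflict-resolution rule of \stref{verify} concrete, the following sketch (illustrative Python; the locking and full \pkto bookkeeping are omitted, and the names are of our choosing) captures which transaction aborts when $T_i$ tries to create $x[i]$ while $T_k$ has read $x[j]$ with $j < i < k$:

```python
# Conflict resolution in tryC, step 1(a): T_i wants to create version x[i];
# T_k has read the previous version x[j] with j < i < k.
# Committed readers always win; among live transactions, the one with the
# smaller (older) its survives.

def resolve_conflict(i_its, k_its, k_status):
    """Return which transaction must abort: 'i' or 'k'."""
    if k_status == "committed":
        return "i"      # (i)  T_k already committed: abort T_i
    if k_its < i_its:
        return "i"      # (ii) live T_k is older (smaller its): abort T_i
    return "k"          # (iii) T_i is older: abort the live T_k

print(resolve_conflict(i_its=5, k_its=3, k_status="committed"))  # i
print(resolve_conflict(i_its=5, k_its=3, k_status="live"))       # i
print(resolve_conflict(i_its=3, k_its=5, k_status="live"))       # k
```

The sketch shows why \pkto favors long-lived transactions: a live conflicting reader only survives if its \its is smaller, i.e., if it has been in the system longer.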
It can be seen that in \stref{verify}, \pkto gives preference to the transaction having the lower \its, i.e., the transaction that has been in the system for a longer time. \subsection{Pseudocode of \pkto} \label{apn:pcode} \begin{algorithm}[H] \caption{STM $\init()$: Invoked at the start of the STM system. Initializes all the \tobj{s} used by the STM System} \label{alg:init} \begin{algorithmic}[1] \State $\gtcnt$ = 1; \ForAll {$x$ in $\mathcal{T}$} \Comment{All the \tobj{s} used by the STM System} \State add $\langle 0, 0, nil \rangle$ to $x.\vl$; \Comment { $T_0$ is initializing $x$} \label{lin:t0-init} \EndFor; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-begin} \caption{STM $\begt(its)$: Invoked by a thread to start a new transaction $T_i$. The thread can pass a parameter $its$, which is the initial timestamp from when this transaction was invoked for the first time. If this is the first invocation, then $its$ is $nil$. It returns the tuple $\langle id, \gcts \rangle$} \begin{algorithmic}[1] \State $i$ = unique-id; \Comment{A unique id to identify this transaction. It could be the same as \gcts} \State \Comment{Initialize transaction-specific local and global variables} \If {($its == nil$)} \State \Comment{$\gtcnt.get\&Inc()$ returns the current value of \gtcnt and atomically increments it} \State $\gits_i = \gcts_i = \gtcnt.get\&Inc()$; \Else \State $\gits_i = its$; \State $\gcts_i = \gtcnt.get\&Inc()$; \EndIf \State $rset\xspace_i = wset\xspace_i = null$; \State $\gstat_i$ = \texttt{live}; \State $\gval_i = T$; \State return $\langle i, \gcts_i\rangle$ \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-read} \caption{STM $read(i, x)$: Invoked by a transaction $T_i$ to read \tobj{} $x$.
It returns either the value of $x$ or $\mathcal{A}$} \begin{algorithmic}[1] \If {($x \in rset\xspace_i$)} \Comment{Check if the \tobj{} $x$ is in $rset\xspace_i$} \State return $rset\xspace_i[x].val$; \ElsIf {($x \in wset\xspace_i$)} \Comment{Check if the \tobj{} $x$ is in $wset\xspace_i$} \State return $wset\xspace_i[x].val$; \Else \Comment{\tobj{} $x$ is not in $rset\xspace_i$ and $wset\xspace_i$} \State lock $x$; lock $\glock_i$; \If {$(\gval_i == F)$} return abort(i); \label{lin:rabort} \EndIf \State \Comment{ \findls: From $x.\vl$, returns the largest \ts value less than $\gcts_i$. If no such version exists, it returns $nil$ } \State $curVer = \findls(\gcts_i,x)$; \If {$(curVer == nil)$} return abort(i); \Comment{Proceed only if $curVer$ is not nil} \EndIf \cmnt { \State /* \findsl: From $x.\vl$, returns the smallest \ts value greater than $\gwts_i$. If no such version exists, it returns $nil$ */ \State $nextVer = \findsl(\gwts_i,x)$; \If {$(nextVer \neq nil)$} \State \Comment{Ensure that $\tutl_i$ remains smaller than $nextVer$'s \vltl} \State $\tutl_i = min(\tutl_i, x[nextVer].vltl-1)$; \EndIf \State \Comment{$\tltl_i$ should be greater than $x[curVer].\vltl$} \State $\tltl_i = max(\tltl_i, x[curVer].\vltl + 1)$; \If {($\tltl_i > \tutl_i$)} \Comment{If the limits have crossed each other, then $T_i$ is aborted} \State return abort(i); \EndIf } \State $val = x[curVer].v$; add $\langle x, val \rangle$ to $rset\xspace_i$; \State add $T_i$ to $x[curVer].rl$; \State unlock $\glock_i$; unlock $x$; \State return $val$; \EndIf \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-write} \caption{STM $write_i(x,val)$: A Transaction $T_i$ writes into local memory} \begin{algorithmic}[1] \State Append the $d\_tuple \langle x,val \rangle$ to $wset\xspace_i$. 
\State return $ok$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:ap-tryc} \caption{STM $\tryc()$: Returns $ok$ on commit else return Abort} \begin{algorithmic}[1] \State \Comment{The following check is an optimization which needs to be performed again later} \State lock $\glock_i$; \If {$(\gval_i == F)$} \State return abort(i); \EndIf \State unlock $\glock_i$; \State $\lrl = \allrl = nil$; \Comment{Initialize larger read list (\lrl), all read list (\allrl) to nil} \ForAll {$x \in wset\xspace_i$} \State lock $x$ in pre-defined order; \State \Comment{ \findls: returns the version with the largest \ts value less than $\gcts_i$. If no such version exists, it returns $nil$. } \State $\prevv = \findls(\gcts_i, x)$; \Comment{\prevv: largest version smaller than $\gcts_i$} \If {$(\prevv == nil)$} \Comment{There exists no version with \ts value less than $\gcts_i$} \State lock $\glock_i$; return abort(i); \EndIf \State \Comment{\textbf{\getl}: obtain the list of reading transactions of $x[\prevv].rl$ whose $\gcts$ is greater than $\gcts_i$} \State $\lrl = \lrl \cup \getl(\gcts_i, x[\prevv].rl)$; \EndFor \Comment{$x \in wset\xspace_i$} \State $\relll = \lrl \cup T_i$; \Comment{Initialize relevant Lock List (\relll)} \ForAll {($T_k \in \relll$)} \State lock $\glock_k$ in pre-defined order; \Comment{Note: Since $T_i$ is also in $\relll$, $\glock_i$ is also locked} \EndFor \State \Comment{Verify if $\gval_i$ is false} \If {$(\gval_i == F)$} \State return abort(i); \EndIf \State $\abl = nil$ \Comment{Initialize abort read list (\abl)} \State \Comment{Among the transactions in $T_k$ in $\lrl$, either $T_k$ or $T_i$ has to be aborted} \ForAll {$(T_k \in \lrl)$} \If {$(\isab(T_k))$} \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted} \State continue; \EndIf \If {$(\gits_i < \gits_k) \land (\gstat_k == \texttt{live})$} \State \Comment{Transaction $T_k$ has lower priority and is not yet committed. 
So it needs to be aborted} \State $\abl = \abl \cup T_k$; \Comment{Store $T_k$ in \abl} \Else \Comment{Transaction $T_i$ has to be aborted} \State return abort(i); \EndIf \EndFor \algstore{myalgtc} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \algrestore{myalgtc} \cmnt { \algstore{tryc-break} \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:ap-tryc-cont} \caption{STM $\tryc()$: Continued} \begin{algorithmic}[1] \algrestore{tryc-break} \State \Comment{Ensure that $\vutl_i$ is less than \vltl of versions in $\nvl$} \ForAll {$(ver \in \nvl)$} \State $x$ = \tobj of $ver$; \State $\tutl_i = min(\tutl_i, x[ver].\vltl - 1)$; \EndFor } \State \Comment{Store the current value of the global counter as commit time and increment it} \State $\ct = \gtcnt.get\&Inc()$; \cmnt{ \ForAll {$(T_k \in \srl)$} \Comment{Iterate through $\srl$ to see if $T_k$ or $T_i$ has to aborted} \If {$(\isab(T_k))$} \State \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted} \State continue; \EndIf \If {$(\tltl_k \geq \tutl_i)$} \label{lin:tk-check} \Comment{Ensure that the limits do not cross for both $T_i$ and $T_k$} \If {$(\gstat_k == live)$} \Comment{Check if $T_k$ is live} \If {$(\gits_i < \gits_k)$} \State \Comment{Transaction $T_k$ has lower priority and is not yet committed. So it needs to be aborted} \State $\abl = \abl \cup T_k$; \Comment{Store $T_k$ in \abl} \Else \Comment{Transaction $T_i$ has to be aborted} \State return abort(i); \EndIf \Comment{$(\gits_i < \gits_k)$} \Else \Comment{($T_k$ is committed. 
Hence, $T_i$ has to be aborted)} \State return abort(i); \EndIf \Comment{$(\gstat_k == live)$} \EndFor \Comment{$(\tltl_k \geq \tutl_i)$} \EndFor {$(T_k \in \srl)$} \State \Comment{At this point $T_i$ can't abort.} \State $\tltl_i = \tutl_i$; \label{lin:ti-updt} \State \Comment{Since $T_i$ can't abort, we can update $T_k$'s \tutl} \ForAll {$(T_k \in \srl)$} \If {$(\isab(T_k))$} \State \Comment{Transaction $T_k$ can be ignored since it is already aborted or about to be aborted} \State continue; \EndIf \State /* The following line ensure that $\tltl_k \leq \tutl_k < \tltl_i$. Note that this does not cause the limits of $T_k$ to cross each other because of the check in \Lineref{tk-check}.*/ \State $\tutl_k = min(\tutl_k, \tltl_i - 1)$; \EndFor } \ForAll {$T_k \in \abl$} \Comment{Abort all the transactions in \abl} \State $\gval_k = F$; \EndFor \cmnt { \algstore{tryc-break2} \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:tryc-cont2} \caption{STM $\tryc()$: Continued Again} \begin{algorithmic}[1] \algrestore{tryc-break2} } \State \Comment{Having completed all the checks, $T_i$ can be committed} \ForAll {$(x \in wset\xspace_i)$} \State $newTuple = \langle \gcts_i, wset\xspace_i[x].val, nil \rangle$; \Comment { Create new v\_tuple: \gcts, val, \rl for $x$} \If {($|x.vl| > k$)} \State replace the oldest tuple in $x.\vl$ with $newTuple$; \Comment{$x.\vl$ is ordered by timestamp} \Else \State add a $newTuple$ to $x.vl$ in sorted order; \EndIf \EndFor \Comment{$x \in wset\xspace_i$} \State $\gstat_i$ = \texttt{commit}; \State unlock all variables; \State return $\mathcal{C}$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \label{alg:isab} \caption{$\isab(T_k)$: Verifies if $T_k$ is already aborted or its \gval flag is set to false, implying that $T_k$ will be aborted soon} \begin{algorithmic}[1] \If {$(\gval_k == F) \lor (\gstat_k == \texttt{abort}) \lor (T_k \in \abl)$} \State return $T$; \Else \State return $F$; \EndIf \end{algorithmic}
\end{algorithm} \begin{algorithm}[H] \label{alg:ap-abort} \caption{$abort(i)$: Invoked by various STM methods to abort transaction $T_i$. It returns $\mathcal{A}$} \begin{algorithmic}[1] \State $\gval_i = F$; $\gstat_i$ = \texttt{abort}; \State unlock all variables locked by $T_i$; \State return $\mathcal{A}$; \end{algorithmic} \end{algorithm} We have the following property on the correctness of \pkto. \begin{property} \label{prop:pmvto-correct} Any history generated by \pkto is strict-serializable. \end{property} Consider a history $H$ generated by \pkto. Let the \emph{committed} \ssch of $H$ be $CSH = \shist{\comm{H}}{H}$. It can be shown that $CSH$ is \opq, with the equivalent serialized history $SH'$ being one in which all the transactions of $CSH$ are ordered by their \cts{s}. Hence, $H$ is \stsble. \cmnt{ The read \op{} described in \stref{read} is a common idea used in \mvstm{s} which are based on \ts{s} such as \mvto{} \cite{Kumar+:MVTO:ICDCN:2014}, Generic Multi-Version STM \cite{LuScott:GMV:DISC:2013} etc. Unlike these STMs which have infinite versions, a read \op{} in \pmvto{} can abort. Similarly, the $\tryc{}$ \op{} described in \stref{tryc} is very similar to \mvto, but modified for finite versions. } \noindent \textbf{Possibility of Starvation in \pkto:} As discussed above, \pkto gives priority to transactions having lower \its. But a transaction $T_i$ having the lowest \its could still abort for one of the following reasons: (1) upon executing the $\tread(x)$ \mth, if it does not find any version of $x$ to read from; this can happen if all the versions of $x$ present have a timestamp greater than $\tcts{i}$. (2) While executing \stref{verify}(i) of the $\tryc$ \mth, $T_i$ may wish to create a version of $x$ with timestamp $i$, but some other transaction, say $T_k$, has read from a version with timestamp $j$ where $j<i<k$. In this case, $T_i$ has to abort if $T_k$ has already \emph{committed}. This issue is not restricted to \pkto.
It can occur in \pmvto (and \pmvtogc) due to point (2) described above. \begin{figure}[H] \centering \scalebox{0.5}{\input{figs/pktod.pdf_t}} \captionsetup{justification=centering} \caption{Pictorial representation of execution under \pkto} \label{fig:kstmvl} \end{figure} We illustrate this problem in \pkto with \figref{kstmvl}. Here transaction $T_{26}$, whose \its 26 is the lowest among all the live transactions, starves due to \stref{verify}.(i) of the $\tryc$ \mth. The \emph{first time}, $T_{26}$ gets \emph{aborted} because a higher-timestamp transaction $T_{29}$ in the \rlist of $x[25]$ has \emph{committed}. We have denoted this by a `(C)' next to the version. The \emph{second time}, $T_{26}$ retries with the same \its 26 but a new \cts 33. Now when $T_{33}$ comes to commit, suppose another transaction $T_{34}$ in the \rlist of $x[25]$ has already \emph{committed}. This will cause $T_{33}$ (another incarnation of $T_{26}$) to abort again. Such a scenario can repeat indefinitely, so that no \inc of $T_{26}$ ever commits, leading to its starvation. \noindent \textbf{Garbage Collection in \mvsftmgc and \pmvtogc:} Maintaining multiple versions to increase performance and decrease the number of aborts leads to the creation of many versions that are no longer of any use and hence occupy space. Such garbage versions need to be reclaimed; hence, we employ garbage collection over these unwanted versions. This technique conserves memory space and, in turn, improves performance, since transactions no longer needlessly traverse garbage versions. We maintain a global list, i.e., one shared across all transactions, that keeps track of all the live transactions in the system; we call this list \livel. Each transaction creates its entry in \livel at the beginning of its life cycle. Under the optimistic approach of STM, each transaction performs its updates on the shared memory in the $\tryc$ phase.
In this phase, each transaction performs some validations; if all the validations succeed, the transaction makes its changes, i.e., creates versions of the corresponding \tobj{s} in the shared memory. While creating a version, every transaction checks, using the \livel{} data structure, whether it is the live transaction with the least timestamp in the system. If so, the current transaction deletes all the versions of that \tobj{} and creates one of its own. Otherwise, the transaction performs no garbage collection and deletes no version, and proceeds to create a new version of the next \tobj{} in its write set, if any. \figref{ksftm} and \figref{pkto} show that both \mvsftmgc and \pmvtogc perform better than \mvsftm and \pmvto across all workloads. \ignore{ \begin{enumerate} \item \textbf{read rule:} $T_i$ on invoking $r_i(x)$ reads the value $v$, where $v$ is the value written by a transaction $T_j$ that commits before $r_i(x)$ and $j$ is the largest timestamp $\leq i$. \item \textbf{write rule:} $T_i$ writes into local memory. \item \textbf{commit rule:} $T_i$ on invoking \tryc{} \op{} checks for each \tobj{} $x$, in its \textit{Wset}: \begin{enumerate} \item If a transaction $T_k$ has read $x$ from $T_j$, i.e. $r_k(x, v) \in \evts{T_k}$ and $w_j(x, v) \in \evts{T_j}$ and $j < i < k$, then $\tryc_i$ returns abort, \item otherwise, the transaction is allowed to commit. \end{enumerate} \end{enumerate} } \input{sfalgo} \section{Discussion and Conclusion} \label{sec:conc} In this paper, we propose a $K$-version \emph{\stf} STM system, \emph{\ksftm}. The algorithm ensures that if an \emph{aborted} transaction is retried successively, then it will eventually commit. The algorithm maintains $K$ versions, where $K$ can range from one to infinity. For correctness, we show \ksftm{} satisfies strict-serializability \cite{Papad:1979:JACM} and local opacity \cite{KuzSat:NI:ICDCN:2014, KuzSat:NI:TCS:2016}.
To the best of our knowledge, this is the first work to explore \emph{\stfdm} with \mvstm{s}. Our experiments show that \ksftm performs better than single-version STMs (ESTM, NOrec STM) under high contention and also better than the single-version \emph{\stf} STM \svsftm, developed based on the principle of priority. On the other hand, its performance is comparable to or slightly worse than the multi-version STM \pkto (by around 2\%). This is the cost of the overhead required to achieve \emph{\stfdm}, which we believe is a marginal price. In this document, we have not considered a transactional solution based on two-phase locking (2PL) and its multi-version variants \cite{WeiVoss:2002:Morg}. With a carefully designed 2PL solution, one can ensure that none of the transactions abort \cite{WeiVoss:2002:Morg}. But this requires advance knowledge of the code of the transactions, which may not always be available to the STM library. Without such knowledge, it is possible that a 2PL solution can deadlock and cause further aborts, which will raise the issue of \emph{\stfdm} again. \noindent Since we have considered \stsble as one of the \emph{correctness-criteria}, this algorithm can be extended to databases as well. In fact, to the best of our knowledge, there has been no prior work on \emph{\stfdm} in the context of database concurrency control. \subsection{Motivation for Starvation Freedom in Multi-Version Systems } \label{sec:sftm} In this section, we first describe the starvation-freedom solution used for a single version, i.e., the \sftm algorithm, and then its drawback. \subsubsection{Illustration of \sftm} \label{subsec:sftm-main} The forward-oriented optimistic concurrency control protocol (\focc) is a commonly used optimistic algorithm in databases \cite[Chap 4]{WeiVoss:2002:Morg}. In fact, several STM systems are also based on this idea.
In a typical STM system (also in database optimistic concurrency control algorithms), a transaction execution is divided into two phases -- a \emph{\rwph} and a \emph{\tryph} (also referred to as the validation phase in databases). The various algorithms differ in how the \tryph{} executes. Let the write-set or wset\xspace{} and read-set or rset\xspace{} of a $t_i$ denote the sets of \tobj{s} written and read by $t_i$, respectively. In \focc{}, a transaction $t_i$ in its \tryph{} is validated against all live transactions that are in their \rwph{} as follows: $\langle wset\xspace(t_i) \cap (\forall t_j: rset\xspace^{n}(t_j)) = \Phi \rangle$. This implies that the wset\xspace{} of $t_i$ cannot have any conflict with the current rset\xspace{} of any transaction $t_j$ in its \rwph. Here $rset\xspace^{n}(t_j)$ denotes the rset\xspace{} of $t_j$ up to the point of validation of $t_i$. If there is a conflict, then either $t_i$ or $t_j$ (all transactions conflicting with $t_i$) is aborted. A commonly used approach in databases is to abort $t_i$, the validating transaction. In \sftm{} we use \emph{\ts{s}} which are monotonically increasing. We implement the \ts{s} using atomic counters. Each transaction $t_i$ has two time-stamps: (i) \emph{current time-stamp or \cts}: this is a unique \ts{} allotted to $t_i$ when it begins; (ii) \emph{initial time-stamp or \its}: this is the same as \cts{} when a transaction $t_i$ starts for the first time. When $t_i$ aborts and re-starts later, it gets a new \cts, but it retains its original \cts{} as its \its. The value of \its{} is retained across aborts. For achieving starvation freedom, \sftm{} uses \its{} with a modification to \focc{} as follows: a transaction $t_i$ in its \tryph{} is validated against all other conflicting transactions, say $t_j$, which are in their \rwph. The \its{} of $t_i$ is compared with the \its{} of any such transaction $t_j$.
If the \its{} of $t_i$ is smaller than the \its{} of all such $t_j$, then all such $t_j$ are aborted while $t_i$ is committed. Otherwise, $t_i$ is aborted. We show that \sftm{} satisfies \opty{} and \stf. \begin{theorem} \label{thm:sftm-correct} Any history generated by \sftm{} is \opq. \end{theorem} \begin{theorem} \label{thm:sftm-stf} \sftm{} ensures \stfdm. \end{theorem} We prove correctness by showing that the conflict graph \cite[Chap 3]{WeiVoss:2002:Morg}, \cite{KuzSat:NI:ICDCN:2014} of any history generated by \sftm{} is acyclic. We show \stfdm{} by showing that for each transaction $t_i$ there eventually exists a global state in which it has the smallest \its. \cmnt{ \subsection{Illustration of \sftm} \label{sec:sftm-illus} } \figref{sftmex} shows a sample execution of \sftm. It compares the execution of \focc{} with \sftm. The execution on the left corresponds to \focc, while the execution on the right is of \sftm{} for the same input. It can be seen that each transaction has two \ts{s} in \sftm; they correspond to \cts{} and \its{}, respectively. Thus, transaction $T_{1,1}$ implies that \cts{} and \its{} are $1$. In this execution, transaction $T_3$ executes the read \op{} $r_3(z)$ and is aborted due to a conflict with $T_2$. The same happens with $T_{3,3}$. Transaction $T_5$ is a re-execution of $T_3$. With \focc{}, $T_5$ again aborts due to a conflict with $T_4$. In the case of \sftm{}, $T_{5,3}$, which is a re-execution of $T_{3,3}$, has the same \its{} $3$. Hence, when $T_{4,4}$ validates in \sftm, it aborts as $T_{5,3}$ has the lower \its. Later $T_{5,3}$ commits. It can be seen that \its{s} prioritize the transactions under conflict, and the transaction with the lower \its{} is given higher priority.
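The \its-based arbitration just described can be sketched as follows. This is an illustrative, single-threaded Python sketch under our own naming assumptions (\texttt{Transaction}, \texttt{validate}, \texttt{retry} are invented names, not part of the paper's C++ implementation); the global atomic counter is stood in by a plain iterator.

```python
import itertools

_counter = itertools.count(1)  # stand-in for the global atomic counter


class Transaction:
    def __init__(self, its=None):
        self.cts = next(_counter)  # fresh current timestamp on every (re)start
        # initial timestamp: equals cts on first start, retained across aborts
        self.its = its if its is not None else self.cts
        self.live = True


def validate(ti, conflicting):
    """FOCC validation modified with the SFTM ITS rule: ti commits only if
    its ITS is smaller than that of every live conflicting transaction,
    aborting those; otherwise ti itself aborts (and may retry later)."""
    live_conflicts = [tj for tj in conflicting if tj.live]
    if all(ti.its < tj.its for tj in live_conflicts):
        for tj in live_conflicts:
            tj.live = False  # abort every lower-priority conflicting transaction
        return True          # ti commits
    return False             # ti aborts; a retry keeps its ITS


def retry(ti):
    """Re-incarnate an aborted transaction: new CTS, same ITS."""
    return Transaction(its=ti.its)
```

Because an aborted incarnation keeps its \its, a transaction that keeps retrying eventually holds the smallest \its among its conflicting live transactions and wins validation, which is the intuition behind the \stfdm argument above.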
\begin{figure}[tbph] \centerline{\scalebox{0.5}{\input{figs/sftm.pdf_t}}} \caption{Sample execution of \sftm} \label{fig:sftmex} \end{figure} \subsubsection{Drawback of \sftm} \label{subsec:sftm-drawback} Figure~\ref{fig:stmvl1} represents the history $H$: $r_1(x,0)r_1(y,0) w_2(x,10)w_3(y,15)a_2 a_3 c_1$. It has three transactions, $T_1$, $T_2$ and $T_3$. $T_1$ has the lowest timestamp and becomes slow after performing its reads. $T_2$ and $T_3$ want to write to $x$ and $y$, respectively, but when they reach the validation phase, they get aborted because $T_1$ has performed $r_1(x)$ and $r_1(y)$ and has not yet committed. However, when multiple versions are used, both $T_2$ and $T_3$ can commit, and $T_1$ can still read from $T_0$. The equivalent serial history is $T_1 T_2 T_3$. \begin{figure}[H] \center \scalebox{0.55}{\input{figs/abc.pdf_t}} \caption{Pictorial representation of execution under SFTM} \label{fig:stmvl1} \end{figure} \input{ap-dssftm} \section{Experimental Evaluation} \label{sec:exp} For the performance evaluation of \ksftm against state-of-the-art STMs, we implemented the algorithms \pkto, \svsftm \cite{Gramoli+:TM2C:Eurosys:2012, WaliSten:Starve:CC:2009, Spear+:CCM:PPoPP:2009} along with \ksftm in C++ \footnote{Code is available here: https://github.com/PDCRL/KSFTM}. We used the available implementations of NOrec STM \cite{Dalessandro:2010:NSS:1693453.1693464}, and ESTM \cite{Felber:2017:jpdc} developed in C++. Although only \ksftm and \svsftm provide \stfdm, we compared against the other STMs as well to see their performance in practice. \noindent \textbf{Experimental system:} The experimental system is a 2-socket Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz with 14 cores per socket and 2 hyper-threads (HTs) per core, for a total of 56 threads. Each core has a private 32KB L1 cache and 256 KB L2 cache. The machine has 32GB of RAM and runs Ubuntu 16.04.2 LTS. In our implementation, all threads have the same base priority and we use the default Linux scheduling algorithm.
This satisfies the \asmref{bdtm} (bounded\text{-}termination\xspace) assumption about the scheduler. We ensured that there are no parasitic transactions \cite{BushkovandGuerraoui:2015:COST} in our experiments. \noindent \textbf{Methodology:} Here we have considered two different applications: \textbf{(1)} Counter application - here, each thread invokes a single transaction that performs 10 read/write \op{s} on randomly chosen \tobj{s}. A thread continues to invoke a transaction until it successfully commits. To obtain high contention, we have taken a large number of threads, ranging from 50 to 250, where each thread performs its read/write \op{s} over a set of 5 \tobj{s}. We have performed our tests on three workloads: (W1) Li - Lookup intensive: 90\% read, 10\% write, (W2) Mi - Mid intensive: 50\% read, 50\% write and (W3) Ui - Update intensive: 10\% read, 90\% write. This application is very flexible, as it allows us to examine performance by tweaking different parameters (refer to \subsecref{countercode} for details). \textbf{(2)} Two benchmarks from the STAMP suite \cite{stamp123} - (a) We considered KMEANS, which has low contention with short-running transactions, using 2048 data points of 16 dimensions and 5 clusters in total. (b) We then considered LABYRINTH, which has high contention with long-running transactions, using a grid size of 64x64x3 with 48 paths to route. To study starvation in the various algorithms, we considered the \emph{\wct}, which is the maximum time taken by a transaction, among all the transactions in a given experiment, to commit from its first invocation. This includes the time taken by all the aborted \inc{s} of the transaction as well. To reduce the effect of outliers, we took the average \wct over ten runs as the final result for each application. \cmnt{ and second application is labyrinth from STAMP benchmark \cite{stamp} which has long running transactions with high contention.
\cmnt{In our counter application, each thread invokes a single transaction and within a transaction, a thread performs 10 reads/writes \op{s} on randomly chosen \tobj{s} from a set of 5 \tobj{s} to increase the contention level. A thread continues to invoke a transaction until it successfully commits.To increase the contention we considered multiple threads. We have the control of greater flexibility with counter application. So, }We have performed our tests on three workloads stated as: (W1) Li - Lookup intensive (90\% read, 10\% write), (W2) Mi - Mid intensive (50\% read, 50\% write) and (W3) Ui - Update intensive (10\% read, 90\% write). \cmnt{We have also checked the performance on labyrinth which is showing similar results. Due to brevity of space, the comparison of algorithms are in \apnref{apn-result}.} For accurate results, we took an average of ten runs as the final result for each algorithm.} \begin{figure} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{figs/wcts1.pdf} \vspace{-.2cm} \caption{Performance analysis on workload $W1$, $W2$, $W3$}\label{fig:wcts} \end{figure} \noindent \textbf{Results Analysis:} \figref{wcts} illustrates the \wct analysis of \ksftm against the above-mentioned STMs for the counter application under the workloads $W1$, $W2$ and $W3$, while varying the number of threads from 50 to 250. For \ksftm and \pkto, we chose the value of K as 5 and C as 0.1, as the best results were obtained with these parameters. We can see that \ksftm performs the best for all three workloads. \ksftm gives average speedups on \wct by factors of 1.22, 1.89, 23.26 and 13.12 over \pkto, \svsftm, NOrec STM and ESTM, respectively. \ignore{ Under high contention the time taken by the longest running transaction to commit is considered as the worst case time. We implemented 3 variants of \ksftm (\mvsftm, \mvsftmgc, and \ksftm) and \pkto (\pmvto, \pmvtogc, and \pkto) and tested on all the workloads $W1$ $W2$ and $W3$.
\ksftm outperforms \mvsftm and \mvsftmgc by a factor of 2.1 and 1.5. Similarly, \pkto outperforms \pmvto and \pmvtogc by a factor of 2 and 1.35. These results show that maintaining finite versions corresponding to each \tobj performs better than maintaining infinite versions and garbage collection on infinite versions corresponding to each \tobj. We identified the best value of K as 5 and optimal value of $C$ as 0.1 for \ksftm on counter application. we ran our experiment, varying value of K and keeping the number of threads as 64 on workload $W1$ and obtained the optimal value of $K$ in \ksftm is 5 as shown in \figref{worstcase}.(b). Similarly, we calculate the best value of $K$ as 5 for \pkto on the same parameters. The optimal value of $C$ as 0.1 for \ksftm on the above parameters (details described in \apnref{apn-result} and technical report \cite{DBLP:journals/corr/abs-1709-01033}). } \figref{stamp}(a) shows the \wct analysis for KMEANS, while \figref{stamp}(b) shows it for LABYRINTH. In this analysis we have not considered ESTM, as the integrated STAMP code for ESTM is not publicly available. For KMEANS, \ksftm performs 1.5 and 1.44 times better than \pkto and \svsftm. But NOrec performs 1.09 times better than \ksftm. This is because KMEANS has short-running transactions with low contention. As a result, the commit time of the transactions is also low. On the other hand, for LABYRINTH, \ksftm again performs the best. It performs 1.14, 1.4 and 2.63 times better than \pkto, \svsftm and NOrec, respectively. This is because LABYRINTH has high contention with long-running transactions, which results in longer commit times for transactions. \figref{stamp}(c) shows the stability of the \ksftm algorithm over time for the counter application. Here we fixed the number of threads to 32, $K$ as 5, $C$ as 0.1, and \tobj{s} as 1000, along with a 5-second warm-up period on the $W1$ workload. Each thread invokes transactions until its time-bound of 60 seconds expires.
We measured the number of transactions committed over time in increments of 5 seconds. The experiment shows that \ksftm is stable over time, which supports the claim that \ksftm's performance will continue in the same manner if the duration is increased further. \vspace{1mm} \begin{figure} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{figs/stamp1.pdf} \vspace{-.2cm} \caption{Performance analysis on KMEANS, LABYRINTH and KSFTM's Stability }\label{fig:stamp} \end{figure} \cmnt{ \begin{figure} \captionsetup{justification=centering} \includegraphics[width=12cm, height=8cm]{figs/Graph5.pdf} \vspace{-.5cm} \caption{Worst case time analysis, Optimal value of K and \ksftm stability}\label{fig:worstcase} \end{figure} Our experimental goals are to: (G1) identify the best value of K (i.e., number of versions) in the propose algorithm \ksftm; (G2) to evaluate the performance of all our proposed algorithms (\ksftm, \mvsftm, \mvsftmgc, \pkto \pmvto, \pmvtogc); (G3) to evaluate the overhead of starvation-freedom by comparing the performance of \ksftm and \emph{non-\stf} \pkto STM, and (G4) to compare the performance of \ksftm with state-of-the-art STMs (NOrec, ESTM, MVTO, and \svsftm) on different workloads. \ignore{ \color{red} It can be seen that achieving \stfdm involves some overhead. We want to understand how costly is this overhead by comparing the performance of \ksftm with finite version non-starvation free STM - \pkto. \color{black} } \begin{figure} \captionsetup{justification=centering} \includegraphics[width=12cm, height=6cm]{figs/Graph4.pdf} \caption{Performance on workload $W1$, $W2$, $W3$}\label{fig:performance} \end{figure} \noindent \textbf{Experimental system:} The experimental system is a 2-socket Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz with 14 cores per socket and 2 hyper-threads (HTs) per core, for a total of 56 threads. Each core has a private 32KB L1 cache and 256 KB L2 cache.
The machine has 32GB of RAM and runs Ubuntu 16.04.2 LTS. In our implementation, all threads have the same base priority and we use the default Linux scheduling algorithm. This satisfies the \asmref{bdtm} (bounded\text{-}termination\xspace), about the scheduler and no parasitic transaction \cite{BushkovandGuerraoui:2015:COST} exist. \ignore{ \noindent \textbf{STM implementations:} The application first creates N-threads, each thread, in turn, invokes a transaction. Each transaction has two phases, the first phase, i.e., the read-write phase consists of read (from the shared memory) and write (to the transaction's local buffer) operations followed by the second phase, i.e., the commit phase, where the actual writes are made visible in the shared memory, and the transaction tries to commit. We have specifically chosen such a test application to achieve our experimental goals. Also to test the starvation of transactions, we wanted to vary the various parameters and our chosen application suffice this need too. We get the inspiration of single version starvation-freedom, \svsftm from the literature by Gramoli et al. \cite{Gramoli+:TM2C:Eurosys:2012}, Waliullah and Stenstrom \cite{WaliSten:Starve:CC:2009}, Spear et al. \cite{Spear+:CCM:PPoPP:2009}. } \noindent \textbf{Methodology:} To test the algorithms, we considered a test-application in which each thread invokes a single transaction. Within a transaction, a thread performs 10 reads/writes \op{s} on randomly chosen \tobj{s} from a set of 1000 \tobj{s}. A thread continues to invoke a transaction until it successfully commits. We have considered three types of workloads: (W1) Li - Lookup intensive (90\% read, 10\% write), (W2) Mi - Mid intensive (50\% read, 50\% write) and (W3) Ui - Update intensive (10\% read, 90\% write). For accurate results, we took an average of ten runs as the final result for each algorithm. 
\noindent \textbf{Results Analysis:} We implemented 3 variants of \ksftm (\mvsftm, \mvsftmgc, and \ksftm) and \pkto (\pmvto, \pmvtogc, and \pkto) to do the performance analysis on different workloads $W1$ (Li), $W2$ (Mi) and $W3$ (Ui) respectively. \ksftm outperforms \mvsftm and \mvsftmgc by a factor of 2.1 and 1.5. Similarly, \pkto outperforms \pmvto and \pmvtogc by a factor of 2 and 1.35. These results show that maintaining finite versions corresponding to each \tobj performs better than maintaining infinite versions and garbage collection on infinite versions corresponding to each \tobj. To identify the best value of K for \ksftm on counter application, we ran our experiment, varying value of K and keeping the number of threads as 64 on workload $W1$ and obtained the optimal value of $K$ in \ksftm is 5 as shown in \figref{worstcase}.(b). Similarly, we calculate the best value of $K$ as 5 for \pkto on the same parameters. The optimal value of $C$ as 0.1 for \ksftm on the above parameters (details described in \apnref{apn-result} and technical report \cite{DBLP:journals/corr/abs-1709-01033}). \figref{performance} (a), (b) and (c) shows the performance of all the algorithms (proposed as well as state-of-the-art STMs) for workloads $W1$, $W2$ and $W3$ respectively. For workload $W1$, the graph shows that \ksftm outperforms \svsftm, NOrec, ESTM, and MVTO by a factor of 2.5, 3, 1.7 and 1.5. \ignore{ESTM performs better than \ksftm in low contention when the thread count is 16 and less, while this is not the case for high contention when the thread count is increased to 32 and more.} For workload $W2$, \ksftm exceeds \svsftm, NOrec, ESTM, and MVTO by a factor of 1.5, 2, 1.6 and 1.3 respectively. For workload $W3$, \ksftm again beats \svsftm, NOrec, ESTM, and MVTO by 1.7, 3.3, 3 and 1.4 at thread count 64. So, \ksftm outperforms all the other STM algorithms in low as well as high contention. \ksftm's performance is comparable to \pkto in all workloads. 
In fact, in all workloads, the performance of \ksftm is 2\% less than \pkto. But as discussed in \subsecref{mvto}, that the transactions can possibly starve with \pkto while this is not the case with \ksftm. We believe that this is the overhead that one has to pay to achieve \stfdm. On the positive side, the overhead is very small, being only around 2\% as compared to \pkto. On the other hand, the performance of \ksftm is much better than single-version \stf algorithm \svsftm. We analyzed time to achieve the \stfdm for \ksftm in worst-case and compared \svsftm, NOrec, ESTM, MVTO and \pkto. High contention certainly increases probability of transactions being aborted. So we have considered varying the threads from 50 to 400, each thread invoking 1000 transaction with $K$ as 5, $C$ as 0.1, \tobj{s} as 1000 and with $W1$ workload. \figref{worstcase}.(a) shows that the worst-case time for the commit of a transaction in \ksftm is consistently better. \begin{figure} \captionsetup{justification=centering} \includegraphics[width=12cm, height=6cm]{figs/Graph5.pdf} \vspace{-.5cm} \caption{Worst case time analysis, Optimal value of K and \ksftm stability}\label{fig:worstcase1} \end{figure} \vspace{-.05cm} \ignore{ \begin{figure} \centering \begin{minipage}[b]{0.49\textwidth} \centering \includegraphics[width=6cm, height=5cm]{figs/ctime.pdf} \caption{Worst-Case Time Comparison}\label{fig:worst-case time1} \end{minipage} \hfill \begin{minipage}[b]{0.49\textwidth} \includegraphics[width=7cm, height=4cm]{figs/ctrans.pdf} \centering \caption{KSFTM Stability}\label{fig:stability1} \end{minipage} \end{figure} } \cmnt{ \begin{figure} \captionsetup{justification=centering} \includegraphics[width=12cm, height=9cm]{figs/ctime.pdf} \caption{\Large Worst-Case Time Comparison}\label{fig:worst-case time2} \end{figure} \begin{figure} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{figs/ctrans.pdf} \caption{\Large KSFTM Stability}\label{fig:stability2} \end{figure} } 
To check the stability of our proposed algorithm \ksftm over time, we performed an experiment and shown in \figref{worstcase}.(c) where we fixed the number of threads to 32, $K$ as 5, $C$ as 0.1, \tobj{s} as 1000, along with 5 seconds warm-up period on $W1$ workload. Each thread keeps on invoking and executing transactions until $TIME$-$BOUND$ is over, which in our case is 60 seconds. We have done the experiments on number of transactions committed with increase in time as $t = t+ \epsilon$, where $\epsilon$ is fixed to 5 seconds. The experiment shows that over time \ksftm is stable which helps to hold the claim that \ksftm's performance will continue in same manner if time is increased to higher order. To show the benefits of starvation freedom utilizing the power of multiple versions in \ksftm, we tested on an application called Labyrinth provided by STAMP benchmark which has long running transactions along with high contention. \ksftm shows better performance than \pkto and \svsftm. } Maintaining multiple versions to increase performance and decrease the number of aborts leads to the creation of many versions that are no longer of any use and hence occupy space. Such garbage versions need to be reclaimed; hence, we employ garbage collection over these unwanted versions. This technique conserves memory space and, in turn, improves performance, since transactions no longer needlessly traverse garbage versions. We maintain a global list, i.e., one shared across all transactions, that keeps track of all the live transactions in the system; we call this list \livel. Each transaction creates its entry in \livel at the beginning of its life cycle. Under the optimistic approach of STM, each transaction performs its updates on the shared memory in the $\tryc$ phase.
In this phase, each transaction performs some validations; if all the validations succeed, the transaction makes its changes, i.e., creates versions of the corresponding \tobj{s} in the shared memory. While creating a version, every transaction checks, using the \livel{} data structure, whether it is the live transaction with the least timestamp in the system. If so, the current transaction deletes all the versions of that \tobj{} and creates one of its own. Otherwise, the transaction performs no garbage collection and deletes no version, and proceeds to create a new version of the next \tobj{} in its write set, if any. \figref{ksftm} represents the three variants of \ksftm (\mvsftm, \mvsftmgc, and \ksftm) and \figref{pkto} shows the three variants of \pkto (\pmvto, \pmvtogc, and \pkto) on all the workloads $W1$, $W2$ and $W3$. \ksftm outperforms \mvsftm and \mvsftmgc by factors of 2.1 and 1.5. Similarly, \pkto outperforms \pmvto and \pmvtogc by factors of 2 and 1.35. These results show that maintaining finite versions corresponding to each \tobj performs better than maintaining infinite versions and garbage collection on infinite versions corresponding to each \tobj.
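The finite-version bookkeeping discussed above -- at most $K$ versions per \tobj, with the oldest version replaced when the list is full (as in the $\tryc$ \mth), and MVTO-style reads from the largest timestamp below the reader's \cts -- can be sketched as follows. This is an illustrative Python sketch; the names \texttt{TObject}, \texttt{create\_version} and \texttt{read\_latest\_before} are our own, not taken from the paper's C++ implementation, and locking/read-list bookkeeping is omitted.

```python
import bisect

K = 5  # maximum number of versions kept per t-object (illustrative default)


class TObject:
    def __init__(self):
        # list of (timestamp, value) tuples, kept sorted by timestamp
        self.versions = []

    def create_version(self, ts, value, k=K):
        """Insert a new version; if k versions already exist, replace
        the oldest one (mirroring the finite-version rule in tryC)."""
        if len(self.versions) >= k:
            self.versions.pop(0)  # drop the oldest version
        bisect.insort(self.versions, (ts, value))  # keep timestamp order

    def read_latest_before(self, cts):
        """MVTO-style read: return the version with the largest timestamp
        strictly smaller than cts, or None if no such version exists
        (in which case the reading transaction would have to abort)."""
        i = bisect.bisect_left(self.versions, (cts,))
        return self.versions[i - 1] if i > 0 else None
```

Note that once old versions are overwritten, a slow reader with a small \cts may find no version below its timestamp and must abort -- exactly the finite-version trade-off the text describes.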
\begin{figure}
\captionsetup{justification=centering}
\includegraphics[width=\linewidth]{figs/ksftm.pdf}
\caption{Time comparison among variants of \ksftm}\label{fig:ksftm}
\end{figure}
\begin{figure}
\captionsetup{justification=centering}
\includegraphics[width=\linewidth]{figs/pkto.pdf}
\caption{Time comparison among variants of \pkto}\label{fig:pkto}
\end{figure}
\begin{figure}
\captionsetup{justification=centering}
\includegraphics[width=\linewidth]{figs/aborts.pdf}
\caption{Abort count on workloads $W1, W2, W3$}\label{fig:aborts}
\end{figure}
\cmnt{ \begin{figure} \captionsetup{justification=centering} \includegraphics[width=13cm, height=18cm]{figs/ksftmpkto.pdf} \caption{\Large Execution under variants of $KSFTM$ and $PKTO$}\label{fig:ksftmpkto} \end{figure} }
\begin{figure}[H]
\captionsetup{justification=centering}
\includegraphics[width=\linewidth]{figs/optimalC.pdf}
\caption{Best value of $K$ and optimal value of $C$ for \ksftm}\label{fig:optimalC}
\end{figure}
\noindent \textbf{Comparison on the basis of abort count:} \figref{aborts} shows the abort-count comparison of \ksftm with \pkto, ESTM, NOrec, MVTO, and \svsftm across all workloads ($W1$, $W2$, and $W3$). The number of aborts in ESTM and NOrec is high compared to the other STM algorithms, while the remaining algorithms (\ksftm, \pkto, MVTO, \svsftm) differ only marginally among themselves. \vspace{.4cm} \noindent \textbf{Best value of $K$ and optimal value of constant \textit{C}:} To identify the best value of $K$ for \ksftm, we ran our experiment varying $K$ with 64 threads on workload $W1$ and obtained 5 as the optimal value of $K$ for \ksftm, as shown in \figref{optimalC}.(a) for the counter application. Similarly, we obtain 5 as the best value of $K$ for \pkto under the same parameters. $C$ is a constant used to calculate the $WTS$ of a transaction, i.e., $\twts{i} = \tcts{i} + C * (\tcts{i} - \tits{i})$, where $C$ is any constant greater than 0.
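The working-timestamp formula above can be checked with a small sketch (Python, hypothetical function name; $C = 0.1$ is the experimentally chosen value):

```python
def wts(cts, its, C=0.1):
    """Working timestamp: the current timestamp plus a penalty
    proportional to how long the transaction has been retrying
    (cts - its), so long-starved transactions gain priority."""
    assert C > 0
    return cts + C * (cts - its)

# A transaction with initial timestamp ITS = 100 retrying at CTS = 150
# gets wts = 150 + 0.1 * (150 - 100) = 155, ahead of a fresh transaction
# with CTS = 150 and ITS = 150, whose wts stays 150.
```

A first-attempt transaction ($\tcts{i} = \tits{i}$) thus has $\twts{i} = \tcts{i}$, and the gap grows with every retry.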
We ran our experiments on workload $W1$ with 64 threads, keeping the other parameters as defined in the methodology of \secref{exp}, and obtained 0.1 as the best value of $C$ for the counter application. Experimental results are shown in \figref{optimalC} (b). \cmnt{ \textbf{Benefit of Starvation freedom and multi-versioning:}Our proposed algorithm provides the dual benefit of starvation freedom utilizing the power of multiple versions. \figref{lib} depicts that proposed algorithm KSFTM performs better than non-starvation free finite version algorithm PKVTO when tested on an application called Labyrinth provided by STAMP benchmark which has long running transactions and has very high contention. For the same application we experimented our algorithm against single version starvation freedom SV-SFTM algorithm, results shown in the \figref{lib} which supports the fact that multi-versioning provides better performance than single versioning. The experiment shows the worst case time a transaction can take over high contention and long running transactions environment, where grid size is 128 x 128 x 3 and paths to route are 64.} \cmnt{ \begin{figure} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{figs/Lib.pdf} \caption{Execution under Labyrinth provided by STAMP benchmark }\label{fig:lib} \end{figure} } \cmnt{ \begin{figure} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{figs/abortc.pdf} \caption{ Aborts on workload $W1, W2, W3$ and Optimal value of C as 0.1}\label{fig:optimalC} \end{figure} } \input{ap-counters.tex}
\section{INTRODUCTION} Cometary ices have undergone little or no processing since their formation in the solar nebula and thus they represent an important clue in understanding the conditions in the early Solar System. However, this interpretation is complicated by the radial mixing in the protoplanetary disk and the dynamical evolution of comets, expelled to the outer Solar System from their original formation site. No pattern for the cometary composition and activity has emerged from the objects studied to date \citep{biver02,crovisier07}, and a larger sample is needed. The composition of cometary ices has been studied through spectroscopic observations or {\it in situ} measurements. In the analysis of cometary spectra it is important to establish whether the observed chemical species originate in the nucleus (native source) or in the coma (extended source). For this purpose, spatially-resolved observations are needed to derive the radial composition of the gas outflow. Carbon monoxide (CO) is one of the most abundant molecules, whose mixing ratio relative to water varies greatly among comets \citep{biver06,cometsii}. As a highly volatile compound, with a sublimation temperature of 24~K, CO ice is thought to be a sensitive tracer of the temperature in the environment in which the comets formed. However, its origin as a native compound or a daughter product, and the correlation between the two sources, are still not understood. Infrared observations of the CO ro-vibrational lines indicate that the CO source was mainly native in comet C/1996~B2~(Hyakutake) \citep{disanti03} and significantly extended for comet C/1995~O1~(Hale-Bopp) \citep{disanti01}. This paper complements previous findings with the first CO radial profiles constructed from ultraviolet spectra. Care must be taken in interpreting the brightness profiles, since saturation close to the nucleus could mimic the presence of an extended source.
A comprehensive fluorescence model is developed to derive reliable CO column densities and estimate CO production rates from cometary spectra obtained by the Space Telescope Imaging Spectrograph (STIS) and the Goddard High Resolution Spectrograph (GHRS) on the {\it Hubble Space Telescope} ({\it HST}). Using the long-slit capabilities of the STIS instrument, this paper offers evidence that the native source of CO is dominant in three long period comets. \begin{deluxetable*}{cccccccc} \tabletypesize{\footnotesize} \tablecaption{Comet observations. \label{table1}} \tablewidth{0pt} \tablehead{ \colhead{Dataset} & \colhead{Date \& Time} & \colhead{Exposure Time} & \colhead{r$_{h}$} & \colhead{ v$_{h}$} & \colhead{$\Delta$} & \colhead{Instrument}& \colhead{Mode}\\ & (UT) & (s) & (AU) & (km~s$^{-1}$) & (AU) & & } \startdata \sidehead{C/1996 B2 (Hyakutake)} Z35FN602T & 1996-04-01 13:09:00 & 1305.600 & 0.885 & --38.51 & 0.259 & $HST$-GHRS & G140L;2.0\\ Z35FN604T & 1996-04-01 13:39:00 & 217.600 & 0.885 & --38.51 & 0.260 & $HST$-GHRS & G140L;2.0\\ Z35FN702T & 1996-04-01 14:44:00 & 1305.600 & 0.884 & --38.53 & 0.262 & $HST$-GHRS & G140L;2.0\\ Z35FN704T & 1996-04-01 15:14:00 & 217.600 & 0.883 & --38.53 & 0.262 & $HST$-GHRS & G140L;2.0\\ \tableline \sidehead{C/2000 WM$_{1}$ (LINEAR)} B0500301000 & 2001-12-07 08:50:00 & & & & & & \\ to B0502401000 & 2001-12-10 06:48:00 & 36,467 & 1.120 & --28.30 & 0.340 & $FUSE$ & LWRS\\ O6GR12010 & 2001-12-09 21:33:00 & 1800.194 & 1.085 & --28.26 & 0.357 & $HST$-STIS & G140L;52$\times$0.2\\ O6GR03010 & 2001-12-09 23:09:10 & 1440.197 & 1.084 & --28.26 & 0.358 & $HST$-STIS & G140L;52$\times$0.2\\ O6GR11010 & 2001-12-10 00:45:00 & 1800.198 & 1.083 & --28.26 & 0.359 & $HST$-STIS & G140L;52$\times$0.2\\ \tableline \sidehead{153P/Ikeya-Zhang} O8FY01010 & 2002-04-20 07:28:00 & 1800.199 & 0.887 & 29.07 & 0.426 & $HST$-STIS & G140L;52$\times$0.2\\ O8FY02010 & 2002-04-20 10:40:00 & 1800.197 & 0.889 & 29.08 & 0.426 & $HST$-STIS & G140L;52$\times$0.2\\ 
O8FY03010 & 2002-04-21 07:30:00 & 1440.197 & 0.904 & 29.15 & 0.422 & $HST$-STIS & G140L;52$\times$0.2\\ \tableline \sidehead{C/2001 Q4 (NEAT)} E1390101000 & 2004-04-24 00:39:00 & & & & & & \\ to E1390501000 & 2004-04-24 23:09:00 & 68,282 & 1.030 & --10.80 & 0.510 & $FUSE$ & LWRS\\ O8VK04010 & 2004-04-25 20:03:00 & 1800.199 & 1.024 & --10.26 & 0.473 & $HST$-STIS & G140L;52$\times$0.2\\ O8VK01010 & 2004-04-26 00:49:00 & 1683.008 & 1.023 & --10.18 & 0.468 & $HST$-STIS & G140L;52$\times$0.2\\ O8VK07010 & 2004-04-30 00:51:00 & 1800.200 & 1.002 & --8.37 & 0.386 & $HST$-STIS & G140L;52$\times$0.2\\ \enddata \end{deluxetable*} CO ultraviolet fluorescence in cometary spectra was first detected during sounding rocket observations of comet West \citep[C/1975 V1,][]{feld76} and has been subsequently observed by $IUE$ \citep{tozzi98}, $HST$ \citep{Weaver:1998}, $FUSE$ \citep{feld02b}, the Hopkins Ultraviolet Telescope on the {\it Astro-1} Space Shuttle mission \citep{feld91}, and rockets \citep{woods87,sahnow93,mcp99}. The most important spectral features of CO in the ultraviolet (UV) are its electronic transitions belonging to the $A-X$, $B-X$ and $C-X$ systems, or to the forbidden Cameron bands. Fluorescence is the main emission mechanism for the $A^{1}\Pi-X^{1}\Sigma^{+}$ system (1300~--~1900~\AA), $C^{1}\Sigma^{+}-X^{1}\Sigma^{+}$ system (0--0 band at 1087.9~\AA), and $B^{1}\Sigma^{+}-X^{1}\Sigma^{+}$ system (0--0 band at 1150.5~\AA). The Cameron bands, $a^{3}\Pi-X^{1}\Sigma^{+}$ (1900~--~2800~\AA), are mainly excited by electron impact and photodissociation of CO$_{2}$ \citep{Weaver:1994}. Although observations of the $A-X$ or Fourth Positive Group of CO in the UV spectra of comets have a long history, their interpretation has been difficult compared to that of the $B-X$ and $C-X$ systems at shorter wavelengths that have been observed more recently. 
The spatial information offered by the high resolution long slit $HST$-STIS spectra of comets 153P/Ikeya-Zhang~(C/2002~C1), C/2001~Q4~(NEAT), and C/2000~WM$_{1}$~(LINEAR) shows that close to the nucleus the CO emission in the $A-X$ system is self-absorbed. This follows from the observed change in the relative intensities in various vibrational progressions as the offset from the comet center increases (see \S~3). Self-absorption makes it difficult to derive a reliable value for the CO column density in the absence of a model that takes into account optical depth effects. A detailed model of the $A-X$ system is needed to track the effects of saturation and self-absorption. We constructed a database containing $\sim10^{5}$ transitions between the first 50 rotational levels of each of the 37 vibrational levels of $X^{1}\Sigma^{+}$ and the 23 vibrational levels of $A^{1}\Pi$, taking into account the energy shifts and mixings of the transition probabilities due to interactions between the different parity sublevels \citep{lefloch87,morton94}. Using a simple approximation for fluorescence in subordinate lines \citep{liu96}, expanded with a comprehensive treatment of self-absorption, our model offers an excellent fit to the data. The $A-X$ fluorescence model is used to derive spatial profiles of the CO column density for the three comets observed by STIS. We find that the column density profiles are consistent with a dominant native source in all comets. The resulting production rates range between $3.56 \times 10^{26}$ and $1.76 \times 10^{28}$~molecules~s$^{-1}$ and are corroborated by the results from high resolution $FUSE$ observations of C/2001~Q4~(NEAT) and C/2000~WM$_{1}$~(LINEAR) \citep{weaver02}. 
The $HST$-GHRS observations of comet C/1996~B2~(Hyakutake), a comet with a high CO production rate \citep{biver99,disanti03} and thus strongly affected by large optical depth effects, require the addition of geometrical corrections to the simple plane-parallel atmosphere model in order to reproduce the relative line strengths. The $HST$ observations are described in \S~2. Details about the model and its simplifying assumptions are found in \S~3. Detailed data analysis follows in \S~4, and a discussion of the results is given in \S~5. We conclude with a summary in \S~6. \section{OBSERVATIONS} The $HST$ observations are summarized in Table~\ref{table1}. Comets Ikeya-Zhang and C/2000~WM$_{1}$ (LINEAR) were observed near times of high solar activity, while C/2001~Q4~(NEAT) and Hyakutake were observed closer to solar minimum. The $HST$-STIS observations used the G140L grating and the 52X0.2 aperture (25\arcsec~$\times$~0\farcs2 for L-mode MAMA), resulting in a spatial resolution of $\sim$0\farcs1 and a spectral resolution of $\sim$4~\AA\ in the wavelength range 1150--1730~\AA. The STIS instrument performance is described in \citet{kimble98} and \citet{woodgate98}. The $HST$-GHRS \citep{heap95} observations of comet C/1996~B2~(Hyakutake), covering the 1290--1590~\AA\ bandpass with a spectral resolution of $\sim$4~\AA, do not provide spatial information. The GHRS Large Science Aperture (LSA) translates into a 1\farcs74~$\times~$1\farcs74 post-COSTAR projected area. Given that $FUSE$ and $HST$-STIS observations of comets C/2000~WM$_{1}$~(LINEAR) and C/2001~Q4~(NEAT) were nearly simultaneous, we choose to include in Table~\ref{table1} a summary of $FUSE$ observations for completeness \citep{feld02b,weaver02}. \section{FLUORESCENCE MODEL FOR THE $A^{1}\Pi$~--~$X^{1}\Sigma^{+}$ SYSTEM OF CO} The Fourth Positive Group of CO, $A^{1}\Pi$~--~$X^{1}\Sigma^{+}$, has non-negligible overlap integrals for most of the non-diagonal vibrational transitions. 
While for the $C-X$ and $B-X$ systems it is enough to model just the (0--0) and (1--0) bands, for the $A-X$ system we must take into account all bands connecting all 37 vibrational levels of the $X^{1}\Sigma^{+}$ state with all 23 vibrational levels of the $A^{1}\Pi$ state. Using only the first 50 rotational levels, which should be sufficient for typical physical conditions in cometary comae, the final database contains almost $10^{5}$ transitions. The latest values for the parameters of the $A^{1}\Pi$ \citep{kurucz76,morton94,lefloch92} and $X^{1}\Sigma^{+}$ states \citep{george94} were used to derive the energy levels and transition wavelengths. Following \citet{morton94}, the rotational transition probabilities $A_{J'J''}$ were obtained from the band transition probabilities $A_{v'v''}$ \citep{beegle99,eidelsberg99,borges01,kirby89}. Even though there are three rotational branches (P, Q and R, $\Delta J=0,\pm 1$) possible from the same rotational level of the $A^{1}\Pi$ state ($J'$), due to the parity selection rule, the excitation rates and branching ratios for the P and R branches do not mix with those for the Q branch. Each $J'$ level of the $A^{1}\Pi$ state is split into 2 opposite parity sublevels ($\Lambda$-doubling) that have different energies and interactions with neighboring levels of other electronic states \citep{morton94}. The energy level shifts and the changes in the lifetimes due to these interactions were taken into account when available \citep{morton94,kittrell93,lefloch87}. In a first approximation, the fluorescent emission in the Fourth Positive Group of CO is modeled following the prescription from~\citet{liu96}. This model successfully accounts for saturation in the absorption of the exciting radiation, using a Voigt profile with line overlap.
A quick look at the data, however, shows that this approach only partially accounts for optical depth effects, and we must extend the model to include self-absorption in the fluorescent cascade. The STIS spectra of comet 153P/Ikeya-Zhang shown in Figure~\ref{vlines} are averaged over effective apertures of increasing size, centered on the comet nucleus. Self-absorption is easily recognized by noting that the bands connecting to the ground vibrational level (0--0, 1--0, 2--0, 3--0, 4--0, 5--0) show little decrease in brightness as we increase the integrated slit area and the average column density becomes lower, while the other bands from the progressions originating in the same upper level (see the unblended 0--1, 1--3, 2--2, 2--3, 4--1 and 5--1) decrease strongly in intensity, so that at low column densities the relative line ratios are in agreement with the optically thin limit. \begin{figure} \begin{center} \epsscale{0.8} \rotatebox{90}{ \plotone{f1.eps} } \caption{ Spectra of comet 153P/Ikeya-Zhang extracted from apertures that are 1, 2, 4 and 6 arcseconds wide in the spatial direction, top to bottom, centered on the comet nucleus, showing the effects of self-absorption (see text). The strong features near 1564~\AA\ and 1657~\AA\ are emissions from atomic carbon.\label{vlines}} \end{center} \end{figure} The treatment of self-absorption is made under the assumption of local thermodynamic equilibrium (LTE), using the fact that the photon mean free path corresponds to an optical depth $\tau$ of one. Given that the gas temperature in the coma is less than 100~K for our observations, under LTE conditions we need to take into account only absorption out of the lowest (0) vibrational level of $X^{1}\Sigma^{+}$. Unlike those of H$_{2}$, for example, the ro-vibrational levels of the ground state of CO have short lifetimes, making LTE a good approximation. The lines connecting to $v''$~=~0 for which $\tau$ at line center is greater than one are considered self-absorbed.
Only an effective column density $N_{0}^{ij}=1/\sigma_{line-center}^{ij}$, smaller than the total absorbing column $N$, will contribute to the observed emission in a self-absorbed transition $i-j$. The excess brightness, due to absorption by the total column density $N$, is then redistributed among the optically thin lines originating in the same upper level $i$ according to the branching ratios, as seen in Figure~\ref{mod}. \begin{figure} \begin{center} \epsscale{0.8} \rotatebox{90}{ \plotone{f2.eps} } \caption{ Illustration of the CO fluorescence model. The absorption profile for the $A-X$ (1-0) band is applied to the solar spectrum (upper panel) and the subsequent emission in the ($1-v\arcsec$) progression is shown before and after including self-absorption (middle and lower panels, respectively).\label{mod}} \end{center} \end{figure} The model is constructed in the approximation of a uniform plane-parallel atmosphere. This approximation is of limited use, due to the spherical symmetry of the comet atmosphere and the Sun-comet-Earth geometry. The breakdown is more apparent for larger column densities, such as in the case of comet C/1996~B2~(Hyakutake), discussed in \S~4.2. In the optically thin regime, valid at larger distances from the comet nucleus, as well as for unresolved objects, such geometrical effects are negligible. Introducing geometrical corrections in our model can be done by allowing the column density entering the absorption step to be different from the column density used for emission and self-absorption. This procedure accounts for the fact that the projected column density along the line-of-sight is larger than the column density towards the exciting radiation source. This difference leads both to the decrease in line saturation and to the enhancement of the optically thin lines versus the optically thick ones due to more self-absorption. The method is very robust, providing line intensities in agreement with the data. 
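The redistribution step described above can be illustrated with a toy numerical sketch (hypothetical numbers and function name; the full model applies this line by line, with Voigt profiles, over $\sim10^{5}$ transitions): emission in an optically thick line is capped at the effective column $N_{0}=1/\sigma_{line-center}$, and the excess is handed to the optically thin lines of the same upper level according to their branching ratios, so the total number of emitted photons is conserved.

```python
def redistribute(N, lines):
    """Toy self-absorption for one upper level.

    N     : total absorbing column (cm^-2)
    lines : dicts with branching ratio 'br' and line-center absorption
            cross section 'sigma' (cm^2); only lines back to v'' = 0
            have appreciable sigma under LTE.
    Returns per-line emission in units proportional to absorbed photons.
    """
    thick = [i for i, ln in enumerate(lines) if N * ln["sigma"] > 1.0]
    thin = [i for i in range(len(lines)) if i not in thick]
    emitted = [0.0] * len(lines)
    excess = 0.0
    for i in thick:
        n_eff = 1.0 / lines[i]["sigma"]          # effective column N0
        emitted[i] = lines[i]["br"] * n_eff      # only N0 escapes directly
        excess += lines[i]["br"] * (N - n_eff)   # photons trapped in the line
    thin_br = sum(lines[i]["br"] for i in thin)
    for i in thin:
        emitted[i] = lines[i]["br"] * N
        if thin_br > 0:  # trapped photons re-emerge in the thin lines
            emitted[i] += excess * lines[i]["br"] / thin_br
    return emitted
```

With one thick and one thin line of equal branching ratio, the thin line ends up far brighter than the thick one, reproducing the qualitative behavior seen in Figure~\ref{vlines}.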
\section{DATA ANALYSIS} Although the brightness of the $A-X$ bands for the same comet should differ slightly from one observation to another due to varying comet heliocentric and geocentric distances, as well as due to possible periodic variations in the volatile vaporization rate, this variation is within the error bars of the observations and the use of an averaged STIS spectrum for each comet in order to improve the signal-to-noise ratio is warranted. We also select only the datasets that do not show significant deviations in background and intensity from one another. High resolution solar spectra from the Ultraviolet Spectrometer Polarimeter Experiment on the {\it Solar Maximum Mission} \citep{Tandberg-Hanssen:1981} were used, scaled to match the solar activity at the time of comet observations as deduced from {\it UARS}/Solstice solar flux measurements \citep{Rottman:2001}. The solar spectrum is shifted according to the comet motion relative to the Sun \citep[Swings effect,][]{Dymond:1989}. For the solar \ion{H}{1} Lyman-$\alpha$ and -$\beta$ lines the {\it SOHO}/SUMER data of \citet{Lemaire:2002} were used. \begin{deluxetable*}{ccccc} \tablecaption{Model parameters. 
\label{table2}} \tablewidth{0pt} \tablehead{ \colhead{Comet Name} & \colhead{$T_{\rm ROT}$} & \colhead{Doppler $b$ parameter} & \colhead{Solar Activity} & \colhead{$N_{\rm CO}$ Range} \\ & (K) & (km~s$^{-1}$) & & (10$^{14}$ cm$^{-2}$) } \startdata 153P/Ikeya-Zhang & 82\tablenotemark{a} & 0.91\tablenotemark{b}& max & 61.3--1.49\\ C/2001 Q4 (NEAT) & 68\tablenotemark{c} & 0.79\tablenotemark{d} & min & 68.4--1.86\\ C/2000 WM$_{1}$ (LINEAR) & 77\tablenotemark{c} & 0.72\tablenotemark{b} & max & 0.935--0.327\\ C/1996 B2 (Hyakutake) & 72\tablenotemark{e} & 2.0\tablenotemark{f} & min & 145 \\ \enddata \tablenotetext{a}{From 74$\times r_{\rm h}^{-0.93}$~K dependence, \citet{dellorusso04}.} \tablenotetext{b}{\citet{biver06}.} \tablenotetext{c}{Temperature of cold component, $FUSE$ observations (see text).} \tablenotetext{d}{Estimated from outflow velocity 0.8$\times r_{\rm h}^{-0.5}$.} \tablenotetext{e}{From 63$\times r_{\rm h}^{-1.06}$~K dependence, \citet{disanti03}.} \tablenotetext{f}{Within the range of values given by \citet{wouterloot98}.} \end{deluxetable*} The only free parameter of the model is the column density, which is varied over a grid of values until the best fit is found. All model parameters used for each comet are summarized in Table~\ref{table2}. The values for the rotational temperature and Doppler $b$ parameter were chosen in agreement with infrared and radio measurements, when available. The Doppler $b$ parameter is given by $b=\sqrt{v_{therm}^{2}+v_{outflow}^{2}}\approx v_{outflow}$, where v$_{outflow}$ is the source of non-thermal line broadening, and the thermal velocities are comparable to the uncertainties in v$_{outflow}$. Molecular lines of water and other molecules are well resolved by radio observations (usually to better than 0.1~km~s$^{-1}$), and have a Gaussian profile, reflecting the symmetric outflow of the gas \citep{lecacheux03,wouterloot98,biver99}. 
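The quadrature relation for $b$ above can be checked numerically (a sketch with illustrative numbers; constants are the CODATA values):

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053906660e-27   # atomic mass unit, kg

def b_parameter(T, mass_amu, v_outflow_kms):
    """Doppler b parameter: quadrature sum of the thermal width
    (most probable speed, sqrt(2kT/m)) and the outflow velocity."""
    v_therm = math.sqrt(2 * K_B * T / (mass_amu * AMU)) / 1e3  # km/s
    return math.sqrt(v_therm**2 + v_outflow_kms**2)

# CO (28 amu) at 80 K in a 0.8 km/s outflow: the thermal term is only
# ~0.22 km/s, so b stays within a few percent of v_outflow.
b = b_parameter(80.0, 28.0, 0.8)
```

For the temperatures in Table~\ref{table2} the thermal contribution is thus a small correction, justifying $b\approx v_{outflow}$.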
Our observations have too low a resolution to be used to determine the Doppler line widths directly, so we rely on the published values for the expansion velocity of the gas, derived from the line profiles after correcting for thermal and instrumental broadening. \subsection{$HST$-STIS Observations} \begin{figure} \begin{center} \epsscale{1.2} \rotatebox{0}{ \plotone{f3.ps} } \caption{ $HST$-STIS spectral image of comet 153P/Ikeya-Zhang, showing the line brightness variation along the slit. The spectral region containing the strong geocoronal \ion{H}{1} $\lambda$1216 and \ion{O}{1} $\lambda$1302 has been excluded. The carbon multiplets at 1561.0~\AA\ and 1657.6~\AA\ are relatively constant due to the extended source component. The zero point in the vertical direction marks the location of the brightness peak, chosen as the center of our integration bins. \label{stisim}} \end{center} \end{figure} The high spatial resolution of the $HST$-STIS instrument is illustrated by the spectrum of comet 153P/Ikeya-Zhang in Figure~\ref{stisim}. The CO $A-X$ lines form the majority of the observed features. Their intensity decreases rapidly along the slit, in contrast with the extended \ion{C}{1}~$\lambda$1561.0 and $\lambda$1657.6 multiplets. Using the spatial information available, we derive the CO spatial profile for each comet by fitting spectra extracted from regions of 1\farcs5 width at increasing offsets from the comet center. The selected regions sample areas of varying column density, for which our model estimates an average value. The comet nucleus is identified with the center of brightness, located at zero in Figure~\ref{stisim}. The innermost region extracted, centered on the nucleus, is noisier due to the small integrated area. For intermediate regions the signal is better, due to the averaging of two regions, symmetric about the comet center. 
For each region, the background subtraction is performed by fitting a quadratic polynomial to selected points from feature-free intervals. These points were selected such that the outliers are discarded and the resulting polynomial fit is optimal. The best-fit column density for each region and its standard deviation are derived by minimizing the $\chi^{2}$ statistics, taking into account the errors in background subtraction. The range of column densities obtained for each comet observed by STIS is listed in Table~\ref{table2}, together with the average value for comet C/1996 B2 (Hyakutake). The results are further compared with the Haser native source model \citep{haser57,opal77}, with an outflow velocity of $0.85~r_{\rm h}^{-0.5}$~km~s$^{-1}$, where $r_{\rm h}$ is the comet-Sun distance in AU \citep{budzien94,biver99}, and a CO lifetime of $2 \times 10^5$ s. We derive the CO production rate for each comet by a least squares fit of the Haser model to the radial column density profile. The native source model is integrated over rectangular regions matching the 1\farcs5 spectral extractions along the STIS slit. The resulting production rates and their magnitude relative to water are listed in Table~\ref{table3}. \subsubsection{153P/Ikeya-Zhang} The values for the rotational temperature and Doppler $b$ parameter are 82~K and 0.91~km~s$^{-1}$ respectively, derived from infrared and radio measurements~\citep{dellorusso04,biver06}. Spectra extracted from 1\farcs5 intervals were fitted using CO column densities ranging from 6.13$\times$10$^{15}$ to 1.49$\times$10$^{14}$~cm$^{-2}$, as shown in Figure~\ref{bigplot}. The data shown are obtained by averaging the first and third STIS observations (Table~\ref{table1}), and the best fit model is overplotted in red. The second observation was not included due to the background mismatch with the other two. 
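The Haser native-source fit described above can be sketched numerically (hypothetical values and function names; the actual analysis integrates the model over the rectangular extraction regions and propagates the measurement errors):

```python
import math

def haser_column(rho, Q, v, tau, n_steps=4000, l_max=None):
    """Column density of a native (parent) species at projected distance
    rho from the nucleus, integrating the Haser density
    n(r) = Q / (4 pi v r^2) exp(-r / (v tau)) along the line of sight.
    Lengths in cm, v in cm/s, tau in s."""
    scale = v * tau
    if l_max is None:
        l_max = 10 * scale
    dl = l_max / n_steps
    total = 0.0
    for i in range(n_steps):
        l = (i + 0.5) * dl                    # midpoint rule
        r = math.hypot(rho, l)
        total += Q / (4 * math.pi * v * r * r) * math.exp(-r / scale) * dl
    return 2 * total  # the coma is symmetric about the plane of the sky

def fit_production_rate(rhos, N_obs, v, tau):
    """Least-squares for Q, exploiting that N is proportional to Q."""
    N_unit = [haser_column(r, 1.0, v, tau) for r in rhos]
    return sum(n * m for n, m in zip(N_obs, N_unit)) / sum(m * m for m in N_unit)
```

Because the column density is linear in $Q$, the least-squares fit reduces to a single ratio of sums over the sampled offsets.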
The derived values for the CO column density in each region are represented by stars in Figure~\ref{izp}, plotted as a function of the distance from the comet nucleus. The error bars are given by the $1\sigma$ confidence level from the $\chi^2$ statistics. Fitting to this radial profile a native source model integrated over rectangular regions with the same coverage as the extracted spectra~\citep{opal77}, we obtain a production rate of $1.54\pm0.09\times10^{28}$ molecules~s$^{-1}$. The native source model for the derived production rate is shown as a continuous line in the same figure. The resulting CO production rate relative to water is about $7.2\pm0.4\%$. The water production rate ($2.15\times10^{29}$ molecules~s$^{-1}$) was obtained from a vectorial model fit to an $HST$-STIS observation of the OH (0-0) band made on 2002 April 21 at 12:19 UT. \begin{figure*} \begin{center} \epsscale{0.75} \rotatebox{0}{ \plotone{f4.eps}} \caption{ $HST$-STIS spectra of comet 153P/Ikeya-Zhang extracted from 1.5\arcsec\ intervals at increasing offsets from the center of brightness, assumed to be the location of the nucleus. The red line represents the model spectrum of the CO $A-X$ system for the best fit column density for each offset. The corresponding offsets and the best fit column densities are indicated on each panel. Other emission features belong to \ion{C}{2}, \ion{C}{1}, \ion{O}{1}, \ion{S}{1}, and H$_{2}$, as indicated in Figure~\ref{h2}.\label{bigplot}} \end{center} \end{figure*} \begin{figure} \begin{center} \epsscale{1.0} \rotatebox{0}{ \plotone{f5.eps} } \caption{ Spatial profile of the CO column density for comet 153P/Ikeya-Zhang. The stars with error bars represent the column density values derived from averaged spectra using the method presented in the text. 
The continuous line represents the column density profile predicted by the native source model when averaged over slits covering the same areas as the observations, with a production rate of 1.54$\times$10$^{28}$~molecules~s$^{-1}$.\label{izp}} \end{center} \end{figure} The residuals after subtracting the fluorescence spectrum for the CO $A-X$ system reflect the contributions of atomic species and allow the first detection of H$_{2}$ at wavelengths longward of 1200~\AA. The H$_{2}$ spectrum consists of the P(1) lines of the Lyman (6--$v''$) progression pumped by solar \ion{H}{1} Lyman-$\beta$. The H$_{2}$ lines with $v''$ = 1--3 were first detected in the {\it FUSE} spectrum of comet C/2001~A2 (LINEAR)~\citep{feld02b}. The lines at longer wavelengths, including the strongest one in the progression ($v''$ = 13 at 1607.5~\AA), remained undetected due to the abundance of CO features and lower resolution of the STIS instrument. The high solar flux at solar maximum, together with a velocity shift that placed the (6--0)~P(1) line near the peak of the solar line, made the detection of the (6--13)~P(1) line in comet Ikeya-Zhang particularly favorable. Figure~\ref{Lyb} shows the shape and intensity of the solar Ly$\beta$ at minimum and maximum activity and the Doppler shifts of the H$_{2}$ absorption line corresponding to each comet. Although H$_2$ lines pumped by Ly$\beta$ were detected in comet C/2001 Q4 \citep{feld04}, due to the large negative Doppler shift and the low solar activity, in the STIS bandpass the signal-to-noise for the H$_{2}$ lines is too low to warrant a detection. A comparison of the residuals for the two comets after subtracting the CO fluorescence model is shown in Figure~\ref{h2}.
The STIS spectrum was integrated over a 4\arcsec\ wide region centered on the comet nucleus, and the CO column density for the two comets ($N=3.07 \times 10^{15}$~cm$^{-2}$ and $3.49 \times 10^{15}$~cm$^{-2}$, respectively) has been obtained through the $\chi^2$ minimization procedure. The residuals for comets C/2000 WM1 and Hyakutake (not included in the figure) show no evidence for H$_{2}$, which can be explained by the large negative Doppler shifts with respect to the solar Ly$\beta$, the low outgassing rate of C/2000 WM1 and the minimum solar activity at the time of Hyakutake observations. \begin{figure} \begin{center} \epsscale{1.0} \rotatebox{0}{ \plotone{f6.eps}} \caption{ Profile of solar Ly$\beta$ at maximum (August 2001) and minimum solar activity (July 1996). The velocity shifts of the absorbing H$_{2}$ (6--0) P1 line are shown for each comet.\label{Lyb}} \end{center} \end{figure} The synthetic H$_{2}$ fluorescence spectra shown in red in Figure~\ref{h2} are constructed under the assumption of an H$_2$O photodissociation source for H$_{2}$. According to the dissociation model, H$_{2}$ is rotationally hot (100-300~K) and its rotational levels are in statistical equilibrium, with an ortho/para ratio of 3, similar to water \citep{budzien94,water00,bonev07}. The uncertainty in the exact value for the rotational temperature has little impact on the emerging spectrum, as the population of the J=1 level of the ground state does not differ significantly for rotational temperatures ranging from 100 to 300~K. The fluorescence efficiencies, or g-factors, used in the model were revised from those cited by \citet{feld02b}, using the solar Lyman-$\beta$ profiles obtained over the past solar cycle by \citet{Lemaire:2002}, and accounting for the comet's heliocentric velocity. The quality of the data does not warrant a $\chi^2$ fit for the H$_{2}$ fluorescence model, so we restrict ourselves to making rough estimates. 
Using an H$_{2}$ column density of $1.0 \times 10^{14}$~cm$^{-2}$ at a rotational temperature of 200~K, we obtain reasonable agreement with the residuals for 153P/Ikeya-Zhang (Figure~\ref{h2}, upper panel). As was the case for comet C/2001~A2, this value for the H$_2$ column is consistent, within the rather large uncertainties in both the data and the models, with an H$_2$O photodissociation source. Constraining the H$_2$ production helps our understanding of this H$_2$O dissociation channel, for which little laboratory data is available. Under our choice of parameters, the model for 153P/Ikeya-Zhang predicts about 5.5~R for the $v''$ = 7, 9, and 11 lines, at the level of the errors due to noise and CO subtraction, leaving only the (6--13)~P(1) line detectable at a $\sim$3$\sigma$ level, with 18.4~R. This line is not detected in any other comet from our sample. For comparison, we model the H$_{2}$ fluorescence for comet C/2001 Q4. We use the same column density of H$_{2}$, as both C/2001 Q4 and 153P/Ikeya-Zhang have similar water production rates (see notes to Table~\ref{table3}). Due to the low solar activity and the unfavorable Doppler shift, the predicted line intensities are too low to be detected, consistent with the residuals (Figure~\ref{h2}, lower panel). The discrepancies between the H$_{2}$ models and the residuals are due to the uncertainties in the CO fluorescence model itself as well as to the noise in the data increasing towards longer wavelengths. \begin{figure} \begin{center} \epsscale{0.9} \rotatebox{90}{ \plotone{f7.eps}} \caption{ Residuals from the spectrum of 153P/Ikeya-Zhang and C/2001 Q4 after subtracting the best fit CO model ($N=3.07 \times 10^{15}$~cm$^{-2}$ and $3.49 \times 10^{15}$~cm$^{-2}$, respectively). The comet spectra were extracted from a region of 4\arcsec\ width, centered on the comet nucleus.
The red line is the predicted H$_2$ fluorescence spectrum pumped by solar Lyman-$\beta$ for a column density of $1.0 \times 10 ^{14}$~cm$^{-2}$ and a rotational temperature of 200~K. Other atomic contributions are shown in blue.\label{h2}} \end{center} \end{figure} The strongest atomic lines seen in the residuals are also indicated in Figure~\ref{h2}. In addition to those lines usually seen in comets \citep{mcp99}, we also note the presence of the \ion{S}{1}~($^1\!D - ^1\!D^o$) transition at 1666.7~\AA. This transition is analogous to \ion{O}{1}~($^1\!D - ^1\!D^o$) at 1152.2~\AA\ \citep{feld02b} and \ion{C}{1}~($^1\!D - ^1\!P^o$) at 1930.9~\AA\ \citep{tozzi98}. Using a g-factor of $1.92 \times 10^{-5}$ photons~atom$^{-1}$~s$^{-1}$ at 1 AU, the $\sim$30 rayleigh brightness corresponds to an average column density of \ion{S}{1}~$^1\!D$~atoms in the aperture of $1.24 \times 10^{12}$~cm$^{-2}$. Since the lifetime of the metastable $^1\!D$ state of sulfur is only 28~s, this requires that within the aperture $^1\!D$ atoms be produced at a rate of $1.9 \times 10^{26}$~s$^{-1}$ (0.09\% relative to water). Collisional de-excitation of $^1\!D$ near the nucleus would raise this number. The likely source of these atoms is the photodissociation of sulfur-bearing molecules such as H$_2$S or CS$_2$, which must be produced at a rate greater than 0.1\% relative to water. \subsubsection{C/2001 Q4 (NEAT)} Adjusting the model parameters to the conditions of NEAT~Q4 observations we derive in the manner described for 153P/Ikeya-Zhang a CO production rate of $1.76\pm0.16\times10^{28}$~molecules~s$^{-1}$, or $8.8\pm0.8\%$ relative to water. For the water production rate, we derive a value of $2.0 \times 10^{29}$ molecules~s$^{-1}$ from STIS observations on 2004-04-23 21:39 UT. The CO model was fitted to the average of the first two STIS observations (Table~\ref{table1}). For the third observation the detector background level is much higher than for the other two. 
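The quoted mixing ratio is just the ratio of the two production rates; as a quick check (treating the water production rate as exact, so only the CO uncertainty propagates):

```python
# Production rates for C/2001 Q4 (NEAT), molecules/s, from the text.
q_co, q_co_err = 1.76e28, 0.16e28
q_h2o = 2.0e29  # from the STIS observation of 2004-04-23

ratio = 100.0 * q_co / q_h2o            # mixing ratio in percent
ratio_err = 100.0 * q_co_err / q_h2o
print(f"CO/H2O = {ratio:.1f} +/- {ratio_err:.1f} %")  # 8.8 +/- 0.8 %
```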
The adopted values for the rotational temperature and Doppler $b$ parameter are listed in Table~\ref{table2}. $FUSE$ observations of the CO $C-X$ Hopfield-Birge system at 1088~\AA\ reveal a band profile consistent with a two component temperature model \citep{feld02b}. The hot component ($\sim$600~K) is believed to describe an extended CO source due to the dissociation of CO$_{2}$ \citep{feld06b}. We use for our model the temperature of the cold component, estimated at $\sim$68~K, characteristic of the native CO source which dominates at the smaller cometocentric distances probed by STIS. Lacking direct radio measurements of the line widths, we use a value of 0.79~km~s$^{-1}$ for the Doppler $b$ parameter, based on the outflow velocity 0.8$\times r_{\rm h}^{-0.5}$. The radial profile of the best-fit column densities and the native source model are plotted in Figure~\ref{q4p} as stars and continuous line, respectively. Using a two-component fit to the CO $C-X$ band observed by $FUSE$ \citep{feld02b} we derive CO column densities of $1.2 \pm 0.3 \times 10^{14}$~cm$^{-2}$ and $2.38 \pm 0.08 \times 10^{13}$~cm$^{-2}$ for the cold and hot component, respectively, averaged over the entire 30\arcsec~$\times$~30\arcsec slit. A native source for the cold component requires a production rate of $1.36\pm0.40\times10^{28}$~molecules~s$^{-1}$, or $6.8\pm2.0\%$ relative to water. The solar flux pumping the $C-X$ fluorescence was based on quiet sun whole disk data from {\it SOHO}/SUMER \citep{Curdt:2001}, normalized, at wavelengths longward of 1200 \AA, to {\it UARS}/SOLSTICE solar flux measurements appropriate to the solar activity at the time of our observation \citep{Rottman:2001}. \begin{figure} \begin{center} \epsscale{1.0} \rotatebox{0}{ \plotone{f8.eps} } \caption{ Same as Figure~\ref{izp}, for comet C/2001 Q4 (NEAT). 
The production rate for the native source model was $1.76 \times 10^{28}$~molecules~s$^{-1}$.\label{q4p}} \end{center} \end{figure} \subsubsection{C/2000 WM$_{1}$ (LINEAR)} The CO emission detected by STIS in the observation of comet C/2000 WM$_{1}$ (LINEAR) was too weak to allow us to repeat the same analysis performed in the case of comets 153P/Ikeya-Zhang and C/2001 Q4 (NEAT). Instead, we chose to integrate the STIS spectrum over increasing widths centered on the nucleus, in order to make use of the stronger signal in the center and to increase the number of contributing pixels. We started with a 4\arcsec\ wide region which was increased progressively up to 16\arcsec. For the rotational temperature we used the 77~K value derived for the cold component using $FUSE$ observations \citep{weaver02}, while the Doppler $b$ parameter value of 0.72~km~s$^{-1}$ was chosen to match the radio observations of \citet{biver06}. All three STIS observations were averaged together to obtain detectable CO emission features. The best-fit column densities over the selected regions are plotted in Figure~\ref{wm1p} as a function of the integrated slit width. Fitting a native source model integrated over the same rectangular regions we obtain a CO production rate of $3.56\pm0.2\times10^{26}$~molecules~s$^{-1}$. This model is shown by a continuous line in Figure~\ref{wm1p}. A CO production rate of $0.44 \pm 0.03$\% relative to water is obtained adopting the favored H$_{2}$O production rate for the $FUSE$ observations \citep[8.0$\times$10$^{28}$~molecules~s$^{-1}$,][]{weaver02}. This makes C/2000 WM$_{1}$ (LINEAR) the most CO-poor comet of our sample. \begin{figure} \begin{center} \epsscale{1.0} \rotatebox{0}{ \plotone{f9.eps}} \caption{ CO column density profile for comet C/2000 WM$_{1}$ (LINEAR) extracted from regions of increasing widths, centered on the comet nucleus. 
The continuous line corresponds to the values predicted by the native source model with a production rate of $3.56 \times 10^{26}$~molecules~s$^{-1}$, when integrated over similar slit sizes.\label{wm1p}} \end{center} \end{figure} \begin{deluxetable*}{cccc} \tablecaption{Production rates. \label{table3}} \tablewidth{0pt} \tablehead{ \colhead{Comet Name} & \colhead{$Q_{\rm CO}$\tablenotemark{$\star$}} & \colhead{$Q_{\rm CO}$/$Q_{\rm H_{2}O}$} & {Other $Q_{\rm CO}$ Measurements}\\ & (10$^{28}$~molecules~s$^{-1}$) & (\%) & (10$^{28}$~molecules~s$^{-1}$) } \startdata 153P/Ikeya-Zhang & 1.54 $\pm$ 0.09 & 7.2 $\pm$ 0.4\tablenotemark{a} & 0.73 $\pm$ 0.16\tablenotemark{b}\\ C/2001 Q4 (NEAT) & 1.76 $\pm$ 0.16 & 8.8 $\pm$ 0.8\tablenotemark{c} & 1.36 $\pm$ 0.40\tablenotemark{d}\\ C/2000 WM$_{1}$ (LINEAR) & 0.036 $\pm$ 0.002 & 0.44 $\pm$ 0.03\tablenotemark{e} & 0.035 $\pm$ 0.003\tablenotemark{f}\\ C/1996 B2 (Hyakutake) & 4.97 $\pm$ 0.07 & 20.9 $\pm$ 0.3\tablenotemark{g} & 4.84 $\pm$ 0.58\tablenotemark{h}\\ \enddata \tablenotetext{$\star$}{The error bars are given by the 1 $\sigma$ interval from the $\chi^{2}$ statistics. In addition, we estimate that systematics amount to a 15\% uncertainty in the production rates for the STIS observations.} \tablenotetext{a}{Water production rate 2.15$\times$10$^{29}$~molecules~s$^{-1}$ from $HST$-STIS observations (see text).} \tablenotetext{b}{Using r$_{h}^{-2.1}$ scaling from \citet{biver06}.} \tablenotetext{c}{Water production rate 2.0$\times$10$^{29}$~molecules~s$^{-1}$ from $HST$-STIS observations (see text).} \tablenotetext{d}{Derived in this paper from $FUSE$ observations (\S 4.1.2). The cited value includes only the cold source component.} \tablenotetext{e}{Water production rate 8.0$\times$10$^{28}$~molecules~s$^{-1}$ used for the $FUSE$ observations \citep{weaver02}.} \tablenotetext{f}{Based on $FUSE$ observations \citep{weaver02}, revised for this paper. 
The cited value includes only the cold source component.} \tablenotetext{g}{Water production rate 2.38$\times$10$^{29}$~molecules~s$^{-1}$ from \citet{combi98}.} \tablenotetext{h}{Value measured on April 1.2 using JCMT radio telescope \citep{biver99}. From the 4.7$\times$10$^{28}r_{\rm h}^{-2.1}$~molecules~s$^{-1}$ dependence the predicted value is 6.07$\times$10$^{28}$~molecules~s$^{-1}$.} \end{deluxetable*} \subsection{$HST$-GHRS Observations: C/1996 B2 (Hyakutake)} Since the $HST$-GHRS observations do not provide spatial information, we are unable to derive a CO column density profile for comet C/1996 B2 (Hyakutake). The CO production rate is obtained by comparing the CO column density derived by fitting our fluorescence model to the GHRS spectrum with the CO column density predicted for the GHRS aperture by the native source model. As model parameters for the synthetic spectrum we used a rotational temperature of 72~K given by the 63$\times r_{\rm h}^{-1.06}$~K dependence from \citet{disanti03}, which is similar to the value given by \citet{lis97}, and a $b$ parameter of 2.0~km~s$^{-1}$, within the range of outflow velocities measured by \citet{wouterloot98} but slightly higher than that derived from the optical line widths of \citet{combi99}. A lower $b$ value is inconsistent with the total amount of absorbed solar radiation (from the conservation of the number of photons), suggesting larger turbulent motions in the 1\farcs74$\times$1\farcs74 area probed by GHRS. Using the CO $A-X$ fluorescence model with self-absorption we obtain a first estimate for the CO column density averaged over the GHRS slit. However, the predicted line ratios are not in agreement with the data (see the red line in Figure~\ref{hkt}, upper panel). In order to improve the fit and better constrain the column density we adjust the model to account for the geometry of the Sun-comet-Earth system. 
Starting with the previous estimate on the CO column we can constrain the ratio between the line-of-sight column density and the absorbing column on the comet-Sun direction. Iterating this procedure we obtain a best fit value for the line-of-sight CO column density of $1.45 \pm 0.87 \times 10^{16}$~cm$^{-2}$. This model is shown with a red line in the lower panel of Figure~\ref{hkt}. The blue model in the same figure contains contributions from atomic species \ion{C}{2}, \ion{C}{1}, \ion{O}{1}, and \ion{S}{1}, as labeled, which account for the remaining features in the spectrum. Under the assumption of a native source model with the same lifetime and gas outflow velocity as employed for the STIS observations, we obtain a CO production rate of $4.97 \pm 0.07 \times 10^{28}$~molecules~s$^{-1}$. This represents the highest CO production rate relative to water from our sample, $20.9 \pm 0.3$\%, using the water production rate of 2.38$\times$10$^{29}$~molecules~s$^{-1}$ from \citet{combi98}. Figure~\ref{hkt} also clearly shows the presence of several bands originating on the v$'=9$ and 14 levels that are pumped by solar \ion{O}{1} $\lambda$1302 and \ion{H}{1} Lyman-$\alpha$, respectively \citep{Wolven:1998}. \citet{Kassal:1976} first pointed out that scattering of Lyman-$\alpha$ by the (14,0) band is comparable to, if not larger than, the direct solar scattering by all other bands of the CO Fourth Positive system for CO column densities $\ge 10^{17}$~cm$^{-2}$. The (14,4) band was subsequently identified in the spectrum of Venus \citep{Durrance:1981}. These bands are not detected in any of the STIS comet spectra. The high column density in the field-of-view of comet Hyakutake makes them visible, albeit at low S/N, and allows for a determination of column density independent of optical saturation effects. 
In evaluating the fluorescence efficiency of these bands, the overlap between the solar lines and the individual lines of the CO bands is a sensitive function of rotational temperature and heliocentric velocity and requires accurate profiles of the solar lines \citep{Lemaire:2002}. The (14-3), (14-4), and (9-2) bands are included in the model and are seen to be completely consistent with the CO column density derived above. \begin{figure} \begin{center} \epsscale{0.9} \rotatebox{90}{ \plotone{f10.eps}} \caption{ $HST$-GHRS spectrum of comet C/1996 B2 (Hyakutake), and the model spectrum of the CO $A-X$ system for a line-of-sight column density of $1.45\times10^{16}$~cm$^{-2}$, before (upper panel) and after geometric corrections (lower panel). The CO model spectrum is shown in red. The blue line contains contributions from atomic lines of \ion{C}{2}, \ion{C}{1}, \ion{O}{1}, and \ion{S}{1}, as indicated. The $A-X$ bands that do not change significantly due to the geometric effects are labeled.\label{hkt}} \end{center} \end{figure} We note that the optically thick bands connecting to the $v''=0$ level of the ground state, namely (1-0) at 1509.8 \AA, (2-0) at 1477.6 \AA, (3-0) at 1447.4 \AA, (4-0) at 1419.1 \AA\ and (5-0) at 1392.6 \AA, all labeled in Figure~\ref{hkt}, do not show a significant variation due to geometric corrections. This can be understood from the fact that while the correction adjusts the line-of-sight column density relative to the column absorbing the solar radiation, the emission in the optically thick lines will still be determined by the column corresponding to one optical depth, which depends only on temperature and $b$ parameter. The (0-0) band at 1544.5 \AA\ does not seem to follow this pattern due to blending with the (3-2) band at 1542.5 \AA. The same lack of variation is exhibited by the bands belonging to optically thin progressions pumped by solar emission lines, such as (14-3) at 1315.7 \AA\ and (9-2) at 1377.8 \AA. 
This is a result of the simple linear scaling in the optically thin limit, which makes the absorbing column indistinguishable from the emitting column. \section{DISCUSSION} \subsection{Sources of Uncertainty and Comparison with Other Measurements} The derived column densities are subject to uncertainties due to the choice of model parameters and background subtraction. A more complete model would involve a multidimensional $\chi^{2}$ minimization to constrain simultaneously the column density, rotational temperature and Doppler $b$ parameter. Aside from the fact that this approach requires rather large computational resources, we expect that given the quality of the data the resulting $\chi^{2}$ 3D surfaces will have rather low contrast minima and the improvement in the resulting production rates would be negligible. More of a concern is the background subtraction in the $HST$-STIS data. The background is variable both in the spatial and spectral directions from one observation to another, making it impossible to give a comprehensive subtraction prescription. The optimal background subtraction is determined on a case-by-case basis. While the background-related uncertainties can lead to $\sim$30\% variations in the values for the column densities, we estimate that the change in the resulting production rates is only about 6\%. These values are only slightly larger than the $1\sigma$ error bars from the $\chi^{2}$ minimization. Similarly, increasing the rotational temperature from 70~K to 100~K results in a $\sim$30\% decrease in production rate. However, the rotational temperatures relevant for our observations were derived from either $FUSE$ observations or from the T$_{rot}$ vs. r$_{h}$ dependences obtained by radio measurements, and are constrained to better than $\pm$6~K. This results only in a $\sim$7\% variation in production rate, as the heliocentric distance of the comet does not vary significantly during our observations. 
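The $\sim$7\% figure quoted above follows from a simple linearization of the stated temperature sensitivity; this is a rough scaling argument, not a rerun of the fluorescence model:

```python
# Linearized sensitivity: a ~30% change in Q over the 70-100 K interval
# corresponds to ~1% per kelvin; a +/-6 K temperature constraint then
# translates into a ~6% variation in the derived production rate.
dq_per_kelvin = 0.30 / 30.0
t_uncertainty = 6.0  # K, from FUSE or radio-derived rotational temperatures
print(f"~{100.0 * dq_per_kelvin * t_uncertainty:.0f}% variation in Q")
```

The linearized value ($\sim$6\%) is consistent with the $\sim$7\% quoted in the text.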
The absolute values of the CO production rate and column density are also sensitive to the STIS calibration pipeline, which is based on point-source stellar standards. The column densities derived from STIS data could be overpredicted by at most 30\% due to calibration offsets. Variations in the parameters used for the native source model, such as the CO lifetime and outflow velocity, could also change the production rate by a few percent. Other less quantifiable uncertainties to which the fluorescence model is particularly sensitive are the oscillator strengths and the UV solar flux, especially due to the variable emission features at high solar activity. Overall, we estimate that the systematic errors in the production rates amount to at most 15\%. To assess the effects of the error sources mentioned above, it is useful to compare our results to other measurements of the CO production rate in these comets from different spectral regions. The values obtained for comets C/2000 WM$_{1}$ (LINEAR) and C/1996 B2 (Hyakutake) are in excellent agreement with previous measurements. For comet C/1996 B2 (Hyakutake) at similar heliocentric distances \citet{biver99} find values of $4.98 \pm 0.09 \times 10^{28}$~molecules~s$^{-1}$ (0.952~AU) and $4.84 \pm 0.58 \times 10^{28}$~molecules~s$^{-1}$ (0.894~AU). The production rate relative to water is comparable to previously measured mixing ratios of 14 to 19\% \citep{disanti03} and 22\% \citep{biver99}. For comet C/2000 WM$_{1}$ (LINEAR) both $FUSE$ \citep{weaver02} and radio \citep{biver06} observations are consistent with a CO mixing ratio of $\sim$0.4\% and a CO production rate of $\sim$3 to 4$\times$10$^{26}$~molecules~s$^{-1}$. A good agreement is again found when comparing the CO production rate derived for comet C/2001 Q4 (NEAT) with the results from $FUSE$ observations ($\S$ 4.1.2), listed in Table~\ref{table3}. 
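The comparisons above, and the scaled values in the notes to Table~\ref{table3}, rely on an assumed power law in heliocentric distance; a minimal helper illustrating the $r_{\rm h}^{-2.1}$ scaling (the example distance of 0.9~AU is arbitrary, chosen only for illustration):

```python
def scale_q(q_ref, r_ref, r, exponent=-2.1):
    """Scale a production rate from heliocentric distance r_ref (AU) to r (AU),
    assuming Q(r) = Q_ref * (r / r_ref)**exponent."""
    return q_ref * (r / r_ref) ** exponent

# Example: the Hyakutake radio value of 4.7e28 molecules/s referenced to 1 AU,
# scaled inward to an illustrative distance of 0.9 AU.
print(f"{scale_q(4.7e28, 1.0, 0.9):.2e} molecules/s")
```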
We note that the measurements based on $FUSE$ observations used only the cold component of CO (see $\S$~4.1.2) in estimating the CO production rate. The cold component is believed to reflect the native source of CO, which is directly probed by STIS. The two values agree marginally within the error bars, and the mismatch can be attributed to short time variability and pointing instability for the $FUSE$ observations. The count rates for the $C-X$ band in the $FUSE$ data show factor of two variations with a periodicity of $\sim$20 hours. The $FUSE$ observation overlaps with a much shorter STIS exposure, but the exact correlation in comet activity between the two datasets is hard to assess due to the different fields of view of the two instruments. For comet 153P/Ikeya-Zhang the derived CO production rate is about a factor of 2 higher than the estimate from the range of values given by \citet{biver06} (using the $r_{\rm h}^{-2.1}$ scaling, see Table~\ref{table3}) and \citet{disanti02}. The factor of two difference may be attributed to the uncertainties in the model parameters (rotational temperature and Doppler parameter) and in the background subtraction. However, as discussed above, the combined sources of error should result in less than 15\% uncertainty. On the other hand, the $r_{\rm h}^{-2.1}$ dependence was derived from two measurements, one at 0.51~AU and the other at 1.26~AU, not excluding the possibility of temporal variations affecting our observations at 0.89~AU. Moreover, the uncertainties in the $r_{\rm h}^{-2.1}$ scaling have not been quantified, making our value for the production rate more reliable. The CO abundance relative to water is again higher than other estimates. 
However, using the water production rate given by the 19.0$\times$10$^{28}r_{\rm h}^{-4.}$~molecules~s$^{-1}$ dependence derived by \citet{biver06} from H$_2$O observations made by the {\it Odin} satellite \citep{lecacheux03}, the CO mixing ratio decreases from $7.2\pm0.4\%$ to $5.1\pm0.3\%$. This would place the CO relative abundance in comet 153P/Ikeya-Zhang closer to the value of $3.8\pm0.6\%$ at $\sim$1~AU \citep{biver06} and $4.7\pm0.8\%$ at 0.78~AU \citep{disanti02}. \subsection{Application to Broad-band Imaging} The advantage of long-slit spectroscopy with STIS resides in giving us a better understanding of the spatial distribution of CO and the effects of increasing optical depths on the observed line ratios. We show how this information is applicable to broad-band imaging by integrating the brightness of the $A-X$ bands in the STIS bandpass that are not contaminated by atomic lines. We then integrate the fluorescence model over the same bands deriving an equivalent g-factor (as a ratio between the total brightness and the column density). Given the specific optical depth corrections in our model, the resulting g-factor will be in fact a brightness-column density dependence rather than a globally constant ratio. The radial brightness profile of the integrated bands observed by STIS can be then converted into a column density profile. This procedure can be visualized in Figure~\ref{izot}, where the data for comet 153P/Ikeya-Zhang have been used. The brightness profile has been rebinned in 1\arcsec\ bins and background subtracted. The red histogram represents the column density profile obtained using the optically thick brightness-wavelength dependence, while the blue histogram represents the column density profile obtained in the optically thin approximation, using a constant g-factor. 
The black line is the native source model integrated in 1\arcsec\ $\times$~0\farcs2 bins, for a production rate of 1.54$\times$10$^{28}$~molecules~s$^{-1}$, as derived in \S~4.1.1. This figure illustrates the importance of including optical depth effects in order to predict the correct column density. This method has been used to interpret the imaging data from the {\it HST} Solar Blind Channel of the Advanced Camera for Surveys (ACS/SBC), and to give an estimate of the number of molecules released from comet 9P/Tempel~1 as a result of the {\it Deep Impact} encounter \citep{feld06a}. The uncertainties in this type of measurement come mainly from the additional emission features of atomic lines in the bandpass. \begin{figure} \begin{center} \epsscale{1.0} \rotatebox{0}{ \plotone{f11.eps} } \caption{ Spatial profiles of the CO column density for comet 153P/Ikeya-Zhang, derived from the integrated brightness along the slit in 1\arcsec\ bins, selecting the unblended lines from the $A-X$ system. The red line uses the total brightness-column density dependence associated with the present model, while the blue line is obtained in the optically thin approximation. The continuous black line is the native source model for a production rate of 1.54$\times$10$^{28}$~molecules~s$^{-1}$, integrated in 1\arcsec\ bins.\label{izot}} \end{center} \end{figure} \subsection{The Native Source of CO} The $HST$-STIS observations allow us to constrain the dominant source of CO in the region probed by the 25\arcsec\ $\times$~0\farcs2 slit. Reconstructing the surface column density distribution from the radial profile we estimate that in a 10\arcsec\ $\times$~10\arcsec\ box the native source contribution to the total number of molecules is as high as 80\% for comet C/2000 WM$_{1}$ (LINEAR) and from 90\% to 99\% for comets C/2001 Q4 (NEAT) and 153P/Ikeya-Zhang, respectively. 
This result suggests that the native source dominates in the inner $\sim 3000$~km from the comet center, while the extended component becomes important at larger distances. However, the native CO observed in the coma is linked to the amount of CO in the nucleus through the outgassing mechanism, which is not well known. Laboratory data suggests that the abundance of CO relative to water in the ice can be a factor of 5 to 10 lower than the one observed in the coma \citep{notesco97,colangeliii}. The exact value depends on the temperature at which the comet is outgassing. The large variation in the CO production rate relative to water can be understood from the large range of heliocentric distances over which the comets have formed and then migrated to the outer parts of the Solar System through gravitational interactions with the planets. The observed diversity among comets is thought to be related to the local gas and dust composition and temperature where they formed. Under the assumption that CO has been trapped by water ice during comet formation, the local temperature can be estimated from the observed production rate \citep{notesco97}. The formation temperatures in our sample are expected to range from 50~K or less for comets C/2001 Q4, 153P/Ikeya-Zhang and Hyakutake, to more than 60~K for C/2000 WM$_{1}$. If the gas trapping mechanism is more efficient than deposition, as suggested by laboratory studies of CO and water interactions \citep{collings03}, important enrichments for the species with a higher sublimation temperature can occur \citep{notesco97}. This can explain the lack of correlation between the abundance of CO and that of other molecules with a similar sublimation temperature \citep{gibb03,biver02}. 
\section{SUMMARY} We have developed a fluorescence model for the interpretation of CO $A-X$ emission observed in several recent comets by the {\it Hubble Space Telescope} employing the latest values for the transition wavelengths and oscillator strengths. The radiative transfer approximation takes into account saturation effects using Voigt profiles for each of the $\sim$10$^{5}$ transitions. Self-absorption is introduced using a photon mean free path approximation. This process is significant for column densities above a few$\times$10$^{14}$~cm$^{-2}$, encountered close to the comet nucleus. It is shown that the model reproduces the optically thin limit for lower column densities, at larger distances from the comet center. When the column densities are of the order $\sim$10$^{16}$~cm$^{-2}$ or above, a good fit to the data requires a distinction between the absorbing CO column in the Sun-comet direction and the emitting column along the line-of-sight. The approximations in the radiative transfer model are justified for the quality of the available data. Constraining the optical depth effects more tightly demands an exact treatment, which would be warranted at higher spectral resolution in order to predict the relative strengths of individual lines contained in the $A-X$ bands. For the comets observed by STIS the fit of the column density profile is consistent with a predominantly native source, with production rates ranging from $3.56 \times 10^{26}$~to $1.76 \times 10^{28}$~molecules~s$^{-1}$. The quality and the spatial extent of the STIS data do not allow for a detection of a small extended source component. We find large variations in the CO abundance relative to water, from 0.4\% for C/2000 WM$_{1}$ (LINEAR) to 21\% for C/1996 B2 (Hyakutake). This diversity among comets is still not well understood, as no clear trend emerges at the current stage of comet observations \citep{cometsii}. 
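The few$\times$10$^{14}$~cm$^{-2}$ saturation threshold can be made plausible with the textbook expression for the line-center optical depth of a single Doppler-broadened line. The oscillator strength used below is a hypothetical placeholder of roughly the right magnitude for an $A-X$ band, and the total band column is spread over many rotational lines, so this is only an order-of-magnitude sketch, not a value taken from the model.

```python
import math

PI_E2_MEC = 0.02654  # pi e^2 / (m_e c) in cgs units, cm^2 Hz

def tau_line_center(column, f_osc, wavelength_cm, b_cm_s):
    """Line-center optical depth for a Doppler-broadened line:
    sigma_0 = (pi e^2 / m_e c) * f * lambda / (sqrt(pi) * b), tau_0 = sigma_0 * N."""
    sigma0 = PI_E2_MEC * f_osc * wavelength_cm / (math.sqrt(math.pi) * b_cm_s)
    return sigma0 * column

# Hypothetical inputs: f ~ 0.02, lambda ~ 1510 A, b ~ 1 km/s.
for N in (1e13, 1e14, 1e15):
    print(f"N = {N:.0e} cm^-2: tau_0 ~ {tau_line_center(N, 0.02, 1.51e-5, 1.0e5):.1f}")
```

With these inputs, $\tau_{0}$ passes through unity between $10^{13}$ and $10^{14}$~cm$^{-2}$ for a single strong line, consistent with saturation becoming important for band columns above a few$\times$10$^{14}$~cm$^{-2}$.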
In spite of the caveats discussed in \S~5, the absolute values of the CO production rate and column density are better constrained by the introduction of optical depth effects, and give reasonable confidence levels for the CO production rate and a good agreement with previous results. Moreover, the present model proves to be a valuable tool in analyzing broadband imaging data, such as the ACS/SBC observations of comet 9P/Tempel~1 \citep{feld06a}. \acknowledgements This work is based on observations with the NASA/ESA {\it Hubble Space Telescope} obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under NASA contract NAS 5-26555. We wish to thank Stephan McCandliss for providing us with the H$_{2}$ UV fluorescence model. PDF wishes to acknowledge the hospitality of Arcetri Observatory during a sabbatical visit in April-May 2003. This work was supported by NASA grants HST-GO-09185.04-A, HST-GO-09496.04-A, and HST-GO-09906.04-A to The Johns Hopkins University.
\section{Introduction} \label{Intro} Finding bi-Lipschitz parameterizations of sets is a central question in areas of geometric measure theory and geometric analysis. A Lipschitz function on a metric space plays the role played by a smooth function on a manifold, and a bi-Lipschitz function plays the role of that of a diffeomorphism. Many concepts in metric spaces, such as metric dimensions and Poincar\'e inequalities, are preserved under bi-Lipschitz mappings. Moreover, a bi-Lipschitz parameterization of a set by Euclidean space leads to its uniform rectifiability. Uniform rectifiability is a quantified version of rectifiability which is well adapted to the study of problems in harmonic analysis on non-smooth sets.\\ The type of parameterizations discussed in this paper first appeared in 1960 when Reifenberg \cite{Re} showed that if a closed set $M \subset \mathbb{R}^{n+d}$ is well approximated by affine $n$-planes at every point and every scale, then $M$ is a bi-H\"older image of $\mathbb{R}^{n}$. Such a set is called a \emph{Reifenberg flat} set. In recent years, there has been renewed interest in this result and its proof. In particular, Reifenberg type parameterizations have been used to get good parameterizations of many spaces such as chord arc surfaces with small constant (see \cite{Se1, Se2}), and limits of manifolds with Ricci curvature bounded from below (see \cite{CC, CN}). Moreover, Reifenberg's theorem has been refined to get better parameterizations of a set: bi-Lipschitz parameterizations (see \cite{DS2}, \cite{To1}, \cite{DT1}, \cite{M1}). In fact, it is well known today, due to the authors of the latter references, that Carleson-type conditions are the correct conditions to study when seeking necessary and sufficient conditions for bi-Lipschitz parameterizations of sets. For example, in \cite{To1}, Toro considers a Carleson condition on the Reifenberg flatness of $M$ that guarantees its bi-Lipschitz parameterization. 
In \cite{DT1}, David and Toro consider a Carleson condition on the Jones beta numbers $\beta_{\infty}$ and on the (possibly smaller) $\beta_{1}$ numbers that guarantees the same result. In \cite{M1}, the author studies a Carleson-type condition on the oscillation of the unit normals to an $n$-rectifiable set $M$ of co-dimension 1, that guarantees its bi-Lipschitz parameterization. An $n$-rectifiable set $M \subset \mathbb{R}^{n+d}$ is a generalization of a smooth $n$-manifold in $\mathbb{R}^{n+d}$. Rectifiable sets are characterized by having (approximate) tangent planes (see Definition \ref{ats}) at $\mathcal{H}^{n}$-almost every point. Moreover, in the special case when the rectifiable set $M$ has co-dimension 1, $M$ has an (approximate) unit normal $\nu$ (see Remark \ref{remunitnor}) at $\mathcal{H}^{n}$-almost every point. In fact, in \cite{M1}, the author considers an $n$-Ahlfors regular rectifiable set $M \subset \mathbb{R}^{n+1}$, of co-dimension 1, that satisfies the following Poincar\'{e}-type inequality for $d=1$ and $\lambda =2$:\\ For all $x \in M$, $r > 0$, and $f$ a Lipschitz function on $\mathbb{R}^{n+d}$, we have \begin{equation} \label{eqp} \Xint-_{B_{r}(x)} \left| f(y) - f_{x,r} \right| \, d \mu(y) \leq C_{P} \, r \, \left(\,\Xint-_{B_{\lambda r}(x)}|\nabla^{M}f(y)|^{2} \, d \mu(y) \right)^{\frac{1}{2}}, \end{equation} where $C_{P}$ denotes the Poincar\'{e} constant that appears here, $\lambda \geq 1$ is the dilation constant, $\mu =$ \( \mathcal{H}^{n} \mres M\) is the Hausdorff measure restricted to $M$, $f_{x,r} = \Xint-_{B_{r}(x)}f \, d \mu $ is the average of the function $f$ on $B_{r}(x)$, $B_{r}(x)$ is the Euclidean ball in the ambient space $\mathbb{R}^{n+d}$, and $\nabla^{M}f(y)$ denotes the tangential derivative of $f$ (see Definition \ref{deftander}).\\ Then, the author shows that a Carleson-type condition on the oscillation of the unit normal $\nu$ to $M$ guarantees a bi-Lipschitz parameterization of $M$. 
\begin{theorem} \label{Th1Merhej} (see \cite{M1}, Theorem 1.5) Let $M \subset B_{2}(0) \subset \mathbb{R}^{n+1}$ be an $n$-Ahlfors regular rectifiable set containing the origin, and let $\mu =$ \( \mathcal{H}^{n} \mres M\) be the Hausdorff measure restricted to $M$. Assume that $M$ satisfies the Poincar\'{e}-type inequality (\ref{eqp}) with $d=1$ and $\lambda = 2$. There exists $\epsilon_{0} = \epsilon_{0}(n, C_{M}, C_{P})>0$, such that if for some choice of unit normal $\nu$ to $M$, we have \begin{equation} \label{103old} \int_{0}^{1} \left(\,\Xint-_{B_{r}(x)} |\nu(y) - \nu_{x,r}|^{2} \, d \mu \right) \frac{dr}{r} < \epsilon_{0}^{2}, \quad \textrm{for} \,\, x \in M \cap B_{\frac{1}{10^{4}}}(0), \end{equation} then $M \cap B_{\frac{1}{10^{4}}}(0)$ is contained in the image of an affine $n$-plane by a bi-Lipschitz mapping, with bi-Lipschitz constant depending only on $n$, $C_{M}$ and $C_{P}$. \end{theorem} In this paper, we generalize Theorem \ref{Th1Merhej} to higher co-dimensions $d$ and arbitrary dilation constants $\lambda \geq 1$. Before stating the theorem, let us introduce some notation. Suppose that $M \subset \mathbb{R}^{n+d}$ is an $n$-Ahlfors regular rectifiable set that satisfies the Poincar\'{e}-type inequality (\ref{eqp}). Fix $x \in M$ and $r >0$. Let $y \in M \cap B_{r}(x)$ such that the approximate tangent plane $T_{y}M$ of $M$ at the point $y$ exists, and denote by $\pi_{T_{y}M}$ the orthogonal projection of $\mathbb{R}^{n+d}$ on $T_{y}M$. Using the standard basis of $\mathbb{R}^{n+d}$, $\{ e_{1} , \ldots , e_{n+d} \}$, we can view $\pi_{T_{y}M}$ as an $(n+d) \times (n+d)$ matrix whose $j^{th}$ column is the vector $\pi_{T_{y}M}(e_{j})$. Thus, we denote $\pi_{T_{y}M}$ by the matrix $\big (a_{ij}(y)\big )_{ij}$. 
Finally, let $A_{x,r} = \big((a_{ij})_{x,r}\big)_{ij}$ be the matrix whose ${ij}^{th}$ entry is the average of the function $a_{ij}$ in the ball $B_{r}(x)$.\\ \begin{theorem} \label{MTT'} Let $M \subset B_{2}(0) \subset \mathbb{R}^{n+d}$ be an $n$-Ahlfors regular rectifiable set containing the origin, and let $\mu =$ \( \mathcal{H}^{n} \mres M\) be the Hausdorff measure restricted to $M$. Assume that $M$ satisfies the Poincar\'{e}-type inequality (\ref{eqp}). There exist $\epsilon_{0}= \epsilon_{0}(n,d, C_{M},C_{P}) >0$ and $\theta_{0} = \theta_{0}(\lambda) < 1$, such that if \begin{equation} \label{103} \int_{0}^{1} \left(\,\Xint-_{B_{r}(x)} |\pi_{T_{y}M} - A_{x,r}|^{2} \, d \mu \right) \frac{dr}{r} < \epsilon_{0}^{2} \quad \textrm{for} \,\, x \in M \cap B_{1}(0), \end{equation} where $|\pi_{T_{y}M} - A_{x,r}|$ denotes the Frobenius norm \footnote{ \hspace{0.1cm} $|\pi_{T_{y}M} - A_{x,r}|^{2} = \textrm{trace} \big( (\pi_{T_{y}M} - A_{x,r})^{2} \big) = \displaystyle \sum_{i,j=1}^{n+d} |a_{ij}(y) - (a_{ij})_{x,r}|^{2}$} of $\pi_{T_{y}M} - A_{x,r}$, then there exist an onto $K$-bi-Lipschitz map $g: \mathbb{R}^{n+d} \rightarrow \mathbb{R}^{n+d}$, with bi-Lipschitz constant $K = K(n,d, C_{M},C_{P})$, and an $n$-dimensional plane $\Sigma_{0}$, with the following properties: \begin{equation} \label{aa} g(z)= z \quad \textrm{when} \,\,\, d(z, \Sigma_{0}) \geq 2, \end{equation} and \begin{equation} \label{bb} |g(z)-z| \leq C_{0} \epsilon_{0} \quad \textrm{for} \,\,\, z \in \mathbb{R}^{n+d}, \end{equation} where $C_{0}= C_{0}(n,d,C_{M},C_{P})$. Moreover, \begin{equation} \label{cc} g(\Sigma_{0})\,\, \textrm{is a} \,\,\, C_{0} \epsilon_{0} \textrm{-Reifenberg flat set}, \end{equation} and \begin{equation} \label{contained} M \cap B_{\theta_{0}}(0) \subset g(\Sigma_{0}). 
\end{equation} \end{theorem} Notice that the conclusion of Theorem \ref{MTT'} states that $M$ is (locally) \emph{contained in} a bi-Lipschitz image of an $n$-plane instead of $M$ being exactly a (local) bi-Lipschitz image of an $n$-plane. This is very much expected, since we do not assume that $M$ is Reifenberg flat, and thus we have to deal with the fact that $M$ might have holes. However, if we assume, in addition to the hypotheses of Theorem \ref{MTT'}, that $M$ is Reifenberg flat, then we do obtain that $M$ is in fact (locally) a bi-Lipschitz image of an $n$-plane. We show this in this paper as a corollary to Theorem \ref{MTT'}.\\ A natural question is whether the hypotheses of Theorem \ref{MTT'}, that is, the Ahlfors regularity of $M$, the Poincar\'e inequality (\ref{eqp}), and the Carleson condition (\ref{103}), imply that $M$ is Reifenberg flat. An affirmative answer to this question would directly imply (by the paragraph above) that the conclusion of Theorem \ref{MTT'} should be that $M$ is \emph{exactly} a bi-Lipschitz image of an $n$-plane instead of $M$ being just \emph{contained in} a bi-Lipschitz image of an $n$-plane. A negative answer would show that the conclusion of Theorem \ref{MTT'} is the best that we can hope for. It is not surprising that the Poincar\'e inequality (\ref{eqp}) is the correct condition to explore in order to answer this question (which, as we discuss below, will turn out to be negative). In fact, it is already known that (\ref{eqp}) encodes geometric properties of the set $M$. \\ Let $(M, d_{0}, \mu)$ be a metric measure space, where $M \subset B_{2}(0)$ is an $n$-Ahlfors regular rectifiable set in $\mathbb{R}^{n+d}$, $\mu =$ \( \mathcal{H}^{n} \mres M\) is the measure that lives on $M$, and $d_{0}$ is the metric on $M$ which is the restriction of the standard Euclidean metric on $\mathbb{R}^{n+d}$. In \cite{M1}, the author proves that the Poincar\'e inequality (\ref{eqp}) implies that $M$ is quasiconvex. 
More precisely, \begin{definition} \label{defqc} A metric space $(X,d)$ is $\kappa_{1}$-quasiconvex if there exists a constant $\kappa_{1} \geq 1$ such that for any two points $x$ and $y$ in $X$, there exists a rectifiable curve $\gamma$ in $X$, joining $x$ and $y$, such that $\textrm{length}(\gamma) \leq \kappa_{1} \, d(x,y)$. \end{definition} \begin{theorem} (see \cite{M1} Theorem 5.5) \label{mt} \footnote{ Notice that Theorem 5.5 in \cite{M1} is stated and proved in the ambient space $\mathbb{R}^{n+1}$ (so $d=1$) and for $\lambda = 2$. However, the proof of Theorem 5.5 in \cite{M1} is independent of the co-dimension $d$ of $M$. Thus the exact same statement holds here in the higher co-dimension case, and the quasiconvexity constant $\kappa_{1}$ stays independent of $d$. Moreover, it is very easy to see that Theorem 5.5 in \cite{M1} still holds with arbitrary $\lambda \geq 1$, and in that case, $\kappa_{1}$ would also depend on $\lambda$. } Let $(M, d_{0}, \mu)$ be as discussed above. Suppose that $M$ satisfies the Poincar\'{e}-type inequality (\ref{eqp}). Then $(M, d_{0}, \mu)$ is $\kappa_{1}$-quasiconvex, with $\kappa_{1}= \kappa_{1}(n, \lambda, C_{M}, C_{P})$.\end{theorem} There are many Poincar\'e-type inequalities found in the literature that imply quasiconvexity (see for example \cite{Ch}, \cite{DJS}, \cite{K1}, \cite{K2}). To state a couple of the main ones, let $(X, d, \nu)$ be a measure space endowed with a metric $d$ and a positive complete Borel regular measure $\nu$ supported on $X$. Denote by $B^{X}_{r}(x)$ the metric ball in $X$ with center $x \in X$ and radius $r>0$. Moreover, assume that $0 < \nu(B_{r}^{X}(x)) < \infty$ for all $x \in X$ and $r>0$. \begin{definition} (\textbf{\textit{p}-Poincar\'e inequality})\\ Let $p \geq 1$. 
$(X, d, \nu)$ is said to admit a $p$-Poincar\'e inequality if there exist constants $\kappa\geq1$ and $\lambda\geq1$ such that for any measurable function $u: X \to \mathbb{R}$ and for any upper gradient $\rho$ (see Definition \ref{defuppgrad}) of $u$, the following holds \begin{equation} \label{eqp2} \Xint-_{B^{X}_{r}(x)} \left| u(y) - u_{B^{X}_{r}(x)} \right| \, d \nu(y) \leq \kappa \, r \, \left(\,\Xint-_{B^{X}_{\lambda r}(x)} \rho(y)^{p} \, d \nu(y) \right)^{\frac{1}{p}}, \end{equation} where $x \in X$, $r>0$, and $u_{B^{X}_{r}(x)} := \Xint-_{B^{X}_{r}(x)} u \, d \nu$. \end{definition} \begin{definition} (\textbf{\textit{Lip}-Poincar\'e inequality})\\ Let $p \geq 1$. $(X, d, \nu)$ is said to admit a $Lip$-Poincar\'e inequality if there exist constants $\kappa\geq1$ and $\lambda\geq1$ such that for every Lipschitz function $f$ on $X$, and for every $x \in X$ and $r>0$, we have \begin{equation} \label{eqp3} \Xint-_{B^{X}_{r}(x)} \left| f(y) - f_{B^{X}_{r}(x)}\right| \, d \nu(y) \leq \kappa \, r \left(\,\Xint-_{B^{X}_{\lambda r}(x)} (Lipf(y))^{p} \, d \nu(y) \right)^{\frac{1}{p}}, \end{equation} (see Definition \ref{deflip} for the definition of $Lipf$). \end{definition} These Poincar\'e inequalities are a priori different because the right hand side varies according to the notion of ``derivative'' used on the metric space. However, Keith has shown (see \cite{K1}, \cite{K2}) that if $(X, d, \nu)$ is a complete metric measure space with $\nu$ a doubling measure, then $(\ref{eqp2})$ and $(\ref{eqp3})$ are equivalent. It turns out that the Poincar\'e-type inequality (\ref{eqp}) is also related to (\ref{eqp2}) and (\ref{eqp3}).\\ In this paper, we take $(M, d_{0}, \mu)$ as described above and prove that in this setting, the Poincar\'e-type inequalities (\ref{eqp}) (or a more general version of it, see (\ref{eqp1'}) below), (\ref{eqp2}), and (\ref{eqp3}) are equivalent. 
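To orient the reader, consider the model Euclidean case $X = \mathbb{R}^{n}$ with $\nu$ the Lebesgue measure. If $u \in C^{1}(\mathbb{R}^{n})$, then $|\nabla u|$ is an upper gradient of $u$: indeed, for any rectifiable curve $\gamma : [0, l_{\gamma}] \to \mathbb{R}^{n}$ parametrized by arc length,
\begin{equation*}
|u(\gamma(l_{\gamma})) - u(\gamma(0))| = \left| \int_{0}^{l_{\gamma}} \nabla u(\gamma(t)) \cdot \gamma'(t) \, dt \right| \leq \int_{0}^{l_{\gamma}} |\nabla u(\gamma(t))| \, dt = \int_{\gamma} |\nabla u| \, ds,
\end{equation*}
since $|\gamma'(t)| \leq 1$ for almost every $t$. In this case, (\ref{eqp2}) with $\rho = |\nabla u|$ is the classical $p$-Poincar\'e inequality on Euclidean balls.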
\begin{theorem} \label{epi} Let $p \geq 1$, and let $(M, d_{0}, \mu)$ be a metric measure space, where $M \subset B_{2}(0)$ is an $n$-Ahlfors regular rectifiable set in $\mathbb{R}^{n+d}$, $\mu =$ \( \mathcal{H}^{n} \mres M\) is the measure that lives on $M$, and $d_{0}$ is the metric on $M$ which is the restriction of the standard Euclidean metric on $\mathbb{R}^{n+d}$. Then, the following are equivalent: \begin{enumerate} [(i)] \item There exist constants $\kappa\geq1$ and $\lambda\geq1$ such that for any measurable function $u: M \to \mathbb{R}$, for any upper gradient $\rho$ of $u$, and for every $x \in M$ and $r>0$, we have \begin{equation} \label{eqp2'} \Xint-_{B_{r}(x)} \left| u(y) - u_{x,r} \right| \, d \mu(y) \leq \kappa \, r \, \left(\,\Xint-_{B_{\lambda r}(x)} \rho(y)^{p} \, d \mu(y) \right)^{\frac{1}{p}}. \end{equation} \item There exist constants $\kappa \geq 1$, and $\lambda \geq 1$, such that for every Lipschitz function $f$ on $M$, and for every $x \in M$ and $r>0$, we have \begin{equation} \label{eqp3'} \Xint-_{B_{r}(x)} \left| f(y) - f_{x,r}\right| \, d \mu(y) \leq \kappa \, r \left(\,\Xint-_{B_{\lambda r}(x)} (Lipf(y))^{p} \, d \mu(y) \right)^{\frac{1}{p}}. \end{equation} \item There exist constants $\kappa \geq 1$, and $\lambda \geq 1$, such that for every Lipschitz function $f$ on $\mathbb{R}^{n+d}$, and for every $x \in M$ and $r>0$, we have \begin{equation} \label{eqp1'} \Xint-_{B_{r}(x)} \left| f(y) - f_{x,r}\right| \, d \mu(y) \leq \kappa \, r \left(\,\Xint-_{B_{\lambda r}(x)} (|\nabla^{M}f|(y))^{p} \, d \mu(y) \right)^{\frac{1}{p}}. \end{equation} \end{enumerate} \end{theorem} Theorem \ref{epi} is interesting in its own right, as it shows that the Poincar\'e inequality (\ref{eqp}) \big (or more generally, (\ref{eqp1'}) \big) is equivalent to the other usual Poincar\'e-type inequalities on metric spaces that imply quasiconvexity. 
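Let us point out that the implication (i) $\Rightarrow$ (ii) in Theorem \ref{epi} is immediate: it is a standard fact (see for instance \cite{Ch}) that for every Lipschitz function $f$ on a metric space, the local Lipschitz constant function $Lipf$ is an upper gradient of $f$. Hence, (\ref{eqp3'}) follows by applying (\ref{eqp2'}) with $u = f$ and $\rho = Lipf$. The substance of Theorem \ref{epi} lies in the remaining implications.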
Moreover, Theorem \ref{epi} opens the door to many examples of spaces satisfying the Poincar\'e inequality (\ref{eqp1'}) as there are many examples in the literature of spaces satisfying the $p$-Poincar\'e and $Lip$-Poincar\'e inequalities (see for example \cite{BS}, \cite{HK}, \cite{BB}, \cite{La}). This allows us to get an example of a set that is not Reifenberg flat, and yet satisfies all the hypotheses of Theorem \ref{MTT'}. \begin{theorem} \label{construct} There exists a non-Reifenberg flat, $n$-Ahlfors regular, rectifiable set $M \subset B_{2}(0) \subset \mathbb{R}^{n+d}$ that satisfies all the hypotheses of Theorem \ref{MTT'}. \end{theorem} Theorem \ref{construct} shows that the hypotheses of Theorem \ref{MTT'} on the set $M$ are not strong enough to guarantee its Reifenberg flatness, and thus the conclusion of Theorem \ref{MTT'} is optimal.\\ The paper is structured as follows: in Section \ref{Pre}, we introduce some definitions and preliminaries. In Section \ref{secMT}, we prove Theorem \ref{MTT'}. Moreover, we prove that Theorem \ref{Th1Merhej} follows as a corollary from Theorem \ref{MTT'}. Section \ref{PIQ} is dedicated to proving that the Poincar\'e inequality (\ref{eqp1'}) is equivalent to the $p$-Poincar\'e and the $Lip$-Poincar\'e inequalities. Finally, in the last section, we prove Theorem \ref{construct} by constructing a concrete example of a set that is not Reifenberg flat, yet satisfies the hypotheses of Theorem \ref{MTT'}. \section{Preliminaries} \label{Pre} Throughout this paper, our ambient space is $\mathbb{R}^{n+d}$. $B_{r}(x)$ denotes the open ball with center $x$ and radius $r$ in $\mathbb{R}^{n+d}$, while $\bar{B}_{r}(x)$ denotes the closed ball with center $x$ and radius $r$ in $\mathbb{R}^{n+d}$. $d(\cdot,\cdot)$ denotes the distance function from a point to a set. $\mathcal{H}^{n}$ is the $n$-Hausdorff measure. Finally, constants may vary from line to line, and the parameters they depend on will always be specified in brackets. 
For example, $C(n,d)$ will be a constant that depends on $n$ and $d$ that may vary from line to line. \\ We begin with the definitions needed from Section \ref{secMT} onwards. \begin{definition} Let $M \subset \mathbb{R}^{N_{1}}$. A function $ f: M \rightarrow \mathbb{R}^{N_{2}}$ is called \emph{Lipschitz} if there exists a constant $K>0$, such that for all $x, \, y \in M$ we have \begin{equation} \label{lip1} |f(x) - f(y)| \leq K \, |x-y|. \end{equation} The smallest such constant is called the \emph{Lipschitz constant} and is denoted by $L_{f}$. \end{definition} \begin{definition} A function $ f: \mathbb{R}^{N_{1}} \rightarrow \mathbb{R}^{N_{2}}$ is called $K$-\emph{bi-Lipschitz} if there exists a constant $K>0$, such that for all $x, \, y \in \mathbb{R}^{N_{1}}$ we have \begin{center} $K^{-1} |x-y| \leq |f(x) - f(y)| \leq K \, |x-y|.$ \end{center} \end{definition} Let us introduce the class of \emph{n-rectifiable} sets, and the definition of approximate tangent planes. \begin{definition} \label{rect} Let $M \subset \mathbb{R}^{n+d}$ be an $\mathcal{H}^{n}$-measurable set. $M$ is said to be countably \emph{n-rectifiable} if \begin{center}$ M \subset M_{o} \cup \left(\displaystyle \bigcup_{i=1}^{\infty}f_{i}(A_{i})\right)$, \end{center} where $ \mathcal{H}^{n}(M_{o}) = 0$, and $ f_{i} : A_{i} \rightarrow \mathbb{R}^{n+d}$ is Lipschitz, and $A_{i} \subset \mathbb{R}^{n}$, for $i = 1, 2, \ldots$ \end{definition} \begin{definition} \label{ats} Let $M$ be an $\mathcal{H}^{n}$-measurable subset of $\mathbb{R}^{n+d}$. We say that the $n$-dimensional subspace $P(x)$ is the \emph{approximate tangent space of $M$ at $x$}, if \begin{equation} \lim_{h \to 0} h^{-n} \int_{M} {f \left (h^{-1}(y-x)\right) } \, d \mathcal{H}^{n}(y) = \int_{P(x)} f(y) \, d\mathcal{H}^{n}(y) \quad \forall f \in C^1_c(\mathbb{R}^{n+d}, \mathbb{R}). \end{equation} \end{definition} \begin{remark} \label{remunitnor} Notice that if it exists, $P(x)$ is unique. 
From now on, we shall denote the tangent space of $M$ at $x$ by $T_{x}M$. Moreover, in the special case when $M$ has co-dimension 1, one can define the unit normal $\nu$ to $M$ at the point $x \in M$ to be the unit normal to $T_{x}M$. Thus, the unit normal $\nu$ exists at every point $x \in M$ that admits a tangent plane, and of course, there are two choices for the direction of the unit normal. \end{remark} It is well known (see \cite{Si}, Theorem 11.6) that $n$-rectifiable sets have tangent planes at $\mathcal{H}^{n}$-almost every point in the set. \begin{definition} \label{deftander} Let $f$ be a real valued Lipschitz function on $\mathbb{R}^{n+d}$. The tangential derivative of $f$ at the point $y \in M$ is denoted by $\nabla^{M}f(y)$ and defined as follows: \begin{equation} \label{td'} \nabla^{M}f(y) = \nabla (f|_{L}) (y)\end{equation} where $L := y + T_{y}M$, $f|_{L}$ is the restriction of $f$ on the affine subspace $L$, and $\nabla(f|_{L})$ is the usual gradient of $f|_{L}$.\\ In the special case when $f$ is a smooth function on $\mathbb{R}^{n+d}$, we have \begin{equation} \label{td} \nabla^{M}f(y) = \pi_{T_{y}M} (\nabla f (y)),\end{equation} where $\pi_{T_{y}M}$ is the orthogonal projection of $\mathbb{R}^{n+d}$ on $T_{y}M$, and $\nabla f$ is the usual gradient of $f$. \end{definition} Note that $\nabla^{M}f(y)$ exists at $\mathcal{H}^{n}$-almost every point in $M$.\\ We also need to define the notion of \emph{Reifenberg flatness}: \begin{definition} Let $M$ be an $n$-dimensional subset of $\mathbb{R}^{n+d}$. We say that $M$ is $\epsilon$-Reifenberg flat for some $\epsilon >0$, if for every $x \in M$ and $0 < r \leq \frac{1}{10^{4}}$, we can find an $n$-dimensional affine subspace $P(x,r)$ of $\mathbb{R}^{n+d}$ that contains $x$ such that \begin{equation*} d(y, P(x,r)) \leq \epsilon r \quad \textrm{for} \,\,y \in M \cap B_{r}(x), \end{equation*} and \begin{equation*} d(y, M) \leq \epsilon r \quad \textrm{for} \,\, y \in P(x,r) \cap B_{r}(x). 
\end{equation*} \end{definition} \begin{remark} Notice that the above definition is only interesting if $\epsilon$ is small, since any set is 1-Reifenberg flat. \end{remark} In the proof of our Theorem \ref{MTT'}, we need to measure the distance between two $n$-dimensional planes. We do so in terms of normalized local Hausdorff distance: \begin{definition} Let $x$ be a point in $\mathbb{R}^{n+d}$ and let $r >0$. Consider two closed sets $E,\,F \subset \mathbb{R}^{n+d}$ such that both sets meet the ball $B_{r}(x)$. Then, \begin{equation*} d_{x,r}(E,F) = \frac{1}{r} \, \textrm{Max} \left\{ \sup_{y \in E \cap B_{r}(x)} \textrm{dist}(y,F) \,\,; \sup_{y \in F \cap B_{r}(x)} \textrm{dist}(y,E) \right\} \end{equation*} is called the normalized Hausdorff distance between $E$ and $F$ in $B_{r}(x)$. \end{definition} Let us recall the definition of an $n$-Ahlfors regular measure and an $n$-Ahlfors regular set: \begin{definition} Let $M \subset \mathbb{R}^{n+d}$ be a closed, $\mathcal{H}^{n}$-measurable set, and let $\mu=$ \( \mathcal{H}^{n} \mres M\) be the $n$-Hausdorff measure restricted to $M$. We say that $\mu$ is $n$-Ahlfors regular if there exists a constant $C_{M} \geq 1$, such that for every $x \in M$ and $ 0< r \leq 1$, we have \begin{equation} \label{alfh} C_{M}^{-1} \, r^{n} \leq \mu(B_{r}(x)) \leq C_{M} \, r^{n}.\end{equation} In such a case, the set $M$ is called an $n$-Ahlfors regular set, and $C_{M}$ is referred to as the Ahlfors regularity constant. \end{definition} Let us now move to definitions and notations needed in Sections \ref{PIQ} and \ref{MCSHH}. In these sections, $(X,d)$ denotes a space $X$ endowed with a metric $d$. $B^{X}_{r}(x)$ denotes the open metric ball of center $x \in X$ and radius $r>0$. Moreover, $(X, d, \nu)$ denotes a measure space endowed with a metric $d$ and a positive complete Borel regular measure $\nu$ supported on $X$ such that $0 < \nu(B_{r}^{X}(x)) < \infty$ for all $x \in X$ and $r>0$. 
\begin{definition} Let $(X,d, \nu)$ be a metric measure space. We say that $\nu$ is a doubling measure if there is a constant $\kappa_{0} >0$ such that \begin{equation*} \nu\left(B^{X}_{2r}(x)\right)\leq \kappa_{0} \, \nu\left(B^{X}_{r}(x)\right), \end{equation*} where $x \in X$, $r>0$. \end{definition} In Sections \ref{PIQ} and \ref{MCSHH}, a curve $\gamma$ in a metric space $(X,d)$ is a continuous non-constant map from a compact interval $I \subset \mathbb{R}$ into $X$. $\gamma$ is said to be rectifiable if it has finite length, where the latter is denoted by $l_{\gamma}$. Thus, any rectifiable curve can be parametrized by arc length, and we will always assume that it is.\\ Let us now define the notions of upper gradients, $p$-weak upper gradients, and the local Lipschitz constant function. \begin{definition} \label{defuppgrad} A non-negative Borel function $\rho: X \rightarrow [0, \infty]$ is said to be an \emph{upper gradient} of a function $u: X \rightarrow \mathbb{R}$ if \begin{equation*} |u(\gamma(0)) - u(\gamma(l_{\gamma}))| \leq \int_{\gamma} \rho \, ds, \end{equation*} for any rectifiable curve $\gamma : [0, l_{\gamma}] \to X$. \end{definition} \begin{definition} Let $p \geq 1$ and let $\Gamma$ be a family of rectifiable curves on $X$. We define the \emph{$p$-modulus} of $\Gamma$ by \begin{equation*} \textrm{Mod}_{p}(\Gamma) = \inf \int_{X} g^{p} \, d \nu \end{equation*} where the infimum is taken over all nonnegative Borel functions $g$ such that $\int_{\gamma} g \, ds \geq 1$ for all $\gamma \in \Gamma$. \end{definition} \begin{definition} A non-negative measurable function $\rho: X \rightarrow [0, \infty]$ is said to be a \emph{p-weak upper gradient} of a function $u: X \rightarrow \mathbb{R}$ if \begin{equation*} |u(\gamma(0)) - u(\gamma(l_{\gamma}))| \leq \int_{\gamma} \rho \, ds, \end{equation*} for $p$-a.e. rectifiable curve $\gamma : [0, l_{\gamma}] \to X$ (that is, with the exception of a curve family of zero $p$-modulus). 
\end{definition} \begin{definition} \label{deflip} Let $f$ be a Lipschitz function on a metric measure space $(X,d,\nu)$. The local Lipschitz constant function of $f$ is defined as follows \begin{equation} \label{lip} Lipf(x) = \lim_{r \to 0} \sup_{y \in B^{X}_{r}(x), \,y \neq x} \frac{|f(y) - f(x)|}{d(y,x)}, \,\,\,\,\, x \in X,\end{equation} where $B_{r}^{X}(x)$ denotes the metric ball in $X$, center $x$, and radius $r$. \end{definition} \begin{remark} Let us note here that for any Lipschitz function $f$, $L_{f}$ denotes the usual Lipschitz constant (see sentence below (\ref{lip1})), whereas $Lipf(\cdot)$ stands for the local Lipschitz constant function defined above. \end{remark} \section{A bi-Lipschitz parameterization of $M$} \label{secMT} The main goal in this section is to prove Theorem \ref{MTT'}. We begin with three linear algebra lemmas needed to prove the theorem, as they can be stated and proved independently. \begin{lemma} \label{l4} Let $V$ be an $n$-dimensional subspace of $\mathbb{R}^{n+d}$. Denote by $\pi_{V}$ the orthogonal projection on $V$. Then, there exists a $\delta_{0}= \delta_{0}(n,d) > 0$, such that for any $\delta \leq \delta_{0}$, and for any linear operator $L$ on $\mathbb{R}^{n+d}$ such that \begin{equation} \label{l41} || \pi_{V} - L || \leq \delta , \end{equation} where $|| . ||$ denotes the induced operator norm, $L$ has exactly $n$ eigenvalues $\lambda_{1}, \ldots , \lambda_{n}$ such that \begin{equation} \label{l42'} |\lambda_{j}| \geq 1 - (n+d) \, \delta \geq \frac{3}{4}, \quad \forall \, j \in \{ 1 , \ldots, n\}, \end{equation} and exactly $d$ eigenvalues $\lambda_{n+1}, \ldots , \lambda_{n+d}$, such that \begin{equation} \label{l42} |\lambda_{j}| \leq (n+d) \, \delta \leq \frac{1}{4}, \quad \forall \, j \in \{ n+1 , \ldots, n+d\}. 
\end{equation} \end{lemma} \begin{proof} Since $\pi_{V}$ is an orthogonal projection, there exists an orthonormal basis $\{ w_{1} , \ldots , w_{n+d} \} $ of $\mathbb{R}^{n+d}$ such that the matrix representation of $\pi_{V}$ in this basis is \[ \pi_{V} = \begin{pmatrix} Id_{n} & 0 \\ 0 & 0 \end{pmatrix} \] where $Id_{n}$ denotes the $n \times n$ identity matrix. \\ \\ Let $\delta \leq \delta_{0}$ (with $\delta_{0}$ to be determined later), and suppose $L$ is as in the statement of the lemma. Let $L = (l_{ij})_{ij}$ be the matrix representation of $L$ in the basis $\{ w_{1} , \ldots , w_{n+d} \} $. Then, by (\ref{l41}), we have \begin{equation*} |\pi_{V} w_{j} - L w_{j} |^{2} \leq \delta^{2}, \quad \forall \, j \in \{ 1 \ldots n+d \}, \end{equation*} that is, \begin{equation} \label{l43} |1 - l_{jj}|^{2} + \sum_{i\neq j} |l_{ij}|^{2} \leq \delta^{2}, \quad \forall \, j \in \{ 1 \ldots n \}, \end{equation} and \begin{equation} \label{l44} \sum_{i=1} ^{n+d} |l_{ij}|^{2} \leq \delta^{2}, \quad \forall \, j \in \{ n+1 \ldots n+d \}. \end{equation} Now, for each $ j \in \{ 1 \ldots n+d\}$, consider the closed disk $D_{j}$ in the complex plane, of center $(l_{jj}, 0)$ and radius $R_{j} = \displaystyle \sum_{i \neq j} |l_{ij}|$. Notice that by (\ref{l43}), (\ref{l44}), and the fact that $\delta \leq \delta_{0}$, we have \begin{equation} \label{l45'} |1 - l_{jj} | \leq \delta \leq \delta_{0}, \quad \forall \, j \in \{ 1 \ldots n \}, \end{equation} \begin{equation} \label{l45} |l_{jj}| \leq \delta \leq \delta_{0}, \quad \forall \, j \in \{ n+1 \ldots n+d \}, \end{equation} and \begin{equation} \label{l46} R_{j} \leq (n+d-1) \delta \leq (n+d-1) \, \delta_{0}, \quad \forall \, j \in \{1 \ldots n+d \}. \end{equation} Choosing $\delta_{0}$ such that $(n+d-1) \delta_{0} \leq \displaystyle \frac{1}{8}$, we can guarantee that $\displaystyle \bigcup_{j=1}^{n}D_{j}$ is disjoint from $\displaystyle \bigcup_{j=n+1}^{n+d}D_{j}$. 
Thus, by the Gershgorin circle theorem (see \cite{LeV}, p.277-278), $\displaystyle \bigcup_{j=1}^{n}D_{j}$ contains exactly $n$ eigenvalues of $L$, and $\displaystyle \bigcup_{j=n+1}^{n+d}D_{j}$ contains exactly $d$ eigenvalues of $L$. The lemma follows from (\ref{l45'}), (\ref{l45}) and (\ref{l46}). \end{proof} \textbf{Notation:}\\ Let $V$ be an affine subspace of $\mathbb{R}^{n+d}$ of dimension $k$, $ k \in \{ 0, \dots, n-1\}$. Denote by $N_{\delta}(V)$ the $\delta$-neighborhood of $V$, that is, \begin{equation*} N_{\delta}(V) = \left\{ x \in \mathbb{R}^{n+d} \,\,\textrm{such that} \,\, d(x,V) < \delta \right\}. \end{equation*} \begin{lemma} \label{l1} (see \cite{M1}, Lemma 3.1) \footnote{Notice that Lemma 3.1 in \cite{M1} is stated and proved in the ambient space $\mathbb{R}^{n+1}$, whereas Lemma \ref{l1} here has $\mathbb{R}^{n+d}$ as the ambient space. However, one can very easily adapt the same proof of Lemma 3.1 in \cite{M1} to this higher co-dimension case here, while noticing that $c_{0}$ in the latter case should also depend on the co-dimension $d$.} Let $M$ be an $n$-Ahlfors regular subset of $\mathbb{R}^{n+d}$, and let $\mu =$ \( \mathcal{H}^{n} \mres M\) be the Hausdorff measure restricted to $M$. There exists a constant $c_{0} = c_{0}(n,d, C_{M}) \leq \displaystyle \frac{1}{2}$ such that the following is true: Fix $x_{0} \in M$, $r_{0} < 1$ and let $r = c_{0} \, r_{0}$. Then, for every $V$, an affine subspace of $\mathbb{R}^{n+d}$ of dimension $0 \leq k \leq n-1$, there exists $x \in M \cap B_{r_{0}}(x_{0})$ such that $x \notin N_{11 r}(V)$ and $B_{r}(x) \subset B_{2 r_{0}}(x_{0})$. \end{lemma} \begin{lemma} \label{l3} (see \cite{M1} Lemma 3.3) \footnote{Notice that Lemma 3.3 in \cite{M1} is stated and proved in the ambient space $\mathbb{R}^{n+1}$, whereas Lemma \ref{l3} here has $\mathbb{R}^{n+d}$ as the ambient space. However, the proof of Lemma 3.3 in \cite{M1} is in fact independent of the co-dimension $d$ of $M$. 
Thus the exact same proof holds here, and the constant $K_{1}$ stays independent of $d$.} Fix $R>0$, and let $\{u_{1}, \ldots u_{n} \}$ be $n$ vectors in $\mathbb{R}^{n+d}$. Suppose there exists a constant $K_{0} >0$ such that \begin{equation} \label{44'} |u_{j}| \leq K_{0} \, R \quad \forall j \in \{1,\ldots, n\}. \end{equation} Moreover, suppose there exists a constant $0 < k_{0} < K_{0}$, such that \begin{equation} \label{44}|u_{1}|\geq k_{0} \, R,\end{equation} and \begin{equation} \label{45} u_{j} \notin N_{k_{0}R}\big(span\{u_{1}, \ldots u_{j-1}\} \big) \quad \forall j \in \{2,\ldots, n\}. \end{equation} Then, every vector $v \in V:= span\{u_{1}, \ldots u_{n}\}$ can be written uniquely as \begin{equation} v = \sum_{j=1}^{n} \beta_{j}u_{j},\end{equation} where \begin{equation} \label{46} |\beta_{j}| \,\leq K_{1}\frac{1}{R} \, |v|, \quad \forall j \in \{1,\ldots, n\} \end{equation} with $K_{1}$ being a constant depending only on $n$, $k_{0}$, and $K_{0}$. \end{lemma} Throughout the rest of the paper, $M$ denotes an $n$-Ahlfors regular rectifiable subset of $\mathbb{R}^{n+d}$ and $\mu =$ \( \mathcal{H}^{n} \mres M\) denotes the Hausdorff measure restricted to $M$. The average of a function $f$ on the ball $B_{r}(x)$ is denoted by \begin{equation} \label{average} f_{x,r}= \Xint-_{B_{r}(x)} f \, d \mu(y) = \displaystyle \frac{1}{\mu(M \cap B_{r}(x))} \int_{B_{r}(x)} f \, d \mu(y). \end{equation} We recall the statement of Theorem \ref{MTT'}: if $M$ satisfies the Poincar\'e-type condition (\ref{eqp}), and if the Carleson-type condition (\ref{103}) on the oscillation of the tangent planes to $M$ is satisfied, then $M$ is contained in a bi-Lipschitz image of an $n$-dimensional plane.\\ To prove this theorem, we follow steps similar to those used in \cite{M1} to prove the co-dimension 1 case (see Theorem 1.5 in \cite{M1}) which is stated as Theorem \ref{Th1Merhej} in this paper. 
First, we define what we call the $\alpha$-numbers \begin{equation} \label{a'} \alpha(x,r) := \left(\,\Xint-_{B_{r}(x)} |\pi_{T_{y}M} - A_{x,r}|^{2} \, d \mu \right)^{\frac{1}{2}}, \end{equation} where $x \in M$, and $0 < r \leq \displaystyle \frac{1}{10}$, $\pi_{T_{y}M}$ has $\big ( a_{ij}(y) \big)_{ij}$ as its matrix representation in the standard basis of $\mathbb{R}^{n+d}$, and $A_{x,r}= \big( (a_{ij})_{x,r} \big)_{ij}$ is the matrix whose ${ij}^{th}$ entry is the average of the function $a_{ij}$ in the ball $B_{r}(x)$.\\ These numbers are the key ingredient to proving our theorem. In Lemma \ref{l5}, we show that the Carleson condition (\ref{103}) implies that these numbers are small at every point $x \in M$ and every scale $0 < r \leq \frac{1}{10}$. Moreover, for every point $x \in M$, the series $\displaystyle \sum_{j=1}^{\infty} \alpha^2(x, 10^{-j})$ is finite. Then, in Theorem \ref{t1}, we use the Poincar\'e-type inequality to get an $n$-plane $P_{x,r}$ at every point $x \in M$ and every scale $0<r \leq\frac{1}{10 \lambda}$ such that the distance (in integral form) from $M \cap B_{r}(x)$ to $P_{x,r}$ is bounded by $\alpha(x,\lambda r)$. This means, by Lemma \ref{l5}, that those distances are small, and for a fixed point $x$, when we add these distances at the scales $ 10^{-j}$ for $j \in \mathbb{N}$, the resulting series is finite \footnote{ A note for the interested reader: Theorem \ref{t1} implies that the series $\displaystyle \sum_{j=1}^{\infty} \beta_{1}^2(x, 10^{-j})$ is finite. See \cite{M1} on how this relates to the $\beta_{1}$-numbers, and the theorems found in \cite{DT1} that involve a Carleson condition on the $\beta_{1}$-numbers that guarantees a bi-Lipschitz parameterization of the set.}. Theorem \ref{t1} is the key point that allows us to use the bi-Lipschitz parameterization that G. David and T. Toro construct in \cite{DT1}. 
In fact, what they do is construct approximating $n$-planes, and prove that for any two points that are close together, the two planes associated to these points at the same scale, or at two consecutive scales, are close in the Hausdorff distance sense. From there, they construct a bi-H\"older parameterization for $M$. Then, they show that the sum of these distances at scales $ 10^{-j}$ for $j \in \mathbb{N}$ is finite (uniformly for every $x \in M$). This is what is needed for their parameterization to be bi-Lipschitz (see Theorem \ref{t2} below and the definition before it). Thus, the rest of the proof is devoted to using Theorem \ref{t1} in order to prove the compatibility conditions between the approximating planes mentioned above.\\ Note that, in the process of proving Theorem \ref{MTT'}, we find several parts of the proof very similar to the proof of the co-dimension 1 case found in \cite{M1} (see Theorem 1.5 in \cite{M1} or Theorem \ref{Th1Merhej} in this paper). In fact, most of the differences in the proof happen in Lemma \ref{l5} and Theorem \ref{t1}, with the most important difference being in the latter. The rest of the proof follows closely the proof of the co-dimension 1 case. Thus, in this paper we do as follows: first, we prove Lemma \ref{l5} and Theorem \ref{t1} and include all the details. Then, for the rest of the proof (that is, introducing the David and Toro bi-Lipschitz construction, and proving the compatibility conditions between the approximating planes that allow us to use this construction), we only give an outline of the main ideas, and leave the smaller details and tedious calculations out. However, in each place where the details are omitted, we refer the reader to the parts of the proof of Theorem 1.5 in \cite{M1} where they can be found. 
That being said, this part of the proof of Theorem \ref{MTT'} still has enough details so that the reader understands all the steps needed to get the bi-Lipschitz parameterization of $M$, and the intuition behind them. Moreover, the way the proof is presented here includes all the information that we need from the construction of the bi-Lipschitz parameterization of $M$ to prove the corollaries that follow from Theorem \ref{MTT'}.\\ Let us begin with Lemma \ref{l5} that decodes the Carleson condition (\ref{103}). \begin{lemma} \label{l5} Let $M \subset B_{2}(0)$ be an $n$-Ahlfors regular rectifiable set containing the origin, and let $\mu =$ \( \mathcal{H}^{n} \mres M\) be the Hausdorff measure restricted to $M$. Let $\epsilon > 0$, and suppose that \begin{equation} \label{0} \int_{0}^{1} \left(\,\Xint-_{B_{r}(x)}|\pi_{T_{y}M} - A_{x,r}|^{2} \, d \mu \right) \frac{dr}{r} < \epsilon^{2}, \quad \forall \, x \in M. \end{equation} Then, for every $x \in M$, we have \begin{equation} \label{a} \sum_{k=1}^{\infty} \alpha^{2}(x, 10^{-k}) \leq C \, \epsilon^{2} ,\end{equation} where the $\alpha$-numbers are as defined in (\ref{a'}) and $C = C(n, C_{M})$. Moreover, for every $x \in M$ and $0 < r \leq \displaystyle \frac{1}{10}$, we have \begin{equation} \label{a''} \alpha(x,r) \leq C \, \epsilon ,\end{equation} where $C = C(n, C_{M})$.\end{lemma} \begin{proof} Let $\epsilon > 0$ and suppose that (\ref{0}) holds. By the definition of the Frobenius norm, (\ref{0}) becomes \begin{equation} \label{l11} \sum_{i,j=1}^{n+d} \int_{0}^{1} \left(\,\Xint-_{B_{r}(x)}| a_{ij}(y) - (a_{ij})_{x,r}|^{2} \, d \mu \right) \frac{dr}{r} < \epsilon^{2}, \quad \forall \, x \in M,\end{equation} where $\pi_{T_{y}M} = \big(a_{ij}(y)\big)_{ij}$ and $A_{x,r} = \big((a_{ij})_{x,r}\big)_{ij}$.\\ \\ Fix $x \in M$, and fix $i, \, j \in \{ 1, \ldots n+d\}$. 
For all $a \in \mathbb{R}$, and for all $0 < r_{0} \leq 1$, we have \begin{equation} \label{10} \Xint-_{B_{r_{0}}(x)} |a_{ij}(y) - (a_{ij})_{x,r_{0}}|^{2} \, d \mu \leq \,\Xint-_{B_{r_{0}}(x)} |a_{ij}(y) - a|^{2} \, d \mu,\end{equation} since the average $(a_{ij})_{x,r_{0}}$ of $a_{ij}$ in the ball $B_{r_{0}}(x)$ minimizes, over $a \in \mathbb{R}$, the right-hand side of (\ref{10}).\\ To prove (\ref{a}), we note that \begin{equation} \label{17} \sum_{k=1}^{\infty} \Xint-_{B_{ 10^{-k}}(x)} |a_{ij}(y) - (a_{ij})_{x, 10^{-k}}|^{2} \, d \mu \leq C(n,C_{M}) \, \sum_{k=0}^{\infty} \int_{ 10^{-k-1}}^{ 10^{-k}} \Xint-_{B_{r}(x)} |a_{ij}(y) - (a_{ij})_{x,r}|^{2} \, d \mu \, \frac{dr}{r}.\end{equation} This is a straightforward computation that uses (\ref{10}) and the Ahlfors regularity of $\mu$, and is carried out in detail in \cite{M1} (see \cite{M1}, Lemma 4.1, proof of inequality (4.6)). Moreover, it is trivial to check that \begin{equation} \label{baa} \sum_{k=0}^{\infty} \int_{ 10^{-k-1}}^{ 10^{-k}} \Xint-_{B_{r}(x)} |a_{ij}(y) - (a_{ij})_{x,r}|^{2} \, d \mu \, \frac{dr}{r} = \int_{0}^{1} \left(\,\Xint-_{B_{r}(x)} |a_{ij}(y) - (a_{ij})_{x,r}|^{2} \, d \mu \right) \frac{dr}{r}.\end{equation} Thus, plugging (\ref{baa}) in (\ref{17}), we get \begin{equation} \label{l120} \sum_{k=1}^{\infty} \Xint-_{B_{ 10^{-k}}(x)} |a_{ij}(y) - (a_{ij})_{x, 10^{-k}}|^{2} \, d \mu \leq C(n,C_{M})\, \int_{0}^{1} \left(\,\Xint-_{B_{r}(x)} |a_{ij}(y) - (a_{ij})_{x,r}|^{2} \, d \mu \right) \frac{dr}{r}. \end{equation} Since (\ref{l120}) is true for every $ i, \, j \in \{ 1, \ldots n+d\}$, we can take the sum over $i$ and $j$ on both sides of (\ref{l120}), and using (\ref{a'}) and (\ref{l11}), we get \begin{equation*} \sum_{k=1}^{\infty} \alpha^{2}(x, 10^{-k}) \leq C(n,C_{M}) \, \epsilon^{2} ,\end{equation*} which is exactly (\ref{a}). To prove inequality (\ref{a''}), fix $x \in M$ and $0 < r \leq \displaystyle \frac{1}{10}$. 
Then, there exists $k \geq 1$ such that \begin{equation} \label{111} 10^{-k-1} < r \leq 10^{-k}, \quad \textrm{that is} \quad \frac{1}{ 10^{-k}} \leq \frac{1}{r} < \frac{1}{ 10^{-k-1}}. \end{equation} Now, fix $i, \, j \in \{ 1, \ldots n+d\}$. Using inequality (\ref{10}) for $a = (a_{ij})_{x, 10^{-k}}$ and $r_{0} = r$, (\ref{111}), and the fact that $\mu$ is Ahlfors regular, we get that \begin{equation} \label{l121}\Xint-_{B_{r}(x)} |a_{ij}(y) - (a_{ij})_{x,r}|^{2} \, d \mu \leq C(n,C_{M}) \, \Xint-_{B_{ 10^{-k}}(x)} |a_{ij}(y) - (a_{ij})_{x, 10^{-k}}|^{2} \, d \mu.\end{equation} Summing over $i$ and $j$ on both sides of (\ref{l121}), and using the definition of the Frobenius norm together with (\ref{a'}), we get \begin{equation} \label{baaaa} \alpha^{2}(x,r) \leq C(n,C_{M}) \, \alpha^{2}(x, 10^{-k}). \end{equation} Taking the square root on both sides of (\ref{baaaa}) and using (\ref{a}) finishes the proof of (\ref{a''}). \end{proof} Next, we use the Poincar\'e inequality to get good approximating $n$-planes for $M$ at every point $x \in M$ and at every scale $0 < r \leq \frac{1}{10 \lambda}$. In this context, a good approximating $n$-plane at the point $x \in M$ and radius $r$ is a plane $P_{x,r}$ such that the distance (in integral form) from $M \cap B_{r}(x)$ to $P_{x,r}$ is small. \begin{theorem} \label{t1} Let $M \subset B_{2}(0)$ be an $n$-Ahlfors regular rectifiable set containing the origin, and let $\mu =$ \( \mathcal{H}^{n} \mres M\) be the Hausdorff measure restricted to $M$. Assume that $M$ satisfies the Poincar\'{e}-type inequality (\ref{eqp}). 
There exists $\epsilon_{1} = \epsilon_{1}(n,d,C_{M}) > 0$ such that for every $0 < \epsilon \leq \epsilon_{1}$, if \begin{equation} \label{again} \int_{0}^{1} \left(\,\Xint-_{B_{r}(x)} |\pi_{T_{y}M} - A_{x,r}|^{2} \, d \mu \right) \frac{dr}{r} < \epsilon^{2}, \quad \forall x \in M,\end{equation} then for every $x \in M$ and $0< r \leq \displaystyle \frac{1}{10 \lambda}$, there exists an affine $n$-dimensional plane $P_{x,r}$ such that \begin{equation} \label{121} \Xint-_{B_{r}(x)} \frac{d(y,P_{x,r})}{r} \, d \mu(y) \leq C \, \alpha(x,\lambda r),\end{equation} where $C= C(n,d,C_{P})$. \end{theorem} \begin{proof} Fix $x \in M$ and $r \leq \displaystyle \frac{1}{10 \lambda}$. Let $0 < \epsilon \leq \epsilon_{1}$ (with $\epsilon_{1}$ to be determined later) be such that (\ref{again}) is satisfied. By (\ref{a'}), (\ref{a''}) from Lemma \ref{l5}, and the fact that $\lambda r \leq \displaystyle \frac{1}{10}$, we have \begin{equation} \label{t11} \Xint-_{B_{\lambda r}(x)} |\pi_{T_{y}M} - A_{x,\lambda r}|^{2} \, d \mu = \alpha^{2} (x, \lambda r) \leq C(n,C_{M}) \,\epsilon^{2}.\end{equation} From (\ref{t11}) and the fact that $M$ is rectifiable (so approximate tangent planes exist $\mu$-a.e.), it is easy to check that there exists $y_{0} \in B_{\lambda r}(x) \cap M$ such that $T_{y_{0}}M$ exists, and \begin{equation*} |\pi_{T_{y_{0}}M} - A_{x,\lambda r}| \leq \alpha(x,\lambda r) \leq C_{1} \, \epsilon,\end{equation*} where $C_{1}$ is a (fixed) constant depending only on $n$ and $C_{M}$. Comparing the operator norm with the Frobenius norm (the operator norm is at most the Frobenius norm), we get \begin{equation} \label{t13} ||\pi_{T_{y_{0}}M} - A_{x,\lambda r}|| \leq \alpha(x,\lambda r) \leq C_{1} \, \epsilon \leq C_{1} \epsilon_{1}.\end{equation} \noindent Let $\delta_{0}$ be the constant from Lemma \ref{l4}, and choose $\epsilon_{1} \leq \displaystyle\frac{\delta_{0}}{C_{1}}$. 
Then, (\ref{t13}) becomes \begin{equation*} ||\pi_{T_{y_{0}}M} - A_{x,\lambda r}|| \leq \alpha(x,\lambda r) \leq \delta_{0}, \end{equation*} and by Lemma \ref{l4} (with $\delta = \alpha(x,\lambda r)$, $V = T_{y_{0}}M$, and $L = A_{x,\lambda r}$), we deduce that $A_{x,\lambda r}$ has exactly $n$ eigenvalues $\lambda^{1}_{x,\lambda r}, \ldots, \lambda^{n}_{x,\lambda r}$ such that $ |\lambda^{i}_{x,\lambda r}| \geq 1 - c \, \alpha(x,\lambda r)$, for all $i \in \{ 1, \ldots , n\}$, and exactly $d$ eigenvalues $\lambda^{n+1}_{x,\lambda r}, \ldots, \lambda^{n+d}_{x,\lambda r}$ such that \begin{equation} \label{eigen} |\lambda^{i}_{x,\lambda r}| \leq C(n,d) \, \alpha(x,\lambda r) \quad \forall \, i \in \{ n+1, \ldots , n+d\} .\end{equation} Since $A_{x,\lambda r}$ is a real symmetric matrix, $n+d$ eigenvectors of the matrix $A_{x,\lambda r}$, say $v^{1}_{x,\lambda r} , \ldots v^{n+d}_{x,\lambda r}$ (each corresponding to exactly one of the $n+d$ eigenvalues mentioned above), can be chosen to be orthonormal. Thus, $v^{1}_{x,\lambda r} , \ldots v^{n+d}_{x,\lambda r}$ are unit, linearly independent vectors such that \begin{equation} \label{value} A_{x,\lambda r}v^{i}_{x,\lambda r} = \lambda^{i}_{x,\lambda r}v^{i}_{x,\lambda r} \quad \forall \, i \in \{ 1, \ldots n+d\} .\end{equation} \\ \noindent Let us now turn our attention to the last $d$ eigenvectors and eigenvalues. For $i \in \{ n+1 , \ldots n+d \}$, consider the function $f_{i}$ on $\mathbb{R}^{n+d}$ defined by \begin{equation*} f_{i}(y) = \left<y, v^{i}_{x,\lambda r}\right>, \,\,\,\,\,\, y \in \mathbb{R}^{n+d}.\end{equation*} Notice that $f_{i}$ is a smooth function on $\mathbb{R}^{n+d}$, and for every point $y \in M$ where the tangent plane $T_{y}M$ exists (which, again, is almost everywhere in $M$), we have \begin{equation} \label{1}|\nabla^{M}f_{i}(y)| \leq | \pi_{T_{y}M} - A_{x,\lambda r}| + |\lambda^{i}_{x,\lambda r}| . 
\end{equation} In fact, \begin{equation*} \nabla^{M}f_{i}(y) = \pi_{T_{y}M} \big(\nabla f_{i}(y)\big) = \pi_{T_{y}M}(v^{i}_{x,\lambda r}) = (\pi_{T_{y}M} - A_{x,\lambda r})(v^{i}_{x,\lambda r}) + A_{x,\lambda r}v^{i}_{x,\lambda r}. \end{equation*} Thus, using the definition of the operator norm, the fact that $v^{i}_{x,\lambda r}$ is unit, (\ref{value}), and the fact that the operator norm of a matrix is at most its Frobenius norm, we get \begin{eqnarray*} | \nabla^{M}f_{i}(y)| &\leq& |(\pi_{T_{y}M} - A_{x,\lambda r})(v^{i}_{x,\lambda r})| + |A_{x,\lambda r}v^{i}_{x,\lambda r}| \\ &\leq& ||\pi_{T_{y}M} - A_{x,\lambda r}|| + |\lambda^{i}_{x,\lambda r}| \leq |\pi_{T_{y}M} - A_{x,\lambda r}| + |\lambda^{i}_{x,\lambda r}| . \end{eqnarray*} \noindent Now, applying the Poincar\'{e} inequality to the function $f_{i}$ and the ball $B_{r}(x)$, and using (\ref{1}), we get \begin{equation} \label{2} \begin{split} \frac{1}{r} \, \Xint-_{B_{r}(x)} \left| \left<y,v^{i}_{x,\lambda r}\right> - \Xint-_{B_{r}(x)} \left<z ,v^{i}_{x,\lambda r}\right> \right. & \left. d \mu(z) \right. \bigg| d \mu(y) \\ &\leq C_{P} \left(\, \Xint-_{B_{\lambda r}(x)} \left( |\pi_{T_{y}M} - A_{x,\lambda r}| + |\lambda^{i}_{x,\lambda r}| \right)^{2} d \mu(y)\right)^{\frac{1}{2}}. \end{split} \end{equation} But $v^{i}_{x, \lambda r}$ is a constant vector, so (\ref{2}) can be rewritten as \begin{equation} \label{3} \begin{split} \frac{1}{r} \,\Xint-_{B_{r}(x)} \left| \left<y,v^{i}_{x,\lambda r}\right> - \left< \Xint-_{B_{r}(x)} z d \mu(z) \right. \right. &,\left. \left. v^{i}_{x,\lambda r} \right. 
\bigg> \right| d \mu(y) \\ &\leq C_{P} \left( \, \Xint-_{B_{\lambda r}(x)} \left( |\pi_{T_{y}M} - A_{x,\lambda r}| + |\lambda^{i}_{x,\lambda r}| \right)^{2} d \mu(y)\right)^{\frac{1}{2}}, \end{split} \end{equation} that is, \begin{equation} \label{4} \begin{split} \frac{1}{r} \, \Xint-_{B_{r}(x)} \left|\left<y - \, \Xint-_{B_{r}(x)} z \, d\mu(z) ,v^{i}_{x,\lambda r}\right> \right| &d \mu(y) \\ &\leq C_{P} \left( \,\Xint-_{B_{\lambda r}(x)} \left( |\pi_{T_{y}M} - A_{x,\lambda r}| + |\lambda^{i}_{x,\lambda r}| \right)^{2} d \mu(y)\right)^{\frac{1}{2}} \\ &\leq C(C_{P}) \, \left( \left( \Xint-_{B_{\lambda r}(x)} |\pi_{T_{y}M} - A_{x,\lambda r}|^{2}\right)^{\frac{1}{2}} + |\lambda^{i}_{x,\lambda r}| \right). \end{split} \end{equation} \noindent Using (\ref{eigen}) and (\ref{a'}), (\ref{4}) becomes \begin{equation} \label{t15} \frac{1}{r} \left(\, \Xint-_{B_{r}(x)} \left|\left<y - \, \Xint-_{B_{r}(x)} z \, d\mu(z) ,v^{i}_{x,\lambda r}\right> \right| d \mu(y) \right) \leq C(n,d,C_{P}) \, \left( \Xint-_{B_{\lambda r}(x)} |\pi_{T_{y}M} - A_{x,\lambda r}|^{2}\right)^{\frac{1}{2}}. \end{equation} \noindent Since (\ref{t15}) is true for every $ i \in \{n+1, \ldots , n+d \}$, we can take the sum over $i$ on both sides of (\ref{t15}) to get \begin{equation} \label{t16} \frac{1}{r} \sum_{i=n+1}^{n+d} \Xint-_{B_{r}(x)} \left|\left<y - \Xint-_{B_{r}(x)} z d\mu(z) ,v^{i}_{x,\lambda r}\right> \right| d \mu(y) \leq C(n,d,C_{P}) \left( \Xint-_{B_{\lambda r}(x)} |\pi_{T_{y}M} - A_{x,\lambda r}|^{2}\right)^{\frac{1}{2}}. \end{equation} \noindent We are now ready to choose our plane $P_{x,r}$. Take $P_{x,r}$ to be the $n$-plane passing through the point $ c_{x,r} := \,\Xint-_{B_{r}(x)} z \, d \mu(z) $, the centre of mass of $\mu$ in the ball $B_{r}(x)$, and such that $P_{x,r} - c_{x,r} = \textrm{span} \{ v^{1}_{x,\lambda r}, \ldots , v^{n}_{x,\lambda r} \}$. In other words, $(P_{x,r} - c_{x,r})^{\perp} = \textrm{span} \{ v^{n+1}_{x,\lambda r}, \ldots , v^{n+d}_{x,\lambda r} \}$. 
Here $(P_{x,r} - c_{x,r})^{\perp}$ denotes the $d$-plane of $\mathbb{R}^{n+d}$ perpendicular to the $n$-plane $P_{x,r} - c_{x,r}$. \\ \\ \noindent For $y \in B_{r}(x)$, we have that \begin{equation} \label{5} d(y, P_{x,r}) = d(y - c_{x,r}, P_{x,r} - c_{x,r}) = \left| \sum_{i=n+1}^{n+d} \left< y - c_{x,r} , v^{i}_{x,\lambda r}\right> v^{i}_{x,\lambda r}\right| \leq \sum_{i=n+1}^{n+d} \left| \left< y - c_{x,r} , v^{i}_{x,\lambda r} \right>\right|. \end{equation} \noindent Dividing by $r$ and taking the average over $B_{r}(x)$ on both sides of (\ref{5}), and using the definition of $c_{x,r}$, we get \begin{eqnarray*} \Xint-_{B_{r}(x)} \frac{d(y, P_{x,r})}{r} \, d \mu(y) &\leq & \, \frac{1}{r} \sum_{i=n+1}^{n+d} \Xint-_{B_{r}(x)} \left| \left< y - \,\Xint-_{B_{r}(x)} z \, d \mu(z) , v^{i}_{x,\lambda r} \right>\right|\, d \mu(y) \\ & \leq & C(n,d,C_{P}) \, \left(\, \Xint-_{B_{\lambda r}(x)} \left| \pi_{T_{y}M} - A_{x,\lambda r} \right|^{2} d \mu\right)^{\frac{1}{2}}, \end{eqnarray*} where the last inequality comes from (\ref{t16}).\\ \noindent Thus, by the definition of $\alpha(x,\lambda r)$ (see (\ref{a'})), we get (\ref{121}) and the proof is done. \end{proof} As mentioned earlier, we want to use the construction of the bi-Lipschitz map given by David and Toro in their paper \cite{DT1}. To do that, we introduce the notion of a \textbf{coherent collection of balls and planes}. Here, we follow the steps given by David and Toro (see \cite{DT1}, chapter 2). \\ First, let $l_{0} \in \mathbb{N}$ be such that $10^{l_{0}} \leq \lambda \leq 10^{l_{0}+1}$, and set $r_{k} = 10^{-k-l_{0} - 5}$ for $ k \in \mathbb{N}$, and let $\epsilon$ be a small number (to be chosen later) that depends only on $n$ and $d$. Choose a collection $\{x_{jk} \}, \, \, j \in J_{k}$ of points in $\mathbb{R}^{n+d}$, so that \begin{equation} \label{59} |x_{jk} - x_{ik}| \geq r_{k} \quad \textrm{for}\,\,\, i,j \in J_{k}, \, i \neq j. 
\end{equation} Set $B_{jk} := B_{r_{k}}(x_{jk})$ and $V_{k}^{\lambda} := \displaystyle \bigcup_{j \in J_{k}} \lambda B_{jk} = \displaystyle \bigcup_{j \in J_{k}} B_{\lambda r_{k}}(x_{jk}),\,$ for $\lambda > 1$.\\ We also ask for our collection $\{x_{jk} \}, \, \, j \in J_{k}$ and $k \geq 1$ to satisfy \begin{equation} \label{60} x_{jk} \in V_{k-1}^{2} \quad \textrm{for}\,\,\, k \geq 1 \,\,\, \textrm{and} \,\,\, j \in J_{k}. \end{equation} Suppose that our initial net $\{x_{j0} \}$ is close to an $n$-dimensional plane $\Sigma_{0}$, that is \begin{equation} \label{101} d(x_{j0},\Sigma_{0}) \leq \epsilon \quad \forall\, j \in J_{0}.\end{equation} For each $k \geq 0$ and $j \in J_{k}$, suppose you have an $n$-dimensional plane $P_{jk}$, passing through $x_{jk}$ such that the following compatibility conditions hold:\\ \begin{equation} \label{102} d_{x_{i0},100r_{0}}(P_{i0}, \Sigma_{0}) \leq \epsilon \,\,\,\, \textrm{for} \,\, i \in J_{0}, \end{equation} \begin{equation} \label{74} d_{x_{ik},100r_{k}}(P_{ik},P_{jk}) \leq \epsilon \,\,\,\, \textrm{for}\, k\geq 0 \,\,\,\textrm{and} \,\,\, i,j \in J_{k} \,\,\, \textrm{such that} \,\,\, |x_{ik} - x_{jk}| \leq 100r_{k},\end{equation} and \begin{equation} \label{75} d_{x_{ik},20r_{k}}(P_{ik},P_{j,k+1}) \leq \epsilon \,\,\, \textrm{for}\, k\geq 0 \,\,\,\textrm{and} \,\,\, i \in J_{k},\,j \in J_{k+1} \,\, \textrm{such that} \,\,\, |x_{ik} - x_{j,k+1}| \leq 2r_{k}.\end{equation} We can now define a \textbf{coherent collection of balls and planes}: \begin{definition} A \textbf{coherent collection of balls and planes}, (in short a CCBP), is a triple $(\Sigma_{0}, \{B_{jk} \}, \{P_{jk}\})$ where the properties (\ref{59}) up to (\ref{75}) above are satisfied, with a prescribed $\epsilon$ that is small enough, and depends only on $n$ and $d$. 
\end{definition} \begin{theorem} \label{t2} (see Theorem 2.4 in \cite{DT1}) There exists $\epsilon_{2} > 0$ depending only on $n$ and $d$, such that the following holds: If $\epsilon \leq \epsilon_{2}$, and $(\Sigma_{0}, \{B_{jk} \}, \{P_{jk}\})$ is a CCBP (with $\epsilon$), then there exists a bijection $g: \mathbb{R}^{n+d} \rightarrow \mathbb{R}^{n+d}$ with the following properties: \begin{equation} g(z)= z \quad \textrm{when} \,\,\, d(z, \Sigma_{0}) \geq 2, \end{equation} and \begin{equation} |g(z)-z| \leq C^{'}_{0} \epsilon \quad \textrm{for} \,\,\, z \in \mathbb{R}^{n+d}, \end{equation} where $C^{'}_{0}= C^{'}_{0}(n,d)$. Moreover, $g(\Sigma_{0})$ is a $C^{'}_{0} \epsilon$-Reifenberg flat set that contains the accumulation set \begin{eqnarray*} E_{\infty} = &\{x& \in \mathbb{R}^{n+d}; \,\, x\,\, \textrm{can be written as} \\ &x& = \lim_{m \to \infty} x_{j(m),k(m)}, \,\, \textrm{with}\,\,k(m) \in \mathbb{N}, \\ &\textrm{and}& \,\, j(m) \in J_{k(m)} \,\, \textrm{for}\,\, m \geq 0 \,\, \textrm{and} \,\, \lim_{m \to \infty} k(m) = \infty\} .\end{eqnarray*} \end{theorem} In \cite{DT1}, David and Toro give a sufficient condition for $g$ to be bi-Lipschitz that we want to use in our proof. To state this condition, we need some technical details from the construction of the map $g$ from Theorem \ref{t2}. So, let us briefly discuss the construction here: David and Toro defined a mapping $f$ whose goal is to push a small neighborhood of $\Sigma_{0}$ towards a final set, which they proved to be Reifenberg flat. They obtained $f$ as a limit of the composed functions $f_{k} = \sigma_{k-1} \circ \cdots \circ \sigma_{0}$ where each $\sigma_{k}$ is a smooth function that moves points near the planes $P_{jk}$ at the scale $r_{k}$. 
More precisely, \begin{equation} \label{sigmas} \sigma_{k} (y) = y + \sum_{j \in J_{k}} \theta_{jk}(y)[\pi_{jk}(y)-y],\end{equation} where $\{\theta_{jk}\}_{j \in J_{k}, k\geq 0}$ is a partition of unity with each $\theta_{jk}$ supported on $10B_{jk}$, and $\pi_{jk}$ denotes the orthogonal projection from $\mathbb{R}^{n+d}$ onto the plane $P_{jk}$.\\ \noindent Since $f$ in their construction was defined on $\Sigma_{0}$, $g$ was defined to be the extension of $f$ to the whole space.\\ \begin{corollary} \label{cr1} (see Proposition 11.2 in \cite{DT1}) Suppose we are in the setting of Theorem \ref{t2}. Define the quantity \begin{equation} \label{nasty} \begin{split} \epsilon^{'}_{k}(y) &= \\ &\sup\{d_{x_{im},100 r_{m}}(P_{jk},P_{im}); \,\,\, j \in J_{k}, \,\, i \in J_{m},\,\,\, m \in \{k,k-1\}, \,\,\textrm{and}\,\, y \in 10B_{jk} \cap 11B_{im} \} \end{split} \end{equation} for $k \geq 1 \,\, \textrm{and} \,\,y \in V_{k}^{10}$, and $\epsilon_{k}^{'}(y)=0 \,\, \textrm{when}\,\, y \in \mathbb{R}^{n+d} \setminus V_{k}^{10}$ (when there are no such pairs of indices). If there exists $N > 0$ such that \begin{equation} \label{89} \sum_{k=0}^{\infty} \epsilon^{'}_{k}(f_{k}(z))^{2} < N, \end{equation} then the map $g$ constructed in Theorem \ref{t2} is $K$-bi-Lipschitz, where the bi-Lipschitz constant $K = K(n,d,N)$.\end{corollary} We are finally ready to prove Theorem \ref{MTT'}. \textbf{\underline{\textit{Proof of Theorem \ref{MTT'}:}}} \begin{proof} As mentioned before, from here on, the proof of this theorem is essentially the same as that of its co-dimension 1 analogue found in \cite{M1} (Theorem 1.5 in \cite{M1}). In fact, the essential differences in the proofs of Theorem \ref{MTT'} and its co-dimension 1 analogue occur in Lemma \ref{l5} and Theorem \ref{t1}. 
Thus, we continue this proof by outlining the main ideas and referring the reader to the proof of Theorem 1.5 in \cite{M1} for a more detailed proof.\\ Let $\epsilon_{0} > 0$ (to be determined later), and suppose that (\ref{103}) holds. Let $\epsilon_{2}$ be the constant from Theorem \ref{t2}. We would like to apply Theorem \ref{t2} for $\epsilon = \epsilon_{2}$, and then Corollary \ref{cr1}. So our first goal is to construct a CCBP, and we do that in several steps:\\ Let us start with a collection $\{\tilde{x}_{jk}\},\, j \in J_{k}$ of points in $M \cap B_{\frac{1}{10^{l_{0} +4}}}(0)$ that is maximal under the constraint \begin{equation} \label{62}|\tilde{x}_{jk} - \tilde{x}_{ik}| \geq \displaystyle\frac {4r_{k}}{3} \quad \textrm{when}\,\, i,j \in J_{k}\,\,\, \textrm{and}\,\,\,i \neq j.\end{equation} Of course, we can arrange matters so that the point $0$ belongs to our initial maximal set, at scale $r_{0}$. Thus, $0 = \tilde{x}_{i_{0},0} $ for some $i_{0} \in J_{0}$. Notice that for every $k \geq 0$, we have \begin{equation} \label{63} M \cap B_{\frac{1}{10^{l_{0} +4}}}(0)\subset \displaystyle \bigcup_{j \in J_{k}}\bar{B}_{\frac{4r_{k}}{3}}(\tilde{x}_{jk}).\end{equation} \\ Later, we choose \begin{equation} \label{61} x_{jk} \in M \cap B_{ \frac{r_{k}}{6}}(\tilde{x}_{jk}), \quad j \in J_{k} .\end{equation} By (\ref{63}) and (\ref{61}), we see that \begin{equation} \label{63'} M \cap B_{\frac{1}{10^{l_{0} +4}}}(0)\subset \displaystyle \bigcup_{j \in J_{k}}\bar{B}_{\frac{4r_{k}}{3}}(\tilde{x}_{jk}) \subset \displaystyle \bigcup_{j \in J_{k}}B_{\frac{3r_{k}}{2}}(x_{jk}) .\end{equation} Using (\ref{62}), (\ref{61}), and (\ref{63'}), it is easy to see that the collection $\{x_{jk} \},\,\,\, j \in J_{k}$ satisfies (\ref{59}) and (\ref{60}) (for details, see \cite{M1}, page 23).\\ Next, we choose our planes $P_{jk}$ and our collection $\{ x_{jk} \}$, for $k \geq 0$ and $ j \in J_{k}$. Fix $k\geq 0$ and $j \in J_{k}$. 
Let $\epsilon_{1}$ be the constant from Theorem \ref{t1}. For \begin{equation} \label{c} \epsilon_{0} \leq \epsilon_{1}, \end{equation} we apply Theorem \ref{t1} to the point $\tilde{x}_{jk}$ (by construction $\tilde{x}_{jk} \in M$) and radius $120r_{k}$ (notice that $120\,r_{k} \leq \frac{1}{10 \lambda}$) to get an $n$-plane $P_{\tilde{x}_{jk},120r_{k}}$, denoted in this proof by $P^{'}_{jk}$ for simplicity, such that \begin{equation} \label{66} \Xint-_{B_{120r_{k}}(\tilde{x}_{jk})} \frac{d(y, P^{'}_{jk} )}{120r_{k}} \, d \mu \leq \, C(n,d,C_{P}) \, \alpha(\tilde{x}_{jk}, 120 \lambda r_{k}). \end{equation} Thus, by (\ref{66}) and the fact that $\mu$ is Ahlfors regular, there exists $x_{jk} \in M \cap B_{ \frac{r_{k}}{6}}(\tilde{x}_{jk})$ such that \begin{eqnarray} \label{68} d(x_{jk},P^{'}_{jk}) &\leq& \Xint-_{B_{ \frac{r_{k}}{6}}(\tilde{x}_{jk})}d(y, P^{'}_{jk} ) \, d \mu \nonumber \\ &\leq& C(n,C_{M}) \, \Xint-_{B_{120r_{k}}(\tilde{x}_{jk})} d(y, P^{'}_{jk} ) \, d \mu \leq \, C(n,d,C_{M},C_{P}) \, \alpha(\tilde{x}_{jk}, 120 \lambda r_{k}) \,r_{k}. \end{eqnarray} Let $P_{jk}$ be the plane parallel to $P^{'}_{jk}$ and passing through $x_{jk}$. From (\ref{66}), (\ref{68}) and the fact that the two planes are parallel, we see that (see \cite{M1} p. 24) \begin{equation} \label{73} \Xint-_{ B_{120r_{k}}(\tilde{x}_{jk})} \frac{d(y,P_{jk})}{120r_{k}} \, d \mu \leq C(n,d,C_{M},C_{P})\, \alpha(\tilde{x}_{jk}, 120 \lambda r_{k}). \end{equation} To summarize what we have done so far: we have chosen $n$-dimensional planes $P_{jk}$ for $k\geq 0$ and $j \in J_{k}$, where each $P_{jk}$ passes through $x_{jk}$ and satisfies (\ref{73}). Notice that (\ref{73}) shows that $P_{jk}$ is a good approximating plane for $M$ in the ball $B_{120r_{k}}(\tilde{x}_{jk})$.\\ We want to get our CCBP with $\epsilon_{2}$. Thus, we show that (\ref{101}), (\ref{102}), (\ref{74}), and (\ref{75}) hold with $\epsilon = \epsilon_{2}$. 
Since the proofs of these inequalities are the same as the proofs of their analogue inequalities in the co-dimension 1 case, we only outline their proofs here (see \cite{M1} p. 25--31 for a detailed proof of the inequalities).\\ \textbf{\textit{Outline of the proofs for (\ref{74}) and (\ref{75}):}} Inequalities (\ref{74}) and (\ref{75}) can be proved simultaneously. Fix $k \geq 0$ and $j \in J_{k}$; let $m \in \{k, k-1 \}$ and $i \in J_m$ be such that $|x_{jk} - x_{im}| \leq 100r_{m}$. We want to show that $P_{jk}$ and $P_{im}$ are close together. To do that, we construct $n$ linearly independent vectors that ``effectively'' span $P_{jk}$ (that is, these vectors span $P_{jk}$, and are far away from each other in a uniform quantitative manner), and that are close to $P_{im}$. More precisely, using Lemma \ref{l1} inductively, together with (\ref{73}), we can prove the following claim:\\ \textbf{\textit{Claim 1:}} Denote by $\pi_{jk}$ the orthogonal projection of $\mathbb{R}^{n+d}$ onto the plane $P_{jk}$. Let $r = c_{0} \, r_{k}$, where $c_{0} \leq \frac{1}{2}$ is the constant from Lemma \ref{l1} depending only on $n$, $d$, and $C_{M}$. 
There exists $C_{1} = C_{1}(n,d,C_{M},C_{P})$ such that if $C_{1} \epsilon_{0} \leq 1$, then there exists a sequence of $n+1$ balls $\{B_{r}(y_{l})\}_{l=0}^{n}$, such that \begin{enumerate} \item $\forall \, l \in \{ 0, \ldots n\}$, we have $y_{l} \in M$ and $B_{r}(y_{l}) \subset B_{2r_{k}}(\tilde{x}_{jk}).$ \item $q_{1} - q_{0} \notin B_{5r}(0)$, and $\forall \, l \in \{ 2, \ldots n\}$, we have $q_{l} - q_{0} \notin N_{5r}\big(span \{q_{1} - q_{0}, \ldots, q_{l-1} - q_{0} \}\big),$ \end{enumerate} where $q_{l} = \pi_{jk}(p(y_{l}))$ and $p(y_{l}) = \Xint-_{B_{r}(y_{l})}z \, d\mu(z)$ is the centre of mass of $\mu$ in the ball $B_{r}(y_{l})$.\\ Now, on one hand, notice that \begin{equation} \label{78} P_{jk} - q_{0} = span \{q_{1} - q_{0} , \ldots, q_{n} - q_{0} \}.\end{equation} On the other hand, by the definition of $p(y_{l})$, Jensen's inequality applied to the convex function $\phi(\cdot) = d(\cdot,P_{jk})$, the fact that $\mu$ is Ahlfors regular, $B_{r}(y_{l}) \subset B_{2r_{k}}(\tilde{x}_{jk})$, $r = c_{0}\, r_{k}$, and (\ref{73}), we have that \begin{equation} \label{33} d\big(p(y_{l}),P_{jk}\big) \leq C(n,d, C_{M}, C_{P}) \, \alpha(\tilde{x}_{jk},120 \lambda \,r_{k}) \, r_{k}, \quad \forall \, l \in \{ 0, \ldots n\}.\end{equation} Similarly, we have that \begin{equation} \label{33again} d\big(p(y_{l}),P_{im}\big) \leq C(n,d, C_{M}, C_{P}) \, \alpha(\tilde{x}_{im},120 \lambda \,r_{m}) \, r_{m}, \quad \forall \, l \in \{ 0, \ldots n\}. \end{equation} Thus, combining (\ref{33}) and (\ref{33again}), we directly get \begin{equation} \label{80} d\big(q_{l},P_{im}\big) \leq C(n,d,C_{M},C_{P}) \, \big( \alpha(\tilde{x}_{jk},120 \lambda r_{k})\,r_{k} + \alpha(\tilde{x}_{im}, 120 \lambda r_{m}) \, r_{m}\big), \quad \forall \, l \in \{ 0, \ldots n\}. \end{equation} To compute the distance between $P_{jk}$ and $P_{im}$, let $y \in P_{jk} \cap B_{\rho}(x_{im})$ where $\rho \in \{20r_m, 100r_{m}\}$. 
By (\ref{78}), $y$ can be written uniquely as \begin{equation} \label{82} y = q_{0} + \sum_{l=1}^{n} \beta_{l}(q_{l} - q_{0}).\end{equation} Using Lemma \ref{l3} (with $u_{l} = q_{l} - q_{0}$, $R = r$, and $v = y - q_{0}$) to get an upper bound on the $\beta_{l}$'s that show up in (\ref{82}), together with (\ref{80}), we get \begin{equation*} d\big(y, P_{im}\big) \leq C(n,d,C_{M},C_{P}) \bigg( \alpha(\tilde{x}_{jk}, 120\lambda r_{k})\,r_{k} + \alpha(\tilde{x}_{im}, 120 \lambda r_{m}) \, r_{m}\bigg). \end{equation*} Thus, \begin{equation} \label{88}d_{x_{im}, \rho} (P_{jk}, P_{im}) \leq c \, \bigg( \alpha(\tilde{x}_{jk},120 \lambda r_{k}) + \alpha(\tilde{x}_{im},120 \lambda r_{m})\bigg) \quad \textrm{for}\,\, \rho \in\{20r_m, 100r_{m}\}. \end{equation} Now, by Lemma \ref{l5}, we know that $ \alpha(\tilde{x}_{jk},120 \lambda r_{k}) \leq C(n,C_{M}) \, \epsilon_{0}$, and $\alpha(\tilde{x}_{im},120 \lambda r_{m}) \leq C(n,C_{M}) \, \epsilon_{0}$. Thus, (\ref{88}) becomes \begin{equation} \label{135} d_{x_{im}, \rho} (P_{jk}, P_{im}) \leq C(n,d,C_{M},C_{P}) \, \epsilon_{0} \quad \textrm{for}\,\, \rho \in\{20r_m, 100r_{m}\}. \end{equation} So, we have shown that there exist two constants $C_{2}$ and $C_{3}$, each depending only on $n$, $d$, $C_{M}$, and $C_{P}$, such that \begin{equation} \label{74'} d_{x_{ik},100r_{k}}(P_{ik},P_{jk}) \leq C_{2} \,\epsilon_{0} \,\,\,\, \textrm{for}\, k\geq 0 \,\,\,\textrm{and} \,\,\, i,j \in J_{k} \,\,\, \textrm{such that} \,\,\, |x_{ik} - x_{jk}| \leq 100r_{k},\end{equation} and \begin{equation} \label{75'} d_{x_{ik},20r_{k}}(P_{ik},P_{j,k+1}) \leq C_{3} \,\epsilon_{0} \,\,\, \textrm{for}\, k\geq 0 \,\,\,\textrm{and} \,\,\, i \in J_{k},\,j \in J_{k+1} \,\, \textrm{such that} \,\,\, |x_{ik} - x_{j,k+1}| \leq 2r_{k}.\end{equation} For \begin{equation} \label{c'''} C_{2} \,\epsilon_{0} \leq \epsilon_{2} \quad \textrm{and} \quad C_{3} \,\epsilon_{0} \leq \epsilon_{2}, \end{equation} we get (\ref{74}) and (\ref{75}). 
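For completeness, the Jensen step behind (\ref{33}) can be spelled out as follows (a sketch, with the constant $C$ allowed to change from one inequality to the next): since $d(\cdot, P_{jk})$ is convex and $p(y_{l})$ is an average of $\mu$ over $B_{r}(y_{l})$,
\begin{equation*}
d\big(p(y_{l}), P_{jk}\big) \leq \Xint-_{B_{r}(y_{l})} d(z, P_{jk}) \, d\mu(z) \leq C(n,d,C_{M}) \, \Xint-_{B_{120r_{k}}(\tilde{x}_{jk})} d(z, P_{jk}) \, d\mu(z) \leq C(n,d,C_{M},C_{P}) \, \alpha(\tilde{x}_{jk}, 120 \lambda r_{k}) \, r_{k},
\end{equation*}
where the middle inequality uses $B_{r}(y_{l}) \subset B_{2r_{k}}(\tilde{x}_{jk}) \subset B_{120r_{k}}(\tilde{x}_{jk})$, $r = c_{0}\, r_{k}$, and the Ahlfors regularity of $\mu$, and the last inequality is (\ref{73}).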
\\ \textbf{\textit{Outline of the proofs for (\ref{101}) and (\ref{102}):}} We start with (\ref{102}). Recall that $0 = \tilde{x}_{i_{0},0}$ for some $i_{0} \in J_{0}$. Choose $\Sigma_{0}$ to be the plane $P_{i_{0},0}$ described above (recall that $P_{i_{0},0}$ passes through $x_{i_{0},0}$, where $r_{0} = 10^{-l_{0} - 5}$). Then, what we need to show is \begin{equation} \label{102'} d_{x_{j0},100r_{0}}(P_{j0}, P_{i_{0},0}) \leq \epsilon_{2} \quad \textrm{for} \,\, j \in J_{0}. \end{equation} Fix $j \in J_{0}$, and take the corresponding $x_{j0}$. Since by construction $|\tilde{x}_{j0}| < \displaystyle \frac{1}{10^{l_{0}+4}}$ and since (\ref{61}) says that $ |x_{j0} - \tilde{x}_{j0}| \leq \displaystyle \frac{r_{0}}{6}$, we have \begin{equation} \label{1''}|x_{j0}| \leq \frac{r_{0}}{6} + \frac{1}{10^{l_{0}+4}},\,\,\,\,\,\, j \in J_{0}.\end{equation} Moreover, by (\ref{61}) and the fact that $0 = \tilde{x}_{i_{0},0}$, we have \begin{equation} \label{2''} |x_{i_{0},0} - \tilde{x}_{i_{0},0}| = |x_{i_{0},0}| \leq \frac{r_{0}}{6}.\end{equation} Combining (\ref{1''}) and (\ref{2''}), and using the fact that $r_{0} = 10^{-l_{0}-5}$ (so that $\frac{1}{10^{l_{0}+4}} = 10\, r_{0}$), we get \begin{equation} |x_{j0} - x_{i_{0},0} | \leq \frac{r_{0}}{6} + \frac{1}{10^{l_{0}+4}} + \frac{r_{0}}{6} \leq \frac{r_{0}}{6} + 10 r_{0} + \frac{r_{0}}{6} \leq 100r_{0}. \end{equation} Thus, by (\ref{74}) for $x_{ik} = x_{j0}$, $P_{ik} = P_{j0}$, and $P_{jk} = P_{i_{0},0}$, we get exactly (\ref{102'}), hence finishing the proof of (\ref{102}).\\ It remains to show (\ref{101}) with $\epsilon = \epsilon_{2}$, that is \begin{equation} \label{101'} d(x_{j0}, P_{i_{0},0}) \leq \epsilon_{2} ,\quad \textrm{for}\,\, j \in J_{0}. \end{equation} However, notice that since $x_{j0} \in P_{j0}$, (\ref{101}) follows directly from (\ref{102}).\\ We finally have our CCBP. 
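For concreteness, collecting the smallness constraints imposed along the way ((\ref{c}), (\ref{c'''}), and the hypothesis $C_{1}\, \epsilon_{0} \leq 1$ of Claim 1), one admissible choice of $\epsilon_{0}$ is
\begin{equation*}
\epsilon_{0} := \min \left\{ \epsilon_{1}, \, \frac{1}{C_{1}}, \, \frac{\epsilon_{2}}{C_{2}}, \, \frac{\epsilon_{2}}{C_{3}} \right\},
\end{equation*}
which, since all the constants involved depend only on $n$, $d$, $C_{M}$, and $C_{P}$, is of the form $\epsilon_{0} = c_{4}\, \epsilon_{2}$ with $c_{4} = c_{4}(n,d,C_{M},C_{P})$.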
Now, by the proof of Theorem \ref{t2} (see paragraph above (\ref{sigmas})) we get the smooth maps $\sigma_{k} \,\, \textrm{and} \,\, f_{k} = \sigma_{k-1} \circ \ldots \sigma_{0} \,\, \textrm{for} \,\, k \geq 0$, and then the map $f = \displaystyle \lim_{k \to \infty} f_{k}$ defined on $\Sigma_{0}$, and finally the map $g$ that we want. Moreover, by Theorem \ref{t2}, we know that $g: \mathbb{R}^{n+d} \rightarrow \mathbb{R}^{n+d}$ is a bijection with the following properties: \begin{equation} \label{aa1} g(z)= z \,\,\, \,\,\, \textrm{when} \,\,\, d(z, \Sigma_{0}) \geq 2, \end{equation} \begin{equation} \label{bb1} |g(z)-z| \leq C^{'}_{0} \epsilon_{2} \,\,\,\,\,\,\, \textrm{for} \,\,\, z \in \mathbb{R}^{n+d}, \end{equation} and \begin{equation} \label{cc1} g(\Sigma_{0}) \,\, \textrm{is a } \,\,\, C^{'}_{0} \epsilon_{2} \textrm{-Reifenberg flat set}. \end{equation} Fix $\epsilon_{0}$ such that (\ref{c}), (\ref{c'''}), and the hypothesis of Claim 1 are all satisfied. Notice that by the choice of $\epsilon_{0}$, we can write $\epsilon_{0} = c_{4}\, \epsilon_{2}$, where $c_{4} = c_{4}(n,d,C_{M},C_{P})$. Hence, from (\ref{aa1}), (\ref{bb1}), (\ref{cc1}), we directly get (\ref{aa}), (\ref{bb}), and (\ref{cc}). \\ Next, we show that \begin{equation} \label{12''} M \cap B_{\frac{1}{10^{l_{0}+4}}}(0) \subset g(\Sigma_{0}). \end{equation} Fix $x \in M \cap B_{\frac{1}{10^{l_{0}+4}}}(0) $. Then, by (\ref{63'}), we see that for all $k \geq 0$, there exists a point $x_{jk}$ such that $|x - x_{jk}| \leq \displaystyle \frac{3r_{k}}{2}$, and hence $x \in E_{\infty} \subset g(\Sigma_{0})$ ($E_{\infty}$ is the set defined in Theorem \ref{t2}). Since $x$ was an arbitrary point in $M \cap B_{\frac{1}{10^{l_{0}+4}}}(0)$, (\ref{12''}) is proved. This shows that (\ref{contained}) holds for $\theta_{0} := \frac{1}{10^{l_{0}+4}}$.\\ We still need to show that $g$ is bi-Lipschitz. By Corollary \ref{cr1}, it suffices to show (\ref{89}). 
To do that, we need the following inequality from \cite{DT1} (see inequality (6.8), page 27 in \cite{DT1}): \begin{equation} \label{in} |f(z) - f_{k}(z)| \leq C(n,d) \epsilon_{2} \, r_{k} \quad \textrm{for}\,\, k \geq 0\,\, \textrm{and} \,\, z \in \Sigma_{0}.\end{equation} Let $z \in \Sigma_{0}$, and choose $\bar{z} \in M \cap B_{\frac{1}{10^{l_{0}+4}}}(0)$ such that \begin{equation} \label{90} |\bar{z} - f(z)| \leq 2 \, d(f(z),M \cap B_{\frac{1}{10^{l_{0}+4}}}(0)). \end{equation} Fix $k \geq 0$, and consider the index $m \in \{k,k-1\}$ and the indices $j \in J_{k}$ and $i \in J_{m}$ such that $f_{k}(z) \in 10B_{jk} \cap 11B_{im}$. We show that \begin{equation} \label{91} d_{x_{im},100 r_{m}}(P_{jk},P_{im}) \leq C(n,d,C_{M},C_{P}) \, \alpha(\bar{z},r_{k-l_{0}-5}) \quad \textrm{for} \,\, k \geq 1. \end{equation} In fact, by (\ref{90}) and (\ref{in}), and since $\tilde{x}_{jk} \in M \cap B_{\frac{1}{10^{l_{0}+4}}}(0)$, $|\tilde{x}_{jk} - x_{jk}| \leq \displaystyle \frac{r_{k}}{6}$, and $f_{k}(z) \in 10B_{jk}$, one can show that (see \cite{M1}, p. 32--33, for a detailed proof) \begin{equation} \label{96} B_{120 \lambda r_{m}}(\tilde{x}_{im}) \cup B_{120 \lambda r_{k}}(\tilde{x}_{jk}) \subset B_{r_{k-l_{0}-5}}(\bar{z}). 
\end{equation} Now, writing $\pi_{T_{y}M} = \big(a_{pq}(y)\big)_{pq}$, and using the definition of the Frobenius norm, together with (\ref{10}) for $ a = (a_{pq})_{\bar{z},r_{k-l_{0}-5}}$, (\ref{96}), and the fact that $\mu$ is Ahlfors regular \begin{eqnarray*} \alpha^{2}(\tilde{x}_{jk}, 120 \lambda r_{k}) &=& \Xint-_{B_{120 \lambda r_{k}}(\tilde{x}_{jk})} |\pi_{T_{y}M} - A_{\tilde{x}_{jk}, 120 \lambda r_{k}}|^{2}\, d \mu\\ &=& \sum_{p,q = 1} ^{n+d} \Xint-_{B_{120 \lambda r_{k}}(\tilde{x}_{jk})} |a_{pq}(y) - (a_{pq})_{\tilde{x}_{jk}, 120 \lambda r_{k}}|^{2}\, d \mu \\ &\leq& \sum_{p,q = 1} ^{n+d} \Xint-_{B_{120 \lambda r_{k}}(\tilde{x}_{jk})} |a_{pq}(y) - (a_{pq})_{\bar{z},r_{k-l_{0} -5}}|^{2}\, d \mu \\ &\leq& C(n,C_{M}) \sum_{p,q = 1} ^{n+d} \, \Xint-_{B_{r_{k-l_{0}-5}}(\bar{z})} |a_{pq}(y) - (a_{pq})_{\bar{z},r_{k-l_{0} -5}}|^{2}\, d \mu \\ &=& C(n,C_{M}) \Xint-_{B_{r_{k-l_{0}-5}}(\bar{z})} |\pi_{T_{y}M} - A_{\bar{z},r_{k-l_{0}-5}}|^{2}\, d \mu \\ &=& C(n,C_{M})\, \alpha^{2}(\bar{z},r_{k-l_{0}-5}), \end{eqnarray*} and thus, \begin{equation} \label{97} \alpha(\tilde{x}_{jk}, 120 \lambda r_{k}) \leq C(n,C_{M})\, \alpha(\bar{z},r_{k-l_{0}-5}). \end{equation} Similarly, we can show that \begin{equation} \label{98} \alpha(\tilde{x}_{im}, 120 \lambda r_{m}) \leq C(n,C_{M})\, \alpha(\bar{z},r_{k-l_{0}-5}). \end{equation} Plugging (\ref{97}) and (\ref{98}) in (\ref{88}) for $\rho = 100r_{m}$, we get \begin{equation} \label{99} d_{x_{im},100 r_{m}}(P_{jk},P_{im}) \leq C(n,d,C_{M},C_{P}) \, \alpha(\bar{z},r_{k-l_{0}-5}), \quad \forall k \geq 1. 
\end{equation} This finishes the proof of (\ref{91}).\\ \\ Hence, we have shown that $\epsilon^{'}_{k}(f_{k}(z)) \leq C(n,d,C_{M},C_{P}) \, \alpha(\bar{z},r_{k-l_{0}-5})$ for every $k\geq 1$, that is, \begin{equation} \label{136} \epsilon^{'}_{k}(f_{k}(z))^{2} \leq C(n,d,C_{M},C_{P}) \, \alpha^{2}(\bar{z},r_{k-l_{0}-5}), \quad \forall \, k\geq 1. \end{equation} Summing both sides of (\ref{136}) over $k \geq 0$ (the $k=0$ term being at most 1), and using (\ref{a}) in Lemma \ref{l5} together with the fact that $\bar{z} \in M\cap B_{\frac{1}{10^{l_{0}+4}}}(0)$, we get \begin{equation} \label{100} \sum_{k=0}^{\infty} \epsilon^{'}_{k}(f_{k}(z))^{2} \leq 1 + C(n,d,C_{M},C_{P}) \, \sum_{k=1}^{\infty} \alpha^{2}(\bar{z},r_{k-l_{0}-5}) \leq 1 + C(n,d,C_{M},C_{P}) \, \epsilon^{2}_{0} \,\, := N. \end{equation} Inequality (\ref{89}) is proved, and our theorem follows. \end{proof} \vspace{0.5cm} As mentioned in the introduction, in the special case when $M$ has co-dimension 1, (\ref{103}) translates into a Carleson-type condition on the oscillation of the unit normals to $M$.\\ \textbf{\textit{Proof that Theorem \ref{Th1Merhej} follows from Theorem \ref{MTT'}}} \begin{proof} Suppose that (\ref{103old}) holds for some choice of unit normal $\nu$ to $M$. We show that (\ref{103old}) is in fact exactly inequality (\ref{103}). Fix $x \in M$ and $ 0 < r < 1$, and let $y \in M \cap B_{r}(x)$ be a point where the approximate tangent plane $T_{y}M$ (and thus the unit normal $\nu(y)$) exists. Denote by $ {T_{y}M}^{\perp} $ the subspace perpendicular to $T_{y}M$.
Then, using the matrix representation of $ \pi_{T_{y}M} $ in the standard basis of $\mathbb{R}^{n+1}$, and the fact that $ \pi_{{T_{y}M}^{\perp}} = Id_{n+1} - \pi_{T_{y}M}$ where $Id_{n+1}$ is the $(n+1) \times (n+1)$ identity matrix, one can easily see that \begin{equation} \label{r1} |\pi_{T_{y}M} - A_{x,r}|^{2} = |\pi_{{T_{y}M}^{\perp}}- B_{x,r}|^{2}, \end{equation} where $ \pi_{{T_{y}M}^{\perp}} = \big ( b_{ij}(y) \big)_{ij}$ and $B_{x,r} = Id_{n+1} - A_{x,r} = \big( (b_{ij})_{x,r} \big)_{ij}$. Indeed, $\pi_{{T_{y}M}^{\perp}} - B_{x,r} = (Id_{n+1} - \pi_{T_{y}M}) - (Id_{n+1} - A_{x,r}) = A_{x,r} - \pi_{T_{y}M}$, so the two Frobenius norms coincide.\\ Now, we want to express the right hand side of (\ref{r1}) using a different basis than the standard basis of $\mathbb{R}^{n+1}$. For any choice of orthonormal basis $\{ \nu_{1}(y), \ldots , \nu_{n}(y) \}$ of $T_{y}M$, we have that $\{ \nu_{1}(y), \ldots , \nu_{n}(y), \nu(y)\}$ is an orthonormal basis for $\mathbb{R}^{n+1}$. The matrix representation of $ \pi_{{T_{y}M}^{\perp}}$ with $\{ \nu_{1}(y), \ldots , \nu_{n}(y), \nu(y)\}$ as a basis for the domain $\mathbb{R}^{n+1}$ and the standard basis for the range $\mathbb{R}^{n+1}$, is the $(n+1) \times (n+1)$ matrix whose last column is $\nu(y)$ while the other columns are all zero. Thus, with this choice of bases and matrix representations, $B_{x,r}$ becomes the matrix whose last column is $\nu_{x,r}$ while the other columns are all zero\footnote{ Note that considering this choice of bases and matrix representations is only valid in co-dimension 1, as otherwise $B_{x,r}$ will not be well defined. This is because in higher co-dimensions, one will have infinitely many choices for the unit normals that span the normal plane, instead of the one choice (modulo direction) in co-dimension 1.}. Hence, using (\ref{r1}), we get that \begin{equation} \label{cor1} |\pi_{T_{y}M} - A_{x,r}|^{2} = |\pi_{{T_{y}M}^{\perp}}- B_{x,r}|^{2} = | \nu(y) - \nu_{x,r}|^{2}.
\end{equation} Since (\ref{cor1}) holds for $\mu$-almost every $y \in M \cap B_{r}(x)$, and since $x$ and $r$ are arbitrary, \begin{equation*} \begin{split} \sup_{x \in M \cap B_{\frac{1}{10^{4}}}(0)} \,\, \int_{0}^{1} \left(\, \Xint-_{B_{r}(x)} |\nu(y) - \nu_{x,r}|^{2} d \mu \right) & \frac{dr}{r} = \\ &\sup_{x \in M \cap B_{\frac{1}{10^{4}}}(0)} \,\, \int_{0}^{1} \left(\,\Xint-_{B_{r}(x)} |\pi_{T_{y}M} - A_{x,r}|^{2} d \mu \right) \frac{dr}{r} , \end{split} \end{equation*} and the proof is done. \end{proof} We now show that if we assume, in addition to the hypothesis of Theorem \ref{MTT'}, that $M$ is Reifenberg flat, then (locally) $M$ is exactly the bi-Lipschitz image of an $n$-plane. In other words, the containment in (\ref{contained}) becomes an equality. \begin{corollary} \label{MTT'RF} Let $M \subset B_{2}(0)$ be an $n$-Ahlfors regular rectifiable set containing the origin, and let $\mu =$ \( \mathcal{H}^{n} \mres M\) be the Hausdorff measure restricted to $M$. Assume that $M$ satisfies the Poincar\'{e}-type inequality (\ref{eqp}).
There exist $\epsilon_{3} = \epsilon_{3}(n,d,C_{M},C_{P})>0$ and $\theta_{1} = \theta_{1}(\lambda)$ such that if (\ref{103}) is satisfied with $\epsilon_{3}$ instead of $\epsilon_{0}$, and if for every $x \in M$ and $r < 1$ there is an $n$-plane $Q_{x,r}$ passing through $x$ such that \begin{equation} \label{rf1} d(y, Q_{x,r}) \leq \epsilon_{3} \, r \quad \forall \, y \in M \cap B_{10r}(x) \end{equation} and \begin{equation} \label{rf2} d(y, M) \leq \epsilon_{3} \, r \quad \forall \, y \in Q_{x,r} \cap B_{10r}(x), \end{equation} then there exist an onto $K$-bi-Lipschitz map $g: \mathbb{R}^{n+d} \rightarrow \mathbb{R}^{n+d}$, with bi-Lipschitz constant $K=K(n,d,C_{M},C_{P})$, and an $n$-dimensional plane $\Sigma_{0}$, such that (\ref{aa}) holds, (\ref{bb}) holds with $\epsilon_{3}$ instead of $\epsilon_{0}$ and with $C_{0}'' = C_{0}''(n,d,C_{M},C_{P})$ instead of $C_{0}$, and \begin{equation} \label{containedrf} M \cap B_{\theta_{1}}(0) = g(\Sigma_{0}) \cap B_{\theta_{1}}(0). \end{equation} \end{corollary} \begin{proof} Let $\epsilon_{2}$ be as in Theorem \ref{t2}, and let $\epsilon_{3} \leq \epsilon \leq \epsilon_{2}$ ($\epsilon_{3}$ and $\epsilon$ to be determined later). Going through the exact same steps as in the proof of Theorem \ref{MTT'}, but with $\epsilon$ instead of $\epsilon_{2}$, and $\epsilon_{3}$ instead of $\epsilon_{0}$, we get a bijective map $g: \mathbb{R}^{n+d} \rightarrow \mathbb{R}^{n+d}$ such that (\ref{aa}) holds, \begin{equation} \label{bb1rf} |g(z)-z| \leq C^{'}_{0} \epsilon \quad \textrm{for} \,\,\, z \in \mathbb{R}^{n+d}, \end{equation} and \begin{equation} \label{cc1rf} M \cap B_{\frac{1}{10^{l_{0}+4}}}(0) \subset g(\Sigma_{0}). \end{equation} Note that we have not fixed $\epsilon_{3}$ and $\epsilon$ yet.
However, we know that the above holds for $\epsilon_{3} \leq \epsilon \leq \epsilon_{2}$ while inequality (\ref{c}) is satisfied with $\epsilon_{3}$ instead of $\epsilon_{0}$, (\ref{c'''}) is satisfied with $\epsilon$ instead of $\epsilon_{2}$ and $\epsilon_{3}$ instead of $\epsilon_{0}$, and the hypothesis of Claim 1 is satisfied with $\epsilon_{3}$ instead of $\epsilon_{0}$. Now, we want to show that \begin{equation} \label{rf3} g(\Sigma_{0}) \cap B_{\frac{1}{10^{l_{0}+8}}}(0) \subset M. \end{equation} We first show that for every $k \geq 0$ and for every $j \in J_{k}$, $M \cap B_{120 r_{k}}(\tilde{x}_{jk})$ is close to $P_{jk}$ and that the $n$-planes $P_{jk}$ and $Q_{jk} := Q_{x_{jk}, r_{k}}$ are close to each other (in the Hausdorff distance sense). Let us begin by showing that for every $k \geq 0$ and for every $j \in J_{k}$, \begin{equation} \label{rf4} d(z, P_{jk}) \leq \epsilon \, r_{k} \quad \forall \, z \in M \cap B_{120 r_{k}}(\tilde{x}_{jk}). \end{equation} By Markov's inequality, we know that \begin{equation*} \begin{split} \label{3''} \mu \bigg( x \in B_{120r_{k}}(\tilde{x}_{jk}); \frac{d(x,P_{jk})}{120r_{k}}\geq \alpha^{\frac{1}{2}} (\tilde{x}_{jk}, 120 \lambda r_{k}) & \bigg) \leq \\ &\frac{1}{ \alpha^{\frac{1}{2}}(\tilde{x}_{jk}, 120 \lambda r_{k})} \int_{B_{120r_{k}}(\tilde{x}_{jk})} \frac{d(y,P_{jk})}{120r_{k}} \, d \mu \end{split} \end{equation*} Using (\ref{73}) with the fact that $\mu$ is Ahlfors regular, and (\ref{103}) with (\ref{a''}) from Lemma \ref{l5} and the fact that $120 \lambda r_{k} \leq \frac{1}{10}$, we get \begin{equation} \begin{split} \label{4''} \mu \bigg( x \in B_{120r_{k}}(\tilde{x}_{jk}); \frac{d(x,P_{jk})}{120r_{k}}\geq \alpha^{\frac{1}{2}} (\tilde{x}_{jk}, 120 \lambda r_{k}) &\bigg) \leq \nonumber \\ & \frac{\mu(B_{120r_{k}}(\tilde{x}_{jk}))}{ \alpha^{\frac{1}{2}}(\tilde{x}_{jk}, 120 \lambda r_{k})} \, \Xint-_{B_{120r_{k}}(\tilde{x}_{jk})} \frac{d(y,P_{jk})}{120r_{k}} \, d \mu \nonumber \\ &\leq C(n,d, C_{M}, C_{P}) \, 
r_{k}^{n} \, \alpha^{\frac{1}{2}}\left(\tilde{x}_{jk},120 \lambda r_{k}\right) \nonumber \\ &\leq C(n,d, C_{M}, C_{P})\, r_{k}^{n} \, \epsilon_{3} ^{\frac{1}{2}}. \end{split} \end{equation} Now, take a point $z \in M \cap B_{120r_{k}}(\tilde{x}_{jk})$. We consider two cases:\\ Either \begin{equation} \label{5'''} \frac{d(z ,P_{jk})}{120r_{k}} \leq \alpha^{\frac{1}{2}}\left(\tilde{x}_{jk}, 120 \lambda r_{k}\right) \end{equation} or \begin{equation} \label{5'} \frac{d(z ,P_{jk})}{120r_{k}} > \alpha^{\frac{1}{2}}\left(\tilde{x}_{jk}, 120 \lambda r_{k}\right). \end{equation} In the first case, combining (\ref{5'''}) with (\ref{103}) and (\ref{a''}), we get \begin{equation} \label{6} d(z ,P_{jk}) \leq C(n, C_{M}) \, r_{k} \, \epsilon_{3}^{\frac{1}{2}} .\end{equation} In case of (\ref{5'}), let $\rho$ be the biggest radius such that \begin{equation*} B_{\rho}(z) \subset \left\{ x \in B_{120r_{k}}(\tilde{x}_{jk}) ; \,\,\, \frac{d(x,P_{jk})}{120r_{k}} > \alpha^{\frac{1}{2}}\left(\tilde{x}_{jk}, 120 \lambda r_{k}\right)\right\}. \end{equation*} Now, since $z \in M$ and $\mu$ is Ahlfors regular, we get using (\ref{4''}) that \begin{equation} \label{8''} C_{M} \, \rho^{n} \leq \mu(B_{\rho}(z)) \leq C(n,d, C_{M}, C_{P}) \, r_{k}^{n} \,\epsilon_{3}^{\frac{1}{2}}. \end{equation} Thus, relabelling, (\ref{8''}) becomes \begin{equation} \label{9''} \rho \leq C(n, C_{M}, C_{P}) \,r_{k} \, \epsilon_{3}^{\frac{1}{2n}}.\end{equation} On the other hand, since $\rho$ is the biggest radius such that $B_{\rho}(z) \subset \\ \left\{ x \in B_{120r_{k}}(\tilde{x}_{jk}) ; \,\,\, \frac{d(x,P_{jk})}{120r_{k}} > \alpha^{\frac{1}{2}}\left(\tilde{x}_{jk}, 120 \lambda r_{k}\right)\right\}$ , then there exists $x_{0} \in \partial B_{\rho}(z)$ such that \begin{equation} \label{5''} \frac{d(x_{0} ,P_{jk})}{120r_{k}} \leq \alpha^{\frac{1}{2}}\left(\tilde{x}_{jk}, 120 \lambda r_{k}\right). 
\end{equation} Thus, by (\ref{5''}), (\ref{9''}) and (\ref{103}) together with (\ref{a''}), we get \begin{eqnarray} \label{7''} d(z ,P_{jk}) &\leq& |z-x_{0}| + d(x_{0} ,P_{jk}) \nonumber \\ &=& \rho + d(x_{0} ,P_{jk}) \leq C(n,d, C_{M}, C_{P}) \, r_{k} \, \epsilon_{3}^{\frac{1}{2n}}+ 120r_{k} \, \alpha^{\frac{1}{2}}\left(\tilde{x}_{jk}, 120 \lambda r_{k}\right) \nonumber\\ &\leq& C(n,d,C_{M}, C_{P}) \,r_{k} \, \epsilon_{3}^{\frac{1}{2n}}. \end{eqnarray} Combining (\ref{6}) and (\ref{7''}), we get that \begin{equation} \label{10''''} d(z ,P_{jk}) \leq C_{5} \,r_{k} \, \epsilon_{3}^{\frac{1}{2n}} \quad \textrm{for}\,\, z \in M \cap B_{120r_{k}}(\tilde{x}_{jk}), \end{equation} where $C_{5}= C_{5}(n,d,C_{M},C_{P})$. Thus, for $ C_{5} \, \epsilon_{3}^{\frac{1}{2n}} \leq \epsilon,$ we get (\ref{rf4}), which is the desired inequality. Now, let us show that $P_{jk}$ and $Q_{jk}$ are close together, that is \begin{equation} \label{rf5} d_{x_{jk}, 5 r_{k}}(P_{jk}, Q_{jk}) \leq 3 \epsilon \, r_{k}. \end{equation} Since $P_{jk}$ and $Q_{jk}$ are $n$-planes, it is enough to show \begin{equation} \label{rf6} \sup_{y \in Q_{jk} \cap B_{5r_{k}}(x_{jk})} d(y, P_{jk}) \leq 3 \epsilon \, r_{k}. \end{equation} Let $y \in Q_{jk} \cap B_{5r_{k}}(x_{jk})$. By (\ref{rf2}), we get that $d(y,M) \leq \epsilon_{3} r_{k}$, and thus, there exists $y' \in M$ such that $|y - y'| \leq 2 \, \epsilon_{3} \, r_{k}$. Recalling that $ x_{jk} \in M \cap B_{ \frac{r_{k}}{6}}(\tilde{x}_{jk})$ (see (\ref{61})), we get \begin{equation*} |y' - \tilde{x}_{jk}| \leq |y' - y| + |y - x_{jk}| + |x_{jk} - \tilde{x}_{jk}| \leq 2 \epsilon_{3} \, r_{k} + 5 r_{k} + \frac{r_{k}}{6} \leq 120 r_{k},\end{equation*} that is $y' \in B_{120 r_{k}}(\tilde{x}_{jk})$.
Hence, by (\ref{rf4}), we get that $d(y', P_{jk}) \leq \epsilon \,r_{k}$, and using the fact that $\epsilon_{3} \leq \epsilon$, we get \begin{equation*} d(y, P_{jk}) \leq |y - y'| + d(y', P_{jk}) \leq 3 \epsilon \, r_{k} ,\end{equation*} which finishes the proof of (\ref{rf6}) and in particular (\ref{rf5}). \\ Before starting the proof of (\ref{rf3}), let us briefly recall how the map $g$ was defined. In the proof of Theorem \ref{t2} (see paragraph above (\ref{sigmas})), David and Toro constructed the smooth maps $\sigma_{k} \,\, \textrm{and} \,\, f_{k}$, where $f_{0} = Id$ and $f_{k} = \sigma_{k-1} \circ \ldots \circ \sigma_{0} \,\, \textrm{for} \,\, k \geq 1$, then defined the map $f = \displaystyle \lim_{k \to \infty} f_{k}$ on $\Sigma_{0}$, and finally took $g$ to be the extension of $f$ to the whole space. \\ In order to prove (\ref{rf3}), we will need the following inequality from \cite{DT1} (see Proposition 5.1, page 19 in \cite{DT1}): \begin{equation} \label{inrf} d(f_{k}(z), P_{jk}) \leq C(n,d) \, \epsilon \, r_{k}, \quad \forall \, z \in \Sigma_{0}, \, k\geq 0 \,\,\, \textrm{and}\,\,\, j \in J_{k} , \,\,\, \textrm{such that} \,\,\, f_{k}(z) \in B_{5r_{k}}(x_{jk}).\end{equation} We are finally ready to prove (\ref{rf3}). Let $w \in g(\Sigma_{0}) \cap B_{\frac{1}{10^{l_{0}+8}}}(0)$, and let $d_{0} := d(w, M)$. We would like to prove that $d_{0}=0$ (recall that $M$ is closed by assumption). Let $z \in \Sigma_{0}$ be such that $ w = g(z)$. Notice that by (\ref{in}) (with $\epsilon$ instead of $\epsilon_{2}$), the definition of $f_{0}$, and the fact that $g$ and $f$ agree on $\Sigma_{0}$, we have \begin{equation} \label{rf7} |w - z| = |g(z)-z| = |f(z) - f_{0}(z)| \leq C(n,d) \epsilon \, r_{0}.
\end{equation} Recalling that $\Sigma_{0} = P_{i_{0}0}$, $\tilde{x}_{i_{0}0} = 0$, $r_{0} = \frac{1}{10^{l_{0}+5}}$, and that $ x_{jk} \in B_{ \frac{r_{k}}{6}}(\tilde{x}_{jk})$ (see (\ref{61})), we get \begin{eqnarray} \label{rf8} |z - x_{i_{0}0}| &\leq& |z - w| + | w - \tilde{x}_{i_{0}0}| + |\tilde{x}_{i_{0}0} - x_{i_{0}0}| \nonumber \\ &\leq& C(n,d) \epsilon \, r_{0}+ \frac{1}{10^{l_{0}+8}} + \frac{r_{0}}{6} \leq C_{6} \epsilon \, r_{0} + 2 r_{0} \leq 3 r_{0}, \end{eqnarray} for $\epsilon$ such that $C_{6} \epsilon \leq 1$, where $C_{6} = C_{6}(n,d)$. Thus, $z \in P_{i_{0}0} \cap B_{5 r_{0}}(x_{i_{0}0})$, and by (\ref{rf5}), there is a point $z' \in Q_{i_{0}0}$ such that $ |z - z'| \leq 6 \epsilon \, r_{0}$. Moreover, \begin{equation} \label{rf10} |z' - x_{i_{0}0}| \leq |z' - z| + |z - x_{i_{0}0}| \leq 6 \epsilon \, r_{0} + 3 r_{0} \leq 10 r_{0}, \end{equation} for $\epsilon < 1$. Thus, $z' \in Q_{i_{0}0} \cap B_{10 r_{0}}(x_{i_{0}0})$, and by (\ref{rf2}), we get that $d(z', M) \leq \epsilon_{3} \, r_{0}.$\\ Combining (\ref{rf7}), the line after (\ref{rf8}), the line before and the line after (\ref{rf10}), and the fact that $\epsilon_{3} \leq \epsilon$, we get \begin{equation} \label{rf11} d_{0} = d(w, M) \leq |w - z| + |z - z'| + d(z', M) \leq C_{6} \epsilon \, r_{0}+ 6 \epsilon r_{0} + \epsilon_{3} \, r_{0} \leq (C_{6} +7) \, \epsilon \, r_{0} \leq \frac{r_{0}}{10},\end{equation} for $\epsilon$ such that $(C_{6} + 7) \, \epsilon \leq \frac{1}{10}$, where $C_{6} = C_{6}(n,d)$. We proceed by contradiction. Suppose $d_{0} > 0$; then there exists $k \geq 0$ such that $r_{k+1} < d_{0} \leq r_{k}$. Notice that since $w = g(z)$, $z \in \Sigma_{0}$, and the maps $g$ and $f$ agree on $\Sigma_{0}$, by (\ref{in}) we have \begin{equation} \label{rf12} |w - f_{k}(z)| \leq C(n,d) \, \epsilon \, r_{k}. \end{equation} Now, by the definition of $d_{0}$, there exists $\xi \in M$ such that $ |\xi - w| \leq \frac{3}{2} d_{0}$.
Using (\ref{rf11}) and the fact that $r_{0} = \frac{1}{10^{l_{0}+5}}$, we get \begin{equation} \label{rf18} |\xi| \leq |\xi - w| + |w| \leq \frac{3}{2} \frac{r_{0}}{10} + \frac{1}{10^{l_{0}+8}} \leq \frac{1}{10^{l_{0}+4}}, \end{equation} and thus by (\ref{63'}), there exists $j \in J_{k}$ such that $ \xi \in B_{\frac{3}{2}r_{k}}(x_{jk})$. \\ Since both $k$ and $j$ are now fixed, consider the $n$-plane $P_{jk}$ and the point $x_{jk}$. By the line under (\ref{rf18}), the line under (\ref{rf12}), (\ref{rf12}), and the fact that $d_{0} \leq r_{k}$, we have \begin{equation} \label{rf100} |x_{jk} - f_{k}(z)| \leq |x_{jk} - \xi| + |\xi - w| + |w - f_{k}(z)| \leq \frac{3}{2}r_{k} + \frac{3}{2} d_{0} + C(n,d) \, \epsilon \, r_{k} \leq 3 r_{k} + C_{7} \, \epsilon \, r_{k} \leq 4r_{k}, \end{equation} for $\epsilon$ such that $C_{7} \epsilon \leq 1$, where $C_{7} = C_{7}(n,d)$. Thus, inequality (\ref{inrf}) tells us that $d(f_{k}(z), P_{jk}) \leq C(n,d) \, \epsilon \, r_{k}$. Let $y \in P_{jk}$ be such that $| y - f_{k}(z) | \leq C(n,d) \, \epsilon \, r_{k}$. Then, by (\ref{rf12}), the line below it, the line below (\ref{rf18}), and recalling that $d_{0} \leq r_{k}$, we get \begin{equation} \label{rf17} | y - x_{jk}| \leq |y - f_{k}(z)| + |f_{k}(z) - w| + |w - \xi | + |\xi - x_{jk}| \leq C_{8} \, \epsilon \, r_{k} + 3 r_{k} \leq 5 r_{k} \end{equation} for $\epsilon$ such that $C_{8} \, \epsilon \leq 1$, where $C_{8} = C_{8}(n,d)$. Thus, $y \in P_{jk} \cap B_{5 r_{k}}(x_{jk})$, and by (\ref{rf5}) there exists $y' \in Q_{jk}$ such that $| y - y'| \leq 3 \epsilon \, r_{k}$. But then, $| y' - x_{jk}| \leq |y - y'| + |y - x_{jk}| \leq 10 \, r_{k}$; thus $ y' \in Q_{jk} \cap B_{10 r_{k}}(x_{jk})$ and by (\ref{rf2}) we get that $d(y',M) \leq \epsilon_{3} \, r_{k}$.
\\ Finally, using (\ref{rf12}), the two lines before (\ref{rf17}), and the three lines below it, we get \begin{equation} \label{rf19} d_{0} = d(w,M) \leq |w - f_{k}(z)| + |f_{k}(z) - y| + |y - y'| + d(y',M) \leq C(n,d) \, \epsilon \, r_{k} = C_{9} \epsilon \, r_{k} \leq r_{k+1} \end{equation} for $\epsilon$ such that $C_{9} \epsilon \leq \frac{1}{10}$, where $C_{9}= C_{9}(n,d)$, which contradicts the fact that $d_{0} > r_{k+1}$. This finishes the proof of (\ref{rf3}).\\ Fix $\epsilon < \epsilon_{2} < 1$ such that the lines after (\ref{rf8}), (\ref{rf11}), (\ref{rf100}), (\ref{rf17}), and (\ref{rf19}) hold, and then fix $\epsilon_{3} \leq \epsilon \leq \epsilon_{2}$ such that inequality (\ref{c}) is satisfied with $\epsilon_{3}$ instead of $\epsilon_{0}$, (\ref{c'''}) is satisfied with $\epsilon$ instead of $\epsilon_{2}$ and $\epsilon_{3}$ instead of $\epsilon_{0}$, the hypothesis of Claim 1 is satisfied with $\epsilon_{3}$ instead of $\epsilon_{0}$, and such that the line below (\ref{10''''}) is satisfied. Writing $\epsilon_{3} = c_{10} \, \epsilon$, where $c_{10} = c_{10}(n,d,C_{M},C_{P})$, and replacing in (\ref{bb1rf}), we get (\ref{bb}). The proof that $g$ is bi-Lipschitz is the same as in the proof of Theorem \ref{MTT'}. \end{proof} \section {The Poincar\'e Inequality (\ref{eqp1'}) is equivalent to the $p$-Poincar\'e inequality} \label{PIQ} Let $(M, d_{0}, \mu)$ be the metric measure space, where $M \subset B_{2}(0)$ is an $n$-Ahlfors regular rectifiable set in $\mathbb{R}^{n+d}$, $\mu =$ \( \mathcal{H}^{n} \mres M\) is the Hausdorff measure restricted to $M$, and $d_{0}$ is the restriction of the standard Euclidean distance in $\mathbb{R}^{n+d}$ to $M$. In this section, we prove Theorem \ref{epi}, which states that in the setting described above, the Poincar\'e inequality (\ref{eqp1'}) is equivalent to the $p$-Poincar\'e inequality (\ref{eqp2'}) and the $Lip$-Poincar\'e inequality (\ref{eqp3'}).\\ We prove that $(iii) \implies (ii) \implies (i) \implies (iii)$.
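Schematically, the chain of implications and the result invoked at each step (all stated below) reads
\begin{equation*}
(iii) \ \overset{\textrm{Theorem \ref{lemmaqc}}}{\Longrightarrow} \ (ii) \ \overset{\textrm{Theorem \ref{theoremqc}}}{\Longrightarrow} \ (i) \ \overset{\textrm{Theorem \ref{propbjorns}, Proposition \ref{lemmaqc2}}}{\Longrightarrow} \ (iii).
\end{equation*}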
In fact, $(iii) \implies (ii)$ is proved in \cite{M1}. The fact that $(ii) \implies (i)$ follows from a theorem in \cite{K1}, where Keith proves the equivalence between $p$-Poincar\'e inequalities and $Lip$-Poincar\'e inequalities. Finally, to prove $(i) \implies (iii)$, we use the well-known fact that $X$ supports a $p$-Poincar\'e inequality if and only if inequality (\ref{eqp2}) holds for all measurable functions $u$ on $X$ and all $p$-weak upper gradients $\rho$ of $u$. Then, we show that $|\nabla^{M}f|$ is a $p$-weak upper gradient of $f$, when $f$ is a Lipschitz function on $\mathbb{R}^{n+d}$. \\ Let us start by stating the theorems that we need, as mentioned in the paragraph above. \begin{theorem} (see \cite{K1}, Theorem 2) \label{theoremqc} Let $p \geq 1$, and let $(X,d,\nu)$ be a complete metric measure space, with $\nu$ a doubling measure. Then, the following are equivalent: \begin{itemize} \item $(X,d,\nu)$ admits a $p$-Poincar\'e inequality for all measurable functions $u$ on $X$. \item $(X,d,\nu)$ admits a $Lip$-Poincar\'e inequality for all Lipschitz functions $f$ on $X$. \end{itemize} \end{theorem} \begin{theorem} (see \cite{BB}, Proposition 4.13) \label{propbjorns} Let $p \geq 1$, and let $(X,d,\nu)$ be a metric measure space. Then, the following are equivalent: \begin{itemize} \item Inequality (\ref{eqp2}) holds for all measurable (resp. Lipschitz) functions $u$ on $X$ and all upper gradients $\rho$ of $u$. \item Inequality (\ref{eqp2}) holds for all measurable (resp. Lipschitz) functions $u$ on $X$ and all $p$-weak upper gradients $\rho$ of $u$. \end{itemize} \end{theorem} Before stating the theorem we need from \cite{M1}, let us make a remark on what the metric balls look like in the metric measure space $(M, d_{0}, \mu)$. Fix $x \in M$ and $r>0$.
It is easy to see that \begin{equation} \label{ballslook} B^{M}_{r}(x) = B_{r}(x) \cap M, \end{equation} where $B_{r}(x)$ denotes the Euclidean ball in $\mathbb{R}^{n+d}$ of center $x$ and radius $r$. \begin{theorem} \label{lemmaqc} (see \cite{M1}, Corollary 5.8)\footnote{ Notice that Corollary 5.8 in \cite{M1} is stated and proved in the ambient space $\mathbb{R}^{n+1}$. However, the proof of Corollary 5.8 in \cite{M1} is independent from the co-dimension $d$ of $M$. Thus the exact same statement holds here in the higher co-dimension case. Moreover, notice that in Corollary 5.8, the Poincar\'e inequality assumed is (\ref{eqp1'}) but for $p= \lambda = 2$. This results in getting the Poincar\'e inequality (\ref{eqp3'}) but also for $p=\lambda = 2$. However, it is easy to see that one can assume the Poincar\'e inequality (\ref{eqp1'}) for any $p \geq 1$ and $\lambda \geq 1$, and get inequality (\ref{eqp3'}) for the same $p$ and $\lambda$ that one started with.} Let $(M, d_{0}, \mu)$ be as above. Assume that $M$ satisfies $(iii)$. Then, $M$ satisfies $(ii)$. \end{theorem} To show that $|\nabla^{M}f|$ is a $p$-weak upper gradient of $f$, when $f$ is a Lipschitz function on $\mathbb{R}^{n+d}$, we need the following lemma from \cite{BB}: \begin{lemma} (see \cite{BB}, Lemma 1.42) \label{lemmabjorns} Let $p \geq 1$ and let $(M, d_{0}, \mu)$ be as above. Suppose that $E \subset M$, with $\mu(E) = 0$. Denote by $\Gamma(M)$ the set of all rectifiable curves in $M$, and let \begin{equation*} \Gamma_{E} = \left \{ \gamma \in \Gamma(M), \,\, \textrm{such that} \,\, \mathcal{L}^{*}_{1}(\gamma^{-1}(E)) \neq 0 \right \}, \end{equation*} where $\mathcal{L}^{*}_{1}$ denotes the Lebesgue outer measure on $\mathbb{R}$. Then, $ \textrm{Mod}_{p}(\Gamma_{E}) = 0$. \end{lemma} \begin{proposition} \label{lemmaqc2} Let $(M, d_{0}, \mu)$ be as above, and let $f$ be a Lipschitz function on $\mathbb{R}^{n+d}$.
Then, $|\nabla^{M}f|$ (or more precisely, any non-negative extension of $|\nabla^{M}f|$ to the whole space $M$) is a $p$-weak upper gradient of $f|_{M}$, the restriction of $f$ to $M$. \end{proposition} \begin{proof} Since $f$ is Lipschitz on $\mathbb{R}^{n+d}$, we know that $\nabla^{M}f$ exists $\mu$-almost everywhere. Let \begin{equation*} E = \left \{ x \in M \,\,\, \textrm{such that} \,\,\, \nabla^{M}f(x) \,\, \textrm{does not exist} \right \}.\end{equation*} Then, $\mu(E)=0$, and by Lemma \ref{lemmabjorns}, we know that Mod$_{p}(\Gamma_{E}) = 0$. Now, let $\gamma$ be a rectifiable curve in $M$, parametrized by arc length, such that $\gamma \notin \Gamma_{E}$. Then, $\mathcal{L}_{1}(\gamma^{-1}(E)) = 0$. Moreover, since $f \circ \gamma$ is Lipschitz, and thus absolutely continuous on $[0, l_{\gamma}]$, we have \begin{eqnarray} \label{np1} \big | f|_{M}(\gamma(0)) - f|_{M}(\gamma(l_{\gamma})) \big | &=& | f(\gamma(0)) - f(\gamma(l_{\gamma}))| \nonumber \\ &=& \left | \int_{0}^{l_{\gamma}} (f \circ \gamma)'(t) \, dt \right | \nonumber \\ &=& \left | \int_{t \in [0, l_{\gamma}]; \, \gamma(t) \notin E} (f \circ \gamma)'(t) \, dt \right | \leq \int_{t \in [0, l_{\gamma}]; \, \gamma(t) \notin E} |(f \circ \gamma)'(t)| \, dt. \end{eqnarray} Let $t \in [0, l_{\gamma}]$ be such that $\gamma(t) \notin E$. Then, $T_{\gamma(t)}M$ exists, and $\nabla^{M}f(\gamma(t)) \in T_{\gamma(t)}M$. We first show that \begin{equation} \label{np4} | (f \circ \gamma)'(t)| \leq | \nabla^{M} f (\gamma(t))|.
\end{equation} Since $\gamma'(t) \in T_{\gamma(t)}M$\footnote{This follows directly from the facts that for any sequence $r \to 0$, we have $\gamma'(t) = \displaystyle \lim_{r \to 0} \displaystyle \frac{\gamma(t+r) - \gamma(t)}{r}$ and $\displaystyle \lim_{r \to 0} \displaystyle \sup_{y \in \frac{M - \gamma(t)}{r}} d(y, T_{\gamma(t)}M) = 0$.} is a unit vector, by Rademacher's Theorem we have \begin{equation} \label{np3} \lim_{h \to 0} \frac{| f \big (\gamma(t) +h \gamma '(t) \big ) - f \big(\gamma(t)\big) - h <\nabla^{M} f (\gamma(t)), \gamma '(t) >|}{h} = 0. \end{equation} Now, for any $-t < h < l_{\gamma} - t$, we have \begin{equation} \begin{split} \label{np5} \frac{|f \big (\gamma(t+h) \big ) - f \big(\gamma(t)\big)|}{h} & \leq \\ &\frac{|f \big (\gamma(t+h) \big ) - f \big (\gamma(t) + h \gamma '(t) \big )|}{h} + \frac{|f \big (\gamma(t) + h \gamma '(t) \big ) - f\big(\gamma(t)\big)|}{h} \nonumber \\ &\leq L_{f}\, \frac{| \gamma(t+h) - \gamma(t) - h \gamma '(t) |}{h} + \frac{|f \big (\gamma(t) + h \gamma '(t) \big ) - f\big(\gamma(t)\big)|}{h} \, , \end{split} \end{equation} where in the last step, we used the fact that $f$ is Lipschitz on $\mathbb{R}^{n+d}$.\\ Taking the limit as $h \to 0$ on both sides of (\ref{np5}), and using (\ref{np3}) and the fact that $\gamma '(t)$ is a unit vector, we get \begin{equation*} | (f \circ \gamma)'(t)| \leq | \nabla^{M} f(\gamma(t)) \cdot \gamma ' (t)| \leq | \nabla^{M} f(\gamma(t))| \end{equation*} which is exactly (\ref{np4}). Replacing (\ref{np4}) in (\ref{np1}), we get \begin{equation} \label{np8} \big | f|_{M}(\gamma(0)) - f|_{M}(\gamma(l_{\gamma}))\big | \leq \int_{t \in [0, l_{\gamma}]; \, \gamma(t) \notin E} | \nabla^{M} f (\gamma(t))| \, dt. \end{equation} Now, define the map $G : M \to [0, \infty]$ to be any non-negative extension of $|\nabla^{M}f|$ to the whole space $M$ (that is, $G(x) = |\nabla^{M}f(x)|$ on $M \setminus E$, which means that $G = |\nabla^{M}f|\, \mu$-a.e.).
Plugging back in (\ref{np8}), we get \begin{eqnarray} \label{np7} \big | f|_{M}(\gamma(0)) - f|_{M}(\gamma(l_{\gamma})) \big | &\leq& \int_{t \in [0, l_{\gamma}]; \, \gamma(t) \notin E} G(\gamma(t)) \, dt \nonumber \\ &=& \int_{t \in [0, l_{\gamma}]; \, \gamma(t) \notin E} G(\gamma(t)) \, dt + \int_{t \in [0, l_{\gamma}]; \, \gamma(t) \in E} G(\gamma(t)) \, dt \nonumber \\ &=& \int_{0}^{l_{\gamma}} G\big(\gamma (t)\big) dt = \int_{\gamma} G \, ds. \end{eqnarray} This finishes the proof that $G$ is a $p$-weak upper gradient of $f|_{M}$.\footnote{ The function $G$ defined here is clearly measurable. However, since any non-negative measurable function coincides $\mu$-almost everywhere with a non-negative Borel function (see \cite{BB}, Proposition 1.2), we can assume, without any loss of generality, that $G$ is Borel. In this case, $\int_{\gamma}G \, ds$ is well defined for any rectifiable curve $\gamma$ in $M$, and we do not need to worry about the last step in (\ref{np7}).} \end{proof} We are finally ready to prove Theorem \ref{epi}: \\ \textbf{\underline{\textit{Proof of Theorem \ref{epi}:}}} \begin{proof} We prove $ (iii) \implies (ii) \implies (i) \implies (iii)$:\\ \noindent $(iii) \implies (ii)$:\\ This is exactly Theorem \ref{lemmaqc}.\\ \noindent $(ii) \implies (i)$: \\ Notice that by using (\ref{ballslook}), we will be done if we apply Theorem \ref{theoremqc} to the metric measure space $(M, d_{0}, \mu)$. In fact, $M$ is complete since it is closed and bounded. Moreover, the fact that $\mu$ is doubling follows from (\ref{ballslook}) and the Ahlfors regularity of $\mu$. Hence, we can apply Theorem \ref{theoremqc} to $(M, d_{0}, \mu)$.\\ \noindent $(i) \implies (iii)$: \\ Notice that by Theorem \ref{propbjorns}, we know that $(i)$ implies that inequality (\ref{eqp2}) holds for all measurable functions $u$ on $M$ and all $p$-weak upper gradients $\rho$ of $u$. Let $f$ be a Lipschitz function on $\mathbb{R}^{n+d}$, and fix $x \in M$ and $r>0$.
Then, $f|_{M}$ is a Lipschitz function on $M$, and by Proposition \ref{lemmaqc2}, $|\nabla^{M}f|$ agrees $\mu$-almost everywhere with $G$, a $p$-weak upper gradient of $f|_{M}$. Applying (\ref{eqp2}) for $u = f|_{M}$, $\rho = G$, and the ball $B = B_{r}(x) \cap M$, we get \begin{equation*} \Xint-_{B_{r}(x)} \left| f(y) - f_{x,r}\right| d \mu(y) \leq \kappa r \left(\,\Xint-_{B_{\lambda r}(x)} G(y)^{2} \, d \mu(y) \right)^{\frac{1}{2}} = \kappa r \left(\,\Xint-_{B_{\lambda r}(x)} (|\nabla^{M}f|(y))^{2} \, d \mu(y) \right)^{\frac{1}{2}} \end{equation*} hence finishing the proof. \end{proof} \section{The conclusion of Theorem \ref{MTT'} is optimal} \label{MCSHH} In this section, we prove Theorem \ref{construct} by giving an example of a non-Reifenberg flat, $2$-Ahlfors regular rectifiable set $M \subset \mathbb{R}^{3}$ that satisfies the Carleson condition (\ref{103}) and the Poincar\'e-type inequality (\ref{eqp}). \\ To construct this example, we use the well-known fact that Lipschitz domains support a $p$-Poincar\'e-type inequality, together with Theorem \ref{epi}, which allows us to go from a $p$-Poincar\'e inequality to the Poincar\'e inequality (\ref{eqp1'}). \\ In order to keep track of where the balls live, $B^{2}_{r}(x)$ will denote the Euclidean ball in $\mathbb{R}^{2}$ of center $x$ and radius $r$, whereas $B^{3}_{r}(x)$ will be that in $\mathbb{R}^{3}$. Moreover, diam($A$) denotes the diameter of a set $A$. \begin{definition} We say that a bounded set $A \subset \mathbb{R}^{2}$ satisfies the \emph{corkscrew condition} if there exists $\delta >0$ such that for all $x \in \bar{A}$ and $0<r \leq \textrm{diam}(A)$, the set $B^{2}_{r}(x) \cap A$ contains a ball with radius $\delta r$. \end{definition} \begin{definition} We say that an open, bounded set $A \subset \mathbb{R}^{2}$ is a Lipschitz domain if its boundary $\partial A$ can be written, locally, as the graph of a Lipschitz function.
More precisely, $A$ is a Lipschitz domain if for every point $x \in \partial A$ there exists a radius $r>0$ and a bijective map $h_{x}: B^{2}_{r}(x) \to B^{2}_{1}(0)$ such that the following holds: \begin{itemize} \item $h_{x}$ and $h_{x}^{-1}$ are Lipschitz continuous. \item $h_{x}(\partial A \cap B^{2}_{r}(x)) = Q_{0}$, and \item $h_{x}(A \cap B^{2}_{r}(x)) = Q_{1}$, \end{itemize} where $Q_{0} = \left\{ (x_{1}, x_{2}) \in B^{2}_{1}(0); x_{2} = 0 \right\}$ and $Q_{1} = \left\{ (x_{1}, x_{2}) \in B^{2}_{1}(0); x_{2} > 0 \right\}$. \end{definition} In \cite{BS}, J. Bj\"{o}rn and N. Shanmugalingam prove that Lipschitz domains support $p$-Poincar\'e-type inequalities: \begin{theorem} \label{Ht1} (see \cite{BS}, Theorem 4.4) Consider the Hausdorff measure $\mathcal{H}^{2}$ on $\mathbb{R}^{2}$. Let $\Omega$ be any Lipschitz domain in $\mathbb{R}^{2}$. Then, $\Omega$ supports a 2-Poincar\'e-type inequality, that is, there exist constants $\kappa\geq1$ and $\lambda\geq1$ such that for every $x \in \bar{\Omega}$ and $r>0$, every Lipschitz function $u: \Omega \rightarrow \mathbb{R}$, and any upper gradient $\rho$ of $u$ in $\Omega$, the following holds \begin{equation} \label{Heq1} \Xint-_{B^{2}_{r}(x) \cap \Omega} \left| u(y) - u_{x,r} \right| \, d \mathcal{H}^{2}(y) \leq \kappa \, r \, \left(\,\Xint-_{B^{2}_{\lambda r}(x) \cap \Omega} \rho(y)^{2} \, d \mathcal{H}^{2}(y) \right)^{\frac{1}{2}}, \end{equation} where $u_{x,r} := \Xint-_{B^{2}_{r}(x) \cap \Omega} u \, d \mathcal{H}^{2}$. \end{theorem} We are now ready to construct our example. Let $\Omega := B^{2}_{1}(0) \setminus Q$, where $Q$ is the closed square with center $(\frac{1}{2},0)$ and side $l = \frac{1}{10}$. Since $\Omega$ is a Lipschitz domain, by Theorem \ref{Ht1}, it supports the 2-Poincar\'e-type inequality (\ref{Heq1}).
\\ \textbf{\underline{\textit{Proof of Theorem \ref{construct}:}}} \begin{proof} Let $\Omega$ be as in the construction above, and let $M := \bar{\Omega} \times \{0\} \subset \mathbb{R}^{3}$. We prove this theorem for $n=2$, $d = 1$, and $\mu =$ \( \mathcal{H}^{2} \mres M\). However, with a similar construction\footnote{ In general, we take $\Omega := B^{n}_{1}(0) \setminus Q$ where $Q$ is the closed $n$-cube of center $(\frac{1}{2}, \underbrace{0, \ldots,0}_\text{$n-1$-times})$, and side $l = \frac{1}{10}$. Then, $M := \bar{\Omega} \times \underbrace{(0, \ldots, 0)}_\text{$d$-times}$.}, the theorem holds for any $n \geq 2$ and $d \geq 1$.\\ It is trivial to see that $M$ is a rectifiable non-Reifenberg flat set. To see that $M$ is 2-Ahlfors regular, first note that $M$ is closed by construction. So, we show that there exists a constant $C_{M} \geq 1$ such that for every $x \in M$ and $0<r \leq 1$, we have \begin{equation} \label{Heq2} C_{M}^{-1} \, r^{2} \leq \mu(M\cap B^{3}_{r}(x)) \leq C_{M} \, r^{2}. \end{equation} By the definition of $\mu$ and the construction of $M$, proving (\ref{Heq2}) translates to proving that for every $\bar{x} \in \bar{\Omega}$ and $0<r \leq 1$, \begin{equation} \label{Heq3} C_{M}^{-1} \, r^{2} \leq \mathcal{H}^{2} (\bar{\Omega} \cap B^{2}_{r}(\bar{x})) \leq C_{M} \, r^{2}. \end{equation} The right hand side of (\ref{Heq3}) is trivial since $\mathcal{H}^{2}(\bar{\Omega} \cap B^{2}_{r}(\bar{x})) \leq \mathcal{H}^{2}(B^{2}_{r}(\bar{x})) = \omega_{2} \, r^{2}$. For the left hand side, notice that since $\Omega$ is a Lipschitz domain, then it is automatically a corkscrew domain, and thus there exists a $\delta >0$ such that for every $\bar{x} \in \bar{\Omega}$ and for every $0<r \leq \textrm{diam}(\Omega) = 1$, there is a ball $B^{2}_{\delta r}(\bar{x}) \subset \bar{\Omega} \cap B^{2}_{r}(\bar{x})$.
So, $\omega_{2} \, \delta^{2}r^{2} =\mathcal{H}^{2}( B^{2}_{\delta r}(\bar{x})) \leq \mathcal{H}^{2}(\bar{\Omega} \cap B^{2}_{r}(\bar{x}))$, and the proof of (\ref{Heq3}) is done.\\ Let us now prove that the Carleson-type condition (\ref{103}) holds. Let $\epsilon_{0}$ be the constant from the statement of Theorem \ref{MTT'}. Since $M$ has co-dimension 1, (\ref{103}) can be written as (\ref{103old}), and thus proving (\ref{103}) translates to proving \begin{equation} \label{Heq4} \sup_{x \in M \cap B^{3}_{1}(0)} \,\, \int_{0}^{1} \left(\,\Xint-_{B^{3}_{r}(x)} |\nu(y) - \nu_{x,r}|^{2} \, d \mu \right) \frac{dr}{r} < \epsilon_{0}^{2},\end{equation} where $\nu$ denotes the unit normal to $M$ and $\nu_{x,r} := \Xint-_{B^{3}_{r}(x)} \nu \, d \mu$. But for $\mu$-almost every $y$, $\nu(y)$ exists and $\nu(y) = \langle 0,0,1 \rangle$. Thus, the left hand side of (\ref{Heq4}) is always 0, and (\ref{Heq4}) is satisfied. \\ Finally, let us prove that $M$ satisfies the following Poincar\'e inequality \begin{equation} \label{Heq6ag} \Xint-_{B^{3}_{r}(x)} \left| f(y) - f_{x,r} \right| \, d \mu(y) \leq \kappa \, r \, \left(\,\Xint-_{B^{3}_{\lambda r}(x)}|\nabla^{M}f(y)|^{2} \, d \mu(y) \right)^{\frac{1}{2}},\end{equation} for some $\kappa \geq 1$ and $\lambda \geq 1$, and where $x \in M$, $r >0$, $f$ is a Lipschitz function on $\mathbb{R}^{3}$, and $f_{x,r} := \Xint-_{B^{3}_{r}(x)} f \, d \mu$. By Theorem \ref{epi}, it suffices to show that \begin{equation} \label{Heq6agag} \Xint-_{B^{3}_{r}(x)} \left| f(y) - f_{x,r} \right| \, d \mu(y) \leq \kappa \, r \, \left(\,\Xint-_{B^{3}_{\lambda r}(x)} \rho(y)^{2} \, d \mu(y) \right)^{\frac{1}{2}},\end{equation} for some $\kappa \geq 1$ and $\lambda \geq 1$, and where $x \in M$, $r >0$, $f$ is a Lipschitz \footnote{Notice that $(i)$ in Theorem \ref{epi} states that inequality (\ref{eqp2'}) should hold for all measurable functions $f$ and not only Lipschitz functions.
However, from the proof of Theorem \ref{epi}, we know that the theorem still holds if we restrict $(i)$ to Lipschitz functions only.} function on $M$, $\rho$ is an upper gradient of $f$ in $M$, and $f_{x,r} := \Xint-_{B^{3}_{r}(x)} f \, d \mu$.\\ Let $f$ be a Lipschitz function on $M$, and $\rho$ an upper gradient of $f$ on $M$. Fix $x \in M$ and $r>0$. Let $\tilde{x} \in \bar{\Omega}$ be such that $(\tilde{x},0) = x$, and define the functions $\tilde{f} : \Omega \to \mathbb{R}$ and $\tilde{\rho} : \Omega \to [0, \infty]$ such that $\tilde{f}(a,b) = f(a,b,0)$ and $\tilde{\rho}(a,b) = \rho(a,b,0)$. It is easy to see that $\tilde{f}$ is a Lipschitz function on $\Omega$, and $\tilde{\rho}$ is an upper gradient of $\tilde{f}$ in $\Omega$. Thus, by the definition of $\mu$, the construction of $M$, the fact that $\mathcal{H}^{2}(\bar{\Omega} \setminus \Omega) = 0$, and using (\ref{Heq1}) (for $x = \tilde{x}$, $u = \tilde{f}$, and $\rho = \tilde{\rho}$), we get \begin{eqnarray*} \Xint-_{B^{3}_{r}(x)} \left| f(y) - f_{x,r} \right| \, d \mu(y) &=& \Xint-_{B^{2}_{r}(\tilde{x}) \cap \Omega} \left| \tilde{f}(y) - \tilde{f}_{\tilde{x},r} \right| \, d \mathcal{H}^{2}(y) \\ &\leq& \kappa \, r \, \left(\,\Xint-_{B^{2}_{\lambda r}(\tilde{x})\cap \Omega} \tilde{\rho}(y)^{2} \, d \mathcal{H}^{2}(y) \right)^{\frac{1}{2}} \\ &=& \kappa \, r \, \left(\,\Xint-_{B^{3}_{\lambda r}(x)} \rho(y)^{2} \, d \mu(y) \right)^{\frac{1}{2}}, \end{eqnarray*} which is exactly (\ref{Heq6agag}), hence finishing the proof of this theorem. \end{proof} \begin{remark} Notice that one could take away more than one square $Q$ from the ball $B^{2}_{1}(0)$ and still get the same result as in this section.
The important thing about the construction above is that $\Omega$ is a Lipschitz domain; thus, if we want to construct a set with $m$ holes that satisfies the hypotheses of Theorem \ref{MTT'}, all we need to do is make sure that the squares we take away from the ball $B^{2}_{1}(0)$ are far away from each other (that is, they do not accumulate). That way, $B^{2}_{1}(0) \setminus \displaystyle \bigcup_{i=1}^{m} Q_{i}$ remains a Lipschitz domain and the rest of the argument follows directly. \end{remark} As mentioned in the introduction, the example constructed in Theorem \ref{construct} proves that the conclusion of Theorem \ref{MTT'} is optimal. \begin{ack} The author would like to thank T. Toro for her supervision, direction, and numerous insights into the subject of this project. \end{ack}
\section{Introduction} Extracting general conclusions from the Einstein field equations has always been a difficult task, especially if matter is included. A particularly simple but useful matter model is electrically counterpoised dust (ECD). It corresponds to a charged perfect fluid without pressure. Although it might seem that including charge, and therefore having to deal with the Einstein-Maxwell system of equations, would make everything more complicated, it can be seen that if the matter and charge density are perfectly balanced, then any static distribution of matter is a solution of the field equations. This was first noticed by Majumdar \cite{Majumdar1947} and Papapetrou \cite{Papapetrou1947} for a system consisting of discrete particles. If one considers only electrovacuum, then to each particle there corresponds an event horizon that is interpreted as an extremal Reissner-Nordstr\"om (ERN) black hole \cite{Hartle1972}. One way of completing the Majumdar-Papapetrou spacetimes is by matching the exterior solution to static interior solutions made of ECD, as shown in \cite{Das1962}, and more recently analyzed in detail in \cite{Varela2003}. Also, the Majumdar-Papapetrou formalism can be extended to higher dimensions, keeping its distinguishing features \cite{Lemos2005}. This extremely particular property of ECD, that any static charge distribution gives rise to a solution of the Einstein-Maxwell field equations, has been exploited to construct spacetimes with particular characteristics. In this way, features that proved difficult to address in full generality became amenable to analytic treatment in particular solutions. Examples of this include the study of the relation between charge and mass in the Reissner-Nordstr\"om spacetimes and the construction of a point charge model \cite{Bonnor1960}.
Also, in \cite{Bonnor1972}, ECD is used to construct static objects with unbounded density, and in \cite{Bonnor1975} it is shown that redshifts as high as desired can be obtained from regular objects. Even more elaborate analytic models can be constructed, in particular ECD spheroids, that allow one to discuss the hoop conjecture \cite{Bonnor1998}. A special characteristic of the constructed solutions is that they can be made to be close to the ERN black hole. This has been studied in connection with the bifurcation of the solutions \cite{Horvat2005}, and in \cite{Meinel2011} it has been shown that such a black hole limit is a general feature of ECD solutions. Although extremely interesting for analyzing such difficult general questions, one underlying assumption when generalizing particular results of explicit solutions is that said solutions are stable. If the solution is stable one expects that physically realistic situations that are close to the solution could appear in nature, although the exact solution would not, as it would always be subjected to some perturbations. On the other hand, if the solution is unstable, then there is no hope of finding it in nature, and the general conclusions that have been extracted lose ground. The discussion of ECD regarding stability is, as in general with stability concerns, not an easy one. Leaving aside that dust would always be an approximation where thermal energy has been discarded, the fact that the mass and charge density need to be equal implies a particular fine tuning difficult to justify from more fundamental matter models. No known particle satisfies such a relationship; in fact, it is generally grossly violated. For example, if one wants to make such a matter system starting with neutral hydrogen, one needs to ionize only one in $10^{18}$ atoms. Such is the disparity between electric and gravitational force for fundamental particles.
As ECD has the same mass and charge density, it is considered to be extremal in its mass-charge relationship, and corresponds microscopically to the ERN black hole. This is also a reflection of the fact that from the perspective of general relativity all fundamental particles are over-extremal. Another point that calls into question the stability of static ECD solutions is their Newtonian counterparts. If we single out a portion of the fluid, then the total force on such a portion vanishes. If it starts to move, then the restitutive force must come from gravitational and electromagnetic interactions, as there is no pressure, and therefore it has to be of second order in the perturbation. This implies that the acceleration would be of second order in the perturbation. The argument in itself does not mean that the system is unstable, but shows that the perturbations have a greater chance of growing. Closely related to the question of stability of ECD spacetimes is the stability of charged fluid spheres with pressure. This problem was tackled in \cite{Anninos2001}, where the Tolman-Oppenheimer-Volkov equations were integrated numerically for several equations of state and the stability was examined by both a normal mode and an energy analysis. It was found that in general there is a stability limit, beyond which the spheres are unstable and therefore undergo gravitational collapse, in all cases before they reach the ERN limit $Q=M$. In this article we take a first step towards determining the stability of ECD spacetimes. We consider static spherically symmetric solutions of the Einstein-Maxwell system of equations with ECD as the matter model, and analyze the behaviour of linear spherically symmetric perturbations. The article is organized as follows. In Section \ref{secSpacetime} we present the equations that the ECD spacetime needs to satisfy. Then the perturbations are analyzed in Section \ref{secPert}.
In Section \ref{secExample} an explicit example is constructed, which shows that if the original spacetime is close to a black hole, then the perturbation can make the spacetime collapse into a black hole. The conclusions are discussed in Section \ref{secConc}. \section{Spherically symmetric ECD spacetime}\label{secSpacetime} The contents of this section are well known and the reader can refer for example to \cite{Bonnor1960} and \cite{Anninos2001}. The Einstein-Maxwell system of equations is \begin{equation} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi(T^{d}_{\mu\nu}+T^{e}_{\mu\nu}), \end{equation} \begin{equation} \nabla_{\alpha}F^{\mu\alpha}=4\pi j^{\mu}, \end{equation} where the electromagnetic tensor is given in terms of the electromagnetic potential \begin{equation} F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}, \end{equation} and the associated energy-momentum tensor is \begin{equation} T^e_{\mu\nu} = \frac{1}{4\pi} \left(F_{\alpha\mu}F^\alpha\,_\nu-\frac{1}{4}F_{\alpha\beta}F^{\alpha\beta}g_{\mu\nu}\right). \end{equation} The matter model is dust, the corresponding energy-momentum tensor being \begin{equation} T^{d}_{\mu\nu}=\rho\, u_{\mu}u_{\nu}, \end{equation} where $u^\mu$ is the four-velocity of the fluid and $\rho$ is the mass density. The current density is given in terms of the charge density, $\sigma$, by \begin{equation} j^{\mu}=\sigma u^{\mu}. \end{equation} Let us consider a static spherically symmetric spacetime, whose metric can always be written in Schwarzschild coordinates as \begin{equation}\label{eqMetric} ds^2 = -e^{2\Phi(r)}dt^2+e^{2\Lambda(r)}dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2). \end{equation} The four-velocity of the fluid is \begin{equation} u^{\mu} = e^{-\Phi(r)}\,\partial_t, \end{equation} and the electromagnetic potential can be expressed in terms of a scalar potential $\nu(r)$, \begin{equation} A = \nu(r)\, dt.
\end{equation} Denoting by prime the derivative with respect to $r$, the field equations are \begin{equation}\label{eqLambda} e^\Lambda = 1 + r\Phi', \end{equation} \begin{equation}\label{eqRho} \rho = \frac{\Phi'' + \Phi'^2 + 2\Phi'/r}{4\pi(1+r\Phi')^3}, \end{equation} \begin{equation} \nu = e^\Phi, \end{equation} and \begin{equation}\label{eqSigma} \sigma = k\rho, \end{equation} with $k=\pm 1$ (in the following we take $k=1$, as it simply amounts to the convention of which charge is considered positive). The usual procedure to obtain a solution that represents a compact object is to consider a ball of coordinate radius $r=R$, in whose interior the mass (and correspondingly charge) density is different from zero, and outside there is electrovacuum. Then, the exterior solution is ERN, \begin{equation} \Phi_E(r) = - \Lambda_E(r) = \ln \left(1-\frac{M}{r}\right), \end{equation} where $M$ is the ADM mass of the spacetime and the subindex $E$ means that the quantities refer to the exterior region ($r>R$). For the interior region ($r<R$) we leave the functions without any subindex. To obtain an interior solution we can follow the typical procedure of making a simple ansatz for the metric function $\Phi$. Then we glue the interior and exterior functions using the junction conditions, which are \begin{equation} \Phi(R) = \Phi_E(R),\quad \Phi'(R) = \Phi_E'(R),\quad \Lambda(R) = \Lambda_E(R),\quad \Lambda'(R) = \Lambda_E'(R). \end{equation} Also, to ensure regularity at $r=0$, \begin{equation} \Lambda(0) = 0,\quad \Lambda'(0)=0,\quad \Phi'(0) = 0. \end{equation} Since an ansatz for $\Phi$ is made, all the previous conditions are satisfied if we enforce that \begin{equation} \Phi(0)=0,\quad \Phi(R) = \Phi_E(R),\quad \Phi'(R) = \Phi_E'(R),\quad \Phi''(R) = \Phi_E''(R). \end{equation} The function $\Lambda$ and the mass density $\rho$ are then calculated using \eqref{eqLambda} and \eqref{eqRho}.
For the discussion of Section \ref{secExample} it is convenient to use the charge inside a sphere of radius $r$ as the function to make the ansatz. The charge can be integrated explicitly using the equations \eqref{eqSigma}, \eqref{eqRho} and \eqref{eqLambda}, as \begin{equation} Q = \int_V \sigma\,dV = 4\pi\int_0^r r^2\,e^\Lambda\,\sigma\,dr = r(1-e^{-\Lambda}). \end{equation} Now we can write the other functions in terms of $Q$, \begin{equation} \Lambda = -\ln\left(1-\frac{Q}{r}\right), \end{equation} \begin{equation}\label{eqPhi} \Phi' = \frac{Q}{r(r-Q)}, \end{equation} \begin{equation} \rho = \frac{Q'(r-Q)}{4\pi r^3}. \end{equation} The regularity and junction conditions then become \begin{equation} Q(0)=0,\quad Q'(0)=0,\quad Q''(0)=0,\quad Q(R)=M,\quad Q'(R)=0, \end{equation} and the integration constant in \eqref{eqPhi} is fixed by $\Phi(R) = \Phi_E(R)$. From the previous equations we see that if $Q=r$ for any $0<r\leq R$ then the solution fails to be regular. This is expected because if $Q=r$ then there is an event horizon and the solution is no longer a regular ECD spacetime but the ERN black hole. We will use the difference between $r$ and $Q$ in Section \ref{secExample} as a measure of how far the solution is from forming a black hole. \section{Spherically symmetric linear perturbations}\label{secPert} Given the symmetries of the solution, we study the simplest possible perturbations, the linear spherical ones. We follow \cite{Anninos2001} and here we omit the lengthy steps to obtain the first order equations, as the procedure is completely analogous. As in Section \ref{secSpacetime}, the metric can again be written in the form \eqref{eqMetric}, but now the functions $\Phi$ and $\Lambda$ depend also on $t$. 
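Before developing the perturbation equations, the reparametrization of Section \ref{secSpacetime} in terms of the charge function can be cross-checked symbolically. The following sketch (an illustration using sympy, not part of the original derivation) verifies that $\Lambda = -\ln(1-Q/r)$ and $\Phi' = Q/(r(r-Q))$ satisfy (\ref{eqLambda}), that substituting $\Phi'$ into (\ref{eqRho}) reproduces $\rho = Q'(r-Q)/(4\pi r^{3})$, and that a constant $Q=M$ gives $\rho = 0$, i.e. the ERN exterior.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
Q = sp.Function('Q')(r)

# Q-form of the metric data: e^Lambda = (1 - Q/r)^(-1), Phi' = Q/(r(r-Q))
exp_Lam = 1 / (1 - Q / r)
Phi_p = Q / (r * (r - Q))

# field equation e^Lambda = 1 + r Phi'
assert sp.simplify(exp_Lam - (1 + r * Phi_p)) == 0

# density: rho = (Phi'' + Phi'^2 + 2 Phi'/r) / (4 pi (1 + r Phi')^3)
rho = (sp.diff(Phi_p, r) + Phi_p**2 + 2 * Phi_p / r) \
      / (4 * sp.pi * (1 + r * Phi_p)**3)
rho_Q = sp.diff(Q, r) * (r - Q) / (4 * sp.pi * r**3)
assert sp.simplify(rho - rho_Q) == 0

# constant Q = M: no matter anywhere, the ERN exterior solution
M = sp.symbols('M', positive=True)
assert sp.simplify(rho_Q.subs(Q, M).doit()) == 0

print("Q-form identities verified")
```

All three checks reduce to rational-function identities in $r$, $Q$ and $Q'$, so the simplifications are exact.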
As we consider perturbations, these functions are written as \begin{equation} \label{20} \Phi(t,r)=\Phi_{0}(r)+\Phi_{1}(t,r),\quad\Lambda(t,r)=\Lambda_{0}(r)+\Lambda_{1}(t,r), \end{equation} where the subindex $0$ is used to denote the unperturbed background quantities and the subindex $1$ for the perturbation, which is assumed to be small with respect to the unperturbed quantity, and with the corresponding expressions for the other functions. The fundamental quantity in the perturbation scheme is the displacement of a fluid element. A fluid element located at coordinate radius $r$ in the unperturbed configuration is displaced to coordinate radius $r + \xi(r,t)$ at coordinate time $t$ in the perturbed configuration. As it turns out, the first order perturbations of all the quantities are functions of the unperturbed quantities and of $\xi$. We have for these quantities \begin{equation} \Lambda_1=-4\pi r e^{2\Lambda_0}\rho_0 \xi, \end{equation} \begin{equation} \Phi_1' = -4\pi e^{3\Lambda_0}\rho_0 \xi, \end{equation} \begin{equation} \rho_1 = -\rho_0 \xi' - \left[\rho_0'+\left(\frac{2}{r}-\Phi_0'\right)\rho_0\right]\xi, \end{equation} \begin{equation} Q_1 = -Q_0'\xi. \end{equation} The equation for the fluid displacement turns out to be simply \begin{equation}\label{eqXi} \frac{\partial^2\xi}{\partial t^2} = 0. \end{equation} Therefore we can freely specify two functions of $r$, $\xi_0(r)$ and $\xi_1(r)$, and the solution to \eqref{eqXi} is \begin{equation} \xi(r,t) = \xi_0(r) + \xi_1(r)\,t. \end{equation} The function $\xi_0(r)$ corresponds to the initial displacement of the fluid, while $\xi_1(r)$ encodes the velocity of the fluid element. The boundary conditions are \begin{equation} \lim_{r\rightarrow 0}\frac{\xi}{r} = constant,\quad \lim_{r\rightarrow R}\xi=0.
\end{equation} The first condition is imposed in order to have a regular behavior of the mass density at the origin, while the second condition ensures that a surface mass density is not generated at the boundary of the object. For the discussion in the following section it is convenient to work with dimensionless quantities. We define \begin{equation} x:=\frac{r}{R},\quad \mu:=\frac{M}{R},\quad q:=\frac{Q}{R},\quad \zeta:=\frac{\xi}{R}, \quad \tilde{t} := \frac{t}{R}. \end{equation} Then the interior corresponds to $0<x<1$ and we write the solution in terms of $q_0$. Denoting by a dot the derivative with respect to $x$ we have \begin{equation} \Lambda_0 = -\ln\left(1-\frac{q_0}{x}\right), \end{equation} \begin{equation} \dot{\Phi}_0 = \frac{q_0}{x(x-q_0)}, \end{equation} and for the perturbation \begin{equation} \Lambda_1 = -\frac{\dot{q}_0}{x-q_0}\zeta, \end{equation} \begin{equation} \dot{\Phi}_1 = -\frac{\dot{q}_0}{(x-q_0)^2}\zeta, \end{equation} \begin{equation} q_1 = -\dot{q}_0 \zeta, \end{equation} with \begin{equation}\label{eqZeta} \zeta(x,\tilde{t}) = \zeta_0(x) + \zeta_1(x)\tilde{t}, \end{equation} and where $\zeta_0$ and $\zeta_1$ are arbitrary functions of $x$ satisfying \begin{equation} \lim_{x\rightarrow 0}\frac{\zeta}{x} = constant,\quad \lim_{x\rightarrow 1}\zeta=0. \end{equation} Also, the function $q_0$ needs to satisfy \begin{equation}\label{condQ} q_0(0) = 0, \quad \dot{q}_0(0) = 0, \quad \ddot{q}_0(0) = 0, \quad q_0(1) = \mu,\quad \dot{q}_0(1) = 0. \end{equation} \section{An explicit example}\label{secExample} In this section we present as an explicit example a family of ECD spacetimes, and consider the linear perturbations on them. The purpose of this section is to show that if the original spacetime is close to forming a black hole, then by a small perturbation a black hole can be formed. We consider the parameter $\mu$, which corresponds to the total mass and charge of the spacetime, to parametrize the family of solutions.
We choose the function $q_0$ simply as a polynomial in $x$ that satisfies \eqref{condQ}, \begin{equation} q_0 = \frac{\mu}{2}x^3(5-3x^2). \end{equation} Then the functions $\Lambda_0$ and $\Phi_0$ can be explicitly calculated. For the discussion it is convenient to use the metric functions \begin{equation} L_0 := e^{\Lambda_0},\quad F_0 := e^{\Phi_0}, \end{equation} and then \begin{equation} L_0 = \frac{2}{2-\mu x^2(5-3x^2)}, \end{equation} \begin{eqnarray} F_0 & = & \frac{(1-\mu)^\frac{5}{4}}{\left[1-\frac{1}{2}\mu x^2(5-3x^2)\right]^\frac{1}{4}} \nonumber\\ & & \times \exp\left[-\frac{5\mu}{2\sqrt{24-25\mu}}\arctan\left(\frac{\sqrt{24-25\mu}(1-x^2)}{4-\mu(5-x^2)}\right)\right]. \end{eqnarray} We see that if \begin{equation} \mu \geq \frac{24}{25}, \end{equation} then $L_0$ diverges at \begin{equation} x = \sqrt{\frac{5}{6}}\sqrt{1\pm\sqrt{1-\frac{24}{25\mu}}}. \end{equation} If $\mu<\frac{24}{25}$ the solution is regular for all values of $x$. In Figure \ref{fig1} we have plotted the function $q_0$ and the charge density for the spacetime with $\mu=\frac{4}{5}$. Also, the functions $L_0$ and $F_0$ are shown in Figure \ref{fig2}. \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{plotQ.eps} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{plotS.eps} \end{subfigure} \caption{Charge and charge density for $\mu=\frac{4}{5}$.} \label{fig1} \end{figure} In order to see how close the spacetime is to forming a black hole we consider the function $q_0/x$. The maximum is attained at \begin{equation} x_M = \sqrt{\frac{5}{6}}, \end{equation} where its value is \begin{equation} \frac{q_0(x_M)}{x_M} = \frac{25}{24}\mu. \end{equation} If $q/x=1$ we have a black hole and therefore we define the parameter \begin{equation} \delta = 1 - \frac{25}{24}\mu \end{equation} as a measure of how close the regular spacetime is to a black hole spacetime.
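The quoted properties of this ansatz, as well as the horizon-formation threshold used in the perturbation analysis below, can be verified symbolically. The following sketch (an illustration using sympy, not part of the original text) checks the conditions (\ref{condQ}), the expression for $L_{0}$, the location and value of the maximum of $q_{0}/x$, and that the displacement $\zeta = -\sqrt{5/6}\,\delta/(1-\delta)$ drives $q(x_{M})/x_{M}$ to the horizon value $1$.

```python
import sympy as sp

x, mu, delta = sp.symbols('x mu delta', positive=True)
zeta = sp.symbols('zeta')
q0 = mu / 2 * x**3 * (5 - 3 * x**2)

# regularity and junction conditions (condQ)
conds = [q0.subs(x, 0),
         sp.diff(q0, x).subs(x, 0),
         sp.diff(q0, x, 2).subs(x, 0),
         q0.subs(x, 1) - mu,
         sp.diff(q0, x).subs(x, 1)]
assert all(sp.simplify(c) == 0 for c in conds)

# L0 = e^{Lambda_0} = (1 - q0/x)^(-1)
L0 = 2 / (2 - mu * x**2 * (5 - 3 * x**2))
assert sp.simplify(1 / (1 - q0 / x) - L0) == 0

# maximum of q0/x at x_M = sqrt(5/6), where q0/x = 25 mu / 24
xM = sp.sqrt(sp.Rational(5, 6))
assert sp.simplify(sp.diff(q0 / x, x).subs(x, xM)) == 0
assert sp.simplify((q0 / x).subs(x, xM) - sp.Rational(25, 24) * mu) == 0

# horizon threshold for the perturbed charge q = q0 - q0' zeta,
# substituting mu = 24(1 - delta)/25 and zeta = -x_M delta/(1 - delta)
q = q0 - sp.diff(q0, x) * zeta
val = (q / x).subs({x: xM,
                    zeta: -xM * delta / (1 - delta),
                    mu: sp.Rational(24, 25) * (1 - delta)})
assert sp.simplify(val - 1) == 0

print("ansatz and horizon threshold verified")
```

All checks are polynomial or rational identities in $\mu$ and $\delta$, so they hold for every member of the family, not just for the plotted case $\mu = 4/5$.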
Now we consider the linear perturbations on these spacetimes. The function $q$ to first order is \begin{equation} q = q_0 + q_1 = q_0 - \dot{q}_0 \zeta, \end{equation} and \begin{equation} \dot{q}_0 = \frac{15}{2}\mu x^2(1-x^2). \end{equation} Therefore we have that \begin{equation} \zeta(x_M,\tilde{t}) = -\sqrt{\frac{5}{6}}\frac{\delta}{1-\delta} \Rightarrow \frac{q(x_M)}{x_M} = 1. \end{equation} Here the exact value of $\zeta$ is of no real importance, what matters is its order of magnitude. Also, from \eqref{eqZeta} we see that if we choose $\zeta_1(x)=0$, just by choosing $\zeta_0(x)$ with negative sign and of order $\delta$ we can make a black hole. On the other hand, a more interesting choice, as it departs from staticity, is $\zeta_0(x)=0$ and $\zeta_1(x)< 0$. This amounts to perturbing the solution by giving the fluid elements an inward velocity. Then in a time proportional to $\delta$ a black hole is formed. \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{plotL.eps} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{plotF.eps} \end{subfigure} \caption{Functions $L_0$ and $F_0$ for $\mu=\frac{4}{5}$.} \label{fig2} \end{figure} \section{Conclusions}\label{secConc} We have considered linear spherically symmetric perturbations on static spherically symmetric ECD spacetimes. We have obtained the equation for the evolution of the displacement of the fluid elements under the perturbation. This equation is interpreted as saying that there is no net force acting on the fluid elements, and therefore the displacement grows linearly in time. There is no restitutive force acting on the elements, but also the perturbation does not grow exponentially. This corresponds to the so-called indifferent equilibrium of the solution.
Although it has been argued that the solution is in such an equilibrium, and therefore that it is stable, we consider that our treatment sheds light on the specifics of the ECD spacetimes and their equilibria. In particular, we constructed a family of solutions where there is a clear idea of how far the solution is from forming a black hole. We showed that, in a time proportional to the distance to being a black hole, the perturbation leads to a concentration of charge sufficient to form a black hole. Although the static solution is indeed indifferently stable, in general relativity this does not mean what one would expect from a stable spacetime. If the ECD solution is perturbed by giving a velocity distribution to the fluid elements, then the perturbation grows linearly and never settles into an ECD solution; moreover, it never returns to the original configuration. Also, in a more extreme scenario, the perturbation could lead to the formation of a black hole, changing the solution radically. In this sense we do not consider the ECD solutions to be stable, and we think that caution should be exercised when extracting conclusions for astronomical objects from such spacetimes, as has been done for example with regard to possible redshifts. \section*{Acknowledgements} Several calculations were performed and the figures produced in SageMath \cite{SageMath} with the use of the package SageManifolds \cite{SageManifolds}. This work was supported by grants PIP 112-201301-00532 of CONICET, Argentina, and M076 and M060 of SIIP, Universidad Nacional de Cuyo, Argentina.
\section{Introduction} There is currently a great deal of interest in applying the methods of computational algebraic geometry to string phenomenology and closely related sub-fields of theoretical physics. For some examples of recent work see \cite{VacspaceBlock,Benvenuti:2006qr,Feng:2007ur,Forcella:2008bb,GIOBlock1,Ferrari:2008rz,Distler:2005hi,StringvacuaBlock,Font:2008vd,ModelBuildingBlock,Kaura:2006mv,Raby:2007yc,NumericMetricBlock,Candelas:2008wb} and references therein. These papers utilise advances in algorithmic techniques in commutative algebra to study a wide range of subjects including various aspects of globally supersymmetric gauge theory \cite{VacspaceBlock,Benvenuti:2006qr,Feng:2007ur,Forcella:2008bb,GIOBlock1,Ferrari:2008rz}, finding flux vacua in string phenomenology \cite{Distler:2005hi,StringvacuaBlock,Font:2008vd}, studying heterotic model building on smooth Calabi-Yau manifolds in non-standard embeddings \cite{ModelBuildingBlock} and more besides \cite{Kaura:2006mv,Raby:2007yc,NumericMetricBlock,Candelas:2008wb}. Despite the wide range of physical problems which have been addressed within this context, the computational tools which are being used are all based, finally, on the same algorithm. The Buchberger algorithm \cite{Buchberger} is at once what lends these methods their power and also the rate-limiting step, placing bounds on the size of problem that can be dealt with. The recent burst of activity in this field has been fueled, in part, by the advent of freely available, efficient implementations of this algorithm \cite{m2,sing}. There are also interfaces available between the commutative algebra program \cite{sing} and Mathematica \cite{singm,StringvacuaBlock}, with \cite{StringvacuaBlock} being particularly geared towards physicists' needs. The aim of this talk is to give an elementary introduction to the Buchberger algorithm and some of its recent applications.
In order to give an idea of how one simple algorithm can make so much possible, I will present the Buchberger algorithm and then show how it may be applied to physics in three simple examples. Firstly, I will describe how it can be used to obtain constraints on the flux parameters in four dimensional descriptions of string phenomenological models which are necessary and sufficient for the existence of certain types of vacuum \cite{StringvacuaBlock}. Secondly, I will describe how the Buchberger algorithm can be used to simplify the equations describing the vacua of such systems, making problems of finding minima much more tractable \cite{StringvacuaBlock}. Finally, I shall describe how the same simple algorithm can be used to calculate the supersymmetric vacuum space geometry of the electroweak sector of the MSSM \cite{VacspaceBlock}. The remainder of this talk is structured as follows. In the next section, I take a few pages to explain the algorithm and the few mathematical concepts that we will require. In the three sections following that, I then describe the three examples mentioned above. I shall conclude by making a few final comments about the versatility and scaling of the Buchberger algorithm. \section{A tiny bit of commutative algebra} \label{math} Two pages of simple mathematics will suffice to achieve all of the physical goals mentioned in the introduction. First of all we define the notion of a polynomial ring. In this paper we will call the fields of the physical systems we study $\phi^i$ and any parameters present, such as flux parameters, $a^{\alpha}$. The polynomial rings $\mathbb{C}\left[ \phi^i ,a^{\alpha} \right]$ and $\mathbb{C}\left[a^{\alpha} \right]$ are then simply the infinite set of all polynomials in the fields and parameters, and the infinite set of all polynomials in the parameters, respectively. Another mathematical concept we will require is that of a monomial ordering.
This is simply an unambiguous way of stating whether any given monomial is formally bigger than any other given monomial. We may denote this in a particular case by saying $m_1 > m_2$ where $m_1, m_2 \in \mathbb{C} \left[ \phi^i, a^{\alpha} \right]$ are monomials in the fields and parameters. It is important to say what is {\it not} meant by this. We are not saying that we are taking values of the variables such that the monomial $m_1$ is numerically larger than the monomial $m_2$. Rather we are saying that, in our formal ordering, $m_1$ is considered to come before $m_2$. For our purposes we will require a special type of monomial ordering called an elimination ordering. This means that our formal ordering of monomials has the following property. \begin{eqnarray} P \in \mathbb{C} \left[ \phi^i, a^{\alpha} \right] , \textnormal{LM}(P) \in \mathbb{C}\left[a^{\alpha}\right] \Rightarrow P \in \mathbb{C}\left[ a^{\alpha} \right] \end{eqnarray} In words this just says that if the largest monomial in $P$ according to our ordering, $\textnormal{LM}(P)$, does not depend on $\phi^i$ then $P$ does not depend on the fields at all. The monomial ordering classes all monomials with fields in them as being bigger than all of those without such constituents. Given this notion of monomial orderings we can now present the one algorithm we will need to use - the Buchberger algorithm \cite{Buchberger}. The Buchberger algorithm takes as its input a set of polynomials. These may be thought of as a system of polynomial equations by the simple expedient of setting all of the polynomials to zero. The algorithm returns a new set of polynomials which, when thought of as a system of equations in the same way, has the same solution set as the input. The output system, however, has several additional useful properties as we will see. \subsection*{The Buchberger Algorithm} \begin{enumerate} \item Start with a set of polynomials, call this set ${\cal G}$. 
\item Choose a monomial ordering with the elimination property described above. \item For any pair of polynomials $P_i, P_j \in {\cal G}$ multiply by monomials and form a difference so as to cancel the leading monomials with respect to the monomial ordering: \begin{eqnarray} S = p_1 P_i - p_2 P_j \;\; \textnormal{s.t.} \;\; p_1 \textnormal{LM}(P_i), p_2 \textnormal{LM}(P_j) \;\; \textnormal{cancel} \;.\end{eqnarray} \item Perform polynomial long division of $S$ with respect to ${\cal G}$. That is, form $\tilde{h} = S - m_3 P_k$ where $m_3$ is a monomial and $P_k \in {\cal G}$ such that $m_3 \textnormal{LM}(P_k)$ cancels a monomial in $S$. Repeat until no further reduction is possible. Call the result $h$. \item If $h=0$ consider the next pair. If $h \neq 0$ add $h$ to ${\cal G}$ and return to step 3. \end{enumerate} The algorithm terminates when all S-polynomials which may be formed reduce to $0$. The final set of polynomials is called a Gr\"obner basis. As mentioned above, the resulting set of polynomials has several nice properties. The feature which is often taken as defining is that polynomial long division with respect to this new set of polynomials always gives the same answer - it does not matter in which order we divide out by the polynomials. For us, however, the important point about our Gr\"obner basis ${\cal G}$ is that it has what is called the elimination property. ${\cal G} \cap \mathbb{C} \left[ a^{\alpha} \right]$, the set of all polynomials in ${\cal G}$ which depend only upon the parameters, gives a complete set of equations on the $a^{\alpha}$ which are necessary and sufficient for the existence of a solution to the set of equations we started with. The reason why this is so is actually very straightforward. Our elimination ordering says that any monomial with a field in it is greater than any monomial only made up of parameters. 
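Before unpacking why this works, it may help to see the elimination property in action on a toy system. The following sketch is not part of the Stringvacua package; it assumes SymPy's \texttt{groebner} routine (which implements a Buchberger-type algorithm), with $\phi$ playing the role of a field and $a_1, a_2$ the role of parameters, ranked so that $\phi$ is largest.

```python
# A toy run of Buchberger elimination: phi plays the role of a field,
# a1 and a2 the role of flux parameters.  A lexicographic ordering with
# phi ranked first is an elimination ordering in the sense above.
from sympy import symbols, groebner

phi, a1, a2 = symbols('phi a1 a2')

# Input system: phi - a1 = 0 and phi**2 - a2 = 0.
G = groebner([phi - a1, phi**2 - a2], phi, a1, a2, order='lex')

# The basis elements that do not involve phi are the necessary and
# sufficient constraints on the parameters for a solution to exist.
constraints = [g for g in G.exprs if phi not in g.free_symbols]
print(constraints)  # [a1**2 - a2]
```

Here the Gr\"obner basis is $\{\phi - a_1,\, a_1^2 - a_2\}$, and intersecting with $\mathbb{C}[a_1, a_2]$ recovers the expected constraint $a_1^2 = a_2$.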
Looking back at step 3 of the Buchberger algorithm we see that we are repeatedly canceling off the leading terms of our polynomials - those containing the fields - as much as we can. Thus if it is possible to rearrange our initial equations to get expressions which do not depend upon the fields $\phi^i$ then the Buchberger algorithm will do this for us. Clearly, while we have interpreted the $a^{\alpha}$ as parameters and the $\phi^i$ as fields in the above, as this is what we will require for the next section, this was not necessary. The Buchberger algorithm can be used to eliminate any unwanted set of variables from a problem, in the manner we have described. This completes all of the mathematics we will need for our entire discussion and we may now move on to apply what we have learnt. \section{Constraints} The first physical question we wish to answer is the following. Given a four dimensional ${\cal N}=1$ supergravity describing a flux compactification, what are the constraints on the flux parameters which are necessary and sufficient for the existence of a particular kind of vacuum? This question can be asked, and answered \cite{StringvacuaBlock}, for any kind of vacuum, but in the interests of concreteness and brevity let us restrict ourselves to the simple case of supersymmetric Minkowski vacua. Here is the superpotential of a typical system, taken from \cite{Shelton:2005cf}. It describes a non-geometric compactification of type IIB string theory. \begin{eqnarray} \label{sheltonW} W & = & a_0 - 3a_1 \tau + 3a_2 \tau^2 - a_3 \tau^3\\ \nonumber & & \hspace{0.2in} + S (-b_0 + 3b_1 \tau - 3b_2 \tau^2 + b_3 \tau^3) \nonumber\\ & & \hspace{0.2in} + 3 U (c_0 + (\hat{c}_1 +\check{ c}_1 + \tilde{c}_1) \tau - (\hat{c}_2 +\check{ c}_2 + \tilde{c}_2) \tau^2 -c_3 \tau^3), \nonumber \end{eqnarray} This system has some known constraints on its parameters which are necessary for the existence of a permissible vacuum. 
These come from, for example, tadpole cancellation conditions. \begin{eqnarray}\label{constSTW} a_0 b_3-3 a_1 b_2+3 a_2b_1-a_3b_0 = 16 \, \, \,\, && \\ \nonumber a_0 c_3+a_1 (\check{c}_2 + \hat{c}_2-\tilde{c}_2) -a_2 (\check{c}_1 + \hat{c}_1 -\tilde{c}_1)-a_3c_0 = 0 \,\,\,\, && \\ \nonumber \begin{array}{ccc} \begin{array}{rcl} c_0 b_2-\tilde{c}_1 b_1+\hat{c}_1 b_1-\check{c}_2 b_0 & = & 0\\ \check{c}_1 b_3-\hat{c}_2 b_2+\tilde{c}_2 b_2-c_3 b_1 & = & 0\\ c_0 b_3-\tilde{c}_1 b_2+ \hat{c}_1 b_2-\check{c}_2 b_1 & = & 0\\ \check{c}_1 b_2-\hat{c}_2 b_1+\tilde{c}_2 b_1-c_3 b_0 & = & 0\\ \end{array} & \begin{array}{rcl} c_0\tilde{c}_2-\check{c}_1 ^ 2+\tilde{c}_1\hat{c}_1-\hat{c}_2 c_0 & = & 0\\ c_3\tilde{c}_1-\check{c}_2 ^ 2 +\tilde{c}_2\hat{c}_2-\hat{c}_1 c_3 & = & 0\\ c_3 c_0-\check{c}_2\hat{c}_1 +\tilde{c}_2\check{c}_1-\hat{c}_1\tilde{c}_2 & = & 0\\ \hat{c}_2\tilde{c}_1-\tilde{c}_1\check{c}_2 +\check{c}_1\hat{c}_2-c_0c_3 & =& 0 \ . \\ \end{array} \end{array} \end{eqnarray} We also have the same constraints with the hats and checks switched around. In this example the fields, which we have been calling $\phi^i$, are $S,\tau$ and $U$ and everything else is a ``flux'' parameter, or an $a^{\alpha}$ in our notation. In total, the equations which must be satisfied if a supersymmetric Minkowski vacuum is to exist are $W=0$, $\partial_S W=0$, $\partial_{\tau} W=0$, $\partial_U W=0$ and the constraints on the flux parameters given above. To extract a set of constraints solely involving the parameters which are necessary and sufficient for the existence of a solution to these equations we simply follow the procedure outlined in the previous section. We can carry out this calculation trivially in Stringvacua \cite{StringvacuaBlock} and, in fact, this example is provided for the user in the help system. 
The result is \begin{eqnarray} 0 &=& \tilde{c}_1 = \tilde{c}_2 = \hat{c}_1 = \hat{c}_2 = \check{c}_1 = \check{c}_2 = c_0 = c_3 \\ \nonumber 0 &=& 16+a_3 b_0-3 a_2 b_1+3 a_1 b_2-a_0 b_3 \\ \nonumber 0 &=& 16 a_3^2 b_0^2-96 a_2 a_3 b_0 b_1-288 a_2^2 b_1^2+432 a_1 a_3 b_1^2+54 a_2^3 b_1^3-81 a_1 a_2 a_3 b_1^3+27 a_0 a_3^2 b_1^3 +432 a_1 a_3 b_0 b_2 \\ \nonumber && -27 a_2^2 a_3 b_0^2 b_2+48 a_1 a_3^2 b_0^2 b_2-288 a_0 a_3 b_1 b_2-18 a_1 a_2 a_3 b_0 b_1 b_2-45 a_0 a_3^2 b_0 b_1 b_2 -54 a_1 a_2^2 b_1^2 b_2\\ \nonumber && +81 a_1^2 a_3 b_1^2 b_2-27 a_0 a_2 a_3 b_1^2 b_2+54 a_0 a_2 a_3 b_0 b_2^2+27 a_0 a_1 a_3 b_1 \ b_2^2-27 a_0^2 a_3 b_2^3-288 a_1 a_2 b_0 b_3 \\ \nonumber && -32 a_0 a_3 b_0 b_3+27 a_2^3 b_0^2 \ b_3-45 a_1 a_2 a_3 b_0^2 b_3+432 a_0 a_2 b_1 b_3-27 a_1 a_2^2 b_0 b_1 b_3+54 a_1^2 a_3 b_0 b_1 b_3 \\ \nonumber && +48 a_0 a_2 a_3 b_0 b_1 b_3 +18 a_0 a_2^2 b_1^2 b_3-81 a_0 a_1 a_3 b_1^2 b_3-144 a_0 a_1 b_2 b_3+27 a_1^2 a_2 b_0 b_2 b_3-54 a_0 a_2^2 b_0 b_2 b_3 \\ \nonumber && -51 a_0 a_1 a_3 b_0 b_2 b_3 +27 a_0 a_1 a_2 b_1 b_2 b_3+45 a_0^2 a_3 b_1 b_2 b_3-27 a_0 a_1^2 b_2^2 b_3+27 a_0^2 a_2 b_2^2 b_3+16 a_0^2 b_3^2 \\ \nonumber && -27 a_1^3 b_0 b_3^2+45 a_0 a_1 a_2 b_0 b_3^2 +27 a_0 a_1^2 b_1 b_3^2-48 a_0^2 a_2 b_1 b_3^2+3 a_0^2 a_1 b_2 b_3^2 \end{eqnarray} The reader will note that the result is a somewhat lengthy set of equations. In principle one has to find quantized solutions to these expressions, an obviously intractable Diophantine problem, and therefore it might be asked why this result is of any use. In fact, knowledge of such constraints on the flux parameters is hugely useful for a number of reasons. \begin{itemize} \item Firstly, we note that, while the full result of this process is often complex, some of the constraints can give us simple information about the system. In the current case, for example it can be seen that $\tilde{c}_2=0$ is required for the existence of a supersymmetric Minkowski vacuum. 
\item Secondly, if one is scanning over a range of flux parameters and trying to numerically solve the equations to find vacua, one can speed up one's analysis by first substituting any given set of flux parameters into the constraints we have obtained. If the constraints are not satisfied then vacua do not exist and there is no point in searching numerically for a solution. This turns what would be a time-consuming numerical process giving inconclusive results (no solution was found) into a quick analytic conclusion (no solution exists). \item Lastly, knowledge of such constraints can greatly speed up algebraic approaches to finding vacua such as those outlined in \cite{StringvacuaBlock}. \end{itemize} \section{Simplifying equations for vacua} Another use for the mathematics we learnt in section \ref{math} is the set of so-called ``splitting tools'' used in work such as \cite{StringvacuaBlock}. The physical idea here is simple. Consider trying to solve the equations $\partial V / \partial \phi^i =0$ to find the vacua, including those which spontaneously break supersymmetry, of some supergravity theory. These equations are often extremely complicated. One way of viewing why this is so is that the equations for the turning points of the potential contain a {\it lot} of information. They describe not only the isolated minima of the potential which are of interest, but also lines of maxima, saddle points of various sorts and so forth. A useful tool to have, therefore, would be an algorithm that takes such a system as an input and returns a whole series of separate sets of equations, each individually describing fewer turning points. Since each separate equation system would then contain less information, one might expect them to be easier to solve. It would be beneficial to choose a division of these equations which has physical interest. 
The choice we will make here, and which programs like Stringvacua implement \cite{StringvacuaBlock}, is to split up the equations for the turning points according to how they break supersymmetry - that is according to which F-terms vanish when evaluated on those loci. The ability that packages such as Stringvacua have to split up equations in this manner is based upon the following splitting tool (see \cite{Stillman} for a nice set of more detailed notes on this kind of mathematical technique). Say that one of the F-terms of our theory is called $F$. Then we can split the equations describing turning points of the potential into two pieces. \begin{eqnarray} \label{splitone} \partial V / \partial \phi^i =0 \;,\; F = 0 \\ \label{neqpart} \partial V/ \partial \phi^i = 0 \;,\; F \neq 0 \end{eqnarray} The first of these expressions is a set of equations which is easier to solve, in general, than $\partial V / \partial \phi^i =0$ alone. We can use the F-term to simplify the equations for the turning points of the potential. On the other hand, expression \eqref{neqpart} is not even a set of equations - it contains an inequality. We can convert \eqref{neqpart} into a system purely involving equalities by making use of the mathematics we learned in section \ref{math}. Consider the following set of equations, including a dummy variable $t$. \begin{eqnarray} \label{splitconsider} \partial V/\partial \phi^i =0 \; ,\; F t-1 =0 \end{eqnarray} The second equation in \eqref{splitconsider} has a solution if and only if $F \neq 0$, simply $t = 1/F$. If $F=0$ the equation reduces to $-1=0$ which clearly has no solutions. The equations \eqref{splitconsider}, then, have a solution whenever the set of equalities and inequalities \eqref{neqpart} do. Unfortunately they also depend upon one extra, and unwanted, variable - $t$. This is not a problem as we already know how to remove unwanted variables from our equations. 
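In a computer algebra system the dummy-variable trick of \eqref{splitconsider} takes only a few lines. The sketch below is illustrative only (again assuming SymPy rather than the Mathematica/Stringvacua setup); it uses a toy one-variable ``potential'' whose turning-point equation is $x(x-1)=0$, with a mock F-term $F=x$.

```python
# Selecting the F != 0 branch of x*(x - 1) = 0 with F = x.
# Adjoin F*t - 1 = 0 and eliminate the dummy variable t using a
# lexicographic ordering that ranks t highest.
from sympy import symbols, groebner

x, t = symbols('x t')

G = groebner([x*(x - 1), x*t - 1], t, x, order='lex')

# Basis elements free of t describe the turning points with F nonzero.
branch = [g for g in G.exprs if t not in g.free_symbols]
print(branch)  # [x - 1]
```

The branch with $F \neq 0$ collapses to the single equation $x - 1 = 0$, while the $F = 0$ branch, $\{x(x-1)=0,\, x=0\}$, reduces to $x = 0$.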
We can simply eliminate them, as we did the fields in section \ref{math}. This will leave us with a necessary and sufficient set of equations in $\phi^i$ and $a^{\alpha}$ for a solution to \eqref{splitconsider} and thus to \eqref{neqpart}. So we can split the equations for the turning points of our potential into two simpler systems. One describes the turning points of $V$ for which $F=0$ and the other those for which $F \neq 0$. We can of course perform such a splitting many times - once for each F-term! In fact, using additional techniques from algorithmic algebraic geometry \cite{primdec,StringvacuaBlock}, which are essentially based upon the same trick, one can go much further. One can split the equations for the turning points up into component parts, gaining one set of equations for every separate locus. Because we know which F-terms are non-zero on each of them, these are classified according to how they break supersymmetry. The researcher interested in a certain type of breaking can therefore select the equations describing the vacua of interest and throw everything else away. The above process of splitting up the equations for the vacua of a system can be very simply carried out in Stringvacua. Numerous examples can be found in the Mathematica help files which come with the package \cite{StringvacuaBlock}. Here, let us consider the example of M-theory compactified on the coset $\frac{SU(3) \times U(1)}{U(1) \times U(1)}$. The K\"ahler potential and superpotential for this coset, which has $SU(3)$ structure, have been presented in \cite{Micu:2006ey}. \begin{eqnarray} K &=& -4 \log (-i(U- \bar{U})) - \log (-i (T_1-\bar{T}_1) (T_2 -\bar{T}_2) (T_3 - \bar{T}_3)) \\ \nonumber W &=& \frac{1}{\sqrt{8}} \left[ 4 U (T_1+T_2+T_3) + 2 T_2 T_3 - T_1 T_3 - T_1 T_2 + 200 \right] \end{eqnarray} Even this relatively simple model results in a potential of considerable size. 
Defining $T_i = -i t_i + \tau_i$ and $U = -i x + y$ we find, \begin{eqnarray} \label{bigchap} \nonumber V&=& \frac{1}{256 t_1 t_2 t_3 x^4} (40000 + t_3^2 \tau_1^2 - 400 \tau_1 \tau_2 - 4 t_3^2 \tau_1 \tau_2 + 4 t_3^2 \tau_2^2 + \tau_1^2 \tau_2^2 - 400 \tau_1 \tau_3 + 800 \tau_2 \tau_3 + \\ && 2 \tau_1^2 \tau_2 \tau_3 - 4 \tau_1 \tau_2^2 \tau_3 + \tau_1^2 \tau_3^2 - 4 \tau_1 \tau_2 \tau_3^2 + 4 \tau_2^2 \tau_3^2 - 24 t_2 t_3 x^2 + 4 t_3^2 x^2 - 24 t_1 (t_2 + t_3) x^2 \\ \nonumber && + 4 \tau_1^2 x^2 + 8 \tau_1 \tau_2 x^2 + 4 \tau_2^2 x^2 + 8 \tau_1 \tau_3 x^2 + 8 \tau_2 \tau_3 x^2 + 4 \tau_3^2 x^2 + 1600 \tau_1 y - 8 t_3^2 \tau_1 y \\ \nonumber && + 1600 \tau_2 y + 16 t_3^2 \tau_2 y - 8 \tau_1^2 \tau_2 y - 8 \tau_1 \tau_2^2 y + 1600 \tau_3 y - 8 \tau_1^2 \tau_3 y + 16 \tau_2^2 \tau_3 y - 8 \tau_1 \tau_3^2 y \\ \nonumber && + 16 \tau_2 \tau_3^2 y + 16 t_3^2 y^2 + 16 \tau_1^2 y^2 + 32 \tau_1 \tau_2 y^2 + 16 \tau_2^2 y^2 + 32 \tau_1 \tau_3 y^2 + 32 \tau_2 \tau_3 y^2 + 16 \tau_3^2 y^2 \\ \nonumber && + t_1^2 (t_2^2 + t_3^2 + \tau_2^2 + 2 \tau_2 \tau_3 + \tau_3^2 + 4 x^2 - 8 \tau_2 y - 8 \tau_3 y + 16 y^2 ) + t_2^2 (4 t_3^2 + \tau_1^2 - 4 \tau_1 (\tau_3 + 2 y ) \\ \nonumber && + 4 (\tau_3^2 + x^2 + 4 \tau_3 y + 4 y^2)) \;. \end{eqnarray} To find the turning points of this potential we naively need to take eight different derivatives of \eqref{bigchap} and solve the resulting set of inter-coupled equations in eight variables. This is clearly prohibitively difficult. Using the techniques described in this section, however, Stringvacua, can separate off parts of the vacuum space for us with ease. Consider, for example, the vacua which are isolated in field space and for which the real parts of all of the F-terms are non-zero, with the imaginary parts vanishing. To find these, the package tells us, we need only solve the equations, \begin{eqnarray} 9 x^2 - 500 = 0 \;,\; 5 t_1-2 x =0 \;,\; t_2-x=0 \;,\; t_3-x=0 \;,\; \tau_1=\tau_2=\tau_3=y=0 \;. 
\end{eqnarray} Because they only describe a small subset of all of the turning points of the full potential, these equations are extremely simple in form and may be trivially solved. For this particular example the physically acceptable turning point that results is a saddle - something which can be readily ascertained once its location has been discovered. \section{Geometry of vacuum spaces} As a final example of what we can do with the simple techniques introduced in section \ref{math}, we will show how to calculate the vacuum space of a globally supersymmetric gauge theory. It is a well-known result (see \cite{Luty:1995sd} and references therein) that the supersymmetric vacuum space of such a theory, with gauge group $G$, can be described as the space of holomorphic gauge invariant operators (GIO's) built out of F-flat field configurations. What does this space look like? Consider a space, the coordinates of which are identified with the GIO's of the theory. If there were no relations amongst the gauge invariant operators then this space would be the vacuum space. However, there frequently are relations because of the way in which the GIO's are built out of the fields. For example, if we have three gauge invariant operators $S^1,S^2$ and $S^3$ which are built out of the fields as $S^1 = (\phi^1)^2, S^2 = (\phi^2)^2 , S^3 = \phi^1 \phi^2$ then we have the relation $S^1 S^2 = (S^3)^2$. If we take these GIO's to be built out of the F-flat field configurations then there will be still further relations among them. The vacuum space of the theory is the subspace defined by the solutions of these equations describing relations amongst the gauge invariant operators, once F-flatness has been taken into account. How can we calculate such a thing? The holomorphic gauge invariant operators of a globally supersymmetric gauge theory are given in terms of the fields. 
\begin{eqnarray} S^I = f^I (\phi^i) \end{eqnarray} Here $S^I$ are our GIO's and the $f^I$ are the functions of the fields that define them. Let us write the F-terms of the theory as $F^i$. Consider the following set of equations. \begin{eqnarray} \label{finalcase} F^i =0 \;,\; S^I - f^I(\phi^i) =0 \end{eqnarray} These equations have solutions whenever the $S^I$ are given by functions of the fields in the correct way and when those field configurations which are being used are F-flat. However, according to the preceding discussion, we wish to simply have equations in terms of the GIO's to describe our vacuum space. As in previous sections, we can eliminate the unwanted variables in our problem, in this case the fields $\phi^i$, using the algorithm of section \ref{math} to obtain the equations describing the vacuum space. As a simple example, let us take the electroweak sector of the MSSM \cite{VacspaceBlock} (with right-handed neutrinos). Given the field content of the left-handed leptons, $L^i_{\alpha}$, the right-handed leptons, $e^i$ and $\nu^i$, and the two Higgs, $H$ and $\bar{H}$, one can build the elementary GIO's given in table \ref{table1}. The indices $i,j$ run over the 3 flavours and the indices $\alpha, \beta$ label the fundamental of $SU(2)$. 
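For the toy example given earlier, with $S^1 = (\phi^1)^2$, $S^2 = (\phi^2)^2$, $S^3 = \phi^1 \phi^2$ and no F-terms, the elimination in \eqref{finalcase} can be carried out in a few lines. The sketch below assumes SymPy rather than the Stringvacua setup used for the actual MSSM computation.

```python
# Recovering the relation S1*S2 = S3**2 among the toy GIOs by
# eliminating the fields phi1 and phi2.
from sympy import symbols, groebner

phi1, phi2, S1, S2, S3 = symbols('phi1 phi2 S1 S2 S3')

eqs = [S1 - phi1**2, S2 - phi2**2, S3 - phi1*phi2]
G = groebner(eqs, phi1, phi2, S1, S2, S3, order='lex')

# Field-free basis elements are the defining equations of the
# vacuum space in GIO coordinates.
relations = [g for g in G.exprs if not ({phi1, phi2} & g.free_symbols)]
print(relations)  # [S1*S2 - S3**2]
```

The MSSM calculation proceeds in the same way, with these three definitions replaced by the elementary operators of table \ref{table1} together with the F-term equations.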
\begin{table} {\begin{center} \begin{tabular}{|c||c|c|c|}\hline \mbox{Type} & \mbox{Explicit Sum} & \mbox{Index} & \mbox{Number} \\ \hline \hline $LH$ & $L^i_\alpha H_\beta \epsilon^{\alpha \beta}$ & $i=1,2,3$ & 3 \\ \hline $H\bar{H}$ & $H_\alpha \bar{H}_\beta \epsilon^{\alpha \beta}$ & & 1 \\ \hline $LLe$ & $L^i_\alpha L^j_\beta e^k \epsilon^{\alpha \beta}$ & $i,j=1,2,3; k=1,\ldots,j-1$ & 9 \\ \hline $L\bar{H} e$ & $L^i_\alpha \bar{H}_\beta \epsilon^{\alpha \beta} e^j$ & $i,j=1,2,3$ & 9 \\ \hline $\nu$ & $\nu^i$ & $i=1,2,3$ & 1 \\ \hline \end{tabular} \end{center}}{\caption{\label{table1}{\bf The set of elementary gauge invariant operators for the electroweak sector of the MSSM}.}} \end{table} To compute the F-terms we require the superpotential. Let us take the most general renormalizable form which is compatible with the symmetries of the theory and R-parity. \begin{equation}\label{renorm-ew-nu} W_{\rm minimal} = C^0 \sum_{\alpha, \beta} H^\alpha \bar{H}^\beta \epsilon_{\alpha \beta} + \sum_{i,j} C^3_{ij} e^i \sum_{\alpha, \beta} L^j_{\alpha} \bar{H}_\beta \epsilon^{\alpha \beta} + \sum_{i,j} C^4_{ij} \nu^i \nu^j + \sum_{i,j} C^5_{ij} \nu^i \sum_{\alpha, \beta} L^j_\alpha H_\beta \epsilon^{\alpha \beta}. \end{equation} Here $\epsilon$ is the invariant tensor of $SU(2)$ and $C^0, C^3_{ij}, C^4_{ij}$ and $C^5_{ij}$ are constant coefficients. We now just follow the procedure outlined at the start of this section. We calculate the F-terms by taking derivatives of the superpotential, we label the gauge invariant operators $S_1$ to $S_{23}$, we form the equations \eqref{finalcase} and then simply run the elimination algorithm given in section \ref{math}. The result is, upon simplification, given by six quadratic equations in six variables. It is a simple description of an affine version of a famous algebraic variety - the Veronese surface \cite{VacspaceBlock}. What can be done with such a result? 
The first observation we can make is that this vacuum space is not a Calabi-Yau. This means, for example, that one can say definitively that it is not possible to engineer this theory by placing a single D3 brane on a singularity in a Calabi-Yau manifold, without having to get into any details of model building. Secondly, one can study such vacuum spaces in the hope of finding hints at the structure of the theory's higher energy origins. In the case we have studied in this section, for example, we can ``projectivize'' (pretend the GIO's are homogeneous coordinates on projective space rather than flat space coordinates) and study the Hodge diamond of the result. The structure of supersymmetric field theory tells us that this Hodge diamond should depend on four arbitrary integers, but there is nothing at low energies which prevents us from building theories with any such integers we like. Interestingly, in the case of electroweak theory, these integers are all zero or one. \begin{equation} h^{p,q} \quad = \quad {\begin{array}{ccccc} &&h^{0,0}&& \\ &h^{0,1}&&h^{0,1}& \\ h^{0,2}&&h^{1,1}&&h^{0,2} \\ &h^{0,1}&&h^{0,1}& \\ &&h^{0,0}&& \\ \end{array}} \quad = \quad {\begin{array}{ccccc} &&1&& \\ &0&&0& \\ 0&&1&&0 \\ &0&&0& \\ &&1&& \\ \end{array}}. \label{hodge} \end{equation} Whether this structure is indeed a hint of some high energy antecedent or just a reflection of the simplicity of the theory is debatable. This example does, however, demonstrate the idea of searching for such evidence of new physics in vacuum space structure. We should also add here that similar techniques can be used to show that the vacuum space of SQCD is a Calabi-Yau \cite{GIOBlock1}. \section{Final Comments} To conclude, we shall make several points - one of which is a note of caution, with the rest being more optimistic. The first point which we shall make is that we should be careful lest the above discussion makes the algorithm we have been describing sound like an all-powerful tool. 
There is, as ever, a catch. In this case it is the way the algorithm scales with the complexity of the problem. A ``worst case'' upper bound for the degree of the polynomials in a {\it reduced} Gr\"obner basis can be found in \cite{MollerMora84}. If $d$ is the largest degree found in your original set of equations then this bound is, \begin{eqnarray} \label{scaling} 2 \left( \frac{d^2}{2} + d \right)^{2^{n-1}} \;, \end{eqnarray} where $n$ is the number of variables. This {\it worst case} bound is therefore scaling doubly exponentially in the number of degrees of freedom. These very high degree polynomials are an indication that the problem is becoming very complex and thus computationally intensive. Despite this, physically useful cases can be analysed using this algorithm quickly, as demonstrated in this talk and in the references. This scaling does mean that one is not likely to gain much by putting one's problem on a much faster computer. One good point about \eqref{scaling} is that if you can find a way, using physical insight, to simplify the problem under study, then what you can achieve may improve doubly exponentially. Such a piece of physical insight was one of the keystones of the application of these methods to finding flux vacua \cite{StringvacuaBlock}. We finish by commenting that the methods of computational commutative algebra which we have discussed here are extremely versatile. We have been able to perform three very different tasks simply utilizing one algorithm in a very simple manner. These methods are of great utility in problems taken from the literature and their implementation in a user friendly way in Stringvacua means that they may be tried out on any given problem with very little expenditure of time and effort by the researcher. Many more techniques from the field of algorithmic commutative algebra could be applied to physical systems than those described here, or indeed in the physics literature. 
We can therefore expect that this subject will only increase in importance in the future. \section*{Acknowledgements} The author is funded by STFC and would like to thank the University of Pennsylvania for generous hospitality while some of this document was being written. In addition he would like to thank the organisers of the 2008 Vienna ESI workshop ``Mathematical Challenges in String Phenomenology'' where the talk upon which these notes are based was first given. The author would like to offer heartfelt thanks to his collaborators on the various projects upon which this talk is based. These include Lara Anderson, Daniel Grayson, Amihay Hanany, Yang-Hui He, Anton Ilderton, Vishnu Jejjala, Andr\'e Lukas, Noppadol Mekareeya and Brent Nelson.
\section{Introduction} Corvaja and Zannier conjectured that an abelian variety over a number field satisfies a modified version of the Hilbert property. We investigate their conjecture for products of elliptic curves using Kawamata's structure result for ramified covers of abelian varieties, and Faltings's finiteness theorem for rational points on higher genus curves. Recall that a normal integral variety $X$ over a field $k$ satisfies the Hilbert property over $k$ (as defined in \cite[\S 4]{Serre}) if, for every positive integer $n$ and every collection of finite surjective morphisms $\pi_i:Y_i\to X$, $1\leq i\leq n$, with $Y_i$ geometrically integral over $k$ and $\deg \pi_i \geq 2$, the set $X(k)\setminus \cup_{i=1}^n \pi_i(Y_i(k))$ is dense in $X$. In particular, if $X$ satisfies the Hilbert property over $k$, then $X(k)$ is dense. The Hilbert property is closely related to the inverse Galois problem for $\QQ$; see \cite[\S4]{Serre}. In this paper we study a \emph{modified} version of the Hilbert property, motivated by conjectures of Campana and Corvaja-Zannier on rational points for varieties over number fields. By Hilbert's Irreducibility Theorem \cite[Theorem~3.4.1]{Serre}, a rational variety over a number field satisfies the Hilbert property. On the other hand, an abelian variety over a number field does not satisfy the Hilbert property. Nonetheless, despite the failure of the Hilbert property for abelian varieties, Lang's conjecture on rational points of pseudo-hyperbolic varieties (see \cite{Lang1}) predicts that abelian varieties should satisfy a \emph{modified} version of the Hilbert property. The aim of this paper is to investigate such modified Hilbert properties for products of elliptic curves. We start with the ``very-weak-Hilbert property''. This notion is obtained by restricting oneself, in the definition of the Hilbert property (see \cite[\S 3]{Serre}), to ramified covers and to ``single'' covers. \begin{definition} Let $k$ be a field. 
A normal projective geometrically connected variety $X$ over $k$ satisfies the \emph{very-weak-Hilbert property over $k$} if, for every finite surjective ramified morphism $\pi:Y\to X$ with $Y$ geometrically integral and normal, the set $ X(k) \setminus \pi(Y(k)) $ is dense in $X$. \end{definition} If $k$ is a finitely generated field of characteristic zero and $A$ is an abelian variety over $k$, then the Mordell-Weil and Lang-N\'eron theorems imply that $A(k)$ is a finitely generated abelian group; see \cite[Corollary~7.2]{ConradTrace}. We prove that the product $\prod_{i=1}^n E_i$ of elliptic curves $E_1,\ldots, E_n$ over a finitely generated field $k$ of characteristic zero satisfies the very-weak-Hilbert property under the (necessary) assumption that the rank of each $E_i(k)$ is positive. \begin{theorem}\label{thm2} Let $k$ be a finitely generated field of characteristic zero, and let $E_1, \ldots, E_n$ be elliptic curves over $k$ with positive rank over $k$. Then $\prod_{i=1}^n E_i$ satisfies the very-weak-Hilbert property over $k$. \end{theorem} We were first led to investigate the very-weak-Hilbert property for abelian varieties by the work of Corvaja-Zannier on the Hilbert property for the Fermat K3 surface \cite{CZHP}, the work of Coccia on the ``affine'' Hilbert property \cite{Coccia}, Demeio's extensions of Corvaja-Zannier's work \cite{Demeio1, Demeio2}, Streeter's verification of the Hilbert property for certain del Pezzo surfaces \cite{Streeter}, and Zannier's seminal work on Hilbert's irreducibility theorem for powers of elliptic curves \cite{ZannierDuke}. Let us recall that in \cite{CZHP} Corvaja and Zannier introduced the following modified version of the Hilbert property. \begin{definition}[Corvaja-Zannier] Let $k$ be a field. 
A normal projective geometrically connected variety $X$ over $k$ satisfies the \emph{weak-Hilbert property over $k$} if, for every integer $n\geq 1$ and finite surjective ramified morphisms $\pi_i:Y_i\to X$ with $Y_i$ a geometrically integral normal variety over $k$ ($i=1,\ldots, n$), the set \[ X(k) \setminus \cup_{i=1}^n \pi_i(Y_i(k)) \] is dense in $X$. \end{definition} Our second result is that the product of two elliptic curves with positive rank satisfies the weak-Hilbert property. This modest contribution requires the input of Kawamata's extension of Ueno's fibration theorem for closed subvarieties of abelian varieties to the case of ramified covers of products of two elliptic curves (see Theorem \ref{thm:kawamata_fibn}), and uses Faltings's finiteness theorem for higher genus curves in several ways. \begin{theorem}\label{thm33} Let $k$ be a finitely generated field of characteristic zero, and let $E_1$ and $E_2$ be elliptic curves over $k$. If $E_1(k)$ and $E_2(k)$ have positive rank, then $E_1\times E_2$ has the weak-Hilbert property over $k$. \end{theorem} Note that, if $X$ satisfies the weak-Hilbert property over $k$, then $X$ satisfies the very-weak-Hilbert property over $k$. However, the very-weak-Hilbert property defined above differs \emph{a priori} from Corvaja-Zannier's definition. Nonetheless, it seems reasonable to suspect that these notions are equivalent. Clearly, a normal projective geometrically connected variety $X$ over a field $k$ with the Hilbert property (as defined in \cite[\S 3]{Serre}) satisfies the weak-Hilbert property over $k$. Thus, in particular, by Hilbert's irreducibility theorem, any rational variety over a number field $k$ satisfies the weak-Hilbert property over $k$ and, in particular, the very-weak-Hilbert property over $k$. 
By \cite[Theorem~1.6]{CZHP}, if $X$ is a smooth projective geometrically connected variety over a number field $k$ with the weak-Hilbert property, then $X$ satisfies the Hilbert property over $k$ if and only if it is geometrically simply-connected (i.e., $\pi_1^{et}(X_{\overline{k}}) = \{1\}$). Indeed, by \emph{loc. cit.}, a smooth projective geometrically connected variety $X$ over a number field $k$ with the Hilbert property is geometrically simply-connected. In particular, since abelian varieties over number fields are not geometrically simply-connected, they do not have the Hilbert property. Corvaja and Zannier conjectured that a smooth projective geometrically connected variety $X$ over a number field $k$ for which the set $X(k)$ is dense satisfies the weak-Hilbert property over a finite field extension of $k$. We state Corvaja-Zannier's conjecture in the slightly more general context of varieties over finitely generated fields of characteristic zero, and also include the implied (currently not known) equivalence between the weak-Hilbert property and the very-weak-Hilbert property (up to a finite field extension). \begin{conjecture}[Corvaja-Zannier]\label{conj} Let $X$ be a smooth projective geometrically connected variety over a finitely generated field $k$ of characteristic zero. Then the following statements are equivalent. \begin{enumerate} \item There is a finite extension $M/k$ such that $X_M$ satisfies the weak-Hilbert property over $M$. \item There is a finite extension $N/k$ such that $X_N$ satisfies the very-weak-Hilbert property over $N$. \item There is a finite extension $L/k$ such that $X(L)$ is Zariski-dense in $X$. \end{enumerate} \end{conjecture} Campana's conjectures on ``special'' varieties provide another perspective on Conjecture \ref{conj}. Indeed, Campana conjectured that $(3)$ (and thus also $(1)$ and $(2)$) should be equivalent to $X_{\overline{k}}$ being special; see \cite[Conjecture~9.20]{CampanaOr0} (and also \cite{CampanaBook}). 
Examples of special varieties are abelian varieties, K3 surfaces, and rationally connected smooth projective varieties. Such varieties are thus expected (guided by the above conjectures) to satisfy the weak-Hilbert property over some finite extension of the finitely generated base field $k$ of characteristic zero. Proving that such varieties satisfy the weak-Hilbert property seems very difficult, as it is currently not even known whether all K3 surfaces or Fano varieties have a potentially dense set of rational points. We will comment a bit more on Campana's conjectures below. In \cite{FreyJarden} Frey-Jarden proved that an abelian variety $A$ over a finitely generated field $k$ of characteristic zero admits a finite extension $L/k$ such that $A(L)$ is Zariski-dense in $A$ (see also \cite[\S3]{HassettTschinkel} or \cite[\S3]{JAut}). Thus, Corvaja-Zannier's conjecture (Conjecture \ref{conj}) predicts that an abelian variety $A$ over a finitely generated field $k$ of characteristic zero satisfies the weak-Hilbert property over some finite field extension of $k$. Theorems \ref{thm2} and \ref{thm33} provide evidence for Corvaja-Zannier's conjecture. The fact that an elliptic curve of positive rank over $k$ satisfies the weak-Hilbert property is already known and is, as noted in \cite{CZHP}, a consequence of Faltings's theorem (\emph{quondam} Mordell's conjecture) \cite{Faltings2, FaltingsComplements}. Our results (Theorem \ref{thm2} and Theorem \ref{thm33}) generalize earlier work of Zannier in which evidence for Conjecture \ref{conj} was provided for abelian varieties $A$ which are isogenous to $E^n$ with $E$ a non-CM elliptic curve \cite{ZannierDuke, ZannierRendi}. Note that Zannier's arguments are very different from ours and rely on Hilbertian properties of cyclotomic fields (see \cite{DvornZann, ZannierPisot}). Theorem \ref{thm2} also provides a non-linear analogue of Corvaja's theorem for linear algebraic groups \cite{CorvajaHilb}. 
Since elliptic curves of positive rank over a number field satisfy the weak-Hilbert property, the most natural approach to proving that the product of elliptic curves satisfies the weak-Hilbert property would be to show that the product of two varieties satisfying the weak-Hilbert property over $k$ satisfies the weak-Hilbert property. This product property seems however difficult to establish. Instead, to prove Theorem \ref{thm2}, we verify a ``weaker'' expectation. \begin{theorem}\label{thm3} Let $k$ be a field and $X_1,\ldots, X_n$ be integral normal projective varieties over $k$. Assume that, for every $i=1,\ldots, n$, the variety $X_i$ satisfies the weak-Hilbert property over $k$. Then $X_1\times \ldots \times X_n$ satisfies the very-weak-Hilbert property over $k$. \end{theorem} Our approach to Theorem \ref{thm3} is inspired greatly by the arguments of Bary-Soroker--Fehm--Petersen \cite{BarySoroker}. Indeed, in \emph{loc. cit.} it is shown that, if $X$ and $Y$ satisfy the Hilbert property over $k$, then $X\times Y$ satisfies the Hilbert property over $k$. Their result answers an old question of Serre in the positive (see the Problem stated in \cite[\S3.1]{Serre}). We mention that Bary-Soroker--Fehm--Petersen's product theorem for varieties with the Hilbert property can also be deduced from \cite[Lemma~8.12]{HarpazWittenberg} (which builds on Wittenberg's thesis \cite[Lemma~3.12]{Wittenberg}). The most general criterion we prove for verifying the very-weak-Hilbert property for a variety is Theorem \ref{prop:thm}. It is precisely this result which was inspired by Bary-Soroker--Fehm--Petersen's work \cite{BarySoroker}. Let us briefly mention that Theorem \ref{thm3} has further consequences. For example, if $E$ is an elliptic curve over a finitely generated field $k$ of characteristic zero with $E(k)$ of positive rank, then the variety $E^n \times \mathbb{P}^m_k$ satisfies the very-weak-Hilbert property over $k$. 
Moreover, if $X$ is the K3 surface defined by $x^4+y^4=z^4+w^4$ in $\mathbb{P}^3_{k}$, then $E^n\times X$ also satisfies the very-weak-Hilbert property over $k$, as Corvaja-Zannier proved that $X$ satisfies the Hilbert property over $k$ (see \cite[Theorem~1.4]{CZHP}). \subsection{Campana's conjectures} Campana's aforementioned notion of special variety forms an important guiding principle in our study of varieties with the weak-Hilbert property. In fact, Campana's conjectures reach much further and also predict a precise interplay between density of rational points and dense entire curves (much like Lang's conjectures \cite{Lang1}); this is also hinted at by Corvaja-Zannier (see \cite[\S 2.4]{CZHP}). To explain this, let us say that a variety $X$ over $\mathbb{C}$ satisfies the \emph{Brody-Hilbert property} if, for every integer $n\geq 1$ and finite surjective ramified morphisms $\pi_i:Y_i\to X$ with $Y_i$ integral and normal ($i=1,\ldots, n$), there is a holomorphic map $\mathbb{C}\to X^{\an}$ with Zariski-dense image which does not lift to any of the covers $\pi_i^{\an}:Y_i^{\an}\to X^{\an}$. A special smooth projective connected variety over $\mathbb{C}$ is conjectured to satisfy the Brody-Hilbert property; see \cite{CampanaBook}. In this direction it was shown recently by Campana-Winkelmann that a rationally connected variety over $\CC$ satisfies the Brody-Hilbert property; see \cite{CampanaWinkelmann}. We also mention that an abelian variety $A$ over $\mathbb{C}$ satisfies the Brody-Hilbert property. To see this, given a ramified cover $X\to A$ with $X$ a normal integral variety over $\CC$, note that a dense entire curve $\CC\to A^{\an}$ which is transversal to the branch locus of $X\to A$ does not lift to $X^{\an}$. On the other hand, it is not known whether every K3 surface satisfies the Brody-Hilbert property, as we do not know whether such surfaces admit a dense entire curve. 
This being said, our motivation for writing this short note is to call some attention to the beautiful string of new ideas surrounding the weak-Hilbert property, potential density of rational points on varieties over number fields, the existence of dense entire curves, and Campana's special varieties. In fact, we were naturally led to investigating these problems by our work on Lang's conjectures \cite{Lang1} (see \cite{vBJK, JBook, JKa, JLevin, JLitt, JXie}). \begin{ack} We are grateful to Fr\'ed\'eric Campana for many useful and inspiring discussions. We thank David Holmes and Siddharth Mathur for helpful discussions about unramified morphisms. We thank Raymond van Bommel and Olivier Wittenberg for their help in finding a simple proof of Lemma \ref{lem:dom0}. We gratefully acknowledge support from the IHES. We thank the referee for several useful comments. \end{ack} \begin{con} If $k$ is a field, then a variety over $k$ is a finite type separated scheme over $k$. If $X$ and $Y$ are varieties over $k$, then we let $X\times Y$ denote the fiber product $X\times_{\Spec k} Y$. A field $k$ is said to be \emph{finitely generated} if it is finitely generated over its prime field. We follow the Stacks project and say that a morphism of schemes $X\to Y$ is \emph{unramified} if it is unramified at every point of $X$; see \cite[Tag~02G3]{stacks-project}. A morphism of schemes $f:X\to Y$ is \emph{ramified} if it is not unramified. If $X\to S$ is a morphism of schemes and $s\in S$, then $X_s$ denotes the scheme-theoretic fibre of $X$ over $s$. \end{con} \section{The very-weak-Hilbert property} Throughout this section, let $k$ be a field. Moreover, let $f:X\to S$ be a morphism of smooth projective integral varieties over $k$. 
Furthermore, let $\pi:Y\to X$ be a finite surjective ramified morphism and let \[\xymatrix{ & & Y \ar[ddll] \ar[d]^{\pi} \\ & & X \ar[d] \\ T \ar[rr]^{\psi} & & S }\] be the Stein factorization of the composed morphism $Y\to X\to S$ with $T$ projective normal integral over $k$ and the geometric fibers of $Y\to T$ connected (see \cite[\S III.11]{Har}). \begin{proposition} \label{prop1} Let $U\subset S$ be a dense open subset. Assume that $S$ satisfies the very-weak-Hilbert property over $k$ and that, for every $s$ in $U(k)\setminus \psi(T(k))$, the set $X_s(k)$ is dense in $X_s$. If the morphism $\psi:T\to S$ is ramified, then $X(k)\setminus \pi(Y(k))$ is dense. \end{proposition} \begin{proof} Since $S$ satisfies the very-weak-Hilbert property over $k$ and $T\to S$ is a ramified finite surjective morphism with $T$ a normal integral variety over $k$, the set $S(k)\setminus \psi(T(k))$ is dense in $S$. In particular, the set $U(k)\setminus \psi(T(k))$ is dense in $S$. Now, note that the set \[ \bigcup_{s\in U(k)\setminus \psi(T(k))} X_s(k) \] is dense in $X$. Indeed, since $X_s(k)$ is dense in $X_s$, the closure of $ \bigcup_{s\in U(k)\setminus \psi(T(k))} X_s(k)$ in $X$ contains the dense set $\bigcup_{s\in U(k)\setminus \psi(T(k))} X_s$. Now, note that $X(k)\setminus \pi(Y(k))$ contains the (dense) set \[ \bigcup_{s\in U(k)\setminus \psi(T(k))} X_s(k). \] This concludes the proof. \end{proof} \begin{lemma}\label{lemmatje} Assume that the branch locus $D$ of $\pi:Y\to X$ dominates $S$ (i.e., $f(D) = S$). Then, for every point $s$ in $S$, the morphism $Y_s\to X_s$ is finite surjective ramified. \end{lemma} \begin{proof} A morphism of varieties $V\to W$ over $k$ is unramified if and only if, for every $w$ in $W$, the morphism $V_w\to \Spec k(w)$ is unramified (i.e., \'etale); see \cite[Tag~00UV]{stacks-project}. Now, let $s$ be a point of $S$. To show that the finite surjective morphism $Y_s\to X_s$ is ramified, let $d\in D$ be a point lying over $s$. 
Then, by the definition of the branch locus, $Y_d\to \Spec k(d)$ is ramified. Note that $Y_d = Y_s\times_{X_s} d$ as schemes over $d=\Spec k(d)$. As the fibre of $Y_s\to X_s$ over $d$ is ramified, it follows that $Y_s\to X_s$ is ramified. \end{proof} \begin{theorem}\label{prop:thm} Let $U\subset S$ be a dense open subscheme of $S$. Assume that the following statements hold. \begin{enumerate} \item The variety $S$ satisfies the very-weak-Hilbert property over $k$. \item For every $s$ in $U(k)$, the projective variety $X_s$ is normal integral and satisfies the weak-Hilbert property over $k$. \item The branch locus $D$ of $\pi:Y\to X$ dominates $S$, i.e., $f(D) = S$. \end{enumerate} Then $X(k)\setminus \pi(Y(k))$ is dense in $X$. \end{theorem} \begin{proof} If $\psi:T\to S$ is ramified, then it follows from Proposition \ref{prop1} that $X(k)\setminus \pi(Y(k))$ is dense in $X$. (We do not need here the assumption that $f(D) = S$.) Thus, to prove the theorem, we may and do assume that $\psi:T\to S$ is unramified. Since $S$ is smooth and $\psi:T\to S$ is a finite surjective unramified morphism, it follows that $T$ is smooth, so that $\psi:T\to S$ is in fact flat, hence \'etale. Note that we have a commutative diagram of morphisms \[ \xymatrix{& & Y_T \ar[d]_{\pi_T} \ar[rr] & & Y\ar[d]^{\pi} & & \\ D_T \ar[rrd]_{\textrm{surjective}} & & X_T \ar[d]_{f_T} \ar[rr]^{\textrm{finite \'etale}} & & X \ar[d]^{f} & & D \ar[dll]^{\textrm{surjective}} \\ & & T \ar[rr]_{\psi}^{\textrm{finite \'etale}} & & S & & } \] As the branch locus $D$ of $\pi$ dominates $S$, it follows that the branch locus $D_T$ of $\pi_T:Y_T\to X_T$ dominates $T$. This implies that, for all $t$ in $T$, the morphism $Y_t\to X_t$ is ramified (Lemma \ref{lemmatje}). We now use this observation. For $s\in U(k)$, consider the finite surjective morphism $Y_s\to X_s$. Let $\{t_1,\ldots, t_r\} = \psi^{-1}\{s\}$. 
Then $Y_s = Y_{t_1}\sqcup \ldots \sqcup Y_{t_r}$ and, as explained above, every induced finite surjective morphism $\pi_{s,j}:Y_{t_j}\to X_s$ is ramified. Since every $Y_{t_j}$ is integral and normal and $X_s$ satisfies the weak-Hilbert property over $k$, it follows that \[ X_s(k)\setminus \cup_{j=1}^r \pi_{s,j}(Y_{t_j}(k)) = X_s(k) \setminus \pi_s(Y_s(k)) \] is dense in $X_s$. Since, for every $s$ in $U(k)$, the latter set is dense in $X_s$, we conclude that $X(k)\setminus \pi(Y(k))$ is dense in $X$, as required. \end{proof} \section{Products of varieties} To study products of varieties $X_1,\ldots, X_n$, we will exploit the many projections such a product is equipped with. \begin{definition}\label{defn} Let $X_1,\ldots, X_n$ be varieties over $k$ and let $X:=X_1\times \ldots \times X_n$. Define $\widetilde{X_i}$ to be the product of $X_1, \ldots, X_{i-1}, X_{i+1},\ldots, X_n$. We let $p_i:X\to \widetilde{X_i}$ be the natural projection. \end{definition} We include a brief proof of the following simple observation. \begin{lemma}\label{lem:dom0} Let $X_1,\ldots, X_n$ be smooth projective geometrically integral varieties over $k$, and let $D\subset \prod_{i=1}^n X_i$ be a non-empty closed subscheme of codimension one. Then, there is an integer $j\in \{1,\ldots, n\}$ such that $p_j(D) = \widetilde{X_j}$. \end{lemma} \begin{proof} We argue by induction on $n$. We may and do assume that $D$ is integral. Write $X =\prod_{i=1}^n X_i$. Note that $$D\subseteq X_1 \times p_1(D) \subseteq X.$$ If $X_1\times p_1(D) = X$, then $p_1(D) = \widetilde{X_1}$, as required. Thus, we may assume that $X_1\times p_1(D)\neq X$. Then, as $D$ is of codimension one, it follows that $D = X_1 \times p_1(D)$. In this case, as $p_1(D)$ is integral and of codimension one in $\widetilde{X_1}$, after relabeling if necessary, it follows from the induction hypothesis that $p_1(D)$ surjects onto $X_3\times \ldots\times X_n$. 
This implies that $D = X_1\times p_1(D)$ surjects onto $\widetilde{X_2} = X_1\times X_3\times \ldots \times X_n$, as required. \end{proof} \begin{lemma}\label{lem:dom} Let $X_1,\ldots, X_n$ be smooth projective geometrically integral varieties over $k$, and let $\pi:Y\to X_1\times \ldots \times X_n$ be a finite surjective ramified morphism with $Y$ an integral normal projective variety. Let $D$ be the branch locus of $\pi$. Then there is an integer $j\in \{1,\ldots, n\}$ such that $p_j(D) = \widetilde{X_j}$. \end{lemma} \begin{proof} Note that $D$ is non-empty, as $\pi$ is ramified. Then, by Zariski-Nagata purity \cite[Theorem~X.3.1]{SGA1}, the branch locus $D$ is a closed subscheme pure of codimension one, so that the lemma follows from Lemma \ref{lem:dom0}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm3}] We argue by induction on $n$. If $n=1$, the statement is obvious. Thus, we may and do assume that $n>1$. Write $X= \prod_{i=1}^n X_i$ and let $\pi:Y\to X$ be a finite surjective ramified morphism. It suffices to show that $X(k)\setminus \pi(Y(k))$ is dense in $X$. By Lemma \ref{lem:dom}, there is an integer $j\in \{1,\ldots, n\}$ such that the branch locus of $\pi:Y\to X$ dominates $\widetilde{X}_j$ (Definition \ref{defn}). Define $S:=\widetilde{X}_j$ and consider the natural morphism $p_j:X\to S$. Note that, by the induction hypothesis, the smooth projective integral variety $S$ satisfies the very-weak-Hilbert property over $k$. Moreover, for every $s$ in $S(k)$, the projective variety $X_s$ is naturally isomorphic to $X_j$, and is therefore a smooth projective integral variety over $k$ satisfying the weak-Hilbert property over $k$. Thus, conditions $(1), (2), (3)$ of Theorem \ref{prop:thm} are satisfied. We conclude that $X(k)\setminus \pi(Y(k))$ is dense in $X$. \end{proof} \begin{remark} Let $X$ and $Y$ be smooth projective connected varieties over a finitely generated field $k$ of characteristic zero. 
If $X$ and $Y$ are special in the sense of Campana \cite{CampanaOr0}, then $X\times Y$ is special. Moreover, the conjectures of Campana and Corvaja-Zannier predict that $X$ is special if and only if there is a finite field extension $L/k$ such that $X_L$ has the weak-Hilbert property over $L$. In particular, Theorem \ref{thm3} is in accordance with the conjectures of Campana and Corvaja-Zannier as it verifies that a product of varieties with the weak-Hilbert property satisfies the very-weak-Hilbert property. \end{remark} We now prove Theorem \ref{thm2}. Note that the proof is a straightforward application of Theorem \ref{thm3} and Faltings's finiteness theorems for higher genus curves. \begin{proof}[Proof of Theorem \ref{thm2}] As in the statement of the theorem, we let $k$ be a finitely generated field of characteristic zero. Moreover, let $E_1,\ldots, E_n$ be elliptic curves of positive rank over $k$. Then, for every $i=1,\ldots, n$, the elliptic curve $E_i$ satisfies the weak-Hilbert property over $k$ by Faltings's theorem \cite{Faltings2, FaltingsComplements}. (Indeed, it suffices to note that, if $E$ is an elliptic curve over $k$ and $\pi:Y\to E$ is a ramified finite surjective morphism, then the set $Y(k)$ is finite.) Thus, it follows from Theorem \ref{thm3} that $E_1\times \ldots\times E_n$ satisfies the very-weak-Hilbert property over $k$. \end{proof} \section{Kawamata's theorem} To prove that the product of two elliptic curves satisfies the weak-Hilbert property, we will use Kawamata's theorem on finite covers of abelian varieties. Note that Kawamata's theorem is a generalization of Ueno's fibration theorem for closed subvarieties of abelian varieties. \begin{theorem}[Kawamata]\label{thm:kawamata_fibn} Let $K$ be an algebraically closed field of characteristic zero, and let $A$ be an abelian variety over $K$. Let $X$ be a normal algebraic variety over $K$ and let $X\to A$ be a finite morphism. 
Then there exist \begin{enumerate} \item an abelian subvariety $B$ of $A$; \item finite \'etale Galois covers $X'\to X$ and $B'\to B$; \item a normal projective variety $Y$ of general type over $K$; \item a finite morphism $Y\to A/B$ with $A/B$ the quotient of $A$ by $B$ such that $X'$ is a fiber bundle over $Y$ with fibers $B'$ and with translations by $B'$ as structure group \end{enumerate} such that the following diagram \[ \xymatrix{X' \ar[dd]_{B'-\textrm{fiber bundle}} \ar[rr]^{\textrm{finite \'etale}} & &X \ar[rr]^{\textrm{finite}} & & A \ar[dd] \\ & & & & \\ Y \ar[rrrr]_{\textrm{finite}} & & & & A/B } \] commutes. \end{theorem} \begin{proof} See \cite[Theorem~23]{KawamataChar}. \end{proof} \begin{lemma}\label{lem:kaw2} Let $k$ be a finitely generated field of characteristic zero, let $A$ be an abelian surface over $k$, and let $Y\to A$ be a finite surjective ramified morphism with $Y$ integral normal. If the Kodaira dimension of $Y$ is not two, then $Y(k)$ is not dense. \end{lemma} \begin{proof} Note that the Kodaira dimension of $Y$ is non-negative, as $Y$ admits a finite surjective morphism to an abelian variety. If the Kodaira dimension of $Y$ is zero, then $Y\to A$ is \'etale by Kawamata's theorem (Theorem \ref{thm:kawamata_fibn}). This contradicts our assumption that $Y\to A$ is ramified. Thus, we may and do assume that the Kodaira dimension of $Y$ equals one. Then, by Kawamata's theorem (Theorem \ref{thm:kawamata_fibn}), there is a finite field extension $L/k$ and a finite \'etale cover $Y'\to Y_L$ of the surface $Y_L$ such that $Y'$ dominates a curve $C$ over $L$ of genus at least two. By Chevalley-Weil \cite[\S 8]{JLitt}, if $Y(k)$ is dense, then there is a finite field extension $M/k$ such that $Y'(M)$ is dense. As $Y'\to C$ is surjective, it follows that $C(M)$ is dense. However, this contradicts Faltings's theorem \cite{FaltingsComplements} that $C(M)$ is finite. We conclude that the set $Y(k)$ is not dense in $Y$. 
\end{proof} \begin{lemma}\label{lem:rh} Let $A$ and $B$ be elliptic curves over $k$, and let $\pi:Y\to A\times B$ be a finite surjective morphism with $Y$ of general type. Then the branch locus of $\pi$ dominates $A$ and $B$. \end{lemma} \begin{proof} Let $\psi:\widetilde{Y}\to Y$ be a resolution of singularities, and let $E= E_1\cup \ldots \cup E_n$ be the exceptional locus. Let $R$ be the ramification divisor of $\pi:Y\to A\times B$. Then, by Riemann-Hurwitz, we have that \[K_Y = \pi^\ast K_{A\times B} +R = R, \quad K_{\widetilde{Y}} = \psi^\ast R + \sum a_i E_i. \] As the canonical divisor $K_{\widetilde{Y}}$ is big on $\widetilde{Y}$ (as $\widetilde{Y}$ is of general type), we see that $\pi_\ast R$ is big on $A\times B$. Now, assume that the branch locus of $\pi$ does not dominate $A$. Then, the big divisor $\pi_\ast R$ is contained in $S\times B$ with $S$ a finite closed subset of $A$. However, as $S\times B$ is not big, this contradicts the bigness of $\pi_\ast R$. We conclude that the branch locus of $\pi$ dominates $A$ (hence also $B$ by symmetry). \end{proof} \begin{proof}[Proof of Theorem \ref{thm33}] Define $X:= E_1\times E_2$ and $S:=E_1$. Let $f:X\to S$ be the projection map. For $i=1,\ldots,n$, let $Y_i$ be an integral normal variety over $k$ and let $\pi_i:Y_i\to X$ be a finite surjective ramified morphism. It suffices to show that $X(k) \setminus \cup_{i=1}^n \pi_i(Y_i(k))$ is dense in $X$. To this end, let us first note that $X(k)$ is dense in $X$ (as $E_1(k)$ and $E_2(k)$ have positive rank). Now, if $Y_i$ has Kodaira dimension $<2$, then $Y_i(k)$ is not dense (Lemma \ref{lem:kaw2}), so that we may discard such $Y_i$ from the collection of coverings $\pi_i:Y_i\to X$. That is, we may and do assume that, for $i=1,\ldots, n$, the variety $Y_i$ is of general type. 
Moreover, if $Y_i\to T_i\to E_1$ is the Stein factorization of the composed morphism $Y_i\to E_1\times E_2\to E_1$ and $T_i\to E_1$ is ramified, then $Y_i(k)$ is not dense in $Y_i$, as $T_i(k)$ is finite by Faltings's finiteness theorem \cite{Faltings2, FaltingsComplements}. Therefore, we may also discard such morphisms $\pi_i:Y_i\to X$ from the collection of coverings $\pi_i:Y_i\to X$. Thus, for $i=1,\ldots, n$, the morphism $T_i\to S$ is finite unramified, hence \'etale. Moreover, as $Y_i$ is of general type, by Lemma \ref{lem:rh}, for $i=1,\ldots, n$, the branch locus of $\pi_{i}$ dominates $S=E_1$. We now argue similarly as in the end of the proof of Theorem \ref{prop:thm}. Let $U\subset S$ be a dense open subset such that, for every $s$ in $U$ and every $i=1,\ldots, n$, the scheme $Y_{i,s}$ is normal. For $s\in U(k)$, consider the finite surjective morphism $\pi_{i,s}:Y_{i,s}\to X_s$. Let $\{t_{i,1},\ldots, t_{i,r_i}\} = \psi_i^{-1}\{s\}$, where $\psi_i$ denotes the finite \'etale morphism $T_i\to S$. Then $Y_{i,s} = Y_{t_{i,1}}\sqcup \ldots \sqcup Y_{t_{i,r_i}}$ with $Y_{t_{i,1}}, \ldots, Y_{t_{i,r_i}}$ integral normal varieties over $k$. Moreover, for every $i=1,\ldots, n$, every $s\in U(k)$, and every integer $1\leq j \leq r_i$, by Lemma \ref{lemmatje}, the induced finite surjective morphism $\pi_{i,s,j}:Y_{t_{i,j}}\to X_s$ is ramified (as the branch locus of $Y_i\to X$ dominates $S$, so that the branch locus of $Y_{i,T_i}\to X_{T_i}$ dominates $T_i$). Therefore, since $X_s = E_2$ satisfies the weak-Hilbert property over $k$ (by assumption), it follows that \[ X_s(k)\setminus \cup_{i=1}^n\cup_{j=1}^{r_i} \pi_{i,s,j}(Y_{t_{i,j}}(k)) = X_s(k) \setminus \cup_{i=1}^n\pi_{i,s}(Y_{i,s}(k)) \] is dense in $X_s$. Note that, for every $s$ in $U(k)$, the set $X(k)\setminus \cup_{i=1}^n\pi_i(Y_i(k))$ contains the set \[ X_s(k) \setminus \cup_{i=1}^n\pi_{i,s}(Y_{i,s}(k)).\] Since $S(k) = E_1(k)$ is dense in $E_1$, we have that $U(k)$ is dense in $E_1$, so that $$X(k)\setminus \cup_{i=1}^n\pi_i(Y_i(k))$$ is dense in $X$. \end{proof}
\section{Introduction} Current-driven spin-orbit torques have become a prolific area of research in the past ten years \cite{Brataas2014,Manchon2019}. This magnetic torque enables the electrical control of ferromagnets using spin densities generated through angular momentum transfer between the orbital and spin degrees of freedom \cite{Miron2011b,Liu2012}. Understanding the physical origin of the torques, their symmetries and materials dependence has been the subject of intense collaborations between experimentalists and theorists. From the theory standpoint, several mechanisms have been identified, among which spin Hall effect \cite{Sinova2015}, inverse spin galvanic effect \cite{Manchon2008b,Manchon2009b,Garate2009} (also called Rashba-Edelstein effect), spin swapping \cite{Lifshits2009,Saidaoui2015b,Saidaoui2016}, interfacial spin precession\cite{Amin2018,Freimuth2018} etc. In spite of these efforts, important questions remain to be answered as experimental data point toward complex thickness, angular and temperature dependences \cite{Kim2013,Garello2013,Avci2014,Qiu2015,Ghosh2017}. Although initial oversimplified theories attributed the dissipative ``damping-like'' component of the torque (even under time reversal) to spin Hall effect\cite{Liu2011,Haney2013b} and the reactive ``field-like'' component (odd under time reversal) to the inverse spin galvanic effect\cite{Manchon2008b,Miron2010}, this crude picture has been severely questioned by the most recent experiments. It remains unclear whether the torque components can be solely attributed to spin Hall effect, inverse spin galvanic effect, or a combination of both. In addition, a recent series of experiments\cite{Fan2013,Baek2018,Safranski2019} have identified unexpected torque components that are attributed to mechanisms beyond spin Hall and inverse spin galvanic effects \cite{Saidaoui2016,Amin2018,Freimuth2018}. A detailed discussion of these open questions can be found in Ref. \onlinecite{Manchon2019}. 
In order to properly characterize and predict the behavior of spin-orbit torques in heterostructures, the model should be both comprehensive and transparent. As a matter of fact, such a model should ideally account for the realistic band structure of the heterostructure to treat bulk and interfacial spin-orbit effects on equal footing. But it should also be able to provide general trends that can serve as guidelines to experiments. To date, most models have addressed only certain aspects of the spin-orbit torques, such as the interfacial inverse spin galvanic effect through model Hamiltonians \cite{Manchon2008b,Manchon2009b,Garate2009,Bijl2012,Li2015b,Qaiumzadeh2015,Ado2017} or the spin Hall effect through either drift-diffusion or the Boltzmann transport equation \cite{Haney2013b,Chen2017b}. Whereas these approaches are quite transparent, their main limitation is their inability to treat interface and bulk effects together, and in particular their neglect of interfacial orbital hybridization, which is known to be crucial in transition metal multilayers\cite{Blugel2007,Grytsyuk2016,Wang2016b}. The common denominator between spin Hall and inverse spin galvanic effects is that they both stem from non-equilibrium orbital currents \cite{Tanaka2008,Jo2018} or densities \cite{Yoda2018} that involve specific admixtures of atomic orbitals. Consequently, the proper modeling of spin-orbit effects in heterostructures beyond the Rashba and spin Hall phenomenologies requires a multi-orbital scheme. To date, the most accurate approach to compute multi-orbital transport properties is to rely on density functional theory. This approach has been used extensively to compute spin and anomalous Hall effects in the bulk \cite{Yao2004,Guo2008,Lowitzer2011,Sun2016}, and has been recently extended to compute spin transport in heterostructures (e.g., Ref. \onlinecite{Haney2013a}). 
Various techniques have been proposed, including Wannier interpolation of the band structure \cite{Freimuth2014a,Geranton2015,Geranton2016,Mahfouzi2018a}, the Korringa-Kohn-Rostoker method \cite{Wimmer2016,Ebert2011b}, or real-space Hamiltonians with tight-binding linear muffin-tin orbitals \cite{Wang2016,Belashchenko2019}. While the first two methods are well adapted to compute the Kubo-Streda formula, the latter is suitable for two-terminal simulations, following the Landauer-B\"uttiker scheme. These different methods present the crucial advantage of accurately modeling the orbital hybridization across the whole structure. They are however computationally intensive, which makes them ill-suited to systematic investigations (such as thickness dependence).\par From this standpoint, developing a multi-orbital tight-binding model is an interesting option as it reduces the size of the matrices to be handled numerically\cite{Papaconstantopoulos2003,Papaconstantopoulos2015}. When interfaced with density functional theory, this method enables the accurate simulation of magnetic \cite{Barreteau2016} as well as transport properties \cite{Tanaka2008} in bulk materials. Using this approach, the intrinsic contribution to spin Hall effect has been computed in various materials (semiconductors\cite{Guo2005}, transition metals \cite{Yao2005,Tanaka2008,Freimuth2010}, topological insulators \cite{Sahin2015}, Weyl semimetals \cite{Sun2016} etc.) using the zero-temperature Berry curvature formula. Unfortunately, these results are only valid for vanishing disorder in the bulk, and cannot be transposed to experimentally relevant setups where injection through the interface dominates \cite{Sinova2015}. To properly compute spin-charge conversion processes and spin-orbit torque, one needs to model the full heterostructure, including the interface \cite{Haney2013a,Freimuth2014a}. 
Recently, we applied this approach to a heterostructure made of a topological insulator capped with a (ferro- or antiferro-) magnetic material \cite{Ghosh2018,Ghosh2019}. In these works, each unit cell is modeled by a 4$\times$4 Hamiltonian matrix regularized on a cubic lattice \cite{Marchand2012}. Although quite crude, this approximation allowed us to model spin-orbit torque in various transport regimes and to determine the minor role of spin Hall effect in these structures. In the present work, we use a multi-orbital tight-binding model to compute the spin-orbit torque in transition metal heterostructures. Whereas this method does not provide the accurate band structure obtained by density functional theory, it retains the most prominent features of the density of states, atomic spin-orbit coupling and interfacial orbital hybridization. It is also more flexible and computationally efficient, allowing for a systematic characterization of the non-equilibrium properties of the heterostructure. In particular, we investigate the thickness and angular dependences of the torque components and obtain a large ``planar'' damping-like torque. We also investigate the nature of the self-torque, i.e. the spin-orbit torque taking place in the ferromagnet itself, and demonstrate that it can be substantial in spite of the large magnetic exchange \cite{Pauyac2018,Wang2019b}. \section{Model and formalism\label{s:model}} In this section, we first introduce a toy model to discuss how interfacial orbital mixing gives rise to ``Rashba-like'' spin-orbit coupling. Then, we describe the tight-binding model of the heterostructure, and finally we present the formalism we use to compute the transport properties. \subsection{Interfacial spin splitting with p and d orbitals\label{s:porbitals}} In centrosymmetric materials, such as the transition metals we consider in this work, the spin Hall effect occurring in the bulk is usually attributed to an intrinsic origin \cite{Murakami2003,Sinova2004}, i.e. 
to the Berry curvature of the wave functions. Following the scenario established by Tanaka et al. \cite{Tanaka2008,Kontani2009,Jo2018}, the Berry curvature in momentum space creates an orbital Hall current, which is spin-polarized by turning on the atomic spin-orbit coupling. In contrast, little is known about the orbital origin of the interfacial ``Rashba'' spin-orbit coupling. Since the early works on this topic \cite{Vasko1979,Ohkawa1974,Bychkov1984}, it has been proposed that upon inversion symmetry breaking, the spin-orbit coupling experienced by the Bloch electrons acquires a momentum-dependent Zeeman energy term, usually written \begin{eqnarray} {\cal H}_{\rm R}=\alpha_{\rm R}\hat{\bm\sigma}\cdot(\hat{\bf p}\times {\bf z}), \end{eqnarray} where $\alpha_{\rm R}$ is called the Rashba parameter. In their pioneering work, Petersen and Hedeg\aa rd \cite{Petersen2000} considered the Rashba spin splitting of the Au (111) surface and proposed that the surface potential facilitates the admixture between p$_z$ and p$_{x,y}$ orbitals. This hybridization results in Rashba spin-orbit coupling when atomic spin-orbit coupling is turned on. A similar idea was put forward by Bihlmayer et al. \cite{Bihlmayer2006}, suggesting that inversion symmetry breaking promotes the admixture between $l$ and $l\pm1$ orbitals. In this section, we wish to provide an explicit derivation of this effect and establish a direct connection between the orbital mixture due to inversion symmetry breaking and Rashba-like spin-orbit coupling. \begin{figure} \begin{center} \includegraphics[width=8cm]{Fig1.png} \caption{(Color online) Schematics of the diatomic chain model. The atoms of the bottom chain (gray) possess both p$_z$ (a) and p$_x$ (b) orbitals, while the atoms of the top chain have only p$_z$ orbitals. 
The phase acquired by Bloch electrons hopping from one orbital to the other is also given.\label{FigRashbap}} \end{center} \end{figure} Let us consider a chain of atoms, extended along $x$ and with all three p$_x$, p$_y$ and p$_z$ orbitals. We can discard the p$_y$ orbitals from our discussion right away since they couple to neither p$_x$ nor p$_z$. We now break the inversion symmetry by coupling this chain with another chain of atoms with only p$_z$ orbitals. The system is depicted in Figs. \ref{FigRashbap}(a) and (b). The atoms of the bottom chain (with both p$_x$ and p$_z$) are represented in gray and the atoms of the top chain (with p$_z$ only) in light blue. In the two-center tight-binding approximation and the $\{{\rm p}_z^t,{\rm p}_z^b,{\rm p}_x^b\}$ basis, the Hamiltonian of this diatomic chain reads \begin{equation} {\cal H}_{\rm chain}= \left(\begin{matrix} \varepsilon_{k}^{t}& V_{zz} &V_{zx} \\ V_{zz} &\varepsilon_{k}^{z}&0\\ V_{zx}^* &0&\varepsilon_{k}^{x}\\ \end{matrix}\right). \end{equation} Here p$_\nu^\eta$ refers to the $\nu$-th orbital of the top ($\eta=t$) or bottom ($\eta=b$) chain, $V_{zz}=(V_\sigma+V_\pi)\cos k_xa/2$ and $V_{zx}=-i(V_\sigma-V_\pi)\sin k_xa/2$, $V_{\sigma,\pi}$ being the Slater-Koster hopping integrals \cite{Slater1954}. In order to keep our results analytically tractable, we assume that $\varepsilon_{k}^{z}=\varepsilon_{k}^{x}$. We then end up with three bands with dispersions, \begin{eqnarray} \varepsilon_{\bf k}^0&=&\varepsilon_{k}^{z},\;\varepsilon_{\bf k}^\pm=\frac{\varepsilon_{k}^{t}+\varepsilon_{k}^{z}}{2}\pm\frac{1}{2}\Delta_k, \end{eqnarray} with $\Delta_k=\sqrt{(\varepsilon_{k}^{t}-\varepsilon_{k}^{z})^2+4(|V_{zz}|^2+|V_{zx}|^2)}$. 
The corresponding eigenstates read \begin{eqnarray} |0\rangle&=&\frac{1}{\sqrt{|V_{zz}|^2+|V_{zx}|^2}}\left(-V_{zx}|{\rm p}^b_z\rangle+V_{zz}|{\rm p}^b_x\rangle\right),\\ |+\rangle&=&\cos\chi|{\rm p}_z^t\rangle+\frac{\sin\chi}{\sqrt{|V_{zz}|^2+|V_{zx}|^2}}\left(V_{zz}|{\rm p}_z^b\rangle+V_{zx}^*|{\rm p}_x^b\rangle\right),\\ |-\rangle&=&-\sin\chi|{\rm p}_z^t\rangle+\frac{\cos\chi}{\sqrt{|V_{zz}|^2+|V_{zx}|^2}}\left(V_{zz}|{\rm p}_z^b\rangle+V_{zx}^*|{\rm p}_x^b\rangle\right),\nonumber\\ \end{eqnarray} where $\cos2\chi=(\varepsilon_{k}^t-\varepsilon_{k}^z)/\Delta_k$. We now evaluate the orbital momentum on the bottom chain, and using $\langle {\rm p}^b_x|{\bf L}|{\rm p}^b_z\rangle=i{\bf y}$, we get \begin{eqnarray} \langle 0|{\bf L}|0\rangle&=&-\frac{2{\rm Im}\left[V_{zz}V_{zx}^*\right]}{|V_{zz}|^2+|V_{zx}|^2}{\bf y},\\ \langle +|{\bf L}|+\rangle&=&\sin^2\chi\frac{2{\rm Im}\left[V_{zz}V_{zx}^*\right]}{|V_{zz}|^2+|V_{zx}|^2}{\bf y},\\ \langle -|{\bf L}|-\rangle&=&\cos^2\chi\frac{2{\rm Im}\left[V_{zz}V_{zx}^*\right]}{|V_{zz}|^2+|V_{zx}|^2}{\bf y}, \end{eqnarray} where $2{\rm Im}\left[V_{zz}V_{zx}^*\right]=[(V_\sigma)^2-(V_\pi)^2]\sin k_xa$. This toy model shows that, due to the lack of inversion symmetry, the eigenstates of the diatomic chain acquire an orbital momentum that is {\em odd} in the linear momentum $k$. This orbit-momentum locking results in the orbital Edelstein effect \cite{Yoda2018}, i.e. the electrical generation of an orbital magnetic moment. \par When atomic spin-orbit coupling is turned on, the spin momentum of the Bloch electron aligns with its orbital momentum. 
Therefore, in the $\{|0\rangle,|+\rangle,|-\rangle\}$ basis, the spin-diagonal Hamiltonian of these Bloch states acquires an off-diagonal contribution, \begin{eqnarray}\label{eq:rashba} \langle\xi_{\rm so}{\bf L}\cdot\hat{\bm \sigma}\rangle= {\cal M}_{\rm so}\frac{[(V_\sigma)^2-(V_\pi)^2]\sin k_xa}{(V_\sigma)^2+(V_\pi)^2+2V_\sigma V_\pi \cos k_xa}\hat{\sigma}_y,\nonumber\\ \end{eqnarray} where $\hat{\bm \sigma}$ is the vector of Pauli spin matrices, and ${\cal M}_{\rm so}={\rm Diag}(-\xi_{\rm so}^t,\xi_{\rm so}^b\sin^2\chi,\xi_{\rm so}^b\cos^2\chi)$, Diag(...) being the diagonal matrix and $\xi_{\rm so}^\eta$ the spin-orbit coupling energy of the $\eta$-th chain ($\eta=t,b$). This Hamiltonian explicitly connects the linear momentum $k_x$ with the spin momentum $\sigma_y$, resulting in Rashba and Dzyaloshinskii-Moriya effects \cite{Manchon2015}. This model can be straightforwardly extended to higher dimensions and higher-order orbitals (d, f, etc.). In the case of a transition metal interface, the orbital admixture required to obtain a spin density along $S_y$ for an electron propagating along $x$ is typically d$_{xy}$-d$_{yz}$, d$_{zx}$-d$_{z^2}$ or d$_{zx}$-d$_{x^2-y^2}$. A microscopic model of spin-orbit effects at interfaces should at minimum contain these orbitals. \subsection{Tight-binding model of the heterostructure} \begin{figure} \includegraphics[width=8cm]{Fig2.png} \caption{(Color online) (a) Schematics of the bcc heterostructure composed of a ferromagnetic metal (blue) deposited on top of a nonmagnetic metal (gray). For simplicity, we consider that both metals possess the same lattice parameter. (b) Hopping parameters at the interface between the ferromagnet and the nonmagnetic metal. $t_1$ stands for the nearest-neighbor hopping and $t_2$ for the second nearest-neighbor hopping.\label{Fig0}} \end{figure} We now move on to the description of the tight-binding model of our transition metal heterostructure, depicted in Fig. \ref{Fig0}(a). 
This heterostructure consists of two adjacent metallic slabs with bcc crystal structure along the (001) direction, possessing the same lattice parameter. Each metallic slab is built from monolayers stacked on top of each other. Considering the five d-orbitals per spin species (ten states in total), each monolayer forms a square lattice described by the Hamiltonian \begin{equation} {\cal H}_{\rm mono}= \left(\begin{matrix} \gamma_{xy}^{\bf k}&0 &0 & t_{xy,z^2}^{\bf k} &0\\ 0&\gamma_{yz}^{\bf k}&t_{zx,yz}^{{\bf k}} &0 & 0\\ 0 &t_{zx,yz}^{{\bf k},*}&\gamma_{zx}^{\bf k}&0 &0\\ t_{xy,z^2}^{{\bf k},*} &0&0&\gamma_{z^2}^{\bf k}&0\\ 0&0&0&0&\gamma_{x^2-y^2}^{\bf k}\\ \end{matrix}\right)\label{eq:Hmono} \end{equation} where the parameters $\gamma_{\nu}^{\bf k}$ and $t_{\mu,\nu}^{\bf k}$ are given explicitly in the Appendix. This Hamiltonian is written in the basis $\{{\rm d}_{xy},{\rm d}_{yz},{\rm d}_{zx},{\rm d}_{z^2},{\rm d}_{x^2-y^2}\}$. This form is valid for each spin species, so that the spin-dependent Hamiltonian reads ${\cal H}_{\rm mono}\otimes\hat{\sigma}_0$. In addition, we define the exchange Hamiltonian, ${\cal H}_{\rm ex}$, as \begin{eqnarray} {\cal H}_{\rm ex}=\frac{1}{2}{\rm Diag}\left(\Delta_{xy},\Delta_{yz},\Delta_{zx},\Delta_{z^2},\Delta_{x^2-y^2}\right)\otimes\hat{\bm\sigma}\cdot{\bf m}.\nonumber\\ \end{eqnarray} Here $\Delta_{\nu}$ is the exchange energy of the $\nu$-th d orbital and ${\bf m}$ is the ferromagnetic order parameter. Hence, the Hamiltonian for a square lattice monolayer is a 10$\times$10 matrix. 
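As a cross-check of this bookkeeping, the assembly of the 10$\times$10 monolayer Hamiltonian (without spin-orbit coupling yet) can be sketched with Kronecker products. In the snippet below, the on-site energies and exchange splittings are loosely inspired by Table \ref{table1}, while the two hopping amplitudes are arbitrary placeholders, not the Slater-Koster expressions of the Appendix.

```python
import numpy as np

# Pauli matrices and the 2x2 identity
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_mono(gam, t_xy_z2, t_yz_zx):
    """5x5 orbital block in the {d_xy, d_yz, d_zx, d_z2, d_x2-y2} basis,
    with the sparsity pattern of H_mono: only the d_xy <-> d_z2 and
    d_yz <-> d_zx off-diagonal hoppings are non-zero."""
    h = np.diag(gam).astype(complex)
    h[0, 3] = t_xy_z2
    h[3, 0] = np.conj(t_xy_z2)
    h[1, 2] = t_yz_zx
    h[2, 1] = np.conj(t_yz_zx)
    return h

def h_exchange(delta, m):
    """Exchange term (1/2) Diag(Delta_nu) (x) (sigma . m)."""
    sdotm = m[0] * sx + m[1] * sy + m[2] * sz
    return 0.5 * np.kron(np.diag(delta), sdotm)

# Placeholder values at a single k-point (hoppings are arbitrary numbers,
# NOT the Slater-Koster expressions of the Appendix)
gam = np.array([12.8, 12.8, 12.8, 12.5, 12.5])    # eV, on-site energies
delta = np.array([1.85, 1.85, 1.85, 1.73, 1.73])  # eV, exchange splittings
H0 = np.kron(h_mono(gam, 0.3, 0.2 + 0.1j), s0) + h_exchange(delta, [0.0, 0.0, 1.0])

assert H0.shape == (10, 10)
assert np.allclose(H0, H0.conj().T)  # Hermiticity
```

With the magnetization set along ${\bf z}$, the exchange term simply shifts the spin-up and spin-down orbital manifolds by $\mp\Delta_\nu/2$, as expected.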
In addition, one needs to account for the spin-orbit coupling matrix, \begin{equation} {\cal H}_{\rm soc}=\xi_{\rm so} \left(\begin{matrix} 0&i\hat{\sigma}_y &-i\hat{\sigma}_x &0&2i\hat{\sigma}_z\\ -i\hat{\sigma}_y&0&i\hat{\sigma}_z &-i\sqrt{3}\hat{\sigma}_x &-i\hat{\sigma}_x\\ i\hat{\sigma}_x &-i\hat{\sigma}_z&0&i\sqrt{3}\hat{\sigma}_y &-i\hat{\sigma}_y\\ 0&i\sqrt{3}\hat{\sigma}_x&-i\sqrt{3}\hat{\sigma}_y &0&0\\ -2i\hat{\sigma}_z&i\hat{\sigma}_x&i\hat{\sigma}_y&0&0\\ \end{matrix}\right).\label{eq:soc} \end{equation} The Hamiltonian of a monolayer is therefore \begin{equation} {\cal H}_{0}={\cal H}_{\rm mono}\otimes\hat{\sigma}_0+{\cal H}_{\rm ex}+{\cal H}_{\rm soc}. \end{equation} This monolayer is connected to the top nearest monolayer by the matrix ${\cal T}_1$ whose elements are the nearest neighbor hopping parameters $t_{\mu,\nu}^{z,{\bf k}}$ between orbital d$_{\mu}$ in the bottom layer and orbital d$_\nu$ in the top layer. The connection to the second top nearest monolayer is accounted for by the matrix ${\cal T}_2$. The elements of both matrices are given explicitly in the Appendix. The Hamiltonian of one bcc slab is then defined \begin{equation}\label{Hlayer} \cal{H}_{\rm layer}= \left(\begin{matrix} {\cal H}_0 & {\cal T}_1 & {\cal T}_2 & 0&\\ {\cal T}^\dagger_1 & {\cal H}_0& {\cal T}_1 & {\cal T}_2 &\ddots\\ {\cal T}^\dagger_2 &{\cal T}^\dagger_1& {\cal H}_0&{\cal T}_1&\ddots \\ 0&{\cal T}^\dagger_2 &{\cal T}^\dagger_1&{\cal H}_0&\ddots\\ & \ddots&\ddots& \ddots&\ddots\\ \end{matrix}\right). 
\end{equation} \begin{table} \begin{tabular}{c|cccccc} & $V_\sigma^1$ & $V_\pi^1$ & $V_\delta^1$& $V_\sigma^{2}$ & $V_\pi^2$ & $V_\delta^2$\\\hline FM & -0.618 & 0.37 &-0.035&-0.37&0.08 & 0.01\\ NM & -1.61 & 0.71 &0.034&-0.99 & -0.17 & 0.12\\ FM/NM & -1.11 & 0.54 &-0.001 & -0.68 & -0.046 &0.066\\ \end{tabular} \begin{tabular}{c|ccccc} &$\varepsilon_{xy,yz,zx}$&$\varepsilon_{z^2,x^2-y^2}$&$\Delta_{xy,yz,zx}$&$\Delta_{z^2,x^2-y^2}$&$\xi_{\rm soc}$\\\hline FM & 12.8 & 12.5 &1.85&1.73&0.065 \\ NM & 13.16 & 11.66 & 0 &0 & 0.367 \\ \end{tabular} \caption{Slater-Koster parameters used to model the FM/NM heterostructure and extracted from Ref. \onlinecite{Papaconstantopoulos2015}. The parameters are given in eV.\label{table1}} \end{table} The matrix elements of ${\cal H}_{\rm mono}$, ${\cal T}_1$ and ${\cal T}_2$ are written in terms of the two-site Slater-Koster parameters (see Appendix) given for each slab in Table \ref{table1}. We adopt the parameters computed by Papaconstantopoulos \cite{Papaconstantopoulos2015} for {\em bulk} bcc Fe and bcc W. The lattice parameter of both slabs is set to that of bulk bcc W, $a_0=3.155$ \AA, imposing a 9\% lattice mismatch with bcc Fe, whose bulk lattice parameter is 2.866 \AA. Notice that the onsite energies of the ferromagnetic orbitals are rigidly shifted by an offset $\varepsilon_0$ compared to their values in bulk Fe in order to allow for band structure alignment between the ferromagnetic and nonmagnetic metals (see below). With these parameters, we determine the Hamiltonian for the nonmagnetic (NM) and ferromagnetic (FM) slabs, $\cal{H}_{\rm NM}$ and $\cal{H}_{\rm F}$. \par Finally, the heterostructure is obtained by stitching the two individual slabs together. 
\begin{equation} \cal{H}= \left(\begin{matrix} {\cal H}_{\rm F} & {\cal T}_{\rm FN} \\ {\cal T}_{\rm FN}^\dagger & {\cal H}_{\rm NM}\\ \end{matrix}\right) \end{equation} The hopping matrix ${\cal T}_{\rm FN}$ is simply given by ${\cal T}_1$ and ${\cal T}_2$ adopting the FM/NM parameters of Table \ref{table1}. In the absence of further knowledge, the hopping parameters between the highest nonmagnetic layer and the lowest magnetic layer are taken as the average of the bulk hopping parameters of Fe and W. Ideally, one would need to fit the tight-binding parameters to the band structure of the heterostructure computed self-consistently from first principles \cite{Barreteau2016}, which remains beyond the scope of the present work but constitutes an appealing future development. Indeed, we emphasize that the nonmagnetic transition metal is expected to acquire interfacial magnetization by proximity with the ferromagnetic metal \cite{Grytsyuk2016}. This induced magnetization is neglected in our model because our tight-binding parameters are those of the bulk materials. Nevertheless, a previous first principles investigation of the Pt/Co(111) interface has shown that such an induced magnetization has a minor effect on the spin-orbit torque (see Fig. 7 in Ref. \onlinecite{Haney2013a}).\par \begin{figure} \includegraphics[width=6cm]{Fig3.png} \caption{(Color online) Spin-resolved density of states of Fe(5)/W(7) projected on the d-orbitals and calculated by (a) density functional theory and (b) our tight-binding model (with $\Gamma=10$ meV). The blue shaded region corresponds to W (nonmagnetic layer) while the red shaded region corresponds to Fe (magnetic layer). The vertical dashed lines in (b) correspond to two cases of interest discussed in Section \ref{s:prof}. 
\label{Fig1}} \end{figure} Considering the numerous approximations we made (first and second nearest neighbor hopping only, no self-consistent computation of the interfacial and exchange potentials, constrained lattice parameter, neglect of s and p orbitals, etc.), we do not expect our tight-binding model to accurately represent a realistic Fe/W bilayer. Nonetheless, we benchmarked our tight-binding model against the density of states of a Fe/W bilayer computed by density functional theory in order to establish its reliability. These simulations have been conducted using the Vienna ab initio simulation package (VASP) \cite{Kresse1996a,Kresse1996b} with PAW-PBE GGA pseudopotentials \cite{Blochl1994, Kresse1999}. The structure has been relaxed until the forces on all the atoms fell below 0.001 eV/$\rm \AA$, allowing both the atomic coordinates and the lattice vectors to change. We have used an energy cutoff of 500 eV. For the self-consistent cycles we have used a $16 \times 16 \times 1$ k-mesh, and for the density of states a $24 \times 24 \times 1$ k-mesh. We have neglected the effect of spin-orbit coupling and conducted a spin-polarized calculation, as we are interested in the spin-resolved density of states. Including spin-orbit coupling does not make any drastic change in the total density of states.\par The first principles density of states of Fe(5)/W(7) projected on the d-orbitals only is reported in Fig. \ref{Fig1}(a) together with the density of states obtained for our FM(5)/NM(7) system [Fig. \ref{Fig1}(b)]. The figures in parentheses indicate the number of monolayers, and the density of states is defined as $-\frac{1}{\pi}{\rm Im}[\hat{G}^R]$, where $\hat{G}^R=(\varepsilon-{\cal H}+i\Gamma)^{-1}$ is the retarded Green's function and $\Gamma$ is the homogeneous broadening. The tight-binding density of states in Fig. 
\ref{Fig1}(b) is obtained for a rigid energy shift $\varepsilon_0=3.1$ eV, and the Fermi energy is fixed at 14 eV in order to qualitatively reproduce the balance between up and down Fermi electrons obtained by VASP. With these parameters, the total numbers of electrons in the ferromagnetic and nonmagnetic metals are $n^{\rm FM}_\uparrow=4.6$, $n^{\rm FM}_\downarrow=2.34$ and $n^{\rm NM}_\uparrow+n^{\rm NM}_\downarrow=4.73$.\par We immediately observe a number of differences between the two densities of states in terms of peak positions and relative weights. These differences are attributed to the crude approximations of the tight-binding model mentioned above. Nevertheless, both densities of states display the same essential features: similar bandwidth, spin splitting of the ferromagnetic metal, large overlap between the two materials close to the Fermi level, etc. Therefore, although our FM/NM heterostructure does not reproduce the ideal Fe/W case, it is a good representative of transition metal heterostructures.\par We conclude this discussion by considering the spin texture in momentum space. As explained above, symmetry breaking at the interface results in the orbital Edelstein effect, which promotes the onset of spin-momentum locking in the presence of spin-orbit coupling. Figure \ref{FigBS} shows the band structure around the $\bar{\Gamma}$ point projected on the spin momentum components, $s_{x,y,z}=\langle\hat{\sigma}_{x,y,z}\rangle $. In this calculation, the magnetization is set along ${\bf z}$. Figure \ref{FigBS}(a) displays the $s_x$ component when spanning the momentum between the $\bar{\rm Y}$ and $\bar{\Gamma}$ points, Fig. \ref{FigBS}(b) displays the $s_y$ component when spanning the momentum between the $\bar{\rm X}$ and $\bar{\Gamma}$ points, and Fig. \ref{FigBS}(c) displays the $s_z$ component along the $\bar{\rm X}-\bar{\Gamma}-\bar{\rm Y}$ path. 
The in-plane spin texture is antisymmetric in momentum and displays the $\hat{\bm\sigma}\sim {\bf z}\times{\bf k}$ symmetry expected for Rashba spin-orbit coupling. In contrast, the $s_z$ component is symmetric and reflects the spin polarization of the bands due to the magnetic exchange. \begin{figure} \subfloat{\includegraphics[width = 4.9cm]{Fig4a.png}}\\ \subfloat{\includegraphics[width = 4.9cm]{Fig4b.png}}\\ \subfloat{\includegraphics[width = 4.9cm]{Fig4c.png}} \caption{(Color online) Spin-resolved band structure for FM(5)/NM(7) bilayer: (a) $s_x$ along $\bar{\rm Y}-\bar{\Gamma}-\bar{\rm Y}$, (b) $s_y$ along $\bar{\rm X}-\bar{\Gamma}-\bar{\rm X}$, and (c) $s_z$ along $\bar{\rm X}-\bar{\Gamma}-\bar{\rm Y}$. Blue and red colors refer to opposite signs of the spin momentum.} \label{FigBS} \end{figure} \subsection{Transport formalism} The transport properties are computed using the Kubo-Streda formula \cite{Sinitsyn2006,Freimuth2014a}. In this framework, the conductivity tensor reads \begin{eqnarray} \sigma_{ij}&=&\frac{e\hbar}{2\pi}\int d\varepsilon \partial_\varepsilon f(\varepsilon){\rm Tr}\left[\hat{v}_j\hat{G}^R\hat{v}_i(\hat{G}^R-\hat{G}^A)\right]. \end{eqnarray} Here, ${\rm Tr}$ denotes the trace over the orbital, spin and monolayer degrees of freedom and the sum over the Brillouin zone, $e=-|e|$ is the electron charge, and ${\hat v}_i=(1/\hbar)\partial_{{\bf k}_i}{\cal H}$ is the velocity operator. The local spin density on monolayer $\eta$ per unit electric field reads \begin{equation} {\bf S}_\eta=\frac{e\hbar}{2\pi}\int d\varepsilon \partial_\varepsilon f(\varepsilon){\rm Tr}\left[{\hat P}_\eta\otimes\hat{\bm\sigma}\hat{G}^R\hat{v}_i(\hat{G}^R-\hat{G}^A)\right], \end{equation} where ${\hat P}_\eta$ is the projector on monolayer $\eta$. 
By construction, the matrix elements of ${\hat P}_\eta$ are equal to the 5$\times$5 identity matrix $\mathds{I}_{5}$ at the position of layer $\eta$ and zero elsewhere, \begin{equation} {\hat P}_\eta= \left(\begin{matrix} \ddots && &&\\ &0& && \\ & &\mathds{I}_{5}&&\\ & &&0&\\ & && &\ddots\\ \end{matrix}\right). \end{equation} The torque per unit electric field is defined as \begin{equation} {\bf T}=-\frac{e\hbar}{2\pi}\int d\varepsilon \partial_\varepsilon f(\varepsilon){\rm Tr}\left[{\bf m}\times{\bm \Omega}_{\rm ex}\hat{G}^R\hat{v}_i(\hat{G}^R-\hat{G}^A)\right] \end{equation} where ${\bf m}\times{\bm \Omega}_{\rm ex}=-{\bf m}\times\partial_{\bf m}{\cal H}$ is the torque operator. In the remainder of the article, the conductivity of the slab is defined as $\sigma_{ij}/t$, $t$ being the thickness of the full heterostructure. The local spin density per unit electric field is expressed in $m^{-1}$ and the torque is expressed as a spin conductivity, in units of $(\hbar/2e)~\Omega^{-1}\cdot m^{-1}$. The disorder is accounted for through a homogeneous broadening $\Gamma$. Under this approximation, no higher-order scattering events are taken into account (e.g., skew scattering, spin swapping, etc.). \section{Current-driven spin-orbit torques in the FM/NM heterostructure} \subsection{Spin density profile\label{s:prof}} \begin{figure} \includegraphics[width=6cm]{Fig5.png} \caption{(Color online) Non-equilibrium in-plane spin density profile per unit electric field across the FM/NM bilayer. The shaded blue area refers to the FM region and the shaded yellow area refers to the NM region. The black solid line is the spin density when the spin-orbit coupling of both ferromagnetic and nonmagnetic layers is turned on, and the red solid line is the spin density when only the spin-orbit coupling of the ferromagnetic layer is on. 
Here the magnetization points perpendicular to the plane, along ${\bf z}$.\label{Fig2}} \end{figure} We first compute the current-driven spin density profile throughout the heterostructure, when the magnetization ${\bf m}$ points out of plane (${\bf m}\|{\bf z}$). The two in-plane components, $S_x$ and $S_y$, are given in Figs. \ref{Fig2}(a) and (b), respectively. The black curves correspond to the case where spin-orbit coupling is present in both ferromagnetic and nonmagnetic layers, while the red curves correspond to the case where only the ferromagnetic layer possesses spin-orbit coupling (see Section \ref{s:self}). When spin-orbit coupling is present in both ferromagnetic and nonmagnetic layers, we observe a clear accumulation of $S_x$ and $S_y$ components in the nonmagnetic metal. It is instructive to notice that the scale over which the spin density accumulates close to the interface is different for the two components. The $S_y$ component is localized close to the interface and vanishes quickly over about 10 monolayers (ML, corresponding to about 1.3 nm), while $S_x$ slowly decays over a few tens of ML (i.e., about 5 nm). Notice also that $S_x$ penetrates deeper into the ferromagnetic layer than $S_y$. This distinction suggests that $S_x$ is controlled by non-local transport processes (e.g., scattering and diffusion), while $S_y$ is much more localized at the interface. Finally, another important feature that distinguishes $S_x$ and $S_y$ is the presence of a non-vanishing $S_y$ component close to the outer surface of the nonmagnetic layer. Together, these features are consistent with the standard representation of the spin-orbit torque as arising from the diffusive spin Hall effect and an interfacial Rashba-like effect. 
As a result, one expects the torque to display two components, conventionally referred to as the field-like and damping-like components, reading \begin{eqnarray}\label{eq:fl} {\bf T}_{\rm FL}&=&\tau_{\rm FL}{\bf m}\times({\bf z}\times{\bf E}),\\ {\bf T}_{\rm DL}&=&\tau_{\rm DL}{\bf m}\times[({\bf z}\times{\bf E})\times{\bf m}].\label{eq:dl} \end{eqnarray} \begin{figure} \includegraphics[width=7cm]{Fig6.png} \caption{(Color online) Dependence of the two torque components, (a) field-like torque and (b) damping-like torque, as a function of the homogeneous broadening $\Gamma$, for different values of the transport energy, $E-E_{\rm f}=1$ eV (black), $E-E_{\rm f}=0$ eV (blue) and $E-E_{\rm f}=-1$ eV (red). The inset displays the slab conductivity. Here the magnetization points perpendicular to the plane, along ${\bf z}$.\label{Fig3}} \end{figure} We conclude this preliminary study by computing the torque exerted on the ferromagnetic layer as a function of the disorder, shown in Fig. \ref{Fig3}. The disorder-dependence of the torque components has been extensively used in previous studies to identify their physical origin \cite{Freimuth2014a,Li2015b}: a $1/\Gamma$-dependence, resembling that of the conductivity, suggests that extrinsic, intraband-dominated processes are involved, while a constant value when $\Gamma\rightarrow0$ indicates that intrinsic, interband-dominated processes govern the effect. Figure \ref{Fig3} displays the disorder-dependence of the (a) field-like and (b) damping-like components for three different Fermi energies, corresponding to different hybridization conditions as indicated by the dashed vertical lines in Fig. \ref{Fig1}(b). The conductivity and field-like torque both show a $1/\Gamma$-dependence, confirming the intraband and extrinsic origin of this component (see, e.g., Ref. \onlinecite{Li2015b}). 
The damping-like torque saturates for $\Gamma\rightarrow0$, as expected for an interband intrinsic effect, but shows a more irregular behavior and even a change of sign for large disorder strength. In summary, the disorder dependence computed in Fig. \ref{Fig3} is consistent with the previous calculations of spin-orbit torque, both assuming a model Hamiltonian \cite{Li2015b} and using realistic density functional theory \cite{Freimuth2014a}. \subsection{Thickness dependence\label{s:thickn}} We now address the thickness dependence of the two torque components, a property that has been investigated in numerous experiments \cite{Kim2013,Fan2014c,Pai2015,Skinner2014,Nguyen2016,Ghosh2017}. To the best of our knowledge, such a thickness dependence has not been computed within density functional theory due to the prohibitive numerical cost. Hence, it has only been addressed using phenomenological models based on drift-diffusion or Boltzmann transport equations \cite{Manchon2012,Haney2013b,Amin2016b,Fischer2016}. In these works, the inverse spin galvanic effect is modeled by an interfacial Rashba interaction and the spin Hall effect is modeled using bulk drift-diffusion (e.g., Refs. \onlinecite{Shchelushkin2005b,Pauyac2018}). These models disregard quantum and semiclassical size effects as well as higher-order scattering events such as spin swapping \cite{Saidaoui2015b,Saidaoui2016} and interfacial spin precession \cite{Amin2018}. The only physical mechanism giving rise to a non-trivial thickness dependence within these approaches is the spin relaxation in the nonmagnetic layer. In this context, the magnitude of both torque components follows a $\sim 1-1/\cosh(t_{\rm NM}/\lambda_{\rm sf})$ law, where $t_{\rm NM}$ is the nonmagnetic layer thickness and $\lambda_{\rm sf}$ is its spin relaxation length. This law has been confirmed, at least phenomenologically, in several experimental studies \cite{Kim2013,Hayashi2014} (see also Fig. 24 in Ref. \onlinecite{Manchon2019}). 
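For reference, the saturation scale implied by this drift-diffusion law is easily made explicit; in the sketch below, the spin relaxation length is an arbitrary illustrative value, not a fitted parameter.

```python
import numpy as np

def dd_torque(t_nm, lambda_sf):
    """Drift-diffusion thickness law: torque ~ 1 - 1/cosh(t_NM/lambda_sf)."""
    return 1.0 - 1.0 / np.cosh(t_nm / lambda_sf)

lam = 1.5                          # nm, illustrative spin relaxation length
t = np.linspace(0.0, 10.0, 201)    # nonmagnetic layer thickness in nm
tau = dd_torque(t, lam)

assert tau[0] == 0.0               # no torque for vanishing thickness
assert abs(tau[-1] - 1.0) < 1e-2   # saturation for t_NM >> lambda_sf

# 90% of the saturated torque is reached at t_NM = lambda_sf * arccosh(10),
# i.e. about 3 lambda_sf (here roughly 4.5 nm)
t90 = lam * np.arccosh(10.0)
assert abs(dd_torque(t90, lam) - 0.9) < 1e-12
```

The law therefore saturates on a scale of a few spin relaxation lengths, which sets the thickness range probed in the experiments cited above.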
However, at very small thicknesses ($\approx0.5$ nm for Ta substrate and $\approx2$ nm for Hf substrate), a change of sign of the torque components has been reported that remains unexplained \cite{Kim2013,Akyol2016,Ramaswamy2016}.\par \begin{figure} \includegraphics[width=8.5cm]{Fig7.png} \caption{(Color online) Transport properties upon varying the nonmagnetic layer thickness (left panels) and the ferromagnetic layer thickness (right panels). This figure shows the thickness dependence of (a,c) the field-like torque and (b,d) the damping-like torque. The curves are calculated for a magnetization pointing along ${\bf z}$ and for various disorder strengths, $\Gamma=10$ meV (black), $\Gamma=20$ meV (blue), $\Gamma=50$ meV (red), and $\Gamma=100$ meV (green).\label{Fig4}} \end{figure} Figure \ref{Fig4} shows the (a) field-like and (b) damping-like torques for various disorder strengths $\Gamma$ as a function of the thickness of the nonmagnetic metal. The corresponding conductivity is shown in the inset of Fig. \ref{Fig4bis}(b) for reference. It displays the usual $G_0/(1+3\lambda/8t)$ behavior expected in the semiclassical size effect regime \cite{Sondheimer2001}, which clearly indicates that the heterostructure does not enter the diffusive regime before the nonmagnetic layer thickness reaches about 10 nm, consistent with experimental reports \cite{Nguyen2016}. The field-like torque [Fig. \ref{Fig4}(a)] is mostly constant over the thickness range, displaying quantum oscillations over the first 20 monolayers ($\approx2.7$ nm) but keeping the same sign. In contrast, the damping-like torque [Fig. \ref{Fig4}(b)] progressively increases from a negative value to a positive one, before reaching saturation. The thickness at which the saturation is reached strongly depends on the disorder strength, suggesting that spin-dependent scattering plays an important role here. 
The change of sign occurs around 20 monolayers ($\approx2.7$ nm) and is weakly sensitive to the disorder, suggesting a transition between two "intrinsic" (i.e., band structure driven) mechanisms of opposite signs. This sign change is similar to that observed experimentally \cite{Kim2013,Akyol2016,Ramaswamy2016}. Since our model does not account for complex scattering events, we suggest that this change of sign is associated with the competition between the interfacial Berry-curvature induced damping-like torque \cite{Kurebayashi2014} and the spin Hall effect coming from the bulk of the nonmagnetic material. Since the Berry-curvature induced damping-like torque is an interfacial effect, it does not significantly depend on the nonmagnetic metal thickness. On the contrary, the contribution to the damping-like torque from the spin Hall effect necessitates a nonmagnetic layer thickness larger than the spin relaxation length to be efficient and compensate the interfacial Berry-curvature induced contribution. One last remark is in order: in our simulation, the spin Hall and Berry-curvature induced contributions have opposite signs. However, we speculate that this is accidental, as the spin Hall-driven contribution is controlled by the interplay between spin-orbit coupling and band filling as governed by Hund's third rule \cite{Tanaka2008,Freimuth2010}, whereas the interfacial Berry-curvature contribution is governed by the interfacial potential drop. This feature is therefore not general.\par \begin{figure} \includegraphics[width=6cm]{Fig8.png} \caption{(Color online) Efficiency of the (a) field-like torque and (b) damping-like torque as a function of the nonmagnetic layer thickness. 
The curves are calculated for a magnetization pointing along ${\bf z}$ and for various disorder strengths, $\Gamma=10$ meV (black), $\Gamma=20$ meV (blue), $\Gamma=50$ meV (red), and $\Gamma=100$ meV (green).\label{Fig4bis}} \end{figure} It is instructive to consider the thickness dependence of the torque efficiency, defined as the ratio between the torque and the conductivity of the heterostructure. This efficiency would be equivalent to the spin Hall angle if only the spin Hall effect were present in the structure. The efficiency of the field-like and damping-like torques is reported in Fig. \ref{Fig4bis}(a) and (b), respectively, while the conductivity of the heterostructure is shown in the inset of (b), for reference. It is clear that the field-like torque efficiency is much larger for small thicknesses, as the current density is concentrated close to the interface. A similar feature is obtained for the damping-like torque efficiency. It is noticeable that the efficiency drops significantly within the first 10-15 monolayers ($\approx 2$ nm), showing that quantum confinement can be beneficial for spin-orbit torque. To complete this study, let us now consider the influence of the ferromagnetic layer thickness. Experimentally, it is found that the field-like component decreases strongly with the ferromagnetic layer thickness while the damping-like component remains mostly constant \cite{Kim2013}. We observe a similar feature in our calculations, shown in Fig. \ref{Fig4}(c) and (d). The field-like component increases upon increasing the ferromagnetic layer thickness and saturates after about 10 monolayers. The damping-like component displays a similar increase as a function of the ferromagnetic layer thickness, but it also exhibits large quantum oscillations, which makes the systematic increase more difficult to see at first glance. 
This behavior is associated with the absorption of the transverse spin current by the ferromagnetic layer over the spin dephasing length. If the ferromagnetic layer is thinner than the spin dephasing length, the injected spin current (or, equivalently, the spin density smearing into the ferromagnetic layer) is not entirely absorbed and is partly reflected back into the nonmagnetic layer, resulting in a reduced torque. Upon increasing the ferromagnetic layer thickness, more spin current is absorbed, resulting in an increase and saturation of the torque (see, e.g., Ref. \onlinecite{Zwierzycki2005}). This scenario was recently confirmed experimentally \cite{Qiu2016}, but cannot be properly modeled using drift-diffusion theories due to the importance of quantum oscillations in this thickness range \cite{Haney2013b,Amin2016b}. \subsection{Angular dependence} The calculations presented above were all performed by setting the magnetization along ${\bf z}$. Yet, several experimental\cite{Garello2013,Qiu2015,Safranski2019} and theoretical studies\cite{Lee2015,Pauyac2013,Hals2014,Zelezny2017,Belashchenko2019} have pointed out that the spin-orbit torque does not reduce to the forms given in Eqs. \eqref{eq:fl}-\eqref{eq:dl}. For the highest $C_{\infty}$ symmetry, Belashchenko et al.\cite{Belashchenko2019} proposed that the spin-orbit torque be written \begin{eqnarray}\label{eq:garello} {\bf T}&=&-P_\theta^A{\bf m}\times({\bf z}\times{\bf E})+P_\theta^{A'}({\bf m}\cdot{\bf E}){\bf m}\times({\bf z}\times{\bf m})\\ &&-P_\theta^{B}{\bf m}\times[({\bf z}\times{\bf E})\times{\bf m}]+P_\theta^{B'}({\bf m}\cdot{\bf E}){\bf m}\times{\bf z}+...\nonumber \end{eqnarray} where $P_\theta^X=\sum_nX_{2n}P_{2n}(\cos\theta)$, $P_{2n}(x)$ being the Legendre polynomials. The first and third terms are simply the conventional field-like and damping-like torques. The second and fourth terms can be referred to as "planar" field-like and "planar" damping-like torques, respectively. 
These components are non-zero only when the magnetization has a component along the applied electric field. In Ref. \onlinecite{Pauyac2013}, these two planar components were obtained analytically and related to the presence of D'yakonov-Perel' anisotropic spin relaxation. As a matter of fact, in such ultrathin magnetic heterostructures the spin component pointing perpendicular to the plane of the interface and the component pointing in-plane relax at different rates, which modifies the overall spin dynamics at the interface and results in these additional torque components.\par To evaluate this angular anisotropy, we computed the two components $T_x$ and $T_y$ while varying the magnetization in the ($y,z$) and ($z,x$) planes. From Eq. \eqref{eq:garello}, we expect \begin{eqnarray}\label{eq:garello2} T_x/\cos\theta&=&P_\theta^A,\\ T_y/\cos^2\theta&=&P_\theta^B \end{eqnarray} when the magnetization rotates in the ($y,z$) plane, and \begin{eqnarray}\label{eq:garello3} T_x/\cos\theta&=&P_\theta^A-\sin^2\theta P_\theta^{A'},\\ T_y&=&P_\theta^B+\sin^2\theta P_\theta^{B'} \end{eqnarray} when the magnetization rotates in the ($z,x$) plane. By fitting these angular dependences using Legendre polynomials, we obtain the first four components of the expansion of Eq. \eqref{eq:garello}. These components are reported in Fig. \ref{fig:AngularFit} upon varying the thickness of the nonmagnetic metal. \begin{figure} \includegraphics[width=8.5cm]{Fig9.png} \caption{(Color online) Legendre expansion coefficients as a function of the thickness of the nonmagnetic layer. These coefficients correspond to (a) the conventional field-like torque, (b) the conventional damping-like torque, (c) the planar field-like torque, and (d) the planar damping-like torque.\label{fig:AngularFit}} \end{figure} The conventional field-like torque, reported in Fig. \ref{fig:AngularFit}(a), dominates all the other components and exhibits almost no angular dependence ($A_0\gg A_{2,4,6}$).
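The Legendre fitting procedure described above can be sketched numerically. The sketch below is illustrative only: the coefficient values are placeholders rather than the paper's results, and it simply assumes the torque follows the even-order Legendre expansion $P_\theta^X=\sum_n X_{2n}P_{2n}(\cos\theta)$.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Illustrative even-order coefficients A_{2n}; NOT values from the paper.
A_true = np.zeros(7)
A_true[[0, 2, 4, 6]] = [1.0, 0.3, 0.05, 0.01]

theta = np.linspace(0.1, np.pi - 0.1, 400)
x = np.cos(theta)
x = x[np.abs(x) > 0.1]  # avoid the pole of T_x / cos(theta) at theta = pi/2

# P_theta^A = sum_n A_{2n} P_{2n}(cos theta), so T_x = cos(theta) * P_theta^A
T_x = x * L.legval(x, A_true)

# Recover the expansion by a least-squares Legendre fit of T_x / cos(theta)
A_fit = L.legfit(x, T_x / x, deg=6)
```

Because the synthetic data are an exact degree-6 Legendre series in $\cos\theta$, the fit recovers the even coefficients and returns vanishing odd ones, mirroring the expected $\theta\to\pi-\theta$ symmetry.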
The conventional damping-like torque, reported in Fig. \ref{fig:AngularFit}(b), is about one order of magnitude smaller, exhibits the sign reversal discussed previously, and displays a sizable angular dependence at small thicknesses ($B_2\gg B_{4,6}$). This angular dependence vanishes upon increasing the thickness of the nonmagnetic layer. Interestingly, the two ``planar'' components exhibit a radically different behavior. First of all, both components are comparable in magnitude with the damping-like torque, which means that they play a crucial role in current-driven dynamics and cannot be neglected. Second, the planar field-like torque [Fig. \ref{fig:AngularFit}(c)] exhibits a substantial angular dependence ($A_{0}'\approx A_{2}'\gg A'_{4,6}$) that saturates after only a few monolayers. This indicates that this component is mostly of interfacial origin, in agreement with the D'yakonov-Perel' scenario invoked in Ref. \onlinecite{Pauyac2013}. Finally, the planar damping-like torque presents a surprising behavior [Fig. \ref{fig:AngularFit}(d)]. It displays almost no angular dependence (except at small thicknesses), and increases steadily over a few tens of monolayers before reaching saturation at large thicknesses. This progressive saturation is similar to the one expected for the spin Hall-driven damping torque originating from the nonmagnetic layer, as mentioned in Section \ref{s:thickn}. Therefore, the results reported in Fig. \ref{fig:AngularFit} suggest that the planar field-like torque is associated with an interfacial effect and can be seen as the companion of the conventional interfacial (Rashba) field-like torque, while the planar damping-like torque is associated with bulk mechanisms and accompanies the conventional (spin Hall-driven) damping-like torque. To complete this discussion, we emphasize that \citet{Safranski2019} reported a planar Hall torque that they attributed to the planar Hall effect from the bulk of the ferromagnet.
In our case, the spin-orbit coupling of the ferromagnet remains quite small (see Section \ref{s:self}) and it is unlikely that such a mechanism contributes to the torques reported in Fig. \ref{fig:AngularFit}. \section{Self-torque in the ferromagnet \label{s:self}} To complete this study, we now turn off the spin-orbit coupling of the nonmagnetic layer. The spatial profile of the spin density is shown in Fig. \ref{Fig2}, red curves. The features described above survive: $S_x$ is more delocalized than $S_y$, although their magnitudes are much weaker (by a factor of three to four) than in the case where spin-orbit coupling is present in both layers. The thickness dependence is shown in Fig. \ref{Fig10}, blue curves. The black curves represent the case where the spin-orbit coupling is present in both layers and serve as a reference. The field-like torque starts slightly positive [Fig. \ref{Fig10}(a)], switches sign at about 10 monolayers, and grows increasingly negative until saturating at about 50 monolayers. The damping-like torque shows a similar behavior. It also starts slightly positive [Fig. \ref{Fig10}(b)], switches sign at about 15 monolayers, and grows increasingly negative until saturating at about 60 monolayers. It is interesting to note that the self-field-like torque has the same sign as in the case where spin-orbit coupling is present everywhere, whereas the self-damping-like torque is opposite. \begin{figure} \includegraphics[width=8cm]{Fig10.png} \caption{(Color online) Spin-orbit torque components upon varying the nonmagnetic layer thickness (left panels) and the ferromagnetic layer thickness (right panels), when spin-orbit coupling is present in both nonmagnetic and ferromagnetic layers (black) and when it is present only in the ferromagnetic layer (blue).
The curves are calculated for a magnetization pointing along ${\bf z}$ and for $\Gamma=50$ meV.\label{Fig10}} \end{figure} To understand this distinct behavior, we compute the dependence of the anomalous Hall conductivity and torque components as a function of the spin-orbit coupling energy of the individual layers. The results are reported in Fig. \ref{Fig11}. The black lines correspond to the case where the spin-orbit coupling is in the nonmagnetic metal only, whereas the blue lines correspond to the case where the spin-orbit coupling is in the ferromagnet only. The anomalous Hall conductivity [Fig. \ref{Fig11}(a)] and damping-like torque [Fig. \ref{Fig11}(c)] both change sign depending on which layer possesses spin-orbit coupling. In contrast, the field-like torque remains negative, irrespective of where the spin-orbit coupling is [Fig. \ref{Fig11}(b)].\par The field-like torque, as explained in Section \ref{s:porbitals}, is associated with the interfacial, Rashba-like spin-orbit coupling, whose sign is governed by the interfacial potential drop. Therefore, for a given spin-orbit coupling strength, its sign is opposite on the two sides of the interface [see Eq. \eqref{eq:rashba}]. This seems to contradict the results of Fig. \ref{Fig11}(b) and suggests that the sign of the spin-orbit coupling experienced by the Bloch states of the nonmagnetic layer is opposite to the one experienced by the Bloch states of the ferromagnet. This observation is consistent with Hund's third rule, which states that for materials with more-than-half-filled electronic shells such as Fe, the spin and orbital momenta are aligned with each other, while for materials with less-than-half-filled electronic shells like W, they are anti-aligned. Since our tight-binding model is parameterized for these two elements, it is reasonable that Hund's third rule applies.
As a consequence, the opposite potential drop felt by Bloch states on each side of the interface is compensated by the opposite effective spin-orbit coupling, and the field-like torque is the same whether the spin-orbit coupling is on the ferromagnet or on the nonmagnetic layer.\par In contrast, the damping-like torque at large thicknesses and the anomalous Hall conductivity are not associated with the interfacial potential drop, but rather with the (spin) Berry curvature of the bulk material. They are therefore solely governed by the effective spin-orbit coupling experienced by the Bloch electrons and are opposite when switching the spin-orbit coupling from the ferromagnet to the nonmagnetic metal. \begin{figure} \includegraphics[width=9cm]{Fig11.png} \caption{(Color online) Dependence of the (a) anomalous Hall conductivity, (b) field-like and (c) damping-like components of the spin-orbit torque upon varying the spin-orbit coupling $\xi_{\rm so}$. The black lines represent the case where the spin-orbit coupling of the ferromagnet is set to zero, whereas the blue lines represent the case where the spin-orbit coupling of the nonmagnetic metal is set to zero. In this calculation, we set the nonmagnetic metal thickness to 40 monolayers and the ferromagnet thickness to 7 monolayers. The curves are calculated for a magnetization pointing along ${\bf z}$ and for $\Gamma=50$ meV.\label{Fig11}} \end{figure} The dependence as a function of the ferromagnetic layer thickness is reported in Fig. \ref{Fig10}(c) and (d) for the field-like and damping-like torques, respectively. We obtain a thickness dependence similar to the case where spin-orbit coupling is present in both layers, reflecting the importance of the spin dephasing length.
Using our realistic parameters, the self-torque we obtain is about four to five times smaller in magnitude than the torque arising from the nonmagnetic metal, consistent with the relative magnitude of the spin-orbit coupling (about 65 meV in Fe compared to 360 meV in W). These calculations support an idea that was put forward in Ref. \onlinecite{Pauyac2018}: the spin Hall current generated inside the ferromagnetic layer can create an efficient torque on the magnetic order as long as the two opposite interfaces are dissimilar. \section{Conclusion} Using a multi-orbital tight-binding model, we computed the spin-orbit torque in a transition metal heterostructure, treating bulk and interfacial spin-orbit effects coherently and on an equal footing. Thickness and angular dependences of the torque show that it possesses four sizable components, the conventional field-like and damping-like torques, as well as two planar components that vanish when the magnetization lies out-of-plane. The conventional field-like torque is entirely controlled by the interface, as expected from the interfacial inverse spin galvanic effect, while the damping-like torque possesses two components, an interfacial one dominating at small thicknesses and a bulk contribution dominating at large thicknesses. The former is attributed to the intrinsic interfacial Berry-curvature-driven damping torque \cite{Kurebayashi2014}, whereas the latter is associated with the spin Hall effect generated in the bulk of the nonmagnetic metal.\par Interestingly, the planar field-like torque shows substantial angular dependence and is of interfacial origin, like the conventional field-like torque. In contrast, the planar damping-like torque does not exhibit angular dependence and increases with the nonmagnetic metal thickness, indicating that it originates from the bulk of the nonmagnetic layer, similarly to the conventional spin Hall-driven damping torque.
Our results demonstrate that these four torque components are present in any transition metal heterostructure and must be taken into account when interpreting the experimental data, and in particular the current-driven magnetization dynamics.\par Finally, we investigated the self-torque exerted on the ferromagnet when the spin-orbit coupling of the nonmagnetic metal is turned off. Our results suggest that the spin accumulation that builds up inside the ferromagnet can be large enough to induce magnetization excitations. \acknowledgments This work was supported by the King Abdullah University of Science and Technology (KAUST) through the Office of Sponsored Research (OSR) [Grant Number OSR-2017-CRG6-3390].
\section{Introduction} Since the discovery of superconductivity at 26 K in the oxy-pnictide LaFeAsO$_{1-x}$F$_x$\cite{Hosono}, enormous interest has been stimulated in the fields of condensed matter physics and materials science. Among the several types of iron-based superconductors with different structures\cite{Hosono,Rotter,ChuCW,WangXC,ChuCW2,WuMK,Cava,VFeAs21311}, FeSe with the PbO structure has received special attention since its structure is simpler than that of the other iron pnictide superconductors. However, the superconducting transition temperature (T$_c$) in iron chalcogenide compounds did not reach values as high as in the iron pnictide superconductors at ambient pressure until superconductivity above 30 K was discovered in K$_x$Fe$_{2-y}$Se$_2$\cite{ChenXL}. Both the insulating and the superconducting state are observed in K$_x$Fe$_{2-y}$Se$_2$ with different stoichiometries, and some groups have tuned the system from insulating to superconducting by varying the ratio of starting materials\cite{ChenGF,FangMH,ChenXH}. Here we present two new tuning methods: direct quenching, or post-annealing followed by quenching. On one hand, by directly quenching at different furnace temperatures during the cooling process of growth, we obtain a series of K$_x$Fe$_{2-y}$Se$_2$ samples with different superconducting properties, while the sample cooled slowly with the furnace is non-superconducting and insulating. On the other hand, by post-annealing and then quenching we can tune the previously insulating K$_x$Fe$_{2-y}$Se$_2$ sample back into the superconducting state, a method we report here for the first time that has since been confirmed by another group\cite{Petrovic}. We also find that this tuning is reversible: the superconducting state disappears about 20 days later and the insulating state reappears in the post-annealed and quenched crystals.
As mentioned above, we think that quenching is important to the appearance of superconductivity, and that the superconducting state, which needs to be frozen in by quenching, is metastable. \section{Sample preparation} Using the self-flux method, we successfully grew high-quality single-crystalline samples of K$_x$Fe$_{2-y}$Se$_2$. First, FeSe powders were obtained by the chemical reaction method from Fe powders (purity 99.99\%) and Se powders (purity 99.99\%). Then the starting materials, in the fixed ratio K:FeSe = 0.8:2, were placed in an alumina crucible and sealed in a quartz tube under vacuum. All the weighing, mixing, grinding, and pressing procedures were carried out in a glove box under argon atmosphere with the moisture and oxygen levels below 0.1 ppm. The contents were then heated to 1030 $^o$C for 3 hours. Subsequently the furnace was cooled down to 750 $^o$C at a rate of 5 $^o$C/h. Below 750 $^o$C, the sample cooled with the furnace was kept inside and cooled slowly to room temperature, while the directly quenched samples were taken out of the furnace and quenched in air at different furnace temperatures. We cleaved some crystals from the sample cooled with the furnace in the glove box, put them in a one-end-sealed quartz tube, and sealed the other end of the quartz tube with a closed valve. The valve was then connected to a pump and opened under vacuum. To protect the crystals from heat, the quartz tube was wrapped in wet paper. These precautions allowed the quartz-tube sealing procedure to be performed without exposing the crystals to air or heating them. After tube sealing, a post-annealing procedure was carried out on the crystals (enclosed in the evacuated quartz tube) with a heating plate at different temperatures for 1 hour, and then the crystals were rapidly removed from the heating plate.
\section{Experimental data and discussion} The X-ray diffraction (XRD) measurements of our samples were carried out on a Mac-Science MXP18A-HF diffractometer with a scanning range of 10$^\circ$ to 80$^\circ$ and a step of 0.01$^\circ$. The DC magnetization measurements were performed with a superconducting quantum interference device (Quantum Design, SQUID, MPMS-7T). The resistance data were collected using a four-probe technique on a Quantum Design physical property measurement system (PPMS-9T) with magnetic fields up to 9$\;$T. The electric contacts were made using silver paste at room temperature. The data acquisition used the DC mode of the PPMS, which measures the voltage under a DC current of alternating polarity; the sample resistivity is obtained by averaging these signals at each temperature. In this way the thermoelectric power of the contacts is naturally removed. The temperature stabilization was better than 0.1$\%$ and the resolution of the voltmeter was better than 10$\;$nV. \subsection{Direct quenching} \begin{figure} \includegraphics[width=13cm]{Fig1.eps} \caption{(Color online) Temperature dependence of the dc magnetization for the sample cooled with the furnace and the samples directly quenched at about 200 $^o$C, 300 $^o$C, and 400 $^o$C, respectively. The measurements were carried out under a magnetic field of 50 Oe in zero-field-cooled (ZFC) and field-cooled (FC) processes with the field parallel to the ab-plane.} \label{fig1} \end{figure} In Fig.1 we show the temperature dependence of the dc magnetization for the sample cooled with the furnace and the samples directly quenched at about 200 $^o$C, 300 $^o$C, and 400 $^o$C, respectively. The measurements were carried out under a magnetic field of 50 Oe in zero-field-cooled (ZFC) and field-cooled (FC) processes with the field parallel to the ab-plane. A paramagnetic signal is observed in the sample cooled with the furnace, and there is no diamagnetic signal in the low-temperature regime.
A weak diamagnetic signal, corresponding to superconductivity, appears below about 28 K in the sample directly quenched at 200 $^o$C. When the quenching temperature increases to above 300 $^o$C, strong diamagnetic signals appear below about 31.5 K. We find that the quenching temperature has an important influence on the diamagnetic signal. In contrast, the transition temperature does not depend strongly on the quenching temperature, since the diamagnetic signals all appear below 28--32 K in the samples directly quenched at 200 $^o$C, 300 $^o$C, and 400 $^o$C. \begin{figure} \includegraphics[width=13cm]{Fig2.eps} \caption{(Color online) Temperature dependence of the resistivity for the sample cooled with the furnace and the samples directly quenched at about 200 $^o$C, 300 $^o$C, and 400 $^o$C, respectively.} \label{fig1} \end{figure} Electrical resistivity measurements show that the non-superconducting sample cooled with the furnace has an insulating behavior, as shown in the top panel of Fig.2. The sample directly quenched at 200 $^o$C has a superconducting transition at 32.5 K and a hump-like anomaly at 150 K in the curve of $\rho$(T). The samples directly quenched at 300 $^o$C and 400 $^o$C are also superconducting, with the same T$_c$ as the sample directly quenched at 200 $^o$C, and the hump-like anomaly shifts to about 250 K. We find that the absolute value of the resistivity decreases with the quenching temperature increasing from 200 to 400 $^o$C, which strongly supports the picture that K$_x$Fe$_{2-y}$Se$_2$ is a phase-separated system composed of a metallic phase and an insulating phase. We performed the magnetization and resistivity measurements several times and found both results reproducible, which indicates that the insulating behavior of the sample cooled with the furnace and the superconductivity of the directly quenched samples are bulk properties.
To investigate what effect the quenching procedure has on the K$_x$Fe$_{2-y}$Se$_2$ samples and why superconductivity appears, we carried out X-ray diffraction on these samples and used inductively coupled plasma (ICP) analysis to determine the stoichiometries of the K$_x$Fe$_{2-y}$Se$_2$ samples. \begin{figure} \includegraphics[width=13cm]{Fig3.eps} \caption{(Color online) X-ray diffraction patterns showing the (00$l$) reflections from the basal plane of the K$_x$Fe$_{2-y}$Se$_2$ sample cooled with the furnace and the samples directly quenched at 200 $^o$C, 300 $^o$C, and 400 $^o$C.} \label{fig1} \end{figure} As shown in Fig.3, the peaks from the (00$l$) reflections remain very sharp, indicating excellent crystalline quality, and we find hardly any obvious shift among these peaks. However, the peaks marked by the asterisks in the directly quenched samples seem to shift closer to the nearby (008) and (00$\overline{10}$) peaks than in the sample cooled with the furnace, which possibly indicates that the two weak peaks come from a superlattice of iron-vacancy order and that the phase with iron-vacancy order diminishes after quenching.
\begin{table}[!h] \tabcolsep 0pt \caption{Stoichiometries and iron valences for the sample cooled with the furnace and the samples directly quenched at different temperatures, with the nominal stoichiometry of all samples fixed as K$_{0.8}$Fe$_{2}$Se$_{2}$.} \vspace*{6pt} \begin{center} \def\temptablewidth{1\textwidth} {\rule{\temptablewidth}{1pt}} \begin{tabular*}{\temptablewidth}{@{\extracolsep{\fill}}ccc} Quenching temperature & Stoichiometry & Valence of iron\\ \hline Cooled with furnace &K$_{0.80}$Fe$_{1.69}$Se$_{2}$ &1.890 \\ 200 $^o$C &K$_{0.76}$Fe$_{1.71}$Se$_{2}$ &1.895 \\ 300 $^o$C &K$_{0.78}$Fe$_{1.70}$Se$_{2}$ &1.894 \\ 400 $^o$C &K$_{0.76}$Fe$_{1.70}$Se$_{2}$ &1.906 \\ \end{tabular*} {\rule{\temptablewidth}{1pt}} \end{center} \end{table} We find that the actual compositions and the iron valences of all four samples are very similar to each other. It is noteworthy that the iron valences are all located in the non-superconducting region of the electronic and magnetic phase diagram of K$_x$Fe$_{2-y}$Se$_2$ as a function of iron valence\cite{ChenXH}. Based on this point, we think that the phase diagram as a function of iron valence does not resolve what determines whether K$_x$Fe$_{2-y}$Se$_2$ is superconducting. There must be a more essential factor at work than the iron valence. We notice that different groups, including ours, obtained different results even when they prepared samples with the same nominal stoichiometry. When the nominal stoichiometry was K$_{0.8}$Fe$_2$Se$_2$, some groups saw clear superconductivity in samples not quenched from high temperatures, with actual compositions of K$_{0.78}$Fe$_{1.70}$Se$_2$\cite{ChenXL} and K$_{0.80}$Fe$_{1.76}$Se$_2$\cite{HuRW,HuRW3}, respectively. The iron valences of these two samples were also located in the non-superconducting region of the electronic and magnetic phase diagram.
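The tabulated iron valences follow from simple charge neutrality, assuming formal charges of K$^{+}$ and Se$^{2-}$: $x + v(2-y) = 4$, so $v = (4-x)/(2-y)$. A quick numerical check reproduces the table to within rounding of the measured compositions (the furnace-cooled value comes out as 1.893 against the tabulated 1.890):

```python
# Charge neutrality for K_x Fe_{n_Fe} Se_2 with formal charges K^+ and Se^2-:
#   x + v * n_Fe = 2 * 2   =>   v = (4 - x) / n_Fe
def fe_valence(x, n_fe):
    """Average Fe valence inferred from the measured composition K_x Fe_{n_fe} Se_2."""
    return (4.0 - x) / n_fe

# Measured compositions from the table above: (x, n_Fe)
compositions = {
    "cooled with furnace": (0.80, 1.69),
    "quenched 200 C":      (0.76, 1.71),
    "quenched 300 C":      (0.78, 1.70),
    "quenched 400 C":      (0.76, 1.70),
}
valences = {k: round(fe_valence(x, n), 3) for k, (x, n) in compositions.items()}
```

All four values cluster in the narrow 1.89--1.91 range, which is the point made in the text: the valence alone cannot distinguish superconducting from non-superconducting samples.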
The authors of the phase diagram reported that their K$_{0.8}$Fe$_2$Se$_2$ sample was also superconducting and had an actual composition of K$_{0.73}$Fe$_{1.67}$Se$_2$. In sharp contrast to their results, we never observe superconductivity in unquenched samples when the nominal composition is likewise K$_{0.8}$Fe$_2$Se$_2$ and the actual composition, K$_{0.78}$Fe$_{1.70}$Se$_2$, is similar to one of the actual compositions reported by other groups. Considering these differing results, we think that the properties of K$_x$Fe$_{2-y}$Se$_2$ are sensitive to the preparation conditions. Conditions such as the quality of the quartz tube and the thermal insulation of the furnace all influence the properties of K$_x$Fe$_{2-y}$Se$_2$. For instance, the different actual compositions of K$_x$Fe$_{2-y}$Se$_2$ among the samples prepared by different groups may be caused by the different qualities of the quartz tubes, which lead to different amounts of K loss. The different transport properties of K$_x$Fe$_{2-y}$Se$_2$, even among unquenched samples with similar actual compositions, may be caused by the different thermal insulation of the furnaces, which results in different cooling rates after the furnaces are turned off. We kept the preparation conditions consistent, so our results are reproducible, credible, and not contradictory to the differing experimental results of other groups. \subsection{Post-annealing and then quenching} \begin{figure} \includegraphics[width=13cm]{Fig4.eps} \caption{(Color online) Temperature dependence of the dc magnetization for the crystals post-annealed and then quenched at about 200 $^o$C, 300 $^o$C, and 400 $^o$C, which were cleaved in advance from the insulating sample cooled with the furnace (the No.1 sample).
The measurements were carried out under a magnetic field of 50 Oe in zero-field-cooled (ZFC) and field-cooled (FC) processes with the field parallel to the ab-plane.} \label{fig1} \end{figure} By post-annealing and then quenching at 200 $^o$C, 300 $^o$C, and 400 $^o$C, we tune the No.1 sample from insulating to superconducting. In Fig.4 we show the temperature dependence of the dc magnetization for the crystals post-annealed and then quenched at about 200 $^o$C, 300 $^o$C, and 400 $^o$C, respectively. The measurements were carried out under a magnetic field of 50 Oe in zero-field-cooled (ZFC) and field-cooled (FC) processes with the field parallel to the ab-plane. No diamagnetic signal is observed in the crystal post-annealed and quenched at 200 $^o$C. When the annealing and quenching temperature increases to above 300 $^o$C, diamagnetic signals begin to appear below about 26 K. We find that the annealing and quenching temperature has an important influence on the diamagnetic signal, as in the case of direct quenching. However, the diamagnetic signals of the crystals post-annealed and then quenched are not as strong as those of the samples directly quenched at the same quenching temperatures. \begin{figure} \includegraphics[width=13cm]{Fig5.eps} \caption{(Color online) Temperature dependence of the resistivity for the crystals post-annealed and then quenched at about 200 $^o$C, 300 $^o$C, and 400 $^o$C, which were cleaved from the No.1 sample in advance.} \label{fig1} \end{figure} Electrical resistivity measurements show that the transport properties of the post-annealed and quenched crystals differ from those of the No.1 sample, as shown in Fig.5. The crystal post-annealed and quenched at 200 $^o$C shows a semiconducting/insulating behavior from 30 to 300 K, and the resistivity drops when the temperature decreases below 30 K. This drop may imply that more of the metallic phase, corresponding to superconductivity, is achieved.
The crystal post-annealed and quenched at 300 $^o$C has been tuned to the superconducting state, and there is a hump-like anomaly at 155 K in the curve of $\rho$(T). The crystal post-annealed and quenched at 400 $^o$C is also superconducting, and the hump-like anomaly shifts to about 250 K. We also find that the absolute value of the resistivity decreases with increasing quenching temperature from 200 to 400 $^o$C. \begin{figure} \includegraphics[width=13cm]{Fig6.eps} \caption{(Color online) X-ray diffraction patterns showing the (00$l$) reflections from the basal plane of the K$_x$Fe$_{2-y}$Se$_2$ sample cooled with the furnace and the crystals post-annealed and then quenched at about 200 $^o$C, 300 $^o$C, and 400 $^o$C. The crystals undergoing the different treatment processes were cleaved from the non-superconducting sample cooled with the furnace.} \label{fig1} \end{figure} We also carried out X-ray diffraction on these crystals. As shown in Fig.6, the peaks from the (00$l$) reflections remain very sharp after annealing. We again find hardly any obvious shift among these peaks, and the peaks marked by the asterisks seem to shift closer to the nearby (008) and (00$\overline{10}$) peaks after annealing and quenching. This feature is similar to the case of direct quenching. \begin{figure} \includegraphics[width=13cm]{Fig7.eps} \caption{(Color online) Temperature dependence of the dc magnetization and resistivity of the crystal post-annealed and quenched at 400 $^o$C, measured after 20 days.} \label{fig1} \end{figure} Surprisingly, the post-annealed and quenched crystals lost their superconductivity after a period of time, for example, 20 days. During this period the crystals were always kept in the argon atmosphere. As shown in Fig.7, after 20 days the strong diamagnetic signal has disappeared and the insulating state has returned in the crystal post-annealed and quenched at 400 $^o$C (the No.7 crystal).
Obviously, the superconducting state tuned from the insulating state by post-annealing and quenching is unstable. However, no time dependence of the superconductivity is observed in our directly quenched samples or reported by other groups, which suggests that the freezing effect of post-annealing and quenching is temporary and less effective than that of direct quenching. \section{Conclusions} In summary, we find that the samples directly quenched during the cooling process of growth are superconducting while the one cooled with the furnace is insulating, and that the latter can be tuned from insulating to superconducting by post-annealing and then quenching. In addition, the actual compositions and the iron valences of all the non-superconducting and superconducting samples are very similar to each other, since we fixed the nominal stoichiometries in the preparation process. Based on these two factors, we conclude that the superconducting state in K$_x$Fe$_{2-y}$Se$_2$ is metastable, and that quenching is the key to achieving the superconducting state. The similar stoichiometries of all the non-superconducting and superconducting samples also indicate that the iron valence does not play a decisive role in determining whether a K$_x$Fe$_{2-y}$Se$_2$ sample is superconducting. Our resistivity results indicate that K$_x$Fe$_{2-y}$Se$_2$ is a phase-separated system composed of a metallic phase and an insulating phase. Our XRD results suggest that there is a superlattice of iron-vacancy order in this system and that the phase with iron vacancies is less abundant in the superconducting samples than in the insulating samples. All these results support the previously reported M\"{o}ssbauer result that the superconductivity in K$_x$Fe$_{2-y}$Se$_2$ comes from a minority phase without a large moment, and that the long-range magnetic order belongs to a non-superconducting majority phase\cite{HuRW2}.
Combining this with the results obtained on K$_x$Fe$_{2-y}$Se$_2$ thin films prepared by molecular beam epitaxy (MBE)\cite{XueQK}, we argue that our superconducting samples partly correspond to the phase without iron vacancies seen by scanning tunneling microscopy (STM), while the insulating samples mainly correspond to the phase with $\sqrt{5}\times\sqrt{5}$ iron-vacancy order. Quenching may play the role of freezing in the phase without iron vacancies. Therefore, whether a K$_x$Fe$_{2-y}$Se$_2$ sample contains the phase without iron vacancies is essential for achieving superconductivity, whereas the iron valence does not play an important role. Varying the ratio of starting materials to tune the iron valence and quenching a sample with fixed stoichiometry are both effective ways to obtain the superconducting state. There is perhaps a difference: the superconducting state tuned by varying the ratio of starting materials is more stable, while the superconducting state frozen in by quenching is metastable. \section*{Acknowledgements} This work is supported by the Natural Science Foundation of China and the Ministry of Science and Technology of China (973 project: 2011CBA00102).
\section{Introduction} Very luminous X-ray sources with X-ray luminosities ($L_{\rm X}$) in the range $10^{38}$--$10^{39}$ ergs s$^{-1}$ are often binaries with stellar mass black holes or, less frequently, neutron stars as primaries. In contrast, ultraluminous X-ray sources (ULXs) with $L_{\rm X} \sim 10^{40}-10^{41}$ ergs s$^{-1}$ are of great interest because their bolometric luminosities surpass the Eddington limit for a 100 $M_\odot$ black hole ($1.4\times10^{40}$ ergs~s$^{-1}$). Although these unresolved, extragalactic, off-nuclear ultraluminous sources were first discovered with the {\it Einstein X-ray Observatory} \citep{FABBIANO1}, systematic studies of ULXs and similar luminous sources have been made possible only recently by the high angular resolution afforded by the {\it Chandra X-ray Observatory} \citep{HUMPHREY1,SWARTZ1}. While some of these objects may be associated with recent supernovae, or in some cases misidentified background active galactic nuclei \citep{SWARTZ1}, there exist ULXs that exhibit variability over short time scales, indicating a compact nature \citep[e.g.,][]{STROHMAYER1}. These compact ULXs may be binary systems containing black holes with masses of 10$^2$--10$^4$ $M_\odot$ \citep{COLBERT1,WANG04}. The existence of such intermediate-mass black holes (IMBHs) is intriguing, as they cannot be produced by current stellar evolutionary models. The hypothesis of a massive accreting black hole nature is supported by spectral analyses of several ULXs whose X-ray spectra can be successfully fitted with a regular or modified disk blackbody model, assuming an optically thick accretion disk \citep{COLBERT1, MAKISHIMA1, WANG04}. It has recently been suggested that thin accretion disks with inhomogeneities may produce fluxes exceeding the Eddington limit by factors of 10--100 \citep{BEGELMAN1}. Thus stellar mass black hole binaries could account for sources with luminosities up to $L_{\rm X} \sim 10^{39}$ ergs s$^{-1}$. 
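As a rough consistency check on the threshold quoted above, the Eddington luminosity for a 100 $M_\odot$ accretor can be evaluated from $L_{\rm Edd}=4\pi G M m_p c/\sigma_T$. The pure-hydrogen value comes out near $1.3\times10^{40}$ ergs s$^{-1}$, close to the $1.4\times10^{40}$ ergs s$^{-1}$ cited in the text (which presumably assumes a slightly different composition or opacity):

```python
import math

# CGS constants
G       = 6.674e-8    # gravitational constant, cm^3 g^-1 s^-2
c       = 2.998e10    # speed of light, cm s^-1
m_p     = 1.673e-24   # proton mass, g
sigma_T = 6.652e-25   # Thomson cross section, cm^2
M_sun   = 1.989e33    # solar mass, g

def l_eddington(m_solar):
    """Eddington luminosity (erg/s) for pure hydrogen; mass in solar units."""
    return 4.0 * math.pi * G * m_solar * M_sun * m_p * c / sigma_T

L_100 = l_eddington(100.0)   # ~1.3e40 erg/s for a 100 M_sun black hole
```

Scaling linearly with mass, a ULX at $L_{\rm X}\sim10^{41}$ ergs s$^{-1}$ radiating isotropically below the Eddington limit would require roughly $10^3\,M_\odot$, which is the basis of the IMBH argument.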
The more luminous sources with $L_{\rm X} \ge 10^{40} \, {\rm ergs} \, {\rm s}^{-1}$, however, may still harbor IMBHs \citep{MMN2005}. Alternatively, if ULXs radiate in the form of anisotropic ``beams" in the direction of the observer, their luminosities would be sub--Eddington, a proposal that is not entirely implausible as examples exist of beamed Galactic X-ray sources \citep{LIU2}. In this case, ULXs may contain more conventional black holes or neutron stars with masses $\leq 10 M_{\odot}$ \citep{KING1}. The nature of a luminous X-ray source can be ascertained if its optical counterpart can be unambiguously identified \citep{LIU3}. For example, M101 ULX-1 is coincident with a B supergiant, and their physical association is confirmed by the unresolved \ion{He}{2} $\lambda$4686 emission in the stellar spectrum; thus M101 ULX-1 is most likely a high-mass X-ray binary \citep[HMXB;][]{KUNTZ1}. When optical counterparts cannot be uniquely identified, the local stellar population can be used to infer the nature of the sources; for example, the presence of early-type stars supports a HMXB origin \citep{LIU3,ROBERTS2,ROBERTS1, SORIA1}. Observations of the local interstellar environment can also be used to infer the nature of these luminous X-ray sources \citep{PM02}. Luminous X-ray sources are often observed to be surrounded by shell nebulae with diameters reaching up to several hundred parsecs, called ``supershells"; the X-ray sources may contribute to the expansion and ionization of these nebulae \citep{PAKULL2,MILLER3}. \citet{P+05} give two potential scenarios for the formation and expansion of such nebulae. The first scenario posits that the compact X-ray source and the diffuse nebula were both formed in a single supernova explosion; the second, that continuous input by stellar winds and/or jets creates a large ``bubble" around the system. (Of course, these scenarios are not mutually exclusive.) 
Under the first hypothesis, creating a supernova remnant equivalent to the observed nebulae around ULXs requires explosion energies of $10^{52}-10^{53}$ ergs, which has led some authors to suggest these as the remnants of particularly energetic events, called ``hypernovae'' \citep[e.g.,][]{W2002}. \citet{P+05} offer the more prosaic interpretation that the supernova may have exploded in the low-density surroundings of a pre-existing superbubble, brightening substantially as it hit the superbubble wall \citep[e.g.,][]{CM90}. The second hypothesis, of continuous energy input, encounters a similar problem of high input energies, as well as questions about the lifetime over which such a system can sustain strong winds/jets. Again, these problems can be largely ameliorated if the system is within a pre-existing cavity. Most importantly, nebulae photoionized by X-rays are ``\ion{He}{3} regions'' in which He is present in the He$^{2+}$ state and its recombination leads to \ion{He}{2} $\lambda$4686 emission \citep{PAKULL3}. Therefore, the distribution of nebular \ion{He}{2} emission and the nebular ionization requirement can be used to assess whether the source emits beamed radiation \citep[e.g.,][]{PM02,KAARET3}. Another possibility, as \citet{PAKULL2} suggest, is that these nebulae are shock-ionized by the same processes that generate the mechanical energy inputs discussed above. However, in low-density regions this process will have very low efficiency. A third scenario is that the nebula was flash-ionized in a very energetic explosion and is still in the recombination stage; but this phase is short-lived (of order $10^5/n$ yr, where $n$ is the ambient density in cm$^{-3}$). All in all, the coupling of energetics between the ULX and the nebula is not well understood. 
A critical study of supershells associated with luminous X-ray sources may provide insight into their nature and energetics, just as studies of supershells that are hypernova remnant candidates have been used to assess their energetics and confirm their nature \citep{LAI1, CHEN1}. We have obtained high-dispersion echelle spectroscopic observations of the nebular environments of seven luminous X-ray sources. These data are used to search for \ion{He}{2} emission and to study the kinematics of the surrounding medium. Additionally, we have utilized archival {\it Hubble Space Telescope} ({\it HST}) images to study the local stellar populations of some of these sources. In this paper, we report our observations and reduction in \S2, and analyze individual objects in \S3. The conclusions are summarized in \S4. \section{Observations and Reduction} \subsection{High-Dispersion Echelle Spectroscopic Observations} High-dispersion spectra of the nebular environments of luminous X-ray sources were obtained with the echelle spectrographs on the 4 m telescopes at Kitt Peak National Observatory (KPNO) during an observing run in 2003 November and at Cerro Tololo Inter-American Observatory (CTIO) in 2004 January. The KPNO observations were all made in the multi-order observing configuration, using the 79 line mm$^{-1}$ echelle grating, G226-1 cross disperser, a GG385 filter, the red long focus camera, and a Tek2K CCD (T2KB). A slit width of 300 $\mu$m (2\arcsec) was used, giving an instrumental FWHM of 18$\pm$1 km s$^{-1}$ in the H$\alpha$ line. The pixel size of 24 $\mu$m corresponds to $\sim$0\farcs26 along the spatial axis and 0.082 \AA\ along the dispersion for the H$\alpha$ line. The observations covered a wavelength range of 4500--6900 \AA. The CTIO observations were made in both the multi-order/short-slit and the single-order/long-slit observing configurations. 
The multi-order configuration used a setup similar to that used in the KPNO observations, but with a SITe2K\#6 CCD, which had the same pixel size and thus the same scales as the T2KB. These observations covered a wavelength range of 4400--6900 \AA. The single-order configuration used the 79 line mm$^{-1}$ echelle grating with a flat mirror replacing the cross disperser, and a post-slit H$\alpha$ filter ($\lambda_c$ = 6563 \AA, $\Delta \lambda$ = 75 \AA) was inserted to isolate a single order. This spectral coverage is wide enough to include both the H$\alpha$ line and the flanking [\ion{N}{2}]~$\lambda\lambda$6548, 6583 lines. For both configurations, a slit width of 250 $\mu$m (1\farcs64) was used, and the resulting FWHM of the instrumental profile was $13.5\pm0.5$ km s$^{-1}$. For both the KPNO and CTIO observations, the spectral dispersion was calibrated by a Th-Ar lamp exposure taken at the beginning of the night, but the absolute wavelength was calibrated against the geocoronal H$\alpha$ line present in the nebular observations. A journal of the echelle observations is given in Table~1. A total of seven nebulae were observed. The luminous source LMC X-1 is included to assess the feasibility of our echelle observing configuration, as it is known to possess a \ion{He}{3} region \citep{PAKULL3}. As the X-ray sources themselves are not optically visible, target acquisition often had to be done through blind offsets from nearby bright stars. The ionized nebulae associated with our target X-ray sources are very faint, so their observations are very sensitive to sky conditions and moonlight. Furthermore, these objects are mostly at megaparsec distances, and have angular diameters of only a few arcseconds. Owing to these difficulties, we missed the key positions of two objects, Ho II X-1 and IC 342 X-1, and hence they will be discussed in the appendix. However, we were able to collect useful H$\alpha$ spectra for Ho IX X-1, IC 10 X-1, M81 X-6, and NGC 1313 X-2, in addition to LMC X-1. 
These observations are analyzed in this paper. \subsection{{\it Hubble Space Telescope} Images} To study the stellar environments of our sources, we have used the archival {\it HST} continuum images of Ho IX X-1, IC 10 X-1, M81 X-6, and NGC 1313 X-2. These observations, listed in Table 2, were made with either the Wide Field Planetary Camera 2 (WFPC2) or the Advanced Camera for Surveys (ACS). The calibrated pipeline-processed images were acquired from the MAST archive and photometry was carried out using the APPHOT package in IRAF. \section{Results for Individual Nebulae} We first analyze our observations of the \ion{He}{3} nebula around LMC X-1 to illustrate the range of capabilities of our observing configuration, such as the ability to detect \ion{He}{2} emission. We use the same method to analyze the nebulae around luminous X-ray sources, and in some cases we also analyze their stellar populations. Below we report on our detailed analysis of the five objects for which we obtained useful echelle observations. In addition, two objects for which we missed the key positions are discussed in the appendix. \subsection{LMC X-1} LMC X-1 is an $L_{\rm X}$ = 1--2 $\times 10^{38}$ ergs s$^{-1}$ source \citep{SETAL94} in the \ion{H}{2} region N159F, which has a size of 24$''$, corresponding to 6 pc at the LMC distance of 50 kpc \citep{Feast99}. We have N-S and E-W slit positions centered on the star R148, 2\farcs5 east of LMC X-1. In each case not only the \ion{He}{2} $\lambda$4686 line but also the \ion{He}{2} $\lambda$6560 line was detected (see Fig.\ 1). The \ion{He}{2} $\lambda$4686 line is narrow, with an observed FWHM of 0.39$\pm$0.07 \AA, or 25$\pm$5 km~s$^{-1}$. The observed FWHM consists of contributions from the thermal width ($\sim$11 km~s$^{-1}$ for \ion{He}{2} at 10$^4$ K), instrumental broadening, and turbulence in the gas. 
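These contributions add in quadrature, so the turbulent component can be isolated numerically. A minimal sketch, assuming the CTIO instrumental FWHM of 13.5 km s$^{-1}$ applies (LMC X-1 is a southern target; this choice is our assumption):

```python
import math

def turbulent_fwhm(observed, thermal, instrumental):
    """Quadrature subtraction of thermal and instrumental broadening (km/s)."""
    return math.sqrt(observed**2 - thermal**2 - instrumental**2)

# Observed He II 4686 FWHM: 25 +/- 5 km/s; thermal width ~11 km/s at 1e4 K;
# instrumental FWHM 13.5 km/s (the CTIO configuration -- an assumption).
print(turbulent_fwhm(25.0, 11.0, 13.5))  # ~18 km/s
```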
Quadratically subtracting the thermal and instrumental widths from the observed width, we obtain a turbulent FWHM of 18 km~s$^{-1}$. The turbulence in the nebula photoionized by LMC X-1 is thus of order of 10 km~s$^{-1}$, comparable to the isothermal sound velocity of a 10$^4$ K medium. There is no highly supersonic motion in the \ion{He}{3} nebula around LMC X-1. Therefore it does not appear that LMC X-1 is injecting a significant amount of mechanical energy into the interstellar medium. Our echelle observations of the \ion{He}{3} nebula around LMC X-1 illustrate that high-dispersion spectroscopy provides a powerful way to study such objects. \subsection{Holmberg IX X-1 (M81 X-9)} This ULX was first cataloged as M81 X-9 based on {\it Einstein} observations \citep{FABBIANO1}; however, it is projected within 2$'$ of the nucleus of Holmberg IX, corresponding to a projected separation of 2 kpc at a distance D $=$ 3.6 Mpc \citep{F1994}, so it has also been referred to as Ho IX X-1. It is found at a position near the center of an ionized gas shell with $V_{\rm hel}$ $\sim 47-52$ km s$^{-1}$ \citep{MILLER3}. This velocity is similar to the systemic velocity of Holmberg IX, 64 km s$^{-1}$, as opposed to the \emph{negative} velocities expected for M 81 at this position due to its rotation \citep{A1996}. If the ULX is indeed associated with this ionized gas shell, it is likely a member of Holmberg IX; thus, we will call it Ho IX X-1 in this paper. \subsubsection{Optical Counterpart of Ho IX X-1} To search for an optical counterpart of Ho IX X-1, we have used archival {\it Chandra} ACIS-S observations (Seq Num 600406, Obs ID 4751) and {\it HST} ACS images listed in Table 2. We have used six stars in the USNO B1.0 catalog to determine astrometric solutions for the ACS image, resulting in an rms accuracy of $\sim0\farcs25$. 
Our initial comparison of {\it Chandra} X-ray images with the {\it HST} optical images shows the centroid of the Ho IX X-1 source to be offset by $0\farcs5$ from a bright star at 09$^{\rm h}$57$^{\rm m}$53$\rlap{.}{^{\rm s}}$25, $+$69$^\circ$03$'$48\farcs3 (J2000.0). To verify the alignment of the X-ray and optical images, we searched for other X-ray sources within the ACIS-S B3 field and their possible optical counterparts. We found two faint X-ray sources at 09$^{\rm h}$58$^{\rm m}$06$\rlap{.}{^{\rm s}}$8, $+$69$^\circ$04$'$39\farcs3 (J2000.0) and 09$^{\rm h}$58$^{\rm m}$12$\rlap{.}{^{\rm s}}$1, $+$69$^\circ$04$'$57\farcs0 (J2000.0), respectively. At these X-ray positions, the ACS images show extended objects, possibly galaxies, at an offset of $\sim$0\farcs5. Figure 2 shows that the offsets from the X-ray positions to the nearest optical objects are similar for all three X-ray sources. If we assume that these offsets are caused by pointing uncertainties of {\it Chandra} and {\it HST}, Ho IX X-1 would be coincident with the aforementioned star within 0\farcs1. Therefore, we suggest that this bright star is the optical counterpart of the ULX Ho IX X-1. \subsubsection{Kinematics and Ionization of the Supershell around Ho IX X-1} The ionized gas shell around Ho IX X-1 has been observed by \citet{MILLER3} to have strong [\ion{S}{2}] and [\ion{O}{1}] line emissions that are characteristic of supernova remnants (SNRs). However, the shell size of 25$''\times17''$, corresponding to $440\times290$ pc, is much larger than those of known SNRs. It is possible that this shell is energized by an OB association that \citet{MILLER3} describes as a knot of blue stars. The high-resolution H$\alpha$ image presented by \citet{PAKULL2} shows a complex nebular morphology consisting of a bright, thin, inner ring and a fainter and broader outer ring (see Fig.\ 3a). 
We obtained echelle observations for two slit positions on the supershell around Ho IX X-1: a N-S slit along the eastern rim and a E-W slit from the eastern rim to the center of the shell (see Fig.\ 3). Nebular emission is detected in H$\alpha$, H$\beta$, [\ion{O}{3}] $\lambda\lambda$4959, 5007, and [\ion{N}{2}] $\lambda\lambda$6548, 6583 lines, but not \ion{He}{2} $\lambda$4686 or $\lambda$6560. The velocity structures appear similar in each of the lines detected, thus we use the strongest detection, the H$\alpha$ line, for our analysis of the shell kinematics. Both slit positions show a bright emission component centered at $V_{\rm hel} \simeq 58\pm2$ km~s$^{-1}$. This velocity is in good agreement with the velocity of Holmberg IX, and is adopted as the systemic velocity of the shell. The E-W slit position shows faint emission extending up to $\Delta V$ = +60 and $-$80 km~s$^{-1}$ from the systemic velocity, while the N-S slit position shows faint emission up to $\Delta V$ = +70 and $-$100 km~s$^{-1}$. The velocity structure does not resemble that of a simple expansion: the highest velocity offsets along the N-S slit occur at the outer edge of the shell; the E-W slit shows a brighter blue-shifted component at the inner ring, but it is dominated by a red-shifted component at the outer ring. The 3-D structure of the shell may be pear-shaped, with the inner ring corresponding to the top of the pear which is expanding toward us (with a negative velocity), and the outer ring corresponding to the base which is expanding away from us (with positive velocity), suggestive of a bipolar expansion structure. While the expansion structure cannot be determined unambiguously, the overall expansion velocity is likely on the order of 80--100 km~s$^{-1}$. The H$\alpha$ flux of the ionized gas shell has been reported by \citet{MILLER2} to be $6.4\times10^{-14}$ ergs s$^{-1}$ cm$^{-2}$. 
This H$\alpha$ emission requires an ionizing flux of $7.3\times10^{49}$ photons s$^{-1}$, if the nebula is optically thick (to ionizing photons). Assuming that the emitting material is distributed in a prolate ellipsoidal $290\times290\times440$ pc shell with a fractional shell thickness ($\Delta R/R$) of 0.1, we estimate the rms density of the shell to be 1.4 H-atom cm$^{-3}$ and the shell mass to be $6.37\times10^5$ $M_\odot$. For an expansion velocity of 80 km~s$^{-1}$, the kinetic energy of the shell is $1.1\times10^{52}$ ergs. \subsubsection{Local Stellar Population and Energy Consideration} To determine the roles played by the local stellar population, and possibly the ULX, in the ionization and energetics of the supershell, we examine the stellar content using the {\it HST} ACS images of Ho IX X-1. We have carried out photometry of stars in the F450W, F555W, and F814W bands, and transformed the results into the VEGAMAG system, which is similar to the Johnson $BVI$ system. We have produced color-magnitude diagrams (CMDs) in $M_V$ versus $(B-V)$ and $M_V$ versus $(V-I)$, correcting for only a distance modulus of 27.78. The former was found to be more useful because the stars of interest are blue. This CMD of $M_V$ versus $(B-V)$ for stars within the shell is presented in Figure 4 (in filled symbols), overplotted with evolutionary tracks for massive stars from \citet{LS01}. The highest concentration of stars is distributed near $(B-V) \sim -0.1$ and most likely represents the upper part of the main sequence. If these are O-stars with $(B-V) = -0.32$, the amount of reddening would be $E(B-V) \sim 0.2$. We have applied this reddening correction to all stars and plotted them in open symbols in Figure 4. It is apparent that the main sequence stars detected are within the mass range of $5$--$40~M_{\odot}$, suggesting an age of 4--6 Myr for this OB association. 
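The nebular quantities quoted earlier in this subsection (ionizing flux, rms density, kinetic energy) can be re-derived from the reported H$\alpha$ flux and shell geometry. A rough sketch under our own simplifying assumptions (standard case-B photon conversion factor, case-B recombination coefficient at $10^4$ K, and a pure-hydrogen shell mass):

```python
import math

PC = 3.0857e18          # cm per parsec
M_H = 1.6726e-24        # hydrogen atom mass, g
ALPHA_B = 2.59e-13      # case-B recombination coefficient at 1e4 K, cm^3 s^-1

d = 3.6e6 * PC                          # distance to Holmberg IX, cm
L_Ha = 4 * math.pi * d**2 * 6.4e-14     # Halpha luminosity, erg/s
Q = 7.31e11 * L_Ha                      # ionizing photon rate (case-B conversion)

# Prolate ellipsoidal shell, 290 x 290 x 440 pc, fractional thickness 0.1
a, c = 145 * PC, 220 * PC
V_shell = (4 * math.pi / 3) * a * a * c * (1 - 0.9**3)

n = math.sqrt(Q / (ALPHA_B * V_shell))            # rms density, H cm^-3
E_kin = 0.5 * (n * M_H * V_shell) * (80e5)**2     # v_exp = 80 km/s

print(f"Q ~ {Q:.1e} photons/s, n ~ {n:.1f} cm^-3, E_kin ~ {E_kin:.1e} ergs")
```

Under these assumptions the sketch reproduces the quoted ionizing flux, density, and kinetic energy to within rounding; the inferred shell mass is more sensitive to the adopted composition and shell-thickness convention.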
To determine the amount of stellar energy that has been injected into the surrounding medium, we need to know the number of supernovae that have exploded in the past and the masses and temperatures of the most massive living stars, but this information cannot be determined from the photometry alone. We therefore assume a Salpeter initial mass function, and estimate the massive stellar population by extrapolation from the observed main sequence stars in a lower mass range that is not seriously affected by incompleteness. We then use the relation $N = \int^{M_u}_{M_l} K M^{-2.35}\,dM$, where $N$ is the number of stars, $K$ is a constant that can be determined from the star counts, $M$ is the mass, and $M_u$ and $M_l$ are the upper and lower mass limits. Five stars are observed to be in the 12--20 $M_\odot$ range, thus we estimate $K \lesssim 600$. The number of stars with mass greater than $20~M_\odot$ is $\int^{100~M_\odot}_{20~M_\odot} K M^{-2.35}\,dM \lesssim 7$, adopting an $M_u$ of 100 $M_{\odot}$. Since we observe only one star with $M > 20~M_\odot$, we estimate that roughly six supernovae have likely exploded. Assuming each supernova releases $\sim 10^{51}$ ergs of explosion energy, the total supernova energy input is approximately $6 \times 10^{51}$ ergs, roughly half of the observed kinetic energy of the expanding shell. Additionally, using Starburst99 \citep{Letal99}, we estimate a collective ionizing luminosity of 0.4--2.0$\times10^{49}$ photons s$^{-1}$, at most about a quarter of that required by the observed H$\alpha$ luminosity of the shell. Therefore, we conclude that the stellar population alone cannot provide sufficient energy to produce the observed kinematics and ionization of the nebula. It is possible that Ho IX X-1 supplies the additional required energy. Our echelle spectroscopic observations suggest that there may be a bipolar expansion. If this is the case, the source may be ``beaming'' X-ray emission into the nebula toward and away from us. 
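The IMF extrapolation above can be sketched by integrating the Salpeter law analytically. The normalization below comes directly from the five observed 12--20 $M_\odot$ stars (with no incompleteness correction, which is our simplification), so the counts should be read as rough estimates consistent with the upper limits quoted in the text:

```python
def salpeter_count(K, m_lo, m_hi, alpha=2.35):
    """N = K * integral of M^-alpha dM from m_lo to m_hi (Salpeter: alpha = 2.35)."""
    p = 1.0 - alpha
    return K * (m_hi**p - m_lo**p) / p

# Normalize to the 5 observed main sequence stars in the 12-20 Msun range.
K = 5.0 / salpeter_count(1.0, 12.0, 20.0)
n_massive = salpeter_count(K, 20.0, 100.0)   # stars ever formed above 20 Msun
print(f"K ~ {K:.0f}, N(>20 Msun) ~ {n_massive:.1f}")
```

This direct normalization gives $K \approx 390$ and $N(>20~M_\odot) \approx 4.5$, consistent with the upper limits $K \lesssim 600$ and $N \lesssim 7$ adopted in the text.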
As our echelle observations had limited spatial coverage, the suggested bipolar expansion needs to be confirmed in the future with kinematic observations over the full extent of the nebula. \subsection{IC 10 X-1} IC 10 X-1 was discovered with the {\it ROSAT} HRI as a rather luminous X-ray source within the optical extent of the galaxy \citep{B1997}. Recent {\it Chandra} and XMM-Newton observations show that IC 10 X-1 has a mean 0.3--8.0 keV luminosity of $1.2\times10^{38}$ ergs s$^{-1}$ and varies by a factor of up to $\sim$6 on time-scales of $\sim10^4$ s \citep{Wetal05}. It was found to be within $\sim8''$ of the centroid of a nonthermal radio supershell 45$''$ in diameter, corresponding to 150 pc at the distance of IC 10, $\sim$0.7 Mpc \citep{Y1993}. \subsubsection{Optical Counterpart and Stellar Environment} A possible optical counterpart, an emission-line source, was identified as the WR star [MAC92] 17 \citep{M1992}. This was later resolved into two components, [MAC92] 17A and B, of which only the A component is a WR star \citep{CROWTHER1}. Recent {\it Chandra} and {\it HST} observations by \citet{BAUER1} with improved spatial resolution and astrometric accuracy allowed further characterization of the nature of the X-ray source. They give a J2000.0 position 00$^{\rm h}$20$^{\rm m}$29$\rlap{.}{^{\rm s}}$09, $+$59$^\circ$16$'$51\farcs95 for the point source, and report a 0.5--8.0 keV absorbed flux of $1.7\times10^{-12}$ ergs cm$^{-2}$ s$^{-1}$ with faint emission extending up to $\sim22''$ away. Their observations support the association with [MAC92] 17A, located within 0\farcs23 of the X-ray source. The high luminosity and variability of this X-ray source, and its likely association with the nearby WR star, lead them to hypothesize that the source is most likely a massive BH binary whose progenitor evolved more rapidly, and would thus have been more massive, than [MAC92] 17A. 
We have carried out photometry using the {\it HST} ACS images in the F435W, F606W, and F814W bands, and constructed CMDs for a distance modulus of 24.23. The CMD of $M_V$ versus $(B-V)$ for stars in the vicinity of IC 10 X-1 is presented in Figure 5a. As IC 10 is located at a low Galactic latitude, the foreground extinction due to the Galactic plane is evident. Assuming that the bluest stars are main sequence stars with an intrinsic color of $(B-V) = -0.32$, the reddening is $E(B-V) \sim 0.85$. We have applied this reddening correction to all stars and produced another CMD in Figure 5b, along with evolutionary tracks for massive stars \citep{LS01}. The most massive stars in the vicinity of IC 10 X-1 have masses of 20$\pm5~M_{\odot}$. The lack of more massive stars suggests that this stellar population was formed about 4--6 Myr ago. This age is consistent with what would be expected from the presence of a WR star, the optical counterpart of IC 10 X-1. \subsubsection{Complex Interstellar Environment} We obtained N-S and E-W echelle spectra with slit positions centered near IC 10 X-1 using a slit length of 15$''$ (see Fig.\ 6). No \ion{He}{2} line was detected. The spectral lines from the E-W slit position were uniform and symmetric and thus were useful in estimating a systemic velocity for the nebula, $-$330 km s$^{-1}$, close to the systemic velocity of IC 10, $-$344 km s$^{-1}$ \citep{dV1991}. We apply this systemic velocity to the N-S spectrum, which shows an expansion structure with low velocity at the center of the slit and higher velocities at the edges. We find an overall expansion velocity of $\sim$80 km s$^{-1}$, somewhat higher than the 50--70 km s$^{-1}$ found by \citet{BR02}. It is not unusual that high-dispersion spectroscopy can detect emission at higher velocities than lower-dispersion spectroscopy. The velocity structure along the N-S slit position does not represent a regular expanding shell. 
This is understandable as the H$\alpha$ image in Figure 6b shows an irregular, filamentary structure that is best described as a turbulent, frothy interstellar medium. This interstellar structure may have resulted from the explosion of the most massive stars in the population around IC 10 X-1. This interstellar environment is too complex to assign specific features that have been energized by IC 10 X-1. It is thus difficult to assess unambiguously the influence of this X-ray source on its surrounding medium. \subsection{M81 X-6 (NGC 3031 X-11)} The ULX M 81 X-6 is located near a nebula with an enhanced [\ion{S}{2}]/H$\alpha$ ratio that was identified as the SNR MF 22 \citep{MATONICK1}. \citet{PAKULL2} presented a high-quality H$\alpha$ image of this region revealing a large 260$\times$350 pc shell structure with the SNRs MF 22 and MF 23 on the southern and northern portions of the shell, respectively. Comparing {\it Chandra} and {\it HST} images, \citet{LIU2} identified a unique optical counterpart for M81 X-6, and further suggested it to be an O8 V star based on its colors and magnitudes. To show the distribution of stars and the ionized interstellar gas near the ULX M81 X-6, we present an archival {\it HST} F555W image in Figure 7a and the continuum-subtracted H$\alpha$ image from \citet{PAKULL2} in Figure 7b. The continuum image shows regions of high extinction, and the H$\alpha$ emission appears brightest in regions of lower extinction. We obtained two echelle spectra with N-S and E-W orientations. The slit positions are marked in Figure 7b and the data are shown in Figures 7c and 7d. The echellograms show a main velocity component at $V_{\rm hel} = -172$ km~s$^{-1}$ with a FWHM of 30--40 km~s$^{-1}$. In addition, the N-S slit clearly shows a velocity spike extending over 250 km~s$^{-1}$, which is typically seen in unresolved extragalactic SNRs \citep[e.g.,][]{Detal00}. 
The E-W slit shows bright emission at the west end, and diffuse emission fanning out with high velocity at the east end. Reconstruction of the exact slit positions reveals that the spike corresponds to MF 22, confirming its SNR nature. If the nebulae surrounding the two objects were physically related as one supershell, we would expect to see a symmetric expansion structure throughout the length of the N-S slit with two distinct velocity components, red- and blue-shifted. We did not, however, observe a split-line structure even at the center of the ``supershell'' that would indicate an expansion. The apparent shell morphology results from foreground extinction. We cannot draw any conclusions about the role of the ULX in the ionization or energetics of the surrounding ISM. \subsection{NGC 1313 X-2} NGC 1313 X-2, located $\sim6'$ south of the nucleus of NGC 1313, was among the first ULXs discovered \citep{FT87}. The position of this ULX was accurately determined by \emph{Chandra} observations to be 3$^{\rm h}$18$^{\rm m}$22$\rlap{.}{^{\rm s}}$18, $-$66$^\circ$36$'$03\farcs3 (J2000.0), which aided in the identification of an $R = 21.6$ pointlike object as the optical counterpart of the ULX. Based on this optical identification and X-ray spectral properties, \citet{Z2004} suggest that NGC 1313 X-2 is a black hole X-ray binary with a 15--20 $M_\odot$ companion. The black hole mass has been estimated to be $\sim$800 $M_\odot$ based on the spectral analysis of XMM-Newton observations of NGC 1313 X-2 \citep{M2003, WANG04}. NGC 1313 X-2 does not reside in a region of active star formation. A deep H$\alpha$ image reveals an elongated $25''\times17''$ supershell around the ULX, and spectroscopic observations show strong [\ion{S}{2}] and [\ion{O}{1}] lines with a FWHM of 80 km~s$^{-1}$ centered near the systemic velocity of NGC~1313 \citep{PAKULL2}. 
The nebular spectra change abruptly across the supershell, with the west side brighter in H$\alpha$ emission and the east side higher in [\ion{O}{3}]/H$\alpha$ ratio \citep{Z2004}. \subsubsection{Optical Counterpart and Stellar Environment} To examine the optical counterpart and stellar environment of NGC 1313 X-2, we have used archival {\it HST} ACS images in the F435W, F555W, and F814W bands. The exposure times and observation dates of these ACS images are given in Table 2. Figure 8 shows the F435W and F814W images along with the H$\alpha$ image from \citet{PM02}. We have used stars in the USNO B1.0 catalog to determine astrometric solutions for these ACS images, resulting in an rms accuracy of $\sim$0\farcs5. With our refined ACS coordinates, it is easy to identify the optical counterpart of the ULX, as marked on the ACS images in Figure 8. It is also evident that within the H$\alpha$ shell, a higher concentration of bright stars exists to the west of the ULX. This optical identification is consistent with the identification of \citet{P+05} based on \ion{He}{2} emission \citep{MZ+05}. We have carried out photometric measurements for the optical counterpart of the ULX and the bright stars to its west. The results are transformed to $BVI$ magnitudes and used to construct CMDs in $M_V$ versus $(B-V)$ and in $M_V$ versus $(V-I)$. We have adopted a distance of 4.1 Mpc \citep{Metal02}, but did not make any extinction correction. Figure 9 shows the $M_V$ versus $(B-V)$ CMD plotted with evolutionary tracks of stars of various masses \citep[from][]{LS01}. The stars are marked in the F555W image in Figure 8b. The optical counterpart of NGC 1313 X-2 has $M_V = -3.96\pm0.02$, $(B-V) = -0.14\pm0.03$, and $(V-I) = -0.11\pm0.04$, equivalent to a B1--B2 giant reddened by $E(B-V) \sim 0.1$. The mass of the star would be $7\pm1~M_\odot$. The second set of F555W images, taken three months after the first, shows a $-0.13\pm0.03$ mag variation in $M_V$. This variation is small but real. 
It is interesting to note that the optical counterparts of M101 ULX-1 and the ULX in NGC 5204 are B supergiants, and in the case of M101 ULX-1 the B supergiant shows no detectable photometric variation over a timespan of 13 months \citep{LIU3,KUNTZ1}. The optical counterpart of NGC 1313 X-2 is one of the three brightest blue stars in this region; these three blue stars are all 7--9 $M_\odot$ early B giants. The fainter blue stars are most likely main sequence stars with masses $\lesssim 10~M_\odot$. The brightest red star has colors and magnitudes consistent with either a 20 $M_\odot$ red supergiant in NGC 1313 or a Galactic M8 dwarf at a distance of $\sim$160 pc. As there are no blue main sequence stars above 10 $M_\odot$ in this region of NGC 1313, we consider the Galactic M dwarf a more likely interpretation for this red star. The stellar population within the supershell is thus at least 10$^7$ yr old and consequently no longer contains O stars. \subsubsection{Ionization and Energetics of the Supershell Surrounding NGC 1313 X-2} The 500 $\times$ 340 pc H$\alpha$ shell surrounding NGC 1313 X-2, if photoionized and optically thick, requires an ionizing flux of $6.3\times10^{49}$ photons s$^{-1}$, assuming a prolate ellipsoidal shell with a fractional shell thickness of $\Delta R/R$ = 0.1 and a density of 1 H-atom cm$^{-3}$. This ionizing flux exceeds that produced by the observed blue stars, mostly B stars, by two orders of magnitude. \citet{PM02} suggested that the shell is ionized by shocks. To study the energetics of the supershell around NGC 1313 X-2, we obtained echelle observations for two slit positions (Figure 8). The N-S slit position was observed in the multi-order mode; broad nebular emission is detected in H$\alpha$, but the S/N ratio is too low for accurate velocity measurements. The \ion{He}{2} line was not detected. 
The E-W slit position was observed in the single-order, long-slit mode, and this observation has provided the most useful kinematic information. The H$\alpha$ line shows a narrow component at a nearly constant heliocentric velocity of $V_{\rm hel} \sim +380$ km~s$^{-1}$ within the shell boundary and at 30$''$ west of the shell. This component represents the local interstellar medium, and its velocity will be adopted as the systemic velocity of the shell around NGC 1313 X-2. The H$\alpha$ line also shows curved blue-shifted and red-shifted components indicating an expanding shell. The red-shifted component shows an extreme velocity offset of +110 km~s$^{-1}$ from the systemic velocity. The blue-shifted component is blended with the bright telluric OH lines at 6571.383 and 6571.386 \AA, so it is difficult to determine its extreme velocity offset, but it is at least $-100$ km~s$^{-1}$ from the systemic velocity. We thus conservatively assign an expansion velocity of 100 km~s$^{-1}$ to the large shell around NGC 1313 X-2. This expansion velocity is higher than the 80 km~s$^{-1}$ determined by \citet{PM02} using a lower-resolution spectrum. For a shell $\sim500$ pc in size, this 100 km~s$^{-1}$ expansion velocity is unusually high, and supports the shock ionization of the nebula, as suggested by \citet{PAKULL2}. The kinetic energy of this shell, assuming the aforementioned shell density and geometry, is $2\times10^{52}$ ergs. This energy is much higher than the canonical supernova explosion energy of 10$^{51}$ ergs and the shell size is much larger than those of known SNRs. There could be a large number of normal supernova explosions that power the expansion of this large supershell, but this is not supported by the small number of main sequence B stars. It is most likely that the energetic process producing the ULX NGC 1313 X-2 is also responsible for powering this large expanding shell. 
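The ionization requirement and shell kinetic energy quoted in this subsection can be reproduced with the same prolate-shell model used for Ho IX X-1. A sketch under our own assumptions (case-B recombination coefficient at $10^4$ K and a pure-hydrogen shell mass):

```python
import math

PC = 3.0857e18        # cm per parsec
M_H = 1.6726e-24      # hydrogen atom mass, g
ALPHA_B = 2.59e-13    # case-B recombination coefficient at 1e4 K, cm^3 s^-1

# Prolate ellipsoidal shell, 500 x 340 pc, fractional thickness 0.1, n = 1 cm^-3
a, b = 250 * PC, 170 * PC
V_shell = (4 * math.pi / 3) * a * b * b * (1 - 0.9**3)

Q = 1.0**2 * ALPHA_B * V_shell                      # photons/s to keep it ionized
E_kin = 0.5 * (1.0 * M_H * V_shell) * (100e5)**2    # v_exp = 100 km/s

print(f"Q ~ {Q:.1e} photons/s, E_kin ~ {E_kin:.1e} ergs")
```

Under these assumptions the sketch returns the quoted $6.3\times10^{49}$ photons s$^{-1}$ and $2\times10^{52}$ ergs to within rounding.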
\section{Conclusions} The luminous X-ray sources for which we obtained echelle observations were selected because they were surrounded by gaseous nebulae. It is thus not surprising that all seven of our program X-ray sources are in young stellar environments. For example, the CMDs of stellar populations near Ho IX X-1, IC 10 X-1, and NGC 1313 X-2 all show main sequence stars typically in the mass range 10--25 $M_\odot$. The young stellar environment suggests that these X-ray sources are likely HMXBs. The most direct way to assess the nature of a luminous X-ray source is to identify an optical counterpart. Optical counterparts have been identified for six of our program X-ray sources, and in all cases the counterparts are massive stars ranging from $\sim7~M_\odot$ to several tens of $M_\odot$. These results further support the suggestion that these luminous or ultraluminous X-ray sources are HMXBs. (See Table 3 for a summary of the stellar and interstellar environments of the luminous X-ray sources we studied.) To confirm the association between these stars and the X-ray sources, spectroscopic observations of the optical counterparts are needed to search for irradiated stellar \ion{He}{2} $\lambda$4686 emission \citep[e.g.,][]{KUNTZ1}. Once confirmed, spectroscopic monitoring observations can be used to measure the orbital parameters of these systems in order to determine conclusively whether the ULXs are associated with IMBHs. The detection of a spatially resolved \ion{He}{2} $\lambda$4686 emitting nebula around a luminous X-ray source can be used to determine whether the X-ray emission is beamed or isotropic. However, besides LMC X-1, none of the nebulae around luminous X-ray sources for which we obtained clear, accurately positioned echelle spectra emitted the \ion{He}{2} $\lambda$4686 line. 
To assess our negative results, we compare the nebulae for which we did not detect the \ion{He}{2} line to the \ion{He}{2}-emitting nebulae associated with LMC X-1 and Ho II X-1. LMC X-1 is surrounded by a dense \ion{H}{2} region with a diameter of 6 pc, while Ho II X-1 is in an \ion{H}{2} region 30$\times$45 pc in size. This size difference reflects the two orders of magnitude difference in X-ray luminosity between these two objects. The four nebulae for which we do not detect the \ion{He}{2} line are large supershells or diffuse \ion{H}{2} regions with low concentrations of gas in the vicinity of the X-ray source. For such distributions of interstellar gas, X-ray emission from the source would be dispersed into a large volume of low-density gas, resulting in an extremely low surface brightness of \ion{He}{2} recombination emission. Thus the \ion{He}{2} $\lambda$4686 line would be difficult, if not impossible, to detect for these regions. By comparing the total ionization energy requirements and expansion velocity of a shell nebula with contributions from the local stellar population, we estimate the energy contributions from Ho IX X-1 and NGC 1313 X-2. In both cases, the stellar populations are insufficient to produce the energy required for the observed ionization and kinematics of the nebulae, and thus we conclude that these ULXs play a significant role in the energetics of their respective supershells.
\section{Introduction} Active matter is a nonequilibrium branch of soft matter that has attracted strong interest in recent years. Self-propelled particles (or swimmers) carry their own engines, which drive directed motion in fluid media in the absence of external forces \cite{purcell:AJP1977,berg:book2003,Lauga:RPP009,ramaswamy:ARCMP2010,golestanian:SoftMatter2011,lauga:SoftMatter2011,Romanczuk:EPJ2012,Marchetti:RMP2013,Yeomans:EPJ2014,Elgeti:RPP2015,cates:ARCMP2015,Goldstein:ARFM2015,Lauga:ARFM2016,Zottl:JPCM2016,bechinger:RMP2016}. However, in the presence of a torque, the line of motion is no longer aligned with that of the self-phoretic force and the swimmer tends to execute a circular motion; such particles are known as chiral swimmers \cite{Brokaw:JEB1958,Brokaw:JCCP1959,Teeffelen:PRE2008,Friedrich:PRL2009,Kummel:PRL2013,Volpe:AJM2013,Crespi:PRE2013,Volpe:AJP2014,Boymelgreen:PRE2014,Reichhardt:PRE2014,Breier:PRE2014,Li:PRE2014,Mijalkov:SoftMatter2015,Xue:EPL2015,Crespi:PRL2015,Lowen:EPJST2016,Zaeifi:JCP2017,Ai:SoftMatter2018}. In general, these systems exhibit athermal (or active) Brownian motion, in which the self-propulsion force dominates random thermal fluctuations. In nature, biological machines are abundant and able to convert chemical energy into directed motion using their own motors \cite{berg:book2003,Goldstein:ARFM2015,SHACK:BMB1974,Woolley:Reproduction2003,SELLERS:BBA20003,Roberts:NRMCB2013,Hirokawa:NRMCB2009}. Examples of self-propelled particles are biological motors \cite{SELLERS:BBA20003,Roberts:NRMCB2013,Hirokawa:NRMCB2009}, the multiflagellate bacterium {\em E. coli} \cite{berg:book2003}, the biflagellate alga {\em C. reinhardtii} \cite{Goldstein:ARFM2015}, and uniflagellated sperm cells \cite{SHACK:BMB1974,Woolley:Reproduction2003}.
Also, recent advances in experimental techniques have enabled the fabrication of artificial nano-/microswimmers (e.g., active Janus particles) \cite{Perro:JMC2005,Mano:JACS2005,Dhar:NL2006,Walther:SoftMatt2008,Jiang:AM2010,Douglass:NP2012,Walther:CR2013,Buttinoni:PRL2013,Volpe:SR2014,Bianchi:SR2016,Poggi:CPS2017,Zhang:langmuir2017}. The performance of swimmers is often affected by the presence of boundaries, interfaces, and other particles. It has been shown that swimmers (biological, synthetic, or simulated) tend to swim close to and accumulate at boundaries \cite{ROTHSCHILD:Nat1963,Biondi:AIChE2004,Mannik:PNAS2009,Lebleu:JMS2009,binz:ME2010,Wensink:EPJST2013,Ao:EPJST2014,Soto:PRE2014,Elgeti:RPP2015,Malgaretti:JCP2017}. Nature offers many examples of biological swimmers moving in confined regions, e.g., sperm cells in the female reproductive tract \cite{Suarez:hpd2006}, and confinement is also central to technological applications such as microfluidic devices \cite{Barbot:SR2016}. These situations have been studied more systematically in recent years by theoretical \cite{Lowen:JPCM2001,Kumar:EPL2012,Lee:NJP2013,Costanzo:EPL2014,Malgaretti:JCP2017} and experimental \cite{Yu:Small2016,Wu:JACS2017,Yang:PNAS2017} techniques. When a self-propelled particle collides with an interface, the interaction is purely steric and the normal component of the self-propulsion velocity is canceled by the hard-core repulsive force. As a result, the particle slides along the interface until rotational diffusion changes its orientation. However, rotational diffusion can also reorient the particle back toward the interface, so the particle may remain close to the interface for extended periods of time.
Placing {\em nonactive} macromolecular (e.g., colloidal) particles (inclusions) in a medium filled with smaller particles (e.g., polymers; the depletants) induces an effective, short-ranged, attractive interaction between them \cite{Lekkerkerker:book2011,Likos:PR2001,Likos:PRL2003,Angelani:PRL2011,Harder:JCP2014,Cacciuto:PRE2016,Cacciuto:SoftMatter2016,Leite:PRE2016,Zaeifi:JCP2017,Dolai:SoftMatter2018,Yang:JPCM2018}. This force depends mainly on the shape and propulsion of the bath particles. For example, for a colloidal disk immersed in a bath of disk-shaped nonactive particles (no self-propulsion), this force is purely attractive and has an entropic origin. However, introducing particle self-propulsion changes this effective force into an induced short-range repulsive interaction \cite{Harder:JCP2014,Zaeifi:JCP2017}. In this case, the surface of the large inclusions serves as a spatial confinement for the suspended smaller particles. In our previous work \cite{Zaeifi:JCP2017}, we showed that the chirality of the {\em active} particles changes this short-ranged attractive force into an induced repulsive force. Here, the formation of circular layers (or \emph{`rings'}) of active particles around the colloidal inclusions plays a critical role, and the induced effective force is mainly determined by the interactions of these rings. Depletion interactions have been studied extensively, both experimentally and theoretically, in the context of nonactive macromolecular mixtures in equilibrium restricted to confined geometries \cite{Asakura:JCP1954,Trokhymchuk:Langmuir2001,Xiao:EPL2006,Chervanyov:PRE2011,Rahul:SoftMatter2013,Usatenko:EPJST2017,Lekkerkerker:book2011}. However, much less is known about these interactions in the case of active particles.
In this paper, we tackle this problem using a simplified and standard model of disk-shaped inclusions in an active bath, within a commonly used, two-dimensional model of self-propelled particles constrained to a narrow channel. When walls are introduced into a mixture of colloidal inclusions in an active bath, swimmer layers form close to the walls, owing to the surface-swimmer interactions, in addition to the circular layers forming around the colloids. The sequential overlaps between the colloidal rings and the wall layers generate the distinct features of the force profiles. In the competition between the colloids and the channel walls as accumulation sites for the active particles, the wall layers are more populated, because of the convex curvature of the colloidal surfaces. The density of the layers, however, depends strongly on the swim speed. We investigate the effects of these parameters, along with the orientation of the colloidal inclusions, on the effective interactions mediated between the colloidal disks. In our previous work \cite{Zaeifi:JCP2017}, we investigated the effect of layer formation on the interactions between colloidal inclusions in a bath of active particles and focused on the regime of low occupied area fractions. Here, we take a similar approach but confine the system to a narrow channel, in order to study the effects of channel confinement, and consider both relatively low and relatively high area fractions, in all cases below the onset of motility-induced phase separation \cite{Gompper:SM2018}. Since the channel introduces an anisotropy into the system, we consider two different orientations for the colloidal inclusions, with their center-to-center axis being parallel or perpendicular to the channel walls.
The rest of this paper is organized as follows: In Section~\ref{sec:Model}, we introduce our model and the details of the numerical simulation techniques used in this work, focusing on a standard two-dimensional model of self-propelled particles. In Section~\ref{sec:results}, we discuss the results for the depletion interaction between two disk-shaped inclusions in a bath of self-propelled swimmers at two different colloidal orientations, Section~\ref{sec:Horz} for the parallel alignment and Section~\ref{sec:Vert} for the perpendicular alignment. Both cases are studied at two different area fractions of the bath particles. The paper is concluded in Section~\ref{sec:Conclusion}. \section{Model and methods} \label{sec:Model} Our two-dimensional model consists of a pair of identical, impenetrable, and nonactive colloidal inclusions of radius $a_c$ confined within a narrow channel of height $H$, containing $N$ identical, self-propelled Brownian particles (swimmers) of smaller radius $a$ at area fraction $\phi$. Active particles are self-propelled at constant speed $V_s$ along a direction specified by an angular coordinate, $\theta$; hence, their swim orientation is characterized by the unit vector ${\mathbf n}=(\cos \theta, \sin \theta)$ within the $x$--$y$ plane. The particles are subjected to both translational and rotational diffusion with coefficients $D_T$ and $D_R$, respectively. We ignore active noise effects and assume that the diffusive processes are thermal in nature; therefore, $D_T$ is related to the particle (translational) mobility through the Einstein relation, $D_T = \mu_T k_{\mathrm{B}}T$, where $k_{\mathrm{B}}$ is the Boltzmann constant and $T$ is the ambient temperature. Also, for a no-slip sphere in the low-Reynolds-number regime, as assumed for the model particles here, the translational and rotational diffusion coefficients are related as $D_R=3D_T/4a^2$ \cite{happel:book1983}.
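As a quick illustration of these relations, one can evaluate $D_T$ and $D_R$ for a micron-sized no-slip sphere in water at room temperature. The parameter values below are assumptions chosen to match the typical experimental numbers quoted later in the text, not outputs of our simulations:

```python
import math

# Illustrative evaluation of the diffusion coefficients for a no-slip
# sphere of radius 1 micron in water at room temperature (assumed values).
kB = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0           # ambient temperature, K
eta = 0.001         # shear viscosity of water, Pa s
a = 1.0e-6          # particle radius, m

# Einstein relation with the Stokes mobility mu_T = 1 / (6 pi eta a).
D_T = kB * T / (6.0 * math.pi * eta * a)

# Low-Reynolds-number relation for a no-slip sphere: D_R = 3 D_T / (4 a^2).
D_R = 3.0 * D_T / (4.0 * a**2)

print(f"D_T ~ {D_T * 1e12:.2f} um^2/s")  # ~0.22 um^2/s
print(f"D_R ~ {D_R:.2f} 1/s")            # ~0.16 1/s
```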
The dynamics of the active particle position, ${{\mathbf r}}_i(t)$, and orientation, ${\theta}_i(t)$, obey the overdamped Langevin equations \begin{eqnarray} \dot{{\mathbf r}}_i &= &V_s{\mathbf n}_i-\mu_T\frac{\partial U(\{{\mathbf r}_j\})}{\partial {{\mathbf r}_i}}+\sqrt{2 D_T}\, {\boldsymbol \eta}_i(t), \label{Eq:langevin_a} \\ \dot{\theta}_i&=& \sqrt{2D_R}\, \zeta_i(t), \label{Eq:langevin_b} \end{eqnarray} where ${\boldsymbol \eta}_i(t)$ and $\zeta_i(t)$ are independent Gaussian white-noise terms, representing random force and random torque respectively, with zero mean, $ \langle { \eta}_i^\alpha(t) \rangle= \langle \zeta_i(t) \rangle=0$, and two-point time correlation functions $ \langle { \eta}_i^\alpha(t) { \eta}_j^\beta(t') \rangle=\delta_{ij}\delta_{\alpha\beta}\delta(t-t')$ and $ \langle \zeta_i(t) \zeta_j(t') \rangle=\delta(t-t')$, with $i, j=1,\ldots,N$ and $\alpha, \beta=1, 2$, denoting the Cartesian coordinates. $U$ in Eq. (\ref{Eq:langevin_a}) is the sum of pair potentials acting between any pairs of particles in the system. The particle-particle $(V({\mathbf r}_{ij}))$ and particle-wall interactions $(V_{\mathrm{W}}^\pm(y_i)$ with $\pm$ denoting the top and bottom walls) are modeled using the purely repulsive pair potentials according to a conventionally modified form of the Weeks-Chandler-Andersen (WCA) potential, i.e., \begin{equation} V({\mathbf r}_{ij}) = \left\{\begin{array}{ll} 4\epsilon\! \left [ \left ( \frac{\sigma_{\mathrm{eff}}}{|{\mathbf r}_{ij}|} \right )^{12} -2 \left ( \frac{\sigma_{\mathrm{eff}}}{|{\mathbf r}_{ij}|} \right )^{6}+1\right ] &\,\, |{\mathbf r}_{ij}| \leq \sigma_{\mathrm{eff}}, \\ \\ 0 &\,\, |{\mathbf r}_{ij}| > \sigma_{\mathrm{eff}}, \end{array}\right. \end{equation} and \begin{equation} V_{\mathrm{W}}^\pm(y_i) = \left\{\begin{array}{ll} 4\epsilon\! 
\left [ \left ( \frac{a}{|\delta y_i^\pm|} \right )^{12} -2 \left ( \frac{a}{|\delta y_i^\pm|} \right )^{6}+1\right ] &\,\, |\delta y_i^\pm| \leq a, \\ \\ 0 &\,\, |\delta y_i^\pm| > a, \end{array}\right. \end{equation} respectively. Here, $|{\mathbf r}_{ij}|$ is the center-to-center distance between any two particles, with $\sigma_{\mathrm{eff}}=2a$ when both particles are active particles, $\sigma_{\mathrm{eff}}=2a_c$ when both are colloidal inclusions, and $\sigma_{\mathrm{eff}}=a+a_c$ when one is an active particle and the other a colloidal inclusion. Also, $\delta y_i^\pm=y_i\mp H/2$ is the perpendicular distance of the particle from the top/bottom wall. In all cases, it is assumed that the interaction energy strength is given by a single parameter, $\epsilon$. \begin{figure} \begin{center} \includegraphics[width=3.0in]{Fig_1} \caption{Schematic view of two fixed, identical, impenetrable, and nonactive colloidal disks of radius $a_c$ at surface-to-surface distance $\Delta$ inside a channel of height $H$, containing active particles of radius $a$ and self-propulsion speed $V_s$. } \label{fig:schem_colloids} \end{center} \end{figure} Our assumption of thermal noise through the Langevin equations ensures that the system approaches its appropriate Boltzmann-weighted equilibrium determined by the potential $U(\{{\mathbf r}_j\})$ in the stationary state, when the active forces are switched off. We use a dimensionless representation in our simulations by rescaling the units of length and time with the radius of the active particles and the corresponding time-scale for translational diffusion as \begin{equation} \tilde x = \frac{x}{a},\quad \tilde y = \frac{y}{a},\quad \tilde t = \frac{D_T t}{a^{2}},\quad \tilde H = \frac{H}{a}.
\label{Eq:dimensionless} \end{equation} Equations (\ref{Eq:langevin_a}) and (\ref{Eq:langevin_b}) can be solved numerically using Brownian Dynamics simulations by rewriting them in dimensionless and discrete forms for time-evolution over a sufficiently small time-step, $\Delta \tilde t$, as \begin{eqnarray} &&\tilde x_i(\tilde t +\Delta \tilde t ) = \tilde x_i(\tilde t)+[Pe_s \cos \theta_i(\tilde t) + \tilde f^x_i(\tilde t)] \Delta \tilde t + \sqrt{2 \Delta \tilde t}\,R^x_i\nonumber\\ \\ &&\tilde y_i(\tilde t +\Delta \tilde t ) = \tilde y_i(\tilde t)+[Pe_s \sin \theta_i(\tilde t) + \tilde f^y_i(\tilde t)] \Delta \tilde t + \sqrt{2 \Delta \tilde t}\,R^y_i\nonumber\\ \\ &&\theta_i (\tilde t +\Delta \tilde t ) = \theta_i(\tilde t) + \sqrt{2 \chi \Delta \tilde t}\,R^\theta_i, \label{Eq:2Ddimensionless} \end{eqnarray} where $\tilde f^x_i = -\partial \tilde U/\partial \tilde x_i$ and $\tilde f^y_i = -\partial \tilde U/\partial \tilde y_i$ are the Cartesian components of the dimensionless force derived from the rescaled potential $\tilde U=U/(k_{\mathrm{B}}T)$, and $R^x_i$, $R^y_i$ and $R^\theta_i$ are independent, Gaussian random numbers with zero mean and unit variance. We have defined $\chi = a^2D_R/D_T = 3/4$, which, as noted before, follows from the standard relations for diffusion coefficients of spherical particles \cite{happel:book1983}. We also define the rescaled swim P\'eclet number, \begin{equation} Pe_s=\frac{a V_s}{D_T}, \end{equation} as the ratio of the characteristic time-scale of translation diffusion, $a^2/D_T$, and that of the particle swim, $a/V_s$. In the rescaled representation, the system is described by the size ratio, $a_c/a$, the rescaled area fraction, $\phi a^2$, the swim P\'eclet number, $Pe_s$, and the rescaled center-to-center distance between adjacent colloidal inclusions, $d/a$, or equivalently, their rescaled surface-to-surface distance, $\tilde \Delta = \Delta/a$ (see Fig. \ref{fig:schem_colloids}). 
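The discretized update scheme above can be sketched as a minimal Brownian Dynamics step. The two-swimmer setup, parameter values, and function names below are illustrative assumptions and do not reproduce our production code; the pair force follows the modified WCA potential defined above:

```python
import numpy as np

rng = np.random.default_rng(0)

EPS = 10.0   # epsilon / (kB T)
CHI = 0.75   # a^2 D_R / D_T for a no-slip sphere
PE_S = 5.0   # swim Peclet number
DT = 1e-5    # rescaled time step

def wca_forces(pos, sigma_eff=2.0):
    """Pair forces from V(r) = 4 eps [(s/r)^12 - 2 (s/r)^6 + 1] for r <= s."""
    f = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r = np.hypot(*rij)
            if r < sigma_eff:
                s6 = (sigma_eff / r) ** 6
                mag = 48.0 * EPS * (s6 * s6 - s6) / r  # -dV/dr (repulsive)
                f[i] += mag * rij / r
                f[j] -= mag * rij / r
    return f

def bd_step(pos, theta):
    """One Euler-Maruyama step of the rescaled Langevin equations."""
    n_hat = np.column_stack((np.cos(theta), np.sin(theta)))
    f = wca_forces(pos)
    pos = pos + (PE_S * n_hat + f) * DT \
        + np.sqrt(2.0 * DT) * rng.standard_normal(pos.shape)
    theta = theta + np.sqrt(2.0 * CHI * DT) * rng.standard_normal(theta.shape)
    return pos, theta

# Two overlapping swimmers aimed at each other relax toward a separation
# near sigma_eff = 2, where the WCA repulsion balances the self-propulsion.
pos = np.array([[0.0, 0.0], [1.5, 0.0]])
theta = np.array([0.0, np.pi])
for _ in range(100):
    pos, theta = bd_step(pos, theta)
print("separation:", np.hypot(*(pos[1] - pos[0])))
```

In the full simulations, the deterministic force additionally includes the swimmer-colloid and swimmer-wall contributions, and neighbor lists replace the $O(N^2)$ double loop.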
In what follows, we use the fixed parameter values $\epsilon/(k_{\textup{B}}T)=10$, $\phi a^2 = 0.1$ and $0.3$, and $a_c/a = 5$, and consider only two and three fixed colloidal inclusions in the simulated active bath (the more general cases with many fixed or mobile colloidal inclusions will be considered elsewhere \cite{Zaeifi:JCP2017}). The swim P\'eclet number is increased from $Pe_s=0$ up to $Pe_s=50$. Typical rescaled parameter values such as $Pe_s = 5$ can thus be mapped to a wide range of experimentally accessible, actual parameter values such as $a = 1\, \mu{\mathrm{m}}$, $V_s = 10\, \mu{\mathrm{m}}\cdot{\mathrm{s}}^{-1}$, $D_T \simeq 0.22\, \mu{\mathrm{m}}^2\cdot{\mathrm{s}}^{-1}$, $D_R \simeq 0.16\,{\mathrm{s}}^{-1}$, and $\eta = 0.001\, {\mathrm{Pa\cdot s}}$. Our simulations are performed using $N = 400$ active bath particles distributed initially at random positions in the channel, which is bounded laterally (along the $x$-axis) by a box of lateral size $\tilde {L}= 4 N \pi / \phi \tilde H$, where we impose periodic boundary conditions. The simulations typically run for $10^6-10^8$ time steps (with an initial $10^6$ steps used for relaxation purposes), and averaged quantities are calculated over a sufficiently large statistical sample (by choosing different initial conditions) after the system reaches a steady state. One of the key quantities that we shall use later in the text is the net force exerted on the colloidal inclusions by the individual bath particles colliding with them. This force is calculated as the averaged sum of the forces ($\tilde f^x_i$ and $\tilde f^y_i$) arising from swimmer-colloid collisions. The bath of our system is homogeneous and isotropic; therefore, for a single colloidal inclusion, the {\em net} force due to bath-particle collisions averages to zero (the forces applied from either side of the colloid are equal in magnitude and cancel each other out).
However, in the case of two colloidal inclusions, depending on their orientation, the isotropy of the bath along the center-to-center axis of the colloids is broken and, as a result, a {\em net} force is exerted on the colloidal inclusions. This nonzero force is interpreted as an effective two-body force and is defined as $\tilde {\mathbf F}_2 = \tilde F_2 \hat{\mathbf x}$. This quantity is calculated in our simulations by averaging over different configurations as $\tilde {\mathbf F}_2 = \sum_i \langle\tilde {\mathbf f}_i\rangle$, where the brackets $\langle\cdots\rangle$ represent the statistical average over different configurations and $\tilde {\mathbf f_i}=(\tilde f_i^x, \tilde f_i^y)$ is the force applied to the $i$th particle by neighboring particles that collide with it via the WCA interaction potential. For the sake of demonstration, we rescale the force by the swim P\'eclet number; to include the case $Pe_s=0$, we conventionally divide the effective force values by $Pe_s+1$, which defines the quantity $\hat F_2 \equiv \tilde F_2/(Pe_s+1)$. \section{Results} \label{sec:results} Here, we will focus on two different orientational configurations for the fixed colloidal inclusions inside the channel, i.e., with their center-to-center line being either parallel or perpendicular to the $x$-axis, in such a way that the up-down symmetry with respect to the centerline of the channel is preserved. In each case, we select two different area fractions, $\phi a^{2} = 0.1$ and 0.3, corresponding to low and high area fractions (in both cases below the onset of motility-induced phase separation \cite{Gompper:SM2018}).
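The construction of $\hat F_2$ from sampled collision forces can be illustrated with a toy snippet; the force series below is synthetic, chosen purely for illustration, and carries no physical meaning:

```python
import numpy as np

rng = np.random.default_rng(1)
PE_S = 50.0

# Synthetic steady-state samples of the net x-force on one inclusion:
# a weak mean repulsion buried in collision noise (illustrative values).
fx_samples = 12.0 + 40.0 * rng.standard_normal(200_000)

F2 = fx_samples.mean()        # statistical average over configurations
F2_hat = F2 / (PE_S + 1.0)    # conventional rescaling, valid also at Pe_s = 0

print(f"F2_hat ~ {F2_hat:.3f}")
```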
\\ \subsection{Parallel orientation} \label{sec:Horz} \subsubsection{Low area fraction $(\phi a^{2} = 0.1)$} \label{sec:rho0p1Horz} Figure \ref{fig:fig2} shows our simulation data for the rescaled effective two-body force acting on each of the colloidal inclusions as a function of the colloidal surface-to-surface distance $(\tilde \Delta)$ at different P\'eclet numbers and for a fixed rescaled channel height of $\tilde H = 15$. Because of the symmetric positional arrangement of the inclusions within the channel, the net (statistically averaged) forces acting on the two disks are equal in magnitude, with the $y$ (perpendicular-to-channel) component averaging out to zero. Thus, Fig.~\ref{fig:fig2} shows only the $x$ component of the rescaled net force acting on each of the inclusions due to the active particles. At vanishing and small swim P\'eclet numbers $Pe_s$, similar to the passive case, the rescaled force is attractive \cite{Lekkerkerker:book2011} over short distances, where the depletion layers (thin, dark-blue, circular regions in Fig. \ref{fig:fig3}, top) around the colloids can overlap. However, upon increasing the P\'eclet number of the bath particles, the rescaled force exerted on the colloidal disks becomes repulsive. The repulsive force profiles exhibit nonmonotonic behavior as a function of the separation between the colloids, but with distinctly different features than those found in the nonconfined (bulk) geometry, where the two disks are immersed in an unbounded active bath \cite{Harder:JCP2014,Zaeifi:JCP2017}.
As was shown for the bulk case \cite{Zaeifi:JCP2017}, the nonmonotonicity of the interaction profiles is linked with swimmer structuring around the colloidal disks; i.e., the swimmers form ring-like, high-density regions around the colloidal disks, and the sequential overlaps of these rings from one colloid with the surface of the other, as the distance between the disks is decreased, result in nonmonotonic changes (an alternating, or oscillating, rise-and-fall behavior) in the force profiles. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_2} \caption{Rescaled effective two-body force, $\hat F_2$, acting on each of the two colloidal inclusions immersed parallelly in an active bath with area fraction $\phi a^{2} = 0.1$, shown as a function of their rescaled surface-to-surface distance, $\tilde \Delta$, at a fixed rescaled channel height of $\tilde H = 15$ and for different values of the P\'eclet number, $Pe_s$, as indicated on the graph. Note that the plot shows the effective force divided by $Pe_s+1$, as defined in the text. Symbols are simulation data and lines are guides to the eye. Inset: Contour plot of $\hat F_2$ as a function of the system parameters $Pe_s$ and $\tilde \Delta$. The magnitudes of attractive and repulsive forces are separated by different contour lines and colors. } \label{fig:fig2} \end{figure} In a confined channel (Fig. \ref{fig:fig2}), we find the following qualitative differences relative to the bulk system at the same area fraction: First, rather than the typical alternating rise-and-fall behavior (see Fig. 2 in Ref. \cite{Zaeifi:JCP2017}, and Section \ref{sec:rho0p3Horz} below), here we find a single, relatively broad peak at short surface-to-surface separations. Second, the typical force magnitude increases with the P\'eclet number until it reaches a maximum and decreases afterward; this occurs as the location of the peak in the force profiles moves to smaller separations.
These two features are more clearly discerned through the color-coded contour plots of the force profiles in the $Pe_s-\tilde \Delta$ plot shown in the inset of Fig. \ref{fig:fig2}. They reflect the fact that, unlike the bulk system, most of the swimmers in the present case are absorbed by the bounding channel walls, because the typical swimmer `detention' times on a flat surface are larger than those on a convex surface \cite{Lowen:JPCM2001,Malgaretti:JCP2017}. Thus, at a given $Pe_s$, a smaller fraction of swimmers are bound to the colloids as compared with the bulk system, and only a single `ring' or swimmer layer forms around each of the colloids at the area fraction considered here (thin, white, circular regions in Fig. \ref{fig:fig3}, bottom); hence, a single hump develops in the force profiles due to the ring-colloid surface overlaps at small surface-to-surface separations \cite{Zaeifi:JCP2017}. \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{Fig_3} \caption{Steady-state particle density maps for $Pe_s = 0$ (top) and $Pe_s = 50$ (bottom) around two colloidal inclusions located parallelly with $\tilde \Delta = 2$, $\tilde H = 15$ and $\phi a^{2} = 0.1$. } \label{fig:fig3} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{Fig_4} \caption{Effective two-body force, $\hat F_2$, acting on each of the two colloidal inclusions immersed parallelly in an active bath is shown as a function of rescaled channel height, $\tilde H$, for $Pe_s = 50$, $\phi a^{2} = 0.1$, and two fixed values of $\tilde \Delta$, as indicated on the graph. Symbols are simulation data and lines are guides to the eye.} \label{fig:fig4} \end{figure} Also, as the swim P\'eclet number is increased, the fraction of swimmers attracted to the channel wall increases making the proximity of the colloids less swimmer-populated and, as a consequence, the repulsive force is smaller in magnitude. As seen in Fig. 
\ref{fig:fig3}, the swimmer absorption by the channel walls leads to the formation of more than one flat layer of swimmers next to the walls (here, two layers, shown by the white strips at each wall). These layers can even overlap with the rings formed around the colloids. To illustrate the effects of these boundary layers, we fix the position of the colloidal inclusions at two different surface-to-surface distances, $\tilde \Delta = 0.4$ and $\tilde \Delta = 2.4$, and plot the rescaled force, $\hat F_2$, acting on each of the colloids as a function of the channel width, $\tilde H$, in Figure~\ref{fig:fig4}. The force at the shorter colloidal distance is larger, but its behavior as a function of $\tilde H$ remains qualitatively the same: nonmonotonic, with a local minimum at around $\tilde H\simeq 16.2$. This is the channel width at which the layers associated with the walls avoid overlapping the rings associated with the colloids. For channel widths above this value, the force increases linearly with $\tilde H$. This is because, at fixed particle number and area fraction, the lateral length of the channel decreases as $\tilde H$ increases. Owing to the low density of swimmers, only two layers exist at the walls; as a result, the density of swimmers in the bulk of the channel increases, and this causes an enhancement of the rings around the colloids. At much larger channel widths, the force is expected to eventually drop and tend to its bulk value \cite{Harder:JCP2014,Zaeifi:JCP2017}, albeit to one with a smaller corresponding effective area fraction, as a fraction of the swimmers remain indefinitely bound to the channel walls.
\begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_5} \caption{Rescaled effective two-body force, $\hat F_2$, acting on each of the two colloidal inclusions immersed parallelly in an active bath with an occupied area fraction of $\phi a^{2} = 0.3$, shown as a function of their rescaled surface-to-surface distance, $\tilde \Delta$, at a fixed rescaled channel height of $\tilde H = 15$ for the cases $Pe_s=10$ and 50. Note that the plot shows the effective force divided by $Pe_s+1$, as defined in the text. Symbols are simulation data and lines are guides to the eye. Inset: Contour plot of $\hat F_2$ as a function of the system parameters $Pe_s$ and $\tilde \Delta$. The magnitudes of attractive and repulsive forces are separated by different contour lines and colors. } \label{fig:fig5} \end{figure} \subsubsection{High area fraction $(\phi a^{2} = 0.3)$} \label{sec:rho0p3Horz} The area fraction occupied by the swimmers is another key parameter that significantly affects the two-body force mediated between two colloidal inclusions in a bath of active particles. Figure \ref{fig:fig5} shows that the force behavior as a function of the colloidal surface-to-surface distance changes drastically at the higher area fraction of $\phi a^{2} = 0.3$, when compared to the behavior of the low-area-fraction system in Fig. \ref{fig:fig2} ($\phi a^{2} = 0.1$) with otherwise identical parameter values, i.e., $\tilde H=15$ and $Pe_s=50$. As seen, the force profile exhibits the alternating, or oscillating, behavior which, as noted in the preceding section, emerges at even lower area fractions in bulk systems \cite{Zaeifi:JCP2017}. Such behavior is reflective of the formation of ringlike structures (circular layers with high swimmer density) around the colloids. Unlike in the low-area-fraction case (Fig. \ref{fig:fig3}, bottom), here we find not only a primary but also a secondary ring.
This is because a larger fraction of swimmers is available to be attracted to the colloidal surfaces at higher area fractions. As the surface-to-surface distance between the colloids decreases, first the outer (secondary) ring of one colloid intersects the surface of the other colloid, and then the inner (primary) ring does so. These surface intersections give rise to the two peaks in the force profile in Fig. \ref{fig:fig5}; see Ref. \cite{Zaeifi:JCP2017} for further details. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_6} \caption{Effective two-body force, $\hat F_2$, acting on each of the two colloidal inclusions immersed parallelly in an active bath with an occupied area fraction of $\phi a^{2} = 0.3$, shown as a function of the rescaled channel height, $\tilde H$, for fixed $Pe_s = 50$ and $\tilde \Delta = 0.6$. Symbols are simulation data and lines are guides to the eye. } \label{fig:fig6} \end{figure} \begin{figure} \centering \includegraphics[width=7cm]{Fig_7_2.jpg} \caption{Steady-state density maps of active microswimmers with $Pe_s = 50$ around two colloidal inclusions located parallelly with $\tilde \Delta = 2$, for different channel heights, $\tilde H$, as indicated on the graph. The occupied area fraction of the active bath is $\phi a^{2} = 0.3$.} \label{fig:fig7} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_8} \caption{Effective two-body force, $\hat F_2$, acting on each of the two colloidal inclusions immersed parallelly in an active bath with fixed $\phi a^{2} = 0.3$ and $Pe_s = 50$, for the rescaled channel width, $\tilde H$, values shown on the graph. Symbols are simulation data and lines are guides to the eye. } \label{fig:fig8} \end{figure} The alternating rise-and-fall behavior of the two-body force $\hat F_2$ is shown across a wide range of $Pe_s$ in the force contour plot of Fig.
\ref{fig:fig5}, clearly indicating that the said behavior starts to appear at relatively small P\'eclet numbers, $Pe_s<10$ (note that the error bars in the plotted values of $\hat F_2$ are $\lesssim 0.5$). The positive (repulsive) peaks (yellow/red regions, e.g., marked by B and D) are already well developed for $Pe_s>10$. The force drops to negative (attractive) minimum values (purple regions, e.g., marked by A, C and E) as $Pe_s$ is increased further, i.e., $Pe_s>20$. The two-body force also shows a rather complex alternating behavior when the colloidal distance is fixed and the rescaled channel height is varied. This behavior is shown in Fig. \ref{fig:fig6}, where we fix $Pe_s = 50$ and $\tilde \Delta = 0.6$ and plot $\hat F_2$ as a function of $\tilde H$. The origin of this behavior can be traced back to the flat boundary layers of swimmers formed at the two channel walls and the overlaps they produce with the circular layers (rings) formed around the colloids. These overlaps can be quite prominent and result in an intriguing pattern of highly populated swimmer regions (appearing as white/yellow spots, layers and arcs in the panels shown in Fig.~\ref{fig:fig7}), reminiscent of wavelike interference patterns. As seen in the bottom-most panel for $\tilde H=50$, there are quite a few flat layers at the walls and around the colloids (up to seven layers may be discernible at each of the channel walls and three rings around each of the colloids). As $\tilde H$ is decreased, the rings around the colloids become further enhanced (more strongly populated by the swimmers), while the flat layers at the walls become more suppressed, especially at distances further away from the central colloids. Due to their complexity, a full understanding of the impact of these overlapping patterns remains to be developed.
The two-body force profiles shown as functions of the surface-to-surface distance in Figure \ref{fig:fig8} indicate that the effects of the channel height on the force diminish already for $\tilde H=30$ and 40, where the force values nearly saturate; the confinement effects can thus be considered substantial only for $\tilde H\lesssim 20$. While strengthening the confinement initially increases the forces in a quantitative and monotonic fashion across all distances, $\tilde \Delta$, the effects become of a qualitative nature in very narrow channels, e.g., for $\tilde H=15$. \subsection{Perpendicular orientation} \label{sec:Vert} To illustrate the intriguing aspects of the attractive and repulsive forces mediated by the swimmers, we proceed by considering a perpendicular configuration of the colloidal particles confined in the channel. To be consistent with the previous section, we consider both low and moderately high occupied area fractions for the bath particles. \subsubsection{Low area fraction $(\phi a^{2} = 0.1)$} \label{sec:rho0p1Vert} In the perpendicular configuration (see, e.g., Fig. \ref{fig:fig2}), the fixed colloidal inclusions in the bath act as a barrier to the swimmers. In Fig.~\ref{fig:fig9}, we show the simulated effective two-body force acting on the inclusions as a function of rescaled channel height, $\tilde H$, for different values of the P\'eclet number. The surface-to-surface distance of the colloids is kept fixed at $\tilde \Delta = 2$ (equal to the diameter of an active particle). Because of the symmetric positional arrangement of the inclusions within the channel, the net (statistically averaged) forces acting on the two disks are equal in magnitude, with the $x$ (parallel-to-channel) component averaging out to zero. Thus, Fig.~\ref{fig:fig9} shows only the $y$ component of the rescaled net force acting on each of the inclusions due to the active particles.
As seen in the figure, the force induced by the active particles on the colloidal inclusions is attractive, in contrast with the predominantly repulsive force predicted to occur in the bulk system \cite{Harder:JCP2014,Zaeifi:JCP2017}. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_9} \caption{ Rescaled effective two-body force, $\hat F_2$, acting on each of the two colloidal inclusions immersed perpendicularly in an active bath is shown as a function of rescaled channel height, $\tilde H$, for different values of the P\'eclet number and at fixed $\tilde \Delta = 2$ and $\phi a^{2} = 0.1$. Symbols are simulation data and lines are guides to the eye. The situations corresponding to the points A, B and C are discussed in the text. The error bars in all cases are typically of the same size as shown for the nonactive case (purple symbols). } \label{fig:fig9} \end{figure} \begin{figure} \includegraphics[width=1.0\linewidth]{Fig_10_2.jpg} \caption{Steady-state density map of active swimmers with $Pe_s = 0$ (top) and $Pe_s = 10$ and $50$ (bottom), around two colloidal inclusions located perpendicularly with $\tilde \Delta = 2$ and $\phi a^{2} = 0.1$, for different channel heights, $\tilde H$. } \label{fig:fig10} \end{figure} The two-body force exhibits a nonmonotonic, oscillating behavior, with the force magnitude exerted on the colloidal disks showing large variations in narrow channels (e.g., $\tilde H<28$) at fixed bath activity strength; the magnitude also shows an overall increase with the bath activity strength. These features indicate swimmer structuring around the colloidal disks \cite{Zaeifi:JCP2017}. Let us focus on the exemplary case of $Pe_s = 10$ (orange left-triangles in Fig.~\ref{fig:fig9}), in which case the point marked A in the figure is the one giving the maximum magnitude of the attractive force.
This point corresponds to the channel height of $\tilde H=25$ and, since the inter-colloid gap is fixed at $\tilde \Delta=2$, the narrow gap between the top (bottom) colloid and the top (bottom) channel wall is only $3/2$ of a single swimmer radius. In cases like this, where the surface-to-surface distance between the colloidal inclusions stays constant, the main factor influencing the effective two-body force is the width of the upper and lower gaps. Any swimmer attempting to pass through these narrow passages squeezes into the first circular swimmer layer that forms around the colloidal inclusions (see Fig.~\ref{fig:fig10}), producing a large attractive force between the colloids. In other words, this force is mediated by the intersection between the first wall layers and the colloidal surfaces. As the channel gets wider, the induced force on the colloidal inclusions decreases until there is enough space between the colloidal layers and the wall to fit a complete swimmer particle (point B, at $\tilde H = 26$). Beyond this point, the wall layers move away from the colloidal surfaces; their overlap therefore weakens and, as a result, the attractive force mediated by their intersection becomes weaker. The attractive force between the colloidal inclusions then increases again with the channel height, as the particles attempt to enter the gap between the layers formed around the colloids and the walls, up to point C in the figure; it starts to decrease again as the gap widens, until second layers can form at the walls. As expected, the force arising from the intersection between the first wall layers and the colloidal rings has a stronger effect than that of the second wall layers, owing to the lower swimmer population of the latter. This behavior shifts to smaller distances as $Pe_s$ increases, mainly because stronger layer overlaps become possible for the bath particles.
Figure \ref{fig:fig10} shows the steady-state density plots, along with the {\it y} density profiles, for $Pe_s = 10$ and $Pe_s = 50$ as two examples representing the points A, B and C from Fig.~\ref{fig:fig9}. In the case of passive bath particles, there is no layer formation around the colloidal inclusions or the channel walls; the bath represents a homogeneous medium, and the {\it y} profile shows a uniform distribution of the particles along the channel width. By increasing the activity of the bath particles, as expected, the density plots show an aggregation of particles around the colloidal inclusions and the channel walls, with additional layers forming around both. At low P\'eclet numbers (e.g., $Pe_s = 10$), the population of swimmers in the area between the two walls is high and a second layer can form around the colloidal inclusions (a second layer also forms at the walls, but not a third one). By increasing the P\'eclet number, however, the second layer around the colloidal inclusions disappears and, instead, a third layer forms at the walls. The migration of swimmers from the colloidal layers to the channel-wall layers can be detected in this figure. The \textit{y}-profile shows how the height of the peaks of the wall layers increases as the P\'eclet number increases. In addition, for a given P\'eclet number, as expected, the height of the peaks (the population of microswimmers) increases as the channel height increases. \subsubsection{High area fraction $(\phi a^{2} = 0.3)$} \label{sec:rho0p3Vert} We now consider the same system as above but with a larger area fraction, $\phi a^{2} = 0.3$, occupied by the swimmers.
Figure~\ref{fig:fig11} shows the results for the effective two-body force acting on the inclusions as a function of rescaled channel height, $\tilde H$, for the case $Pe_s = 50$. An oscillating force arises between the two colloidal inclusions, and its attractive part gradually decays as the channel height increases. The magnitude of the repulsive part, however, stays almost the same, as the channel height has less effect on the repulsive part of the force. This is because of the constant surface-to-surface distance chosen for this case: the repulsive force between the colloidal inclusions is dominated by the swimmer particles that locate themselves in the area between the two colloidal disks, and since this distance is kept constant, the magnitude of the maximum repulsive force stays unchanged. The attractive force, by contrast, comes from the particles located in the areas between the colloidal inclusions and the walls; as the channel gets wider, this contribution weakens and the magnitude of the maximum attractive force decreases. In other words, the repulsive force is dominated by the intersection of the first and second rings of one colloid with the surface of the other colloid; since the number of rings around the colloids stays unchanged as a function of channel height, their repulsive interaction stays unchanged. The attractive force, on the other hand, is dominated by the intersection of the channel-wall layers with the colloidal surfaces, which decays as the channel gets wider (see Fig.~\ref{fig:fig12}). \begin{figure} \includegraphics[width=1.0\linewidth]{Fig_11} \caption{Rescaled effective two-body force, $\hat F_2$, acting on each of the two colloidal inclusions immersed perpendicularly in an active bath is shown as a function of rescaled channel height, $\tilde H$, for P\'eclet number $Pe_s = 50$, $\tilde \Delta= 2.0$, and $\phi a^{2} = 0.3$. Note that the plot shows the effective force divided by $Pe_s+1$, as defined in the text.
Symbols are simulation data and lines are guides to the eye. } \label{fig:fig11} \end{figure} Figure \ref{fig:fig12} shows some of the corresponding steady-state density plots from Fig.~\ref{fig:fig11} as examples. As a result of the high concentration of active particles in the bath, a large number of layers forms at the channel walls even in the absence of colloidal inclusions (not shown). When the colloidal particles are introduced in the perpendicular orientation, they act as a barrier and dynamic cluster structures form around them. Even though multilayers form around the colloidal disks at small wall separations, moving the two walls away from each other decreases the number of layers around the colloids and increases the number of layers at the walls, owing to the higher affinity of the swimmers for the flat walls than for the convex surfaces of the colloids. The {\it y}-profiles show this migration of layers from the colloids to the walls, as the heights of the wall peaks increase when the channel gets wider. \begin{figure} \includegraphics[width=6.0cm]{Fig_12_2.jpg} \caption{Steady-state density map of swimmers with $Pe_s = 50$, around two colloidal inclusions located perpendicularly with $\tilde \Delta = 2$ in channels of different heights, $\tilde H$, as indicated on the graph, at fixed $\phi a^{2} = 0.3$. } \label{fig:fig12} \end{figure} \section{Conclusion} \label{sec:Conclusion} We have studied the interactions between colloidal inclusions in a confined two-dimensional geometry containing a bath of self-propelled active particles. The effect of confinement on the effective interaction induced by the swimming bath particles on the colloidal particles has been investigated as a function of the orientation of the colloidal particles, the strength of the confinement, and the strength of the propelling force of the swimmers.
Our results show that the confinement has a strong effect on the interactions between the colloidal particles; this effect, however, depends mainly on the colloidal orientation inside the channel, and it becomes more dominant as the self-propulsion increases. In a narrow channel, and unlike the bulk case \cite{Harder:JCP2014,Zaeifi:JCP2017}, the orientation of the colloidal particles plays a critical role in the force applied to them by the active bath particles. Since the active bath is homogeneous and isotropic, these forces sum up to zero in the case of a single inclusion. In the presence of a second colloidal inclusion, the spatial isotropy around the first one is broken, giving rise to finite net forces of equal magnitudes (to within the simulation error bars) acting on the inclusions along their center-to-center axis (or the $x$-axis as shown in Fig. \ref{fig:schem_colloids}). In the case of nonactive (nearly hard) particles, this effective {\em two-body interaction force}, which also represents the potential of mean force between the inclusions \cite{chandler:book1987}, becomes sizable only at sufficiently small separation distances and, as noted previously, originates from particle depletion effects. The interaction profile in an active bath exhibits an alternating behavior, whose features and origin have been investigated in our study. Other interesting problems that can be explored in the present context in the future include the role of particle-wall and inter-particle hydrodynamic couplings \cite{BaskaranPNAS2009,StarkPRL2014,wallattraction,ardekani,stark-wall,Hernandez-Ortiz1,elgeti2016} as well as particle chirality and the effects of active noise in the dynamics of the self-propelled particles \cite{Romanczuk:PRL2011,Grosmann:NJPhys2013,Romanczuk:EPJ2015}.
\section{Acknowledgements} \label{sec:Acknowledgments} Computational resources were provided by the High Performance Computing center of the Institute for Research in Fundamental Sciences (IPM). A.N. acknowledges partial support from the Associateship Scheme of The Abdus Salam International Centre for Theoretical Physics (Trieste, Italy).
\section{Introduction} \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{figures/Diagnosis_Process.pdf} \caption{Illustration of the multi-task learning setting for WSI analysis.} \label{Diagnosis_Process} \vspace{-0.5cm} \end{figure} Histopathology analysis is the gold standard for cancer diagnosis and prognosis. Experienced pathologists can provide accurate analyses of biopsy specimens based on whole slide images (WSIs), \emph{i.e.}, high-resolution digitizations of the entire histology slide~\cite{khened2021generalized, pataki2022huncrc}. However, analyzing WSIs is time-consuming and laborious due to their massive size and the complex colors and patterns of different tissue structures. To elevate the precision and speed of the examination, extensive research tools have been developed to automate computational WSI inspection~\cite{wang2019weakly, wang2019pathology, coudray2018classification}. Due to the powerful expressivity of neural networks, many deep learning-based methods have been proposed for WSI analysis recently. However, WSIs usually have a huge size (\emph{e.g.}, $150,000\times 150,000$ pixels), and it is expensive to obtain detailed pixel-level annotations. To overcome such challenges, multiple instance learning (MIL)~\cite{maron1998framework} has become a promising direction for analyzing WSIs from slide-level annotations. Specifically, MIL-based approaches first extract the feature embeddings of image tiles (\emph{i.e.},~patches) with a Convolutional Network (CNN)~\cite{he2016deep, riasatian2021fine} or a Vision Transformer (ViT)~\cite{chen2022scaling}. Then, the feature embeddings are fed into an aggregation network to produce the slide-level predictions.
Various network architectures have been employed to aggregate the information, including Graph Neural Networks (GNN)~\cite{hou2022h2, guan2022node} and Transformer networks~\cite{chen2022scaling, wang2022lymph}, among others. Recently, the Graph-Transformer architecture has also been introduced into WSI analysis~\cite{zheng2021deep}, owing to its nature of extracting both low-level local features and high-level global information from graphs (\emph{i.e.}, WSIs). However, most of the above works were limited to the single-task setting, while pathologists often produce more than one diagnosis result for a particular WSI (per patient), as shown in Figure~\ref{Diagnosis_Process}. Besides, it is believed that the multi-task learning paradigm can improve learning efficiency and prediction accuracy by exploiting commonalities and differences across tasks. Although some works~\cite{yang2020detecting, vuong2020multi, murthy2017center} have discussed multi-task learning in WSI analysis, they were designed for patch-level prediction tasks instead of slide-level ones. Therefore, they require patch-level annotations for training and can hardly be extended directly to weakly-supervised slide-level label prediction. To address the above issues, we present a multi-task Graph-Transformer framework (\emph{i.e.}, MulGT) to conduct multiple slide-level diagnosis tasks simultaneously. Our framework leverages the Graph-Transformer architecture from two aspects: (1) learning task-agnostic low-level representations with a shared GNN, and (2) learning task-specific high-level representations with independent Transformer branches.
Meanwhile, considering that different tasks in WSI analysis usually require different features and properties of the tissue, we design a novel Task-aware Knowledge Injection module in our framework to transfer the task-shared graph embedding into task-specific feature spaces via a cross-attention mechanism with a set of trainable task-specific latent tokens. Furthermore, to reduce the computation cost, a graph pooling layer is usually adopted between the GNN part and the Transformer part in the Graph-Transformer architecture. However, little attention has been paid to the relationship between the downstream tasks and the choice of graph pooling method. In this paper, we are the first to argue that, in order to boost the performance of the Graph-Transformer architecture, it is necessary to design a task-aware pooling method to meet the different requirements of different downstream tasks. Especially in multi-task learning settings, the graph pooling methods should vary across task branches if the nature of the tasks is different. Therefore, we elaborately design a novel Domain Knowledge-driven Graph Pooling module in our framework to improve both the accuracy and robustness of the different task branches by leveraging the different diagnosis patterns of multiple WSI analysis tasks. Our main contributions can be summarized as follows. \begin{itemize} \item We devise a novel multi-task Graph-Transformer for slide-level WSI analysis. Different from previous methods, our framework conducts multiple diagnosis tasks simultaneously, thus benefiting from learning both the commonalities and differences of multiple tasks. Extensive experiments with promising results on two public WSI datasets validate the effectiveness of our designed framework.
\item To learn task-specific features, we design a novel Task-aware Knowledge Injection module to transfer the task-shared features into task-specific feature spaces via a cross-attention mechanism with latent tokens that contain task-specific knowledge. \item To incorporate the prior knowledge of the different diagnosis patterns of different tasks, we elaborately design a novel Domain Knowledge-driven Graph Pooling module to represent the information of the whole graph more properly for different tasks, facilitating the prediction process and reducing the computation cost. \end{itemize} \begin{figure*}[ht] \centering \includegraphics[width=0.9\textwidth]{figures/GT_Framwork.pdf} \caption{Overview of the proposed MulGT framework. Patches are extracted from WSIs and abstracted as graph nodes. Following the multi-task learning paradigm, the GNN part serves as the task-shared layers to learn a task-agnostic low-level local representation, while our proposed Task-aware Knowledge Injection and Domain Knowledge-driven Graph Pooling modules, together with the Transformer stack, serve as the task-specific layers to learn accurate high-level global representations. } \label{GT_Framwork_Overview} \end{figure*} \section{Related Work} \subsubsection{Multiple Instance Learning for WSI.} Multiple instance learning (MIL) methods are widely used for WSI analysis and can be categorized into two paradigms: (1) instance-level methods and (2) embedding-level methods~\cite{amores2013multiple}. Generally, instance-level methods focus more on local information, while embedding-level methods emphasize global representation. Several recent works adopted attention mechanisms in MIL for instance aggregation in WSI analysis. In particular, attention-based approaches are able to identify the contribution of each instance during global aggregation, as in ABMIL~\cite{ilse2018attention}, DeepAttnMIL~\cite{yao2020whole}, and CLAM~\cite{lu2021data}.
Recently, Graph-based and Transformer-based methods have also been utilized in computational pathology, as WSI instances can be abstracted as nodes of a graph or tokens of a Transformer architecture. For example, H2Graph~\cite{hou2022h2} built a heterogeneous graph with different resolutions of WSI to learn a hierarchical representation, while HIPT~\cite{chen2022scaling} introduced a new ViT architecture to learn from the natural hierarchical image structure inherent in WSIs. However, most of the previous works were limited to the single-task setting for slide-level analysis. \subsubsection{Multi-task Learning.} Multi-task learning~\cite{caruana1997multitask} jointly optimizes a set of tasks with hard or soft parameter sharing. It is well known that learning multiple tasks simultaneously can offer several advantages, including improved data efficiency and reduced overfitting through the regularization among multiple tasks~\cite{DBLP:journals/corr/abs-2009-09796}. Some previous literature leveraged the relationship among multiple tasks in an explicit way. For example, ML-GCN~\cite{chen2019multi} built a directed graph over different object labels to facilitate multi-label image recognition, where each node is one particular object (\emph{i.e.}, task) and edges are object correlations. Meanwhile, some works~\cite{kendall2018multi, chen2018gradnorm} adopted adaptive weights for different tasks to balance the training process, while Liu~\textit{et al.}~\shortcite{liu2021conflict} introduced gradient-based methods to mitigate the negative transfer across tasks. Especially, RotoGrad~\cite{javaloy2021rotograd} used a set of rotation matrices to rotate the task-shared features into different feature spaces before the task-specific branches to avoid the gradient conflict among tasks.
Partially inspired by the idea that transferring task-shared features into different task-specific feature spaces may benefit model learning, we design a Task-aware Knowledge Injection module to differentiate the features of different task branches. \subsubsection{Combining Graph and Transformer.} Recently, the Transformer model has been introduced to deal with graph-structured data. According to the relative position of the GNN and Transformer layers, current works can be divided into three architectures~\cite{min2022transformer}: (1) building Transformer blocks on top of GNN blocks; (2) alternately stacking GNN and Transformer blocks~\shortcite{lin2021mesh}; and (3) parallelizing GNN and Transformer blocks~\shortcite{zhang2020graph}. Most works~\cite{rong2020self, mialon2021graphit} adopted the first architecture. In particular, GraphTrans~\cite{wu2021representing} applied a permutation-invariant Transformer module after a standard GNN module to learn high-level and long-range relationships. The Graph-Transformer architecture has also been introduced to handle WSI analysis tasks~\cite{zheng2021deep}. However, the existing studies are limited to the single-task setting and pay no attention to leveraging the domain knowledge of pathologists for better model design. \begin{comment} \subsubsection{Graph Pooling.} Graph pooling is an important component of GNN. Except using mean or max pooling to generate the whole graph embedding~\cite{duvenaud2015convolutional, dai2016discriminative}, there are two major approaches in pooling a graph: (1) Node drop methods in a random way or by the rank score of each node. For instance, SAGPool~\cite{lee2019self} calculated a self-attention score for top-k node selection. (2) Node clustering method by clustering similar nodes into a single cluster in a hard or soft manner with a cluster assignment matrix.
For example, MinCutPool~\cite{bianchi2020spectral} calculated the assignment matrix from the graph embedding with a MLP layer. Some recent works~\cite{ranjan2020asap} leveraged the above two methods in a sequence manner. In this paper, we design our graph pooling method based on the above two approaches and the domain knowledge from pathologists to achieve task-aware pooling for tumor staging and tumor typing tasks. \end{comment} \section{Methodology} In this section, we elaborate on our multi-task framework for WSI analysis with the specially designed Task-aware Knowledge Injection and Domain Knowledge-driven Graph Pooling modules. Figure~\ref{GT_Framwork_Overview} shows the overview of the proposed framework. Given a WSI $X$, our framework predicts the labels of two tasks simultaneously: the slide-level tumor typing label $\hat{Y}_{type}$ and the staging label $\hat{Y}_{stage}$. Specifically, we first construct a graph $\mathcal{G}$ and process it with the task-shared Graph Convolution (GC) layers. After that, the framework is divided into two task-specific branches by the corresponding Task-aware Knowledge Injection modules. Further, the transferred task-specific graph representation is fed into the corresponding Domain Knowledge-driven Graph Pooling module in each branch. Finally, a sequence of task-specific Transformer layers followed by MLP heads is employed to predict the slide-level labels of the multiple tasks from the pooled task-specific representations. \subsection{Graph-based Shared Feature Learning} As illustrated in Figure~\ref{GT_Framwork_Overview}, given a WSI $X$ under $20\times$ magnification, we first apply the sliding window strategy to crop $X$ into numerous image tiles without overlap. Then we construct a graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$, where $\mathcal{V}$ represents the extracted feature embeddings of the preserved image tiles.
The edge set $\mathcal{E}$ represents the bordering relationships between image tiles in an 8-adjacent manner, as shown in Figure~\ref{GT_Framwork_Overview}. The generated graph $\mathcal{G}$ is thus able to depict the feature and spatial relations of the WSI and is suitable for further analysis. Following the principle of a Graph-Transformer architecture, our framework first uses a Graph Convolution (GC) layer to extract the task-shared low-level representation of $\mathcal{G}$. As illustrated in previous works~\cite{wu2021representing, NEURIPS2020_94aef384}, the GNN part in the Graph-Transformer architecture learns the representation at graph nodes from neighborhood features. This neighborhood aggregation is helpful for learning local and short-range correlations of graph nodes and is thus suitable for the shared layers of multiple different tasks. The message propagation and aggregation of the graph are defined as \begin{equation} H_{{l}+1}=R e L U\left(\hat{A} H_{{l}} W_{{l}}\right), \quad l=1,2, \ldots, L. \end{equation} \begin{equation} \hat{A}=\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}, \end{equation} where $\tilde{A}=A+I_N$ is the adjacency matrix $A$ of graph $\mathcal{G}$ with added self-connections, and $I_N$ is the identity matrix. $\tilde{D}_{i i}=\sum_{j} \tilde{A}_{i j}$ is a diagonal matrix, and $W_{{l}}\in \mathbb{R}^{d\times d}$ is a layer-specific trainable weight matrix. $H_{{l}}\in \mathbb{R}^{|\mathcal{V}|\times d}$ is the input of the $l_{th}$ GC layer, where $|\mathcal{V}|$ is the number of nodes and $d$ is the dimension of each node; $H_{1}$ is initialized with the node features of $\mathcal{G}$.
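As a concrete illustration, the propagation rule above can be sketched in a few lines of NumPy (a minimal sketch with toy sizes; the graph, feature dimension, and weights are placeholders, not the actual model configuration):

```python
import numpy as np

def gc_layer(H, A, W):
    """One graph-convolution step: H_{l+1} = ReLU(A_hat @ H_l @ W_l),
    with A_hat the symmetrically normalized adjacency (self-loops added)."""
    A_tilde = A + np.eye(A.shape[0])           # add self-connections
    d = A_tilde.sum(axis=1)                    # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D_tilde^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)      # ReLU

# toy 4-node path graph, d = 3 node features
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
H = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 3))
H1 = gc_layer(H, A, W)   # next-layer node features, shape (4, 3)
```

In the actual framework, $A$ would be the sparse 8-adjacency matrix over image tiles and $H$ the tile feature embeddings; stacking several such calls gives $H_L$.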
\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/Task_Dictionary.pdf} \caption{Illustration of Task-aware Knowledge Injection.} \label{Task Knowledge Injection} \end{figure} \subsection{Task-aware Knowledge Injection} For more accurate representation learning across different tasks, we propose a Task-aware Knowledge Injection module to store the task-specific knowledge in different task branches and thus transfer the task-shared features from the shared GCN into task-specific feature spaces. The developed module calculates the correlation between the task-shared features and the task-specific knowledge based on the multi-head attention mechanism~\cite{vaswani2017attention}: \begin{equation} \operatorname{MH}(Q, K, V)=\left[O_{1}, \ldots, O_{h}\right] W^{O}, \end{equation} \begin{equation} O_{i}=\operatorname{Att}\left(Q W_{i}^{Q}, K W_{i}^{K}, V W_{i}^{V}\right), \end{equation} where $\operatorname{Att}(\boldsymbol{Q}, \boldsymbol{K}, \boldsymbol{V})=\sigma\left(\boldsymbol{Q} \boldsymbol{K}^{T}\right) \boldsymbol{V}$, $h$ is the number of parallel attention layers, and $\sigma$ is an activation function. To transfer the task-shared features into task-specific spaces, we design a novel Task-aware Knowledge Injection multi-head cross attention (TKIMH) block, as shown in Figure~\ref{Task Knowledge Injection}. Specifically, we take the task-shared hidden representation $H_{\mathrm{L}}$ as the query ($\boldsymbol{Q}$), and the task-specific trainable latent tokens $T_j$ as the key ($\boldsymbol{K}$) and value ($\boldsymbol{V}$) for the cross multi-head attention calculation. Each task branch has an independent set of trainable latent tokens, which is able to store the task-aware knowledge learned from the dataset during the training process.
The TKIMH for task $j$ can be denoted as: \begin{equation} \operatorname{TKIMH}(H_{\mathrm{L}}, T_j)=\left[O_{1j}, \ldots, O_{hj}\right] W^{O}_j, \end{equation} \begin{equation} O_{ij}=\operatorname{Att}\left(H_{\mathrm{L}} W_{ij}^{Q}, T_j W_{ij}^{K}, T_j W_{ij}^{V}\right), \end{equation} where $H_{\mathrm{L}}\in \mathbb{R}^{|\mathcal{V}|\times d}$ is the task-shared hidden representation, $T_j\in \mathbb{R}^{m \times d}$ is the learnable latent tokens containing the task-specific knowledge for task $j$, $m$ is the number of the latent tokens, $W_{ij}^{Q}, W_{ij}^{K}, W_{ij}^{V}, W_j^{O}\in \mathbb{R}^{d\times d}$ are parameter matrices for linear projection operations for task $j$. Using the ingredients above, the Task-aware Knowledge Injection module for task $j$ is defined as follows: \begin{equation} Z_j=\mathrm{LN}(H_{\mathrm{L}}+\operatorname{TKIMH}(H_{\mathrm{L}}, T_j)), \end{equation} \begin{equation} \hat{H}_j=\mathrm{LN}(Z_j+\mathrm{rFF}(Z_j)), \end{equation} where $\mathrm{rFF}$ is a row-wise feedforward layer that processes each individual row independently and identically, LN is a layer normalization~\cite{ba2016layer}, and $\hat{H}_j$ is the transferred task-specific graph features for task $j$. Note that the above module can be easily adapted to any Graph-Transformer architecture. \subsection{Domain Knowledge-driven Graph Pooling} Domain Knowledge-driven Graph Pooling is developed by task-aware pooling methods to meet the requirements of different downstream tasks. As shown in Figure~\ref{Domain_Driven_Pooling}, we adopt two different graph pooling methods (\emph{i.e.} node drop method and node clustering method) for two tasks (\emph{i.e.} tumor staging and tumor typing) with different diagnosis patterns. 
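A single-head sketch of the cross-attention underlying TKIMH may help fix ideas (a hedged NumPy illustration; the activation $\sigma$ is left unspecified in the text, so a scaled row-softmax is assumed here, and all sizes are toy values):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def tki_cross_attention(H, T, Wq, Wk, Wv, Wo):
    """Single-head sketch of Task-aware Knowledge Injection: node
    features H (|V| x d) act as queries over the trainable task-specific
    latent tokens T (m x d), which supply keys and values."""
    Q, K, V = H @ Wq, T @ Wk, T @ Wv
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[1]))  # |V| x m attention weights
    return (attn @ V) @ Wo                          # injected features, |V| x d

# toy sizes: 6 nodes, 4 latent tokens, feature dim 8
rng = np.random.default_rng(1)
n, m, d = 6, 4, 8
H = rng.standard_normal((n, d))   # task-shared node features H_L
T = rng.standard_normal((m, d))   # task-specific latent tokens T_j
Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) * 0.1 for _ in range(4))
Z = layer_norm(H + tki_cross_attention(H, T, Wq, Wk, Wv, Wo))  # residual + LN
```

Applying the row-wise feedforward with a second residual and layer norm to `Z` would complete the module; running $h$ such heads in parallel and concatenating them recovers the multi-head form.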
\begin{figure}[t] \centering \includegraphics[width=0.91\linewidth]{figures/Domain_Driven_Pooling.pdf} \caption{Overview of Domain Knowledge-driven Pooling.} \label{Domain_Driven_Pooling} \end{figure} \paragraph{Node Drop Pooling for Typing.} During the clinical diagnosis process, the pathologists first examine the WSI to locate the tumor region and then determine the tumor type. Our node drop pooling method is designed to mimic this clinical process (as shown at the top of Figure~\ref{Domain_Driven_Pooling}). The model decision depends heavily on the discriminative nodes (\emph{i.e.}, tumor subtype A/B nodes) rather than on the ratio or shape of the different types of nodes in the whole graph. Therefore, the node drop method is sufficient for the tumor typing task as long as at least one of the tumor nodes is preserved. As previous work~\cite{papp2021dropgnn} has pointed out, random dropping can increase the expressiveness of GNNs; we therefore implement random and independent node dropping in each training run to generate the task-aware pooled representation $\hat{H}_{type}^{pool}$ for the tumor typing task. Compared with rank-based dropping methods, our scheme makes the task more challenging and serves as a data augmentation method, which makes the corresponding branch more robust and more effective at detecting discriminative image patches. \paragraph{Node Clustering Pooling for Staging.} Several elements influence the tumor stage diagnosis of pathologists, including abnormal cells, the presence and size of tumor regions, and metastatic tumors. In general, the ratio and shape of the tumor tissue nodes in the whole graph are essential for slide-level tumor stage diagnosis, as shown at the bottom of Figure~\ref{Domain_Driven_Pooling}. Node clustering pooling methods are therefore more suitable for the staging task, since they preserve whole-graph information that node drop methods may lose during dropping. 
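The random, independent node dropping used in the typing branch is simple enough to sketch directly; the function name and the kept-node budget below are illustrative assumptions:

```python
import numpy as np

def random_node_drop(H, keep, rng):
    """Randomly keep `keep` nodes of the patch graph, sampled independently
    in each training run; the order of the surviving nodes is preserved."""
    idx = rng.choice(H.shape[0], size=min(keep, H.shape[0]), replace=False)
    return H[np.sort(idx)]

rng = np.random.default_rng(1)
H_type = rng.normal(size=(500, 64))      # 500 patch nodes, 64-dim features
pooled = random_node_drop(H_type, keep=100, rng=rng)
print(pooled.shape)
```

Resampling the kept subset on every run is what gives the augmentation effect described above: the typing branch cannot rely on any single node surviving.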
Inspired by GMPool~\shortcite{baek2021accurate}, we design GCMinCut, an improved version of MinCut Pooling~\cite{bianchi2020spectral}, in which we replace the MLP used during pooling with an additional GC layer to incorporate the neighborhood information of the graph. The GCMinCut pooling can be denoted as follows: \begin{equation} \mathbf{S}=\operatorname{ReLU}\left(\hat{A} \hat{H}_{stage} W_{\mathrm{pool}}\right), \hat{H}_{stage}^{\text {pool }}=\mathbf{S}^{T} \hat{H}_{stage}, \end{equation} where $\hat{H}_{stage}\in \mathbb{R}^{|\mathcal{V}|\times d}$ is the task-specific representation transferred by the task-aware knowledge injection module in the tumor staging branch, $W_{\mathrm{pool}}\in \mathbb{R}^{d\times p}$ is a trainable weight matrix, $\mathbf{S} \in \mathbb{R}^{|\mathcal{V}| \times p}$ is the assignment matrix for soft node clustering, $p$ is the number of node clusters (\emph{i.e.}, the number of nodes after pooling), and $\hat{H}_{stage}^{pool}\in \mathbb{R}^{p\times d}$ is the task-aware pooled representation for the tumor staging task. \subsection{Technical Details and Training Procedure} After the Domain Knowledge-driven Graph Pooling module, the task-aware pooled representations are fed into a standard Transformer layer stack with no additive positional embeddings, since the GNN has already encoded the structural information into the node embeddings. After that, we apply a task-specific MLP head for each branch to predict the task labels. The label prediction $\hat{Y}_{i}$ of task $i$ can be denoted as: \begin{equation} \hat{X}_{i}= \operatorname{Transformer}\left( [CLS; \hat{H}_{i}^{\text {pool }}] \right), \hat{Y}_{i}=\operatorname{MLP}\left(\hat{X}_{i}^{(0)} \right), \end{equation} where $CLS\in \mathbb{R}^{1\times d}$ is the class token of the Transformer. To train the network, we first employ the cross-entropy loss for both tasks. 
Taking the type prediction task as an example, the objective is: \begin{equation} \mathcal{L}_{type}=-\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{C_{type}} Y_{type}^{(ij)} \log \left(\hat{Y}_{type}^{(ij)}\right), \end{equation} where $N$ is the total number of samples, $C_{type}$ is the number of categories in the type prediction task, and $Y$ is the one-hot label. Then the unsupervised MinCut pooling loss~\cite{bianchi2020spectral} is adopted for extra regularization: \begin{equation} \mathcal{L}_{mincut}=\underbrace{-\frac{\operatorname{Tr}\left(\mathbf{S}^{T} \tilde{\mathbf{A}} \mathbf{S}\right)}{\operatorname{Tr}\left(\mathbf{S}^{T} \tilde{\mathbf{D}} \mathbf{S}\right)}}_{\mathcal{L}_{c}}+\underbrace{\left\|\frac{\mathbf{S}^{T} \mathbf{S}}{\left\|\mathbf{S}^{T} \mathbf{S}\right\|_{F}}-\frac{\mathbf{I}_{p}}{\sqrt{p}}\right\|_{F}}_{\mathcal{L}_{o}}, \end{equation} where $\|\cdot\|_{F}$ denotes the Frobenius norm. The cut loss term $\mathcal{L}_c$ encourages strongly connected nodes to be clustered together, and the orthogonality loss term $\mathcal{L}_o$ encourages the cluster assignments to be of similar size. Finally, the total loss $\mathcal{L}_{total}$ is the weighted sum of the above losses: \begin{equation} \mathcal{L}_{total}=w_{t}\mathcal{L}_{type}+w_{s}\mathcal{L}_{stage}+w_{m}\mathcal{L}_{mincut}. \end{equation} \section{Experiments} \begin{table*}[ht] \small \centering \caption{Comparison with other methods on KICA dataset. Top results are shown in \textbf{bold}. 
} \resizebox{0.85\textwidth}{!}{% \begin{tabular}{lccc|ccc} \toprule[1pt] \multirow{2}{*}{{Method}} & \multicolumn{3}{c}{{Typing}} & \multicolumn{3}{c}{{Staging}} \\ \cmidrule(r){2-4}\cmidrule(r){5-7} & {AUC} & {ACC} & {F1} & {AUC} & {ACC} & {F1} \\ \bottomrule[1pt] ABMIL~\shortcite{ilse2018attention} & $95.42\pm2.02$ & $89.90\pm2.77$ & $89.82\pm2.75$ & $75.35\pm3.74$ & $70.51\pm1.88$ & $58.54\pm2.55$ \\ Gated-ABMIL~\shortcite{ilse2018attention} & $94.84\pm1.60$ & $88.63\pm2.98$ & $88.61\pm2.87$ & $73.65\pm3.25$ & $69.69\pm2.33$ & $58.65\pm2.78$ \\ DeepAttnMIL~\shortcite{yao2020whole} & $96.87\pm1.44$ & $91.37\pm2.53$ & $91.37\pm2.49$ & $76.53\pm2.84$ & $70.32\pm2.19$ & $58.44\pm2.86$ \\ CLAM-MIL~\shortcite{lu2021data} & $84.93\pm3.15$ & $79.46\pm2.91$ & $78.57\pm3.27$ & $70.97\pm3.20$ & $70.32\pm2.20$ & $58.64\pm3.17$ \\ CLAM-SB~\shortcite{lu2021data} & $95.69\pm2.31$ & $90.62\pm2.93$ & $90.60\pm2.96$ & $74.94\pm4.22$ & $70.39\pm2.22$ & $58.28\pm2.78$ \\ DS-MIL~\shortcite{li2021dual} & $93.97\pm2.52$ & $87.08\pm3.04$ & $86.90\pm3.12$ & $73.21\pm4.35$ & $68.94\pm2.37$ & $59.31\pm2.31$ \\ GT-MIL~\shortcite{zheng2021deep} & $97.20\pm1.19$ & $92.31\pm2.52$ & $92.33\pm2.46$ & $78.63\pm3.56$ & $71.20\pm3.60$ & $68.38\pm3.37$ \\ Trans-MIL~\shortcite{wang2022lymph} & $95.56\pm2.11$ & $89.14\pm3.30$ & $89.04\pm3.31$ & $73.34\pm3.15$ & $68.56\pm3.46$ & $57.70\pm2.34$ \\ \bottomrule[1pt] \textbf{Ours} & $\textbf{98.44}\pm\textbf{0.67}$ & $\textbf{93.89}\pm\textbf{1.60}$ & $\textbf{93.89}\pm\textbf{1.59}$ & $\textbf{80.22}\pm\textbf{1.94}$ & $\textbf{74.98}\pm\textbf{3.08}$ & $\textbf{72.55}\pm\textbf{2.48}$ \\ \bottomrule[1pt] \end{tabular} } \label{Comparison with Single-Task SOTAs on KICA Dataset} \end{table*} \begin{table*}[ht] \small \centering \caption{Comparison with other methods on ESCA dataset. Top results are shown in \textbf{bold}. 
} \resizebox{0.85\textwidth}{!}{% \begin{tabular}{lccc|ccc} \toprule[1pt] \multirow{2}{*}{{Method}} & \multicolumn{3}{c}{{Typing}} & \multicolumn{3}{c}{{Staging}} \\ \cmidrule(r){2-4}\cmidrule(r){5-7} & {AUC} & {ACC} & {F1} & {AUC} & {ACC} & {F1} \\ \bottomrule[1pt] ABMIL~\shortcite{ilse2018attention} & $92.51\pm3.39$ & $86.47\pm4.16$ & $86.33\pm4.23$ & $53.01\pm3.95$ & $54.34\pm3.02$ & $51.36\pm3.19$ \\ Gated-ABMIL~\shortcite{ilse2018attention} & $95.17\pm2.47$ & $88.54\pm3.05$ & $88.39\pm3.12$ & $50.64\pm4.12$ & $53.38\pm4.22$ & $53.54\pm5.02$ \\ DeepAttnMIL~\shortcite{yao2020whole} & $96.12\pm1.84$ & $90.64\pm2.93$ & $90.50\pm3.07$ & $61.87\pm3.32$ & $61.48\pm4.28$ & $50.59\pm2.79$ \\ CLAM-MIL~\shortcite{lu2021data} & $77.89\pm5.90$ & $73.98\pm5.25$ & $73.55\pm5.57$ & $61.23\pm4.15$ & $58.38\pm4.11$ & $50.38\pm4.14$ \\ CLAM-SB~\shortcite{lu2021data} & $95.85\pm1.78$ & $90.66\pm3.02$ & $90.57\pm3.10$ & $63.01\pm3.05$ & $59.96\pm3.46$ & $51.75\pm3.79$ \\ DS-MIL~\shortcite{li2021dual} & $87.80\pm3.97$ & $81.63\pm4.81$ & $81.05\pm5.16$ & $61.75\pm2.20$ & $59.39\pm3.30$ & $54.36\pm5.27$ \\ GT-MIL~\shortcite{zheng2021deep} & $95.93\pm1.58$ & $89.87\pm3.64$ & $89.83\pm3.60$ & $69.23\pm3.64$ & $65.20\pm3.72$ & $62.64\pm3.22$ \\ Trans-MIL~\shortcite{wang2022lymph} & $94.24\pm2.33$ & $86.59\pm3.17$ & $86.48\pm3.14$ & $60.56\pm4.72$ & $61.47\pm3.87$ & $49.73\pm3.32$ \\ \bottomrule[1pt] \textbf{Ours} & $\textbf{97.49}\pm\textbf{1.46}$ & $\textbf{92.81}\pm\textbf{2.35}$ & $\textbf{92.74}\pm\textbf{2.41}$ & $\textbf{71.48}\pm\textbf{3.42}$ & $\textbf{66.63}\pm\textbf{3.14}$ & $\textbf{65.73}\pm\textbf{2.83}$ \\ \bottomrule[1pt] \end{tabular} } \label{Comparison with Single-Task SOTAs on ESCA Dataset} \end{table*} \begin{table*}[ht] \small \centering \caption{Ablation study on KICA dataset. \textbf{DomainPool} and \textbf{TK-Injection} denote the Domain Knowledge-driven Graph Pooling module and the Task-aware Knowledge Injection module, respectively. 
\textbf{Drop-based} and \textbf{Cluster-based} denote replacing the \textbf{DomainPool} with node drop pooling methods or node clustering pooling methods, respectively. \textbf{Linear} denotes replacing the cross-attention mechanism in the Task-aware Knowledge Injection module with task-specific linear projections.} \resizebox{0.9\textwidth}{!}{% \begin{tabular}{ccccc|ccc} \toprule[1pt] \multirow{2}{*}{DomainPool} &\multirow{2}{*}{TK-Injection} & \multicolumn{3}{c}{Typing} & \multicolumn{3}{c}{Staging} \\ \cmidrule(r){3-5}\cmidrule(r){6-8} & & AUC & ACC & F1 & AUC & ACC & F1 \\ \bottomrule[1pt] Drop-based & & $95.72\pm1.64$ & $90.11\pm2.39$ & $90.01\pm2.35$ & $77.07\pm2.33$ & $71.78\pm2.78$ & $69.93\pm3.70$ \\ Cluster-based & & $97.40\pm0.99$ & $91.97\pm2.27$ & $92.02\pm2.13$ & $80.12\pm3.51$ & $73.45\pm2.82$ & $71.47\pm3.68$ \\ \checkmark & & $97.90\pm1.32$ & $93.50\pm1.91$ & $93.53\pm1.88$ & $\textbf{80.67}\pm\textbf{3.51}$ & $74.07\pm3.53$ & $70.78\pm3.33$ \\ \checkmark & Linear & $98.10\pm0.55$ & $93.59\pm1.56$ & $93.55\pm1.61$ & $79.86\pm2.20$ & $73.57\pm2.94$ & $71.06\pm3.13$ \\ \checkmark & \checkmark& $\textbf{98.44}\pm\textbf{0.67}$ & $\textbf{93.89}\pm\textbf{1.60}$ & $\textbf{93.89}\pm\textbf{1.59}$ & $80.22\pm1.94$ & $\textbf{74.98}\pm\textbf{3.08}$ & $\textbf{72.55}\pm\textbf{2.48}$ \\ \bottomrule[1pt] \end{tabular} } \label{Ablation Study on KICA Dataset} \end{table*} \if 0 \begin{table}[ht] \small \centering \caption{Comparison of different numbers of latent token.} \begin{tabular}{cccc|ccc} \toprule[1pt] \multirow{2}{*}{Number} & \multicolumn{3}{c}{Typing} & \multicolumn{3}{c}{Staging} \\ \cmidrule(r){2-4}\cmidrule(r){5-7} & AUC & ACC & F1 & AUC & ACC & F1\\ \bottomrule[1pt] 100& $98.18$ & $93.45$ & $93.45$ & $79.61$ & $73.84$ & $70.81$\\ \textbf{150}& $\textbf{98.44}$ & $\textbf{93.89}$ & $\textbf{93.89}$ & $\textbf{80.22}$ & $\textbf{74.98}$ & $\textbf{72.55}$\\ 200& $98.03$ & $93.33$ & $93.35$ & $80.18$ & $73.63$ & $71.59$\\ 
\bottomrule[1pt] \end{tabular} \label{Comparison of different numbers of task-knowledge latent tokens} \end{table} \fi \begin{table}[ht] \small \centering \caption{Comparison of the single-task and multi-task paradigms on MulGT. \textbf{Single} denotes the single-task paradigm, while \textbf{Multi} denotes the multi-task paradigm.} \begin{tabular}{cccc|ccc} \toprule[1pt] \multirow{2}{*}{Paradigm} & \multicolumn{3}{c}{Typing} & \multicolumn{3}{c}{Staging} \\ \cmidrule(r){2-4}\cmidrule(r){5-7} & AUC & ACC &F1 & AUC & ACC & F1\\ \bottomrule[1pt] Single& $98.41$ & $93.07$ & $93.08$ & $80.14$ & $73.67$ & $71.45$\\ Multi& $\textbf{98.44}$ & $\textbf{93.89}$ & $\textbf{93.89}$ & $\textbf{80.22}$ & $\textbf{74.98}$ & $\textbf{72.55}$\\ \bottomrule[1pt] \end{tabular} \label{Effectiveness of Multi-task Learning Paradigm} \end{table} \begin{table}[ht] \small \centering \caption{Comparison of different task-knowledge latent token schemes in MulGT. \textbf{Shared} means using a shared set of latent tokens, while \textbf{Specific} means using independent sets of latent tokens in different task branches.} \begin{tabular}{cccc|ccc} \toprule[1pt] \multirow{2}{*}{Scheme} & \multicolumn{3}{c}{Typing} & \multicolumn{3}{c}{Staging} \\ \cmidrule(r){2-4}\cmidrule(r){5-7} & AUC & ACC & F1 & AUC & ACC & F1\\ \bottomrule[1pt] Shared& $98.23$ & $93.30$ & $92.32$ & $78.54$ & $72.71$ & $69.28$\\ \textbf{Specific}& $\textbf{98.44}$ & $\textbf{93.89}$ & $\textbf{93.89}$ & $\textbf{80.22}$ & $\textbf{74.98}$ & $\textbf{72.55}$\\ \bottomrule[1pt] \end{tabular} \label{Comparison of Separated and Shared Latent Tokens in Task Knowledge Injection Module} \end{table} \subsection{Datasets} We evaluate the proposed MulGT framework on two public datasets (KICA and ESCA) from The Cancer Genome Atlas (TCGA) repository. For the tumor staging task, patients with TNM stage I/II are categorized as early stage, while patients with TNM stage III/IV are categorized as late stage. 
We excluded patients with missing diagnostic WSIs, tumor type diagnoses, or TNM labels. The details of the above two datasets are as follows: \begin{itemize} \item \textbf{KICA} is the kidney carcinoma project containing 371 cases, with 279 early-stage cases and 92 late-stage cases. For the tumor typing task, there are 259 cases diagnosed as kidney renal papillary cell carcinoma and 112 cases diagnosed as kidney chromophobe. \item \textbf{ESCA} is the esophageal carcinoma project containing 161 cases, with 96 early-stage cases and 65 late-stage cases. For the tumor typing task, there are 94 cases diagnosed as adenomas and adenocarcinomas and 67 cases diagnosed as squamous cell carcinoma. \end{itemize} \subsection{Experimental Setup} The proposed framework was implemented using the PyTorch~\cite{paszke2019pytorch} and PyTorch Geometric~\cite{fey2019fast} frameworks. All experiments were conducted on a workstation with four NVIDIA RTX 3090 GPUs. For a fair comparison, the proposed framework and the other SOTA methods were all tested using non-overlapping $512 \times 512$ image tiles cropped under $20 \times$ magnification from the WSIs; we filtered out image tiles containing less than 85\% tissue region. Besides, KimiaNet~\cite{riasatian2020kimianet} served as the feature extractor for all methods, converting each $512 \times 512$ image tile into 1024-dimensional features for graph initialization. We set the number of nodes after graph pooling to 100, following the setting in GT-MIL~\shortcite{zheng2021deep}, and select the number of latent tokens in the Task-aware Knowledge Injection module as $150$ via hyper-parameter search. All methods are trained with a batch size of 8 for 40 epochs with the Adam optimizer. For evaluation, the area under the receiver operating characteristic curve (AUC), accuracy (ACC), and F1-score were adopted. All approaches were evaluated with five-fold cross-validation over three different runs (initializations). 
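For illustration, the training objective (two cross-entropy terms plus the MinCut regularizer) can be sketched in NumPy as below. The toy graph, the softmax cluster assignment, and the loss weight values are illustrative assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(pred, onehot):
    # pred: (N, C) predicted probabilities; onehot: (N, C) one-hot labels
    return -np.mean(np.sum(onehot * np.log(pred + 1e-12), axis=1))

def mincut_loss(S, A):
    """Cut term L_c plus orthogonality term L_o of MinCut pooling."""
    D = np.diag(A.sum(axis=1))                    # degree matrix
    L_c = -np.trace(S.T @ A @ S) / np.trace(S.T @ D @ S)
    StS = S.T @ S
    p = S.shape[1]
    L_o = np.linalg.norm(StS / np.linalg.norm(StS) - np.eye(p) / np.sqrt(p))
    return L_c + L_o

rng = np.random.default_rng(0)
A = (rng.random((20, 20)) < 0.3).astype(float)
A = np.maximum(A, A.T)                            # symmetric toy adjacency matrix
S = softmax(rng.normal(size=(20, 4)))             # soft assignment: 20 nodes, 4 clusters
Y = np.eye(2)[rng.integers(0, 2, 8)]              # 8 slides, 2 classes per task
pred = softmax(rng.normal(size=(8, 2)))           # dummy predictions for one task
w_t, w_s, w_m = 1.0, 1.0, 0.1                     # loss weights (assumed values)
total = w_t * cross_entropy(pred, Y) + w_s * cross_entropy(pred, Y) \
        + w_m * mincut_loss(S, A)
print(float(total))
```

Note that `np.linalg.norm` on a matrix defaults to the Frobenius norm, matching $\|\cdot\|_F$ in the regularizer.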
\subsection{Experimental Results} \subsubsection{Comparison with State-of-the-art Methods.} We compare our framework with eight single-task state-of-the-art (SOTA) methods for WSI analysis, including: (1) ABMIL~\shortcite{ilse2018attention}, (2) Gated-ABMIL~\shortcite{ilse2018attention}, (3) CLAM-MIL~\shortcite{lu2021data}, (4) CLAM-SB~\shortcite{lu2021data}, (5) DeepAttnMIL~\shortcite{yao2020whole}, (6) DS-MIL~\shortcite{li2021dual}, (7) GT-MIL~\shortcite{zheng2021deep}, and (8) Trans-MIL~\shortcite{wang2022lymph}. DS-MIL~\shortcite{li2021dual} was introduced with a pyramidal fusion mechanism for multi-scale WSI features; we only test its performance under the single-scale setting for a fair comparison. The results for the KICA and ESCA datasets are summarized in Table~\ref{Comparison with Single-Task SOTAs on KICA Dataset} and Table~\ref{Comparison with Single-Task SOTAs on ESCA Dataset}, respectively. Overall, across all tasks and datasets, our framework consistently achieves the highest performance on all evaluation metrics. GT-MIL~\shortcite{zheng2021deep} performs best among the previous SOTAs, which demonstrates the representational power of the Graph-Transformer architecture in WSI analysis. However, compared with our method, GT-MIL~\shortcite{zheng2021deep} only builds a Graph-Transformer network in the single-task setting and only adopts MinCut pooling~\cite{bianchi2020spectral}. In comparison with GT-MIL~\shortcite{zheng2021deep}, for instance, our framework achieves a performance increase of $\textbf{1.24\%}$ in AUC, $\textbf{1.58\%}$ in ACC, and $\textbf{1.56\%}$ in F1-score for the tumor typing task, and $\textbf{1.59\%}$ in AUC, $\textbf{3.78\%}$ in ACC, and $\textbf{4.17\%}$ in F1-score for the tumor staging task on the KICA dataset, which validates the effectiveness of the designs in our framework. More importantly, we observe a clear improvement in the tumor staging task, which is the more challenging of the two tasks. 
This is probably due to the more general and robust task-shared representations learned under the multi-task learning paradigm. \subsubsection{Ablation Study.} We conducted an ablation study on the KICA dataset to demonstrate the effectiveness of each proposed component. We first compare our Domain Knowledge-driven Graph Pooling module with node drop based methods as well as node clustering based methods. We test multiple node drop based methods (SortPool~\shortcite{zhang2018end}, TopKPool~\shortcite{gao2019graph}, and SAGPool~\shortcite{lee2019self}) and node clustering based methods (DiffPool~\shortcite{ying2018hierarchical}, MinCutPool~\shortcite{bianchi2020spectral}, and GMPool~\shortcite{baek2021accurate}), and report the best performance in each of the two groups. As observed from the first three rows in Table~\ref{Ablation Study on KICA Dataset}, a clear improvement can be seen in all evaluation metrics except F1 in tumor staging, which demonstrates the effectiveness of exploiting the domain knowledge of different tasks during the graph pooling process. The effectiveness of the proposed TK-Injection module is shown by comparing against baselines with no task-specific transfer and with simple task-specific linear projections in the last three rows of Table~\ref{Ablation Study on KICA Dataset}. Compared with the baseline without task-specific transfer, task-specific linear projections yield performance increases in all evaluation metrics except AUC and ACC in tumor staging, which demonstrates that it is essential to transfer the task-agnostic features into different task-specific spaces during multi-task learning. Moreover, our cross-attention based task-aware knowledge injection module performs better than task-specific linear projections in all aspects, which illustrates the effectiveness of storing the task-specific knowledge in latent tokens and importing it via the attention mechanism. 
\subsubsection{Investigation of Multi-task Learning Paradigm.} To assess the effectiveness of the multi-task learning paradigm, we also tested our framework and the elaborately designed modules under the single-task setting on the KICA dataset. Table~\ref{Effectiveness of Multi-task Learning Paradigm} summarizes the experimental results, where the multi-task paradigm benefits both tasks, especially the more challenging one, \emph{i.e.}, tumor staging. The ACC and F1-score in the tumor staging task increase by $1.31\%$ and $1.10\%$, respectively. The performance improvement in tumor typing is smaller than in staging, as typing already achieves very high performance. However, note that even under the single-task setting our framework still outperforms all previous SOTAs in Table~\ref{Comparison with Single-Task SOTAs on KICA Dataset}, including GT-MIL~\shortcite{zheng2021deep}, which means that our Task-aware Knowledge Injection and Domain Knowledge-driven Pooling modules can also improve the performance of single-task based methods. \subsubsection{Investigation of Task-knowledge Latent Token.} We also investigate the impact of different schemes for the task-aware knowledge latent tokens on the KICA dataset, and show the mean results in Table~\ref{Comparison of Separated and Shared Latent Tokens in Task Knowledge Injection Module}. The ``Shared" scheme means that different task branches use a shared set of knowledge latent tokens, while the ``Specific" scheme means that different task branches have independent knowledge latent token sets. The ``Specific" scheme achieves better performance in all metrics, especially in F1-score for the typing task and in ACC and F1-score for the staging task. These experimental results show that different diagnostic tasks require different knowledge (sets), which is consistent with pathologists' experience. 
\begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figures/tsne_vis.pdf} \caption{t-SNE visualization of task-specific features in different Task-aware Knowledge Injection modules. } \label{tsne_vis} \end{figure} \subsubsection{Visualization of Task-specific Feature Spaces.} We further conduct t-SNE visualization of six different WSIs to demonstrate the learned task-specific features in Figure~\ref{tsne_vis}. The blue and orange nodes denote the transferred node features in the Task-aware Knowledge Injection modules of the typing branch and the staging branch, respectively. All samples show a clear separation of the nodes with different colors, which means the task-shared features are indeed successfully transferred into different task-specific feature spaces. \section{Conclusion} In this paper, we propose a novel MulGT framework for WSI analysis with multi-task learning. By exploring the commonalities and the different diagnosis patterns of different WSI diagnosis tasks, our framework is able to learn a more general and robust task-shared representation as well as more accurate task-specific features. Specifically, a Task-aware Knowledge Injection module is introduced to store and import the knowledge of different tasks, thus transferring the task-shared representation into different task-specific feature spaces. Meanwhile, to leverage the domain knowledge of pathologists, a Domain Knowledge-driven Graph Pooling module is elaborately designed to simulate the diagnosis patterns of different analysis tasks. These building blocks lead to performance improvements on both tasks. Extensive experiments validate the superiority of the proposed framework. In the future, we will extend our framework to other WSI analysis tasks, such as survival prediction and prognosis analysis, with domain knowledge from pathologists. 
Meanwhile, we will develop a hierarchical multi-task Graph-Transformer framework to leverage the natural image pyramid structure of WSI for multi-scale analysis. \section*{Acknowledgements} The work described in this paper was supported in part by a grant from the Research Grants Council of the Hong Kong SAR, China (Project No. T45-401/22-N) and in part by HKU Seed Fund for Basic Research (Project No. 202009185079 and 202111159073). The computations in this paper were partly performed using research computing facilities offered by Information Technology Services, The University of Hong Kong.
\section{Introduction} In this paper we study K\"ahler Ricci flows \begin{equation} \label{krf} \partial_t g_{i\bar j} = - R_{i\bar j} + g_{i\bar{j}} = \partial_i \partial_{\bar j} u, \quad t>0, \end{equation} on a compact K\"ahler manifold ${\bf M}$ of complex dimension $m=n/2$, with positive first Chern class; here $u$ is the Ricci potential. Given an initial K\"ahler metric $g_{i\bar j}(0)$, H. D. Cao \cite{Ca:1} proved that (\ref{krf}) has a solution for all time $t$. Recently, many results concerning the long time and uniform behavior of (\ref{krf}) have appeared. For example, when the curvature operator or the bisectional curvature is nonnegative, it is known that solutions to (\ref{krf}) stay smooth as time goes to infinity (see \cite{CCZ:1}, \cite{CT:1} and \cite{CT:2} for examples). In the general case, Perelman (cf. \cite{ST:1}) proved that the scalar curvature $R$ is uniformly bounded, and the Ricci potential $u(\cdot, t)$ is uniformly bounded in $C^1$ norm, with respect to $g(t)$. When the complex dimension is $m=2$ and $({{\bf M}}, g(t))$ is a solution to (\ref{krf}), it is proved in \cite{CW:1} that the isoperimetric constant of $({{\bf M}}, g(t))$ is bounded from below by a uniform constant. We mention that an isoperimetric estimate for the Ricci flow on the two sphere was already proven by Hamilton in \cite{Ha:1}. In this paper, we prove that in all complex dimensions, the isoperimetric constant of $({{\bf M}}, g(t))$ is bounded from below by a uniform constant. This extends the result of Chen-Wang mentioned above. This result seems to add more weight to the belief that the K\"ahler Ricci flow converges to a K\"ahler Ricci soliton as $t \to \infty$, except on a subvariety of complex codimension $2$. To make the statement precise, let us introduce some notation and definitions. 
We use ${\bf M}$ to denote a compact Riemannian manifold and $g(t)$ the metric at time $t$; $d(x, y, t)$ is the geodesic distance under $g(t)$; $B(x, r, t) = \{ y \in {{\bf M}} \ | \ d(x, y, t) < r \}$ is the geodesic ball of radius $r$, under the metric $g(t)$, centered at $x$, and $|B(x, r, t)|_{g(t)}$ is the volume of $B(x, r, t)$ under $g(t)$; $dg(t)$ is the volume element. We also reserve $R=R(x, t)$ for the scalar curvature under $g(t)$. When the time variable $t$ is not explicitly used, we may suppress it in the notations mentioned above. \medskip The main result of the paper is the following theorem. \begin{theorem} \label{thm1.1} Let $({{\bf M}}, g(t))$, $\partial_t g_{i\bar j} = - R_{i\bar j} + g_{i\bar{j}}$, be a K\"ahler Ricci flow on an $n$ real dimensional compact K\"ahler manifold with positive first Chern class. Then there exist a uniform constant $S_0$, depending only on the initial metric $g(0)$, and a numerical constant $C$, such that \[ \left[ \int_{{\bf M}} | u |^{n/(n-1)} d g(t) \right]^{(n-1)/n} \le S_0 \int_{{\bf M}} | \nabla u | dg(t) + \frac{C}{|{\bf M}|^{1/n}_{g(t)}} \int_{{\bf M}} | u| dg(t) \] for all $ u \in C^\infty({\bf M})$. \end{theorem} {\it Remark. It is well known that Theorem \ref{thm1.1} implies a uniform lower bound for the isoperimetric constant of $({{\bf M}}, g(t))$, i.e. there exists a positive constant $c_0$, depending only on the initial metric, such that \[ I({{\bf M}}, g(t)) \equiv \inf_{D \subset {\bf M}} \, \frac{ | \partial D |}{ \left[ \min \{ |D|, \, |{{\bf M}}-D|\} \right]^{(n-1)/n} } \ge c_0. \] Here all volumes are with respect to $g(t)$, and $D$ is a subdomain of ${\bf M}$ such that $\partial D$ is an $(n-1)$-dimensional submanifold of ${\bf M}$. A proof can be found in, e.g., \cite{CLN:1}, Section 5.1. } \medskip The proof of the theorem is based on the following properties of K\"ahler Ricci flows on compact manifolds with positive first Chern class. \vskip 0.5cm \noindent {\it Property A. 
Let $({{\bf M}}, g(t))$ be a K\"ahler Ricci flow (\ref{krf}) on a compact manifold with positive first Chern class. There exist uniform positive constants $C$ and $\kappa$ so that \\ 1. \quad $| R(g(t))| \le C$; \\ 2. \quad $diam ({\bf M}, g(t)) \le C$; \\ 3. \quad $\Vert u \Vert_{C^1} \le C$; \\ 4. \quad $|B(x, r, t)|_{g(t)} \ge \kappa r^n$ for all $t>0$ and $r \in (0, diam ({\bf M}, g(t)))$; \\ 5. \quad $ |B(x, r, t)|_{g(t)} \le \kappa^{-1} r^n $ for all $r>0$, $t>0$. } \vskip 0.5cm \noindent {\it Property B. Under the same assumption as in Property A, there exists a uniform constant $S_2$ so that the following $L^2$ Sobolev inequality holds: \\ \[ \left( \int_{{\bf M}} v^{2n/(n-2)} d g(t) \right)^{(n-2)/n} \le S_2 \left( \int_{{\bf M}} | \nabla v |^2 dg(t) + \int_{{\bf M}} v^2 dg(t) \right) \] for all $ v \in C^\infty({\bf M}, g(t))$.\\ } Properties A1-A4 are due to Perelman (cf. \cite{ST:1}); Property B was first established in \cite{Z07:1} (see also \cite{Ye:1}, \cite{Z10:1}). Property A5 can be found in \cite{Z11:1} and also \cite{CW:2}. The rest of the paper is organized as follows. In Section 2, we prove some gradient bounds for harmonic functions on $({{\bf M}}, g(t))$. Since the bounds do not rely on the usual lower bound of the Ricci curvature, the result may be of independent interest. Using these bounds, we prove the theorem in Section 3. \medskip \section{Gradient bounds for harmonic functions} In order to prove the theorem, in this section we state and prove a number of results on harmonic functions on certain manifolds with a fixed metric. These results are well known when the manifold has nonnegative Ricci curvature, a property that is not available for us. Since some of these results may be of independent interest, we will also deal with the real variable case and impose conditions which are more general than needed for the proof of the theorems in Section 1. As the metric is independent of time in this section, we will suppress the time variable $t$. 
In this section, the basic assumptions on the $n$ real dimensional manifold ${\bf M}$ are the following. {\it Assumption 1. $L^2$ Sobolev inequality: there is a positive constant $\alpha$ such that \[ \left( \int_{{\bf M}} u^{2n/(n-2)} dg \right)^{(n-2)/n} \le \alpha \left( \int_{{\bf M}} | \nabla u |^2 dg + \int_{{\bf M}} u^2 dg \right) \] for all $ u \in C^\infty({\bf M})$. } {\it Assumption 2. There exists a positive constant $\kappa$ such that \[ \kappa r^n \le |B(x, r)| \le \kappa^{-1} r^n, \qquad x \in {\bf M}, \quad 0 < r< diam({\bf M}) \le 1. \]} {\it Assumption 3. There exist a smooth function $L=L(x)$ and two smooth parallel $(2, 2)$ tensor fields $P$ and $Q$ such that the Ricci curvature is given by \[ R_{ij} =P^{kl}_{ij} \partial_k \partial_l L + Q^{kl}_{ij} g_{kl} \] in local coordinates. Moreover $\Vert P \Vert_\infty \le 1$, $\Vert Q \Vert_\infty \le 1$. Here $\partial_k \partial_l L $ is the Hessian of $L$. } Note that Assumption 3 includes, as a special case, the formula for the Ricci curvature on K\"ahler manifolds, $\partial_i\partial_{\bar j} u = g_{i\bar j}-R_{i\bar j}$, where $u$ is the Ricci potential. \begin{lemma} \label{ledumean} Suppose $({\bf M}, g)$ is a compact Riemannian manifold of real dimension $n$, satisfying Assumptions 1, 2, 3. Let $u$ be a smooth harmonic function in $B(x_0, r)$, where $x_0 \in {\bf M}$ and $ r \le diam({\bf M})$. Then there exists a positive constant $C_0=C_0(\alpha, \kappa, \Vert \nabla L \Vert_\infty)$ such that \[ \sup_{x \in B(x_0, r/2)} |\nabla u(x)| \le C_0 \frac{1}{r} \left(\frac{1}{r^n} \int_{B(x_0, r)} u^2 dg \right)^{1/2}. \] \end{lemma} \proof Since $u$ solves $\Delta u=0$, by Bochner's formula, we have \begin{equation} \label{dddu} \Delta |\nabla u |^2 = 2 | Hess \, u |^2 + 2 Ric (\nabla u, \nabla u). 
\end{equation} Given $\sigma \in (0, 1)$, let $\psi=\psi(x)$ be a standard Lipschitz cut-off function such that $\psi(x)=0$ when $x \in B(x_0, r)^c$; $0 \le \psi \le 1$ and $\psi(x)=1$ when $x \in B(x_0, \sigma r)$; and $|\nabla \psi| \le \frac{4}{(1-\sigma) r}$. We mention that no second order derivatives of $\psi$ are involved, so we only need $\psi$ to be Lipschitz. For clarity of presentation, we write \[ f = |\nabla u|^2. \] Given a number $p \ge 1$, using $f^{2p-1} \psi^2$ as a test function on (\ref{dddu}), after a routine calculation, we derive \begin{equation} \label{dfpsi} \aligned &\frac{2p-1}{p^2} \int_{B(x_0, r)} |\nabla (f^p \psi)|^2 dg \\ &\le \frac{C}{(1-\sigma)^2 r^2} \int_{B(x_0, r)} f^{2p} dg -2 \int_{B(x_0, r)} | Hess \, u |^2 f^{2p-1} \psi^2 dg - 2 \int_{B(x_0, r)} Ric (\nabla u, \nabla u) f^{2p-1} \psi^2 dg\\ &\equiv I_1 + I_2 +I_3. \endaligned \end{equation} Now we want to absorb part of $I_3$ by $I_2$, which is a good term. In local orthonormal coordinates, we denote by $u_i$ the $i$-th component of $\nabla u$. Then, by Assumption 3, we have, after integrating by parts, \[ \aligned I_3 &= - 2 \int R_{ij} u_i u_j f^{2p-1} \psi^2 dg\\ &=-2 \int P^{kl}_{ij} (\partial_k \partial_l L) \, u_i u_j f^{2p-1} \psi^2 dg - 2 \int Q^{kl}_{ij} g_{kl} u_i u_j f^{2p-1} \psi^2 dg\\ &=2 \int P^{kl}_{ij} (\partial_l L) \, (\partial_k u_i) u_j f^{2p-1} \psi^2 dg + 2 \int P^{kl}_{ij} (\partial_l L) \, u_i (\partial_k u_j) f^{2p-1} \psi^2 dg\\ & \qquad+2 \int P^{kl}_{ij} (\partial_l L) \, u_i u_j \partial_k (f^{2p-1} \psi^2) dg - 2 \int Q^{kl}_{ij} g_{kl} u_i u_j f^{2p-1} \psi^2 dg. \endaligned \] Here we also used the assumption that the $P$ tensor is parallel. To control the second-to-last term in the above identity, we notice that $ |u_i u_j | \le |\nabla u|^2 = f$ and that \[ f \partial_k (f^{2p-1} \psi^2) =(\partial_k f^p) \psi^2 \frac{2p-1}{p} f^p + 2 f^{2p} (\partial_k \psi) \psi. 
\] From here, using Young's inequality, it is easy to see that \[ I_3 \le \frac{2p-1}{2p^2} \int |\nabla (f^p \psi)|^2 dg + \int | Hess \, u |^2 f^{2p-1} \psi^2 dg + C p \left(\frac{ \Vert \nabla L \Vert^2_\infty}{[(1-\sigma) r]^2} + \Vert \nabla L \Vert^2_\infty +1 \right) \int f^{2p} \psi^2 dg. \]Substituting this into (\ref{dfpsi}), we deduce \[ \int |\nabla (f^p \psi)|^2 dg \le C p^2 \left(\frac{ \Vert \nabla L \Vert^2_\infty}{[(1-\sigma) r]^2} + \Vert \nabla L \Vert^2_\infty +1 \right) \int f^{2p} \psi^2 dg. \] Since $diam ({\bf M}) \le 1$ by Assumption 2, the last inequality implies \[ \int |\nabla (f^p \psi)|^2 dg \le C p^2 \frac{ \Vert \nabla L \Vert^2_\infty + 1}{[(1-\sigma) r]^2} \int f^{2p} \psi^2 dg. \] Using the $L^2$ Sobolev inequality in Assumption 1 and Moser's iteration, we deduce, by choosing $p= (n/(n-2))^i$ and replacing $(1-\sigma)r$ by $\left(1/4\right)^i r$, $i=0, 1, 2, ...$, that \begin{equation} \label{du3r/4} \sup_{x \in B(x_0, r/2)} |\nabla u(x)|^2 \le C_1 \frac{1}{r^n} \int_{B(x_0, 3r/4)} |\nabla u|^2 dg \end{equation} where $C_1=C_1(\alpha, \kappa, \Vert \nabla L \Vert_\infty)$. We observe that even though the number $p$ appears on the right-hand side of the inequality before (\ref{du3r/4}), its growth is only exponential in $i$. As is well known, it is suppressed by the Moser iteration process, just like the term $1/(1-\sigma)^2$. Next we take $\sigma=3/4$ in the definition of the cut-off function $\psi$. Using $u \psi^2$ as a test function on $\Delta u =0$, we infer, after a routine calculation, \[ \int_{B(x_0, 3r/4)} |\nabla u|^2 dg \le \frac{C}{r^2} \int_{B(x_0, r)} u^2 dg. \] Here $C$ is a numerical constant. Combining the last two inequalities we arrive at \[ \sup_{x \in B(x_0, r/2)} |\nabla u(x)|^2 \le C_0 \frac{1}{r^{n+2}} \int_{B(x_0, r)} u^2 dg \] where $C_0=C_0(\alpha, \kappa, \Vert \nabla L \Vert_\infty)$. \qed The next lemma is simply the $L^2$ mean value inequality for the Laplace and heat equations under Assumptions 1 and 2. 
Since the result is well known (Grigoryan \cite{Gr:1} and Saloff-Coste \cite{Sa:1}), we omit the proof. \begin{lemma} \label{lemvp} Let ${\bf M}$ be a manifold satisfying Assumptions 1, 2. Suppose $u$ is a smooth harmonic function in $B(x_0, r)$ where $x_0 \in {\bf M}$ and $ r \le diam({\bf M})$. Then there exists a positive constant $C_1=C_1(\alpha, \kappa)$ such that \[ \sup_{x \in B(x_0, r/2)} |u(x)| \le C_1 \left(\frac{1}{r^n} \int_{B(x_0, r)} u^2 dg \right)^{1/2}. \] Suppose $u$ is a solution of the heat equation $\Delta u -\partial_t u =0$ in the space-time cube $B(x_0, r) \times [t_0-r^2, t_0]$. Then \[ \sup_{(x, t) \in B(x_0, r/2) \times [t_0 - r^2/4, t_0]} |u(x, t)| \le C_1 \left(\frac{1}{r^{n+2}} \int^{t_0}_{t_0-r^2}\int_{B(x_0, r)} u^2 dg ds \right)^{1/2}. \] \end{lemma} The next lemma provides bounds for the Green's function of the Laplacian and its gradients. \begin{lemma} \label{leDGbound} Let ${\bf M}$ be a manifold satisfying Assumptions 1, 2 and 3. Assume also $diam({\bf M})>\beta>0$ for a positive constant $\beta$. Let $\Gamma_0$ be the Green's function of the Laplacian $\Delta$ on ${\bf M}$. Then there exists a positive constant $C_0=C_0(\alpha, \beta, \kappa, \Vert \nabla L \Vert_\infty)$ such that (a). $ |\Gamma_0(x, y)| \le \frac{C_0}{d(x, y)^{n-2}}$, $x, y \in {\bf M}$, (b). $ |\nabla_x \Gamma_0(x, y)| \le \frac{C_0}{d(x, y)^{n-1}}$, $x, y \in {\bf M}$. \end{lemma} \proof Once (a) is proven, (b) is just a consequence of (a) and Lemma \ref{ledumean} applied on the ball $B(x, d(x, y)/2)$. So now we just need to prove (a). On a compact manifold ${\bf M}$, we know that \begin{equation} \label{gammaG} \Gamma_0(x, y) = \int^\infty_0 \left( G(x, t, y)- \frac{1}{|{\bf M}|} \right) dt \end{equation} where $G$ is the fundamental solution of the heat equation $\Delta u - \partial_t u=0$. We remark that the metric is fixed here. So we need to bound $G$. 
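As a brief aside (a formal verification only, not needed for the argument), identity (\ref{gammaG}) can be checked by applying $-\Delta_x$ under the integral sign: since $\Delta_x G = \partial_t G$, $G(x, 0, y) = \delta_y(x)$ and $G(x, t, y) \to \frac{1}{|{\bf M}|}$ as $t \to \infty$ on a compact manifold, we have \[ -\Delta_x \int^\infty_0 \left( G(x, t, y)- \frac{1}{|{\bf M}|} \right) dt = -\int^\infty_0 \partial_t G(x, t, y) \, dt = \delta_y(x) - \frac{1}{|{\bf M}|}, \] which is the defining equation of the Green's function $\Gamma_0$; the convergence of the $t$-integral at infinity is justified a posteriori by the exponential decay (\ref{Gboundt>b}) established below.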
Under Assumptions 1 and 2, Grigoryan \cite{Gr:1} and Saloff-Coste \cite{Sa:1} proved that there exist positive constants $A_1, A_2, A_3$ which depend only on $\alpha$ and $\kappa$ such that \begin{equation} \label{Gaub} G(x, t, y) \le A_1 ( 1 + \frac{1}{t^{n/2}} ) e^{- A_2 d(x, y)^2/t}. \end{equation} Fixing $x, y$ and $t$, we write $u=u(z, l) =G(z, l, y)$ and regard it as a solution of the heat equation in the cube $B(x, r) \times [t-r^2, t]$. Here $r=\sqrt{t}/2$. Extending Lemma \ref{ledumean} to the parabolic case in a routine manner, we know that \[ |\nabla u(x, t)| \le \frac{C_1}{r} \left(\frac{1}{r^{n+2}} \int^t_{t-r^2}\int_{B(x, r)} u^2 dg ds \right)^{1/2}. \]Substituting (\ref{Gaub}) into the right-hand side, we conclude that \[ | \nabla_x G(x, t, y)| \le A_1 ( 1 + \frac{1}{t^{(n+1)/2}} ) e^{- A_2 d(x, y)^2/t}. \]Here the constants $A_1$ and $A_2$ may have changed. It is well known that this gradient bound and the upper bound (\ref{Gaub}) together imply a Gaussian lower bound for the heat kernel $G$; see e.g.~\cite{CD:1}, p.~1165. Now, by \cite{Sa:1}, the following $L^2$ Poincar\'e inequality holds: for any $u \in C^\infty({\bf M})$, $r \in (0, diam({\bf M})]$, \begin{equation} \label{l2Poin} \int_{B(x, r/2)} | u - \bar u_{B(x, r/2)}|^2 dg \le A_3 r^2 \int_{B(x, r)} |\nabla u|^2 dg. \end{equation} By a trick in Jerison \cite{J:1}, which uses only the volume doubling property, one has \begin{equation} \label{goodl2Poin} \int_{B(x, r)} | u - \bar u_{B(x, r)}|^2 dg \le C A_3 r^2 \int_{B(x, r)} |\nabla u|^2 dg. \end{equation} Here $C$ depends only on $\kappa$. We mention that some of the cited results were stated for complete noncompact manifolds, but they are also valid for complete, closed manifolds as long as the diameters are uniformly bounded. Let $u_0 \in C^\infty({\bf M})$ be a function such that $\int_{\bf M} u_0 dg =0$. 
Then the function \begin{equation} \label{uxt=} u(x, t) = \int_{\bf M} \left( G(x, t, z)- \frac{1}{|{\bf M}|} \right) u_0(z) dg(z) \end{equation} is a solution to the heat equation such that $\int_{\bf M} u(x, t) dg(x)=0$. By the $L^2$ Poincar\'e inequality with $r=diam ({\bf M})$, we have \[ \int_{\bf M} u^2 dg \le C \, A_3 \, diam({\bf M})^2 \int_{\bf M} |\nabla u|^2 dg \le C A_3 \int_{\bf M} |\nabla u|^2 dg \]since $diam({\bf M}) \le 1$ by assumption. From this we deduce \[ \frac{d}{d t} \int_{\bf M} u^2 dg = - 2 \int_{\bf M} |\nabla u|^2 dg \le -2 (CA_3)^{-1} \int_{\bf M} u^2 dg \] and consequently \[ \int_{\bf M} u^2(z, s) dg \le e^{ - 2 (CA_3)^{-1} s} \int_{\bf M} u^2_0(z) dg, \qquad s>0. \] Recall that we assume $diam({\bf M})>\beta>0$. For $t \ge \beta^2$, we can apply Lemma \ref{lemvp} to get \[ u^2(x, t) \le C^2_1 \frac{1}{\beta^{n+2}} \int^t_{t-\beta^2} \int_{{\bf M}} u^2(z, s) dg ds. \] Combining this with the previous inequality, we arrive at \[ u^2(x, t) \le C_2 e^{ - 2 (CA_3)^{-1} t} \int_{\bf M} u^2_0(z) dg \] where $C_2=C_2(\alpha, \beta, \kappa, A_3)$. By (\ref{uxt=}), this means \[ \left[ \int_{\bf M} \left( G(x, t, z)- \frac{1}{|{\bf M}|} \right) u_0(z) dg \right]^2 \le C_2 e^{ - 2 (CA_3)^{-1} t} \int_{\bf M} u^2_0(z) dg. \] Fixing $x \in {\bf M}$ and $t \ge \beta^2$, and taking $u_0(z) = G(x, t, z)- \frac{1}{|{\bf M}|}$ in the above inequality, we obtain \begin{equation} \label{intg-1m} \int_{\bf M} \left( G(x, t, z)- \frac{1}{|{\bf M}|} \right)^2 dg \le C_2 e^{ - 2 (CA_3)^{-1} t}, \quad t \ge \beta^2. \end{equation} Fixing $x$, the function $h(z, t) \equiv G(x, t, z)- \frac{1}{|{\bf M}|}$ is also a solution to the heat equation. Applying the mean value inequality in Lemma \ref{lemvp} on the cube $B(y, \beta) \times [t-\beta^2, t]$, we infer \[ h^2(y, t) \le C^2_1 \frac{1}{\beta^{n+2}} \int^t_{t-\beta^2} \int_{{\bf M}} h^2(z, s) dg ds. 
\] That is \[ \left(G(x, t, y)- \frac{1}{|{\bf M}|}\right)^2 \le C^2_1 \frac{1}{\beta^{n+2}} \int^t_{t-\beta^2} \int_{{\bf M}} \left( G(x, s, z)- \frac{1}{|{\bf M}|} \right)^2 dg ds. \]Substituting (\ref{intg-1m}) into the last inequality, we deduce \begin{equation} \label{Gboundt>b} | G(x, t, y)- \frac{1}{|{\bf M}|} | \le C_3 e^{- C_4 t}, \qquad t \ge \beta^2, \end{equation} where $C_3, C_4$ depend only on $\alpha, \beta, \kappa$ and $A_3$, which only depends on $\alpha, \kappa$. From (\ref{gammaG}), \[ \aligned \Gamma_0(x, y) &= \int^{\beta^2}_0 \left( G(x, t, y)- \frac{1}{|{\bf M}|} \right) dt + \int^\infty_{\beta^2} \left( G(x, t, y)- \frac{1}{|{\bf M}|} \right) dt\\ &\equiv I_1 + I_2. \endaligned \]Using the bound (\ref{Gaub}) on $I_1$ and (\ref{Gboundt>b}) on $I_2$, we derive, after simple integration, \[ |\Gamma_0(x, y) | \le \frac{C_0}{d(x, y)^{n-2}}, \]where $C_0$ depends only on $\alpha, \beta, \kappa$. This proves part (a) of the Lemma. As mentioned earlier, part (b) follows from part (a) and Lemma \ref{ledumean}. \qed The next result is a Cheng-Yau type log gradient estimate. Although not used in the proof of the theorems, it may be of independent interest. \begin{proposition} Let ${\bf M}$ be a manifold satisfying Assumptions 1, 2 and 3. Let $u$ be a positive harmonic function in the geodesic ball $B(x, 2r)$, which is properly contained in ${\bf M}$. Then there exists a positive constant $C$, depending only on the controlling constants in Assumptions 1-3, such that \[ \sup_{B(x, r)} | \nabla \ln u | \le \frac{C}{r} \] when $ r \in (0, 1]$. \end{proposition} \proof For convenience, we use the following notation: \[ h \equiv \ln u, \quad F \equiv | \nabla h|^2. \] Following \cite{CY:1}, it is well known that $\Delta h = - F$ and \[ \Delta F = - 2 \nabla h \nabla F + 2 | Hess \, h|^2 + 2 Ric (\nabla h, \nabla h). \] Consider the function \begin{equation} \label{w=} w \equiv F^{5n}. 
\end{equation} By a routine calculation, we know that, for any $p \ge 1$, \begin{equation} \label{ddwp} \Delta w^p \ge - 2 \nabla h \nabla w^p + 10 n p F^{5np-1} | Hess \, h|^2 + 10 n p F^{5np-1} Ric (\nabla h, \nabla h). \end{equation} Given $\sigma \in (0, 1)$, let $\psi=\psi(x)$ be a standard smooth cut-off function such that $\psi(x)=0$ when $x \in B(x_0, r)^c$; $0 \le \psi \le 1$ and $\psi(x)=1$ when $x \in B(x_0, \sigma r)$ and $|\nabla \psi| \le \frac{4}{(1-\sigma) r}$. Using $w^p \psi^2$ as a test function on (\ref{ddwp}), we deduce, after a straightforward calculation, that \begin{equation} \label{dwpp} \aligned \int |\nabla(w^p \psi)|^2 dg &\le -10 n p \int F^{5np-1} \, | Hess \, h|^2 w^p \psi^2 dg + 2 \int \nabla h \nabla w^p \, w^p \psi^2 dg\\ &\qquad - 10 n p \int F^{5np-1} Ric (\nabla h, \nabla h) w^p \psi^2 dg\\ &\equiv I_1 + I_2 + I_3. \endaligned \end{equation} Next we will show that the negative term $I_1$ dominates $I_2$ and $I_3$, modulo some harmless terms. Observe that \[ \aligned I_2 &= \int \psi^2 \nabla h \nabla w^{2p} dg \\ &= -2 \int \psi \nabla \psi \nabla h \, w^{2p} dg - \int \psi^2 \Delta h \, w^{2p} dg. \endaligned \] Recall that $\Delta h = -|\nabla h|^2 = -F$. Hence, by Young's inequality, for any given $\epsilon>0$, \begin{equation} \label{i2<} I_2 \le (\epsilon +1) \int F w^{2 p} \psi^2 dg + \epsilon^{-1} \Vert \nabla \psi \Vert^2_\infty \int w^{2p} \psi^2 dg. \end{equation} It takes a little longer to prove the bound for $I_3$. By our condition on the Ricci curvature $R_{ij}$, we have \[ I_3 = -10 n p \int F^{5np-1} ( P^{kl}_{ij} \partial_k \partial_l L + Q^{kl}_{ij} g_{kl}) \, \partial_i h \partial_j h \, w^p \psi^2 dg. 
\] After integration by parts, this becomes \begin{equation} \label{i3=} \aligned I_3 &= 10 np (5np-1) \int F^{5 np-2} \partial_k F \, P^{kl}_{ij} \partial_l L \, \partial_i h \partial_j h \, w^p \psi^2 dg \\ & \qquad + 10 n p \int F^{5 np-1} \, P^{kl}_{ij} \partial_l L \, (\partial_k \partial_i h) \, \partial_j h \, w^p \psi^2 dg\\ & \qquad + 10 n p \int F^{5 np-1} \, P^{kl}_{ij} \partial_l L \, \partial_i h \, (\partial_k \partial_j h) \, w^p \psi^2 dg \\ & \qquad + 10 n p \int F^{5 np-1} \, P^{kl}_{ij} \partial_l L \, \partial_i h \, \partial_j h \, \partial_k (w^p \psi) \psi dg \\ & \qquad + 10 n p \int F^{5 np-1} \, P^{kl}_{ij} \partial_l L \, \partial_i h \, \partial_j h \, w^p \psi \partial_k \psi \, dg\\ &\qquad -10 n p \int F^{5np-1} Q^{kl}_{ij} g_{kl}\, \partial_i h \partial_j h \, w^p \psi^2 dg\\ &\equiv T_1 + ... + T_6. \endaligned \end{equation} Let us bound $T_i$, $i=1, ..., 6$. Observe that \[ | T_1 | \le 10 np (5np-1) \Vert \nabla L \Vert_\infty \int F^{5 np-2} |\nabla F | \, | \nabla h |^2 \, w^p \psi^2 dg. \] Since $| \nabla h |^2 = F$, we deduce, using $w^p=F^{ 5np}$, \[ \aligned | T_1 | &\le 10 np (5np-1) \Vert \nabla L \Vert_\infty \int F^{5 np-1} |\nabla F | \, w^p \psi^2 dg\\ &\le 10 np \Vert \nabla L \Vert_\infty \int |\nabla w^p | \, w^p \psi^2 dg. \endaligned \] Thus, after a little calculation, we obtain, \begin{equation} \label{t1<} | T_1 | \le \frac{1}{10} \int | \nabla (w^p \psi) |^2 dg + c p^2 \Vert \nabla L \Vert^2_\infty \int w^{2 p} \psi^2 dg + c \Vert \nabla \psi \Vert^2_\infty \int_{supp \, \psi} w^{2 p} dg. \end{equation} Next \[ \aligned | T_2 | &\le 10 np \Vert \nabla L \Vert_\infty \int F^{5 np-1} \, | Hess \, h | \, | \nabla h | \, w^p \psi^2 dg \\ & \le np \int F^{5 np-1} \, | Hess \, h |^2 \, \, w^p \psi^2 dg + c np \Vert \nabla L \Vert^2_\infty \int F^{5 np-1} \, |\nabla h |^2 \, \, w^p \psi^2 dg. 
\endaligned \] Recalling again that $|\nabla h |^2 = F$ and the definition of $I_1$, we deduce \begin{equation} \label{t2<} | T_2 | \le - \frac{I_1}{10} + c np \Vert \nabla L \Vert^2_\infty \int \, w^{2 p} \psi^2 dg. \end{equation} Since $T_3$ is similar to $T_2$, we also have \begin{equation} \label{t3<} | T_3 | \le - \frac{I_1}{10} + c np \Vert \nabla L \Vert^2_\infty \int \, w^{2 p} \psi^2 dg. \end{equation} By Young's inequality \[ | T_4 | \le \frac{1}{2} \int | \nabla ( w^p \psi)|^2 dg + 50 n^2 p^2 \Vert \nabla L \Vert^2_\infty \int F^{10 np-2} \, |\nabla h |^4 \psi^2\, dg. \] Since $F = |\nabla h|^2 $ and $w=F^{5n}$, this shows \begin{equation} \label{t4<} | T_4 | \le \frac{1}{2} \int | \nabla ( w^p \psi)|^2 dg + c p^2 \Vert \nabla L \Vert^2_\infty \int w^{2 p} \psi^2\, dg. \end{equation} Next \[ |T_5| \le 10 np \Vert \nabla L \Vert_\infty \, \Vert \nabla \psi \Vert_\infty \int F^{5np-1} | \nabla h|^2 w^p \psi dg, \] which becomes \begin{equation} \label{t5<} |T_5| \le 10 np \Vert \nabla L \Vert_\infty \, \Vert \nabla \psi \Vert_\infty \int w^{2p} \psi dg. \end{equation} Lastly \begin{equation} \label{t6<} | T_6 | \le 10 np \int F^{5np-1} | \nabla h|^2 w^p \psi^2 dg = 10 np \int w^{2 p} \psi^2 dg. \end{equation} Substituting (\ref{t1<})-(\ref{t6<}) into (\ref{i3=}), we find that \begin{equation} \label{i3<} | I_3 | \le \frac{|I_1|}{5} + \frac{3}{5} \int | \nabla ( w^p \psi)|^2 dg + c \frac{ p^2 \Vert \nabla L \Vert^2_\infty +1}{[(1-\sigma) r]^2} \int_{supp \, \psi} w^{2p} dg. \end{equation} Here we recall that \[ I_1= -10 n p \int F^{5np-1} \, | Hess \, h|^2 w^p \psi^2 dg. \]Using the inequality \[ | Hess \, h |^2 \ge \frac{1}{n} ( \Delta h )^2 = \frac{1}{n} |\nabla h |^4, \]we find that \[ I_1 = \frac{I_1}{2} + \frac{I_1}{2} \le \frac{I_1}{2} - 5p \int F^{5np-1} \, | \nabla h|^4 w^p \psi^2 dg \] which implies, since $w=F^{ 5n}$ and $F = | \nabla h |^2$, that \begin{equation} \label{i1<2} I_1 \le \frac{I_1}{2} - 5p \int F w^{2 p} \psi^2 dg. 
\end{equation} Substituting (\ref{i1<2}), (\ref{i3<}) and (\ref{i2<}) into (\ref{dwpp}), we deduce \[ \aligned \int |\nabla(w^p \psi)|^2 dg &\le \frac{I_1}{2} - 5p \int F w^{2 p} \psi^2 dg + (\epsilon +1) \int F w^{2 p} \psi^2 dg + \epsilon^{-1} \Vert \nabla \psi \Vert^2_\infty \int w^{2p} \psi^2 dg \\ &\qquad + \frac{|I_1|}{5} + \frac{3}{5} \int | \nabla ( w^p \psi)|^2 dg + c \frac{ p^2 \Vert \nabla L \Vert^2_\infty +1}{[(1-\sigma) r]^2} \int_{supp \, \psi} w^{2p} dg. \endaligned \] Since $p \ge 1$, we can take $\epsilon=1$ and obtain \[ \int |\nabla(w^p \psi)|^2 dg + \int (w^p \psi)^2 dg \le c \frac{ p^2 \Vert \nabla L \Vert^2_\infty +1}{[(1-\sigma) r]^2} \int_{supp \, \psi} w^{2p} dg \]where $c$ may have changed in value. By the Sobolev inequality in Assumption 1, this implies \[ \left( \int (w^p \psi)^{2n/(n-2)} dg \right)^{(n-2)/n} \le c \alpha \frac{ p^2 \Vert \nabla L \Vert^2_\infty +1}{[(1-\sigma) r]^2} \int_{supp \, \psi} w^{2p} dg. \] From this, the standard Moser's iteration implies \[ \sup_{B(x, \sigma r)} w^2 \le \frac{C(\alpha, n, \Vert \nabla L \Vert^2_\infty)}{(1-\sigma)^n r^n} \int_{B(x, r)} w^2 dg \]for $r \in (0, 1]$, $\sigma \in (0, 1)$. Using $w=F^{5n}$, we arrive at \[ \sup_{B(x, \sigma r)} F \le \left( \frac{C(\alpha, n, \Vert \nabla L \Vert^2_\infty)}{(1-\sigma)^n r^n} \int_{B(x, r)} F^{10n} dg \right)^{1/(10n)} \]for $r \in (0, 1]$, $\sigma \in (0, 1)$. Using the volume doubling property and an algebraic trick, e.g.~in \cite{LS:1}, we deduce \begin{equation} \label{F<} \sup_{B(x, r/2)} F \le \frac{C(\alpha, n, \Vert \nabla L \Vert^2_\infty)}{r^n} \int_{B(x, r)} F dg \end{equation} for $r \in (0, 1]$. Using integration by parts, it is known that \[ \int_{B(x, r)} F dg = \int_{B(x, r)} |\nabla (\ln u)|^2 dg \le 4 \frac{|B(x, 4 r)|}{ r^2} \le c r^{n-2} \]where we have used Assumption 2. Substituting this into (\ref{F<}), we arrive at \[ \sup_{B(x, r/2)} |\nabla (\ln u)| \le \frac{C(\alpha, n, \Vert \nabla L \Vert^2_\infty)}{r} \]proving the proposition. 
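For completeness, the Caccioppoli-type bound used in the last step can be verified as follows (with slightly different, non-optimized constants). Let $\varphi$ be a Lipschitz cut-off function with $\varphi=1$ on $B(x, r)$, $\varphi=0$ outside $B(x, 2r)$ and $|\nabla \varphi| \le 2/r$. Since $u>0$ and $\Delta u=0$ in $B(x, 2r)$, testing with $\varphi^2/u$ and using Young's inequality gives \[ 0 = \int \frac{\varphi^2}{u} \Delta u \, dg = \int \varphi^2 |\nabla \ln u|^2 dg - 2 \int \varphi \, \nabla \varphi \, \nabla \ln u \, dg \ge \frac{1}{2} \int \varphi^2 |\nabla \ln u|^2 dg - 2 \int |\nabla \varphi|^2 dg, \] so that $\int_{B(x, r)} |\nabla \ln u|^2 dg \le 16 |B(x, 2r)|/r^2 \le c r^{n-2}$ by Assumption 2.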
\qed \section{Proof of the Theorem} \proof (Theorem \ref{thm1.1}). For simplicity of presentation, we omit the time variable in the proof. It is also clear that we can take $\bar u =0$. {\it Step 1.} \medskip Pick $ u \in C^\infty({\bf M})$. Since $\bar u=0$, we have \[ u(x) = -\int_{\bf M} \Gamma_0(x, y) \Delta u(y) dg(y), \] where $\Gamma_0$ is the Green's function of the Laplacian on ${\bf M}$. Pick a small ball $B(x, r)$. Then \[ \aligned u(x) &= -\lim_{r \to 0} \int_{{\bf M}-B(x, r)} \Gamma_0(x, y) \Delta u(y) dg(y)\\ & = \lim_{r \to 0} \int_{{\bf M}-B(x, r)} \nabla \Gamma_0(x, y) \nabla u(y) dg(y) - \lim_{r \to 0}\int_{\partial B(x, r)} \Gamma_0(x, y) \partial_n u(y) dS. \endaligned \] Here we have used integration by parts. Note that $|\Gamma_0(x, y)| \le \frac{C_0}{d(x, y)^{n-2}}$ by Lemma \ref{leDGbound}. Also the area of $\partial B(x, r)$, the sphere of radius $r$, is bounded from above by $C r^{n-1}$. So the second limit is $0$. We mention that one does not need a uniform-in-time bound for $|\partial B(x, r)|$ since we are freezing a time and taking the limit $r \to 0$. Hence \[ u(x) = \int_{\bf M} \nabla \Gamma_0(x, y) \nabla u(y) dg(y). \] According to Lemma \ref{leDGbound}, this implies \begin{equation} \label{u<I1du} |u(x)| \le C_0 \int_{\bf M} \frac{|\nabla u(y)|}{d(x, y)^{n-1}} dg(y) \equiv C_0 I_1(|\nabla u|) (x). \end{equation} Here $I_1$ is the Riesz potential of order $1$. We claim that there exists a constant $C_1$, depending only on the constant $\kappa$ in Property A 5, such that \begin{equation} \label{I1mf} |I_1(f)(x)| \le C_1 [ M(f) (x)]^{1-(1/n)} \, \Vert f \Vert^{1/n}_1. \end{equation} for all smooth functions $f$ on ${\bf M}$. Here $M(f)$ is the Hardy-Littlewood maximal function. The proof given here is more or less the same as in the Euclidean case (p.~86, \cite{Zi:1}), under Property A 5, i.e. $\kappa r^n \le |B(x, r)| \le \kappa^{-1} r^n$. 
Let $\delta$ be a positive number. Then \[ \aligned |I_1(f)(x)| &\le \int_{B(x, \delta)} \frac{|f(y)|}{d(x, y)^{n-1}} dg + \int_{B^c(x, \delta)} \frac{|f(y)|}{d(x, y)^{n-1}} dg\\ &\le \Sigma^\infty_{j=0} \int_{\{ 2^{-j-1} \delta \le d(x, y) <2^{-j} \delta \}} \frac{|f(y)|}{d(x, y)^{n-1}} dg + \delta^{1-n} \int_{\bf M} |f(y)| dg\\ & \le \Sigma^\infty_{j=0} (2^{(j+1)}/\delta)^{n-1} |B(x, 2^{-j} \delta)| \frac{1}{|B(x, 2^{-j} \delta)|} \int_{B(x, 2^{-j} \delta)} |f(y)|dg + \delta^{1-n} \int_{\bf M} |f(y)| dg\\ &\le \Sigma^\infty_{j=0} (2^{(j+1)}/\delta)^{n-1} |B(x, 2^{-j} \delta)| \, M(f)(x) + \delta^{1-n} \Vert f \Vert_1. \endaligned \] By Property A 5, \[ |B(x, 2^{-j} \delta)| \le \kappa^{-1} (2^{-j} \delta)^n. \] Combining the last two inequalities, we deduce \[ |I_1(f)(x)| \le C \kappa^{-1} \delta \, M(f)(x) + \delta^{1-n} \Vert f \Vert_1 \] which implies (\ref{I1mf}) by taking $\delta =[ M(f)(x)/\Vert f \Vert_1]^{-1/n}$. We remark that if $\delta>diam ({\bf M})$, then the integral $\int_{B^c(x, \delta)} \frac{|f(y)|}{d(x, y)^{n-1}} dg$ is regarded as zero. Since Properties A 4-5 imply the volume doubling property, it is well known that the maximal operator is bounded from $L^1({\bf M})$ to weak $L^1({\bf M})$, i.e. there is a positive constant $C_2$, depending only on $\kappa$ such that \[ \beta | \{ x \, | \, M(f)(x)> \beta \} | \le C_2 \Vert f \Vert_1, \qquad \]for all $\beta>0$. A short proof can be found, e.g., in Chapter 3 of Folland's book \cite{Fo:1}. Note that the proof there is written for the Euclidean space, but as indicated below, it works for all metric spaces with the volume doubling property. Pick $x \in S_\beta \equiv \{ x \, | \, M(f)(x)> \beta \}$. Then by definition of $M(f)(x)$, there exists a radius $r_x>0$ such that \[ \frac{1}{|B(x, r_x)|} \int_{B(x, r_x)} |f(y)| dg > \beta. \]Note that the family of balls $\{ B(x, r_x) \, | \, x \in S_\beta \}$ is an open cover of $S_\beta$. 
Since the manifold is compact, by a well-known covering argument for compact metric spaces, there exists a finite subfamily $\{ B(x_i, r_{x_i}) \, | \, i=1, ..., m \} $ of disjoint balls such that $\{ B(x_i, 3 r_{x_i}) \, | \, i=1, ..., m \}$ covers $S_\beta$. Using the volume doubling property, one has \[ \beta |S_\beta| \le \beta \Sigma_i |B(x_i, 3 r_{x_i})| \le C \Sigma_i \beta |B(x_i, r_{x_i})| \le C \Vert f \Vert_1. \] Combining this with (\ref{I1mf}), we obtain, for all $\alpha>0$, \[ \aligned | \{ x \, | \, I_1(f)(x)> \alpha \} |& \le | \{ x \, | \, M(f)(x)> \frac{\alpha^{n/(n-1)}}{\Vert f \Vert^{1/(n-1)}_1 C^{n/(n-1)}_1}\} | \\ &\le C_2 C^{n/(n-1)}_1 \Vert f \Vert^{1/(n-1)}_1 \alpha^{-n/(n-1)} \Vert f \Vert_1. \endaligned \]Thus \begin{equation} \label{I1weak} \alpha^{n/(n-1)} | \{ x \, | \, I_1(f)(x)> \alpha \} | \le C_2 C^{n/(n-1)}_1 \Vert f \Vert^{n/(n-1)}_1. \end{equation} By (\ref{u<I1du}) we have \[ | \{ x \, | \, |u(x)|>\alpha \} | \le | \{ x \, | \, I_1(|\nabla u|)(x)>\alpha C^{-1}_0 \} |, \] which implies, via (\ref{I1weak}) with $f=|\nabla u|$, the following statement: if $\bar u = 0$, then for all $\alpha>0$, it holds \begin{equation} \label{uweak} \alpha^{n/(n-1)} | \{ x \, | \, |u(x)|> \alpha \} | \le C_3 \Vert \nabla u \Vert^{n/(n-1)}_1. \end{equation} Here $C_3$ is a constant depending only on the controlling constants in Properties A and B. {\it Step 2.} \medskip Now we will convert the weak type inequality (\ref{uweak}) to the desired $L^1$ Sobolev inequality, using an argument based on the idea in \cite{FGW:1}. See also \cite{CDG:1}. Define the sets \[ D_k = \{ x \, | \, |u(x)| > 2^k \}, \quad k \, \, \text{are integers}. \]Then \[ \Vert u \Vert_p =\left( \Sigma^\infty_{k=-\infty} \int_{D_k -D_{k+1}} |u(x)|^p dg \right)^{1/p} \] where $p=n/(n-1)$ here and later in the proof. 
This shows \begin{equation} \label{up<sum} \Vert u \Vert_p \le \left( \Sigma^\infty_{k=-\infty} 2^{(k+1) p} | D_k| \right)^{1/p} = \left( \Sigma^\infty_{k=-\infty} 2^{(k+1)p } | \{ x \, | \, |u(x)|>2^k \}| \right)^{1/p}. \end{equation} Now we define \[ g_k=g_k(x)= \begin{cases} 2^{k-1}, \qquad x \in D_k = \{ x \, | \, |u(x)| > 2^k \},\\ |u(x)|-2^{k-1}, \qquad x \in D_{k-1}-D_k= \{ x \, | \, 2^{k-1} < |u(x)| \le 2^k \},\\ 0, \qquad x \in D^c_{k-1}= \{ x \, | \, |u(x)| \le 2^{k-1} \}. \end{cases} \] It is clear that $g_k$ is a Lipschitz function such that $0 \le g_k \le |u|/2$. Observe that \[ D_k \subset \{ x \, | \, g_k(x) = 2^{k-1} \} \subset \{ x \, | \, g_k(x) >2^{k-2} \} \subset \{ x \, | \, |g_k(x)-\bar g_k| >2^{k-3} \} \cup \{ x \, | \, \bar g_k >2^{k-3} \}. \] Here $\bar g_k$ is the average of $g_k$ on ${\bf M}$. Hence \begin{equation} \label{Dk<sum} \aligned |D_k| &\le |\{ x \, | \, |g_k(x)-\bar g_k| >2^{k-3} \}| + | \{ x \, | \, \bar g_k >2^{k-3} \}|\\ &\equiv T_{k1} + T_{k2}. \endaligned \end{equation} Note that the average of the function $g_k-\bar g_k$ is $0$. Thus we can apply (\ref{uweak}), with $u$ there being replaced by $g_k-\bar g_k$, to deduce \begin{equation} \label{Tk1} T_{k1}=|\{ x \, | \, |g_k(x)-\bar g_k| >2^{k-3} \}| \le C C_3 2^{-p k} \Vert \nabla g_k \Vert^{p}_1. \end{equation} To treat $T_{k2}$, recall that $g_k \le |u|/2$, which implies \[ \bar g_k \le \Vert u \Vert_1 /(2 |{\bf M}|). \] Therefore \[ T_{k2} = | \{ x \, | \, \bar g_k >2^{k-3} \}| \le | \{ x \, | \, \frac{\Vert u \Vert_1}{|{\bf M}|} >2^{k-2} \}|. \] This shows that \begin{equation} \label{Tk2} \aligned T_{k2} = \begin{cases} 0, \quad \text{when} \quad k > 2 + \log_2 \frac{\Vert u \Vert_1}{|{\bf M}|} \equiv k_0\\ |{\bf M}|, \quad \quad k \le k_0. 
\end{cases} \endaligned \end{equation} Substituting (\ref{Tk1}) and (\ref{Tk2}) into (\ref{Dk<sum}), we deduce \[ \aligned |D_k| \le \begin{cases} C C_3 2^{-p k} \Vert \nabla g_k \Vert^{p}_1, \quad \text{when} \quad k > k_0\\ C C_3 2^{-p k} \Vert \nabla g_k \Vert^{p}_1+ |{\bf M}|, \quad \quad k \le k_0. \end{cases} \endaligned \]Substituting this to (\ref{up<sum}) and using the Minkowski inequality, we obtain \[ \Vert u \Vert_p \le C_4 \Sigma^\infty_{k=-\infty} \Vert \nabla g_k \Vert_1 + C |{\bf M}|^{1/p} \Sigma^{[k_0] +1}_{k=-\infty} 2^k. \] Here $[k_0]$ is the greatest integer less than or equal to $k_0$. Note that the supports of $\nabla g_k$ are disjoint and $\nabla g_k = \nabla |u|$ on their supports. Also by the definition of $k_0$ in (\ref{Tk2}), we have $2^{k_0} = 4 \Vert u \Vert_1/|{\bf M}|$. Hence \[ \Vert u \Vert_p \le C_4 \Vert \nabla u \Vert_1 + C |{\bf M}|^{1/p} \Vert u \Vert_1/|{\bf M}|, \] which implies, since $p=n/(n-1)$, \[ \Vert u \Vert_{n/(n-1)} \le C_4 \Vert \nabla u \Vert_1 + C \frac{1}{|{\bf M}|^{1/n}} \Vert u \Vert_1. \]Here $C$ is a numerical constant. This proves Theorem \ref{thm1.1}. \qed Two final remarks are in order. Let $\alpha$ be the average of $u$ in ${\bf M}$. Then by Theorem \ref{thm1.1}, we have \[ \Vert u-\alpha \Vert_{n/(n-1)} \le C \Vert \nabla u \Vert_1 + C \frac{1}{|{\bf M}|^{1/n}} \Vert u -\alpha \Vert_1. \]Since the average of $u-\alpha$ is zero, inequality (\ref{u<I1du}) implies \[ |u(x)-\alpha| \le C_0 \int_{\bf M} \frac{|\nabla u(y)|}{d(x, y)^{n-1}} dg(y). \]After integration, using the $\kappa$ noninflating property, we find that \[ \Vert u -\alpha \Vert_1 \le C diam ({\bf M}) \Vert \nabla u \Vert_1. \]By the $\kappa$ noncollapsing property of ${\bf M}$, there holds $diam ({\bf M}) \le C |{\bf M}|^{1/n}$. This shows the usual isoperimetric inequality \[ \Vert u-\alpha \Vert_{n/(n-1)} \le C \Vert \nabla u \Vert_1. 
\] Notice that the $L^2$ Poincar\'e inequality (\ref{goodl2Poin}), \cite{Z11:1}, and Section 9 of \cite{Ch:1} imply the following long-time convergence result: {\it the K\"ahler Ricci flow in Theorem \ref{thm1.1} converges sequentially in time, under Gromov-Hausdorff topology, to a compact metric space with $L^2$ Poincar\'e inequality and volume doubling condition.} By Cheeger's Theorem 11.7 \cite{Ch:1}, the limit space can be equipped with a differentiable structure a.e. {\bf Acknowledgment.} Q. S. Z. would like to thank Professors L. Capogna, X. X. Chen, D. Jerison, Bing Wang and Bun Wong for helpful conversations. Part of the work was done when he was a visiting professor at Nanjing University under a Siyuan Foundation grant, the support of which is gratefully acknowledged. Both of us wish to thank the referee for checking the paper carefully and making helpful corrections and suggestions.
\section{Introduction} \noindent Since their introduction in the seminal article of Linetsky (cf.~\cite{li99}) and their generalization in the subsequent work of Davydov and Linetsky (cf.~\cite{dl02}), geometric step options have steadily gained attention in both the financial industry and the academic literature (cf.~\cite{ccw10}, \cite{cmw13}, \cite{xy13}, \cite{lz16}, \cite{wzb17}, \cite{dlm19}). As a whole class of financial contracts written on an underlying asset, these options have the particularity of cumulatively and proportionally losing or gaining value when the underlying asset price stays below or above a predetermined threshold and consequently offer a continuum of alternatives between standard options and (standard) barrier options. Especially when compared with the latter options, geometric step contracts bring clear advantages: Due to their immediate cancellation (or activation) when the barrier level is breached, (standard) barrier options are extremely sensitive to any (temporary) change in the underlying asset price near the barrier so that (delta-)hedging is not reasonably feasible in this region. Additionally, the immediate knock-out (or knock-in) feature inherent to (standard) barrier options may incentivize influential market participants to manipulate the underlying asset price close to the barrier, hence triggering cancellation (or activation) of these options. Switching from an immediate to a cumulative and proportional knock-out (or knock-in) feature instead substantially helps to address these concerns. Indeed, in contrast to (standard) barrier options, the delta of geometric step contracts does not explode and is even continuous at the barrier. This already allows for typical delta-hedges across the barrier level. 
Furthermore, since it is more difficult to control underlying asset prices over an extended period of time, geometric step options are more robust to temporary market manipulations and therefore better protect their holders against adverse actions of market participants in the underlying asset. \vspace{1em} \\ \noindent The present article studies (European-type and American-type) geometric step contracts under exponential Lévy dynamics. Our paper's contribution is manifold and extends several aspects of the geometric step option pricing literature: Firstly, we establish symmetry and parity relations for geometric double barrier step contracts under exponential Lévy models. Since standard options are naturally embedded in the whole class of geometric double barrier step options, these results generalize in particular the ones obtained in~\cite{fm06}, \cite{fm14}. Secondly, we derive various characterizations for European-type and American-type geometric double barrier step contracts as well as for their respective maturity-randomized quantities. Most notably, we are able to derive a jump-diffusion disentanglement for the early exercise premium of American-type geometric double barrier step options and its maturity-randomized equivalent as well as to characterize the diffusion and jump contributions to these early exercise premiums separately by means of partial integro-differential equations (PIDEs) and ordinary integro-differential equations (OIDEs). Our results translate the formalism introduced in \cite{fmv19} to the setting of geometric double barrier step contracts and generalize at the same time the ideas introduced in \cite{cy13}, \cite{lv17} and \cite{cv18} to Lévy-driven markets. 
Next, as an application of these characterizations, we derive semi-analytical pricing results for (regular) European-type and American-type geometric down-and-out step call options under hyper-exponential jump-diffusion processes.\footnote{It is worth recalling that hyper-exponential jump-diffusion processes are particularly suitable for financial modeling since they are able to provide arbitrarily close approximations to Lévy processes having a completely monotone jump density. The latter processes form an important class of Lévy models and include popular market dynamics such as Variance Gamma (VG) processes (cf.~\cite{ms90}, \cite{mcc98}), the CGMY model (cf.~\cite{cgmy02}), and Normal Inverse Gaussian (NIG) processes (cf.~\cite{bn95}).}~Although semi-analytical pricing results for European-type geometric step options were already obtained by other authors under similar asset dynamics (cf.~\cite{ccw10}, \cite{lz16}, \cite{wzb17}), we note that these results employed double Laplace transform techniques while our method only relies on a one-dimensional \mbox{Laplace(-Carson)} transform. Additionally, the current geometric step option pricing literature seems to either study the Black~\&~Scholes framework (cf.~\cite{bs73}) or only European-type geometric step options under more advanced models. To the best of our knowledge, we are therefore the first to provide characterizations as well as (tractable) pricing results for American-type geometric step options. Lastly, we discuss the early exercise structure of geometric step options once jumps are added and subsequently provide an analysis of the impact of jumps on the price and hedging parameters of (European-type and American-type) geometric step contracts. As of now, no clear investigation of this sensitivity to jumps has been provided in the geometric step option pricing literature, which is mainly due to the scarcity of publications dealing with (American-type) geometric step options with jumps. 
\vspace{1em} \\ \noindent The remainder of this paper is structured as follows: In Section~\ref{SEC2}, we introduce (European-type and American-type) geometric step options under exponential Lévy markets and discuss symmetry and parity relations as well as PIDE and OIDE characterizations of these options. Section~\ref{SEC3} deals with geometric step contracts under hyper-exponential jump-diffusion models. Here, semi-analytical pricing results for both European-type and American-type contracts are derived by combining the derivations of Section~\ref{SEC2} with certain properties of the hyper-exponential distribution. These theoretical results are subsequently exemplified in Section~\ref{MR_NUMRES}, where structural and numerical properties of (regular) geometric down-and-out step call options with jumps are illustrated and a comparison to the respective results in the standard Black \& Scholes framework is provided. The paper concludes with Section~\ref{SecConCLUSION}. All proofs and complementary results are presented in the appendices (Appendix A and B). \section{Geometric Step Options and Exponential Lévy Markets} \label{SEC2} \subsection{General Framework} \label{GenLev} We start with a filtered probability space $(\Omega, \mathcal{F}, \mathbf{F}, \mathbb{Q})$ -- a chosen risk-neutral probability space\footnote{It is well-known that exponential Lévy markets are incomplete as defined by Harrison and Pliska (cf.~\cite{hp81}). Specifying or discussing a particular choice of risk-neutral measure is not the aim of this article. 
Instead, we assume that a pricing measure under which our model has the required dynamics was previously fixed.} --, whose filtration $\mathbf{F} = \left( \mathcal{F}_{t} \right)_{t \geq 0}$ satisfies the usual conditions, and consider two assets, a deterministic savings account $(B_{t}(r))_{t \geq 0}$ satisfying \begin{equation} B_{t}(r) = e^{r t}, \hspace{1.5em} r \geq 0, \, t \geq 0, \label{market1} \end{equation} and a risky asset $(S_{t})_{t \geq 0}$, whose price dynamics, under $\mathbb{Q}$, are described by the following (ordinary) exponential Lévy model \begin{equation} S_{t} = S_{0}e^{X_{t}}, \hspace{1.5em} S_{0}>0, \, t \geq 0. \label{market2} \end{equation} \noindent Here, the process $(X_{t})_{t \geq 0}$ is an $\mathbf{F}$-Lévy process associated with a triplet $(b_{X}, \sigma_{X}^{2}, \Pi_{X})$, i.e.~a càdlàg (right-continuous with left limits) process having independent and stationary increments and Lévy-exponent $\Psi_{X}(\cdot)$ defined, for $\theta \in \mathbb{R}$, by \begin{equation} \Psi_{X}( \theta ) := - \log \left( \mathbb{E}^{\mathbb{Q}} \left[ e^{i \theta X_{1}} \right] \right) = -ib_{X} \theta + \frac{1}{2} \sigma_{X}^{2}\theta^{2} + \int \limits_{ \mathbb{R}} \big(1 - e^{i \theta y} + i \theta y \mathds{1}_{\{ | y | \leq 1\}} \big) \Pi_{X}( dy), \label{CHARexp} \end{equation} \noindent where $\mathbb{E}^{\mathbb{Q}}[\cdot]$ refers to expectation with respect to the measure $\mathbb{Q}$. Numerous models in the financial literature fall into this framework. 
Important examples include hyper-exponential jump-diffusion (HEJD) models (cf.~\cite{ko02}, \cite{ca09}), Variance Gamma (VG) processes (cf.~\cite{ms90}, \cite{mcc98}), the CGMY model (cf.~\cite{cgmy02}) as well as Generalized Hyperbolic (GH) processes such as the popular Normal Inverse Gaussian (NIG) model (cf.~\cite{bn95}).\vspace{1em} \\ \noindent Applying standard results (cf.~\cite{sa}, \cite{Ap}) allows us to decompose $(X_{t})_{t \geq 0}$ in terms of its diffusion and jump parts as \begin{equation} X_{t} = b_{X}t + \sigma_{X} W_{t} + \int \limits_{\mathbb{R}} y \; \bar{N}_{X}(t,dy), \hspace{1.5em} t \geq 0, \label{DecompX} \end{equation} \noindent where $(W_{t})_{t \geq 0}$ denotes an $\mathbf{F}$-Brownian motion, and $N_{X}$ refers to an independent Poisson random measure on $[0,\infty) \times \mathbb{R} \setminus \{0\}$ that has intensity measure given by $\Pi_{X}$. Here, we use for $t \geq 0$ and any Borel set $A \in \mathcal{B}(\mathbb{R}\setminus\{0 \})$ the following notation: \begin{align*} N_{X}(t,A) &:=N_{X}((0,t]\times A), \\ \tilde{N}_{X}(dt,dy) & := N_{X}(dt,dy) - \Pi_{X}(dy)dt ,\\ \bar{N}_{X}(dt,dy)&:= \left \{ \begin{array}{cc} \tilde{N}_{X}(dt,dy), & \mbox{if} \; |y|\leq 1, \\ N_{X}(dt,dy), & \mbox{if} \; |y| > 1. \end{array} \right. \end{align*} \noindent Additionally, the Laplace exponent of the Lévy process $(X_{t})_{t \geq 0}$ can be defined for any $ \theta \in \mathbb{R}$ satisfying $ \mathbb{E}^{\mathbb{Q}} \left[ e^{\theta X_{1}} \right] < \infty$ and is then recovered from $\Psi_{X}(\cdot)$ via the following identity: \begin{align} \Phi_{X}( \theta ) := -\Psi_{X}(-i\theta) & = b_{X} \theta + \frac{1}{2} \sigma_{X}^{2} \theta^{2} - \int \limits_{ \mathbb{R}} \big(1 - e^{ \theta y} + \theta y \mathds{1}_{\{ | y | \leq 1\}} \big) \Pi_{X}( dy). \end{align} \noindent In the sequel, we always assume that $\Phi_{X}(\cdot)$ is well-defined at least for $\theta = 1$ or, equivalently, that the price process $(S_{t})_{t \geq 0}$ is integrable. 
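\noindent As an illustrative aside (not part of the formal development), the identity $\Phi_{X}(\theta) = -\Psi_{X}(-i\theta)$ can be checked numerically. The following minimal Python sketch does so for a hypothetical Kou-type double-exponential jump-diffusion; the drift, volatility, and jump parameters are made up purely for illustration.

```python
import cmath
import math

# Illustrative sketch (not from the paper): numerically check the identity
# Phi_X(theta) = -Psi_X(-i*theta) for a hypothetical Kou-type double-exponential
# jump-diffusion.  All parameter values are made up for illustration only.
b_X, sigma = 0.1, 0.2                        # arbitrary drift and volatility
lam, p, eta1, eta2 = 3.0, 0.6, 25.0, 20.0    # jump intensity and density parameters

def pi_X(y):
    """Double-exponential (Kou-type) Lévy density."""
    if y > 0:
        return lam * p * eta1 * math.exp(-eta1 * y)
    return lam * (1.0 - p) * eta2 * math.exp(eta2 * y)

def integrate(f, lo=-10.0, hi=10.0, n=40_000):
    """Midpoint rule; the tails of pi_X outside [-10, 10] are negligible."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

def Psi_X(theta):
    """Lévy exponent; also accepts complex theta when the integral converges."""
    jump = integrate(lambda y: (1.0 - cmath.exp(1j * theta * y)
                                + 1j * theta * (y if abs(y) <= 1.0 else 0.0)) * pi_X(y))
    return -1j * b_X * theta + 0.5 * sigma**2 * theta**2 + jump

def Phi_X(theta):
    """Laplace exponent, computed directly from the triplet (b_X, sigma^2, pi_X)."""
    jump = integrate(lambda y: (1.0 - math.exp(theta * y)
                                + theta * (y if abs(y) <= 1.0 else 0.0)) * pi_X(y))
    return b_X * theta + 0.5 * sigma**2 * theta**2 - jump

theta = 1.0
direct = Phi_X(theta)
via_psi = -Psi_X(-1j * theta)
print(abs(direct - via_psi))  # agreement up to quadrature rounding
```

Since both routes integrate the same compensated jump functional on the same grid, the two values agree up to floating-point rounding; the quadrature itself is a rough sketch and not a production scheme.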
Additionally, we assume that the asset $(S_{t})_{t \geq 0}$ pays a proportional dividend with constant rate $\delta \geq 0$. In terms of the asset dynamics, this implies that the discounted cum-dividend price process $(e^{-(r-\delta)t} S_{t})_{t \geq 0}$ is a martingale under $\mathbb{Q}$, which then requires that \begin{equation} \Phi_{X}(1)= r - \delta. \label{MartCond} \end{equation} \noindent In particular, rewriting (\ref{MartCond}) allows us to recover the following expression for $b_{X}$: \begin{equation} b_{X} = r- \delta - \frac{1}{2}\sigma_{X}^{2} + \int \limits_{\mathbb{R}} \big( 1- e^{y} + y \mathds{1}_{\{ | y | \leq 1\}} \big) \Pi_{X}(dy). \label{bXequa} \end{equation} \noindent Such dynamics are typically found when studying foreign exchange markets. In this case, holdings in the foreign currency can earn the foreign risk-free interest rate, which therefore corresponds, for each investment in the foreign currency, to a dividend payment of a certain amount $\delta \geq 0$ (cf.~\cite{jc02}, \cite{gk83}). \vspace{1em} \\ \noindent Finally, it should be noted that $(S_{t})_{t \geq 0}$ has a Markovian structure. 
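\noindent Before moving on, the martingale condition (\ref{MartCond}) and the drift formula (\ref{bXequa}) can be sanity-checked by simulation: once $b_{X}$ is fixed via (\ref{bXequa}), one should find $\mathbb{E}^{\mathbb{Q}}[e^{X_{1}}] = e^{r-\delta}$. The sketch below does this for a hypothetical Kou-type double-exponential jump density with illustrative parameters; the samplers are plain textbook constructions and are not taken from the paper.

```python
import math
import random

# Hedged sanity check (not from the paper): fix the drift b_X via the martingale
# condition and verify by Monte Carlo that E[e^{X_1}] = e^{r - delta}.  The
# Kou-type double-exponential jump density and all parameters are hypothetical.
r, delta, sigma = 0.05, 0.02, 0.2
lam, p, eta1, eta2 = 3.0, 0.6, 25.0, 20.0

def pi_X(y):
    """Double-exponential (Kou-type) Lévy density."""
    if y > 0:
        return lam * p * eta1 * math.exp(-eta1 * y)
    return lam * (1.0 - p) * eta2 * math.exp(eta2 * y)

def integrate(f, lo=-10.0, hi=10.0, n=40_000):
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# b_X = r - delta - sigma^2/2 + int (1 - e^y + y 1_{|y|<=1}) pi_X(y) dy
b_X = r - delta - 0.5 * sigma**2 + integrate(
    lambda y: (1.0 - math.exp(y) + (y if abs(y) <= 1.0 else 0.0)) * pi_X(y))
# Compensator of the small jumps over one unit of time:
small_jump_mean = integrate(lambda y: (y if abs(y) <= 1.0 else 0.0) * pi_X(y))

def sample_poisson(mean):
    """Knuth's multiplication method; adequate for small means such as lam = 3."""
    k, prod, target = 0, random.random(), math.exp(-mean)
    while prod > target:
        k += 1
        prod *= random.random()
    return k

def sample_X1():
    """One draw of X_1 = b_X + sigma W_1 + (sum of jumps) - small-jump compensator."""
    jumps = 0.0
    for _ in range(sample_poisson(lam)):
        if random.random() < p:
            jumps += random.expovariate(eta1)   # upward jump ~ Exp(eta1)
        else:
            jumps -= random.expovariate(eta2)   # downward jump ~ -Exp(eta2)
    return b_X + sigma * random.gauss(0.0, 1.0) + jumps - small_jump_mean

random.seed(7)
n_paths = 200_000
mc_mean = sum(math.exp(sample_X1()) for _ in range(n_paths)) / n_paths
print(round(mc_mean, 4), round(math.exp(r - delta), 4))  # the two should be close
```

The Monte Carlo mean matches $e^{r-\delta}$ to within sampling error, confirming that the discounted cum-dividend price process is (numerically) a martingale under these dynamics.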
Following standard theory of Markov processes, we therefore recall that its infinitesimal generator is a partial integro-differential operator given, for sufficiently smooth $V: [0,\infty) \times \mathbb{R} \rightarrow \mathbb{R}$, by \begin{align} \mathcal{A}_{S} V(\mathcal{T},x) & := \lim \limits_{t \downarrow 0} \, \frac{\mathbb{E}^{\mathbb{Q}}_{x} \big[ V(\mathcal{T},S_{t})\big] - V(\mathcal{T},x)}{t} \nonumber \\ & = \frac{1}{2} \sigma^{2}_{X} x^{2} \partial_{x}^{2} V(\mathcal{T},x) + \Phi_{X}(1) x \partial_{x} V(\mathcal{T},x) \hspace{15em} \nonumber \\ & \hspace{5.5em} + \int \limits_{\mathbb{R}} \big[ V(\mathcal{T},xe^{y}) - V(\mathcal{T},x) - x(e^{y}-1)\partial_{x} V(\mathcal{T},x) \big] \Pi_{X}(dy), \label{AINFA} \end{align} \noindent where $\mathbb{E}_{x}^{\mathbb{Q}}[ \cdot]$ denotes expectation under $\mathbb{Q}_{x}$, the pricing measure having initial distribution $S_{0}=x$. We will make extensive use of these notations in the upcoming sections. \subsection{Characterizing Geometric Step Options} \label{SECCHARA} \noindent As mentioned in the introduction, geometric step options are financial contracts that are written on an underlying asset and that cumulatively and proportionally lose or gain value when the underlying's price stays above or below a certain, predetermined threshold. As such, these contracts are closely linked to the time the asset's price spends above or below a barrier level, so-called occupation times. To fix the notation, we define, for a time $t \geq 0$, the occupation time of asset $(S_{t})_{t \geq 0}$ below (\raisebox{.4\height}{\scalebox{.75}{$-$}}) and above (\raisebox{.4\height}{\scalebox{.75}{$+$}}) a constant barrier level $\ell > 0$ over the time interval $[0,t]$ via \begin{equation} \Gamma_{t,\ell}^{-} := \int \limits_{0}^{t} \mathds{1}_{(0,\ell)}(S_{r}) dr, \hspace{1.5em} \mbox{and} \hspace{1.9em} \Gamma_{t,\ell}^{+} := \int \limits_{0}^{t} \mathds{1}_{(\ell,\infty)}(S_{r}) dr. 
\label{OT1} \end{equation} \noindent In addition, we set, for $\gamma \geq 0$, \begin{equation} \Gamma_{t,\ell}^{\pm}(\gamma) := \gamma + \Gamma_{t,\ell}^{\pm} \label{OT2} \end{equation} \noindent and thereby allow each of the occupation times $\Gamma_{t,\ell}^{-}$ and $\Gamma_{t,\ell}^{+}$ to start from a given initial value $\gamma \geq 0$. This generalization proves useful when valuing geometric step options over their entire lifetime. In this case, $\gamma$ refers to the occupation time the process $(S_{t})_{t \geq 0}$ has spent in the respective region from the establishment of the contract until the valuation date under consideration. \vspace{1em} \\ \noindent As for many other types of options, geometric step options can be found in various styles. Depending on the exercise specification, there exist European-type and American-type geometric step call and put options. Additionally, one can distinguish between ``knock-in'', ``knock-out'' as well as ``up'' and ``down'' features. Therefore, it is possible to construct a total of 32 different geometric step contracts, all of which can be studied in the unifying framework of geometric double barrier step options. A geometric double barrier step option with initial values $S_{0}=x \geq 0$ and $\Gamma_{0,L}^{-}(\gamma_{L}) = \gamma_{L} \geq 0$, $\Gamma_{0,H}^{+}(\gamma_{H}) = \gamma_{H} \geq 0$, strike price $K \geq 0$, barrier levels $0 \leq L \leq H < \infty$, and knock-out/knock-in rates $\rho_{L}, \rho_{H} \in \mathbb{R}$ pays off \begin{equation} e^{\rho_{L} \Gamma_{t,L}^{-}(\gamma_{L}) \, + \, \rho_{H} \Gamma_{t,H}^{+}(\gamma_{H})} \left(S_{t}-K \right)^{+} \hspace{0.7em} \mbox{(for a call)} \hspace{1.7em} \mbox{or} \hspace{2em} e^{\rho_{L} \Gamma_{t,L}^{-}(\gamma_{L}) \, + \, \rho_{H} \Gamma_{t,H}^{+}(\gamma_{H})} \left(K- S_{t}\right)^{+} \hspace{0.7em} \mbox{(for a put)} \end{equation} \noindent at the exercise time $t \geq 0$. 
Here, any of the barrier levels, $\ell \in \{L,H \}$, is said to be of knock-out type whenever $\rho_{\ell} \leq 0$, while the case of $\rho_{\ell} >0$ is referred to as a knock-in feature. \vspace{1em} \\ \noindent Using standard valuation principles, probabilistic representations for the value of any type of geometric double barrier step options are readily obtained. For instance, the value of a European-type geometric double barrier knock-out step call defined on the exponential Lévy market (\ref{market1}), (\ref{market2}), (\ref{MartCond}) and having maturity $\mathcal{T} \geq 0$, initial values $S_{0}=x \geq 0$ and $\Gamma_{0,L}^{-}(\gamma_{L}) = \gamma_{L} \geq 0$, $\Gamma_{0,H}^{+}(\gamma_{H}) = \gamma_{H} \geq 0$, strike price $K \geq 0$, barrier levels $0 \leq L \leq H < \infty$, and knock-out rates $\rho_{L},\rho_H \leq 0$ is obtained as \begin{equation} \hspace{0.5em} \mathcal{DSC}_{E}\big( \mathcal{T},x, \gamma_{L}, \gamma_{H}; r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot)\big) : = \mathbb{E}_{x}^{\mathbb{Q}} \left[ B_{\mathcal{T}}(r)^{-1} \, e^{\rho_{L} \Gamma_{\mathcal{T},L}^{-}(\gamma_{L}) \, + \, \rho_{H} \Gamma_{\mathcal{T},H}^{+}(\gamma_{H})} \,\left(S_{\mathcal{T}} - K \right)^{+} \right], \label{EuroDef} \end{equation} \noindent where we use the Lévy-exponent $\Psi_{X}(\cdot)$ to refer to the dynamics of the Lévy process (\ref{DecompX}) and therefore to further characterize the dynamics of the underlying price process $(S_{t})_{t \geq 0}$ specified in (\ref{market2}). 
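\noindent For intuition, the value (\ref{EuroDef}) can be estimated by plain Monte Carlo. The following minimal sketch (not part of the formal development) does so in the no-jump special case $\Pi_{X} \equiv 0$, so that $b_{X} = r - \delta - \sigma_{X}^{2}/2$, with hypothetical parameters and discretely monitored occupation times; since the knock-out factor satisfies $e^{\rho_{L} \Gamma^{-}_{\mathcal{T},L} + \rho_{H} \Gamma^{+}_{\mathcal{T},H}} \leq 1$ pathwise, the step call is worth no more than the corresponding vanilla call.

```python
import math
import random

# Minimal Monte Carlo sketch (not from the paper) of the European geometric
# double barrier knock-out step call in the no-jump special case Pi_X = 0,
# so that b_X = r - delta - sigma^2/2.  Occupation times are monitored on a
# discrete grid; all parameters are illustrative.
random.seed(0)
r, delta, sigma = 0.05, 0.0, 0.2
S0, K, L, H = 100.0, 100.0, 90.0, 120.0
rho_L, rho_H = -5.0, -5.0                    # knock-out rates (<= 0)
T, n_steps, n_paths = 1.0, 252, 10_000
dt = T / n_steps
drift = (r - delta - 0.5 * sigma**2) * dt
vol = sigma * math.sqrt(dt)

step_payoffs, vanilla_payoffs = [], []
for _ in range(n_paths):
    S, gamma_L, gamma_H = S0, 0.0, 0.0
    for _ in range(n_steps):
        S *= math.exp(drift + vol * random.gauss(0.0, 1.0))
        if S < L:
            gamma_L += dt                    # time spent below L
        elif S > H:
            gamma_H += dt                    # time spent above H
    vanilla = max(S - K, 0.0)
    vanilla_payoffs.append(vanilla)
    step_payoffs.append(math.exp(rho_L * gamma_L + rho_H * gamma_H) * vanilla)

disc = math.exp(-r * T)
step_price = disc * sum(step_payoffs) / n_paths
vanilla_price = disc * sum(vanilla_payoffs) / n_paths
print(round(step_price, 3), round(vanilla_price, 3))  # step price <= vanilla price
```

The discrete monitoring of the occupation times introduces a bias that vanishes as the time grid is refined; the sketch is meant only to make the payoff mechanics of (\ref{EuroDef}) concrete.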
Similarly, the value of a corresponding American-type geometric double barrier knock-out step call can be shown to have the representation \begin{equation} \mathcal{DSC}_{A}\big( \mathcal{T},x, \gamma_{L}, \gamma_{H}; r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot)\big) : = \sup \limits_{ \tau \in \mathfrak{T}_{[0,\mathcal{T}]} } \mathbb{E}_{x}^{\mathbb{Q}} \left[ B_{\tau}(r)^{-1} \, e^{\rho_{L} \Gamma_{\tau,L}^{-}(\gamma_{L}) \, + \, \rho_{H} \Gamma_{\tau,H}^{+}(\gamma_{H})} \,\left(S_{\tau} - K \right)^{+} \right], \label{AmerDef} \end{equation} where $\mathfrak{T}_{[0,\mathcal{T}]}$ denotes the set of stopping times that take values in the interval $[0,\mathcal{T}]$. Here, we note that both values (\ref{EuroDef}) and (\ref{AmerDef}) may be understood, for a given pair of times $(t,T)$ satisfying $0 \leq t \leq T < \infty$, as the time-$t$ value of the respective geometric step contract having maturity $T$, i.e.~we usually have in mind that $\mathcal{T} = T-t$ denotes the remaining time to maturity. \vspace{1em} \\ \noindent At this point, it is important to emphasize that other types of step options exist. Already in his seminal work, Linetsky introduced the class of arithmetic step options as another alternative to barrier options. Compared to standard call and put options, both geometric and arithmetic step options are characterized by an additional adjustment factor. However, while the adjustment factor of geometric step options is given as an exponential function of (possibly one of) the occupation times defined in (\ref{OT2}), arithmetic step contracts are characterized by truncated linear adjustments. This implies in particular that, under comparable knock-out rates, arithmetic step contracts will knock out faster than their geometric counterparts (cf.~\cite{li99}, \cite{dl02}). Clearly, our goal is not to discuss results for all existing types of step options. 
We will therefore mainly focus on geometric double barrier knock-out step calls and leverage the fact that certain symmetry and parity relations hold between different geometric step contracts. Establishing these relations is the content of the next section. \subsection{Symmetry and Parity Relations} \label{SECSYPA} \noindent To allow for a simultaneous treatment of both European-type and American-type geometric step contracts, we start by introducing, for $T>0$, any stopping time $\tau \in \mathfrak{T}_{[0,T]}$, initial values $S_{0}=x \geq 0$ and \mbox{$\Gamma_{0,L}^{-}(\gamma_{L}) = \gamma_{L} \geq 0$,} $\Gamma_{0,H}^{+}(\gamma_{H}) = \gamma_{H} \geq 0$, strike price $K \geq 0$, barrier levels $0 \leq L \leq H < \infty$, and knock-out/knock-in rates $\rho_{L}, \rho_{H} \in \mathbb{R}$, the following quantities: \begin{align} \mathcal{DSC}\big( \tau ,x, \gamma_{L}, \gamma_{H}; r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot)\big) : = \mathbb{E}_{x}^{\mathbb{Q}} \left[ B_{\tau}(r)^{-1} \, e^{\rho_{L} \Gamma_{\tau,L}^{-}(\gamma_{L}) \, + \, \rho_{H} \Gamma_{\tau,H}^{+}(\gamma_{H})} \,\left(S_{\tau} - K \right)^{+} \right], \label{not1}\\ \mathcal{DSP}\big( \tau ,x, \gamma_{L}, \gamma_{H}; r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot)\big) : = \mathbb{E}_{x}^{\mathbb{Q}} \left[ B_{\tau}(r)^{-1} \, e^{\rho_{L} \Gamma_{\tau,L}^{-}(\gamma_{L}) \, + \, \rho_{H} \Gamma_{\tau,H}^{+}(\gamma_{H})} \,\left(K - S_{\tau}\right)^{+} \right]. \label{not2} \end{align} \noindent Using this notation, the following put-call duality result can be derived. A proof is provided in Appendix~A. \begin{Lem}[Duality of Geometric Step Contracts] \label{lem1} Consider an exponential Lévy market, as introduced in (\ref{market1}), (\ref{market2}) and (\ref{MartCond}), with driving process $(X_{t})_{t \geq 0}$ having Lévy exponent given as in (\ref{CHARexp}). 
\noindent Then, under the notation (\ref{not1}) and (\ref{not2}), we have for any $T >0$ and stopping time $\tau \in \mathfrak{T}_{[0,T]}$ that \begin{equation} \mathcal{DSC}\big( \tau ,x, \gamma_{L}, \gamma_{H}; \,r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot) \big) = \mathcal{DSP}\Big(\tau,K,\gamma_{H}, \gamma_{L} ; \delta,r,x,\frac{xK}{H},\frac{xK}{L}, \rho_{H}, \rho_{L},\Psi_{Y}(\cdot)\Big), \label{EqLem1} \end{equation} \noindent where $\Psi_{Y}(\cdot)$ represents the Lévy exponent of another Lévy process $(Y_{t})_{t \geq 0}$ driving an exponential Lévy market with \begin{equation} \Psi_{Y}(\theta) = \Psi_{X}(-(\theta + i)) + \Phi_{X}(1). \label{Ysatisf} \end{equation} \noindent In particular, we obtain that the Lévy exponent $\Psi_{Y}(\cdot)$ is given by \begin{equation} \Psi_{Y}(\theta) = -ib_{Y} \theta + \frac{1}{2} \sigma_{Y}^{2}\theta^{2} + \int \limits_{ \mathbb{R}} \big(1 - e^{i \theta y} + i \theta y \mathds{1}_{\{ | y | \leq 1\}} \big) \Pi_{Y}( dy), \label{Yequation} \end{equation} \noindent where $\left( b_{Y}, \sigma_{Y}^2, \Pi_{Y} \right)$ are obtained as \begin{align} b_{Y} & = \delta - r - \frac{1}{2}\sigma_{Y}^{2} + \int \limits_{\mathbb{R}} \big( 1 - e^{y} + y \mathds{1}_{\{ | y | \leq 1\}} \big) \Pi_{Y}(dy), \\ \sigma_{Y}^{2} & = \sigma_{X}^{2}, \\ \Pi_{Y}(dy) & = e^{-y} \, \Pi_{X}(-dy). \label{inteMeas} \end{align} \end{Lem} $\mbox{ }$ \vspace{0.9em} \\ \noindent \underline{\bf Remark 1.} \vspace{0.3em} \\ \noindent Our results in Lemma~\ref{lem1} are similar to Lemma 1 in \cite{fm06}. However, while these authors consider standard options, our results hold within the whole class of geometric double barrier step contracts. In particular, since geometric double barrier step options reduce to standard options for $\rho_{L} = \rho_{H} = 0$, Lemma \ref{lem1} offers a generalization of the derivations obtained in \cite{fm06}. Additionally, our proof reveals that similar results could be derived for other occupation time derivatives. 
Due to the focus of our article, we nevertheless refrain from discussing further duality results here. \\ $\mbox{}$ \hspace{44.8em} \scalebox{0.75}{$\blacklozenge$} \vspace{1em} \\ \noindent Combining Lemma \ref{lem1} with a few simple transformations allows us to derive duality and symmetry relations for European-type and American-type geometric step options. The results are summarized in the next corollary, whose proof is given in Appendix A. \begin{Cor}[Duality and Symmetry of Geometric Step Contracts] \label{coro1} Consider an exponential Lévy market, as introduced in (\ref{market1}), (\ref{market2}) and (\ref{MartCond}), with driving process $(X_{t})_{t \geq 0}$ having Lévy exponent given as in (\ref{CHARexp}). Then, the following duality and symmetry results hold \begin{align} \mathcal{DSC}_{\bullet}\big( \mathcal{T} ,x, \gamma_{L}, \gamma_{H}; \,r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot) \big) = \mathcal{DSP}_{\bullet} \Big(\mathcal{T},K,\gamma_{H}, \gamma_{L} ; \delta,r,x,\frac{xK}{H},\frac{xK}{L}, \rho_{H}, \rho_{L},\Psi_{Y}(\cdot)\Big), \hspace{0.5em} \label{Toprove1a}\\ \mathcal{DSC}_{\bullet}\big( \mathcal{T} ,x, \gamma_{L}, \gamma_{H}; r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot) \big) = xK \cdot \mathcal{DSP}_{\bullet} \Big( \mathcal{T} ,\frac{1}{x}, \gamma_{H}, \gamma_{L}; \delta,r, \frac{1}{K}, \frac{1}{H}, \frac{1}{L},\rho_{H}, \rho_{L}, \Psi_{Y}(\cdot) \Big), \label{Toprove1b} \end{align} \noindent where the Lévy exponent $\Psi_{Y}(\cdot)$ is defined as in Lemma \ref{lem1} and $\bullet$ refers to the exercise specification of the options, i.e.~$\bullet \in \{E,A \}$. \end{Cor} \subsection{Geometric Step Options and PIDEs} \label{GEOPIDE} \noindent We next turn to the pricing of geometric double barrier step contracts. 
As already mentioned in Section \ref{SECCHARA}, we henceforth focus on geometric double barrier knock-out step call options, i.e.~we take $\rho_{L},\rho_{H} \leq 0$ and leverage the relations obtained in Section~\ref{SECSYPA}. We emphasize, however, that the approach followed in the upcoming sections is general enough to produce similar results for other types of geometric step contracts and that only a few slight adaptations are needed. \vspace{1em} \\ \noindent In order to price both European-type as well as American-type double barrier step (call) options, it is sufficient to focus on corresponding step contracts that are initiated at the valuation date under consideration. This clearly follows since for $\bullet \in \{E,A \}$ and any $ \mathcal{T}, x, \gamma_{L}, \gamma_{H}, r,\delta, K, L, H,\rho_{L}, \rho_{H}$, and $\Psi_{X}(\cdot)$, we have that \begin{align} \mathcal{DSC}_{\bullet}\big( \mathcal{T},x, \gamma_{L}, \gamma_{H}; r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot) \big) = e^{\rho_{L} \gamma_{L} \, + \, \rho_{H} \gamma_{H}} \cdot \mathcal{DSC}_{\bullet}\big( \mathcal{T} ,x, 0, 0; \,r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot) \big). \label{SimpliBUP} \end{align} \noindent Therefore, we assume from now on that an exponential Lévy market, described in terms of its characteristic exponent $\Psi_{X}(\cdot)$, has been pre-specified and concentrate, for $\bullet \in \{E,A \}$, on geometric step contracts of the form \begin{equation} \mathcal{DSC}^{\star}_{\bullet} (\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) := \mathcal{DSC}_{\bullet}( \mathcal{T} ,x, 0, 0; \,r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot)), \label{SimpliBUP2} \end{equation} \noindent with $\bm{\ell}:= (L,H)$ and $\bm{\rho}_{\bm{\ell}} := (\rho_{L},\rho_{H})$. \subsubsection{European-Type Contracts} \noindent We first treat European-type contracts and characterize them by means of partial integro-differential equations (PIDEs). 
This is the content of the next proposition, whose proof is presented in Appendix A. \begin{Prop} \label{prop1} \noindent For any fixed $T >0$, strike $K \geq 0$, barrier levels $0 \leq L \leq H < \infty$, and knock-out rates $ \rho_{L}, \rho_{H} \leq 0$, the value of the European-type geometric double barrier step call, $\mathcal{DSC}_{E}^{\star}(\cdot)$, is continuous on $[0,T] \times [0,\infty)$ and solves the partial integro-differential equation \begin{equation} - \partial_{\mathcal{T}} \mathcal{DSC}^{\star}_{E} (\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) + \mathcal{A}_{S} \mathcal{DSC}^{\star}_{E} (\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) -\bigg( r - \bm{\rho}_{\bm{\ell}} \cdot \bigg( \begin{array}{c} \mathds{1}_{(0,L)}(x) \\ \mathds{1}_{(H,\infty)}(x) \end{array} \bigg) \bigg) \mathcal{DSC}^{\star}_{E} (\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = 0 , \label{GSCEuPIDE1} \end{equation} \noindent on $(0,T] \times [0,\infty)$ with initial condition \begin{equation} \mathcal{DSC}_{E}^{\star}(0,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = (x-K )^{+}, \hspace{1.5em} x \in [0,\infty). \label{GSCEuPIDE2} \end{equation} \end{Prop} \subsubsection{American-Type Contracts} \label{GEOPIDEAmer} \noindent We now discuss American-type contracts. 
First, as in the proof of Proposition \ref{prop1}, we note that American-type double barrier step call options can be re-expressed in the form \begin{equation} \mathcal{DSC}^{\star}_{A}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) = \sup \limits_{ \tau \in \mathfrak{T}_{[0,\mathcal{T}]} } \mathbb{E}_{x}^{\mathbb{Q}} \left[ \left(\bar{S}_{\tau} - K \right)^{+} \right], \label{PROOOOB} \end{equation} \noindent where $(\bar{S}_{t})_{t \geq 0}$ refers to the (strong) Markov process obtained by ``killing''\footnote{The reader is referred, for further details, to the proof of Proposition \ref{prop1}.} the sample path of $(S_{t})_{t \geq 0}$ at the proportional rate $\lambda(x) := r - \bm{\rho}_{\bm{\ell}} \cdot \bigg( \begin{array}{c} \mathds{1}_{(0,L)}(x) \\ \mathds{1}_{(H,\infty)}(x) \end{array} \bigg)$ and whose cemetery state is given, without loss of generality, by $\partial \equiv 0$. Therefore, using the fact that the payoff function $x \mapsto (x-K)^{+}$ is continuous as well as standard optimal stopping arguments (cf.~Corollary 2.9.~and Remark~2.10. in \cite{pe06}), we obtain that the continuation and stopping regions read for a (fixed) valuation horizon $[0,T]$, respectively \begin{align} \mathcal{D}_{c} & = \left \{ (\mathcal{T},x) \in [0,T] \times [0,\infty): \, \mathcal{DSC}^{\star}_{A}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) > (x-K)^{+} \right \} , \label{Cregion} \\ \mathcal{D}_{s} & = \left \{ (\mathcal{T},x) \in [0,T] \times [0,\infty): \, \mathcal{DSC}^{\star}_{A}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) = (x-K)^{+} \right \}, \label{Sregion} \end{align} \noindent and that, for any $\mathcal{T} \in [0,T]$, the first-entry time \begin{equation} \tau_{\mathcal{D}_{s}} := \inf \big \{ 0 \leq t \leq \mathcal{T}: \, (\mathcal{T}-t,\bar{S}_{t}) \in \mathcal{D}_{s} \big \} \label{OTime} \end{equation} \noindent is optimal in (\ref{PROOOOB}). 
This subsequently allows us to make use of standard strong Markovian arguments to derive a characterization of the American-type contract, $\mathcal{DSC}^{\star}_{A}(\cdot)$, in terms of a Cauchy-type problem. This is the content of the next proposition, whose proof is provided in Appendix A. \begin{Prop} \label{prop2} \noindent For any fixed $T>0$, strike $K \geq 0$, barrier levels $0 \leq L \leq H < \infty$, and knock-out rates $ \rho_{L}, \rho_{H} \leq 0$, the value of the American-type geometric double barrier step call, $\mathcal{DSC}_{A}^{\star}(\cdot)$, is continuous on $[0,T] \times [0,\infty)$ and satisfies the following Cauchy-type problem: \begin{equation} - \partial_{\mathcal{T}} \mathcal{DSC}^{\star}_{A} (\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) + \mathcal{A}_{S} \mathcal{DSC}^{\star}_{A} (\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) -\bigg( r - \bm{\rho}_{\bm{\ell}} \cdot \bigg( \begin{array}{c} \mathds{1}_{(0,L)}(x) \\ \mathds{1}_{(H,\infty)}(x) \end{array} \bigg) \bigg) \mathcal{DSC}^{\star}_{A} (\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = 0, \label{GSCAmePIDE1} \end{equation} \noindent for $ (\mathcal{T},x) \in \mathcal{D}_{c}$ with boundary condition \begin{equation} \mathcal{DSC}_{A}^{\star}(\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = (x-K )^{+}, \hspace{1.5em} \mbox{for} \; \, (\mathcal{T},x) \in \mathcal{D}_{s}. \label{GSCAmePIDE2} \end{equation} \end{Prop} \noindent Proposition \ref{prop1} and Proposition \ref{prop2} are of great practical importance since they both provide a characterization of the respective geometric step contracts in terms of a PIDE problem and therefore already allow for a simple treatment of the options $\mathcal{DSC}_{E}^{\star}(\cdot)$ and $\mathcal{DSC}_{A}^{\star}(\cdot)$ by means of standard numerical techniques. However, these results do not offer any additional insights on the early exercise structure of these options. 
Instead, an early exercise decomposition into diffusion and jump contributions can be specified and PIDE characterizations thereof can be derived by analyzing the early exercise premium, $\mathcal{E}_{\mathcal{DSC}}^{\star}(\cdot)$, that is defined, for any $\mathcal{T},x, K, \bm{\ell}$, and $\bm{\rho}_{\bm{\ell}}$, by \begin{equation} \mathcal{E}_{\mathcal{DSC}}^{\star}(\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) := \mathcal{DSC}_{A}^{\star}(\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) - \mathcal{DSC}_{E}^{\star}(\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}). \label{DefNormalEEP} \end{equation} \noindent Deriving these characterizations is the content of the following discussion, where we restrict ourselves to jump distributions that are absolutely continuous with respect to the Lebesgue measure, i.e.~we only consider Lévy processes whose intensity measure takes the form \begin{equation} \Pi_{X}(dy) = \pi_{X}(y) dy \label{JumpAssum} \end{equation} \noindent for a certain jump density $\pi_{X}(\cdot)$. This is to ensure that the upcoming decomposition stays meaningful. However, we emphasize that this assumption could be relaxed and additionally note that it does not constitute a real restriction since (almost) all Lévy processes studied in the financial literature satisfy this property. 
\vspace{1em} \\ \noindent We start our discussion by noting that the stopping region $\mathcal{D}_{s}$ is a closed and left-connected\footnote{We define left-connectedness in terms of the time to maturity and require the following property: $$\forall 0 \leq \mathcal{T}_{1} \leq \mathcal{T}_{2} \leq T, \; x \in [0,\infty) : \,\big( (\mathcal{T}_{2},x) \in \mathcal{D}_{s} \Rightarrow (\mathcal{T}_{1},x) \in \mathcal{D}_{s} \big).$$}~set in $[0,T] \times [0,\infty)$ that additionally has the following decomposition \begin{equation} \mathcal{D}_{s} = \mathcal{D}_{s}^{L} \cup \mathcal{D}_{s}^{H}, \label{SetsDec} \end{equation} where $\mathcal{D}_{s}^{L}$ and $\mathcal{D}_{s}^{H}$ are themselves closed and left-connected sets in $[0,T] \times [0,\infty)$, with $\mathcal{D}_{s}^{L}$ and $\mathcal{D}_{s}^{H}\setminus \{L\}$ being disjoint. This can be seen from the following arguments: First, the closedness of $\mathcal{D}_{s}$ directly follows from the continuity of the function $(\mathcal{T},x) \mapsto \mathcal{DSC}^{\star}_{A}(\mathcal{T},x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}})$ on $[0,T] \times [0,\infty)$ for any $K,\bm{\ell}$, and $\bm{\rho}_{\bm{\ell}} $ (cf.~\cite{pe06}), while the fact that $\mathcal{T} \mapsto \mathcal{DSC}_{A}^{\star}(\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}})$ is, for any $x, K, \bm{\ell}$, and $\bm{\rho}_{\bm{\ell}}$, non-decreasing implies, for $0\leq \mathcal{T}_{1} \leq \mathcal{T}_{2} \leq T$, that we have $(\mathcal{T}_{1},x) \in \mathcal{D}_{s}$ whenever $(\mathcal{T}_{2},x) \in \mathcal{D}_{s}$. This already gives what is often referred to as left-connectedness. Therefore, we only have to prove the disjointness of the sets $\mathcal{D}_{s}^{L}$, $\mathcal{D}_{s}^{H} \setminus \{L\}$ in the decomposition~(\ref{SetsDec}). 
To see this property, we note that, for any $\mathcal{T},x, K$, and $\bm{\ell}$, the following inequality holds \begin{equation} \mathcal{DSC}_{A}^{\star}(\mathcal{T},x; K, \bm{\ell},\bm{\tilde{\rho}}_{\bm{\ell}}) \leq \mathcal{DSC}_{A}^{\star}(\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}), \hspace{1.5em} \mbox{whenever} \; \, \bm{\tilde{\rho}}_{\bm{\ell}} \leq \bm{\rho}_{\bm{\ell}}, \end{equation} \noindent where $ (\tilde{\rho}_{L}, \tilde{\rho}_{H}) = \bm{\tilde{\rho}}_{\bm{\ell}} \leq \bm{\rho}_{\bm{\ell}} = (\rho_{L},\rho_{H})$ refers to the componentwise inequalities $\tilde{\rho}_{L} \leq \rho_{L} $ and $\tilde{\rho}_{H} \leq \rho_{H}$. Since standard options are recovered from geometric double barrier step options by replacing $\bm{\rho}_{\bm{\ell}}$ with $\bm{\rho^{S}}_{\bm{\ell}} := (0,0)$ in (\ref{SimpliBUP2}) and (standard) double barrier knock-out options can be understood as ``limit'' of geometric double barrier step contracts, e.g.~via the sequence $\big(\bm{\rho^{B}}_{n,\bm{\ell}}\big)_{n \in \mathbb{N}} := \big((-n, -n)\big)_{n \in \mathbb{N}}$, we obtain, in particular, that \begin{equation} \mathcal{DBC}_{A}(\mathcal{T},x; K, \bm{\ell}) \leq \mathcal{DSC}_{A}^{\star}(\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) \leq \mathcal{C}_{A}(\mathcal{T},x; K). 
\end{equation} \noindent Here, $\mathcal{C}_{A}(\cdot)$ and $\mathcal{DBC}_{A}(\cdot)$ refer to the (standard) American-type call and the (standard) American-type double barrier knock-out call, obtained by \begin{align} \mathcal{C}_{A}(\mathcal{T},x; K) & := \mathcal{DSC}_{A}^{\star}(\mathcal{T},x; K, \bm{\ell},\bm{\rho^{S}}_{\bm{\ell}}), \\ \mathcal{DBC}_{A}(\mathcal{T},x; K, \bm{\ell}) &:= \sup \limits_{\tau \in \mathfrak{T}_{[0,\mathcal{T}]}} \lim \limits_{n \uparrow \infty} \mathcal{DSC}^{\star}(\tau,x; K, \bm{\ell},\bm{\rho^{B}}_{n,\bm{\ell}}), \end{align} \noindent where $\mathcal{DSC}^{\star}(\tau,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}}) = \mathcal{DSC}\big( \tau ,x, 0, 0; \,r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot)\big)$ denotes the contract version of (\ref{not1}) that is initiated at the valuation date under consideration, i.e.~in the sense of the notation introduced in~(\ref{SimpliBUP2}). Hence, this gives that $\mathcal{D}_{S,s} \subseteq \mathcal{D}_{s} \subseteq \mathcal{D}_{B,s}$, with $\mathcal{D}_{S,s}$ and $\mathcal{D}_{B,s}$ denoting the stopping region of the corresponding (standard) American-type call and (standard) American-type double barrier knock-out call, respectively, i.e. \begin{align} \mathcal{D}_{S,s} & = \left \{ (\mathcal{T},x) \in [0,T] \times [0,\infty): \, \mathcal{C}_{A}(\mathcal{T},x; K) = (x-K)^{+} \right \}, \\ \mathcal{D}_{B,s} = & \big\{ (\mathcal{T},x) \in [0,T] \times [0,\infty): \, \mathcal{DBC}_{A}(\mathcal{T},x; K, \bm{\ell}) = (x-K)^{+} \big \}. 
\end{align} \noindent In particular, $\mathcal{D}_{S,s} \subseteq \mathcal{D}_{s}$ directly implies, for $\delta > 0$, the non-emptiness of the stopping region $\mathcal{D}_{s}$ (cf.~\cite{Ma18}), whereas combining well-known results for (standard) American-type double barrier options with the relation $\mathcal{D}_{s} \subseteq \mathcal{D}_{B,s}$ gives that early exercise of the geometric double barrier knock-out step call can only occur, for a fixed $\mathcal{T} \in [0,T]$, in subregions of the intervals $I_{1} := (K,L]$, whenever $L > K$, and $I_{2} := \big[\mathfrak{b}^{B}(\mathcal{T}),\infty\big)$, where $\mathfrak{b}^{B}(\mathcal{T}) \geq \max(K,L)$ denotes the early exercise up-boundary of the corresponding (standard) American-type double barrier knock-out call. This provides (\ref{SetsDec}). \vspace{1em} \\ \noindent Next, combining the closedness of $\mathcal{D}_{s}$ with its left-connectedness and decomposition (\ref{SetsDec}) leads to the following observations:\footnote{We refer the reader for similar ideas to \cite{fmv19}; see also \cite{lv17} and \cite{cv18}.}~First, any entry into the stopping region that is triggered by the diffusion part of the process $(S_{t})_{t \geq 0}$\footnote{Or, equivalently, by the diffusion part of the underlying Lévy process $(X_{t})_{t \geq 0}$.}~will happen by crossing the boundary $\partial \mathcal{D}_{s}$ of the set $\mathcal{D}_{s}$, where $$ \partial \mathcal{D}_{s} := \Big\{ (\mathcal{T},x) \in \mathcal{D}_{s}: \, \forall \epsilon > 0: B_{\epsilon}\big((\mathcal{T},x)\big) \cap \mathcal{D}_{s} \neq \emptyset \; \land \; B_{\epsilon}\big((\mathcal{T},x)\big) \cap \Big( \big( [0,T] \times [0,\infty) \big) \setminus \mathcal{D}_{s}\Big) \neq \emptyset \Big\} ,$$ \noindent and $B_{\epsilon}\big((\mathcal{T},x)\big)$ denotes the open ball around the point $(\mathcal{T},x)$ with radius $\epsilon > 0$.
\noindent On the other hand, first-passage entries in the stopping region that are triggered by jumps will always occur at an interior point of the set $\mathcal{D}_{s}$, i.e.~within $\mathcal{D}_{s}^{\circ} := \mathcal{D}_{s} \setminus \partial \mathcal{D}_{s}$, whenever the $\mathcal{T}$-section \mbox{$\mathcal{D}_{s,\mathcal{T}} := \{ x \in [0,\infty): \, (\mathcal{T},x) \in \mathcal{D}_{s} \}$} contains, for all $\mathcal{T} \in [0,T]$, only finitely many $x$ with $(\mathcal{T},x) \in \partial \mathcal{D}_{s}$, i.e.~whenever we have for all $\mathcal{T} \in [0,T]$ that \mbox{$\# \left( \partial \mathcal{D}_{s} \cap \big(\{\mathcal{T} \} \times \mathcal{D}_{s,\mathcal{T}} \big) \right) < \infty$}. This is a direct consequence of Assumption (\ref{JumpAssum}), as this assumption implies that, conditional on a jump occurring at time $t$, events of the form $\{ S_{t} = \varphi + S_{t-}\}$ have zero probability for any fixed $\varphi \in \mathbb{R}$. Additionally, in cases where $\# \left( \partial \mathcal{D}_{s} \cap \big( \{\mathcal{T}_{0} \} \times \mathcal{D}_{s,\mathcal{T}_{0}} \big)\right) = \infty$ holds for some $\mathcal{T}_{0} \in [0,T]$, the stopping region suddenly increases in size at the time $\mathcal{T}_{0}$, and any entry in $\partial \mathcal{D}_{s} \cap \big( \{ \mathcal{T}_{0} \} \times \mathcal{D}_{s,\mathcal{T}_{0}} \big)$ is largely due to the drastic change in the shape of the stopping region at this point. In particular, since Lévy processes are quasi left-continuous, i.e.~left-continuous over predictable stopping times, these stopping scenarios can only be due to the diffusion part of the process $(S_{t})_{t \geq 0}$. Consequently, these observations justify the use of the sets $\partial \mathcal{D}_{s}$ and $\mathcal{D}_{s}^{\circ}$ to decompose the stopping region $\mathcal{D}_{s}$ into sub-regions where stopping is purely triggered by diffusion and by jumps, respectively.
This subsequently results in a decomposition of the early exercise premium, $\mathcal{E}_{\mathcal{DSC}}^{\star}(\cdot)$, of the following form: \begin{equation} \mathcal{E}_{\mathcal{DSC}}^{\star}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) = \mathcal{E}_{\mathcal{DSC}}^{0,\star}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) + \mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ). \end{equation} \noindent Here, the premiums $\mathcal{E}_{\mathcal{DSC}}^{0,\star}(\cdot)$ and $\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}(\cdot)$ refer to the early exercise contributions of the diffusion and jump parts, respectively, and are defined in the following way \begin{align} \mathcal{E}_{\mathcal{DSC}}^{0,\star}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & := \mathcal{DSC}^{0,\star}_{A}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) - \mathcal{DSC}^{0,\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ), \\ \mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & := \mathcal{DSC}^{\mathcal{J},\star}_{A}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) - \mathcal{DSC}^{\mathcal{J},\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ), \end{align} \noindent where the European-type functions $\mathcal{DSC}^{0,\star}_{E}(\cdot)$ and $\mathcal{DSC}^{\mathcal{J},\star}_{E}(\cdot)$ are given by \begin{align} \mathcal{DSC}^{0,\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & = \mathbb{E}_{x}^{\mathbb{Q}} \left[ \left(\bar{S}_{\mathcal{T}} - K \right)^{+} \mathds{1}_{\partial \mathcal{D}_{s}}\big((\mathcal{T}-\tau_{\mathcal{D}_{s}}, \bar{S}_{\tau_{\mathcal{D}_{s}}}) \big) \right], \\ \mathcal{DSC}^{\mathcal{J},\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & = \mathbb{E}_{x}^{\mathbb{Q}} \left[ \left(\bar{S}_{\mathcal{T}} - K \right)^{+} \mathds{1}_{ \mathcal{D}_{s}^{\circ}} 
\big((\mathcal{T}-\tau_{\mathcal{D}_{s}}, \bar{S}_{\tau_{\mathcal{D}_{s}}}) \big) \right], \end{align} \noindent and the American-type contributions $\mathcal{DSC}^{0,\star}_{A}(\cdot)$ and $\mathcal{DSC}^{\mathcal{J},\star}_{A}(\cdot)$ are defined, accordingly, as \begin{align} \mathcal{DSC}^{0,\star}_{A}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & = \mathbb{E}_{x}^{\mathbb{Q}} \left[ \left(\bar{S}_{\tau_{\mathcal{D}_{s}}} - K \right)^{+} \mathds{1}_{\partial \mathcal{D}_{s}} \big( (\mathcal{T}-\tau_{\mathcal{D}_{s}}, \bar{S}_{\tau_{\mathcal{D}_{s}}}) \big) \right], \\ \mathcal{DSC}^{\mathcal{J},\star}_{A}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & = \mathbb{E}_{x}^{\mathbb{Q}} \left[ \left(\bar{S}_{\tau_{\mathcal{D}_{s}}} - K \right)^{+} \mathds{1}_{\mathcal{D}_{s}^{\circ}} \big( (\mathcal{T}-\tau_{\mathcal{D}_{s}}, \bar{S}_{\tau_{\mathcal{D}_{s}}}) \big) \right]. \end{align} \noindent Combining these definitions with strong Markovian arguments finally allows us to derive PIDE characterizations of the early exercise contributions $\mathcal{E}_{\mathcal{DSC}}^{0,\star}(\cdot)$ and $\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}(\cdot)$. This is the content of the next proposition, whose proof is presented in Appendix~A. 
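Although the sets $\mathcal{D}_{s}$, $\partial \mathcal{D}_{s}$, and $\mathcal{D}_{s}^{\circ}$ are generally unknown, the occupation-time structure of the contracts themselves is straightforward to simulate. The following Python sketch (parameter values and function names are illustrative and of our own choosing; European-type payoffs under plain Black-Scholes dynamics, so without the early exercise feature discussed above) estimates the vanilla call, the geometric double barrier step call, and the double barrier knock-out call on the same simulated paths. Since $e^{\rho_{L}\,\mathrm{occ}_{L} + \rho_{H}\,\mathrm{occ}_{H}}$ lies pathwise between the knock-out indicator and one, the European-type analogue of the ordering $\mathcal{DBC} \leq \mathcal{DSC}^{\star} \leq \mathcal{C}$ holds exactly for the resulting estimates.

```python
import math
import random


def simulate_gbm_path(x0, r, delta, sigma, T, n_steps, rng):
    """Exact sampling of geometric Brownian motion on a uniform grid under Q."""
    dt = T / n_steps
    drift = (r - delta - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    path = [x0]
    for _ in range(n_steps):
        path.append(path[-1] * math.exp(drift + vol * rng.gauss(0.0, 1.0)))
    return path


def euro_prices(x0=100.0, K=100.0, L=90.0, H=120.0, rho_L=-2.0, rho_H=-1.0,
                r=0.05, delta=0.02, sigma=0.2, T=1.0,
                n_paths=2000, n_steps=200, seed=7):
    """Monte Carlo estimates of the (European-type) vanilla call, geometric
    double barrier step call and double barrier knock-out call, evaluated on
    the same paths so that the pathwise ordering is preserved exactly."""
    rng = random.Random(seed)
    dt = T / n_steps
    disc = math.exp(-r * T)
    ko = step = van = 0.0
    for _ in range(n_paths):
        path = simulate_gbm_path(x0, r, delta, sigma, T, n_steps, rng)
        occ_L = sum(dt for s in path[1:] if s < L)  # grid time spent below L
        occ_H = sum(dt for s in path[1:] if s > H)  # grid time spent above H
        payoff = max(path[-1] - K, 0.0)
        van += disc * payoff
        step += disc * math.exp(rho_L * occ_L + rho_H * occ_H) * payoff
        ko += disc * payoff if occ_L == 0.0 and occ_H == 0.0 else 0.0
    return ko / n_paths, step / n_paths, van / n_paths
```

Note that the occupation times are monitored on the discrete grid only; refining `n_steps` brings them closer to their continuous-time counterparts.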
\begin{Prop} \label{prop3} \noindent For any fixed $T>0$, strike $K \geq 0$, barrier levels $0 \leq L \leq H < \infty$, and knock-out rates $ \rho_{L}, \rho_{H} \leq 0$, the value of the diffusion contribution to the early exercise premium of the geometric double barrier step call, $\mathcal{E}_{\mathcal{DSC}}^{0,\star}(\cdot)$, satisfies the following Cauchy-type problem: \begin{equation} - \partial_{\mathcal{T}} \mathcal{E}_{\mathcal{DSC}}^{0,\star} (\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) + \mathcal{A}_{S} \mathcal{E}_{\mathcal{DSC}}^{0,\star} (\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) -\bigg( r - \bm{\rho}_{\bm{\ell}} \cdot \bigg( \begin{array}{c} \mathds{1}_{(0,L)}(x) \\ \mathds{1}_{(H,\infty)}(x) \end{array} \bigg) \bigg) \mathcal{E}_{\mathcal{DSC}}^{0,\star} (\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = 0 , \label{GSCAmeEEPPIDE1} \end{equation} \noindent for $ (\mathcal{T},x) \in \mathcal{D}_{c}$ with boundary conditions \begin{align} \mathcal{E}_{\mathcal{DSC}}^{0,\star}(\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = (x-K )^{+} - & \mathcal{DSC}^{\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ), \hspace{1.5em} \mbox{for} \; \, (\mathcal{T},x) \in \partial \mathcal{D}_{s}, \label{GSCAmeEEPPIDE1-1} \\ \mathcal{E}_{\mathcal{DSC}}^{0,\star}(\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) & = 0, \hspace{1.5em} \mbox{for} \; \, (\mathcal{T},x) \in \mathcal{D}_{s}^{\circ}. 
\label{GSCAmeEEPPIDE1-2} \end{align} \noindent Similarly, the value of the jump contribution to the early exercise premium of the geometric double barrier step call, $\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}(\cdot)$, solves the following Cauchy-type problem: \begin{equation} - \partial_{\mathcal{T}} \mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star} (\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) + \mathcal{A}_{S} \mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star} (\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) -\bigg( r - \bm{\rho}_{\bm{\ell}} \cdot \bigg( \begin{array}{c} \mathds{1}_{(0,L)}(x) \\ \mathds{1}_{(H,\infty)}(x) \end{array} \bigg) \bigg) \mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star} (\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = 0 , \label{GSCAmeEEPPIDE2} \end{equation} \noindent for $ (\mathcal{T},x) \in \mathcal{D}_{c}$ with boundary conditions \begin{align} \mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}(\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) & = 0, \hspace{1.5em} \mbox{for} \; \, (\mathcal{T},x) \in \partial \mathcal{D}_{s}, \label{GSCAmeEEPPIDE2-1} \\ \mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}(\mathcal{T},x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = (x-K )^{+} - & \mathcal{DSC}^{\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ), \hspace{1.5em} \mbox{for} \; \, (\mathcal{T},x) \in \mathcal{D}_{s}^{\circ}. \label{GSCAmeEEPPIDE2-2} \end{align} \end{Prop} $\mbox{ }$ \vspace{-0.7em} \\ \noindent \underline{\bf Remark 2.} \vspace{0.3em} \\ \noindent Although Proposition \ref{prop3} provides a meaningful characterization of diffusion and jump contributions to the early exercise premium of geometric step options, one may have the impression that these results lack practical applicability. In particular, it seems difficult to make use of these characterizations in practice since the sets~$\mathcal{D}_{s}$, $\partial \mathcal{D}_{s}$, and $\mathcal{D}_{s}^{\circ}$ are usually not known in advance.
However, we will see that Proposition~\ref{prop3} and the upcoming results of Section~\ref{MROIDE} will play a crucial role in Section \ref{SEC3}, where they will allow for a derivation of semi-analytical diffusion and jump contributions to the early exercise premium of geometric down-and-out step call options under hyper-exponential jump-diffusion markets. \\ $\mbox{}$ \hspace{44.8em} \scalebox{0.75}{$\blacklozenge$} \\ \subsection{Maturity-Randomization and OIDEs} \label{MROIDE} \noindent We next deal with maturity-randomized geometric step contracts. To this end, we consider for a function $g: \mathbb{R}^{+} \rightarrow \mathbb{R}$ satisfying \begin{equation} \int \limits_{0}^{\infty} e^{-\vartheta t} |g(t)| \, dt < \infty, \hspace{1.5em} \forall \vartheta >0, \label{GAVGAVEquLCTransform0} \end{equation} \noindent the Laplace-Carson transform $\widehat{g}(\cdot)$ defined via \begin{align} \widehat{g}(\vartheta) & := \int \limits_{0}^{\infty} \vartheta e^{-\vartheta t } \, g(t) \,dt \label{EquLCTransform1} \end{align} \noindent and note that this transform has several desirable properties.\footnote{We refer the interested reader to \cite{kw03}, \cite{ki10}, \cite{lv17}, and \cite{fmv19} for a discussion of some of these properties.}~In particular, applying the Laplace-Carson transform in the context of mathematical finance makes it possible to randomize the maturity of~(certain) financial contracts, i.e.~to switch from objects with deterministic maturity to corresponding objects with stochastic maturity. This last property offers various approaches to the valuation of financial positions and has therefore led to a wide adoption of the Laplace-Carson transform in the option pricing literature, with \cite{ca98} being one of the seminal articles in this context. \vspace{1em} \\ \noindent Once an (analytical or numerical) expression for the Laplace-Carson transform has been obtained, inversion is carried out by means of a numerical inversion algorithm.
One possible choice is the Gaver-Stehfest algorithm, which has the advantage of allowing for an inversion of the transform on the real line and which has been successfully used by several authors in the option pricing literature (cf.~\cite{kw03}, \cite{ki10}, \cite{wz10}, \cite{hm13}, \cite{lv17}, \cite{cv18}, \cite{lv19}). We will also rely on this algorithm, i.e.~we set \begin{equation} g_{N}(t) := \sum \limits_{k=1}^{2N} \zeta_{k,N} \, \widehat{g}\left( \frac{k \log(2)}{t}\right), \hspace{1.5em} N \in \mathbb{N}, \; t > 0, \end{equation} \noindent where the coefficients are given by \begin{equation} \label{zetaEQUA} \zeta_{k,N} := \frac{(-1)^{N+k}}{k} \sum \limits_{j = \lfloor (k+1)/2 \rfloor }^{\min \{k,N \}} \frac{j^{N+1}}{N!} \binom{N}{j} \binom{2j}{j} \binom{j}{k-j}, \hspace{1.5em} N \in \mathbb{N}, \; 1 \leq k \leq 2N, \end{equation} \noindent with $\lfloor a \rfloor := \sup \{z \in \mathbb{Z}: \, z \leq a \}$, and will recover the original function $g(\cdot)$ by means of the following relation \begin{equation} \lim \limits_{N \rightarrow \infty} g_{N}(t) = g(t). \label{CONVer} \end{equation} \noindent More technical details around the Gaver-Stehfest inversion as well as formal proofs of the convergence result~(\ref{CONVer}) for ``sufficiently well-behaved functions'' are provided in \cite{va04}, \cite{aw06}, \cite{ku13}, and references therein.
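As a concrete illustration, the coefficients $\zeta_{k,N}$ and the resulting Gaver-Stehfest approximation $g_{N}(t)$ can be implemented in a few lines. The following Python sketch (function names are of our own choosing) follows the displayed formulas literally; note that the coefficients alternate in sign and grow quickly in magnitude, so large values of $N$ require high-precision arithmetic, whereas small $N$ works in double precision:

```python
import math
from math import comb, factorial


def stehfest_zeta(k, N):
    """Gaver-Stehfest coefficient zeta_{k,N}, implemented literally from the
    displayed formula (alternating signs, rapidly growing magnitudes)."""
    total = 0.0
    for j in range((k + 1) // 2, min(k, N) + 1):
        total += (j ** (N + 1) / factorial(N)) * comb(N, j) \
                 * comb(2 * j, j) * comb(j, k - j)
    return ((-1) ** (N + k) / k) * total


def gaver_stehfest(lc_transform, t, N):
    """Approximate g(t) from its Laplace-Carson transform via g_N(t)."""
    return sum(stehfest_zeta(k, N) * lc_transform(k * math.log(2.0) / t)
               for k in range(1, 2 * N + 1))
```

For instance, inverting the Laplace-Carson transform $\vartheta \mapsto \vartheta/(\vartheta + 1)$ of $g(t) = e^{-t}$ with $N = 6$ recovers $e^{-1}$ at $t = 1$ to several digits.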
\subsubsection{European-Type Contracts} \label{EUROtypeCo} \noindent To start, we focus on maturity-randomized versions of the European-type geometric step option $\mathcal{DSC}_{E}^{\star}(\cdot)$, i.e.~we consider geometric step contracts of the form \begin{equation} \widehat{\mathcal{DSC}_{E}^{\star}}(\vartheta,x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}}) := \mathbb{E}_{x}^{\mathbb{Q}} \left[ \left(\bar{S}_{\mathcal{T}_{\vartheta}} - K \right)^{+} \right], \label{MREuro1} \end{equation} \noindent where $(\bar{S}_{t})_{t \geq 0}$ refers, once again, to the (strong) Markov process obtained by ``killing'' the sample path of $(S_{t})_{t \geq 0}$ at the proportional rate $\lambda(x) := r - \bm{\rho}_{\bm{\ell}} \cdot \bigg( \begin{array}{c} \mathds{1}_{(0,L)}(x) \\ \mathds{1}_{(H,\infty)}(x) \end{array} \bigg)$ and whose cemetery state is given by $\partial \equiv 0$, and $\mathcal{T}_{\vartheta}$ denotes an exponentially distributed random time of intensity $\vartheta > 0$ that is independent of $(S_{t})_{t \geq 0}$. It is not hard to see that (\ref{MREuro1}) rewrites as \begin{equation} \widehat{\mathcal{DSC}_{E}^{\star}}(\vartheta,x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}}) = \mathbb{E}_{x}^{\mathbb{Q}} \left[ \, \mathbb{E}_{x}^{\mathbb{Q}} \left[ \left(\bar{S}_{\mathcal{T}_{\vartheta}} - K \right)^{+} \big| \mathcal{T}_{\vartheta} \right] \, \right] = \int \limits_{0}^{\infty} \vartheta e^{-\vartheta t } \, \mathcal{DSC}_{E}^{\star}(t,x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}}) \,dt, \label{LCTMREuro1} \end{equation} \noindent and therefore that the maturity-randomized versions (\ref{MREuro1}) correspond, for any fixed $x, K, \bm{\ell},$ and $\bm{\rho}_{\bm{\ell}}$, to a strict application of the Laplace-Carson transform to the function $\mathcal{T} \mapsto \mathcal{DSC}_{E}^{\star}(\mathcal{T},x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}})$. Additionally, we note that this transform is well-defined.
Indeed, this was already shown in a slightly different context for standard (European- and American-type) options in \cite{Ma18} and directly follows from these results, for $\bm{\rho}_{\bm{\ell}} \leq 0$ and $\bullet \in \{E,A\}$, by means of the inequality \begin{equation} \mathcal{DSC}_{\bullet}^{\star}(\mathcal{T},x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}}) \leq \mathcal{DSC}_{\bullet}^{\star}(\mathcal{T},x;K,\bm{\ell}, (0,0)) =: \mathcal{C}_{\bullet}(\mathcal{T},x;K). \end{equation} \noindent Consequently, combining these properties with arguments similar to those used in the proof of Proposition~\ref{prop1} allows us to obtain an OIDE characterization of the maturity-randomized European-type contracts~(\ref{MREuro1}). This is the content of the next proposition, whose proof is provided in Appendix A. \begin{Prop} \label{prop4} \noindent For any intensity $\vartheta >0$, strike $K \geq 0$, barrier levels $0 \leq L \leq H < \infty$, and knock-out rates $ \rho_{L}, \rho_{H} \leq 0$, the value of the maturity-randomized European-type geometric double barrier step call, $\widehat{\mathcal{DSC}_{E}^{\star}}(\cdot)$, is continuous on $[0,\infty)$ and solves the ordinary integro-differential equation \begin{equation} \vartheta (x-K)^{+} + \mathcal{A}_{S} \widehat{\mathcal{DSC}^{\star}_{E}} (\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) -\bigg( (r + \vartheta) - \bm{\rho}_{\bm{\ell}} \cdot \bigg( \begin{array}{c} \mathds{1}_{(0,L)}(x) \\ \mathds{1}_{(H,\infty)}(x) \end{array} \bigg) \bigg) \widehat{\mathcal{DSC}^{\star}_{E}} (\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = 0 , \label{MRGSCEuOIDE1} \end{equation} \noindent on $(0,\infty)$ with initial condition \begin{equation} \widehat{\mathcal{DSC}_{E}^{\star}}(\vartheta,0; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = 0.
\label{MRGSCEuOIDE2} \end{equation} \end{Prop} \subsubsection{American-Type Contracts} Lastly, we discuss maturity-randomized versions of the American-type geometric step option $\mathcal{DSC}_{A}^{\star}(\cdot)$, i.e.~we consider the following geometric step contracts \begin{equation} \widehat{\mathcal{DSC}_{A}^{\star}}(\vartheta,x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}}) := \sup \limits_{\tau \in \mathfrak{T}_{[0,\infty)}} \mathbb{E}_{x}^{\mathbb{Q}} \left[ \left(\bar{S}_{\mathcal{T}_{\vartheta} \wedge \tau} - K \right)^{+} \right], \label{MRAmer1} \end{equation} \noindent where we use the notation introduced in Section~\ref{MROIDE}.\ref{EUROtypeCo}. Due to their complex early exercise structure, these maturity-randomized contracts no longer coincide with a strict application of the Laplace-Carson transform to their deterministic counterparts $\mathcal{T} \mapsto \mathcal{DSC}_{A}^{\star}(\mathcal{T},x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}})$. Instead, conditioning on the (independent) exponential random time $\mathcal{T}_{\vartheta}$ only leads to the following expression \begin{equation} \widehat{\mathcal{DSC}_{A}^{\star}}(\vartheta,x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}}) = \sup \limits_{\tau \in \mathfrak{T}_{[0,\infty)}} \mathbb{E}_{x}^{\mathbb{Q}} \left[ \, \mathbb{E}_{x}^{\mathbb{Q}} \left[ \left(\bar{S}_{\mathcal{T}_{\vartheta} \wedge \tau} - K \right)^{+} \big | \mathcal{T}_{\vartheta} \right] \, \right] = \sup \limits_{\tau \in \mathfrak{T}_{[0,\infty)}} \int \limits_{0}^{\infty} \vartheta e^{-\vartheta t } \, \mathcal{DSC}^{\star}(t \wedge \tau,x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}}) \,dt, \label{RandTimeInt} \end{equation} \noindent where $\mathcal{DSC}^{\star}(\tau,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}}) = \mathcal{DSC}\big( \tau ,x, 0, 0; \,r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot) \big) $ denotes, as earlier, for any $T >0$ and stopping time $\tau \in \mathfrak{T}_{[0,T]}$, the contract version of (\ref{not1}) that is initiated at the valuation
date under consideration, i.e.~in the sense of the notation introduced in~(\ref{SimpliBUP2}). Nevertheless, the same arguments as in Section~\ref{MROIDE}.\ref{EUROtypeCo} (cf.~\cite{Ma18}) directly show that the right-hand side in (\ref{RandTimeInt}) is well-defined for $\bm{\rho}_{\bm{\ell}} \leq 0$ and any $\vartheta >0$. Furthermore, OIDE characterizations of the maturity-randomized American-type contract $\widehat{\mathcal{DSC}_{A}^{\star}}(\cdot)$ as well as of the respective early exercise premiums can be derived using strong Markovian arguments. This is the content of the following discussion. \vspace{1em} \\ To start, we recall that the (independent) exponential random time $\mathcal{T}_{\vartheta}$ can be interpreted as the (first) jump time of a corresponding (independent) Poisson process $(N_{t})_{t \geq 0}$ with intensity $\vartheta > 0$ and that this can be used to re-express the optimal stopping problem in a slightly different form. In particular, we can consider, for any $\vartheta > 0 $ and initial value $z = (n,x) \in \mathbb{N}_{0} \times [0,\infty)$, the process $(Z_{t})_{t \geq 0}$ defined on the state domain $\mathcal{D} := \mathbb{N}_{0} \times [0,\infty)$ via $Z_{t} := (n + N_{t} , \bar{S}_{t})$, $\bar{S}_{0} = x$, as well as its stopped version, $(Z_{t}^{\mathcal{S}_{J}})_{t \geq 0}$, defined, for $t \geq 0$, via \begin{equation} Z_{t}^{\mathcal{S}_{J}} := Z_{t \wedge \tau_{\mathcal{S}_{J}}}, \hspace{1.5em} \mbox{with} \hspace{1.7em} \tau_{\mathcal{S}_{J}} := \inf \{t \geq 0 : \, Z_{t} \in \mathcal{S}_{J} \}, \hspace{1.5em} \mbox{and} \hspace{1.7em} \mathcal{S}_{J} := \mathbb{N} \times [0,\infty). 
\label{STOPPEDprocDEF} \end{equation} \noindent Clearly, the process $(Z_{t}^{\mathcal{S}_{J}})_{t \geq 0}$ behaves exactly like the process $(Z_{t})_{t \geq 0}$ for all times $t < \tau_{\mathcal{S}_{J}}$, which implies that most of the properties of $(Z_{t})_{t \geq 0}$ naturally extend to $(Z_{t}^{\mathcal{S}_{J}})_{t \geq 0}$.\footnote{In particular, the process $(Z_{t}^{\mathcal{S}_{J}})_{t \geq 0}$ is again strongly Markovian on the state domain~$\mathcal{D}$.}~Additionally, $\widehat{\mathcal{DSC}_{A}^{\star}}(\cdot)$ can be re-expressed, for $\vartheta, K,\bm{\ell}$ and $\bm{\rho}_{\bm{\ell}}$, in the form \begin{equation} \widehat{\mathcal{DSC}^{\star}_{A}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) = \widehat{V_{A}}\big((0,x)\big), \label{MRAmerIMeq} \end{equation} \noindent where the value function $\widehat{V_{A}}(\cdot)$ has the following representation under the measure $\mathbb{Q}_{z}^{Z}$ having initial distribution $Z_{0} = z$: \begin{align} \widehat{V_{A}}(z) : = \sup \limits_{\tau \in \mathfrak{T}_{[0,\infty)}} \mathbb{E}^{\mathbb{Q}^{Z}}_{z} \big[ G(Z_{\tau}^{\mathcal{S}_{J}}) \big], \hspace{1.5em} G(z) := (x - K)^{+}. \label{NEWproBL} \end{align} \noindent Therefore, using the fact that the payoff function $x \mapsto (x-K)^{+}$ is continuous as well as standard optimal stopping arguments (cf.~Corollary 2.9.~and Remark~2.10. 
in \cite{pe06}), we can infer that the continuation and stopping regions to (the more general) Problem (\ref{NEWproBL}) read, respectively \begin{align} \widehat{\mathcal{D}_{c}^{Gen.}} = \left \{ z \in \mathcal{D}: \, \widehat{V_{A}}(z) > G(z) \right \} , \hspace{1.5em} \mbox{and} \hspace{1.7em} \widehat{\mathcal{D}_{s}^{Gen.}} = \left \{ z \in \mathcal{D}: \, \widehat{V_{A}}(z) = G(z) \right \}, \label{NEWregion} \end{align} \noindent and that the first-entry time \begin{equation} \tau_{\widehat{\mathcal{D}_{s}^{Gen.}}} := \inf \Big \{ t \geq 0: \, Z_{t}^{\mathcal{S}_{J}} \in \widehat{\mathcal{D}_{s}^{Gen.}} \Big \} \label{NeWWOTime} \end{equation} \noindent is optimal in (\ref{NEWproBL}).\footnote{Note that the finiteness of this stopping time directly follows from the finiteness of the first moment of any exponential distribution and the fact that $\mathcal{S}_{J} \subseteq \widehat{\mathcal{D}_{s}^{Gen.}}$.}~This then allows us to make use of standard strong Markovian arguments to derive a characterization of the American-type contract $\widehat{\mathcal{DSC}^{\star}_{A}}(\cdot)$ in terms of a Cauchy-type problem and leads via Relation (\ref{MRAmerIMeq}) and the following continuation and stopping regions \begin{align} \widehat{\mathcal{D}}_{\vartheta,c} & = \left \{ x \in [0,\infty): \, \widehat{\mathcal{DSC}^{\star}_{A}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) > (x-K)^{+} \right \} , \label{MRCregion} \\ \widehat{\mathcal{D}}_{\vartheta,s} & = \left \{ x \in [0,\infty): \, \widehat{\mathcal{DSC}^{\star}_{A}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) = (x-K)^{+} \right \}, \label{MRSregion} \end{align} to the next proposition. A proof is presented in Appendix A. 
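Before turning to the proposition, note that the randomization mechanism itself is easy to illustrate numerically: sampling the exponential maturity $\mathcal{T}_{\vartheta}$ directly and averaging the payoff reproduces the Laplace-Carson integral in Identity (\ref{LCTMREuro1}). The following Python sketch (function names and parameter values are of our own choosing) checks this for a driftless geometric Brownian motion, without killing or early exercise, purely for illustration, since the time-$t$ expectation is then available in closed form:

```python
import math
import random


def norm_cdf(z):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def expected_call(x, K, sigma, t):
    """E[(S_t - K)^+] for S_t = x*exp(-sigma^2 t/2 + sigma W_t)."""
    if t == 0.0:
        return max(x - K, 0.0)
    d1 = (math.log(x / K) + 0.5 * sigma ** 2 * t) / (sigma * math.sqrt(t))
    return x * norm_cdf(d1) - K * norm_cdf(d1 - sigma * math.sqrt(t))


def randomized_maturity_mc(x, K, sigma, theta, n_paths, seed=11):
    """Monte Carlo value of E[(S_{T_theta} - K)^+] with T_theta ~ Exp(theta)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_paths):
        t = rng.expovariate(theta)  # exponential maturity, intensity theta
        s_t = x * math.exp(-0.5 * sigma ** 2 * t
                           + sigma * math.sqrt(t) * rng.gauss(0.0, 1.0))
        acc += max(s_t - K, 0.0)
    return acc / n_paths


def laplace_carson_quadrature(x, K, sigma, theta, t_max=15.0, n=3000):
    """Trapezoidal approximation of int_0^inf theta*e^{-theta*t}*E[(S_t-K)^+] dt."""
    h = t_max / n
    f = [theta * math.exp(-theta * i * h) * expected_call(x, K, sigma, i * h)
         for i in range(n + 1)]
    return h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])
```

Both routines approximate the same quantity, so their outputs agree up to Monte Carlo and quadrature error.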
\begin{Prop} \label{prop5} \noindent For any intensity $\vartheta >0$, strike $K \geq 0$, barrier levels $0 \leq L \leq H < \infty$, and knock-out rates $\rho_{L}, \rho_{H} \leq 0$, the value of the maturity-randomized American-type geometric double barrier step \mbox{call, $\widehat{\mathcal{DSC}_{A}^{\star}}(\cdot)$,} is continuous on $[0,\infty)$ and satisfies the following Cauchy-type problem: \begin{equation} \vartheta (x-K)^{+} + \mathcal{A}_{S} \widehat{\mathcal{DSC}^{\star}_{A}} (\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) -\bigg( (r + \vartheta) - \bm{\rho}_{\bm{\ell}} \cdot \bigg( \begin{array}{c} \mathds{1}_{(0,L)}(x) \\ \mathds{1}_{(H,\infty)}(x) \end{array} \bigg) \bigg) \widehat{\mathcal{DSC}^{\star}_{A}} (\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = 0 , \label{MRGSCAmerOIDE1} \end{equation} \noindent for $x \in \widehat{\mathcal{D}}_{\vartheta,c}$ with boundary condition \begin{equation} \widehat{\mathcal{DSC}_{A}^{\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = (x-K)^{+}, \hspace{1.5em} \mbox{for} \; \, x \in \widehat{\mathcal{D}}_{\vartheta,s}. \label{MRGSCAmerOIDE2} \end{equation} \end{Prop} \noindent To finalize our discussion, we aim to characterize diffusion and jump contributions to the maturity-randomized early exercise premium of geometric double barrier step contracts, which is defined, for $\vartheta, x, K, \bm{\ell},$ and $\bm{\rho}_{\bm{\ell}}$, via \begin{equation} \widehat{\mathcal{E}_{\mathcal{DSC}}^{\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) := \widehat{\mathcal{DSC}_{A}^{\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) - \widehat{\mathcal{DSC}_{E}^{\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}).
\label{EEP_MR1bup} \end{equation} \noindent For simplicity of the exposition, we directly rely on the continuation and stopping regions introduced in (\ref{MRCregion}) and (\ref{MRSregion}) and note that the maturity-randomized American-type option $\widehat{\mathcal{DSC}_{A}^{\star}}(\cdot)$ can be equivalently written as \begin{equation} \widehat{\mathcal{DSC}_{A}^{\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = \mathbb{E}_{x}^{\mathbb{Q}} \left[ \left(\bar{S}_{\mathcal{T}_{\vartheta} \wedge \tau_{\widehat{\mathcal{D}}_{\vartheta,s}}} - K \right)^{+} \right], \end{equation} \noindent since the first-entry time $\tau_{\widehat{\mathcal{D}}_{\vartheta,s}} := \inf \left \{ t \geq 0: \, \bar{S}_{t} \in \widehat{\mathcal{D}}_{\vartheta,s} \right \}$ clearly inherits the optimality of its counterpart~(\ref{NeWWOTime}) in the more general problem~(\ref{NEWproBL}). Then, following the line of argument provided in Section~\ref{GEOPIDE}.\ref{GEOPIDEAmer}, we can make use of the sets $\partial \widehat{\mathcal{D}}_{\vartheta,s}$ and $\widehat{\mathcal{D}}_{\vartheta,s}^{\circ}$ to decompose the stopping region into sub-regions where (early) stopping is purely due to diffusion and jumps, respectively, and subsequently derive a decomposition of the maturity-randomized early exercise premium, $\widehat{\mathcal{E}_{\mathcal{DSC}}^{\star}}(\cdot)$, of the form \begin{equation} \widehat{\mathcal{E}_{\mathcal{DSC}}^{\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = \widehat{\mathcal{E}_{\mathcal{DSC}}^{0,\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) + \widehat{\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}).
\end{equation} \noindent Here, the premiums $\widehat{\mathcal{E}_{\mathcal{DSC}}^{0,\star}}(\cdot)$ and $\widehat{\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}}(\cdot)$ refer to the maturity-randomized early exercise contributions of the diffusion and jump parts, respectively, and are defined via \begin{align} \widehat{\mathcal{E}_{\mathcal{DSC}}^{0,\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & := \widehat{\mathcal{DSC}_{A}^{0,\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) - \widehat{\mathcal{DSC}_{E}^{0,\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ), \\ \widehat{\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & := \widehat{\mathcal{DSC}_{A}^{\mathcal{J},\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) - \widehat{\mathcal{DSC}_{E}^{\mathcal{J},\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ), \end{align} \noindent where the maturity-randomized European-type functions $\widehat{\mathcal{DSC}_{E}^{0,\star}}(\cdot)$ and $\widehat{\mathcal{DSC}_{E}^{\mathcal{J},\star}}(\cdot)$ are given by \begin{align} \widehat{\mathcal{DSC}_{E}^{0,\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & = \mathbb{E}_{x}^{\mathbb{Q}} \left[ \left(\bar{S}_{\mathcal{T}_{\vartheta} } - K \right)^{+} \mathds{1}_{\partial \widehat{\mathcal{D}}_{\vartheta,s}}\big(\bar{S}_{\mathcal{T}_{\vartheta} \wedge \tau_{\widehat{\mathcal{D}}_{\vartheta,s}}} \big) \right], \\ \widehat{\mathcal{DSC}_{E}^{\mathcal{J},\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & = \mathbb{E}_{x}^{\mathbb{Q}} \left[ \left(\bar{S}_{\mathcal{T}_{\vartheta}} - K \right)^{+} \mathds{1}_{\widehat{\mathcal{D}}_{\vartheta,s}^{\circ}}\big(\bar{S}_{\mathcal{T}_{\vartheta} \wedge \tau_{\widehat{\mathcal{D}}_{\vartheta,s}}} \big) \right], \end{align} \noindent and the maturity-randomized American-type contributions $\widehat{\mathcal{DSC}_{A}^{0,\star}}(\cdot)$ and 
$\widehat{\mathcal{DSC}_{A}^{\mathcal{J},\star}}(\cdot)$ are defined accordingly, as \begin{align} \widehat{\mathcal{DSC}_{A}^{0,\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & = \mathbb{E}_{x}^{\mathbb{Q}} \left[ \Big(\bar{S}_{\mathcal{T}_{\vartheta} \wedge \tau_{\widehat{\mathcal{D}}_{\vartheta,s}} } - K \Big)^{+} \mathds{1}_{\partial \widehat{\mathcal{D}}_{\vartheta,s}}\big(\bar{S}_{\mathcal{T}_{\vartheta} \wedge \tau_{\widehat{\mathcal{D}}_{\vartheta,s}}} \big) \right], \\ \widehat{\mathcal{DSC}_{A}^{\mathcal{J},\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & = \mathbb{E}_{x}^{\mathbb{Q}} \left[ \Big(\bar{S}_{\mathcal{T}_{\vartheta} \wedge \tau_{\widehat{\mathcal{D}}_{\vartheta,s}} } - K \Big)^{+} \mathds{1}_{\widehat{\mathcal{D}}_{\vartheta,s}^{\circ}}\big(\bar{S}_{\mathcal{T}_{\vartheta} \wedge \tau_{\widehat{\mathcal{D}}_{\vartheta,s}}} \big) \right]. \end{align} \noindent Combining these definitions with strong Markovian arguments similar to those used in the proofs of the previous propositions and the memorylessness of the exponential distribution finally allows us to derive OIDE characterizations of the early exercise contributions $\widehat{\mathcal{E}_{\mathcal{DSC}}^{0,\star}}(\cdot)$ and $\widehat{\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}}(\cdot)$. This is the content of the next proposition, whose proof is provided in Appendix~A.
\begin{Prop} \label{prop6} \noindent For any intensity $\vartheta > 0$, strike $K \geq 0$, barrier levels $0 \leq L \leq H < \infty$, and knock-out rates $ \rho_{L}, \rho_{H} \leq 0$, the value of the diffusion contribution to the maturity-randomized early exercise premium of the geometric double barrier step call, $\widehat{\mathcal{E}_{\mathcal{DSC}}^{0,\star}}(\cdot)$, satisfies the following Cauchy-type problem: \begin{equation} \mathcal{A}_{S} \widehat{\mathcal{E}_{\mathcal{DSC}}^{0,\star}} (\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) -\bigg( (r + \vartheta)- \bm{\rho}_{\bm{\ell}} \cdot \bigg( \begin{array}{c} \mathds{1}_{(0,L)}(x) \\ \mathds{1}_{(H,\infty)}(x) \end{array} \bigg) \bigg) \widehat{\mathcal{E}_{\mathcal{DSC}}^{0,\star}} (\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = 0 , \label{GSCAmeEEPOIDE1} \end{equation} \noindent for $ x \in \widehat{\mathcal{D}}_{\vartheta,c}$ with boundary conditions \begin{align} \widehat{\mathcal{E}_{\mathcal{DSC}}^{0,\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = (x-K )^{+} - & \widehat{\mathcal{DSC}_{E}^{\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ), \hspace{1.5em} \mbox{for} \; \, x \in \partial \widehat{\mathcal{D}}_{\vartheta,s}, \label{GSCAmeEEPOIDE1-1} \\ \widehat{\mathcal{E}_{\mathcal{DSC}}^{0,\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) & = 0, \hspace{1.5em} \mbox{for} \; \, x \in \widehat{\mathcal{D}}_{\vartheta,s}^{\circ}. 
\label{GSCAmeEEPOIDE1-2} \end{align} \noindent Similarly, the value of the jump contribution to the maturity-randomized early exercise premium of the geometric double barrier step call, $\widehat{\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}}(\cdot)$, solves the following Cauchy-type problem: \begin{equation} \mathcal{A}_{S} \widehat{\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) -\bigg( (r + \vartheta)- \bm{\rho}_{\bm{\ell}} \cdot \bigg( \begin{array}{c} \mathds{1}_{(0,L)}(x) \\ \mathds{1}_{(H,\infty)}(x) \end{array} \bigg) \bigg) \widehat{\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = 0 , \label{GSCAmeEEPOIDE2} \end{equation} \noindent for $ x \in \widehat{\mathcal{D}}_{\vartheta,c}$ with boundary conditions \begin{align} \widehat{\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) & = 0, \hspace{1.5em} \mbox{for} \; \, x \in \partial \widehat{\mathcal{D}}_{\vartheta,s}, \label{GSCAmeEEPOIDE2-1} \\ \widehat{\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}}(\vartheta,x; K, \bm{\ell},\bm{\rho}_{\bm{\ell}}) = (x-K )^{+} - & \widehat{\mathcal{DSC}_{E}^{\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ), \hspace{1.5em} \mbox{for} \; \, x \in \widehat{\mathcal{D}}_{\vartheta,s}^{\circ}. \label{GSCAmeEEPOIDE2-2} \end{align} \end{Prop} $\mbox{ }$ \vspace{-0.7em} \\ \noindent \underline{\bf Remark 3.} \vspace{0.2em} \\ \noindent Although maturity-randomized American-type contracts and maturity-randomized early exercise premiums no longer coincide with a strict application of the Laplace-Carson transform to their deterministic counterparts, they exhibit a very similar structure. This becomes clear when comparing Equations (\ref{RandTimeInt}) and (\ref{EEP_MR1bup}) with Identity (\ref{LCTMREuro1}).
Hence, once (analytical or numerical) results are obtained for these quantities, a very natural pricing algorithm consists in treating them as if they were proper Laplace-Carson transforms and in inverting them via an algorithm such as the Gaver-Stehfest method. This has already been investigated by other authors in a similar context (cf.~\cite{wz10}, \cite{lv17}, \cite{cv18}), where this approach has proven to deliver very good pricing accuracy. We follow this literature and provide in Section \ref{MR_NUMRES} numerical results for geometric down-and-out step call options under hyper-exponential jump-diffusion markets based on this ansatz. This also justifies our slight abuse of notation in the current section, where we intentionally used the same notation for maturity-randomized American-type contracts and maturity-randomized early exercise premiums as for Laplace-Carson transforms. \\ $\mbox{}$ \hspace{44.8em} \scalebox{0.75}{$\blacklozenge$} \\ \section{Geometric Step Options and Hyper-Exponential Jump-Diffusion Markets} \label{SEC3} As an application of the theory developed in Section \ref{SEC2}, we derive semi-analytical pricing results for (regular) geometric down-and-out step call options under hyper-exponential jump-diffusion markets, i.e.~we fix in~(\ref{market1}), (\ref{market2}) hyper-exponential jump-diffusion dynamics $(X_{t})_{t \geq 0}$ and consider geometric step options of the form \begin{equation} \mathcal{DOSC}^{\star}_{\bullet}(\mathcal{T},x; K, L, \rho_{L} ) := \mathcal{DSC}^{\star}_{\bullet} \big(\mathcal{T},x; K, (L,L), (\rho_{L},0) \big), \end{equation} \noindent for $\bullet \in \{E, A \}$, time to maturity $\mathcal{T} \geq 0$, initial value $x \geq 0$, strike $K \geq 0$, lower barrier $0 \leq L \leq K < \infty$ and knock-out rate $\rho_{L} \leq 0$.
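The inversion step described in Remark 3 is straightforward to implement. The following minimal Python sketch (function names and tolerances are ours; the numerical results of Section \ref{MR_NUMRES} rely on a separate Matlab implementation) illustrates the Gaver-Stehfest inversion of a Laplace-Carson transform:

```python
import math

def stehfest_weights(N=12):
    """Stehfest weights V_k, k = 1..N (N even)."""
    M = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, M) + 1):
            s += (j ** M * math.factorial(2 * j)
                  / (math.factorial(M - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1.0) ** (M + k) * s)
    return V

def invert_laplace_carson(F_hat, t, N=12):
    """Approximate f(t) from F_hat(theta) = theta * L[f](theta).

    The classical Stehfest sum inverts the ordinary Laplace transform
    L[f](theta) = F_hat(theta) / theta, which turns the usual weights
    (ln 2 / t) * V_k into V_k / k below."""
    V = stehfest_weights(N)
    a = math.log(2.0) / t
    return sum(V[k - 1] / k * F_hat(k * a) for k in range(1, N + 1))

# sanity checks: f = 1 has Laplace-Carson transform 1, and
# f(t) = exp(-t) has Laplace-Carson transform theta / (theta + 1)
print(invert_laplace_carson(lambda th: 1.0, 1.0))             # ~1.0
print(invert_laplace_carson(lambda th: th / (th + 1.0), 1.0)) # ~0.3679
```

With $N=12$ terms, the inversion of smooth transforms is accurate to several significant digits in double precision; as discussed in Remark 3, the larger errors observed for American-type contracts stem from the maturity-randomization ansatz itself rather than from the inversion.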
\subsection{Generalities on Hyper-Exponential Jump-Diffusion Markets} \noindent We recall that a hyper-exponential jump-diffusion market is a Lévy market consisting of a deterministic savings account $(B_{t}(r))_{t \geq 0}$ (cf.~(\ref{market1})) and a risky asset $(S_{t})_{t \geq 0}$ (cf.~(\ref{market2})) whose driving process $(X_{t})_{t \geq 0}$ combines a Brownian diffusion with hyper-exponentially distributed jumps. In particular, the underlying dynamics $(X_{t})_{t \geq 0}$ have the usual jump-diffusion structure, i.e.~they can be characterized on a filtered probability space $(\Omega, \mathcal{F}, \mathbf{F}, \mathbb{P})$ via \begin{equation} X_{t} = \Big( r -\delta -\lambda \zeta - \frac{1}{2}\sigma_{X}^{2} \Big) t + \sigma_{X} W_{t} + \sum \limits_{i=1}^{N_{t}} J_{i}, \hspace{1.5em} t \geq 0, \label{DYNhypexp} \end{equation} \noindent where $(W_{t})_{t \geq 0}$ denotes an $\mathbf{F}$-Brownian motion and $(N_{t})_{t \geq 0}$ is an $\mathbf{F}$-Poisson process having intensity parameter $\lambda >0$. The constants $\zeta := \mathbb{E}^{\mathbb{Q}} \left[ e^{J_{1}} - 1 \right]$ and $\sigma_{X} > 0$ express the average (percentage) jump size and the volatility of the diffusion part, respectively. Additionally, the jumps $(J_{i})_{i \in \mathbb{N}}$ are assumed to be independent of $(N_{t})_{t \geq 0}$ and to form a sequence of independent and identically distributed random variables following a hyper-exponential distribution, i.e.~their (common) density function $f_{J_{1}}(\cdot)$ is given by \begin{equation} f_{J_{1}}(y) = \sum \limits_{i=1}^{m} p_{i} \xi_{i}e^{-\xi_{i} y} \mathds{1}_{ \{ y \geq 0 \}} + \sum \limits_{j=1}^{n} q_{j} \eta_{j} e^{\eta_{j} y } \mathds{1}_{ \{ y < 0 \} }, \label{hypexpDens} \end{equation} \noindent where $p_{i} >0$ and $\xi_{i} > 1$ for $i \in \{1,\ldots, m\}$ and $q_{j}>0$ and $\eta_{j} >0$ for $j \in \{ 1, \ldots, n \}$. 
Here, the parameters $(p_{i})_{i \in \{1,\ldots,m \}}$ and $(q_{j})_{j \in \{1,\ldots,n \}}$ represent the proportions of jumps attributed to the particular jump types and are therefore assumed to satisfy the condition $\sum \limits_{i=1}^{m} p_{i} + \sum \limits_{j=1}^{n} q_{j} = 1$. For notational simplicity, we require that the intensity parameters $(\xi_{i})_{i \in \{1, \ldots, m \}}$ and $(\eta_{j})_{j \in \{1, \ldots, n \}}$ are ordered in the sense that \begin{equation} \xi_{1} < \xi_{2} < \cdots < \xi_{m} \hspace{2.5em} \mbox{and} \hspace{2.5em} \eta_{1} < \eta_{2} < \cdots < \eta_{n} \end{equation} \noindent and note that this entails no loss of generality. \vspace{1em} \\ \noindent As a special class of Lévy markets, hyper-exponential jump-diffusion markets can be equivalently characterized in terms of their Lévy triplet $\left(b_{X},\sigma_{X}^{2},\Pi_{X} \right)$, where $b_{X}$ and $\Pi_{X}$ are then obtained as \begin{align} b_{X} := \Big(r -\delta -\lambda \zeta - \frac{1}{2}\sigma_{X}^{2}\Big) + \int \limits_{\{ |y|\leq 1 \}} y \, \Pi_{X}(dy) \hspace{2em} \mbox{and} \hspace{2.3em} \Pi_{X}(dy) := \lambda f_{J_{1}}(y) dy . \label{INTENSmeasure} \end{align} \noindent Combining these results with Equation (\ref{CHARexp}), their Lévy exponent, $\Psi_{X}(\cdot)$, is then easily derived as \begin{align} \Psi_{X}(\theta) = -i \Big( r -\delta -\lambda \zeta - \frac{1}{2}\sigma_{X}^{2} \Big) \theta + \frac{1}{2} \sigma_{X}^2 \theta^2 - \lambda \left( \sum \limits_{i=1}^{m} \frac{p_{i} \xi_{i}}{\xi_{i} - i \theta} + \sum \limits_{j=1}^{n} \frac{q_{j} \eta_{j}}{\eta_{j} + i \theta} - 1\right).
\end{align} \noindent Similarly, their Laplace exponent, $\Phi_{X}(\cdot)$, is well-defined for $\theta \in (-\eta_{1}, \xi_{1}) $ and equals \begin{equation} \Phi_{X}(\theta) = \Big( r -\delta -\lambda \zeta - \frac{1}{2}\sigma_{X}^{2} \Big) \theta + \frac{1}{2} \sigma_{X}^2 \theta^2 + \lambda \left( \sum \limits_{i=1}^{m} \frac{p_{i} \xi_{i}}{\xi_{i} - \theta} + \sum \limits_{j=1}^{n} \frac{q_{j} \eta_{j}}{\eta_{j} + \theta} - 1\right). \label{LAP1} \end{equation} \noindent In what follows, we will consider the Laplace exponent as a standalone function on the extended real domain $\Phi_{X}: \mathbb{R} \setminus \{\xi_{1}, \ldots, \xi_{m}, -\eta_{1}, \ldots, -\eta_{n} \} \rightarrow \mathbb{R}$. This quantity will play a central role in the upcoming derivations. In fact, many distributional properties of hyper-exponential jump-diffusion markets (and of their generalizations) are closely linked to the roots of the equation $\Phi_{X}(\theta) = \alpha$, for $\alpha \geq 0$. This was already used in various articles dealing with option pricing and risk management within the class of mixed-exponential jump-diffusion models (cf.~among others \cite{ca09}, \cite{cc09}, \cite{ck11}, \cite{ck12}). In this context, the following (important) lemma was derived in \cite{ca09} under hyper-exponential jump-diffusion models. For a proof, the interested reader is referred to the latter article. \begin{Lem} \label{SimpleCAIlem} Let $\sigma_{X} > 0$ and $\Phi_{X}(\cdot)$ be defined as in (\ref{LAP1}).
Then, for any $\alpha >0$, the equation $\Phi_{X}(\theta) = \alpha$ has $(m+n+2)$ real roots $\beta_{1,\alpha}, \ldots, \beta_{m+1,\alpha}$ and $\gamma_{1,\alpha}, \ldots, \gamma_{n+1,\alpha}$ that satisfy \begin{align} -\infty < \gamma_{n+1,\alpha} < -\eta_{n} < \gamma_{n,\alpha} < -\eta_{n-1} < \cdots < \gamma_{2,\alpha} < -\eta_{1} < \gamma_{1,\alpha} < 0, \\ 0 < \beta_{1,\alpha} < \xi_{1} < \beta_{2,\alpha} < \cdots < \xi_{m-1} < \beta_{m,\alpha} < \xi_{m} < \beta_{m+1,\alpha} < \infty. \hspace{0.5em} \end{align} \end{Lem} $\mbox{ }$ \vspace{-0.7em} \\ \noindent \underline{\bf Remark 4.} \begin{itemize} \setlength \itemsep{-0.1em} \item[i)] At this point, one should note that the roots in Lemma~\ref{SimpleCAIlem} are only known in analytical form in very few cases. Nevertheless, this does not impact the importance and practicability of this result since all roots can in any case be recovered using standard numerical techniques. \item[ii)] Similar characterizations to the one presented in Lemma~\ref{SimpleCAIlem} can be derived under the assumption that $\sigma_{X} =0$ (cf.~\cite{fmv19}) and combining these characterizations with the upcoming derivations of Section~\ref{SECmaRaHYPER} subsequently allows one to derive semi-analytical pricing results under hyper-exponential jump-diffusion markets with $\sigma_{X} = 0$.
However, since the main techniques do not substantially differ from the ones presented in this article, we refrain from discussing this type of results and focus on the more important case where $\sigma_{X} > 0$.\footnote{A discussion of results for $\sigma_{X} = 0$ in a slightly different context is provided in \cite{fmv19}.} \end{itemize} $\mbox{}$ \hspace{44.8em} \scalebox{0.75}{$\blacklozenge$} \\ \subsection{Maturity-Randomization and OIDEs} \label{SECmaRaHYPER} \noindent We now go back to the OIDE characterizations of Proposition~\ref{prop4}, Proposition~\ref{prop5}, and Proposition~\ref{prop6}, and consider the respective problems (\ref{MRGSCEuOIDE1})-(\ref{MRGSCEuOIDE2}), (\ref{MRGSCAmerOIDE1})-(\ref{MRGSCAmerOIDE2}), and (\ref{GSCAmeEEPOIDE1})-(\ref{GSCAmeEEPOIDE2-2}) for (regular) geometric down-and-out step call options under hyper-exponential jump-diffusion markets with $\sigma_{X} > 0$. First, we note that the infinitesimal generator (\ref{AINFA}) simplifies in this case to \begin{equation} \mathcal{A}_{S} V(\mathcal{T},x) = \frac{1}{2} \sigma^{2}_{X} x^{2} \partial_{x}^{2} V(\mathcal{T},x) +(r-\delta-\lambda \zeta) x \partial_{x} V(\mathcal{T},x) + \lambda \int \limits_{\mathbb{R}} \big( V(\mathcal{T},xe^{y}) - V(\mathcal{T},x) \big) f_{J_{1}}(y)dy. 
\label{HyperINFIG} \end{equation} \noindent Together with the properties of the hyper-exponential density $f_{J_{1}}(\cdot)$, this allows us to uniquely solve the problems (\ref{MRGSCEuOIDE1})-(\ref{MRGSCEuOIDE2}), (\ref{MRGSCAmerOIDE1})-(\ref{MRGSCAmerOIDE2}), and (\ref{GSCAmeEEPOIDE1})-(\ref{GSCAmeEEPOIDE2-2}), and to derive closed-form expressions for the (regular) maturity-randomized geometric down-and-out step contracts $\widehat{\mathcal{DOSC}^{\star}_{E}}(\cdot)$, $\widehat{\mathcal{DOSC}^{\star}_{A}}(\cdot)$, and corresponding early exercise premiums $\widehat{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\cdot)$, $\widehat{\mathcal{E}_{\mathcal{DOSC}}^{0,\star}}(\cdot)$, and $\widehat{\mathcal{E}_{\mathcal{DOSC}}^{\mathcal{J},\star}}(\cdot)$. This is discussed next.\vspace{1em} \\ \noindent We start by dealing with the maturity-randomized European-type contract $\widehat{\mathcal{DOSC}^{\star}_{E}}(\cdot)$. Here, upon imposing a natural smooth-fit condition (cf.~among others \cite{ccw10}, \cite{xy13}, \cite{lz16}), the following characterization of the (regular) maturity-randomized European-type geometric down-and-out step call option $\widehat{\mathcal{DOSC}^{\star}_{E}}(\cdot)$ can be obtained. A proof is provided in Appendix B. \begin{Prop} \label{PropEurHEJD} \noindent Consider a hyper-exponential jump-diffusion market as described by (\ref{market1}), (\ref{market2}), and (\ref{DYNhypexp}), (\ref{hypexpDens}). 
Then, for any intensity parameter $\vartheta >0$, the (regular) maturity-randomized European-type geometric down-and-out step call, $\widehat{\mathcal{DOSC}^{\star}_{E}}(\cdot)$, has the following representation \begin{equation} \widehat{\mathcal{DOSC}^{\star}_{E}}(\vartheta,x; K, L, \rho_{L}) = \left \{ \begin{array}{lc} \sum \limits_{s=1}^{m+1} A_{s}^{+} \Big(\frac{x}{L} \Big)^{\beta_{s,(r+\vartheta-\rho_{L})}}, & 0 \leq x < L,\\ \sum \limits_{s=1}^{m+1} B_{s}^{+} \Big(\frac{x}{L} \Big)^{\beta_{s,(r+\vartheta)}} + \sum \limits_{u=1}^{n+1} B_{u}^{-} \Big(\frac{x}{K} \Big)^{\gamma_{u,(r+\vartheta)}}, & L \leq x \leq K, \\ \sum \limits_{u=1}^{n+1} C_{u}^{-} \Big(\frac{x}{K} \Big)^{\gamma_{u,(r+\vartheta)}} + \vartheta \left( \frac{x}{\delta + \vartheta } -\frac{K}{r + \vartheta} \right) , & K < x < \infty , \end{array} \right. \end{equation} \noindent where the vector of coefficients $\mathbf{v} := (A_{1}^{+}, \ldots, A_{m+1}^{+},B_{1}^{+}, \ldots, B_{m+1}^{+}, B_{1}^{-}, \ldots, B_{n+1}^{-}, C_{1}^{-}, \ldots, C_{n+1}^{-})^{\intercal}$ solves the system of equations given in (\ref{AppendixBSysEq}) of Appendix B. \end{Prop} \noindent We next derive (semi-)analytical results for the (regular) maturity-randomized American-type geometric down-and-out step call contract $\widehat{\mathcal{DOSC}^{\star}_{A}}(\cdot)$. Having already obtained a closed-form expression for the European-type option $\widehat{\mathcal{DOSC}^{\star}_{E}}(\cdot)$, we can now focus on the maturity-randomized early exercise pricing problem instead. Indeed, although a direct application of the techniques developed in the proof of Proposition~\ref{PropEurHEJD} to $\widehat{\mathcal{DOSC}^{\star}_{A}}(\cdot)$ is equally feasible, switching to the maturity-randomized early exercise pricing problem substantially reduces the complexity of the resulting equations. 
We therefore follow this approach and decompose the American-type contract $\widehat{\mathcal{DOSC}^{\star}_{A}}(\cdot)$ as the sum of the European-type option $\widehat{\mathcal{DOSC}^{\star}_{E}}(\cdot)$ and the early exercise premium $\widehat{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\cdot)$. Additionally, since we have seen in Section~\ref{SEC2} that the stopping region of a (maturity-randomized) geometric knock-out option is a sub-domain of the stopping region for the respective (maturity-randomized) barrier-type knock-out option, we can follow the ansatz in \cite{xy13} (cf.~\cite{lv17}, \cite{cv18}) and conjecture that the early-exercise region is delimited by a free boundary $\mathfrak{b}_{s} > K$, whose value has to be determined. Combining these observations, we arrive at the next proposition, whose proof is provided in Appendix B. \begin{Prop} \label{PropAmerHEJD} \noindent Consider a hyper-exponential jump-diffusion market as described by (\ref{market1}), (\ref{market2}), and (\ref{DYNhypexp}), (\ref{hypexpDens}).
Then, for any intensity parameter $\vartheta >0$, the (regular) maturity-randomized American-type geometric down-and-out step call option, $\widehat{\mathcal{DOSC}^{\star}_{A}}(\cdot)$, is given by \begin{equation} \widehat{\mathcal{DOSC}^{\star}_{A}}(\vartheta,x; K, L, \rho_{L}) = \widehat{\mathcal{DOSC}^{\star}_{E}}(\vartheta,x; K, L, \rho_{L}) + \widehat{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,x; K, L, \rho_{L}) , \end{equation} \noindent where the maturity-randomized early exercise premium to the (regular) geometric down-and-out step call, $\widehat{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\cdot)$, has the following representation: \begin{equation} \widehat{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,x; K, L, \rho_{L}) = \left \{ \begin{array}{lc} \sum \limits_{s=1}^{m+1} D_{s}^{+} \Big(\frac{x}{L} \Big)^{\beta_{s,(r+\vartheta-\rho_{L})}}, & 0 \leq x < L,\\ \sum \limits_{s=1}^{m+1} F_{s}^{+} \Big(\frac{x}{L} \Big)^{\beta_{s,(r+\vartheta)}} + \sum \limits_{u=1}^{n+1} F_{u}^{-} \Big(\frac{x}{\mathfrak{b}_{s}} \Big)^{\gamma_{u,(r+\vartheta)}}, & L \leq x < \mathfrak{b}_{s}, \\ x-K-\widehat{\mathcal{DOSC}^{\star}_{E}}(\vartheta,x; K, L, \rho_{L}) , & \mathfrak{b}_{s} \leq x < \infty . \end{array} \right. \end{equation} \noindent Here, the vector of coefficients $\mathbf{w} := (D_{1}^{+}, \ldots, D_{m+1}^{+},F_{1}^{+}, \ldots, F_{m+1}^{+}, F_{1}^{-}, \ldots, F_{n+1}^{-})^{\intercal}$ solves the system of equations given in (\ref{AppendixBAmerSysEq}) of Appendix B and the early exercise boundary $\mathfrak{b}_{s}$ is implicitly given by combining (\ref{AppendixBAmerSysEq}) with Equation (\ref{SPuseful}). \end{Prop} \noindent To complete our derivations, we lastly generalize the results obtained in \cite{lv17} to American-type geometric step contracts and provide a jump-diffusion disentanglement of the maturity-randomized early exercise premium to the (regular) geometric down-and-out step call.
Here, combining our results in Proposition~\ref{prop6} with ideas similarly employed in~\cite{lv17}, \cite{cv18}, and \cite{fmv19}, allows us to derive (semi-)analytical expressions for $\widehat{\mathcal{E}_{\mathcal{DOSC}}^{0,\star}}(\cdot)$ and $\widehat{\mathcal{E}_{\mathcal{DOSC}}^{\mathcal{J},\star}}(\cdot)$, the maturity-randomized early exercise contribution of the diffusion and jump parts to the geometric down-and-out step call option. This leads to our final proposition, whose proof is provided in Appendix B. \begin{Prop} \label{PropEepHEJD} \noindent Consider a hyper-exponential jump-diffusion market as described by (\ref{market1}), (\ref{market2}), and (\ref{DYNhypexp}), (\ref{hypexpDens}). Then, for any intensity parameter $\vartheta >0$, the maturity-randomized early exercise premium to the (regular) geometric down-and-out step call, $\widehat{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\cdot)$, has the following decomposition \begin{equation} \widehat{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,x; K, L, \rho_{L}) = \widehat{\mathcal{E}_{\mathcal{DOSC}}^{0,\star}}(\vartheta,x; K, L, \rho_{L}) + \widehat{\mathcal{E}_{\mathcal{DOSC}}^{\mathcal{J},\star}}(\vartheta,x; K, L, \rho_{L}). 
\end{equation} \noindent Here, the premiums $\widehat{\mathcal{E}_{\mathcal{DOSC}}^{0,\star}}(\cdot)$ and $\widehat{\mathcal{E}_{\mathcal{DOSC}}^{\mathcal{J},\star}}(\cdot)$ refer to the maturity-randomized early exercise contributions of the diffusion and jump parts, respectively, and are given by \begin{equation} \widehat{\mathcal{E}_{\mathcal{DOSC}}^{0,\star}}(\vartheta,x; K, L, \rho_{L}) = \left \{ \begin{array}{lc} \sum \limits_{s=1}^{m+1} D_{s}^{0,+} \Big(\frac{x}{L} \Big)^{\beta_{s,(r+\vartheta-\rho_{L})}}, & 0 \leq x < L,\\ \sum \limits_{s=1}^{m+1} F_{s}^{0,+} \Big(\frac{x}{L} \Big)^{\beta_{s,(r+\vartheta)}} + \sum \limits_{u=1}^{n+1} F_{u}^{0,-} \Big(\frac{x}{\mathfrak{b}_{s}} \Big)^{\gamma_{u,(r+\vartheta)}}, & L \leq x < \mathfrak{b}_{s}, \\ x-K-\widehat{\mathcal{DOSC}^{\star}_{E}}(\vartheta,x; K, L, \rho_{L}) , & x = \mathfrak{b}_{s}, \\ 0, & \mathfrak{b}_{s} < x < \infty, \end{array} \right. \label{DiffEEP_MR1} \end{equation} \begin{equation} \widehat{\mathcal{E}_{\mathcal{DOSC}}^{\mathcal{J},\star}}(\vartheta,x; K, L, \rho_{L}) = \left \{ \begin{array}{lc} \sum \limits_{s=1}^{m+1} D_{s}^{\mathcal{J},+} \Big(\frac{x}{L} \Big)^{\beta_{s,(r+\vartheta-\rho_{L})}}, & 0 \leq x < L,\\ \sum \limits_{s=1}^{m+1} F_{s}^{\mathcal{J},+} \Big(\frac{x}{L} \Big)^{\beta_{s,(r+\vartheta)}} + \sum \limits_{u=1}^{n+1} F_{u}^{\mathcal{J},-} \Big(\frac{x}{\mathfrak{b}_{s}} \Big)^{\gamma_{u,(r+\vartheta)}}, & L \leq x < \mathfrak{b}_{s}, \\ 0, & x = \mathfrak{b}_{s}, \\ x-K-\widehat{\mathcal{DOSC}^{\star}_{E}}(\vartheta,x; K, L, \rho_{L}) , & \mathfrak{b}_{s} < x < \infty , \end{array} \right.
\label{JumpEEP_MR1} \end{equation} \noindent where the two vectors of coefficients $\mathbf{w_{0}} := (D_{1}^{0,+}, \ldots, D_{m+1}^{0,+},F_{1}^{0,+}, \ldots, F_{m+1}^{0,+}, F_{1}^{0,-}, \ldots, F_{n+1}^{0,-})^{\intercal}$ and $\mathbf{w_{J}} := (D_{1}^{\mathcal{J},+}, \ldots, D_{m+1}^{\mathcal{J},+},F_{1}^{\mathcal{J},+}, \ldots, F_{m+1}^{\mathcal{J},+}, F_{1}^{\mathcal{J},-}, \ldots, F_{n+1}^{\mathcal{J},-})^{\intercal}$ solve the systems of equations given in (\ref{AppendixBEepSysEq}). \end{Prop} \section{Numerical Results} \label{MR_NUMRES} \noindent To complement the theoretical results of Section \ref{SEC2} and Section \ref{SEC3}, we lastly illustrate structural and numerical properties of (regular) geometric down-and-out step call options under hyper-exponential jump-diffusion markets. For simplicity of exposition and to allow better comparability of our results with the existing literature, we rely on Kou's double-exponential jump-diffusion model (cf.~\cite{ko02}) as a class representative and combine a variety of parameters that were similarly used in the following related articles:~\cite{li99}, \cite{kw04}, \cite{cc09}, \cite{ccw10}, \cite{ck12}, \cite{lz16}, \cite{lv17}, \cite{cv18}, and \cite{dlm19}. All our numerical results are obtained using Matlab R2017b on an Intel CORE i7 processor.
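All of the preceding closed-form representations are parametrized by the roots $\beta_{\cdot,\alpha}$ and $\gamma_{\cdot,\alpha}$ of $\Phi_{X}(\theta) = \alpha$ from Lemma~\ref{SimpleCAIlem}. As noted in Remark~4, these roots are recovered numerically; the interlacing structure of Lemma~\ref{SimpleCAIlem} turns this into a sequence of simple bracketed bisections. A minimal Python sketch of this step (function names are ours, and the pole offset `eps` assumes no root lies within `eps` of a pole; our actual computations use Matlab):

```python
import math

def make_phi(r, delta, sigma, lam, ps, xis, qs, etas):
    """Laplace exponent Phi_X of (LAP1) for a hyper-exponential jump-diffusion."""
    zeta = (sum(p * xi / (xi - 1.0) for p, xi in zip(ps, xis))
            + sum(q * eta / (eta + 1.0) for q, eta in zip(qs, etas)) - 1.0)
    mu = r - delta - lam * zeta - 0.5 * sigma ** 2
    def phi(theta):
        jumps = (sum(p * xi / (xi - theta) for p, xi in zip(ps, xis))
                 + sum(q * eta / (eta + theta) for q, eta in zip(qs, etas)) - 1.0)
        return mu * theta + 0.5 * sigma ** 2 * theta ** 2 + lam * jumps
    return phi

def _bisect(g, lo, hi, it=200):
    glo = g(lo)
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if glo * g(mid) <= 0.0:
            hi = mid
        else:
            lo, glo = mid, g(mid)
    return 0.5 * (lo + hi)

def phi_roots(phi, alpha, xis, etas, eps=1e-9):
    """All m+n+2 real roots of Phi_X(theta) = alpha, one per interval
    delimited by the poles of Phi_X (Lemma on the root structure)."""
    g = lambda th: phi(th) - alpha
    # positive roots: one in (0, xi_1), one between consecutive poles,
    # one beyond xi_m (the quadratic term dominates for large theta)
    edges = [0.0] + list(xis)
    betas = [_bisect(g, edges[i] + (eps if i else 0.0), edges[i + 1] - eps)
             for i in range(len(xis))]
    hi = xis[-1] + 1.0
    while g(hi) < 0.0:
        hi *= 2.0
    betas.append(_bisect(g, xis[-1] + eps, hi))
    # negative roots: mirror construction on the poles -eta_j
    edges = [0.0] + [-e for e in etas]
    gammas = [_bisect(g, edges[i + 1] + eps, edges[i] - (eps if i else 0.0))
              for i in range(len(etas))]
    lo = -etas[-1] - 1.0
    while g(lo) < 0.0:
        lo *= 2.0
    gammas.append(_bisect(g, lo, -etas[-1] - eps))
    return sorted(gammas), sorted(betas)

# double-exponential example (m = n = 1) with p = 0.7, xi = 25, eta = 50
phi = make_phi(0.05, 0.07, 0.2, 1.0, [0.7], [25.0], [0.3], [50.0])
gammas, betas = phi_roots(phi, 0.15, [25.0], [50.0])
print(gammas, betas)
```

In the double-exponential example the call returns the four roots $\gamma_{2,\alpha} < -\eta_{1} < \gamma_{1,\alpha} < 0 < \beta_{1,\alpha} < \xi_{1} < \beta_{2,\alpha}$, each with residual $|\Phi_{X}(\theta) - \alpha|$ at machine-precision level.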
\begin{center} \captionof{table}{Theoretical (down-and-out) call values and diffusion contributions to the early exercise premium for $r=0.05$, $\delta= 0.07$, $S_{0}=100$, $K=100$, $L=95$, $\rho_{L}=-26.34$, $p=0.7$, $\xi=25$ and $\eta = 50$.} \label{table1STEP} \scalebox{0.764}{ \begin{tabular}{lrrrrrrrrrrrrr} \toprule \multicolumn{11}{c}{\bf (Down-and-Out) Call Option Prices} \\ \bottomrule \multicolumn{2}{c}{\it Parameters} & \multicolumn{3}{c}{\it Standard Call Price} & \multicolumn{3}{c}{\it Step Call Price} & \multicolumn{3}{c}{\it Barrier Call Price} \\ \cmidrule(r){1-2} \cmidrule(r){3-5} \cmidrule(r){6-8} \cmidrule(l){9-11} & $\lambda$ & \it Euro & \it Amer & \it DC (\%) & \it Euro & \it Amer & \it DC (\%) & \it Euro & \it Amer & \it DC (\%) \\ \midrule \midrule & $1$ & $6.833$ & $7.040$ & $91.52 \%$ & $4.596$ & $4.789$ & $91.71 \% $ & $3.374$ & $3.551$ & $91.88 \%$ \\ $S_{0} = 100$ & $0.1$ & $6.622$ & $6.822$ & $99.07 \%$ & $4.519$ & $4.706$ & $99.09 \%$ & $3.338$ & $3.514$ & $99.12 \%$ \\ $\sigma_{X}=0.2$ & $0.01$ & $6.600$ & $6.800$ & $99.91 \% $ & $4.511$ & $4.698$ & $99.91 \%$ & $3.334$ & $3.510$ & $99.91 \%$ \\ $\mathcal{T} = 1.0$ & $0.001$ & $6.598$ & $6.797$ & $99.99 \%$ & $4.510$ & $4.697$ & $99.99 \%$ & $3.333$ & $3.509$ & $99.99 \%$ \\ & $0.0001$ & $6.598$ & $6.797$ & $100.00 \%$ & $4.510$ & $4.697$ & $100.00\% $ & $3.333$ & $3.509$ & $100.00 \% $ \\ \bottomrule {\bf B\&S Values} & {\bf --} & {\bf 6.698} & {\bf 6.885} & {\bf --} & {\bf 4.511} & {\bf 4.745} & {\bf --} & {\bf 3.332} & {\bf 3.529} & {\bf --} \\ \mbox{\bf Rel. Error (\%)} & {\bf --} & {\bf 0.001\%} & {\bf -1.277\%} & {\bf --} & {\bf 0.015\%} & {\bf -1.025\%} & {\bf --} & {\bf 0.025\%} & {\bf -0.568\%} & {\bf --} \\ \bottomrule \end{tabular} } \end{center} $\mbox{}$ \vspace{-0.8em} \\ \subsection{Geometric Step Options and Limiting Contracts} \noindent We start our illustrations by investigating the convergence of geometric knock-out step call options to their limiting contracts. 
As already pointed out in Section~\ref{SEC2}, standard and (standard) barrier-type options can be understood as extremities on a continuum of geometric double barrier knock-out step contracts, namely when the knock-out rates are chosen as $\bm{\rho}_{\bm{\ell}} = (0,0)$ and $\bm{\rho}_{\bm{\ell}}=(-\infty, -\infty)$, respectively. Furthermore, since hyper-exponential jump-diffusion markets reduce to the Black \& Scholes market (cf.~\cite{bs73}) when the jump intensity $\lambda $ is zero, our results should be consistent in the limit $\lambda \downarrow 0$ with those obtained e.g.~in \cite{li99} and \cite{dlm19}. We verify these results in Table~\ref{table1STEP}, where we compare the value of (regular) European-type and American-type geometric down-and-out step call options for $\rho_{L}=0$ (``Standard Call Price''), $\rho_{L} = -26.34$ (``Step Call Price''), and $\rho_{L} = -50'000'000$ (``Barrier Call Price'') with the respective Black~\&~Scholes values (``B\&S Values'').\footnote{We compute the value of the American-type contracts under the Black \& Scholes model using the algorithm in \cite{dlm19} as well as Ritchken's trinomial tree method with $5'000$ time steps.}~As in these papers (cf.~also \cite{lz16}), we take $\mathcal{T}=1.0$, $\sigma_{X} = 0.2$, $r=0.05$, $\delta=0.07$, $S_{0}=100$, $K=100$, $L=95$, and $\rho_{L} = -26.34$. Furthermore, we align the parameters of the double-exponential distribution to frequent choices in the literature and fix the probability of an up-jump with $p=0.7$ (cf.~\cite{lv17}) and positive and negative jump parameters with $\xi=25$ and $\eta=50$, respectively (cf.~\cite{kw04},\cite{ck12}, \cite{lv17}, \cite{cv18}). Finally, as in \cite{lz16} the convergence to the Black \& Scholes values is investigated via $\lambda \in \{1, 0.1, 0.01, 0.001, 0.0001 \}$. 
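The monotone interpolation underlying this limiting behaviour can also be checked pathwise, independently of our semi-analytical machinery: since the occupation time below $L$ is nonnegative, the knock-out factor is pointwise decreasing in $|\rho_{L}|$, so step call values necessarily lie between standard and (pseudo) barrier values. The following crude Monte Carlo sketch in Python (illustrative only and not our pricing method; per-step increments of $X$ are drawn exactly under the $\lambda = 1$ parameters of Table~\ref{table1STEP}, while the occupation time is approximated on the simulation grid) reproduces this ordering:

```python
import math, random

# Kou double-exponential parameters from the lambda = 1 row of Table 1
R, DELTA, SIGMA, LAM = 0.05, 0.07, 0.2, 1.0
P, XI, ETA = 0.7, 25.0, 50.0
S0, K, L, T = 100.0, 100.0, 95.0, 1.0

def simulate_terminal(npaths, nsteps=100, seed=1):
    """Sample pairs (S_T, A_T), with A_T the time spent below L; increments
    of X are drawn exactly per step, A_T is approximated on the grid."""
    rng = random.Random(seed)
    dt = T / nsteps
    zeta = P * XI / (XI - 1.0) + (1.0 - P) * ETA / (ETA + 1.0) - 1.0
    drift = (R - DELTA - LAM * zeta - 0.5 * SIGMA ** 2) * dt
    vol, log_L, p0 = SIGMA * math.sqrt(dt), math.log(L), math.exp(-LAM * dt)
    out = []
    for _ in range(npaths):
        x, occ = math.log(S0), 0.0
        for _ in range(nsteps):
            dx = drift + vol * rng.gauss(0.0, 1.0)
            k, prod = -1, 1.0                 # Poisson(LAM * dt) via Knuth
            while prod > p0:
                prod *= rng.random()
                k += 1
            for _ in range(k):
                dx += rng.expovariate(XI) if rng.random() < P else -rng.expovariate(ETA)
            x += dx
            if x < log_L:
                occ += dt
        out.append((math.exp(x), occ))
    return out

def step_call_mc(paths, rho_L):
    """European-type geometric down-and-out step call estimate."""
    disc = math.exp(-R * T)
    return sum(disc * math.exp(rho_L * occ) * max(sT - K, 0.0)
               for sT, occ in paths) / len(paths)

paths = simulate_terminal(2000)
std, step, barr = (step_call_mc(paths, rho) for rho in (0.0, -26.34, -5e7))
print(f"standard={std:.3f}  step={step:.3f}  barrier={barr:.3f}")
```

With $2'000$ paths the estimates are accurate only to a few percent, but the ordering barrier $\leq$ step $\leq$ standard holds path by path, and hence exactly in the sample.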
\vspace{1em} \\ \noindent As expected, the results in Table~\ref{table1STEP} show that standard options, geometric step options, and (standard) barrier-type options under the Black \& Scholes market can be recovered by means of their respective contracts under double-exponential jump-diffusion markets as $\lambda \downarrow 0$. Furthermore, our results confirm the convergence of geometric down-and-out step call options to barrier-type down-and-out call contracts as $\rho_{L} \downarrow -\infty$. This becomes evident when looking at the ``Barrier Call Price'' of Table~\ref{table1STEP} while recalling that the Black \& Scholes value is a true barrier-type value that was obtained using Ritchken's trinomial tree method and that the converging values correspond to those of geometric down-and-out step call options with $\rho_{L} = -50'000'000$. Finally, we note that our results are in line with the observations in \cite{lv17}, where the pricing accuracy of the Gaver-Stehfest inversion algorithm for European-type options was very high\footnote{In this article, the relative pricing errors of the Gaver-Stehfest inversion algorithm for European-type contracts never exceeded $\pm 0.22 \%$.}~and the relative pricing errors of the same inversion method applied to American-type options instead ranged from roughly $\pm 0.33 \%$ to $\pm 1.38 \%$. As explained in Remark~3, this is mainly due to the fact that maturity-randomized American-type contracts as well as maturity-randomized early exercise premiums no longer coincide with a strict application of the Laplace-Carson transform but are nevertheless treated as such.
\begin{center} \captionof{table}{Theoretical (down-and-out) call values and structure of the early exercise premium for $r=0.05$, $\delta= 0.07$, $K=100$, $L=95$, $\rho_{L} = -26.34$, $p=0.5$, $\xi=50$ and $\eta = 25$.} \label{tableSTEP:HEJD1} \scalebox{0.764}{ \begin{tabular}{lrrrrrrrrrrrrr} \toprule \multicolumn{14}{c}{\bf (Down-and-Out) Call Option Prices} \\ \bottomrule \multicolumn{2}{c}{\it Parameters} & \multicolumn{4}{c}{\it Standard Call Price} & \multicolumn{4}{c}{\it Step Call Price} & \multicolumn{4}{c}{\it Barrier Call Price} \\ \cmidrule(r){1-2} \cmidrule(r){3-6} \cmidrule(r){7-10} \cmidrule(l){11-14} & $S_{0}$ & \it Euro & \it EEP & \it EEP (\%) & \it DC (\%) & \it Euro & \it EEP & \it EEP (\%) & \it DC (\%) & \it Euro & \it EEP & \it EEP (\%) & \it DC (\%) \\ \midrule \midrule & $90$ & $3.500$ & $0.062$ & $1.74 \%$ & $94.20 \%$ & $0.268$ & $0.009$ & $3.07 \% $ & $94.32 \%$ & $0$ & $0$ & {\bf --} & {\bf --} \\ (1) & $95$ & $5.241$ & $0.112$ & $2.09 \%$ & $94.27 \%$ & $1.757$ & $0.059$ & $3.23 \%$ & $94.33 \% $ & $0$ & $0$ & {\bf --} & {\bf --} \\ $\sigma_{X}=0.2$ & $100$ & $7.416$ & $0.190$ & $2.50 \%$ & $94.34 \%$ & $4.992$ & $0.178$ & $3.45 \%$ & $94.36 \%$ & $3.686$ & $0.165$ & $4.28 \%$ & $94.37 \%$ \\ $\lambda = 5.0$ & $105$ & $10.011$ & $0.305$ & $2.96 \% $ & $94.40 \% $ & $8.309$ & $0.330$ & $3.82 \%$ & $94.39 \%$ & $7.305$ & $0.353$ & $4.61 \%$ & $94.40 \%$ \\ $\mathcal{T} = 1.0$ & $110$ & $12.992$ & $0.469$ & $3.48 \% $ & $94.46 \%$ & $11.804$ & $0.535$ & $4.34 \%$ & $94.44 \%$ & $11.037$ & $0.597$ & $5.13 \%$ & $94.44 \%$ \\ & $115$ & $16.314$ & $0.691$ & $4.07 \% $ & $94.52 \%$ & $15.492$ & $0.811$ & $4.98 \%$ & $94.50 \% $ & $14.914$ & $0.920$ & $5.81 \%$ & $94.54 \% $ \\ \midrule \midrule & $90$ & $4.098$ & $0.065$ & $1.57 \%$ & $89.68 \%$ & $0.344$ & $0.010$ & $2.79 \% $ & $89.87 \%$ & $0$ & $0$ & {\bf --} & {\bf --} \\ (2) & $95$ & $5.933$ & $0.113$ & $1.87 \%$ & $89.80 \%$ & $2.012$ & $0.061$ & $2.93 \%$ & $89.89 \% $ & $0$ & $0$ & 
{\bf --} & {\bf --} \\ $\sigma_{X}=0.2$ & $100$ & $8.169$ & $0.186$ & $2.22 \%$ & $89.90 \%$ & $5.413$ & $0.175$ & $3.12 \%$ & $89.93 \%$ & $3.990$ & $0.161$ & $3.88 \%$ & $89.95 \%$ \\ $\lambda = 10.0$ & $105$ & $10.791$ & $0.290$ & $2.62 \% $ & $90.00 \% $ & $8.791$ & $0.314$ & $3.44 \%$ & $89.99 \%$ & $7.683$ & $0.334$ & $4.17 \%$ & $89.99 \%$ \\ $\mathcal{T} = 1.0$ & $110$ & $13.767$ & $0.435$ & $3.06 \% $ & $90.08 \%$ & $12.313$ & $0.497$ & $3.88 \%$ & $90.05 \%$ & $11.442$ & $0.552$ & $4.60 \%$ & $90.05 \%$ \\ & $115$ & $17.056$ & $0.628$ & $3.55 \% $ & $90.17 \%$ & $16.004$ & $0.738$ & $4.41 \%$ & $90.12 \% $ & $15.325$ & $0.835$ & $5.16 \%$ & $90.13 \% $ \\ \bottomrule \end{tabular} } \end{center} $\mbox{}$ \vspace{-0.8em} \\ \subsection{Early Exercise Structure of Geometric Step Options with Jumps} \noindent Having verified the convergence of geometric step options to their limiting contracts, we next investigate the early exercise structure of (regular) geometric down-and-out step call options. To this end, we start by computing absolute European-type values (``Euro''), absolute early exercise premiums (``EEP''), relative early exercise contributions\footnote{The relative early exercise contribution is expressed as percentage of the American-type geometric step option price.}~(``EEP\%''), and diffusion contributions to the early exercise premium (``DC\%'') for standard call options (``Standard Call Price''), (regular) geometric down-and-out step call options (``Step Call Price'') and (regular) pseudo barrier-type down-and-out call options (``Barrier Call Price'').\footnote{As earlier, we rely on results for geometric down-and-out step call contracts with $\rho_{L} = -50'000'000$ to derive pseudo barrier-type down-and-out call option values.}~Here, we combine again the parameter choices in \cite{li99} and \cite{dlm19} with frequent jump specifications in the literature. 
More specifically, we choose $\mathcal{T}=1.0$, $\sigma_{X} = 0.2$, $r=0.05$, $\delta = 0.07$, $S_{0} \in \{90,95,100,105,110,115 \}$, $K=100$, $L = 95$, $\rho_{L}=-26.34$ and fix the intensity measure $\Pi_{X}$ in (\ref{INTENSmeasure}) by taking $\lambda \in \{5,10 \}$ (cf.~\cite{lv17}, \cite{cv18}), $p=0.5$ (cf.~\cite{cc09}, \cite{ccw10}, \cite{lz16}, \cite{cv18}), and $(\xi,\eta) \in \{(50,25), (50,50), (25,50), (25,25) \}$ (cf.~\cite{kw04}, \cite{ck12}, \cite{lv17},\cite{cv18}). The results are presented in Tables~\ref{tableSTEP:HEJD1}-\ref{tableSTEP:HEJD4}. \begin{center} \captionof{table}{Theoretical (down-and-out) call values and structure of the early exercise premium for $r=0.05$, $\delta= 0.07$, $K=100$, $L=95$, $\rho_{L}=-26.34$, $p=0.5$, $\xi=50$ and $\eta = 50$.} \label{tableSTEP:HEJD2} \scalebox{0.764}{ \begin{tabular}{lrrrrrrrrrrrrr} \toprule \multicolumn{14}{c}{\bf (Down-and-Out) Call Option Prices} \\ \bottomrule \multicolumn{2}{c}{\it Parameters} & \multicolumn{4}{c}{\it Standard Call Price} & \multicolumn{4}{c}{\it Step Call Price} & \multicolumn{4}{c}{\it Barrier Call Price} \\ \cmidrule(r){1-2} \cmidrule(r){3-6} \cmidrule(r){7-10} \cmidrule(l){11-14} & $S_{0}$ & \it Euro & \it EEP & \it EEP (\%) & \it DC (\%) & \it Euro & \it EEP & \it EEP (\%) & \it DC (\%) & \it Euro & \it EEP & \it EEP (\%) & \it DC (\%) \\ \midrule \midrule & $90$ & $3.163$ & $0.064$ & $1.98 \%$ & $93.97 \%$ & $0.232$ & $0.008$ & $3.46 \% $ & $94.10 \%$ & $0$ & $0$ & {\bf --} & {\bf --} \\ (1) & $95$ & $4.835$ & $0.117$ & $2.37 \%$ & $94.05 \%$ & $1.588$ & $0.060$ & $3.63 \%$ & $94.12 \% $ & $0$ & $0$ & {\bf --} & {\bf --} \\ $\sigma_{X}=0.2$ & $100$ & $6.958$ & $0.202$ & $2.82 \%$ & $94.12 \%$ & $4.679$ & $0.188$ & $3.87 \%$ & $94.14 \%$ & $3.432$ & $0.174$ & $4.82\%$ & $94.15 \%$ \\ $\lambda = 5.0$ & $105$ & $9.523$ & $0.328$ & $3.33 \% $ & $94.19 \% $ & $7.949$ & $0.355$ & $4.28 \%$ & $94.18 \%$ & $6.983$ & $0.382$ & $5.18 \%$ & $94.18 \%$ \\ $\mathcal{T} = 1.0$ & $110$
& $12.498$ & $0.509$ & $3.91 \% $ & $94.25 \%$ & $11.430$ & $0.583$ & $4.85 \%$ & $94.23 \%$ & $10.702$ & $0.654$ & $5.76 \%$ & $94.24 \%$ \\ & $115$ & $15.835$ & $0.758$ & $4.57 \% $ & $94.33 \%$ & $15.122$ & $0.891$ & $5.57 \%$ & $94.31 \% $ & $14.586$ & $1.017$ & $6.52 \%$ & $94.43 \% $ \\ \midrule \midrule & $90$ & $3.441$ & $0.068$ & $1.94 \%$ & $88.95 \%$ & $0.268$ & $0.009$ & $3.40 \% $ & $89.16 \%$ & $0$ & $0$ & {\bf --} & {\bf --} \\ (2) & $95$ & $5.155$ & $0.121$ & $2.30 \%$ & $89.08 \%$ & $1.685$ & $0.062$ & $3.55 \%$ & $89.19 \% $ & $0$ & $0$ & {\bf --} & {\bf --} \\ $\sigma_{X}=0.2$ & $100$ & $7.303$ & $0.204$ & $2.72 \%$ & $89.20 \%$ & $4.836$ & $0.190$ & $3.77 \%$ & $89.23 \%$ & $3.522$ & $0.174$ & $4.71 \%$ & $89.25 \%$ \\ $\lambda = 10.0$ & $105$ & $9.875$ & $0.325$ & $3.19 \% $ & $89.30 \% $ & $8.138$ & $0.352$ & $4.14 \%$ & $89.29 \%$ & $7.107$ & $0.377$ & $5.04 \%$ & $89.29 \%$ \\ $\mathcal{T} = 1.0$ & $110$ & $12.839$ & $0.497$ & $3.72 \% $ & $89.40 \%$ & $11.636$ & $0.569$ & $4.66 \%$ & $89.36 \%$ & $10.845$ & $0.638$ & $5.56 \%$ & $89.37 \%$ \\ & $115$ & $16.152$ & $0.729$ & $4.32 \% $ & $89.53 \%$ & $15.330$ & $0.859$ & $5.31 \%$ & $89.46 \% $ & $14.737$ & $0.981$ & $6.24 \%$ & $89.57 \% $ \\ \bottomrule \end{tabular} } \end{center} $\mbox{}$ \vspace{-0.8em} \\ \begin{center} \captionof{table}{Theoretical (down-and-out) call values and structure of the early exercise premium for $r=0.05$, $\delta= 0.07$, $K=100$, $L=95$, $\rho_{L}= -26.34$, $p=0.5$, $\xi=25$ and $\eta = 50$.} \label{tableSTEP:HEJD3} \scalebox{0.764}{ \begin{tabular}{lrrrrrrrrrrrrr} \toprule \multicolumn{14}{c}{\bf (Down-and-Out) Call Option Prices} \\ \bottomrule \multicolumn{2}{c}{\it Parameters} & \multicolumn{4}{c}{\it Standard Call Price} & \multicolumn{4}{c}{\it Step Call Price} & \multicolumn{4}{c}{\it Barrier Call Price} \\ \cmidrule(r){1-2} \cmidrule(r){3-6} \cmidrule(r){7-10} \cmidrule(l){11-14} & $S_{0}$ & \it Euro & \it EEP & \it EEP (\%) & \it DC (\%) & \it Euro 
& \it EEP & \it EEP (\%) & \it DC (\%) & \it Euro & \it EEP & \it EEP (\%) & \it DC (\%) \\ \midrule \midrule & $90$ & $3.645$ & $0.080$ & $2.15 \%$ & $75.53 \%$ & $0.294$ & $0.012$ & $3.75 \% $ & $76.36 \%$ & $0$ & $0$ & {\bf --} & {\bf --} \\ (1) & $95$ & $5.362$ & $0.137$ & $2.49 \%$ & $75.97 \%$ & $1.685$ & $0.067$ & $3.82 \%$ & $76.45 \% $ & $0$ & $0$ & {\bf --} & {\bf --} \\ $\sigma_{X}=0.2$ & $100$ & $7.501$ & $0.222$ & $2.88 \%$ & $76.37 \%$ & $4.854$ & $0.202$ & $3.99 \%$ & $76.61 \%$ & $3.506$ & $0.182$ & $4.94\%$ & $76.81 \%$ \\ $\lambda = 5.0$ & $105$ & $10.054$ & $0.345$ & $3.31 \% $ & $76.71 \% $ & $8.177$ & $0.368$ & $4.31 \%$ & $76.94 \%$ & $7.110$ & $0.391$ & $5.21 \%$ & $77.25 \%$ \\ $\mathcal{T} = 1.0$ & $110$ & $12.994$ & $0.514$ & $3.80 \% $ & $76.98 \%$ & $11.685$ & $0.585$ & $4.77 \%$ & $77.55 \%$ & $10.861$ & $0.652$ & $5.67 \%$ & $78.24 \%$ \\ & $115$ & $16.279$ & $0.740$ & $4.35 \% $ & $77.14 \%$ & $15.381$ & $0.870$ & $5.35 \%$ & $78.79 \% $ & $14.759$ & $0.989$ & $6.28 \%$ & $80.55 \% $ \\ \midrule \midrule & $90$ & $4.347$ & $0.096$ & $2.16 \%$ & $62.58 \%$ & $0.391$ & $0.015$ & $3.78 \% $ & $63.44 \%$ & $0$ & $0$ & {\bf --} & {\bf --} \\ (2) & $95$ & $6.141$ & $0.155$ & $2.45 \%$ & $63.00 \%$ & $1.865$ & $0.074$ & $3.82 \%$ & $63.47 \% $ & $0$ & $0$ & {\bf --} & {\bf --} \\ $\sigma_{X}=0.2$ & $100$ & $8.321$ & $0.238$ & $2.78 \%$ & $63.38 \%$ & $5.152$ & $0.212$ & $3.94 \%$ & $63.56 \%$ & $3.649$ & $0.188$ & $4.89\%$ & $63.68 \%$ \\ $\lambda = 5.0$ & $105$ & $10.878$ & $0.354$ & $3.15 \% $ & $63.72 \% $ & $8.549$ & $0.374$ & $4.19 \%$ & $63.76 \%$ & $7.328$ & $0.393$ & $5.09 \%$ & $63.90 \%$ \\ $\mathcal{T} = 1.0$ & $110$ & $13.788$ & $0.508$ & $3.55 \% $ & $64.01 \%$ & $12.099$ & $0.577$ & $4.55 \%$ & $64.10 \%$ & $11.126$ & $0.640$ & $5.44 \%$ & $64.46 \%$ \\ & $115$ & $17.019$ & $0.709$ & $4.00 \% $ & $64.18 \%$ & $15.809$ & $0.834$ & $5.01 \%$ & $64.74 \% $ & $15.047$ & $0.946$ & $5.91 \%$ & $65.93 \% $ \\ \bottomrule \end{tabular} 
} \end{center} $\mbox{}$ \vspace{1em} \\ \noindent The results in Tables~\ref{tableSTEP:HEJD1}-\ref{tableSTEP:HEJD4} show that the early exercise premium comprises a substantial part of the price of American-type geometric step contracts even if the option is out of the money. Additionally, they suggest that the absolute early exercise premium is, for any rate $\rho_{L}$, increasing in the underlying price $S_{0}$ and that the relative early exercise contribution tends to increase with more severe (i.e.~more negative) knock-out rates. This is intuitively clear, since increasing the magnitude of the knock-out rate widens the early exercise domain of the American-type geometric step option and therefore further incentivizes early stopping. This subsequently raises the importance of the early exercise premium in the American-type geometric step option value and consequently increases its relative contribution. Next, we note that the diffusion contribution to the early exercise premium is a non-decreasing function of the underlying price $S_{0}$ and that this similarly seems to hold for the relative early exercise contribution. However, this last suggestion does not hold in general, as can be seen in Figure~\ref{HEJD:FIG1:sub11}, where we have plotted the relative early exercise contribution of the geometric down-and-out step call as a function of the underlying price $S_{0} \in [85,115]$ and the knock-out rate $\rho_{L} \in [-1000,0]$ using the following standard parameters: $\mathcal{T} = 1.0$, $\sigma_{X} =0.2$, $r=0.05$, $\delta =0.07$, $K=100$, $L =95$, $\lambda = 5$, $p=0.5$, $\xi = 25$, $\eta = 50$. As it turns out, the general behavior of the relative early exercise contribution depends on the location of the spot price relative to the barrier level $L$.
In particular, while the relative early exercise contribution is increasing in the underlying price $S_{0}$ above the barrier $L=95$, it may be decreasing below the barrier for severe (i.e.~large negative) knock-out rates $\rho_{L}$. Nevertheless, we note that the results in Figure~\ref{HEJD:FIG1} also confirm many of the properties already discussed. In particular, the monotonicity of the relative early exercise premium as a function of the knock-out rate is clearly documented here. Additionally, Figure~\ref{HEJD:FIG1:sub12} provides further evidence for the monotonicity of the diffusion contribution to the early exercise premium as a function of the underlying price $S_{0}$, while Figure~\ref{HEJD:FIG1:sub13} confirms the monotonicity of the absolute early exercise premium as a function of the underlying price $S_{0}$. \begin{center} \captionof{table}{Theoretical (down-and-out) call values and structure of the early exercise premium for $r=0.05$, $\delta= 0.07$, $K=100$, $L=95$, $\rho_{L}=-26.34$, $p=0.5$, $\xi=25$ and $\eta = 25$.} \label{tableSTEP:HEJD4} \scalebox{0.764}{ \begin{tabular}{lrrrrrrrrrrrrr} \toprule \multicolumn{14}{c}{\bf (Down-and-Out) Call Option Prices} \\ \bottomrule \multicolumn{2}{c}{\it Parameters} & \multicolumn{4}{c}{\it Standard Call Price} & \multicolumn{4}{c}{\it Step Call Price} & \multicolumn{4}{c}{\it Barrier Call Price} \\ \cmidrule(r){1-2} \cmidrule(r){3-6} \cmidrule(r){7-10} \cmidrule(l){11-14} & $S_{0}$ & \it Euro & \it EEP & \it EEP (\%) & \it DC (\%) & \it Euro & \it EEP & \it EEP (\%) & \it DC (\%) & \it Euro & \it EEP & \it EEP (\%) & \it DC (\%) \\ \midrule \midrule & $90$ & $3.966$ & $0.077$ & $1.91 \%$ & $76.61 \%$ & $0.330$ & $0.012$ & $3.37 \% $ & $77.40 \%$ & $0$ & $0$ & {\bf --} & {\bf --} \\ (1) & $95$ & $5.745$ & $0.131$ & $2.23 \%$ & $77.04 \%$ & $1.845$ & $0.066$ & $3.44 \%$ & $77.48 \% $ & $0$ & $0$ & {\bf --} & {\bf --} \\ $\sigma_{X}=0.2$ & $100$ & $7.931$ & $0.210$ & $2.58 \%$ & $77.42 \%$ & $5.151$ & $0.192$ &
$3.60 \%$ & $77.62 \%$ & $3.748$ & $0.174$ & $4.44\%$ & $77.79 \%$ \\ $\lambda = 5.0$ & $105$ & $10.514$ & $0.323$ & $2.98 \% $ & $77.75 \% $ & $8.516$ & $0.346$ & $3.90 \%$ & $77.89 \%$ & $7.415$ & $0.366$ & $4.70 \%$ & $78.11 \%$ \\ $\mathcal{T} = 1.0$ & $110$ & $13.463$ & $0.479$ & $3.43 \% $ & $78.01 \%$ & $12.037$ & $0.544$ & $4.32 \%$ & $78.35 \%$ & $11.178$ & $0.603$ & $5.12 \%$ & $78.81 \%$ \\ & $115$ & $16.740$ & $0.685$ & $3.93 \% $ & $78.17 \%$ & $15.730$ & $0.803$ & $4.85 \%$ & $79.21 \% $ & $15.069$ & $0.907$ & $5.68 \%$ & $80.34 \% $ \\ \midrule \midrule & $90$ & $4.950$ & $0.091$ & $1.81 \%$ & $64.97 \%$ & $0.468$ & $0.016$ & $3.20 \% $ & $65.77 \%$ & $0$ & $0$ & {\bf --} & {\bf --} \\ (2) & $95$ & $6.842$ & $0.144$ & $2.07 \%$ & $65.37 \%$ & $2.166$ & $0.073$ & $3.25 \%$ & $65.81 \% $ & $0$ & $0$ & {\bf --} & {\bf --} \\ $\sigma_{X}=0.2$ & $100$ & $9.098$ & $0.220$ & $2.36 \%$ & $65.74 \%$ & $5.678$ & $0.198$ & $3.37 \%$ & $65.91 \%$ & $4.077$ & $0.177$ & $4.16\%$ & $66.01 \%$ \\ $\lambda = 5.0$ & $105$ & $11.704$ & $0.322$ & $2.68 \% $ & $66.08 \% $ & $9.138$ & $0.341$ & $3.60 \%$ & $66.09 \%$ & $7.852$ & $0.358$ & $4.35 \%$ & $66.17 \%$ \\ $\mathcal{T} = 1.0$ & $110$ & $14.634$ & $0.458$ & $3.03 \% $ & $66.37 \%$ & $12.709$ & $0.518$ & $3.92 \%$ & $66.34 \%$ & $11.668$ & $0.571$ & $4.67 \%$ & $66.50 \%$ \\ & $115$ & $17.857$ & $0.632$ & $3.42 \% $ & $66.60 \%$ & $16.419$ & $0.741$ & $4.32 \%$ & $66.73 \% $ & $15.583$ & $0.834$ & $5.08 \%$ & $67.20 \% $ \\ \bottomrule \end{tabular} } \end{center} $\mbox{}$ \vspace{-0.8em} \\ \subsection{The Impact of Jumps on Geometric Step Options} \begin{figure}[!t] \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.16]{PLOT_PERCENT_EEP} \caption{Relative Early Exercise Contribution.} \label{HEJD:FIG1:sub11} \end{subfigure}% \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=.16]{PLOT_PERCENT_DC} \caption{Diffusion Contribution.} \label{HEJD:FIG1:sub12} \end{subfigure}\\[1ex] 
\begin{subfigure}{\linewidth} \centering \includegraphics[scale=.16]{PLOT_AbsoluteEEP} \caption{Absolute Early Exercise Premium.} \label{HEJD:FIG1:sub13} \end{subfigure} \caption{Relative early exercise contribution, diffusion contribution to the early exercise premium, and absolute early exercise premium of the geometric down-and-out step call as functions of the underlying price $S_{0} \in [85,115]$ and the knock-out rate $\rho_{L} \in [-1000,0]$, when the remaining parameters are chosen as: $\mathcal{T} = 1.0$, $\sigma_{X} =0.2$, $r=0.05$, $\delta =0.07$, $K=100$, $L =95$, $\lambda = 5$, $p=0.5$, $\xi = 25$, $\eta = 50$.} \label{HEJD:FIG1} \end{figure} \begin{figure} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=.16]{A1_Jump_Impact_Price_Diff} \caption{European Price Difference} \label{HEJD:FIG2:sub21} \end{subfigure}% \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=.16]{B1_Jump_Impact_EEP_Diff} \caption{EEP Price Difference} \label{HEJD:FIG2:sub22} \end{subfigure}\\[1ex] \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=.16]{A2_Jump_Impact_PriceDelta_Diff} \caption{European Delta Difference} \label{HEJD:FIG2:sub23} \end{subfigure} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=.16]{B2_Jump_Impact_EEPDelta_Diff} \caption{EEP Delta Difference} \label{HEJD:FIG2:sub24} \end{subfigure} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=.16]{A3_Jump_Impact_PriceGamma_Diff} \caption{European Gamma Difference} \label{HEJD:FIG2:sub25} \end{subfigure} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=.16]{B3_Jump_Impact_EEPGamma_Diff} \caption{EEP Gamma Difference} \label{HEJD:FIG2:sub26} \end{subfigure} \caption{Difference in the prices, deltas, and gammas for the geometric down-and-out step calls with and without jumps as functions of the underlying price $S_{0} \in [85,115]$ and the intensity parameter $\lambda \in [0,20]$, when the remaining parameters are chosen
as: $\mathcal{T} = 1.0$, $\sigma_{X} =0.2$, $r=0.05$, $\delta =0.07$, $K=100$, $L =95$, $\rho_{L} = -26.34$, $p=0.5$, $\xi = 25$, $\eta = 50$.} \label{HEJD:FIG2} \end{figure} \noindent The vast majority of the geometric step option pricing literature either studies the Black \& Scholes market (cf.~\cite{li99}, \cite{dl02}, \cite{xy13}, \cite{dlm19}) or considers only European-type geometric step options under more advanced models (cf.~\cite{ccw10}, \cite{cmw13}, \cite{lz16}, \cite{wzb17}). Additionally, although the inclusion of jumps naturally raises questions about their importance, no clear investigation of the impact of jump risk on the price and hedging parameters of geometric step options has been provided yet. This is the content of the next discussion. \vspace{1em} \\ \noindent We start by quantifying the impact of the jump intensity $\lambda$ on the prices and greeks of (regular) geometric down-and-out step call options and of their respective early exercise premiums. Here, we plot in Figure~\ref{HEJD:FIG2} the difference in the prices, deltas, and gammas for the geometric down-and-out step call options with and without jumps as functions of the underlying price $S_{0} \in [85,115]$ and the intensity parameter $\lambda \in [0,20]$ for the following parameters: $\mathcal{T} = 1.0$, $\sigma_{X} =0.2$, $r=0.05$, $\delta =0.07$, $K=100$, $L =95$, $\rho_{L} = -26.34$, $p=0.5$, $\xi = 25$, $\eta = 50$. As expected, all differences vanish as the jump parameter approaches zero and the value of the European-type contracts increases when jumps are added (cf.~Figure~\ref{HEJD:FIG2:sub21}). However, adding jumps to the asset dynamics does not necessarily increase the value of the early exercise premium. This becomes evident when looking at Figure~\ref{HEJD:FIG2:sub22}, where the difference in the early exercise premiums of the geometric down-and-out step calls with and without jumps becomes negative for out-of-the-money options.
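As an aside, the European-type step values entering these comparisons can be approximated by plain Monte Carlo simulation. The following NumPy sketch is our own illustration, not the semi-analytical method developed in this paper: it estimates $\mathbb{E}^{\mathbb{Q}}[e^{-r\mathcal{T} + \rho_{L}\Gamma_{\mathcal{T},L}^{-}}(S_{\mathcal{T}}-K)^{+}]$ under the double-exponential special case of the jump-diffusion, discretizing the occupation time below $L$ on a time grid; all parameter names follow the text.

```python
import numpy as np

def euro_step_call_mc(S0=100.0, K=100.0, L=95.0, rho_L=-26.34,
                      r=0.05, delta=0.07, sigma=0.2, T=1.0,
                      lam=5.0, p=0.5, xi=25.0, eta=50.0,
                      n_paths=60_000, n_steps=100, seed=0):
    """Monte Carlo sketch of the European geometric down-and-out step call
    E[e^{-rT + rho_L * Gamma_T^-} (S_T - K)^+] under a double-exponential
    jump-diffusion; the occupation time below L is a left-endpoint Riemann sum."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # risk-neutral drift b from the martingale condition Phi_X(1) = r - delta
    kappa = p * xi / (xi - 1.0) + (1.0 - p) * eta / (eta + 1.0) - 1.0
    b = r - delta - 0.5 * sigma**2 - lam * kappa
    X = np.zeros(n_paths)            # log(S_t / S0)
    occ = np.zeros(n_paths)          # occupation time below the barrier
    logL = np.log(L / S0)
    for _ in range(n_steps):
        occ += (X < logL) * dt
        # exact per-step increment: drift + Brownian part + compound Poisson jumps
        dN = rng.poisson(lam * dt, n_paths)
        total = int(dN.sum())
        u = rng.random(total)
        j = np.where(u < p, rng.exponential(1.0 / xi, total),
                     -rng.exponential(1.0 / eta, total))
        jumps = np.zeros(n_paths)
        np.add.at(jumps, np.repeat(np.arange(n_paths), dN), j)
        X += b * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths) + jumps
    ST = S0 * np.exp(X)
    step = np.exp(-r * T + rho_L * occ) * np.maximum(ST - K, 0.0)
    plain = np.exp(-r * T) * np.maximum(ST - K, 0.0)   # rho_L = 0 benchmark
    return step.mean(), plain.mean()

step_price, call_price = euro_step_call_mc()
print(step_price, call_price)   # step price is below the standard call price
```

With the specification of Table~\ref{tableSTEP:HEJD3} used above ($\lambda=5$, $p=0.5$, $\xi=25$, $\eta=50$), the estimate should lie in the vicinity of the reported European step value $4.854$, up to Monte Carlo noise and the occupation-time discretization bias.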
Accordingly, the difference in the deltas of the European-type geometric step options with and without jumps is always positive (cf.~Figure~\ref{HEJD:FIG2:sub23}) while the difference of the deltas for the corresponding early exercise premiums may become negative (cf.~Figure~\ref{HEJD:FIG2:sub24}). Finally, one should note that the difference in the deltas attains its maximum at the barrier level $L$ for both European-type options and early exercise premiums. These findings similarly hold true for the gamma differences, where the main (positive and negative) differences are found near the barrier (cf.~Figure~\ref{HEJD:FIG2:sub25} and Figure~\ref{HEJD:FIG2:sub26}).\vspace{1em} \\ \noindent Secondly, we investigate the effect of the positive jump size $\xi$ on the prices and greeks of (regular) geometric down-and-out step call options and of their respective early exercise premiums. This is demonstrated in Figure~\ref{HEJD:FIG3}, where we have plotted the difference in the prices, deltas, and gammas for the geometric down-and-out step call options with and without jumps as functions of the underlying price $S_{0} \in [85,115]$ and the positive jump parameter $\xi \in [5,100]$ for the following specification: $\mathcal{T} = 1.0$, $\sigma_{X} =0.2$, $r=0.05$, $\delta =0.07$, $K=100$, $L =95$, $\rho_{L} = -26.34$, $\lambda = 5$, $p=0.5$, $\eta = 50$. Here, for a given spot $S_{0}$, the difference in prices of the geometric down-and-out step calls with and without jumps increases with increasing average jump size~$\frac{1}{\xi}$ (cf.~Figure~\ref{HEJD:FIG3:sub31}) and the same holds true for the difference in the early exercise premiums (cf.~Figure~\ref{HEJD:FIG3:sub32}), except in parts of the payoff exercise domain, where an opposite relation is observed.
While this result may seem surprising at first, it was already noticed for American-type Parisian options in \cite{cv18}, where the authors argue that the behavior is due to the structure of the early exercise premium, as the difference between the intrinsic value of the option (which does not depend on the model parameters) and the corresponding European-type option price (which increases with increasing average jump size $\frac{1}{\xi}$). The same rationale also holds true in our case and the net effect then becomes negative in parts of the payoff exercise domain. Finally, an increase in the average jump size~$\frac{1}{\xi}$ also usually leads to higher sensitivities for both European-type geometric down-and-out step calls and their respective early exercise premiums, except in parts of the payoff exercise domain where the same opposite relation is observed (cf.~Figure~\ref{HEJD:FIG3:sub33}, Figure~\ref{HEJD:FIG3:sub34}, Figure~\ref{HEJD:FIG3:sub35}, and Figure~\ref{HEJD:FIG3:sub36}). \begin{figure} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=.16]{A1_JumpSize_Impact_Price_Diff} \caption{European Price Difference} \label{HEJD:FIG3:sub31} \end{subfigure}% \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=.16]{B1_JumpSize_Impact_EEP_Diff} \caption{EEP Price Difference} \label{HEJD:FIG3:sub32} \end{subfigure}\\[1ex] \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=.16]{A2_JumpSize_Impact_PriceDelta_Diff} \caption{European Delta Difference} \label{HEJD:FIG3:sub33} \end{subfigure} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=.16]{B2_JumpSize_Impact_EEPDelta_Diff} \caption{EEP Delta Difference} \label{HEJD:FIG3:sub34} \end{subfigure} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=.16]{A3_JumpSize_Impact_PriceGamma_Diff} \caption{European Gamma Difference} \label{HEJD:FIG3:sub35} \end{subfigure} \begin{subfigure}{.5\linewidth} \centering
\includegraphics[scale=.16]{B3_JumpSize_Impact_EEPGamma_Diff} \caption{EEP Gamma Difference} \label{HEJD:FIG3:sub36} \end{subfigure} \caption{Difference in the prices, deltas, and gammas for the geometric down-and-out step calls with and without jumps as functions of the underlying price $S_{0} \in [85,115]$ and the positive jump parameter $\xi \in [5,100]$, when the remaining parameters are chosen as: $\mathcal{T} = 1.0$, $\sigma_{X} =0.2$, $r=0.05$, $\delta =0.07$, $K=100$, $L =95$, $\rho_{L} = -26.34$, $\lambda = 5$, $p=0.5$, $\eta = 50$.} \label{HEJD:FIG3} \end{figure} \section{Conclusion} \label{SecConCLUSION} \noindent In the present article, we have extended the current literature on geometric step option pricing in several directions. Firstly, we have derived symmetry and parity relations and obtained various characterizations for both European-type and American-type geometric double barrier step options under exponential Lévy markets. In particular, we were able to translate the formalism introduced in \cite{fmv19} to the setting of geometric double barrier step options and to generalize at the same time the ideas introduced in \cite{cy13}, \cite{lv17} and \cite{cv18} to Lévy-driven markets. As a result of these extensions, we were able to derive a jump-diffusion disentanglement for the early exercise premium of American-type geometric double barrier step options and its maturity-randomized equivalent as well as to characterize the diffusion and jump contributions to these early exercise premiums separately by means of partial integro-differential equations and ordinary integro-differential equations. To illustrate the practicability and importance of our characterizations, we have subsequently derived semi-analytical pricing results for (regular) European-type and American-type geometric down-and-out step call options under hyper-exponential jump-diffusion markets.
Lastly, we have used the latter results to discuss the early exercise structure of geometric step options once jumps are added and to provide an analysis of the impact of jumps on the price and hedging parameters of (European-type and American-type) geometric step contracts. \vspace{2em} \\ \noindent \acknow{The authors would like to thank Nikola Vasiljevi\'c for his helpful comments.} \vspace{1em} \\ \newpage \section*{Appendices} \renewcommand{\theequation}{A.\arabic{equation}} \subsection*{Appendix A: Proofs - Section \ref{SEC2}} \begin{proof}[\bf Proof of Lemma \ref{lem1}] \noindent For the sake of better exposition, we start by expanding our notation and define, for a Lévy process $(X_{t})_{t \geq 0}$, $t \geq 0$, $x \geq 0$, $\gamma \geq 0$ and given barrier level $\ell >0$, \begin{equation} \Gamma_{X,t,\ell}^{-}(x,\gamma) := \gamma + \int \limits_{0}^{t} \mathds{1}_{(0,\ell)}\big(xe^{X_{s}} \big) ds, \hspace{2em} \mbox{and} \hspace{2.3em} \Gamma_{X,t,\ell}^{+}(x,\gamma) := \gamma + \int \limits_{0}^{t} \mathds{1}_{(\ell,\infty)}\big(xe^{X_{s}} \big) ds. \label{AppNota} \end{equation} \noindent Then, we denote by $(\tilde{X}_{t})_{t \geq 0}$ the dual process to $(X_{t})_{t \geq 0}$, i.e.~the process defined for $t \geq 0$ by $\tilde{X}_{t} := -X_{t}$, and note that, for $t \geq 0$, $x \geq 0$, $K \geq 0$, $\gamma \geq 0$ and $\ell>0$, the following relation holds \begin{equation} \Gamma_{X,t,\ell}^{\pm}(x,\gamma) = \Gamma_{\tilde{X},t,\frac{xK}{\ell}}^{\mp}(K,\gamma). \label{RelA2} \end{equation} \noindent Combining (\ref{RelA2}) with the change of measure defined by the ($1$-)Esscher transform\footnote{\noindent The Esscher transform was first introduced in 1932 by Esscher and later established in the theory of option pricing by Gerber and Shiu (cf.~\cite{gs94}). For an economic interpretation of this pricing technique in the continuous-time framework, we refer to \cite{gs94}.} \begin{equation} Z_{t} : = \left.
\frac{d \mathbb{Q}^{(1)}}{d \mathbb{Q}} \right|_{\mathcal{F}_{t}} := \frac{e^{1 \cdot X_{t}}}{\mathbb{E}^{\mathbb{Q}}\left[ e^{1 \cdot X_{t}}\right]} = e^{X_{t} - t \Phi_{X}(1)}, \label{EsscherT} \end{equation} \noindent allows us to recover (with $\delta = r- \Phi_{X}(1)$) that for any $T > 0$ and stopping time $\tau \in \mathfrak{T}_{[0,T]}$ \begin{align} \mathcal{DSC}\big( \tau ,x, \gamma_{L}, \gamma_{H}; & \,r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot)\big) \nonumber \\ & = \mathbb{E}^{\mathbb{Q}} \left[ B_{\tau}(r)^{-1} \exp\big\{\rho_{L} \Gamma_{X,\tau,L}^{-}(x,\gamma_{L}) + \rho_{H} \Gamma_{X,\tau,H}^{+}(x,\gamma_{H})\big \} \left(xe^{X_{\tau}} - K \right)^{+} \right] \nonumber \\ & = \mathbb{E}^{\mathbb{Q}} \Big[ Z_{\tau} B_{\tau}(\delta)^{-1} \exp\big\{\rho_{H} \Gamma_{\tilde{X},\tau,\frac{xK}{H}}^{-}(K,\gamma_{H}) + \rho_{L} \Gamma_{\tilde{X},\tau,\frac{xK}{L}}^{+}(K,\gamma_{L}) \big\} \big(x - Ke^{ \tilde{X}_{\tau}} \big)^{+} \Big] \nonumber \\ & = \mathbb{E}^{\mathbb{Q}^{(1)}} \Big[B_{\tau}(\delta)^{-1} \, \exp\big\{\rho_{H} \Gamma_{\tilde{X},\tau,\frac{xK}{H}}^{-}(K,\gamma_{H}) + \rho_{L} \Gamma_{\tilde{X},\tau,\frac{xK}{L}}^{+}(K,\gamma_{L}) \big\} \big(x - Ke^{ \tilde{X}_{\tau}} \big)^{+} \Big] \label{SymProof} \end{align} holds. Therefore, if one shows that $(\tilde{X}_{t})_{t \geq 0}$ is again a Lévy process under the measure $\mathbb{Q}^{(1)}$, (\ref{SymProof}) implies that \begin{align} \mathcal{DSC}\big( \tau ,x, \gamma_{L}, \gamma_{H}; \,r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot)\big) = \mathcal{DSP}\Big(\tau,K,\gamma_{H}, \gamma_{L} ; \delta,r,x,\frac{xK}{H},\frac{xK}{L}, \rho_{H}, \rho_{L},\Psi_{\tilde{X}}^{(1)}(\cdot)\Big), \end{align} \noindent where $\Psi_{\tilde{X}}^{(1)}(\cdot)$ denotes the Lévy exponent of $(\tilde{X}_{t})_{t \geq 0}$ under the measure $\mathbb{Q}^{(1)}$. In fact, showing that $(\tilde{X}_{t})_{t \geq 0}$ is a Lévy process is not hard and can be done as in \cite{Ma18} (see also \cite{fm06}). 
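As a quick numerical sanity check of the normalization in the Esscher density (\ref{EsscherT}) (our own illustration, using the standard double-exponential Laplace exponent of the numerical section), one can verify by simulation that fixing the drift through the martingale condition $\Phi_{X}(1) = r - \delta$ indeed yields $\mathbb{E}^{\mathbb{Q}}[e^{X_{T}}] = e^{(r-\delta)T}$, i.e.~$\mathbb{E}^{\mathbb{Q}}[Z_{T}] = 1$:

```python
import numpy as np

def esscher_normalization_check(r=0.05, delta=0.07, sigma=0.2, lam=5.0,
                                p=0.5, xi=25.0, eta=50.0, T=1.0,
                                n=200_000, seed=2):
    """Simulate X_T exactly (drift + Brownian part + compound Poisson
    double-exponential jumps) and return the Monte Carlo estimate of
    E[exp(X_T - (r - delta) T)], which should equal 1 when the drift b
    is fixed through the martingale condition Phi_X(1) = r - delta."""
    rng = np.random.default_rng(seed)
    kappa = p * xi / (xi - 1.0) + (1.0 - p) * eta / (eta + 1.0) - 1.0
    b = r - delta - 0.5 * sigma**2 - lam * kappa
    N = rng.poisson(lam * T, n)
    total = int(N.sum())
    u = rng.random(total)
    j = np.where(u < p, rng.exponential(1.0 / xi, total),
                 -rng.exponential(1.0 / eta, total))
    jumps = np.zeros(n)
    np.add.at(jumps, np.repeat(np.arange(n), N), j)
    X_T = b * T + sigma * np.sqrt(T) * rng.standard_normal(n) + jumps
    return float(np.mean(np.exp(X_T - (r - delta) * T)))

z_mean = esscher_normalization_check()
print(z_mean)   # close to 1
```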
To conclude, we therefore need to verify that $\Psi_{\tilde{X}}^{(1)} \equiv \Psi_{Y}$ holds, where $\Psi_{Y}(\cdot)$ satisfies (\ref{Ysatisf}) and is given as in (\ref{Yequation}). To this end, we first note that \begin{equation} \mathbb{E}^{\mathbb{Q}^{(1)}} \left[ e^{i \theta \tilde{X}_{1}} \right] = \mathbb{E}^{\mathbb{Q}} \left[ Z_{1} e^{-i \theta X_{1}} \right] = \mathbb{E}^{\mathbb{Q}} \left[ e^{i \left(-(\theta+i) \right) X_{1}} \right] e^{-\Phi_{X}(1)} = e^{- \left( \Psi_{X}\left(-(\theta + i) \right) + \Phi_{X}(1) \right)}. \end{equation} \noindent Therefore, the Lévy exponent of $(\tilde{X}_{t})_{t \geq 0}$ under $\mathbb{Q}^{(1)}$ can be recovered as \begin{align} \Psi_{\tilde{X}}^{(1)}(\theta) & = \Psi_{X}\left(-(\theta + i) \right) + \Phi_{X}(1) \nonumber \\ & = i \big( b_{X} + \sigma_{X}^{2} \big) \theta + \frac{1}{2} \sigma_{X}^{2} \theta^{2} + \int \limits_{\mathbb{R}} \big( e^{y} - e^{-i(\theta + i) y} - i \theta y \mathds{1}_{\{ | y | \leq 1\}} \big) \Pi_{X}(dy) \nonumber \\ & = i \Big( b_{X} + \sigma_{X}^{2} - \int \limits_{\mathbb{R}} \big( 1- e^{y} \big) y \mathds{1}_{\{ | y | \leq 1\}} \Pi_{X}(dy) \Big) \theta + \frac{1}{2} \sigma_{X}^{2} \theta^{2} + \int \limits_{\mathbb{R}} e^{y} \big( 1 - e^{i\theta(-y)} + i \theta (-y) \mathds{1}_{\{ | y | \leq 1\}} \big) \Pi_{X}(dy) \nonumber \\ & = i \Big( b_{X} + \sigma_{X}^{2} - \int \limits_{\mathbb{R}} \big( 1- e^{y} \big) y \mathds{1}_{\{ | y | \leq 1\}} \Pi_{X}(dy) \Big) \theta + \frac{1}{2} \sigma_{X}^{2} \theta^{2} + \int \limits_{\mathbb{R}} \big( 1 - e^{i\theta y} + i \theta y \mathds{1}_{\{ | y | \leq 1\}} \big) \Pi^{\star}(dy), \end{align} \noindent where $\Pi^{\star}(dy) := e^{-y} \,\Pi_{\tilde{X}}(dy)$ and the jump measure of the dual process $(\tilde{X}_{t})_{t \geq 0}$ satisfies $\Pi_{\tilde{X}}(dy) = \Pi_{X}(-dy)$. 
Lastly, we can combine these results with Equation (\ref{bXequa}) to obtain that \begin{align} b_{Y} & = - \Big( b_{X} + \sigma_{X}^{2} - \int \limits_{\mathbb{R}} \big( 1- e^{y} \big) y \mathds{1}_{\{ | y | \leq 1\}} \Pi_{X}(dy) \Big) = \delta - r - \frac{1}{2}\sigma_{X}^{2} + \int \limits_{\mathbb{R}} \big( 1 - e^{y} + y \mathds{1}_{\{ | y | \leq 1\}} \big) \Pi^{\star}(dy). \end{align} \noindent This finalizes the proof. \end{proof} \begin{proof}[\bf Proof of Corollary \ref{coro1}] \noindent First, we note that Equation (\ref{Toprove1a}) is a direct consequence of Lemma \ref{lem1}, since taking $\tau \equiv \mathcal{T}$ in (\ref{EqLem1}) directly provides the result for the European-type options, while the corresponding equality for American-type options is recovered from (\ref{EqLem1}) by taking the supremum over the set $\mathfrak{T}_{[0,\mathcal{T}]}$. Therefore, we proceed with the proof of the second identity. \vspace{1em} \\ \noindent For the proof of (\ref{Toprove1b}), we note as in the proof of Lemma \ref{lem1} that, for a Lévy process $(X_{t})_{t \geq 0}$, $t \geq 0$, $x \geq 0$, $\gamma \geq 0$ and given barrier level $\ell >0$, the following identity holds \begin{equation} \Gamma_{X,t,\ell}^{\pm}(x,\gamma) = \Gamma_{X,t,\frac{\ell}{xK}}^{\pm}\Big(\frac{1}{K},\gamma \Big), \end{equation} \noindent where we have used the notation introduced in (\ref{AppNota}). 
Then, combining the latter relation with Lemma \ref{lem1} allows us to recover, for $T >0$ and any stopping time $\tau \in \mathfrak{T}_{[0,T]}$, that \begin{align} \mathcal{DSC}\big( \tau ,x, \gamma_{L}, \gamma_{H}; & \,r,\delta, K, L, H,\rho_{L}, \rho_{H}, \Psi_{X}(\cdot)\big) \nonumber \\ & = xK \cdot \mathbb{E}^{\mathbb{Q}} \left[ B_{\tau}(r)^{-1} \exp\Big\{\rho_{L} \Gamma_{X,\tau,\frac{L}{xK}}^{-}\Big( \frac{1}{K},\gamma_{L} \Big) + \rho_{H} \Gamma_{X,\tau,\frac{H}{xK}}^{+}\Big( \frac{1}{K},\gamma_{H} \Big) \Big \} \Big(\frac{1}{K}e^{X_{\tau}} - \frac{1}{x} \Big)^{+} \right] \nonumber \\ & = xK \cdot \mathcal{DSC} \Big( \tau ,\frac{1}{K}, \gamma_{L}, \gamma_{H}; r,\delta, \frac{1}{x}, \frac{L}{xK}, \frac{H}{xK},\rho_{L}, \rho_{H}, \Psi_{X}(\cdot) \Big) \nonumber \\ & = xK \cdot \mathcal{DSP} \Big( \tau ,\frac{1}{x}, \gamma_{H}, \gamma_{L}; \delta,r, \frac{1}{K}, \frac{1}{H}, \frac{1}{L},\rho_{H}, \rho_{L}, \Psi_{Y}(\cdot) \Big). \label{GodEQ} \end{align} \noindent Here, $\Psi_{Y}(\cdot)$ represents, as in Lemma \ref{lem1}, the Lévy exponent of a process $(Y_{t})_{t \geq 0}$ driving another exponential Lévy market and satisfying the relations (\ref{Ysatisf})-(\ref{inteMeas}). \noindent Therefore, taking, as earlier, $\tau \equiv \mathcal{T}$ in (\ref{GodEQ}) directly provides us with the result for the European-type options, while the corresponding identity for American-type contracts is obtained from (\ref{GodEQ}) by taking the supremum over the set $\mathfrak{T}_{[0,\mathcal{T}]}$. \end{proof} \begin{proof}[\bf Proof of Proposition \ref{prop1}] \noindent We start by showing the continuity of $(\mathcal{T},x) \mapsto \mathcal{DSC}^{\star}_{E}(\mathcal{T},x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}})$ on the domain $[0,T] \times [0,\infty)$ for any $K,\bm{\ell}$, and $\bm{\rho}_{\bm{\ell}} $.
To do this, we first note that the continuity of the occupation times $x \mapsto \Gamma_{X,\mathcal{T},\ell}^{\pm}(x,0)$, defined for any $\mathcal{T} \in [0,T]$ and $\ell \geq 0$ as in (\ref{AppNota}), and the continuity of the function $x \mapsto (x-K)^{+}$, for $K \geq 0$, directly give by means of the dominated convergence theorem the continuity of $x \mapsto \mathcal{DSC}^{\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} )$ for any of the parameters $\mathcal{T},K,\bm{\ell}$, and $\bm{\rho}_{\bm{\ell}}$. Therefore, to prove that $(\mathcal{T},x) \mapsto \mathcal{DSC}^{\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} )$ is, for any parameters $K,\bm{\ell}$, and $\bm{\rho}_{\bm{\ell}}$, continuous on $[0,T] \times [0,\infty)$, it is enough to show that $\mathcal{T} \mapsto \mathcal{DSC}^{\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} )$ is, for any parameters $x,K,\bm{\ell}$, and $\bm{\rho}_{\bm{\ell}}$, uniformly continuous on~$[0,T]$. To obtain this property, we fix times to maturity $0 \leq u < t \leq T$, recall that $\rho_{L}, \rho_{H} \leq 0$ and derive that \begin{align} \big| & \mathcal{DSC}^{\star}_{E}(t,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) - \mathcal{DSC}^{\star}_{E}(u,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) \big| \nonumber \\ & \leq \mathbb{E}_{x}^{\mathbb{Q}} \bigg[ \,e^{-ru + \rho_{L} \Gamma_{u,L}^{-} + \rho_{H} \Gamma_{u,H}^{+}} \, \Big| e^{-\int_{u}^{t} \left(r-\rho_{L} \mathds{1}_{(0,L)}(S_{s}) - \rho_{H} \mathds{1}_{(H,\infty)}(S_{s}) \right) ds } (S_{t} - K )^{+} - (S_{u} - K)^{+} \Big| \, \bigg] \nonumber \\ & \leq \mathbb{E}_{x}^{\mathbb{Q}} \bigg[ \,\Big| e^{-\int_{u}^{t} \left(r-\rho_{L} \mathds{1}_{(0,L)}(S_{s}) - \rho_{H} \mathds{1}_{(H,\infty)}(S_{s}) \right) ds } (S_{t} - K ) - (S_{u} - K) \Big| \, \bigg] \nonumber \\ & \leq \mathbb{E}_{x}^{\mathbb{Q}} \bigg[ S_{u} \, \Big| S_{t} S_{u}^{-1}e^{-\int_{u}^{t} \left(r-\rho_{L} \mathds{1}_{(0,L)}(S_{s}) - \rho_{H} \mathds{1}_{(H,\infty)}(S_{s}) 
\right) ds } -1 \Big| \,\bigg] + K \mathbb{E}_{x}^{\mathbb{Q}} \bigg[ \, \Big| e^{-\int_{u}^{t} \left(r-\rho_{L} \mathds{1}_{(0,L)}(S_{s}) - \rho_{H} \mathds{1}_{(H,\infty)}(S_{s}) \right) ds } -1 \Big| \, \bigg] \nonumber \\ & \leq \mathbb{E}^{\mathbb{Q}} \left[ xe^{X_{u}} \right] \bigg( \mathbb{E}^{\mathbb{Q}}\Big[ \big| e^{X_{t-u} + \lambda^{\star} (t-u)} -1 \big| \,\Big] + \mathbb{E}^{\mathbb{Q}}\Big[ \big| e^{X_{t-u} - \lambda^{\star} (t-u)} -1 \big| \,\Big] \bigg) + K \big( 1 - e^{-\lambda^{\star} (t-u)} \big)\nonumber \\ & \leq x \max \big\{ 1,\, e^{\Phi_{X}(1) T} \big\} \bigg( \mathbb{E}^{\mathbb{Q}}\Big[ \big| e^{X_{t-u} + \lambda^{\star} (t-u)} -1 \big| \,\Big] + \mathbb{E}^{\mathbb{Q}}\Big[ \big| e^{X_{t-u} - \lambda^{\star} (t-u)} -1 \big| \,\Big] \bigg) + K \big( 1 - e^{-\lambda^{\star} (t-u)} \big), \end{align} \noindent where $\lambda^{\star} := r-\rho_{L} - \rho_{H}$. Consequently, the right-continuity of the process $(X_{t})_{t \in [0,T]}$ implies the convergence \begin{equation} \mathcal{DSC}^{\star}_{E}(t,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) - \mathcal{DSC}^{\star}_{E}(u,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) \rightarrow 0, \hspace{1.5em} \mbox{whenever} \; \; t-u \rightarrow 0. \end{equation} \noindent This shows that the function $\mathcal{T} \mapsto \mathcal{DSC}^{\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} )$ is, for any parameters $x,K,\bm{\ell},$ and $\bm{\rho}_{\bm{\ell}}$, uniformly continuous over $[0,T]$ and the proof of the initial claim is complete. \vspace{1em} \\ \noindent We now prove that $\mathcal{DSC}^{\star}_{E}(\cdot)$ solves Equation (\ref{GSCEuPIDE1}) on $(0,T] \times [0,\infty)$ with initial condition (\ref{GSCEuPIDE2}). 
Here, we start by noting that, for any parameters $\mathcal{T}, x, K, \bm{\ell},$ and $\bm{\rho}_{\bm{\ell}}$, geometric double barrier step options can be rewritten in the simpler form \begin{equation} \mathcal{DSC}^{\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) = \mathbb{E}_{x}^{\mathbb{Q}} \left[ B_{\mathcal{T}}(r)^{-1} \, e^{\rho_{L} \Gamma_{\mathcal{T},L}^{-} \, + \, \rho_{H} \Gamma_{\mathcal{T},H}^{+}} \,\left(S_{\mathcal{T}} - K \right)^{+} \right] = \mathbb{E}_{x}^{\mathbb{Q}} \left[ \left(\bar{S}_{\mathcal{T}} - K \right)^{+} \right], \end{equation} \noindent where $(\bar{S}_{t})_{t \in [0,T]}$ refers to the (strong) Markov process\footnote{It is well-known (cf.~\cite{pe06}) that the process $(\bar{S}_{t})_{t \in [0,T]}$ defined this way preserves the (strong) Markov property of the underlying process $(S_{t})_{t \in [0,T]}$.} obtained by ``killing'' the sample path of $(S_{t})_{t \in [0,T]}$ at the proportional rate $\lambda(x) := r - \bm{\rho}_{\bm{\ell}} \cdot \bigg( \begin{array}{c} \mathds{1}_{(0,L)}(x) \\ \mathds{1}_{(H,\infty)}(x) \end{array} \bigg)$. The process' transition probabilities are then given by \begin{equation} \mathbb{Q}_{x} \left( \bar{S}_{t} \in A \right) = \mathbb{E}_{x}^{\mathbb{Q}} \left[ e^{- \int_{0}^{t} \lambda(S_{s}) ds} \, \mathds{1}_{A}(S_{t}) \right] \label{TPKilling} \end{equation} \noindent and we identify its cemetery state, without loss of generality, with $\partial \equiv 0$. Consequently, for any initial value $z= (\mathbf{t},x) \in [0,T] \times [0,\infty) $, the process $(Z_{t})_{t \in [0,\mathbf{t}]}$ defined via $Z_{t}:= (\mathbf{t}-t, \bar{S}_{t})$, $\bar{S}_{0}=x$, is a strong Markov process with state domain given by $\mathcal{D}_{\mathbf{t}}:= [0,\mathbf{t}] \times [0,\infty)$. 
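\noindent As a numerical aside, the killing construction in (\ref{TPKilling}) can be sanity-checked by simulation: the Feynman--Kac weight $e^{-\int_{0}^{t} \lambda(S_{s}) ds}$ and an explicit killing of the sample path with an independent standard exponential clock must produce the same transition probabilities. The Python sketch below does this under illustrative geometric Brownian motion dynamics; all parameter values (and the test set $A$) are our own choices for the illustration and do not come from the text.

```python
import numpy as np

# Illustrative parameters -- chosen for this sketch only, not taken from the paper.
r, delta, sigma = 0.05, 0.0, 0.25
x0, L, H = 100.0, 90.0, 110.0
rho_L, rho_H = -0.10, -0.10          # rho_L, rho_H <= 0, as assumed in the text
t, n_steps, n_paths = 0.5, 50, 20_000
dt = t / n_steps

rng = np.random.default_rng(0)
z = rng.standard_normal((n_paths, n_steps))
# Geometric Brownian motion stands in for the price process (S_u)_{u <= t}.
log_s = np.log(x0) + np.cumsum((r - delta - 0.5 * sigma**2) * dt
                               + sigma * np.sqrt(dt) * z, axis=1)
s = np.exp(log_s)

# Killing rate lambda(x) = r - rho_L 1_{(0,L)}(x) - rho_H 1_{(H,inf)}(x) >= 0.
lam = r - rho_L * (s < L) - rho_H * (s > H)
integral = lam.sum(axis=1) * dt       # int_0^t lambda(S_u) du, path by path

in_A = s[:, -1] > x0                  # test event A = (x0, infinity)

# Left-hand side of the transition-probability identity: Feynman-Kac weighting.
fk_side = np.mean(np.exp(-integral) * in_A)

# Right-hand side: kill each path with an independent standard exponential clock,
# i.e. the path survives up to time t iff the clock exceeds the accumulated rate.
clock = rng.exponential(1.0, n_paths)
killed_side = np.mean((clock > integral) & in_A)
```

Both estimators target the same quantity, so their difference is pure Monte Carlo noise.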
Additionally, $\mathcal{DSC}_{E}^{\star}(\cdot)$ can be re-expressed, for any $K,\bm{\ell}$, and $\bm{\rho}_{\bm{\ell}}$, as \begin{equation} \mathcal{DSC}^{\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) = V_{E}\big((\mathcal{T},x)\big), \label{IMeq1} \end{equation} \noindent where the value function $V_{E}(\cdot)$ has the following representation under the measure $\mathbb{Q}_{z}^{Z}$ having initial distribution $Z_{0} = z$: \begin{align} V_{E}(z) : = \mathbb{E}^{\mathbb{Q}^{Z}}_{z} \big[ G(Z_{\tau_{\mathcal{S}}}) \big], \hspace{1.5em} G(z) := (x -K)^{+}, \end{align} \noindent and $\tau_{\mathcal{S}} := \inf \{ t \geq 0: Z_{t} \in \mathcal{S} \}$, $\mathcal{S} := \big(\{0 \} \times [0,\infty) \big) \cup \big( [0,\mathbf{t}] \times \{0 \} \big)$, is a stopping time that satisfies $\tau_{\mathcal{S}} \leq \mathbf{t}$, under $\mathbb{Q}_{z}^{Z}$ with $z = (\mathbf{t},x)$. Furthermore, the stopping region $\mathcal{S}$ is for any $\mathbf{t} \in [0,T]$ a closed set in $\mathcal{D}_{\mathbf{t}}$. Therefore, standard arguments based on the strong Markov property of $(Z_{t})_{t \in [0,\mathbf{t}]}$ (cf.~\cite{pe06}) imply that $V_{E}(\cdot)$ satisfies the following problem \begin{align} \mathcal{A}_{Z} V_{E}(z) & = 0, \hspace{2em} \mbox{on} \,\, \mathcal{D}_{T} \setminus \mathcal{S}, \\ V_{E}(z) & = G(z), \hspace{1.5em} \mbox{on} \,\, \mathcal{S}, \end{align} \noindent where $\mathcal{A}_{Z}$ denotes the infinitesimal generator of the process $(Z_{t})_{t \in [0,\mathbf{t}]}$. 
To complete the proof, we note that (for any suitable function $V: \mathcal{D}_{\mathbf{t}} \rightarrow \mathbb{R}$) the infinitesimal generator $\mathcal{A}_{Z}$ can be re-expressed as \begin{align} \mathcal{A}_{Z} V\big((\mathbf{t},x) \big) & = -\partial_{\mathbf{t}} V\big((\mathbf{t},x) \big) + \mathcal{A}_{\bar{S}} V\big((\mathbf{t},x) \big) \nonumber \\ & = -\partial_{\mathbf{t}} V\big((\mathbf{t},x) \big) + \mathcal{A}_{S} V\big((\mathbf{t},x) \big) - \lambda(x) V\big((\mathbf{t},x) \big). \label{IgEnE} \end{align} \noindent Therefore, recovering $\mathcal{DSC}^{\star}_{E}(\cdot)$ via (\ref{IMeq1}) finally gives the required equation and initial condition. \end{proof} \begin{proof}[\bf Proof of Proposition \ref{prop2}] First, we note that, for any $\mathcal{T}, K, \bm{\ell}$, and $\bm{\rho}_{\bm{\ell}}$, the continuity of $x \mapsto \mathcal{DSC}^{\star}_{A}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} )$ follows, just like that of $x \mapsto \mathcal{DSC}^{\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} )$, from the dominated convergence theorem combined with the continuity of the occupation times $x \mapsto \Gamma_{X,\mathcal{T},\ell}^{\pm}(x,0)$, defined for any $\mathcal{T} \in [0,T]$ and $\ell \geq 0$ as in (\ref{AppNota}), and the continuity of the function $x \mapsto (x-K)^{+}$, for $K \geq 0$. Therefore, to prove that $(\mathcal{T},x) \mapsto \mathcal{DSC}^{\star}_{A}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} )$ is, for any parameters $K,\bm{\ell}$, and $\bm{\rho}_{\bm{\ell}}$, continuous on $[0,T] \times [0,\infty)$, it is enough to show that $\mathcal{T} \mapsto \mathcal{DSC}^{\star}_{A}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} )$ is, for any parameters $x,K,\bm{\ell}$, and $\bm{\rho}_{\bm{\ell}}$, uniformly continuous on $[0,T]$.
To derive this property, we fix times to maturity $0 \leq u < t \leq T$, denote by $\tau_{2}$ the optimal stopping time for $\mathcal{DSC}^{\star}_{A}(t,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} )$ and set $\tau_{1} := \tau_{2} \wedge u$. Then, noting that $\mathcal{T} \mapsto \mathcal{DSC}_{A}^{\star}(\mathcal{T},x;K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} )$ is a non-decreasing function\footnote{This directly follows since, for $0\leq \mathcal{T}_{1} \leq \mathcal{T}_{2} \leq T$, any stopping time $\tau \in \mathfrak{T}_{[0,\mathcal{T}_{1}]}$ also satisfies $\tau \in \mathfrak{T}_{[0,\mathcal{T}_{2}]}$.}~while recalling that $\rho_{L}, \rho_{H} \leq 0$ holds and that $\tau_{1}$ is not necessarily optimal for the time to maturity~$u$, we obtain that \begin{align} 0 & \leq \mathcal{DSC}^{\star}_{A}(t,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) - \mathcal{DSC}^{\star}_{A}(u,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) \nonumber \\ & \leq \mathbb{E}_{x}^{\mathbb{Q}} \bigg[ \,e^{-r \tau_{1} + \rho_{L} \Gamma_{\tau_{1},L}^{-} + \rho_{H} \Gamma_{\tau_{1},H}^{+}} \Big( e^{-\int_{\tau_{1}}^{\tau_{2}} \left(r-\rho_{L} \mathds{1}_{(0,L)}(S_{s}) - \rho_{H} \mathds{1}_{(H,\infty)}(S_{s}) \right) ds } (S_{\tau_{2}} - K )^{+} - (S_{\tau_{1}} - K)^{+} \Big) \bigg] \nonumber \\ & \leq \mathbb{E}_{x}^{\mathbb{Q}} \bigg[ \, \Big| e^{-\int_{\tau_{1}}^{\tau_{2}} \left(r-\rho_{L} \mathds{1}_{(0,L)}(S_{s}) - \rho_{H} \mathds{1}_{(H,\infty)}(S_{s}) \right) ds } (S_{\tau_{2}} - K ) - (S_{\tau_{1}} - K) \Big| \, \bigg] \nonumber \\ & \leq \mathbb{E}_{x}^{\mathbb{Q}} \bigg[ S_{\tau_{1}} \, \Big| S_{\tau_{2}} S_{\tau_{1}}^{-1}e^{-\int_{\tau_{1}}^{\tau_{2}} \left(r-\rho_{L} \mathds{1}_{(0,L)}(S_{s}) - \rho_{H} \mathds{1}_{(H,\infty)}(S_{s}) \right) ds } -1 \Big| \,\bigg] + K \big( 1 - e^{-\lambda^{\star} (t-u)} \big) \nonumber \\ & \leq x \max \big\{ 1,\, e^{\Phi_{X}(1) T} \big\} \bigg( \mathbb{E}^{\mathbb{Q}}\Big[ \big| e^{X_{\tau_{2}-\tau_{1}} + \lambda^{\star} (t-u)} -1 \big| \,\Big] + 
\mathbb{E}^{\mathbb{Q}}\Big[ \big| e^{X_{\tau_{2}-\tau_{1}} - \lambda^{\star} (t-u)} -1 \big| \,\Big] \bigg) + K \big( 1 - e^{-\lambda^{\star} (t-u)} \big), \end{align} \noindent where $\lambda^{\star} := r-\rho_{L} - \rho_{H}$. Therefore, since we have that $\tau_{2} - \tau_{1} \rightarrow 0$, for $t-u \rightarrow 0$, we obtain, by means of the dominated convergence theorem, the convergence \begin{equation} \mathcal{DSC}^{\star}_{A}(t,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) - \mathcal{DSC}^{\star}_{A}(u,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) \rightarrow 0, \hspace{1.5em} \mbox{whenever} \; \; t-u \rightarrow 0. \end{equation} \noindent This finally shows that the function $\mathcal{T} \mapsto \mathcal{DSC}^{\star}_{A}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} )$ is, for any parameters $x,K,\bm{\ell}$, and $\bm{\rho}_{\bm{\ell}}$, uniformly continuous over $[0,T]$ and the proof of the initial claim is complete. \vspace{1em} \\ \noindent To prove that $\mathcal{DSC}_{A}^{\star}(\cdot)$ satisfies the Cauchy-type problem (\ref{GSCAmePIDE1}), (\ref{GSCAmePIDE2}), we consider again, for any initial value $z = (\mathbf{t},x) \in [0,T] \times [0,\infty)$, the (strong) Markov process $(Z_{t})_{t \in [0,\mathbf{t}]}$ defined via $Z_{t} := (\mathbf{t} -t , \bar{S}_{t})$, $\bar{S}_{0} = x$, and make use of the fact that \begin{equation} \mathcal{DSC}^{\star}_{A}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) = V_{A}\big((\mathcal{T},x)\big), \label{RARAppputa} \end{equation} \noindent where $V_{A}(\cdot)$ is defined, under the measure $\mathbb{Q}_{z}^{Z}$ having initial distribution $Z_{0} = z$, by \begin{align} V_{A}(z) : = \mathbb{E}^{\mathbb{Q}^{Z}}_{z} \big[ G(Z_{\tau_{\mathcal{D}_{s}}}) \big], \hspace{1.5em} G(z) := (x -K)^{+}, \end{align} \noindent and $\tau_{\mathcal{D}_{s}}$ refers to the optimal stopping time defined according to (\ref{OTime}). 
Since $\tau_{\mathcal{D}_{s}} \leq T$ and the stopping region $\mathcal{D}_{s}$ is a closed set in the domain $[0,T] \times [0,\infty)$,\footnote{This directly follows from Representation (\ref{Sregion}) and the continuity of $(\mathcal{T},x) \mapsto \mathcal{DSC}^{\star}_{A}(\mathcal{T},x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}})$ on $[0,T] \times [0,\infty)$ for any $K,\bm{\ell}$, and $\bm{\rho}_{\bm{\ell}} $.}~this leads via standard arguments based on the strong Markov property of $(Z_{t})_{t \in [0,\mathbf{t}]}$ (cf.~\cite{pe06}) to the following problem \begin{align} \mathcal{A}_{Z} V_{A}(z) & = 0, \hspace{3em} \mbox{on} \,\, \mathcal{D}_{c} , \\ V_{A}(z) & = G(z), \hspace{1.5em} \mbox{on} \,\, \mathcal{D}_{s}, \end{align} \noindent and finally allows us to recover the required equations (\ref{GSCAmePIDE1}) and (\ref{GSCAmePIDE2}) by means of Relations (\ref{RARAppputa}) and (\ref{IgEnE}). \end{proof} \begin{proof}[\bf Proof of Proposition \ref{prop3}] \noindent To start, we note that the strong Markov property of the process $(\bar{S}_{t})_{t \in [0,T]}$ together with the optimality of the stopping time $\tau_{\mathcal{D}_{s}}$ defined, for any (fixed) $\mathcal{T} \in [0,T]$, according to (\ref{OTime}) imply that the diffusion and jump contributions to the early exercise premium of the geometric double barrier step call, $\mathcal{E}_{\mathcal{DSC}}^{0,\star}(\cdot)$ and $\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}(\cdot)$ respectively, can be written in the form \begin{align} \mathcal{E}_{\mathcal{DSC}}^{0,\star}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & = \mathbb{E}_{x}^{\mathbb{Q}} \left[ \Big( \left(\bar{S}_{\tau_{\mathcal{D}_{s}}} - K \right)^{+} - \mathbb{E}_{\bar{S}_{\tau_{\mathcal{D}_{s}}}}^{\mathbb{Q}} \left[ (\bar{S}_{\mathcal{T}-\tau_{\mathcal{D}_{s}}} - K )^{+} \right] \Big) \, \mathds{1}_{ \partial \mathcal{D}_{s}} \big( (\mathcal{T}-\tau_{\mathcal{D}_{s}}, \bar{S}_{\tau_{\mathcal{D}_{s}}}) \big) \right] \nonumber \\ & =
\mathbb{E}_{x}^{\mathbb{Q}} \bigg[ \Big( \left(\bar{S}_{\tau_{\mathcal{D}_{s}}} - K \right)^{+} - \mathcal{DSC}^{\star}_{E}(\mathcal{T}-\tau_{\mathcal{D}_{s}}, \bar{S}_{\tau_{\mathcal{D}_{s}}} ; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) \Big) \, \mathds{1}_{\partial \mathcal{D}_{s}} \big( (\mathcal{T}-\tau_{\mathcal{D}_{s}}, \bar{S}_{\tau_{\mathcal{D}_{s}}}) \big) \bigg], \\ \mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) & = \mathbb{E}_{x}^{\mathbb{Q}} \left[ \Big( \left(\bar{S}_{\tau_{\mathcal{D}_{s}}} - K \right)^{+} - \mathbb{E}_{\bar{S}_{\tau_{\mathcal{D}_{s}}}}^{\mathbb{Q}} \left[ (\bar{S}_{\mathcal{T}-\tau_{\mathcal{D}_{s}}} - K )^{+} \right] \Big) \, \mathds{1}_{\mathcal{D}_{s}^{\circ} } \big( (\mathcal{T}-\tau_{\mathcal{D}_{s}}, \bar{S}_{\tau_{\mathcal{D}_{s}}}) \big) \right] \nonumber \\ & = \mathbb{E}_{x}^{\mathbb{Q}} \bigg[ \Big( \left(\bar{S}_{\tau_{\mathcal{D}_{s}}} - K \right)^{+} - \mathcal{DSC}^{\star}_{E}(\mathcal{T}-\tau_{\mathcal{D}_{s}}, \bar{S}_{\tau_{\mathcal{D}_{s}}} ; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) \Big) \, \mathds{1}_{\mathcal{D}_{s}^{\circ}} \big( (\mathcal{T}-\tau_{\mathcal{D}_{s}}, \bar{S}_{\tau_{\mathcal{D}_{s}}}) \big) \bigg]. 
\end{align} \noindent Therefore, to prove that $\mathcal{E}_{\mathcal{DSC}}^{0,\star}(\cdot)$ and $\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}(\cdot)$ satisfy Problem (\ref{GSCAmeEEPPIDE1})-(\ref{GSCAmeEEPPIDE1-2}) and (\ref{GSCAmeEEPPIDE2})-(\ref{GSCAmeEEPPIDE2-2}) respectively, we consider again, for any initial value $z = (\mathbf{t},x) \in [0,T] \times [0,\infty)$, the (strong) Markov process $(Z_{t})_{t \in [0,\mathbf{t}]}$ defined via $Z_{t} := (\mathbf{t} -t , \bar{S}_{t})$, $\bar{S}_{0} = x$, and make use of the fact that \begin{align} \mathcal{E}_{\mathcal{DSC}}^{0,\star}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) = V_{\mathcal{E}}^{0} \big((\mathcal{T},x)\big), \hspace{1.2em} \mbox{and} \hspace{1.5em} \mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) = V_{\mathcal{E}}^{\mathcal{J}} \big((\mathcal{T},x)\big), \label{RARAputa} \end{align} \noindent where $V_{\mathcal{E}}^{0}(\cdot)$ and $V_{\mathcal{E}}^{\mathcal{J}}(\cdot)$ are defined, under the measure $\mathbb{Q}_{z}^{Z}$ having initial distribution $Z_{0} = z = (\mathbf{t},x)$, by \begin{align} V_{\mathcal{E}}^{0}(z) : = \mathbb{E}^{\mathbb{Q}^{Z}}_{z} \big[ G_{0}(Z_{\tau_{\mathcal{D}_{s}}}) \big], & \hspace{1.5em} G_{0}\big((\mathbf{t},x)\big) := \big( (x -K)^{+} - \mathcal{DSC}^{\star}_{E}(\mathbf{t}, x ; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) \big) \, \mathds{1}_{\partial \mathcal{D}_{s}} ( (\mathbf{t},x) ), \\ V_{\mathcal{E}}^{\mathcal{J}}(z) : = \mathbb{E}^{\mathbb{Q}^{Z}}_{z} \big[ G_{\mathcal{J}}(Z_{\tau_{\mathcal{D}_{s}}}) \big], & \hspace{1.5em} G_{\mathcal{J}}\big((\mathbf{t},x)\big) := \big( (x -K)^{+} - \mathcal{DSC}^{\star}_{E}(\mathbf{t}, x ; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) \big) \, \mathds{1}_{ \mathcal{D}_{s}^{\circ}} ((\mathbf{t},x)). 
\end{align} As earlier, since $\tau_{\mathcal{D}_{s}} \leq T$ and the stopping region $\mathcal{D}_{s}$ is a closed set in the domain $[0,T] \times [0,\infty)$, this leads via standard arguments based on the strong Markov property of $(Z_{t})_{t \in [0,\mathbf{t}]}$ (cf.~\cite{pe06}) to the following problems \begin{align} \mathcal{A}_{Z} V_{\mathcal{E}}^{0}(z) & = 0, \hspace{3.5em} \mbox{on} \,\, \mathcal{D}_{c} , \\ V_{\mathcal{E}}^{0}(z) & = G_{0}(z), \hspace{1.5em} \mbox{on} \,\, \mathcal{D}_{s}, \end{align} \noindent and \begin{align} \mathcal{A}_{Z} V_{\mathcal{E}}^{\mathcal{J}}(z) & = 0, \hspace{3.8em} \mbox{on} \,\, \mathcal{D}_{c} , \\ V_{\mathcal{E}}^{\mathcal{J}}(z) & = G_{\mathcal{J}}(z), \hspace{1.5em} \mbox{on} \,\, \mathcal{D}_{s}, \end{align} \noindent and finally allows us to recover the required equations (\ref{GSCAmeEEPPIDE1})-(\ref{GSCAmeEEPPIDE1-2}) and (\ref{GSCAmeEEPPIDE2})-(\ref{GSCAmeEEPPIDE2-2}) by means of Relations (\ref{RARAputa}) and (\ref{IgEnE}). \end{proof} \begin{proof}[\bf Proof of Proposition \ref{prop4}] \noindent We start the proof of Proposition \ref{prop4} by noting that the continuity of the function \mbox{$x \mapsto \widehat{\mathcal{DSC}^{\star}_{E}}(\vartheta,x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}})$} on $[0,\infty)$ directly follows from (\ref{LCTMREuro1}) and the continuity of $x \mapsto \mathcal{DSC}^{\star}_{E}(\mathcal{T},x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} )$ for $\mathcal{T}, K, \bm{\ell}$, and $\bm{\rho}_{\bm{\ell}}$, by means of the dominated convergence theorem.\footnote{Recall that we have assumed the integrability of the underlying price process $(S_{t})_{t \geq 0}$.}~Therefore, we only need to establish that $\widehat{\mathcal{DSC}^{\star}_{E}}(\cdot)$ solves Equation (\ref{MRGSCEuOIDE1}) on $ (0,\infty)$ with initial condition (\ref{MRGSCEuOIDE2}).
To this end, we first recall that the (independent) exponentially distributed random time $\mathcal{T}_{\vartheta}$ can be viewed as the (first) jump time of a corresponding Poisson process $(N_{t})_{t \geq 0}$ with intensity $\vartheta > 0$. Hence, for a fixed $\vartheta > 0 $, we consider the process $(Z_{t})_{t \geq 0}$ defined, for any initial value $z = (n,x) \in \mathbb{N}_{0} \times [0,\infty)$, via $Z_{t} := (n + N_{t} , \bar{S}_{t})$, $\bar{S}_{0} = x$, and note that it is a strong Markov process with state domain $\mathcal{D} := \mathbb{N}_{0} \times [0,\infty)$. Additionally, $\widehat{\mathcal{DSC}_{E}^{\star}}(\cdot)$ can be re-expressed, for $\vartheta, K,\bm{\ell}$ and $\bm{\rho}_{\bm{\ell}}$, as \begin{equation} \widehat{\mathcal{DSC}^{\star}_{E}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) = \widehat{V_{E}}\big((0,x)\big), \label{MRIMeq} \end{equation} \noindent where the value function $\widehat{V_{E}}(\cdot)$ has the following representation under the measure $\mathbb{Q}_{z}^{Z}$ having initial distribution $Z_{0} = z$: \begin{align} \widehat{V_{E}}(z) : = \mathbb{E}^{\mathbb{Q}^{Z}}_{z} \big[ G(Z_{\tau_{\mathcal{S}}}) \big], \hspace{1.5em} G(z) := (x -K)^{+}, \end{align} \noindent and $\tau_{\mathcal{S}} := \inf \{ t \geq 0: Z_{t} \in \mathcal{S} \}$, $\mathcal{S} := \big(\mathbb{N} \times [0,\infty) \big) \cup \big( \mathbb{N}_{0} \times \{0 \} \big)$, is a $\mathbb{Q}_{z}^{Z}$-almost surely finite stopping time for any $z = (n,x) \in \mathcal{D}$.\footnote{The finiteness of this stopping time directly follows from the finiteness of the first moment of any exponential distribution.}~Furthermore, the stopping region $\mathcal{S}$ forms (under an appropriate product-metric) a closed set in $\mathcal{D}$.\footnote{We note that several choices of a product-metric on $\mathcal{D}$ give the closedness of the set $\mathcal{S}$. 
In particular, one may choose on $\mathbb{N}_{0}$ the following metric $$ d_{\mathbb{N}_{0}}(m,n) := \left \{ \begin{array}{lc} 1 + |2^{-m} - 2^{-n} |, & m\neq n, \\ 0, & m =n, \end{array} \right. $$ \noindent and consider the product-metric on $\mathcal{D}$ obtained by combining $d_{\mathbb{N}_{0}}(\cdot,\cdot)$ on $\mathbb{N}_{0}$ with the Euclidean metric on $[0,\infty)$. \label{footnoteMETRIC}}~Therefore, standard arguments based on the strong Markov property of the process $(Z_{t})_{t \geq 0}$ (cf.~\cite{pe06}) imply that $\widehat{V_{E}}(\cdot)$ satisfies the following problem \begin{align} \mathcal{A}_{Z} \widehat{V_{E}}(z) & = 0, \hspace{2em} \mbox{on} \,\, \mathcal{D} \setminus \mathcal{S}, \\ \widehat{V_{E}}(z) & = G(z), \hspace{1.5em} \mbox{on} \,\, \mathcal{S}, \end{align} \noindent where $\mathcal{A}_{Z}$ denotes the infinitesimal generator of the process $(Z_{t})_{t \geq 0}$. To complete the proof, we note that the infinitesimal generator $\mathcal{A}_{Z}$ can be re-expressed (for any suitable function $V:\mathcal{D} \rightarrow \mathbb{R}$) as \begin{align} \mathcal{A}_{Z} V\big((n,x)\big) & = \mathcal{A}_{N}^{n} V\big((n,x) \big) + \mathcal{A}_{\bar{S}}^{x} V \big((n,x) \big) \nonumber \\ & = \vartheta \left( V\big( (n+1,x) \big) - V\big( (n,x) \big) \right) + \mathcal{A}_{S}^{x} V\big( (n,x) \big) - \lambda(x) V\big( (n,x) \big) , \label{MRIgEn} \end{align} \noindent where $\mathcal{A}_{N}$ denotes the infinitesimal generator of the Poisson process $(N_{t})_{ t \geq 0}$ and the notation $\mathcal{A}_{N}^{n}$, $\mathcal{A}_{\bar{S}}^{x}$, and $\mathcal{A}_{S}^{x}$ is used to indicate that the generators are applied in the variables $n$ and $x$, respectively. Therefore, recovering $\widehat{\mathcal{DSC}^{\star}_{E}}(\cdot)$ via (\ref{MRIMeq}) while noting Relation (\ref{MRIgEn}) and the fact that for any $x \in [0,\infty)$ we have \begin{equation} \widehat{V_{E}}\big((1,x)\big) = G\big((1,x)\big) = (x-K)^{+} \end{equation} \noindent finally completes the proof.
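\noindent The identification of $\mathcal{T}_{\vartheta}$ with the first jump time of a Poisson process with intensity $\vartheta$ also offers a simple numerical cross-check of the maturity randomization itself: for any payoff $g$, one has $\mathbb{E}[g(S_{\mathcal{T}_{\vartheta}})] = \vartheta \int_{0}^{\infty} e^{-\vartheta t}\, \mathbb{E}[g(S_{t})]\, dt$. The following Python sketch verifies this for a call payoff under illustrative geometric Brownian motion dynamics; all parameter values are our own choices for the illustration and are not taken from the text.

```python
import numpy as np
from math import erf, sqrt

# Illustrative GBM parameters -- our own choice for this sketch, not from the paper.
r, delta, sigma, s0, K, theta = 0.05, 0.02, 0.25, 100.0, 95.0, 2.0

norm_cdf = np.vectorize(lambda v: 0.5 * (1.0 + erf(v / sqrt(2.0))))

def undiscounted_call(t):
    """E_x[(S_t - K)^+] for geometric Brownian motion (Black-Scholes-type formula)."""
    d1 = (np.log(s0 / K) + (r - delta + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return s0 * np.exp((r - delta) * t) * norm_cdf(d1) - K * norm_cdf(d2)

# Laplace-average side: theta * int_0^infty e^{-theta t} E[(S_t - K)^+] dt,
# truncated far in the exponential tail and integrated by the trapezoidal rule.
t = np.linspace(1e-9, 8.0, 16001)
w = theta * np.exp(-theta * t) * undiscounted_call(t)
laplace_side = float(np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(t)))

# Randomized-maturity side: draw T_theta ~ Exp(theta), i.e. the first jump time
# of a Poisson process with intensity theta, then average the payoff at S_{T_theta}.
rng = np.random.default_rng(1)
T = rng.exponential(1.0 / theta, 400_000)
z = rng.standard_normal(T.size)
sT = s0 * np.exp((r - delta - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
mc_side = float(np.mean(np.maximum(sT - K, 0.0)))
```

Up to discretization and Monte Carlo error, the two sides agree, which is exactly the Canadization mechanism exploited in the proof.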
\end{proof} \begin{proof}[\bf Proof of Proposition \ref{prop5}] \noindent First, we note that the discussion preceding Proposition \ref{prop5} implies that the optimal stopping problem (\ref{NEWproBL}) can be re-expressed, under the measure $\mathbb{Q}_{z}^{Z}$ having initial distribution $Z_{0} = z \in \mathcal{D}$, as \begin{align} \widehat{V_{A}}(z) = \mathbb{E}^{\mathbb{Q}^{Z}}_{z} \left[ G\Big(Z_{\tau_{\widehat{\mathcal{D}_{s}^{Gen.}}}}^{\mathcal{S}_{J}}\Big) \right], \label{PROOFNEWproBL} \end{align} \noindent where $\tau_{\widehat{\mathcal{D}_{s}^{Gen.}}}$ is defined as in (\ref{NeWWOTime}) and $G(z) := (x - K)^{+}$, for $z \in \mathcal{D}$. Additionally, the finiteness of the first moment of the exponential distribution for any $\vartheta >0$ implies that this stopping time is $\mathbb{Q}_{z}^{Z}$-almost surely finite for any $z \in \mathcal{D}$, and combining this property with the closedness\footnote{As earlier, this property can be obtained under the product-metric considered in Footnote~\ref{footnoteMETRIC}.}~of the stopping domain $\widehat{\mathcal{D}_{s}^{Gen.}}$ gives~(cf.~\cite{pe06}) that $\widehat{V_{A}}(\cdot)$ satisfies the following problem \begin{align} \mathcal{A}_{Z} \widehat{V_{A}}(z) & = 0, \hspace{3em} \mbox{on} \,\, \widehat{\mathcal{D}_{c}^{Gen.}} , \\ \widehat{V_{A}}(z) & = G(z), \hspace{1.5em} \mbox{on} \,\, \widehat{\mathcal{D}_{s}^{Gen.}}. \end{align} \noindent Consequently, recovering $\widehat{\mathcal{DSC}^{\star}_{A}}(\cdot)$ by means of Relation (\ref{MRAmerIMeq}) while noting Identity (\ref{MRIgEn}) and the fact that \begin{equation} \widehat{\mathcal{D}_{s}^{Gen.}} = \mathcal{S}_{J} \cup \big( \{ 0 \} \times \widehat{\mathcal{D}}_{\vartheta,s} \big) \label{ImpStopRegio} \end{equation} \noindent and \begin{equation} \widehat{V_{A}}\big((1,x)\big) = G\big((1,x)\big) = (x-K)^{+} \end{equation} \noindent finally gives the required Equations (\ref{MRGSCAmerOIDE1}) and (\ref{MRGSCAmerOIDE2}).
\vspace{1em} \\ \noindent The continuity of $x \mapsto \widehat{\mathcal{DSC}^{\star}_{A}}(\vartheta,x;K,\bm{\ell}, \bm{\rho}_{\bm{\ell}})$ on $[0,\infty)$ for $\vartheta, K, \bm{\ell},$ and $\bm{\rho}_{\bm{\ell}}$ is an easy consequence of the continuity of $x \mapsto (x-K)^{+}$ and the dominated convergence theorem. This concludes the proof of the proposition. \end{proof} \begin{proof}[\bf Proof of Proposition \ref{prop6}] \noindent Following the ideas outlined in the previous proofs, we re-consider, for any $\vartheta > 0 $ and initial value $z = (n,x) \in \mathbb{N}_{0} \times [0,\infty)$, the process $(Z_{t})_{t \geq 0}$ defined on the state domain $\mathcal{D} := \mathbb{N}_{0} \times [0,\infty)$ via $Z_{t} := (n + N_{t} , \bar{S}_{t})$, $\bar{S}_{0} = x$, as well as its stopped version, $(Z_{t}^{\mathcal{S}_{J}})_{t \geq 0}$, defined according to (\ref{STOPPEDprocDEF}) and note that the diffusion and jump contributions to the maturity-randomized early exercise premium of the geometric double barrier step call, $\widehat{\mathcal{E}_{\mathcal{DSC}}^{0,\star}}(\cdot)$ and $\widehat{\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}}(\cdot)$ respectively, can be re-expressed, using these processes, in the form \begin{align} \widehat{\mathcal{E}_{\mathcal{DSC}}^{0,\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) = \widehat{V_{\mathcal{E}}^{0}} \big((0,x)\big), \hspace{1.2em} \mbox{and} \hspace{1.5em} \widehat{\mathcal{E}_{\mathcal{DSC}}^{\mathcal{J},\star}}(\vartheta,x; K, \bm{\ell}, \bm{\rho}_{\bm{\ell}} ) = \widehat{V_{\mathcal{E}}^{\mathcal{J}}} \big((0,x)\big), \label{MRRARAputaBup} \end{align} \noindent where $\widehat{V_{\mathcal{E}}^{0}}(\cdot)$ and $\widehat{V_{\mathcal{E}}^{\mathcal{J}}}(\cdot)$ are defined, under the measure $\mathbb{Q}_{z}^{Z}$ having initial distribution $Z_{0} = z$, by \begin{align} \widehat{V_{\mathcal{E}}^{0}}(z) : = \mathbb{E}^{\mathbb{Q}^{Z}}_{z} \left[ 
\widehat{G_{0}}\Big(Z_{\tau_{\widehat{\mathcal{D}_{s}^{Gen.}}}}^{\mathcal{S}_{J}}\Big) \right], & \hspace{1.5em} \widehat{G_{0}}\big((n,x)\big) := \big( (x -K)^{+} - \widehat{V_{E}}\big((n,x)\big) \big) \, \mathds{1}_{\partial \widehat{\mathcal{D}_{s}^{Gen.}}}\big((n,x)\big), \\ \widehat{V_{\mathcal{E}}^{\mathcal{J}}}(z) : = \mathbb{E}^{\mathbb{Q}^{Z}}_{z} \left[ \widehat{G_{\mathcal{J}}}\Big(Z_{\tau_{\widehat{\mathcal{D}_{s}^{Gen.}}}}^{\mathcal{S}_{J}}\Big) \right], & \hspace{1.5em} \widehat{G_{\mathcal{J}}}\big((n,x)\big) := \big( (x -K)^{+} - \widehat{V_{E}}\big((n,x)\big) \big) \, \mathds{1}_{ \big(\widehat{\mathcal{D}_{s}^{Gen.}}\big)^{\circ}}\big((n,x)\big). \end{align} As earlier, the $\mathbb{Q}_{z}^{Z}$-almost sure finiteness of the stopping time $\tau_{\widehat{\mathcal{D}_{s}^{Gen.}}}$ for any $z \in \mathcal{D}$ and the closedness\footnote{e.g.~under the product-metric considered in Footnote~\ref{footnoteMETRIC}.}~of the stopping domain $\widehat{\mathcal{D}_{s}^{Gen.}}$ lead via standard arguments (cf.~\cite{pe06}) to the following problems \begin{align} \mathcal{A}_{Z} \widehat{V_{\mathcal{E}}^{0}}(z) & = 0, \hspace{3.5em} \mbox{on} \,\, \widehat{\mathcal{D}_{c}^{Gen.}} , \\ \widehat{V_{\mathcal{E}}^{0}}(z) & = \widehat{G_{0}}(z), \hspace{1.5em} \mbox{on} \,\, \widehat{\mathcal{D}_{s}^{Gen.}}, \end{align} \noindent and \begin{align} \mathcal{A}_{Z} \widehat{V_{\mathcal{E}}^{\mathcal{J}}}(z) & = 0, \hspace{3.8em} \mbox{on} \,\, \widehat{\mathcal{D}_{c}^{Gen.}}, \\ \widehat{V_{\mathcal{E}}^{\mathcal{J}}}(z) & = \widehat{G_{\mathcal{J}}}(z), \hspace{1.5em} \mbox{on} \,\, \widehat{\mathcal{D}_{s}^{Gen.}}. 
\end{align} \noindent Finally, in view of (\ref{ImpStopRegio}), it is clear that\footnote{e.g.~under the product-metric considered in Footnote~\ref{footnoteMETRIC}.} \begin{equation} \partial \widehat{\mathcal{D}_{s}^{Gen.}} = \mathcal{S}_{J} \cup \big( \{0 \} \times \partial \widehat{\mathcal{D}}_{\vartheta,s} \big), \hspace{1.5em} \mbox{and} \hspace{1.7em} \big(\widehat{\mathcal{D}_{s}^{Gen.}}\big)^{\circ} = \mathcal{S}_{J} \cup \big( \{0 \} \times \widehat{\mathcal{D}}_{\vartheta,s}^{\circ} \big), \end{equation} \noindent so that \begin{equation} \widehat{V_{\mathcal{E}}^{0}}\big((1,x)\big) = \widehat{G_{0}}\big((1,x)\big) = 0 = \widehat{G_{\mathcal{J}}}\big((1,x)\big) = \widehat{V_{\mathcal{E}}^{\mathcal{J}}}\big((1,x)\big). \end{equation} \noindent Therefore, combining these properties with Relations (\ref{MRRARAputaBup}) and (\ref{MRIgEn}) finally allows us to recover the required equations (\ref{GSCAmeEEPOIDE1})-(\ref{GSCAmeEEPOIDE1-2}) and (\ref{GSCAmeEEPOIDE2})-(\ref{GSCAmeEEPOIDE2-2}). This completes the proof. \end{proof} \subsection*{Appendix B: Proofs - Section \ref{SEC3}} \begin{proof}[\bf Proof of Proposition \ref{PropEurHEJD}] \noindent For simplicity, we rewrite the price of the maturity-randomized European-type down-and-out step contract $\widehat{\mathcal{DOSC}^{\star}_{E}}(\cdot)$ as a function of the log-price $\bm{x} := \log(x)$ and the log-strike $\bm{k} := \log(K)$ via $\overline{\mathcal{DOSC}}_{E}^{\star}(\cdot)$, i.e.~we rely on the following relation \begin{equation} \overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) := \widehat{\mathcal{DOSC}^{\star}_{E}}(\vartheta,e^{\bm{x}}; e^{\bm{k}}, L,\rho_{L}).
\end{equation} \noindent This transforms (\ref{MRGSCEuOIDE1}) into the following equation \begin{align} \vartheta (e^{\bm{x}} - e^{\bm{k}})^{+} + \mathcal{A}_{X} \overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) - \big( r + \vartheta - \rho_{L} \mathds{1}_{(0,L)}(e^{\bm{x}}) \big) \overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) = 0, \end{align} \noindent with $\mathcal{A}_{X}$ denoting the infinitesimal generator of $(X_{t})_{t \geq 0}$, i.e. \begin{equation} \mathcal{A}_{X} V(\bm{x}) := \frac{1}{2} \sigma^{2}_{X} \partial_{\bm{x}}^{2} V(\bm{x}) + \Big( r -\delta -\lambda \zeta - \frac{1}{2}\sigma_{X}^{2} \Big) \partial_{\bm{x}} V(\bm{x}) + \lambda \int \limits_{\mathbb{R}} \big( V(\bm{x}+\bm{y}) - V(\bm{x}) \big) f_{J_{1}}(\bm{y})d\bm{y}. \label{NewGen} \end{equation} \noindent Equivalently, this can be written in the following system of three equations \begin{align} \mathcal{A}_{X} \overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) - ( r + \vartheta - \rho_{L}) & \overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) = 0, \hspace{1.5em} \mbox{for} \; -\infty < \bm{x} < \ell^{\ast}, \label{EQfirst1}\\ \mathcal{A}_{X} \overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) - ( r + \vartheta) & \overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) = 0, \hspace{2em} \mbox{for} \; \ell^{\ast} \leq \bm{x} \leq \bm{k}, \label{EQfirst2}\\ \mathcal{A}_{X} \overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) - ( r + \vartheta) \overline{\mathcal{DOSC}}_{E}^{\star}&(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) = \vartheta(e^{\bm{k}} - e^{\bm{x}}), \hspace{1.5em} \mbox{for} \; \bm{k} < \bm{x} < \infty, \label{EQlast1} \end{align} \noindent where we have set $\ell^{\ast} := \log(L)$. 
Combining the arguments provided in \cite{ck11} (cf.~also \cite{lv17}, \cite{cv18}, \cite{fmv19}) with the fact that $$ P_{1}(\bm{x}) := \vartheta\left( \frac{e^{\bm{x}}}{\delta + \vartheta} -\frac{e^{\bm{k}}}{r + \vartheta} \right) $$ \noindent is a particular solution to (\ref{EQlast1}) implies that the general solution to (\ref{EQfirst1})-(\ref{EQlast1}) takes the following form \begin{equation} \overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) = \left \{ \begin{array}{lc} \sum \limits_{s=1}^{m+1} A_{s}^{+} e^{\beta_{s,(r+\vartheta-\rho_{L})} \cdot (\bm{x}-\ell^{\ast})}, & -\infty < \bm{x} < \ell^{\ast} ,\\ \sum \limits_{s=1}^{m+1} B_{s}^{+} e^{\beta_{s,(r+\vartheta)} \cdot (\bm{x}-\ell^{\ast})} + \sum \limits_{u=1}^{n+1} B_{u}^{-} e^{\gamma_{u,(r+\vartheta)} \cdot (\bm{x}-\bm{k})}, & \ell^{\ast} \leq \bm{x} \leq \bm{k}, \\ \sum \limits_{u=1}^{n+1} C_{u}^{-} e^{\gamma_{u,(r+\vartheta)} \cdot (\bm{x}-\bm{k})} + \vartheta \left( \frac{e^{\bm{x}}}{\delta + \vartheta} -\frac{e^{\bm{k}}}{r + \vartheta} \right) , & \bm{k} < \bm{x} < \infty , \end{array} \right. \end{equation} \noindent where the coefficients $(A_{s}^{+})_{s=1,\ldots,m+1}$, $(B_{s}^{+})_{s=1,\ldots,m+1}$, $(B_{u}^{-})_{u=1,\ldots,n+1}$, and $(C_{u}^{-})_{u=1,\ldots,n+1}$ are subsequently determined by substituting this form into the respective equations in each of the three regions.
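\noindent The structure of this general solution can be made concrete numerically. For a Kou-type specification with $m = n = 1$, clearing denominators in $\Phi_{X}(\beta) = r + \vartheta$ yields a quartic whose two positive roots are the exponents $\beta_{s,(r+\vartheta)}$ and whose two negative roots are the $\gamma_{u,(r+\vartheta)}$; moreover, since $\mathcal{A}_{X}$ annihilates constants and maps $e^{a \bm{x}}$ to $\Phi_{X}(a) e^{a \bm{x}}$, the particular solution $P_{1}$ can be checked directly. The Python sketch below does both; all parameter values are illustrative choices of ours, not values from the text.

```python
import numpy as np

# Kou-type model with m = n = 1; all numbers are illustrative assumptions,
# not parameter values from the paper.
sigma_X, r, delta, lam = 0.3, 0.05, 0.02, 1.0
p, xi = 0.6, 10.0        # upward jump component:   p * xi * exp(-xi*y), y > 0
q, eta = 0.4, 5.0        # downward jump component: q * eta * exp(eta*y), y < 0
zeta = p * xi / (xi - 1.0) + q * eta / (eta + 1.0) - 1.0   # E[e^{J_1}] - 1
b_X = r - delta - lam * zeta - 0.5 * sigma_X**2            # drift of X

def phi_X(beta):
    """Levy exponent Phi_X of the log-price process X."""
    return (0.5 * sigma_X**2 * beta**2 + b_X * beta
            + lam * (p * xi / (xi - beta) + q * eta / (eta + beta) - 1.0))

theta = 2.0
q_rate = r + theta

# Multiplying Phi_X(beta) = r + theta through by (xi - beta)(eta + beta)
# gives a quartic whose real roots are the sought exponents.
quad = np.array([0.5 * sigma_X**2, b_X, -lam - q_rate])
poly = np.polymul(np.polymul(quad, [-1.0, xi]), [1.0, eta])
poly[-2] += lam * (p * xi - q * eta)       # linear part of lam*(p xi (eta+b) + q eta (xi-b))
poly[-1] += lam * xi * eta * (p + q)       # constant part
roots = np.roots(poly)
exponents = np.sort(roots[np.abs(roots.imag) < 1e-8].real)

# Check that P_1 solves (EQlast1): A_X kills constants and maps e^{a x} to
# Phi_X(a) e^{a x}, so A_X P_1 - (r + theta) P_1 must equal theta (e^k - e^x).
x_log, k_log = 0.2, 0.1
P1 = theta * (np.exp(x_log) / (delta + theta) - np.exp(k_log) / (r + theta))
lhs = theta / (delta + theta) * phi_X(1.0) * np.exp(x_log) - q_rate * P1
rhs = theta * (np.exp(k_log) - np.exp(x_log))
```

The root count (two positive, two negative) matches the $m+1$ and $n+1$ exponential terms per region in this $m = n = 1$ case, and the $P_{1}$ check reduces, via $\Phi_{X}(1) = r - \delta$, to the cancellation used in STEP 3 below.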
This is done next.\vspace{1em} \\ \noindent \underline{\bf STEP 1: $-\infty < \bm{x} < \ell^{\ast}$.} \\ \noindent To start we derive that \begin{align} & \int \limits_{\mathbb{R}} \overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}+y; \bm{k}, L, \rho_{L}) f_{J_{1}}(y) dy \nonumber \\ & = \sum \limits_{s=1}^{m+1} \sum \limits_{j=1}^{n} q_{j} \eta_{j} e^{-\eta_{j} \bm{x}} A_{s}^{+} \int \limits_{-\infty}^{\bm{x}} e^{\eta_{j} z} e^{\beta_{s,(r+\vartheta-\rho_{L})} \cdot (z-\ell^{\ast})} \, dz + \sum \limits_{s=1}^{m+1} \sum \limits_{i=1}^{m} p_{i} \xi_{i} e^{\xi_{i} \bm{x}} A_{s}^{+} \int \limits_{\bm{x}}^{\ell^{\ast}} e^{-\xi_{i} z} e^{\beta_{s,(r+\vartheta-\rho_{L})} \cdot (z-\ell^{\ast})} \, dz \nonumber \\ & \hspace{0.3em} + \sum \limits_{s=1}^{m+1} \sum \limits_{i=1}^{m} p_{i} \xi_{i} e^{\xi_{i} \bm{x}} B_{s}^{+} \int \limits_{\ell^{\ast}}^{\bm{k}} e^{-\xi_{i} z} e^{\beta_{s,(r+\vartheta)} \cdot (z-\ell^{\ast})} \, dz + \sum \limits_{u=1}^{n+1} \sum \limits_{i=1}^{m} p_{i} \xi_{i} e^{\xi_{i} \bm{x}} B_{u}^{-} \int \limits_{\ell^{\ast}}^{\bm{k}} e^{-\xi_{i} z} e^{\gamma_{u,(r+\vartheta)} \cdot (z-\bm{k})} \, dz \nonumber \\ & \hspace{0.3em} + \sum \limits_{u=1}^{n+1} \sum \limits_{i=1}^{m} p_{i} \xi_{i} e^{\xi_{i} \bm{x}} C_{u}^{-} \int \limits_{\bm{k}}^{\infty} e^{-\xi_{i}z} e^{\gamma_{u,(r+\vartheta)} \cdot (z-\bm{k})} \, dz + \sum \limits_{i=1}^{m} p_{i} \xi_{i} e^{\xi_{i} \bm{x}} \Bigg( \frac{\vartheta}{\delta + \vartheta} \int \limits_{\bm{k}}^{\infty} e^{-(\xi_{i}-1) \cdot z} \, dz - \frac{\vartheta e^{\bm{k}}}{r + \vartheta} \int \limits_{\bm{k}}^{\infty} e^{-\xi_{i} z} \, dz \Bigg). 
\label{INTEalgebra} \end{align} \noindent After some algebra, Equation (\ref{EQfirst1}) can be transformed to obtain \begin{equation} \sum \limits_{s=1}^{m+1} A_{s}^{+} e^{\beta_{s,(r+\vartheta-\rho_{L})} \cdot (\bm{x}-\ell^{\ast})} \underbrace{\left( \Phi_{X}\big(\beta_{s,(r+\vartheta-\rho_{L})} \big) - (r+\vartheta-\rho_{L}) \right)}_{=0} + \lambda \sum \limits_{i=1}^{m} p_{i} \xi_{i} e^{\xi_{i} (\bm{x}-\ell^{\ast})} \, \mathcal{R}_{i}^{1}(\vartheta;\bm{k},L, \rho_{L}) = 0, \end{equation} \noindent where, for $i=1,\ldots,m$, \begin{align} \mathcal{R}_{i}^{1}(\vartheta;\bm{k},L, \rho_{L}) & := - \sum \limits_{s=1}^{m+1} \Bigg( A_{s}^{+} \frac{1}{\xi_{i} - \beta_{s,(r+\vartheta-\rho_{L})}} - B_{s}^{+} \frac{1 - e^{(\beta_{s,(r+\vartheta)}-\xi_{i}) (\bm{k}-\ell^{\ast})}}{ {\xi_{i} - \beta_{s,(r+\vartheta)}}} \Bigg) \nonumber \\ & \hspace{3.5em} + \sum \limits_{u=1}^{n+1} \Bigg( B_{u}^{-} \frac{e^{-\gamma_{u,(r+\vartheta)} (\bm{k} -\ell^{\ast})} - e^{-\xi_{i} (\bm{k}-\ell^{\ast})}}{ {\xi_{i} - \gamma_{u,(r+\vartheta)}}} + C_{u}^{-} \frac{e^{-\xi_{i} (\bm{k}-\ell^{\ast})}}{ {\xi_{i} - \gamma_{u,(r+\vartheta)}}} \Bigg) \nonumber \\ & \hspace{3.5em} + \frac{\vartheta e^{\bm{k}} e^{-\xi_{i}(\bm{k}-\ell^{\ast})}}{(\xi_{i}-1)(\delta+\vartheta)} -\frac{\vartheta e^{\bm{k}} e^{-\xi_{i}(\bm{k}-\ell^{\ast})}}{\xi_{i}(r+\vartheta)} . \end{align} \noindent Therefore, since the parameters $\xi_{1}, \ldots, \xi_{m}$ are all different from each other, we conclude that \begin{equation} \mathcal{R}_{i}^{1}(\vartheta; \bm{k}, L, \rho_{L}) = 0, \hspace{1.5em} \mbox{for}\; \; i =1, \ldots, m . 
\end{equation} \noindent \underline{\bf STEP 2: $\ell^{\ast} \leq \bm{x} \leq \bm{k}$.} \\ \noindent Combining similar arguments to the ones used in (\ref{INTEalgebra}) with Equation (\ref{EQfirst2}), we derive that \begin{align} & \sum \limits_{s=1}^{m+1} B_{s}^{+} e^{\beta_{s,(r+\vartheta)} \cdot (\bm{x}-\ell^{\ast})} \underbrace{\left( \Phi_{X}\big(\beta_{s,(r+\vartheta)} \big) - (r+\vartheta) \right)}_{=0} + \sum \limits_{u=1}^{n+1} B_{u}^{-} e^{\gamma_{u,(r+\vartheta)} \cdot (\bm{x}-\bm{k})} \underbrace{\left( \Phi_{X}\big(\gamma_{u,(r+\vartheta)} \big) - (r+\vartheta) \right)}_{=0} \nonumber \hspace{10em} \\ & \hspace{10em} +\lambda \Bigg( \sum \limits_{i=1}^{m} p_{i} \xi_{i} e^{\xi_{i} (\bm{x}-\bm{k})} \, \mathcal{R}_{i}^{2,+}(\vartheta;\bm{k},L, \rho_{L}) + \sum \limits_{j=1}^{n} q_{j} \eta_{j} e^{-\eta_{j} (\bm{x}-\ell^{\ast})} \, \mathcal{R}_{j}^{2,-}(\vartheta;\bm{k},L, \rho_{L}) \Bigg) = 0, \end{align} \noindent where, for $i=1,\ldots,m$ and $j=1, \ldots,n$, \begin{align} \mathcal{R}_{i}^{2,+}(\vartheta;\bm{k},L, \rho_{L}) & := - \sum \limits_{s=1}^{m+1} B_{s}^{+} \frac{e^{\beta_{s,(r+\vartheta)} (\bm{k}-\ell^{\ast}) }}{\xi_{i} - \beta_{s,(r+\vartheta)}} - \sum \limits_{u=1}^{n+1} \big( B_{u}^{-} - C_{u}^{-} \big) \frac{1}{\xi_{i} - \gamma_{u,(r+\vartheta)}} + \frac{\vartheta e^{\bm{k}}}{(\xi_{i}-1)(\delta+\vartheta)} -\frac{\vartheta e^{\bm{k}}}{\xi_{i}(r+\vartheta)}, \\ \mathcal{R}_{j}^{2,-}(\vartheta;\bm{k},L, \rho_{L}) & := \sum \limits_{s=1}^{m+1} \Bigg( A_{s}^{+} \frac{1}{\eta_{j} + \beta_{s,(r+\vartheta-\rho_{L})}} - B_{s}^{+} \frac{1}{\eta_{j} + \beta_{s,(r+\vartheta)}} \Bigg) - \sum \limits_{u=1}^{n+1} B_{u}^{-} \frac{e^{- \gamma_{u,(r+\vartheta)} (\bm{k} - \ell^{\ast})}}{\eta_{j} + \gamma_{u,(r+\vartheta)}}. 
\end{align} \noindent Hence, since the parameters $\xi_{1}, \ldots, \xi_{m}, \eta_{1}, \ldots, \eta_{n}$ are all different from each other, we conclude that \begin{align} \mathcal{R}_{i}^{2,+}(\vartheta; \bm{k}, L, \rho_{L}) & = 0, \hspace{1.5em} \mbox{for}\; \; i =1, \ldots, m, \\ \mathcal{R}_{j}^{2,-}(\vartheta; \bm{k}, L, \rho_{L}) & = 0, \hspace{1.5em} \mbox{for}\; \; j =1, \ldots, n . \end{align} \noindent \underline{\bf STEP 3: $\bm{k} < \bm{x} < \infty$.} \\ \noindent Following the line of the arguments used in STEP 1 and STEP 2, we rewrite Equation (\ref{EQlast1}) as \begin{align} & \sum \limits_{u=1}^{n+1} C_{u}^{-} e^{\gamma_{u,(r+\vartheta)} \cdot (\bm{x}-\bm{k})} \underbrace{\left( \Phi_{X}\big(\gamma_{u,(r+\vartheta)} \big) - (r+\vartheta) \right)}_{=0} \nonumber \\ & + \underbrace{\Bigg( \frac{\vartheta e^{\bm{x}}}{\delta + \vartheta} \Big( \frac{\sigma^{2}}{2} + b_{X} + \lambda \zeta -(r + \vartheta) \Big) + (r+\vartheta) \frac{\vartheta e^{\bm{k}}}{r+\vartheta} - \vartheta e^{\bm{k}} + \vartheta e^{\bm{x}} \Bigg)}_{=0} + \lambda \sum \limits_{j=1}^{n} q_{j} \eta_{j} e^{-\eta_{j} (\bm{x}-\bm{k})} \, \mathcal{R}_{j}^{3}(\vartheta;\bm{k},L, \rho_{L}) = 0, \end{align} \noindent where, for $j=1,\ldots,n$, \begin{align} \mathcal{R}_{j}^{3}(\vartheta;\bm{k},L, \rho_{L}) & := \sum \limits_{s=1}^{m+1} \Bigg( A_{s}^{+} \frac{e^{-\eta_{j} (\bm{k}-\ell^{\ast}) }}{\eta_{j} + \beta_{s,(r+\vartheta-\rho_{L})}} + B_{s}^{+} \frac{e^{\beta_{s,(r+\vartheta)} (\bm{k}-\ell^{\ast})} - e^{-\eta_{j}(\bm{k}-\ell^{\ast})}}{ \eta_{j} + \beta_{s,(r+\vartheta)}} \Bigg) \nonumber \\ & \hspace{3.5em} + \sum \limits_{u=1}^{n+1} \Bigg( B_{u}^{-} \frac{1 - e^{-(\eta_{j}+\gamma_{u,(r+\vartheta)}) (\bm{k}-\ell^{\ast})}}{ {\eta_{j} + \gamma_{u,(r+\vartheta)}}} - C_{u}^{-} \frac{1}{\eta_{j} + \gamma_{u,(r+\vartheta)}} \Bigg) \nonumber \\ & \hspace{3.5em} - \frac{\vartheta e^{\bm{k}}}{(\eta_{j}+1)(\delta+\vartheta)} +\frac{\vartheta e^{\bm{k}}}{\eta_{j}(r+\vartheta)} . 
\end{align} \noindent Therefore, since the parameters $\eta_{1}, \ldots, \eta_{n}$ are all different from each other, we conclude that \begin{equation} \mathcal{R}_{j}^{3}(\vartheta; \bm{k}, L, \rho_{L}) = 0, \hspace{1.5em} \mbox{for}\; \; j =1, \ldots, n . \end{equation} \noindent \underline{\bf STEP 4:} \\ To close the system of equations, we impose smooth-fit conditions and obtain the following four identities: \begin{align} & \hspace{5em} \sum \limits_{s=1}^{m+1} \big( A_{s}^{+} - B_{s}^{+} \big) - \sum \limits_{u=1}^{n+1} B_{u}^{-} e^{-\gamma_{u,(r+\vartheta)} \cdot (\bm{k} -\ell^{\ast})} = 0, \\ & \hspace{4em}\sum \limits_{s=1}^{m+1} B_{s}^{+} e^{\beta_{s,(r+\vartheta)} \cdot (\bm{k}-\ell^{\ast})} + \sum \limits_{u=1}^{n+1} \big( B_{u}^{-} - C_{u}^{-} \big) = \frac{\vartheta e^{\bm{k}}}{\delta + \vartheta} - \frac{\vartheta e^{\bm{k}}}{r + \vartheta} , \\ & \sum \limits_{s=1}^{m+1} \Big( A_{s}^{+} \beta_{s,(r+\vartheta-\rho_{L})} - B_{s}^{+} \beta_{s,(r+\vartheta)} \Big) - \sum \limits_{u=1}^{n+1} B_{u}^{-} \gamma_{u,(r+\vartheta)} e^{-\gamma_{u,(r+\vartheta)} \cdot (\bm{k}-\ell^{\ast})} = 0, \label{smoothpasting1} \\ & \hspace{0.9em} \sum \limits_{s=1}^{m+1} B_{s}^{+} \beta_{s,(r+\vartheta)} e^{\beta_{s,(r+\vartheta)} \cdot (\bm{k}-\ell^{\ast})} + \sum \limits_{u=1}^{n+1} \Big( B_{u}^{-} \gamma_{u,(r+\vartheta)} - C_{u}^{-} \gamma_{u,(r+\vartheta)} \Big) = \frac{\vartheta e^{\bm{k}}}{\delta + \vartheta} . \label{smoothpasting2} \end{align} \noindent Although we do not further comment on the appropriateness of the smooth-fit conditions (\ref{smoothpasting1}) and (\ref{smoothpasting2}), we emphasize that smooth-pasting is very natural under hyper-exponential jump-diffusion markets and refer for similar results, e.g.~to~\cite{ccw10}, \cite{xy13}, \cite{lz16}. \vspace{1em} \\ \noindent \underline{\bf STEP 5:} \\ \noindent To finalize our derivations, we combine the results obtained in STEP 1 - STEP 4. 
This leads to the following system of equations \begin{equation} \mathbf{Q_{E}} \mathbf{v} = \mathbf{q_{E}}, \label{AppendixBSysEq} \end{equation} \noindent where $\mathbf{v} := (A_{1}^{+}, \ldots, A_{m+1}^{+},B_{1}^{+}, \ldots, B_{m+1}^{+}, B_{1}^{-}, \ldots, B_{n+1}^{-}, C_{1}^{-}, \ldots, C_{n+1}^{-})^{\intercal}$. Here, $\mathbf{q_{E}} = (\mathbf{q_{E}^{1}}, \ldots, \mathbf{q_{E}^{8}})^{\intercal}$ is a $(2m+2n+4)$-dimensional column vector, whose elements are defined in the following way: \begin{itemize} \setlength \itemsep{-0.2em} \item[$i)$] $\mathbf{q_{E}^{1}}$ and $\mathbf{q_{E}^{2}}$ are $1 \times m$ vectors given by \begin{align} \big(\mathbf{q_{E}^{1}}\big)_{i} := \frac{\vartheta e^{\bm{k}} e^{-\xi_{i}(\bm{k}-\ell^{\ast})}}{\xi_{i}(r+\vartheta)} & - \frac{\vartheta e^{\bm{k}} e^{-\xi_{i}(\bm{k}-\ell^{\ast})}}{(\xi_{i}-1)(\delta+\vartheta)} , \hspace{1.5em} i = 1, \ldots, m , \\ \big(\mathbf{q_{E}^{2}}\big)_{i} := \frac{\vartheta e^{\bm{k}}}{\xi_{i}(r+\vartheta)} - & \frac{\vartheta e^{\bm{k}}}{(\xi_{i}-1)(\delta + \vartheta)}, \hspace{1.5em} i = 1, \ldots, m, \end{align} \item[$ii)$] $\mathbf{q_{E}^{3}}$ and $\mathbf{q_{E}^{4}}$ are $1 \times n$ vectors given by \begin{align} \big(\mathbf{q_{E}^{3}}\big)_{j} & := 0, \hspace{1.5em} j = 1, \ldots, n, \\ \big(\mathbf{q_{E}^{4}}\big)_{j} := - \frac{\vartheta e^{\bm{k}}}{\eta_{j}(r+\vartheta)} & + \frac{\vartheta e^{\bm{k}}}{(\eta_{j}+1)(\delta+\vartheta)}, \hspace{1.5em} j = 1, \ldots, n , \end{align} \item[$iii)$] $\mathbf{q_{E}^{5}}$, $\mathbf{q_{E}^{6}}$, $\mathbf{q_{E}^{7}}$ and $\mathbf{q_{E}^{8}}$ are real values given by \begin{align} \mathbf{q_{E}^{5}} := 0, \hspace{1.5em} \mathbf{q_{E}^{6}}:= \frac{\vartheta e^{\bm{k}}}{\delta + \vartheta} - \frac{\vartheta e^{\bm{k}}}{r + \vartheta}, \hspace{1.5em} \mathbf{q_{E}^{7}} := 0, \hspace{1.5em} \mathbf{q_{E}^{8}} := \frac{\vartheta e^{\bm{k}}}{\delta + \vartheta}. 
\end{align} \end{itemize} \noindent Finally, $\mathbf{Q_{E}}$ is a $(2m+2n+4)$-dimensional square matrix \begin{equation} \mathbf{Q_{E}} = \left( \begin{array}{cccc} \mathbf{Q_{E}^{11}} & \mathbf{Q_{E}^{12}} & \mathbf{Q_{E}^{13}} & \mathbf{Q_{E}^{14}} \\ \mathbf{Q_{E}^{21}} & \mathbf{Q_{E}^{22}} & \mathbf{Q_{E}^{23}} & \mathbf{Q_{E}^{24}} \\ \vdots & \vdots & \vdots & \vdots \\ \mathbf{Q_{E}^{81}} & \mathbf{Q_{E}^{82}} & \mathbf{Q_{E}^{83}} & \mathbf{Q_{E}^{84}} \end{array} \right) \end{equation} \noindent that is defined in the following way: \begin{itemize} \setlength \itemsep{-0.2em} \item[$i)$] $\mathbf{Q_{E}^{11}}$, $\mathbf{Q_{E}^{12}}$ and $\mathbf{Q_{E}^{13}}$, $\mathbf{Q_{E}^{14}}$ are respectively $m\times (m+1)$ and $m \times (n+1)$ matrices given, for $i=1,\ldots,m$, $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \big( \mathbf{Q_{E}^{11}} \big)_{is} := - \frac{1}{\xi_{i} - \beta_{s,(r+\vartheta-\rho_{L})}}, \hspace{1.5em} \big( \mathbf{Q_{E}^{12}} \big)_{is} := \frac{1 - e^{(\beta_{s,(r+\vartheta)}-\xi_{i}) (\bm{k}-\ell^{\ast})}}{ {\xi_{i} - \beta_{s,(r+\vartheta)}}}, \\ & \big( \mathbf{Q_{E}^{13}} \big)_{iu} := \frac{e^{-\gamma_{u,(r+\vartheta)} (\bm{k} -\ell^{\ast})} - e^{-\xi_{i} (\bm{k}-\ell^{\ast})}}{ {\xi_{i} - \gamma_{u,(r+\vartheta)}}} , \hspace{1.5em} \big( \mathbf{Q_{E}^{14}} \big)_{iu} := \frac{e^{-\xi_{i} (\bm{k}-\ell^{\ast})}}{ {\xi_{i} - \gamma_{u,(r+\vartheta)}}} , \end{align} \item[$ii)$] $\mathbf{Q_{E}^{21}}$, $\mathbf{Q_{E}^{22}}$ and $\mathbf{Q_{E}^{23}}$, $\mathbf{Q_{E}^{24}}$ are respectively $m\times (m+1)$ and $m \times (n+1)$ matrices given, for $i=1,\ldots,m$, $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \hspace{2em} \big( \mathbf{Q_{E}^{21}} \big)_{is} := 0, \hspace{1.5em} \big( \mathbf{Q_{E}^{22}} \big)_{is} := -\frac{e^{\beta_{s,(r+\vartheta)} (\bm{k}-\ell^{\ast}) }}{\xi_{i} - \beta_{s,(r+\vartheta)}}, \\ & \big( \mathbf{Q_{E}^{23}} \big)_{iu} := -\frac{1}{\xi_{i} - \gamma_{u,(r+\vartheta)}} , 
\hspace{1.5em} \big( \mathbf{Q_{E}^{24}} \big)_{iu} := -\big( \mathbf{Q_{E}^{23}} \big)_{iu} , \end{align} \item[$iii)$] $\mathbf{Q_{E}^{31}}$, $\mathbf{Q_{E}^{32}}$ and $\mathbf{Q_{E}^{33}}$, $\mathbf{Q_{E}^{34}}$ are respectively $n\times (m+1)$ and $n \times (n+1)$ matrices given, for $j=1,\ldots,n$, $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \big( \mathbf{Q_{E}^{31}} \big)_{js} := \frac{1}{\eta_{j} + \beta_{s,(r+\vartheta-\rho_{L})}}, \hspace{1.5em} \big( \mathbf{Q_{E}^{32}} \big)_{js} := -\frac{1}{\eta_{j} + \beta_{s,(r+\vartheta)}}, \\ & \hspace{3em} \big( \mathbf{Q_{E}^{33}} \big)_{ju} := -\frac{e^{- \gamma_{u,(r+\vartheta)} (\bm{k} - \ell^{\ast})}}{\eta_{j} + \gamma_{u,(r+\vartheta)}} , \hspace{1.5em} \big( \mathbf{Q_{E}^{34}} \big)_{ju} := 0 , \end{align} \item[$iv)$] $\mathbf{Q_{E}^{41}}$, $\mathbf{Q_{E}^{42}}$ and $\mathbf{Q_{E}^{43}}$, $\mathbf{Q_{E}^{44}}$ are respectively $n\times (m+1)$ and $n \times (n+1)$ matrices given, for $j=1,\ldots,n$, $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \big( \mathbf{Q_{E}^{41}} \big)_{js} := \frac{e^{-\eta_{j} (\bm{k}-\ell^{\ast}) }}{\eta_{j} + \beta_{s,(r+\vartheta-\rho_{L})}}, \hspace{1.5em} \big( \mathbf{Q_{E}^{42}} \big)_{js} := \frac{e^{\beta_{s,(r+\vartheta)} (\bm{k}-\ell^{\ast})} - e^{-\eta_{j}(\bm{k}-\ell^{\ast})}}{ \eta_{j} + \beta_{s,(r+\vartheta)}}, \\ & \hspace{1em} \big( \mathbf{Q_{E}^{43}} \big)_{ju} := \frac{1 - e^{-(\eta_{j}+\gamma_{u,(r+\vartheta)}) (\bm{k}-\ell^{\ast})}}{ {\eta_{j} + \gamma_{u,(r+\vartheta)}}} , \hspace{1.5em} \big( \mathbf{Q_{E}^{44}} \big)_{ju} := -\frac{1}{\eta_{j} + \gamma_{u,(r+\vartheta)}} , \end{align} \item[$v)$] $\mathbf{Q_{E}^{51}}$, $\mathbf{Q_{E}^{52}}$ and $\mathbf{Q_{E}^{53}}$, $\mathbf{Q_{E}^{54}}$ are respectively $1\times (m+1)$ and $1 \times (n+1)$ vectors given, for $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \big( \mathbf{Q_{E}^{51}} \big)_{s} := 1, \hspace{1.5em} \big( \mathbf{Q_{E}^{52}} \big)_{s} := -1, 
\hspace{1.5em} \big( \mathbf{Q_{E}^{53}} \big)_{u} := -e^{-\gamma_{u,(r+\vartheta)} \cdot (\bm{k}-\ell^{\ast})} , \hspace{1.5em} \big( \mathbf{Q_{E}^{54}} \big)_{u} := 0 , \end{align} \item[$vi)$] $\mathbf{Q_{E}^{61}}$, $\mathbf{Q_{E}^{62}}$ and $\mathbf{Q_{E}^{63}}$, $\mathbf{Q_{E}^{64}}$ are respectively $1\times (m+1)$ and $1 \times (n+1)$ vectors given, for $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \big( \mathbf{Q_{E}^{61}} \big)_{s} := 0, \hspace{1.5em} \big( \mathbf{Q_{E}^{62}} \big)_{s} := e^{\beta_{s,(r+\vartheta)} \cdot (\bm{k}-\ell^{\ast})}, \hspace{1.5em} \big( \mathbf{Q_{E}^{63}} \big)_{u} := 1 , \hspace{1.5em} \big( \mathbf{Q_{E}^{64}} \big)_{u} := -1 , \end{align} \item[$vii)$] $\mathbf{Q_{E}^{71}}$, $\mathbf{Q_{E}^{72}}$ and $\mathbf{Q_{E}^{73}}$, $\mathbf{Q_{E}^{74}}$ are respectively $1\times (m+1)$ and $1 \times (n+1)$ vectors given, for $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \hspace{1.2em} \big( \mathbf{Q_{E}^{71}} \big)_{s} := \beta_{s,(r+\vartheta-\rho_{L})} , \hspace{1.5em} \big( \mathbf{Q_{E}^{72}} \big)_{s} := -\beta_{s,(r+\vartheta)} , \\ & \big( \mathbf{Q_{E}^{73}} \big)_{u} := - \gamma_{u,(r+\vartheta)} e^{-\gamma_{u,(r+\vartheta)} \cdot (\bm{k}-\ell^{\ast})} , \hspace{1.5em} \big( \mathbf{Q_{E}^{74}} \big)_{u} := 0 , \end{align} \item[$viii)$] $\mathbf{Q_{E}^{81}}$, $\mathbf{Q_{E}^{82}}$ and $\mathbf{Q_{E}^{83}}$, $\mathbf{Q_{E}^{84}}$ are respectively $1\times (m+1)$ and $1 \times (n+1)$ vectors given, for $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \big( \mathbf{Q_{E}^{81}} \big)_{s} := 0, \hspace{1.5em} \big( \mathbf{Q_{E}^{82}} \big)_{s} := \beta_{s,(r+\vartheta)} e^{\beta_{s,(r+\vartheta)} \cdot (\bm{k}-\ell^{\ast})}, \\ & \hspace{1.2em} \big( \mathbf{Q_{E}^{83}} \big)_{u} := \gamma_{u,(r+\vartheta)} , \hspace{1.5em} \big( \mathbf{Q_{E}^{84}} \big)_{u} := -\big( \mathbf{Q_{E}^{83}} \big)_{u} . 
\end{align} \end{itemize} \end{proof} \begin{proof}[\bf Proof of Proposition \ref{PropAmerHEJD}] \noindent We proceed as in the proof of Proposition \ref{PropEurHEJD}, i.e.~we first rewrite the value of the maturity-randomized early exercise premium $\widehat{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\cdot)$ as a function of the log-price $\bm{x} := \log(x)$ and of the log-strike $\bm{k} := \log(K)$ via $\overline{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\cdot)$ by relying on the following relation \begin{equation} \overline{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) := \widehat{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,e^{\bm{x}}; e^{\bm{k}}, L,\rho_{L}). \end{equation} \noindent This transforms (\ref{MRGSCAmerOIDE1}), (\ref{MRGSCAmerOIDE2}) into the following problem \begin{align} & \mathcal{A}_{X} \overline{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) - \big( r + \vartheta - \rho_{L} \mathds{1}_{(0,L)}(e^{\bm{x}}) \big) \overline{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) = 0, \hspace{1.5em} \mbox{for} \; -\infty < \bm{x} < b^{\ast} ,\\ & \hspace{7.4em} \overline{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) = e^{\bm{x}} - e^{\bm{k}} - \overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}), \hspace{2em} \mbox{for} \; b^{\ast} \leq \bm{x} < \infty, \end{align} \noindent with $\mathcal{A}_{X}$ given as in (\ref{NewGen}) and $b^{\ast}$ denoting the log early exercise boundary, i.e.~$b^{\ast} := \log(\mathfrak{b}_{s})$. 
\noindent Equivalently, this can be written in the following system of three equations \begin{align} \mathcal{A}_{X} \overline{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) - ( r + \vartheta - \rho_{L}) & \overline{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) = 0, \hspace{2.1em} \mbox{for} \; -\infty < \bm{x} < \ell^{\ast}, \label{EQfirst21}\\ \mathcal{A}_{X} \overline{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) - ( r + \vartheta) & \overline{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) = 0, \hspace{2.7em} \mbox{for} \; \ell^{\ast} \leq \bm{x} < b^{\ast}, \label{EQfirst22}\\ \overline{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) = e^{\bm{x}} - e^{\bm{k}} & -\overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}), \hspace{2.6em} \mbox{for} \; b^{\ast} \leq \bm{x} < \infty, \label{EQlast21} \end{align} \noindent where we have set $\ell^{\ast} := \log(L)$. Consequently, following the arguments in the proof of Proposition \ref{PropEurHEJD}, we obtain that the general solution to (\ref{EQfirst21})-(\ref{EQlast21}) takes the following form \begin{equation} \overline{\mathcal{E}_{\mathcal{DOSC}}^{\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) = \left \{ \begin{array}{lc} \sum \limits_{s=1}^{m+1} D_{s}^{+} e^{\beta_{s,(r+\vartheta-\rho_{L})} \cdot (\bm{x}-\ell^{\ast})}, & -\infty < \bm{x} < \ell^{\ast} ,\\ \sum \limits_{s=1}^{m+1} F_{s}^{+} e^{\beta_{s,(r+\vartheta)} \cdot (\bm{x}-\ell^{\ast})} + \sum \limits_{u=1}^{n+1} F_{u}^{-} e^{\gamma_{u,(r+\vartheta)} \cdot (\bm{x}-b^{\ast})}, & \ell^{\ast} \leq \bm{x} < b^{\ast}, \\ e^{\bm{x}} - e^{\bm{k}} -\overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}), & b^{\ast} \leq \bm{x} < \infty, \end{array} \right. 
\end{equation} \noindent where the coefficients $(D_{s}^{+})_{s=1,\ldots,m+1}$, $(F_{s}^{+})_{s=1,\ldots,m+1}$, $(F_{u}^{-})_{u=1,\ldots,n+1}$ and the free-boundary $b^{\ast}$ are subsequently determined by imposing the respective equations on the solution in each of the regions. Here, following the steps outlined in the proof of Proposition \ref{PropEurHEJD}, we arrive at the following system of equations \begin{equation} \mathbf{Q_{A}}\mathbf{w} = \mathbf{q_{A}}, \label{AppendixBAmerSysEq} \end{equation} \noindent where $\mathbf{w} := (D_{1}^{+}, \ldots, D_{m+1}^{+}, F_{1}^{+}, \ldots, F_{m+1}^{+}, F_{1}^{-}, \ldots, F_{n+1}^{-})^{\intercal}$. The vector $\mathbf{q_{A}} = (\mathbf{q_{A}^{1}}, \ldots, \mathbf{q_{A}^{6}})^{\intercal}$ is a $(2m+n+3)$-dimensional column vector, whose elements are defined in the following way: \begin{itemize} \setlength \itemsep{-0.2em} \item[$i)$] $\mathbf{q_{A}^{1}}$ and $\mathbf{q_{A}^{2}}$ are $1 \times m$ vectors given by \begin{align} \big(\mathbf{q_{A}^{1}}\big)_{i} := \sum \limits_{u=1}^{n+1} C_{u}^{-} \frac{e^{- \xi_{i} (b^{\ast}-\ell^{\ast})} e^{\gamma_{u,(r+\vartheta)} \cdot (b^{\ast} - \bm{k})}}{\xi_{i} - \gamma_{u,(r+\vartheta)}} & + \frac{r e^{\bm{k}} e^{-\xi_{i} (b^{\ast}-\ell^{\ast})}}{\xi_{i}(r+\vartheta)} - \frac{\delta e^{b^{\ast}} e^{-\xi_{i} (b^{\ast}-\ell^{\ast})}}{(\xi_{i}-1)(\delta+\vartheta)}, \hspace{1.5em} i = 1, \ldots, m , \\ \big(\mathbf{q_{A}^{2}}\big)_{i} := \sum \limits_{u=1}^{n+1} C_{u}^{-} \frac{ e^{\gamma_{u,(r+\vartheta)} \cdot (b^{\ast} - \bm{k})}}{\xi_{i} - \gamma_{u,(r+\vartheta)}} & + \frac{r e^{\bm{k}}}{\xi_{i}(r+\vartheta)} - \frac{\delta e^{b^{\ast}} }{(\xi_{i}-1)(\delta+\vartheta)}, \hspace{1.5em} i = 1, \ldots, m , \end{align} \item[$ii)$] $\mathbf{q_{A}^{3}}$ is a $1 \times n$ vector given by $\big(\mathbf{q_{A}^{3}}\big)_{j} := 0, \; j = 1, \ldots, n,$ \item[$iii)$] $\mathbf{q_{A}^{4}}$, $\mathbf{q_{A}^{5}}$, $\mathbf{q_{A}^{6}}$ are real values given by \begin{align} 
\mathbf{q_{A}^{4}} := 0, \hspace{1.5em} \mathbf{q_{A}^{5}}:= \frac{\delta e^{b^{\ast}}}{\delta + \vartheta} - \frac{r e^{\bm{k}}}{r + \vartheta} - \sum \limits_{u=1}^{n+1} C_{u}^{-} e^{\gamma_{u,(r+\vartheta)} \cdot (b^{\ast}-\bm{k})}, \hspace{1.5em} \mathbf{q_{A}^{6}} := 0. \end{align} \end{itemize} \noindent Additionally, $\mathbf{Q_{A}}$ is a $(2m+n+3)$-dimensional square matrix \begin{equation} \mathbf{Q_{A}} = \left( \begin{array}{ccc} \mathbf{Q_{A}^{11}} & \mathbf{Q_{A}^{12}} & \mathbf{Q_{A}^{13}} \\ \mathbf{Q_{A}^{21}} & \mathbf{Q_{A}^{22}} & \mathbf{Q_{A}^{23}} \\ \vdots & \vdots & \vdots \\ \mathbf{Q_{A}^{61}} & \mathbf{Q_{A}^{62}} & \mathbf{Q_{A}^{63}} \end{array} \right) \end{equation} \noindent that is defined in the following way: \begin{itemize} \setlength \itemsep{-0.2em} \item[$i)$] $\mathbf{Q_{A}^{11}}$, $\mathbf{Q_{A}^{12}}$ and $\mathbf{Q_{A}^{13}}$ are respectively $m\times (m+1)$ and $m \times (n+1)$ matrices given, for $i=1,\ldots,m$, $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \big( \mathbf{Q_{A}^{11}} \big)_{is} := - \frac{1}{\xi_{i} - \beta_{s,(r+\vartheta-\rho_{L})}}, \hspace{1.5em} \big( \mathbf{Q_{A}^{12}} \big)_{is} := \frac{1-e^{(\beta_{s,(r+\vartheta)}-\xi_{i}) (b^{\ast} -\ell^{\ast})}}{ \xi_{i} - \beta_{s,(r+\vartheta)}}, \\ & \hspace{5.5em} \big( \mathbf{Q_{A}^{13}} \big)_{iu} := \frac{e^{-\gamma_{u,(r+\vartheta)} \cdot (b^{\ast}-\ell^{\ast})} - e^{-\xi_{i} (b^{\ast}-\ell^{\ast})}}{ \xi_{i} - \gamma_{u,(r+\vartheta)}} , \end{align} \item[$ii)$] $\mathbf{Q_{A}^{21}}$, $\mathbf{Q_{A}^{22}}$ and $\mathbf{Q_{A}^{23}}$ are respectively $m\times (m+1)$ and $m \times (n+1)$ matrices given, for $i=1,\ldots,m$, $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \big( \mathbf{Q_{A}^{21}} \big)_{is} := 0, \hspace{1.5em} \big( \mathbf{Q_{A}^{22}} \big)_{is} := -\frac{e^{\beta_{s,(r+\vartheta)} \cdot (b^{\ast}-\ell^{\ast}) }}{\xi_{i} - \beta_{s,(r+\vartheta)}}, \hspace{1.5em} \big( \mathbf{Q_{A}^{23}} \big)_{iu} := 
-\frac{1}{ \xi_{i} - \gamma_{u,(r+\vartheta)}} , \end{align} \item[$iii)$] $\mathbf{Q_{A}^{31}}$, $\mathbf{Q_{A}^{32}}$ and $\mathbf{Q_{A}^{33}}$ are respectively $n\times (m+1)$ and $n \times (n+1)$ matrices given, for $j=1,\ldots,n$, $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \big( \mathbf{Q_{A}^{31}} \big)_{js} := \frac{1}{\eta_{j} + \beta_{s,(r+\vartheta-\rho_{L})}}, \hspace{1.5em} \big( \mathbf{Q_{A}^{32}} \big)_{js} := -\frac{1}{\eta_{j} + \beta_{s,(r+\vartheta)}}, \\ & \hspace{6em} \big( \mathbf{Q_{A}^{33}} \big)_{ju} := -\frac{e^{- \gamma_{u,(r+\vartheta)} \cdot (b^{\ast} -\ell^{\ast})}}{\eta_{j} + \gamma_{u,(r+\vartheta)}} , \end{align} \item[$iv)$] $\mathbf{Q_{A}^{41}}$, $\mathbf{Q_{A}^{42}}$ and $\mathbf{Q_{A}^{43}}$ are respectively $1\times (m+1)$ and $1 \times (n+1)$ vectors given, for $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \big( \mathbf{Q_{A}^{41}} \big)_{s} := 1, \hspace{1.5em} \big( \mathbf{Q_{A}^{42}} \big)_{s} := -1, \hspace{1.5em} \big( \mathbf{Q_{A}^{43}} \big)_{u} := -e^{-\gamma_{u,(r+\vartheta)} \cdot (b^{\ast}-\ell^{\ast})} , \end{align} \item[$v)$] $\mathbf{Q_{A}^{51}}$, $\mathbf{Q_{A}^{52}}$ and $\mathbf{Q_{A}^{53}}$ are respectively $1\times (m+1)$ and $1 \times (n+1)$ vectors given, for $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \big( \mathbf{Q_{A}^{51}} \big)_{s} := 0, \hspace{1.5em} \big( \mathbf{Q_{A}^{52}} \big)_{s} := e^{\beta_{s,(r+\vartheta)} \cdot (b^{\ast}-\ell^{\ast})}, \hspace{1.5em} \big( \mathbf{Q_{A}^{53}} \big)_{u} := 1 , \end{align} \item[$vi)$] $\mathbf{Q_{A}^{61}}$, $\mathbf{Q_{A}^{62}}$ and $\mathbf{Q_{A}^{63}}$ are respectively $1\times (m+1)$ and $1 \times (n+1)$ vectors given, for $s=1,\ldots, m+1$, and $u=1,\ldots, n+1$, by \begin{align} & \big( \mathbf{Q_{A}^{61}} \big)_{s} := \beta_{s,(r+\vartheta-\rho_{L})} , \hspace{1.5em} \big( \mathbf{Q_{A}^{62}} \big)_{s} := -\beta_{s,(r+\vartheta)} , \hspace{1.5em} \big( \mathbf{Q_{A}^{63}} \big)_{u} := - 
\gamma_{u,(r+\vartheta)} e^{-\gamma_{u,(r+\vartheta)} \cdot (b^{\ast}-\ell^{\ast})}. \end{align} \end{itemize} \noindent Finally, the free-boundary $b^{\ast}$ can be recovered by combining (\ref{AppendixBAmerSysEq}) with the usual smooth-fit condition at the boundary level:\footnote{We emphasize that smooth-fit at the boundary $b^{\ast}$ can be proved using the same approach as the one outlined in \cite{Ma18}; See also \cite{pe06}, \cite{lm11}.} \begin{equation} \sum \limits_{s=1}^{m+1} F_{s}^{+} \beta_{s,(r+\vartheta)} e^{\beta_{s,(r+\vartheta)} \cdot (b^{\ast}-\ell^{\ast})} + \sum \limits_{u=1}^{n+1} F_{u}^{-} \gamma_{u,(r+\vartheta)} = \frac{\delta e^{b^{\ast}}}{\delta + \vartheta} - \sum \limits_{u=1}^{n+1} C_{u}^{-} \gamma_{u,(r+\vartheta)} e^{\gamma_{u,(r+\vartheta)} \cdot (b^{\ast}-\bm{k})}. \label{SPuseful} \end{equation} \end{proof} \begin{proof}[\bf Proof of Proposition \ref{PropEepHEJD}] \noindent To derive Representations (\ref{DiffEEP_MR1}) and (\ref{JumpEEP_MR1}), we mainly rely on the proof of Proposition~\ref{PropAmerHEJD}. 
As earlier, we write \begin{align} \overline{\mathcal{E}_{\mathcal{DOSC}}^{0,\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) &:= \widehat{\mathcal{E}_{\mathcal{DOSC}}^{0,\star}}(\vartheta,e^{\bm{x}}; e^{\bm{k}}, L,\rho_{L}), \\ \overline{\mathcal{E}_{\mathcal{DOSC}}^{\mathcal{J},\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) & := \widehat{\mathcal{E}_{\mathcal{DOSC}}^{\mathcal{J},\star}}(\vartheta,e^{\bm{x}}; e^{\bm{k}}, L,\rho_{L}), \end{align} \noindent and obtain, by the same arguments as in the proof of Proposition \ref{PropAmerHEJD}, that $\overline{\mathcal{E}_{\mathcal{DOSC}}^{0,\star}}(\cdot)$ and $\overline{\mathcal{E}_{\mathcal{DOSC}}^{\mathcal{J},\star}}(\cdot)$ take the form \begin{equation} \overline{\mathcal{E}_{\mathcal{DOSC}}^{0,\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) = \left \{ \begin{array}{lc} \sum \limits_{s=1}^{m+1} D_{s}^{0,+} e^{\beta_{s,(r+\vartheta-\rho_{L})} \cdot (\bm{x}-\ell^{\ast})}, & -\infty < \bm{x} < \ell^{\ast} ,\\ \sum \limits_{s=1}^{m+1} F_{s}^{0,+} e^{\beta_{s,(r+\vartheta)} \cdot (\bm{x}-\ell^{\ast})} + \sum \limits_{u=1}^{n+1} F_{u}^{0,-} e^{\gamma_{u,(r+\vartheta)} \cdot (\bm{x}-b^{\ast})}, & \ell^{\ast} \leq \bm{x} < b^{\ast}, \\ e^{\bm{x}} - e^{\bm{k}} -\overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}), & \bm{x} = b^{\ast}, \\ 0, & b^{\ast} < \bm{x} < \infty, \end{array} \right. 
\end{equation} \begin{equation} \overline{\mathcal{E}_{\mathcal{DOSC}}^{\mathcal{J},\star}}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}) = \left \{ \begin{array}{lc} \sum \limits_{s=1}^{m+1} D_{s}^{\mathcal{J},+} e^{\beta_{s,(r+\vartheta-\rho_{L})} \cdot (\bm{x}-\ell^{\ast})}, & -\infty < \bm{x} < \ell^{\ast} ,\\ \sum \limits_{s=1}^{m+1} F_{s}^{\mathcal{J},+} e^{\beta_{s,(r+\vartheta)} \cdot (\bm{x}-\ell^{\ast})} + \sum \limits_{u=1}^{n+1} F_{u}^{\mathcal{J},-} e^{\gamma_{u,(r+\vartheta)} \cdot (\bm{x}-b^{\ast})}, & \ell^{\ast} \leq \bm{x} < b^{\ast}, \\ 0, & \bm{x} = b^{\ast}, \\ e^{\bm{x}} - e^{\bm{k}} -\overline{\mathcal{DOSC}}_{E}^{\star}(\vartheta,\bm{x}; \bm{k}, L, \rho_{L}), & b^{\ast} < \bm{x} < \infty. \end{array} \right. \end{equation} \noindent Here, analogous derivations to the ones in the proof of Proposition \ref{PropAmerHEJD} show that the vectors of coefficients $$ \mathbf{w_{0}} := (D_{1}^{0,+}, \ldots, D_{m+1}^{0,+}, F_{1}^{0,+}, \ldots, F_{m+1}^{0,+}, F_{1}^{0,-}, \ldots, F_{n+1}^{0,-})^{\intercal} $$ \noindent and $$\mathbf{w_{J}} := (D_{1}^{\mathcal{J},+}, \ldots, D_{m+1}^{\mathcal{J},+}, F_{1}^{\mathcal{J},+}, \ldots, F_{m+1}^{\mathcal{J},+}, F_{1}^{\mathcal{J},-}, \ldots, F_{n+1}^{\mathcal{J},-})^{\intercal}$$ solve the following system of equations, respectively, \begin{equation} \mathbf{Q_{A}}\mathbf{w_{0}} = \mathbf{q_{A,0}}, \hspace{1.5em} \mbox{and} \hspace{1.5em} \mathbf{Q_{A}}\mathbf{w_{J}} = \mathbf{q_{A,J}}, \label{AppendixBEepSysEq} \end{equation} \noindent where $\mathbf{q_{A,0}} = (\mathbf{q_{A,0}^{1}}, \ldots, \mathbf{q_{A,0}^{6}})^{\intercal}$ and $\mathbf{q_{A,J}} = (\mathbf{q_{A,J}^{1}}, \ldots, \mathbf{q_{A,J}^{6}})^{\intercal}$ are $(2m+n+3)$-dimensional column vectors, whose elements are defined by: \begin{itemize} \setlength \itemsep{-0.2em} \item[$i)$] $\mathbf{q_{A,0}^{1}}$ and $\mathbf{q_{A,0}^{2}}$ are $1 \times m$ vectors given by $\big(\mathbf{q_{A,0}^{1}}\big)_{i} := 0, \; \big(\mathbf{q_{A,0}^{2}}\big)_{i} := 0, \; i= 1, 
\ldots, m,$ \item[$ii)$] $\mathbf{q_{A,0}^{3}}$ is a $1 \times n$ vector given by $\big(\mathbf{q_{A,0}^{3}}\big)_{j} := 0, \; j = 1, \ldots, n,$ \item[$iii)$] $\mathbf{q_{A,0}^{4}}$, $\mathbf{q_{A,0}^{5}}$, $\mathbf{q_{A,0}^{6}}$ are real values given by \begin{align} \mathbf{q_{A,0}^{4}} := 0, \hspace{1.5em} \mathbf{q_{A,0}^{5}}:= \frac{\delta e^{b^{\ast}}}{\delta + \vartheta} - \frac{r e^{\bm{k}}}{r + \vartheta} - \sum \limits_{u=1}^{n+1} C_{u}^{-} e^{\gamma_{u,(r+\vartheta)} \cdot (b^{\ast}-\bm{k})}, \hspace{1.5em} \mathbf{q_{A,0}^{6}} := 0. \end{align} \end{itemize} \noindent and \begin{itemize} \setlength \itemsep{-0.2em} \item[$i)$] $\mathbf{q_{A,J}^{1}}$ and $\mathbf{q_{A,J}^{2}}$ are $1 \times m$ vectors given by \begin{align} \big(\mathbf{q_{A,J}^{1}}\big)_{i} := \sum \limits_{u=1}^{n+1} C_{u}^{-} \frac{e^{- \xi_{i} (b^{\ast}-\ell^{\ast})} e^{\gamma_{u,(r+\vartheta)} \cdot (b^{\ast} - \bm{k})}}{\xi_{i} - \gamma_{u,(r+\vartheta)}} & + \frac{r e^{\bm{k}} e^{-\xi_{i} (b^{\ast}-\ell^{\ast})}}{\xi_{i}(r+\vartheta)} - \frac{\delta e^{b^{\ast}} e^{-\xi_{i} (b^{\ast}-\ell^{\ast})}}{(\xi_{i}-1)(\delta+\vartheta)}, \hspace{1.5em} i = 1, \ldots, m , \\ \big(\mathbf{q_{A,J}^{2}}\big)_{i} := \sum \limits_{u=1}^{n+1} C_{u}^{-} \frac{ e^{\gamma_{u,(r+\vartheta)} \cdot (b^{\ast} - \bm{k})}}{\xi_{i} - \gamma_{u,(r+\vartheta)}} & + \frac{r e^{\bm{k}}}{\xi_{i}(r+\vartheta)} - \frac{\delta e^{b^{\ast}} }{(\xi_{i}-1)(\delta+\vartheta)}, \hspace{1.5em} i = 1, \ldots, m , \end{align} \item[$ii)$] $\mathbf{q_{A,J}^{3}}$ is a $1 \times n$ vector given by $\big(\mathbf{q_{A,J}^{3}}\big)_{j} := 0, \; j = 1, \ldots, n,$ \item[$iii)$] $\mathbf{q_{A,J}^{4}}$, $\mathbf{q_{A,J}^{5}}$, $\mathbf{q_{A,J}^{6}}$ are real values given by \begin{align} \mathbf{q_{A,J}^{4}} := 0, \hspace{1.5em} \mathbf{q_{A,J}^{5}}:= 0, \hspace{1.5em} \mathbf{q_{A,J}^{6}} := 0. 
\end{align} \end{itemize} \noindent As a final remark, it is worth mentioning that the above values for $\mathbf{q_{A,0}^{5}}$ and $\mathbf{q_{A,J}^{5}}$ only hold under the assumption that $\sigma_{X} >0$. In fact, whenever $\sigma_{X} = 0$, hyper-exponential jump-diffusion processes reduce to finite-activity pure jump processes and the corresponding continuous-fit conditions no longer hold at the boundary level $b^{\ast}$ (cf.~\cite{fmv19}). \end{proof} \newpage
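For readers interested in implementing the results above, note that the coefficient vectors solve dense linear systems, while the free boundary $b^{\ast}$ enters both $\mathbf{Q_{A}}$ and $\mathbf{q_{A}}$; it is therefore natural to solve the linear system for a trial boundary and adjust the boundary until the smooth-fit condition (\ref{SPuseful}) holds. The Python sketch below illustrates this solve-then-bisect pattern on a toy two-dimensional system; the matrix entries, right-hand side, and residual are placeholders for illustration only, not the actual entries of $\mathbf{Q_{A}}$ and $\mathbf{q_{A}}$.

```python
import numpy as np

# Toy stand-in for the boundary-dependent system Q_A(b) w = q_A(b).
# The entries below are illustrative placeholders, not the entries
# derived in the proposition.
def coefficients(b):
    Q = np.array([[2.0 + b, 1.0],
                  [1.0, 3.0]])
    q = np.array([np.exp(b), 1.0])
    return np.linalg.solve(Q, q)

# Toy stand-in for the smooth-fit condition: a scalar residual that
# vanishes at the correct boundary.
def smooth_fit_residual(b):
    w = coefficients(b)
    return w[0] + w[1] - 1.0

# Bisection on the residual, mimicking how b* is recovered by coupling
# the linear solve with the smooth-fit equation.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if smooth_fit_residual(lo) * smooth_fit_residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
b_star = 0.5 * (lo + hi)

# At convergence the smooth-fit residual is (numerically) zero.
assert abs(smooth_fit_residual(b_star)) < 1e-8
```

In an actual implementation one would assemble $\mathbf{Q_{A}}(b)$ and $\mathbf{q_{A}}(b)$ from the expressions in the proof and replace the toy residual with the left-minus-right side of (\ref{SPuseful}); the outer root-finding loop is unchanged.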
\section{\uppercase{Introduction}} \subsection{The Role of Meridional Circulations in Transport and Mixing} The interiors of stars, in common with all rotating fluids, frequently exhibit meridional circulations, and the transport of angular momentum and chemical species by these circulations has significant consequences for stellar evolution \citep[{e.g.,}][]{Mestel53,ChaboyerZahn92,Pinsonneault97}. Within convective zones, transport by meridional circulations is generally less important than transport by turbulent convective motions, but within radiative zones transport by meridional circulations may be a dominant mechanism. Such transport may play a significant role in the spin-down of the Sun's core \citep{Howard-etal67,Spiegel72,Clark75} and the depletion of lithium in solar-type stars \citep[{e.g.,}][]{CharbonneauMichaud88,GaraudBodenheimer10}. Meridional flows also transport magnetic flux, a fact that forms the basis of certain theories of the solar magnetic cycle \cite[{e.g.,}][]{DikpatiCharbonneau99} and the solar interior rotation \citep{GoughMcIntyre98}. In one-dimensional (1D) models of stellar interiors, transport by meridional circulations is sometimes parameterized in terms of a ``rotationally-induced mixing'' parameter \citep{Zahn92} derived from idealized laminar or mean-field models. Many such models have been proposed, leading to many different prescriptions for transport and mixing, all of which assume that the amplitude and radial extent of the circulation, and hence the degree of mixing, depend only on the spherically-averaged angular velocity, $\Omega(r)$. Such a description is clearly limited: in the Sun, for example, the spherically-averaged angular velocity is almost uniform \citep{Couvidat-etal03}, yet the \emph{latitudinal} differential rotation drives a meridional circulation, as described in Section~\ref{sec:driving}. Moreover, the validity of such parameterizations has never been confirmed by any 3D numerical model. 
In this paper we use a fully compressible, 3D numerical model to study the behavior of meridional flows driven by differential rotation of the convective envelopes of solar-type stars. We compare the results of our simulations with the predictions of both laminar and mean-field models and test the assumptions on which those models are based. In the present work we consider only non-magnetic processes; the effects of magnetic fields will be addressed in future papers. Ultimately, our aim is to construct a better parameterization for the role of meridional flows in transporting angular momentum, chemical species, and magnetic fields in stellar interiors. The rest of the paper is structured as follows. In Sections~\ref{sec:driving}--\ref{sec:review} we briefly review the physical mechanisms that give rise to meridional flows and discuss their expected behavior in the context of stellar interiors. In Section~\ref{sec:global} we summarize the results of recent global numerical simulations of angular momentum transport in the solar interior. Our numerical model is described in Section~\ref{sec:model}, and parameter constraints are outlined in Section~\ref{sec:parameters}. In Section~\ref{sec:results} we present results from four simulations performed in different parameter regimes. Our findings are summarized in Section~\ref{sec:summary} and discussed in relation to the results of other studies. \subsection{Driving of Meridional Circulations} \label{sec:driving} The presence of meridional circulations in stellar interiors can be explained by two complementary arguments. The first is a generalization of the classical Vogt--Eddington argument \citep{Vogt25,Eddington25}. In a rotating star, a balance between centrifugal, Coriolis, pressure, and gravitational forces in the meridional plane generally requires that temperature is non-constant within each horizontal surface, implying that the radiative heat flux has nonzero divergence. 
Within a radiative (i.e.,\ non-convecting) zone, advection of entropy by a meridional flow is the only mechanism that can balance the divergence of the heat flux. A meridional circulation must therefore be present in order to maintain local thermal equilibrium and circumvent the so-called ``von Zeipel paradox'' \citep[{e.g.,}][\S5.4]{Mestel99}. For a given internal rotation profile, we can in principle deduce the meridional flow required to maintain the balances just described. Although it is tempting to say that the rotation profile ``drives'' the meridional flow, it is more correct to say that the persistence of the rotation profile, on timescales for which meridional force balance and local thermal equilibrium are expected to apply, \emph{implies} the presence of a meridional flow. The transport of angular momentum by this meridional flow will feed back onto the rotation profile, causing it to evolve over time. The classical example of this problem considers a star initially in uniform rotation \citep{Sweet50}; in that case the meridional circulation that arises is known as the Eddington--Sweet circulation. Advection of angular momentum by this circulation causes the star to develop differential rotation on the Eddington--Sweet timescale, \begin{equation} t_{\rm ES} = \left(\frac{N}{2\Omega}\right)^{\!2} R^2/\kappa, \label{eq:tES} \end{equation} where $\Omega$ is the initial rotation rate, $N$ is the buoyancy frequency, $\kappa$ is the thermal diffusivity, and $R$ is the stellar radius. For a solar-type star, this timescale is typically longer than the star's main-sequence lifetime, and so transport by the Eddington--Sweet circulation is usually neglected in stellar evolution models. However, in a differentially rotating star the meridional flows implied by the Vogt--Eddington argument can be much stronger than the classical Eddington--Sweet circulation, particularly in regions where the angular velocity gradient is large.
In that case transport by the circulation can be significant. In the classical Eddington--Sweet problem it is assumed that the interior of the star is not subject to any internal torques arising from viscous, Maxwell, or small-scale Reynolds stresses, and so meridional circulations are the only means of transporting angular momentum. In fact, the consideration of such torques provides the second argument for the presence of meridional flows. In a quasi-steady state, any torque arising from viscous, Maxwell, or Reynolds stresses must be balanced by a Coriolis torque, because neither pressure nor gravity has a mean azimuthal component, and a meridional flow must be present to provide this Coriolis torque. The process by which an applied torque drives a meridional flow is often called ``gyroscopic pumping'' \citep{McIntyre00} by analogy with the Ekman pumping that occurs within Ekman layers. One example is the solar convection zone, in which turbulent Reynolds stresses maintain a state of differential rotation by systematically transporting angular momentum from high to low latitudes \citep[{e.g.,}][]{Miesch05}. This turbulent Reynolds torque, prograde in low latitudes and retrograde in high latitudes, also gyroscopically pumps a meridional circulation, as originally discussed by \citet{Kippenhahn63}. In general, part of the circulation must extend beneath the convection zone and into the radiation zone (unless by chance the vertically-integrated pumping torque within the convection zone happens to be exactly zero --- see \citet[][\S8.2]{McIntyre07}). The two arguments summarized here make slightly different assumptions about the balances of momentum and internal energy, and so one or the other may be preferred under different circumstances. The gyroscopic pumping argument is in a sense more general, since it does not assume the presence of stable stratification. 
In practice, the two arguments are often complementary; for example, the presence of meridional flows below the solar convection zone can also be inferred from the differential rotation of the solar tachocline, using the Vogt--Eddington argument \citep{SpiegelZahn92,GoughMcIntyre98}. \subsection{Predictions from Laminar and Mean-Field Models} \label{sec:review} An important property of gyroscopically pumped meridional circulations is their tendency to ``burrow'' through stably stratified regions; that is, the circulation extends progressively deeper over time. In this way, a meridional circulation that is gyroscopically pumped in the convective envelope of a solar-type star can burrow into the radiative interior, exchanging angular momentum between the two zones. This burrowing process was originally studied in connection with the problem of solar spin-down, that is, the gradual extraction of angular momentum from the solar interior caused by magnetic braking at the Sun's surface \citep{Schatzman62,Howard-etal67}. Such studies usually assumed idealized, laminar conditions, or else relied on laboratory analogies \citep{Sakurai70,BentonClark74,Clark75}. These studies neglected any mean effects arising from waves, instabilities, or turbulence. The first description of the burrowing process was given by \citet{Clark73}, for linear perturbations to a state of uniform rotation in a cylinder of stably stratified fluid. Assuming a balance of meridional forces (i.e.,\ hydrostatic and cyclostrophic balance), as well as local thermal equilibrium, he showed that a change in the rotation of the boundaries of the cylinder drove a meridional circulation within a boundary layer whose thickness grew ``hyperdiffusively'' as $t^{1/4}$. Advection of angular momentum by the meridional circulation caused the angular velocity perturbation imposed at the boundary to propagate into the cylinder at the same rate.
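For orientation, the Eddington--Sweet timescale of Equation~(\ref{eq:tES}) can be evaluated numerically. The sketch below uses rough, order-of-magnitude solar values (not values taken from any particular model) and is intended only to illustrate why this timescale exceeds the main-sequence lifetime of a solar-type star.

```python
# Illustrative evaluation of the Eddington--Sweet timescale,
#   t_ES = (N / (2 Omega))^2 * R^2 / kappa.
# All inputs below are rough order-of-magnitude solar estimates,
# chosen for illustration only.

N = 1.0e-3        # buoyancy frequency in the radiation zone [rad/s]
Omega = 2.7e-6    # rotation rate [rad/s]
R = 5.0e10        # radius of the radiation zone [cm]
kappa = 1.0e7     # radiative (thermal) diffusivity [cm^2/s]

t_ES = (N / (2.0 * Omega))**2 * R**2 / kappa   # [s]
t_ES_years = t_ES / 3.15e7                     # seconds per year ~ 3.15e7

print(f"t_ES ~ {t_ES_years:.1e} years")
```

With these inputs the result is of order $10^{11}$--$10^{12}$ years, far longer than the main-sequence lifetime of a solar-type star.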
Burrowing of meridional circulations is now known to be a robust feature of models in which meridional advection dominates the transport of angular momentum \citep[{e.g.,}][]{Haynes-etal91,SpiegelZahn92,Elliott97}. For fluids that are ``heavily stratified'', meaning that the buoyancy frequency $N$ far exceeds the rotation rate $\Omega$, burrowing requires the presence of a ``thermal relaxation'' mechanism that mitigates the effect of the stratification. For this reason, in stellar interiors the rate of burrowing is dependent on the rate of radiative diffusion, and so the burrowing process in this context is sometimes called ``radiative spreading'' \citep{SpiegelZahn92}. Following \citet{Haynes-etal91} and \citet{McIntyre02} we prefer to call it ``burrowing'' because of the circulation's tendency to extend downward, rather than upward, in a fluid with a finite density scale height. Assuming that the results of laminar models can be applied directly to stellar interiors, the time required for meridional circulations to burrow across an entire stellar radiation zone, and thereby communicate the spin-down of the surface all the way to the center, is of the order of the Eddington--Sweet time, $t_{\rm ES}$, given by Equation~(\ref{eq:tES}), where now $R$ is the radius of the radiation zone. These spin-down circulations are closely analogous to the Eddington--Sweet circulations described in the previous section \citep{Clark75}. Because the timescale $t_{\rm ES}$ is typically longer than the main-sequence lifetime of a solar-type star, spin-down by meridional circulations is not expected to operate all the way to the center of the star. In the solar interior, the Eddington--Sweet timescale is currently $\sim 10^{12}$~years \citep[{e.g.,}][]{Gough07}, and so \citet{Clark75} predicted that the solar rotation rate increases significantly with depth in the radiation zone. 
This picture of solar spin-down by burrowing meridional circulations was challenged by the advent of helioseismology, which revealed that the solar radiation zone rotates uniformly in both radius and latitude. This uniform rotation is incompatible with any model in which angular momentum transport occurs only by meridional advection \citep{SpiegelZahn92}, implying that other transport processes are operating in the solar radiation zone. Different authors have suggested that anisotropic turbulence \citep{SpiegelZahn92,ChaboyerZahn92}, internal gravity waves \citep[{e.g.,}][]{Schatzman93,Zahn-etal97}, or primordial magnetic fields \citep{CharbonneauMacGregor93,RudigerKitchatinov97,GoughMcIntyre98} might be responsible. Various parameterizations for these processes have subsequently been incorporated into one-dimensional stellar evolution models \citep[{e.g.,}][]{CharbonnelTalon05,Eggenberger-etal05}. \subsection{Results from Global Numerical Simulations} \label{sec:global} Recently, attempts have been made to model angular momentum transport in the solar interior using global-scale, two- and three-dimensional numerical simulations \citep{Rogers11,Brun-etal11,Strugarek-etal11}. These simulations include the effects of convective turbulence, gravity waves, and magnetic fields self-consistently, and can therefore be used to test the predictions of the laminar and mean-field models described above. Unfortunately, these studies arrive at rather different conclusions, and none of them lends support to any of the pictures of angular momentum transport just listed. \citet{Brun-etal11} find that angular momentum transport below the convection zone is dominated by viscous stresses, whereas \citet{Rogers11} finds that transport by meridional advection and viscous stresses cancel, with apparently no net exchange of angular momentum between the convection and radiation zones. 
\citet{Rogers11} also finds that the presence of a magnetic field has little effect on angular momentum transport, whereas \citet{Strugarek-etal11} find that a global-scale interior magnetic field dominates the angular momentum transport, but tends to produce nonuniform rotation within the radiation zone. None of these simulations exhibit the burrowing of meridional flows predicted by laminar models; in fact, such burrowing has never been observed in any self-consistent, three-dimensional numerical simulation. However, the parameter regime in which burrowing is expected to occur is rather difficult to achieve in numerical simulations. As originally noted by \citet{Clark73}, the transport of angular momentum by meridional circulations acts in competition with transport by viscosity. Since the timescale for transport by meridional circulation is very long, of order $t_{\rm ES}$, the transport will be dominated by viscous stresses unless the Prandtl number is very small (further detail is given in Section~\ref{sec:parameters}). Recently, \citet{GaraudBrummell08} and \citet{GaraudAA09} have studied the penetration of meridional flows into stellar radiative zones using laminar, axisymmetric, steady-state models. They find that the ratio of $t_{\rm ES}$ and the viscous diffusion timescale plays an important role in determining both the magnitude and structure of meridional flows. These results have not yet been confirmed by any self-consistent, three-dimensional numerical model. In an attempt to understand the discrepancies between the results of the global-scale numerical simulations, as well as their departures from the predictions of earlier models, we here study in greater detail the processes that transport angular momentum between the convective envelope and radiative interior of solar-type stars. 
We use a local Cartesian numerical model that incorporates the nonlinear effects of convection and gravity waves, and that allows us to study parameter regimes that are not attainable in global-scale simulations. \section{\uppercase{Model}} \label{sec:model} We use the same code used by \citet{Brummell-etal02} to study the penetration of convection into a radiation zone. This is a fully compressible, pseudo-spectral, $f$-plane code that solves the ideal gas equations within a Cartesian box in a rotating frame. We adopt Cartesian coordinates in which $x$, $y$, and $z$ correspond to azimuth, colatitude, and depth, respectively. The computational domain, illustrated in Figure~\ref{fig:box}, is periodic in both horizontal directions, $x$ and $y$. We model a localized region at the interface between the convection zone and radiation zone. Using a local Cartesian model, rather than a global spherical model, has the advantage that all of the available computational power is devoted to studying the interaction between the two zones. However, our results need to be interpreted in the context of the global stellar interior. For simplicity we take the rotation axis to be vertical ($\bm{\Omega} = -\Omega\mathbf{e}_z$), which is a reasonable approximation at high latitudes, where the burrowing of meridional flows is expected to be most effective \citep[{e.g.,}][]{Haynes-etal91}. Studies in global models \citep[{e.g.,}][]{Elliott97} have shown that burrowing also occurs at lower latitudes, and that the direction of burrowing is then roughly parallel to the rotation axis. For this reason, local and global studies of meridional circulations tend to produce results that are qualitatively similar, but differ by a ``geometrical factor'' of order unity \citep[{e.g.,}][]{GaraudBodenheimer10}. \begin{figure}[h!] \centering% \includegraphics[width=8cm]{Fig01.eps}% \caption{% Illustration of the computational domain. 
The white and black coloring respectively indicates positive and negative values of $u_x$ at time $t=t_0$ in the simulation described in Section~\ref{sec:burrow}. The thickness of the convective layer, $L$, is fixed, but convective overshoot causes the shear flow to extend into the radiation zone.} \label{fig:box} \end{figure} As in the study of \citet{Brummell-etal02}, we choose a vertical profile for the thermal conductivity, $k(z)$, and impose a uniform vertical heat flux $H$ at the bottom of the domain. We choose $k(z)$ and $H$ such that the radiative temperature gradient is subadiabatic in the lower part of the domain, and superadiabatic (i.e.,\ convectively unstable) in the upper part of the domain. This naturally leads to the formation of a lower radiation zone and an upper convection zone. In all the computations presented here the box size is $2L\times2L\times4L$, where $L$ is the thickness of the convection zone. The radiation zone is chosen to be three times thicker than the convection zone in order to minimize the influence of the bottom boundary on the dynamics. The numerical resolution is typically $100\times100\times400$; we require greater resolution in the vertical direction in order to accurately resolve any boundary layers that form at the interface between the convection and radiation zones. The top and bottom boundaries of the domain are both impenetrable and stress-free. We impose constant temperature $T_0$ at the top of the domain, $z=0$, and a constant heat flux at the bottom. Initially, the fluid is at rest and in hydrostatic balance with uniform vertical heat flux throughout, and has pressure $p_0$ and density $\rho_0$ at the upper boundary $z=0$. The ideal gas equations are nondimensionalized using the thickness of the convective layer, $L$, as the lengthscale, and $L/c$ as the timescale, where $c = \sqrt{p_0/\rho_0}$ is the isothermal sound speed at the top of the domain. 
The temperature, $T$, pressure, $p$, and density, $\rho$, are nondimensionalized using $T_0$, $p_0$, and $\rho_0$, respectively. The dimensionless ideal gas equations then take the form \begin{align} \rho\left(\frac{\partial}{\partial t} + \boldsymbol{u}\cdot\bm{\nabla}\right)\boldsymbol{u} - 2\Omega\rho\,\mathbf{e}_z\times\boldsymbol{u} &= - \bm{\nabla} p + g\rho\mathbf{e}_z + 2\mu\bm{\nabla}\cdot\mathbf{D} + F\mathbf{e}_x \label{eq:mom} \\ \left(\frac{\partial}{\partial t} + \boldsymbol{u}\cdot\bm{\nabla}\right)\rho &= -\rho\bm{\nabla}\cdot\boldsymbol{u} \label{eq:mass} \\ p &= \rho T \\ \rho T\left(\frac{\partial}{\partial t} + \boldsymbol{u}\cdot\bm{\nabla}\right)\ln(p^{1/\gamma}/\rho) &= \bm{\nabla}\cdot\left(k(z)\bm{\nabla} T\right) + \frac{\gamma\!-\!1}{\gamma}2\mu\mathbf{D}:\mathbf{D}, \end{align} where $\gamma=5/3$ is the ratio of specific heats, the constants $\Omega$ and $g$ are the dimensionless rotation rate and gravitational acceleration, and $\mathbf{D}$ is the deviatoric rate-of-strain tensor, \begin{equation} D_{ij} = \frac{1}{2}\frac{\partial u_i}{\partial x_j} + \frac{1}{2}\frac{\partial u_j}{\partial x_i} - \frac{1}{3}\bm{\nabla}\cdot\boldsymbol{u}\,\delta_{ij}. \end{equation} We have also introduced the dimensionless dynamic viscosity $\mu$ and thermal conductivity $k$, which are related to their dimensional counterparts, $\mu^\star$ and $k^\star$ say, by \begin{equation} \mu = \frac{\mu^\star}{\rho_0cL} \hspace{1cm} \mbox{and} \hspace{1cm} k = \frac{k^\star\!/C_p}{\rho_0cL}, \end{equation} where $C_p$ is the specific heat capacity. We take $\mu$ to be constant throughout the domain, whereas for $k$ we impose a vertical profile of the form \begin{equation} k(z) \; = \; \frac{k_1}{1 + \exp(20(z-1))} \; + \; \frac{k_2}{1 + \exp(20(1-z))}, \end{equation} so that $k = k_1$ in the upper layer, $z<1$, and $k = k_2 > k_1$ in the lower layer, $z>1$, the change occurring across a region of dimensionless thickness $\simeq 0.1$. 
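As a quick numerical check of the conductivity profile just defined, the sketch below evaluates $k(z)$ using the Case 1 values of $k_1$ and $k_2$ from Table~\ref{tab:parameters} (any case would serve equally well): $k$ tends to $k_1$ in the upper layer, to $k_2$ in the lower layer, and the transition is confined to a region of thickness of order $0.1$ around $z=1$.

```python
import math

# Sketch of the prescribed conductivity profile,
#   k(z) = k1 / (1 + exp(20 (z - 1))) + k2 / (1 + exp(20 (1 - z))),
# using the Case 1 input values from the parameters table.
k1, k2 = 14.5e-4, 24.1e-4

def k(z):
    """Thermal conductivity as a function of depth z (interface at z = 1)."""
    return (k1 / (1.0 + math.exp(20.0 * (z - 1.0)))
            + k2 / (1.0 + math.exp(20.0 * (1.0 - z))))

# k -> k1 in the upper (convective) layer and k -> k2 in the lower
# (radiative) layer, up to exponentially small corrections; at the
# interface, k(1) is exactly the mean of k1 and k2.
print(k(0.0) / k1)   # ~ 1
print(k(2.0) / k2)   # ~ 1
print(k(1.0))        # = (k1 + k2) / 2
```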
The bottom of the convection zone is therefore fixed at $z=1$, but convective motions are able to overshoot into the radiation zone. We write the three components of the velocity field as $\boldsymbol{u} = (u_x,u_y,u_z)$. In our horizontally-periodic Cartesian model, differential rotation corresponds to any $x$-averaged flow in the $x$ direction, and angular momentum is essentially equivalent to azimuthal momentum, $\rho u_x$. Because our computational domain is horizontally symmetric, the Reynolds stresses in the convective layer are not able to drive any mean differential rotation. In order to mimic the generation of differential rotation in the solar convection zone, we add a volume forcing term to the $x$ component of the momentum equation (\ref{eq:mom}), \begin{equation} F = \lambda(z,t)\rho(u_{\rm T}(y,z,t) - u_x). \end{equation} The effect of the forcing term is to push the flow toward a prescribed ``target'' shear flow $u_{\rm T}$ at a rate $\lambda$. We have chosen to consider a situation analogous to the thought-experiment of \citet{SpiegelZahn92}, in which the radiation zone is initially in uniform rotation despite the presence of differential rotation in the convection zone. We therefore take $\lambda$ to be constant initially, $\lambda=\lambda_0$, and the target velocity $u_{\rm T}$ is taken to be \begin{equation} u_{\rm T} \; = \; \frac{u_0(1-z)\sin(\pi y)}{1 + \exp(20(z-1))}, \label{eq:uT_initial} \end{equation} which tends to zero at depths $z > 1$. We thereby suppress differential rotation in the radiation zone and drive differential rotation in the convection zone. By suppressing differential rotation in the radiation zone we also prevent the burrowing of meridional circulations, which are tied to the differential rotation through the balances described in Section~\ref{sec:driving}. 
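A few lines suffice to verify the behavior of the initial target profile of Equation~(\ref{eq:uT_initial}): the imposed shear is at full strength near the top of the convection zone but exponentially weak below the interface at $z=1$, so the forcing $F = \lambda\rho(u_{\rm T} - u_x)$ initially suppresses differential rotation in the radiation zone. In the sketch below, $u_0$ is set to an arbitrary illustrative value; in the simulations $u_0 = 2\Omega/\pi$.

```python
import math

# Sketch of the initial target shear flow,
#   u_T = u0 (1 - z) sin(pi y) / (1 + exp(20 (z - 1))).
# u0 is an arbitrary illustrative value here.
u0 = 1.0

def u_T(y, z):
    return (u0 * (1.0 - z) * math.sin(math.pi * y)
            / (1.0 + math.exp(20.0 * (z - 1.0))))

# Full-strength shear at the top of the convection zone (up to an
# exponentially small correction)...
print(u_T(0.5, 0.0))            # ~ u0
# ...but exponentially weak below the interface at z = 1:
print(abs(u_T(0.5, 2.0)) / u0)  # ~ 2e-9
```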
In all the computations presented here we take $\lambda_0 = 2\Omega$, so that the forcing rate matches the rotation rate, and $u_0=2\Omega/\pi$, so that the shearing timescale is comparable to the rotation period. The Rossby number $\Omega^{-1}\partial u_{\rm T}/\partial y$ is therefore of order unity, and so the differential rotation is somewhat stronger than that observed in the Sun and most solar-type stars \citep[{e.g.,}][]{Reiners06}. By forcing a strong differential rotation, we can study the nonlinear effects of finite Rossby number, which were neglected in most of the idealized models reviewed in Section~\ref{sec:review}. Once the differential rotation reaches a statistically steady state, at $t=t_0$ say, the forcing parameters are changed to \begin{flalign} &&\lambda \; &= \; \frac{\lambda_0}{1 + \exp(20(z-1))}& \\ &\mbox{and}& u_{\rm T} \; &= \; u_0(1-z)\sin(\pi y) \hspace{2cm} \mbox{for $t>t_0$}& \end{flalign} so that $\lambda$ tends to zero at depths $z>1$. This means that the suppressive forcing is switched off below the convection zone for $t > t_0$, allowing the differential rotation to propagate into the interior, if the dynamics so dictate. \section{\uppercase{Choice of parameters}} \label{sec:parameters} The burrowing described in Section~\ref{sec:review} is expected to occur only under specific parameter conditions, which are difficult to achieve in a computational model. The relevant parameter regime can be characterized as a hierarchy of timescales in the radiation zone: \begin{center} \begin{tabular}{ccccccccc} acoustic time & $\ll$ & buoyancy time & $\ll$ & rotation time & $\ll$ & Eddington--Sweet time & $\ll$ & viscous time \end{tabular} \end{center} \citep[{e.g.,}][]{Clark73}.
In terms of our dimensionless parameters, these conditions can be expressed as \begin{equation} 1 \;\; \ll \;\; \frac{1}{N} \;\; \ll \;\; \frac{1}{2\Omega} \;\; \ll \;\; \left(\frac{N}{2\Omega}\right)^{\!2}\frac{W^2}{k_2/\rho} \;\; \ll \;\; \frac{W^2}{\mu/\rho} \label{eq:ordering} \end{equation} where $N$ is the dimensionless buoyancy frequency, \begin{equation} N^2 = \frac{g^2}{T}\left( \frac{\gamma\!-\!1}{\gamma} - \frac{1}{g}\frac{\mathrm{d} T}{\mathrm{d} z} \right), \end{equation} and $W$ is a characteristic horizontal lengthscale, which is of order unity in our Cartesian model. We note in particular that the viscous time (i.e.,\ the timescale for viscous diffusion across the domain) must exceed the Eddington--Sweet time. This condition can be expressed as a constraint on the Prandtl number $\mu/k_2$ within the radiation zone: \begin{equation} \mu/k_2 \;\; \ll \;\; \left(\frac{2\Omega}{N}\right)^{\!2}. \label{eq:Prandtl} \end{equation} This constraint is particularly stringent because the second inequality in Equation~(\ref{eq:ordering}) implies that the right-hand side of Equation~(\ref{eq:Prandtl}) is $\ll 1$. Following \citet{GaraudAA09} we introduce the dimensionless parameter \begin{equation} \sigma = \frac{N}{2\Omega}\sqrt{\dfrac{\mu}{k_2}} \label{eq:sigma} \end{equation} in order to express condition (\ref{eq:Prandtl}) more succinctly as $\sigma \ll 1$. If this condition is not met then viscous transport of angular momentum dominates the transport by meridional flows \citep[][\S3.4]{Clark73,BentonClark74}, and we expect the burrowing of the meridional circulation to be less efficient. For example, in the laminar, steady-state model of \citet{GaraudAA09}, cases with $\sigma > 1$ have meridional circulations that decay exponentially beneath the convection zone, across a boundary layer similar to that described by \citet{BarcilonPedlosky67} \citep[see also][]{Lineykin55}. 
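As a sanity check, $\sigma$ can be evaluated directly from Equation~(\ref{eq:sigma}) using the raw input parameters listed in Table~\ref{tab:parameters}. The tabulated values of $\sigma$ use the time-averaged buoyancy frequency below the overshoot region, so the estimates below differ slightly from the table, but they cleanly separate the low- and high-sigma regimes.

```python
import math

# Evaluate sigma = (N / (2 Omega)) * sqrt(mu / k2) from the raw input
# parameters of each simulation: mu and k2 from the parameters table
# (in units of 1e-4, which cancel in the ratio), with 1/N = 11 and
# 1/(2 Omega) = 52 in code units. Small differences from the tabulated
# sigma are expected, since the table uses a time-averaged N.
N = 1.0 / 11.0
two_Omega = 1.0 / 52.0

inputs = {0: (0.145, 24.1), 1: (0.145, 24.1),
          2: (2.05, 8.53), 3: (0.774, 3.22)}   # case: (mu, k2)

sigmas = {case: (N / two_Omega) * math.sqrt(mu / k2)
          for case, (mu, k2) in inputs.items()}

for case, sigma in sigmas.items():
    regime = "low-sigma" if sigma < 1.0 else "high-sigma"
    print(f"Case {case}: sigma ~ {sigma:.2f} ({regime})")
```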
For any solar-type star at the start of the main sequence, condition (\ref{eq:Prandtl}) holds throughout the radiation zone, but as the star spins down through magnetic braking, this condition may cease to hold in the most strongly stably stratified regions. In the solar interior, for example, condition (\ref{eq:Prandtl}) holds only within the outer part of the radiation zone \citep{GaraudAA09} and a very small region at the center. Computational limitations make it difficult to achieve the ``low-sigma'' regime described by condition (\ref{eq:Prandtl}) in a numerical model. If the rotation rate $\Omega$ and buoyancy frequency $N$ take realistic stellar values, then the Prandtl number $\mu/k_2$ must be extremely small, which requires very high spatial resolution. For this reason, all global simulations of the solar interior have been performed in the ``high-sigma'' regime, $\sigma>1$ \citep[{e.g.,}][]{Rogers11,Brun-etal11,Strugarek-etal11}. It is therefore important to test whether the behavior of meridional flows in nonlinear numerical simulations depends on $\sigma$ in the same manner as is predicted by laminar models. In order to reach a parameter regime with $\sigma < 1$, we have chosen here to use a local model, and to adopt more modest values for $\Omega$ and $N$, while still preserving the ordering of timescales given by (\ref{eq:ordering}). The code of \citet{Brummell-etal02} requires the specification of six dimensionless input parameters, which we take to be:\vspace{-0.15cm} \begin{itemize} \item the values $k_1$ and $k_2$ of the thermal conductivity in the convection and radiation zones; \vspace{-0.15cm} \item the dynamic viscosity $\mu$; \vspace{-0.15cm} \item the gravitational acceleration $g$; \vspace{-0.15cm} \item the rotation rate $\Omega$; \vspace{-0.15cm} \item the heat flux $H = k_2\partial T/\partial z$ at the bottom of the domain.
\end{itemize}\vspace{-0.15cm} For any particular choices of $H$ and $k_1$ we can choose values for the remaining parameters in order to satisfy the four constraints in (\ref{eq:ordering}); different choices for $H$ and $k_1$ correspond to different degrees of compressibility and convective overshoot.\footnote{ \citeauthor{Brummell-etal02}\ measure the degree of compressibility and overshoot in terms of the quantities $\theta=H/k_1$ and $S= \left.\left(\frac{gk_2}{H}-\frac{\gamma}{\gamma-1}\right) \middle/ \left(\frac{\gamma}{\gamma-1}-\frac{gk_1}{H}\right)\right.$. All the convection simulations presented here have $\theta=0.1$ and $S=15$. } In practice, we choose values for $H$ and $k_1$ such that the pressure scale height is comparable to the height of the domain, and such that convective overshoot extends a distance of order $0.1$ below the base of the convection zone (see Figure~\ref{fig:box}), as is thought to be typical for solar-type stars \citep[{e.g.,}][]{Brummell-etal02,Rempel04}. \section{\uppercase{Results}} \label{sec:results} In the following sections we present results from four simulations performed in different parameter regimes. The first simulation, Case 0, differs from the others in that the conductivity of the upper layer, $k_1$, was chosen to make that layer adiabatic, and hence marginally stable to convection. The flow in this simulation therefore remains laminar, with no convection or internal wave generation, and so we expect the results to follow the predictions of the laminar models summarized in Section~\ref{sec:review}. Case 1 has the same parameters as Case 0 for the lower layer, but has a smaller conductivity $k_1$ in the upper layer, so that the upper layer is convectively unstable. Both Case 0 and Case 1 obey the ordering of timescales in (\ref{eq:ordering}), and so both have $\sigma < 1$; we will refer to this as the ``low-sigma regime''. 
Case 2 and Case 3 both have a larger viscosity $\mu$ and smaller thermal conductivity $k$ throughout the domain, relative to Case 1, such that $\sigma > 1$; we will refer to this as the ``high-sigma regime''. Case 2 has a larger viscosity and conductivity than Case 3, but both have the same Prandtl number $\mu/k$, equal to 0.24 in the lower layer, $z>1$. In both cases the heat flux $H$ was chosen so that the stratification profile, and hence the buoyancy frequency $N$, matches that of Cases 0 and 1. Case 2 is only weakly convective, because of its relatively high viscosity, whereas Case 3 is more strongly convective, because of its low thermal conductivity. In each simulation the dimensionless rotation rate and gravitational acceleration were taken to be $\Omega = 9.6\times10^{-3}$ and $g = 0.24$ respectively. The other parameters, and relevant timescales, are listed in Table~\ref{tab:parameters}. For reference, Table~\ref{tab:parameters} also lists characteristic values for the same dimensionless timescales in the solar interior, based on the values at $0.7R_\odot$ reported by \citet{Gough07}. To allow the most direct comparison with the numerical simulations, the solar timescales are calculated using the lengthscale $W = 0.7R_\odot \simeq 4.9\times10^{10}$\,cm, and quoted in units of $L/c$, where $L = 1.4\times10^9$\,cm is one quarter of the pressure scale height and $c=\sqrt{p/\rho} \simeq 1.8\times10^7$\,cm\,s$^{-1}$ is the isothermal sound speed. 
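This unit conversion can be reproduced with a short script. The dimensional inputs below ($N$, $\Omega$, $\kappa$, and $\nu$ at $0.7R_\odot$) are approximate back-of-the-envelope values broadly consistent with \citet{Gough07}, used here only to illustrate how the solar row of Table~\ref{tab:parameters} is obtained.

```python
import math

# Convert approximate dimensional solar values at 0.7 R_sun into the
# dimensionless timescales used in the parameters table. The inputs
# below are illustrative order-of-magnitude values, not precise data.
N = 8.0e-4       # buoyancy frequency [rad/s]
Omega = 2.7e-6   # rotation rate [rad/s]
kappa = 1.4e7    # thermal diffusivity [cm^2/s]
nu = 28.0        # kinematic viscosity [cm^2/s]

W = 4.9e10       # horizontal lengthscale, 0.7 R_sun [cm]
L = 1.4e9        # one quarter of the pressure scale height [cm]
c = 1.8e7        # isothermal sound speed [cm/s]
unit = L / c     # time unit [s]

buoyancy = (1.0 / N) / unit                                     # ~ 16
rotation = (1.0 / (2.0 * Omega)) / unit                         # ~ 2400
eddington_sweet = (N / (2.0 * Omega))**2 * W**2 / kappa / unit  # ~ 5e16
viscous = W**2 / nu / unit                                      # ~ 1e18
sigma = (N / (2.0 * Omega)) * math.sqrt(nu / kappa)             # ~ 0.21

print(buoyancy, rotation, eddington_sweet, viscous, sigma)
```

The resulting dimensionless timescales, and the value $\sigma \simeq 0.2$, are consistent with the solar row of the table.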
\begin{deluxetable*}{cccccccccc} \tablecaption{Parameters of each simulation, and corresponding solar parameters} \tablehead{\colhead{Case} & \colhead{$10^4k_1$} & \colhead{$10^4k_2$} & \colhead{$10^4\mu$} & \colhead{$10^4H$} & \colhead{$\dfrac{1}{N}$} & \colhead{$\dfrac{1}{2\Omega}$} & \colhead{$\left(\dfrac{N}{2\Omega}\right)^{\!2}\dfrac{W^2}{k_2/\rho}$} & \colhead{$\dfrac{W^2}{\mu/\rho}$} & \colhead{$\sigma$} } \startdata 0 & 15.1 & 24.1 & 0.145 & 1.4 & 11 & 52 & 1.0$\times10^4$ & 7.9$\times10^4$ & 0.36 \\ 1 & 14.5 & 24.1 & 0.145 & 1.4 & 11 & 52 & 1.0$\times10^4$ & 7.9$\times10^4$ & 0.36 \\ 2 & 5.12 & 8.53 & 2.05 & 0.51 & 11 & 52 & 2.9$\times10^4$ & 0.56$\times10^4$ & 2.27 \\ 3 & 1.93 & 3.22 & 0.774 & 0.19 & 11 & 52 & 7.6$\times10^4$ & 1.5$\times10^4$ & 2.27 \\ Sun & \nodata & \nodata & \nodata & \nodata & 16 & 2400 & 4.8$\times10^{16}$ & 1.1$\times10^{18}$ & 0.21 \enddata \tablecomments{The last five columns give approximate values for the dimensionless timescales in Equation~(\ref{eq:ordering}) and the value of $\sigma$, calculated using the time-averaged density $\rho$ and buoyancy frequency $N$ below the overshoot region.} \label{tab:parameters} \end{deluxetable*} \subsection{Case 0: The Low-Sigma Regime, without Convection} \label{sec:laminar} In this case the upper layer, $0<z<1$, is non-convective, and so we expect the dynamics to follow the predictions of the laminar models discussed in Section~\ref{sec:review}. Figure~\ref{fig:SZ-lam-t0} shows the steady azimuthal shear and meridional flow at $t=t_0$. By this time the flow has reached a steady state with a large-scale azimuthal shear and meridional circulation in the upper layer. Both the shear and circulation extend slightly into the lower layer, to an extent that depends on the rate at which the target velocity $u_{\rm T}$ tends to zero for $z>1$, but for $z>1.3$ the flow is exponentially weak. 
We note that the azimuthal shear $u_x$ does not exactly match the target shear flow $u_{\rm T}$ given by Equation~(\ref{eq:uT_initial}). In the steady state, the ``residual'' forcing $\lambda_0\rho(u_{\rm T}-u_x)$ is balanced primarily by the azimuthal component of the Coriolis force, $2\Omega\rho u_y$, and this balance determines the strength of the meridional flow within the upper layer. This is an example of the process referred to as gyroscopic pumping in Section~\ref{sec:driving}. Within the upper layer, the downwelling portion of the meridional circulation is stronger than the upwelling portion. This asymmetry is a symptom of the Rossby number being of order unity, and is therefore less apparent in the lower layer, where the shear is weaker. \begin{figure}[h!] \centering% \includegraphics[height=7cm]{Fig02.eps}% \caption{% Shear flow $u_x$ (left panel) and meridional streamlines (right panel) from Case 0, averaged in $x$ at time $t=t_0$. The meridional streamlines are drawn as contours of a streamfunction $\psi$, which was computed assuming that $\bm{\nabla}\cdot\rho\boldsymbol{u}=0$. Solid and dashed contours indicate anti-clockwise and clockwise circulation respectively. Contour levels for $u_x$ and $\psi$ are cubically spaced to show more detail in the radiation zone, where the flows are weakest.} \label{fig:SZ-lam-t0} \end{figure} At $t=t_0$ the forcing is switched off below $z=1$, allowing the flow of the upper layer to propagate into the lower, stably stratified layer. The sudden imbalance of forces also generates a spectrum of internal waves. Figure~\ref{fig:SZ-lam-ave} shows time averages of the azimuthal shear and meridional flow, averaged over one rotation period, at regular intervals after $t=t_0$. The time averaging filters out most of the internal wave modes, making the long-time evolution more visible. 
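The gyroscopic-pumping balance noted above fixes the amplitude of the pumped meridional flow, and a minimal sketch makes this explicit. The residual shear used below is an assumed illustrative value, not a measurement from the simulation.

```python
# Minimal sketch of the steady-state gyroscopic-pumping balance:
# the residual forcing lambda0 * rho * (u_T - u_x) is balanced by the
# azimuthal Coriolis force 2 * Omega * rho * u_y, so
#     u_y ~ lambda0 * (u_T - u_x) / (2 * Omega).
# With lambda0 = 2 * Omega (as in all runs), the pumped meridional
# flow is of the same order as the residual shear itself.
Omega = 9.6e-3          # dimensionless rotation rate used in the runs
lambda0 = 2.0 * Omega
residual = 0.1          # assumed illustrative residual (u_T - u_x)

u_y = lambda0 * residual / (2.0 * Omega)
print(u_y)   # equals the residual, since lambda0 = 2 * Omega
```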
From $t=t_0$ onwards, the meridional circulation of the upper layer begins to burrow into the lower layer, as indicated by the increased vertical extent of the main meridional circulation cell in the lower panel of Figure~\ref{fig:SZ-lam-ave}. \begin{figure}[h!] \centering% \includegraphics[width=14cm]{Fig03.eps}% \caption{% Shear flow and meridional streamlines from Case 0 averaged over one rotation period for successive times $t>t_0$, using the same contour levels as in Figure~\ref{fig:SZ-lam-t0}. The time averages were taken over the intervals $\Delta t_1$, $\Delta t_2$, $\Delta t_3$, and $\Delta t_4$ indicated in Figure~\ref{fig:SZ-mom-lam}. The dashed box in the upper panels indicates the control volume $V$ used in Equation~(\ref{eq:torque}).} \label{fig:SZ-lam-ave} \end{figure} The mean azimuthal shear also propagates into the lower layer, at approximately the same rate, suggesting that the transport of angular momentum is the result of advection by the meridional flow, as expected for a laminar flow at these parameter values. To verify this, we first use the mass conservation equation (\ref{eq:mass}) to write the azimuthal component of the momentum equation (\ref{eq:mom}) as \begin{equation} \frac{\partial}{\partial t}(\rho u_x) = - 2\Omega\rho u_y -\bm{\nabla}\cdot(\rho u_x\,\boldsymbol{u}) + \mu\nabla^2 u_x + F. 
\label{eq:point_torque} \end{equation} After integrating over a fixed volume $V$, and employing the divergence theorem, we obtain \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t}\int_V\!\mathrm{d} V\rho u_x \;=\; \underbrace{-\;\; 2\Omega\int_V\!\mathrm{d} V\rho u_y}_{\rm Coriolis}\; \underbrace{-\; \int_{\partial V}\!\mathrm{d}\mathbf{S}\cdot\boldsymbol{u}\,\rho u_x}_{\rm inertial}\; \underbrace{+\;\; \mu\int_{\partial V}\!\mathrm{d}\mathbf{S}\cdot\bm{\nabla} u_x}_{\rm viscous}\; \underbrace{+\; \int_V\!\mathrm{d} V F}_{\rm forcing}, \label{eq:torque} \end{equation} where $\partial V$ represents the boundary of $V$, and where $\mathrm{d}\mathbf{S}$ is the area element directed outward. In order to verify that the burrowing meridional circulation seen in the lower panel of Figure~\ref{fig:SZ-lam-ave} is responsible for the propagation of the shear flow into the interior, we have computed each integral in Equation~(\ref{eq:torque}) for the control volume $V$ indicated by the dashed box in the upper panel of Figure~\ref{fig:SZ-lam-ave}. The result, after taking a time average over one rotation period, is plotted in Figure~\ref{fig:SZ-mom-lam}. The left panel of Figure~\ref{fig:SZ-mom-lam} shows that, whereas the azimuthal momentum in the upper layer adjusts to a new equilibrium after only a few rotation periods, the lower layer adjusts on a much longer timescale. The right panel shows that the long-time transport of azimuthal momentum into the lower layer can be attributed almost entirely to the azimuthal Coriolis force arising from the mean meridional flow, which corresponds to advection of angular momentum viewed in our rotating frame. \begin{figure}[h!] \centering% \includegraphics[width=13cm]{Fig04.eps}% \caption{% Left panel: The solid line shows the total azimuthal momentum $\int\mathrm{d} V\rho u_x$ in the dashed box indicated in Figure~\ref{fig:SZ-lam-ave}, averaged over one rotation period. 
For comparison, the dashed line shows the total azimuthal momentum in the region \emph{above} the dashed box. Right panel: The four terms contributing to the right-hand side of Equation~(\ref{eq:torque}), and their total, each averaged over one rotation period. The length of one rotation period $2\pi/\Omega$ is indicated on the plot. The total duration plotted corresponds to about half of the Eddington--Sweet time.} \label{fig:SZ-mom-lam} \end{figure} As a further illustration of the relative contributions of the different processes to the net transport of angular momentum, in Figure~\ref{fig:sasha-SZ-lam} we present vertical cross-sections of each term on the right-hand side of Equation~(\ref{eq:point_torque}), after averaging in azimuth and over the time interval $\Delta t_1$ indicated in Figure~\ref{fig:SZ-mom-lam}. Within the upper layer, the Coriolis and forcing terms are approximately in balance, and the strength of the meridional flow is determined by gyroscopic pumping. Within the lower layer, the Coriolis term dominates all others, leading to the evolution of the differential rotation seen in Figure~\ref{fig:SZ-lam-ave}. \begin{figure}[h!] \centering% \includegraphics[height=6cm]{Fig05.eps}% \caption{% Vertical cross-sections of each term on the right-hand side of Equation~(\ref{eq:point_torque}), and their total, averaged in azimuth and over the time interval $\Delta t_1$ indicated in Figure~\ref{fig:SZ-mom-lam}. The contour levels are cubically spaced, as indicated by the colorbar on the left. 
Throughout the control volume $V$, indicated by the dashed box, the Coriolis term provides the dominant contribution to the total angular momentum transport.} \label{fig:sasha-SZ-lam} \end{figure} Within this layer, the strength of the meridional flow is determined by meridional force balance and local thermal equilibrium, as anticipated by the Vogt--Eddington argument of Section~\ref{sec:driving}, and is therefore roughly proportional to the vertical gradient of the angular velocity, i.e.,\ to $\partial u_x/\partial z$ in our Cartesian geometry. This explains why the differential rotation and meridional circulation propagate at the same rate, and also why the strength of the meridional flow, and hence the rate of burrowing, decays over time. The results in this case are entirely consistent with the laminar models described in Section~\ref{sec:review}. \subsection{Case 1: The Low-Sigma Regime, with Convection} \label{sec:burrow} We now study the effects of turbulent convective overshoot and internal waves on the burrowing of meridional flows. In Case 1, unlike Case 0, the upper layer, $0<z<1$, is convectively unstable, and has an rms Reynolds number $\simeq550$. The turbulent motions in this layer continually generate internal waves, which propagate into the lower, stably stratified layer. Nevertheless, for times $t\leqslant t_0$ the time-averaged shear and meridional circulation are confined to the upper layer, as in Case 0. Motions in the lower layer, $z>1$, are suppressed by the forcing in that region. At $t=t_0$ the forcing is switched off in the lower layer. Figure~\ref{fig:SZ-ave} shows the mean shear and meridional flow, averaged over two rotation periods, at regular intervals after $t=t_0$, using the same contour levels as Figure~\ref{fig:SZ-lam-ave}. As in Case 0, the shear and meridional circulation both extend progressively deeper into the lower layer over time, though in a much less regular fashion than in Case 0. 
In any snapshot of the meridional flow, the mean circulation is completely dominated by internal wave motions, but after averaging over two rotation periods we observe a similar mean circulation to that in Figure~\ref{fig:SZ-lam-ave}. \begin{figure}[h!] \centering% \includegraphics[width=14cm]{Fig06.eps}% \caption{% Shear flow and meridional streamlines from Case 1, averaged over two rotation periods for times $t>t_0$. Time averages were taken over the intervals indicated in Figure~\ref{fig:SZ-mom}. The contour levels are the same as in Figure~\ref{fig:SZ-lam-ave}.} \label{fig:SZ-ave} \end{figure} In Figure~\ref{fig:SZ-mom} we show the evolution of the azimuthal momentum in the volume indicated by the dashed box in Figure~\ref{fig:SZ-ave}, as well as the terms contributing to the right-hand side of Equation~(\ref{eq:torque}) for this volume. (This figure is equivalent to Figure~\ref{fig:SZ-mom-lam} in Section~\ref{sec:laminar}.) Despite the presence of internal waves, we find that the mean meridional circulation dominates the long-time transport of azimuthal momentum into the radiative layer, as in Case 0 for which the flow was laminar. \begin{figure}[h!] \centering% \includegraphics[width=13cm]{Fig07.eps}% \caption{% Same plots as Figure~\ref{fig:SZ-mom-lam}, but for Case 1 and averaged over two rotation periods. The scale in the right-hand panel is larger than in Figure~\ref{fig:SZ-mom-lam}. The inertial contributions to the momentum transport, which are associated with waves and turbulent fluctuations, are larger than in Case 0, but the Coriolis force from the mean meridional flow remains dominant in the long term, leading to the overall increase in azimuthal momentum shown by the solid line in the left panel. The dimensionless Eddington--Sweet time in this case is $\simeq10^4$.} \label{fig:SZ-mom} \end{figure} Although there is qualitative agreement between the results of Case 0 and Case 1, we note that they differ in certain respects. 
In particular, the propagation of the shear into the lower layer does not proceed monotonically, as can be seen in the left panel of Figure~\ref{fig:SZ-mom} ({cf.}\ Figure~\ref{fig:SZ-mom-lam}), indicating that the burrowing process does not act continuously throughout the simulation. Perhaps more surprising, however, is that the overall rate of burrowing is \emph{faster} in Case 1 than in Case 0. These differences between Case 0 and Case 1 can be attributed to differences in the profile of $u_x$ within the upper layer in the two cases. In Case 0, $u_x$ is strongest close to the upper boundary, $z=0$, where the forcing is strongest. In Case 1, $u_x$ is highly time-dependent, but generally strongest close to the interface $z=1$ (see Figure~\ref{fig:SZ-ave}). Therefore the amplitude of $u_x$ at the top of the radiation zone is generally stronger in Case 1, but varies significantly in time, producing a burrowing that is more rapid overall, but very irregular. The occasional reversals in the sign of the momentum transport in Figure~\ref{fig:SZ-mom} ({e.g.,}\ for $2000\leqslant t \leqslant 3000$) correspond to the periods when $u_x$ at the interface is weakest. The variations in $u_x$ in the upper layer are quasi-periodic, with a timescale of several rotation periods, and may possibly be the result of an inertial mode trapped within the convection zone. (An inertia--gravity oscillation within the radiation zone, on the other hand, would necessarily have a period of less than half the rotation period, because $N > 2\Omega$.) In that case the oscillation relies on the artificial local geometry of our model, and would not be present in a global model. Finally, we note that during the phases when burrowing does occur, the transport of angular momentum throughout the radiation zone is qualitatively similar to that in Case 0. 
This is illustrated in Figure~\ref{fig:sasha-SZ}, in which we plot vertical cross-sections of each term on the right-hand side of Equation~(\ref{eq:point_torque}), averaged in azimuth and over the interval $\Delta t_1$ indicated in Figure~\ref{fig:SZ-mom}. (This figure is equivalent to Figure~\ref{fig:sasha-SZ-lam} in Section~\ref{sec:laminar}.) Although there is a non-zero contribution from the inertial and viscous terms, the main contribution comes from the Coriolis term, and the overall transport of angular momentum in the radiation zone resembles that in Case 0. \begin{figure}[h!] \centering% \includegraphics[height=6cm]{Fig08.eps}% \caption{% Same plots as Figure~\ref{fig:sasha-SZ-lam}, but for Case 1 and averaged over the time interval $\Delta t_1$ indicated in Figure~\ref{fig:SZ-mom}. As in Case 0, the Coriolis term provides the dominant contribution to the total angular momentum transport in the radiation zone.} \label{fig:sasha-SZ} \end{figure} \subsection{Case 2: The High-Sigma Regime; High Viscosity} In Case 2 the viscous diffusion timescale is shorter than the Eddington--Sweet timescale, primarily because the viscosity is larger than in Cases 0 and 1, and so $\sigma > 1$. The rotation rate $\Omega$ and stratification profile in the radiation zone are the same as in Cases 0 and 1. Because of the increased viscosity, in this case the upper layer is only weakly convective, and consequently the mean flows are more clearly defined. Most of the kinetic energy in the upper, convective layer is associated with the mean azimuthal shear flow, rather than with convective motions. The mean azimuthal shear and meridional flow at successive times $t>t_0$ are plotted in Figure~\ref{fig:high-nu-ave}, using the same contour levels as in Figures~\ref{fig:SZ-lam-ave} and \ref{fig:SZ-ave}. 
We find that the convection zone's azimuthal shear propagates progressively deeper into the radiation zone, as in Cases~0 and 1, but that meridional flows within the radiation zone remain confined to a thin layer below the bottom of the convection zone. The convection zone's meridional circulation does not burrow significantly, and instead a weak counter-circulating meridional cell is established at the top of the radiation zone ({cf.}\ Figures~\ref{fig:SZ-lam-ave} and \ref{fig:SZ-ave}). \begin{figure}[h!] \centering% \includegraphics[width=14cm]{Fig09.eps}% \caption{% Shear and meridional flow from Case 2, averaged over one rotation period for times $t>t_0$. Time averages were taken over the intervals indicated in Figure~\ref{fig:high-nu-mom}. Contour levels are the same as in Figures~\ref{fig:SZ-lam-ave} and \ref{fig:SZ-ave}.} \label{fig:high-nu-ave} \end{figure} Figure~\ref{fig:high-nu-mom} shows the equivalent of Figures~\ref{fig:SZ-mom-lam} and \ref{fig:SZ-mom} for Case 2. In this case, we find that the propagation of shear into the radiation zone is caused primarily by viscous diffusion. After about one viscous diffusion time, a roughly steady state is achieved in which the viscous force is balanced by the Coriolis force. We note that the Coriolis force in this case has the opposite sign to that in Cases~0 and 1, so transport of azimuthal momentum by the mean meridional flow acts to \emph{oppose} (but not prevent) the propagation of the convection zone's shear into the radiation zone. This is a consequence of the counter-circulating meridional cell visible in Figure~\ref{fig:high-nu-ave}, as demonstrated by Figure~\ref{fig:sasha-high-nu}, which shows vertical cross-sections equivalent to those of Figures~\ref{fig:sasha-SZ-lam} and \ref{fig:sasha-SZ} for Case 2. The lack of burrowing of the meridional circulation in this case is consistent with the predictions of laminar models with $\sigma > 1$, as discussed in Section~\ref{sec:parameters}. \begin{figure}[h!]
\centering% \includegraphics[width=13cm]{Fig10.eps}% \caption{% Same plots as Figures~\ref{fig:SZ-mom-lam} and \ref{fig:SZ-mom}, but for Case 2. The plots have been averaged over one rotation period. In this case the viscous term is dominant, and the Coriolis term takes the opposite sign. The total duration plotted corresponds to about one viscous diffusion time.} \label{fig:high-nu-mom} \end{figure} \begin{figure}[h!] \centering% \includegraphics[height=6cm]{Fig11.eps}% \caption{% Same plots as Figures~\ref{fig:sasha-SZ-lam} and \ref{fig:sasha-SZ}, but for Case 2, and averaged over the time interval $\Delta t_1$ indicated in Figure~\ref{fig:high-nu-mom}. In this case the viscous term causes the convection zone's azimuthal shear to propagate into the radiation zone. The Coriolis term has the opposite sign to the viscous term, and a smaller amplitude overall.} \label{fig:sasha-high-nu} \end{figure} \subsection{Case 3: The High-Sigma Regime; Low Thermal Conductivity} Case 3 has the same values of $\Omega$, $N$, and $\sigma$ as Case 2, but the viscosity and thermal conductivity are both smaller than in Case 2. As a result the upper layer is more strongly convective, with an rms Reynolds number $\simeq60$. Although this Reynolds number is significantly lower than that of Case 1, we find that internal waves are generated with a larger amplitude in this case than in Case 1. We attribute this to the lower value of the thermal conductivity in this case, which reduces the damping of internal waves. In fact, we find that a standing gravity mode is excited in the lower layer, which slowly grows in amplitude during the simulation. The existence of this mode is dependent on the artificial lower boundary of the computational domain, and so we end the simulation at the point where the amplitude of this mode becomes large enough to significantly affect the dynamics (at around $t = t_0 + 4000$). 
Figure~\ref{fig:low-k-ave} shows the mean azimuthal shear and meridional flow at successive times $t > t_0$, using the same contour values as Figures~\ref{fig:SZ-lam-ave}, \ref{fig:SZ-ave}, and \ref{fig:high-nu-ave}. Despite the presence of internal waves, the mean flow exhibits similar behavior to Case 2. The meridional circulation driven in the convection zone turns over at only a short depth within the radiation zone, and a counter-rotating meridional cell forms beneath, though with a more complicated structure than in Case 2. Meanwhile, the convection zone's shear propagates monotonically into the interior. \begin{figure}[h!] \centering% \includegraphics[width=14cm]{Fig12.eps}% \caption{% Shear and meridional flow from Case 3, averaged over one rotation period for times $t>t_0$. Time averages were taken over the intervals indicated in Figure~\ref{fig:low-kappa-mom}. Contour levels are the same as in Figures~\ref{fig:SZ-lam-ave}, \ref{fig:SZ-ave}, and \ref{fig:high-nu-ave}.} \label{fig:low-k-ave} \end{figure} Figure~\ref{fig:low-kappa-mom} shows the equivalent of Figures~\ref{fig:SZ-mom-lam}, \ref{fig:SZ-mom}, and \ref{fig:high-nu-mom} for Case 3. As in Case 2, we find that the propagation of shear into the radiation zone is caused primarily by viscous diffusion, although the relative contributions from inertial and Coriolis forces are larger than in Case 2. \begin{figure}[h!] \centering% \includegraphics[width=13cm]{Fig13.eps}% \caption{% Same plots as Figures~\ref{fig:SZ-mom-lam}, \ref{fig:SZ-mom}, and \ref{fig:high-nu-mom}, but for Case 3. The plots have been averaged over one rotation period. The viscous term is dominant, as in Case 2, until $t\simeq t_0+3000$. After this time the inertial contribution from a standing gravity mode becomes comparable to the viscous term.} \label{fig:low-kappa-mom} \end{figure} In contrast to Case 2, however, the Coriolis term has the same sign as the viscous term, at least for times $t \lesssim 2500$. 
This is because the convection zone's meridional circulation, shown in Figure~\ref{fig:low-k-ave}, manages to burrow a short distance into the radiation zone, as illustrated by Figure~\ref{fig:sasha-low-kappa}, which shows the equivalent of Figures~\ref{fig:sasha-SZ-lam}, \ref{fig:sasha-SZ}, and \ref{fig:sasha-high-nu} for Case 3, averaged over the time interval $\Delta t_2$ indicated in Figure~\ref{fig:low-kappa-mom}. A similar result was obtained by \citet{GaraudAA09} in a steady-state model of the solar interior; they found that, when $\sigma>1$, meridional circulations that are gyroscopically pumped within the convection zone extend a distance of order $W/\sigma$ into the radiation zone, where $W$ is the horizontal lengthscale of the circulation, which in our model is of order unity. \begin{figure}[h!] \centering% \includegraphics[height=6cm]{Fig14.eps}% \caption{% Same plots as Figures~\ref{fig:sasha-SZ-lam}, \ref{fig:sasha-SZ}, and \ref{fig:sasha-high-nu}, but for Case 3, and averaged over the time interval $\Delta t_2$ indicated in Figure~\ref{fig:low-kappa-mom}. As in Case 2, the viscous term is dominant throughout most of the radiation zone, but in this case the Coriolis and inertial terms also contribute to the propagation of the convection zone's shear within a thin layer at the top of the radiation zone.} \label{fig:sasha-low-kappa} \end{figure} \section{\uppercase{Summary and conclusions}} \label{sec:summary} This paper presents the first 3D, self-consistent, and nonlinear study of meridional flows in the parameter regime described by \citet{Clark73}, which is the relevant parameter regime for the radiative zones of many solar-type stars. In this regime the Eddington--Sweet time is shorter than the viscous time; the ratio of these timescales is determined by the dimensionless parameter $\sigma$ defined in Equation~(\ref{eq:sigma}). We have considered four separate cases: two in the ``low-sigma'' regime and two in the ``high-sigma'' regime. 
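The role of $\sigma$ can be summarized with a short sketch. Here we use the common scaling estimates $t_{\rm visc}\sim L^2/\nu$ and $t_{\rm ES}\sim(N/\Omega)^2 L^2/\kappa$, and take $\sigma$ to be the square root of their ratio; this choice is an assumption made only for illustration (the paper's precise definition is Equation~(\ref{eq:sigma}), not reproduced here), but it is consistent with the statement that $\sigma>1$ corresponds to the viscous time being the shorter of the two.

```python
# Regime diagnostic for angular-momentum transport in the radiation zone.
# Scaling estimates (assumptions for illustration):
#   t_visc ~ L**2 / nu                       (viscous diffusion time)
#   t_ES   ~ (N / Omega)**2 * L**2 / kappa   (Eddington--Sweet time)
# We take sigma = sqrt(t_ES / t_visc), so sigma > 1 is the high-sigma
# regime in which the viscous time is shorter than the Eddington--Sweet time.
import math

def transport_regime(nu, kappa, N, omega, L=1.0):
    """Return (sigma, dominant transport mechanism) from scaling estimates."""
    t_visc = L**2 / nu
    t_es = (N / omega)**2 * L**2 / kappa
    sigma = math.sqrt(t_es / t_visc)
    if sigma < 1.0:
        return sigma, "meridional advection (burrowing on the Eddington--Sweet time)"
    return sigma, "viscous diffusion (circulation confined near the interface)"

# Hypothetical low-sigma parameters (small Prandtl number nu/kappa):
sigma, mechanism = transport_regime(nu=1e-4, kappa=1e-1, N=10.0, omega=1.0)
print(sigma, mechanism)  # sigma ~ 0.32 < 1: advection-dominated
```

With these estimates, $\sigma^2 = \mathrm{Pr}\,(N/\Omega)^2$, which makes explicit why a small Prandtl number in the radiation zone can place a star in the low-sigma regime even when $N \gg \Omega$.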
Our results indicate that the underlying long-time dynamical picture of angular momentum transport predicted by laminar models applies even under more realistic conditions, including the presence of overshooting from the neighboring turbulent convection zone, and the internal waves that this generates. In Cases~0 and 1, which have $\sigma < 1$, angular momentum transport is dominated in the long term by advection by meridional flows, and the meridional circulation driven in the convection zone burrows (i.e.,\ extends progressively downward) into the radiation zone on the Eddington--Sweet timescale, carrying the differential rotation of the convection zone with it. The burrowing is more irregular in Case 1 than in Case 0, as a result of angular momentum transport by turbulence and internal waves. In Cases~2 and 3, which have $\sigma > 1$, viscous stresses dominate the transport of angular momentum in the radiation zone, although there is also a significant contribution from internal waves in Case 3. In these two cases the meridional circulation driven in the convection zone extends only a short distance into the radiation zone. The differential rotation of the convection zone still propagates into the radiation zone, but by viscous diffusion rather than by meridional advection. It should be borne in mind that the simulations presented here are at most weakly turbulent, in comparison with real stellar convection. It may be that under the more turbulent conditions characteristic of real stellar interiors angular momentum transport is dominated by shear-driven turbulence or internal wave breaking, as argued for instance by \citet{Zahn92}. If that is the case then our results may not be applicable to real stars. However, our results should certainly be applicable to previous global-scale simulations of the solar interior, including those of \citet{Rogers11}, \citet{Brun-etal11}, and \citet{Strugarek-etal11}. 
We note that these simulations were all performed in the ``high-sigma'' regime in which we found that angular momentum transport is dominated by viscous stresses. All of these global models have $\sigma\simeq200$ close to the top of the radiation zone (although in the model of \citet{Strugarek-etal11} $\sigma$ drops to around 20 deeper within the radiation zone). In each of these simulations it was indeed found that viscous stresses contribute at leading order to the transport of angular momentum within the radiation zone. The pattern of meridional flows found in these simulations is also rather similar to that which we observe in our simulations with $\sigma>1$ (Figures~\ref{fig:high-nu-ave} and \ref{fig:low-k-ave}), and no burrowing of the circulation was observed. In this situation we expect the transport of angular momentum between the convection and radiation zones to occur on a viscous timescale. This is indeed what \citet{Brun-etal11} and \citet{Strugarek-etal11} observe. \citet{Rogers11} reports that an apparently steady state is achieved in which viscous and Coriolis forces balance, and uniform rotation is preserved within the radiation zone. Based on our results, we suggest that this ``steady state'' is actually evolving slowly on a viscous timescale. We are now conducting global simulations in the low-sigma regime for comparison, to be presented in a forthcoming paper. Using a global model will also allow a more realistic study of the effects of internal waves, avoiding the difficulties encountered in Case 3. All of the simulations presented here have the same rotation rate and stratification profile, and in each case the Prandtl number in the radiation zone is smaller than unity. Yet the strength and depth of the mean meridional circulations vary drastically between the four cases.
This highlights the danger in modeling stellar interiors with numerical simulations that have, for example, realistic rotation and stratification but unrealistic diffusivities. Our results suggest that the \emph{ordering} of dynamical timescales is of greater importance than the \emph{exact values} of those timescales when considering angular momentum transport. Real stellar interiors are characterized by the same ordering as the simulations presented here, but with substantially greater separation. If our results carry over to realistic stellar parameters then they have significant implications not only for angular momentum transport, but also for the transport of chemical elements and magnetic flux by meridional flows. In particular, the depth to which chemical elements are carried from the convection zone into the radiation zone will depend on the depth to which meridional circulations are able to burrow, and hence on the value of $\sigma$. An important issue not addressed in the present work is the contribution of magnetic fields to angular momentum transport in stellar interiors. If magnetic fields act to suppress differential rotation in the radiation zone, then the burrowing of meridional flows will also be suppressed \citep{GoughMcIntyre98}. In that case the role of the magnetic field is analogous to that of the forcing in the radiation zone in our model for $t < t_0$. Effects of magnetic fields will be addressed in future papers. We thank Pascale Garaud, Gary Glatzmaier, C\'eline Guervilly, Michael McIntyre, and an anonymous referee for useful comments and suggestions. T.S.W.~was supported by NSF CAREER grant 0847477. N.H.B.~was supported by NASA grant NNX07AL74G and the Center for Momentum Transport and Flow Organization (CMTFO), a DoE Plasma Science Center. Numerical simulations were performed on NSF TeraGrid/XSEDE resources Kraken and Ranger, and the Pleiades supercomputer at University of California Santa Cruz purchased under NSF MRI grant AST-0521566.
\section{Introduction} The information overload and abundance of choices on the Web have made recommendation systems indispensable in facilitating user decision-making and information navigation. Recommender systems provide a personalized user experience by filtering relevant items (e.g., books, music, or movies) or information (e.g., news). Many efforts have been devoted to developing effective recommender systems and approaches \cite{aggarwal2016recommender,s+x+k+2009}. \textit{Collaborative filtering (CF)}---a well-recognized approach in recommender systems---is based on the idea that users with similar revealed preferences might have similar preferences in the future \cite{s+x+k+2009}. User preferences in CF techniques are in the form of either \textit{explicit feedback} (e.g., ratings, reviews, etc.) or \textit{implicit feedback} (e.g., browsing history, purchasing history, search patterns, etc.). While explicit feedback is more informative than its implicit alternative, it imposes a greater cognitive burden on users during elicitation, is subject to noisy self-reporting \cite{Amatriain2009}, and suffers from interpersonal comparison or \textit{calibration} issues \cite{s_b+s_c+2012,s_h+l_v+2009}. In contrast, implicit feedback naturally originates from user behavior, based on the assumption that a user's interaction with an item is a signal of interest in that item. Compared to explicit feedback, implicit feedback is abundant and more easily collected, as long as user-item interactions are observable. This abundance of implicit feedback has made collaborative filtering more appealing, though at the cost of some practical challenges. Implicit feedback lacks negative examples, as the absence of a user-item interaction is not necessarily indicative of user disinterest (e.g., the user may simply be unaware of the item). Also, the user-item interaction data for implicit feedback is large, yet severely \emph{sparse}.
It is even more sparse than explicit feedback data, as the unobserved user-item interactions are a mixture of both missing values and real negative feedback. Many attempts have been made to address these challenges with deep learning \cite{s_z+l_y+a_s+y_t+2019}. Deep neural networks, with their representation learning power, are effective in capturing non-linear and sparse user-item interactions for recommendation with implicit feedback. Multilayer perceptron (or feedforward) networks were (arguably) the first class of neural networks successfully applied to collaborative filtering \cite{h_c+l_k+j_h+2016,x_h+l_l+h_z+2017}. There has also been emerging interest in deploying variants of autoencoders, including classical \cite{z+z+w+j+2019}, denoising \cite{w+y+d+c+2016}, and variational \cite{l+x+s+j+2017,l+d+k+r+2018}. However, these solutions either do not capture the uncertainty of the latent representations \cite{z+z+w+j+2019,w+y+d+c+2016}, or solely focus on the latent representation of users \cite{l+x+s+j+2017,l+d+k+r+2018}. Our work intends to address these shortcomings. We present the \emph{joint variational autoencoder (JoVA)}, an ensemble of two variational autoencoders (VAEs). The two VAEs jointly learn both user and item representations while modeling their uncertainty, and then collectively reconstruct and predict user preferences. This design allows JoVA to capture user-user and item-item correlations simultaneously. We also introduce \emph{JoVA-Hinge}, a variant of JoVA which extends JoVA's objective function with a pairwise ranking loss to further specialize it for top-k recommendation with implicit feedback. Through extensive experiments on three real-world datasets, we show the accuracy improvements of our proposed solutions over a variety of state-of-the-art methods, under different metrics. JoVA-Hinge significantly outperforms other methods on sparse datasets (up to 34\% accuracy improvement).
Our experiments also demonstrate that JoVA-Hinge outperforms alternatives across all users with varying amounts of training data (including cold-start users). Our findings confirm that an ensemble of VAEs equipped with a pairwise loss improves recommendation with implicit feedback. Our proposed methods can potentially enhance other applications besides recommender systems. \section{Related Work} We review the related work on CF with implicit feedback. \subsection{Implicit Feedback Recommendation} Implicit feedback (e.g., clicking, browsing, or purchasing history) is a rich source of user preferences for recommender systems. This has motivated the development of various collaborative filtering methods that exploit implicit feedback for effective recommendation \cite{y_h+y_k+c_v+2008,he2016fast}. The key developments lie in either designing new models for capturing user-item interactions or devising novel objective functions for model learning. \emph{Matrix factorization (MF)} and its variants \cite{koren2008factorization,r_s+a_m+2008} are among the successful classical models and techniques deployed for CF. In MF, users and items are represented in a shared low-dimensional latent space. A user's interaction with an item is then computed as the inner product of their latent vectors. Several well-known methods have formulated the recommendation task as a ranking problem and/or optimized ranking losses. \emph{Bayesian personalized ranking (BPR)} \cite{s_r+c_f+z_g+l_s+2012}, by assuming that users prefer an interacted item to an uninteracted one, minimizes its pairwise ranking loss for model learning. Its pairwise loss is still deployed in many state-of-the-art methods; see, for example, \cite{he2016fast,y+f+g+g+2016}. \textit{CofiRank} \cite{weimer2008cofi} directly optimizes ranking metrics by fitting a maximum margin matrix factorization model \cite{srebro2005maximum}. \textit{EigenRank} \cite{liu2008eigenrank} optimizes a function based on Kendall rank correlation.
RankALS \cite{takacs2012alternating} minimizes a ranking objective function with the \textit{alternating least squares (ALS)} method. These classical methods, despite their success in recommendation, suffer from some limitations: (i) they fail to capture non-linear relationships between users and items; (ii) they cannot learn diverse user preferences, as they treat each dimension of the latent feature space in the same way; and (iii) they perform poorly on sparse datasets. \subsection{Deep Recommendation} Recently, deep learning has shown promise for recommendation tasks \cite{s_z+l_y+a_s+y_t+2019} by capturing more enriched representations of users, items, and their interactions. Neural collaborative filtering (NCF) \cite{x_h+l_l+h_z+2017} uses a multi-layer perceptron (MLP) to learn the user-item interaction function, and can be viewed as a generalization of matrix factorization. The Wide \& Deep model \cite{h_c+l_k+j_h+2016}---an app recommender for Google Play---consists of two components. The wide component is a generalized linear model that handles cross-product features, whereas the deep component extracts nonlinear relations among features. The model learns item features through a feed-forward neural network with embeddings. Another example of a neural network-based recommender system with implicit feedback is visual Bayesian personalized ranking (VBPR) \cite{he2016vbpr}, an extension of Bayesian personalized ranking with visual features. Most relevant to our work are recommender systems built on autoencoders or their variations. \textit{Collaborative deep ranking (CDR)} \cite{ying2016collaborative} jointly implements representation learning and collaborative ranking by employing stacked denoising autoencoders.
\textit{Joint collaborative autoencoder (JCA)} \cite{z+z+w+j+2019} deploys two separate classical autoencoders, jointly optimized only by a hinge loss function, for capturing user-user and item-item correlations. The proposed mini-batch optimization algorithm allows optimizing JCA without loading the entire rating matrix. Mult-VAE \cite{l+d+k+r+2018} is a collaborative filtering model for implicit feedback based on variational autoencoders. Mult-VAE uses a multinomial log-likelihood instead of the Gaussian likelihood. This work also proposed a regularization hyperparameter to control the trade-off between the reconstruction loss and the Kullback-Leibler (KL) loss in the objective function. Recently, RecVAE \cite{s+i+a+a+t+2019} proposed a new approach to optimizing this hyperparameter. Our work is closely related to both JCA and Mult-VAE, as we build on the strengths of these two. While JCA jointly optimizes two classical autoencoders, it does not capture the uncertainty of latent representations, and consequently does not benefit from the representation power of variational autoencoders. To address this, we deploy two separate variational autoencoders and jointly optimize them with our proposed loss function. Our loss function, by taking into account the two variational autoencoders' losses and a pairwise ranking loss, effectively tunes our deep learning models for recommendation with implicit feedback. Our proposed work, while differing from both JCA and Mult-VAE in both architecture and loss function, can be viewed as a powerful generalization of these two. \section{Preliminaries} Our goal is to provide personalized item recommendations in the presence of implicit feedback.
In this section, we formally define our problem and describe the VAE, which serves as a building block for our proposed model. \subsection{Recommendation with Implicit Feedback} We assume that a set of $n$ users $U$ can interact with a set of $m$ items $I$ (e.g., users click ads, purchase products, watch movies, or listen to music). We consider user-item interactions to be binary (e.g., a user has watched a specific movie or not), and represent them with the user \emph{implicit feedback matrix} $\mathbf{R} \in \{0, 1\}^{n\times m}$, where $\mathbf{R}_{ui}=1$ if the interaction of user $u$ with item $i$ is observed. As each row (or column) of the matrix corresponds to a specific user (or item), we let $\mathbf{R}_u$ and $\mathbf{R}^T_i$ denote user $u$'s and item $i$'s interaction vector, respectively. We also let $I_u^+ = \{i \in I \mbox{ } | \mbox{ } \mathbf{R}_{ui} = 1\}$ denote the set of items that user $u$ has interacted with, and $I_u^- = I \setminus I_u^+$ be the set of items that user $u$ has not yet interacted with. Our goal in top-k recommendation is to suggest the $k$ most preferred (or likely) items to user $u$ from $I_u^-$. To achieve this goal, we predict the likelihood of interaction between user $u$ and each item in $I_u^-$ (or the preference of user $u$ over $I_u^-$), and then select a ranked list of the $k$ items with the highest prediction scores to recommend to user $u$. Our learning task is to find a \textit{scoring (or likelihood) function} $f$ that predicts an \emph{interaction score} $\hat{r}_{ui}$ for each user $u$ and each unobserved item $i \in I_u^-$. If $\hat{r}_{ui} \in [0,1]$, it can be interpreted as the predicted likelihood of user $u$'s interaction with item $i$. The function $f$ is formulated as $\hat{r}_{ui}$ = $f(u,i|\boldsymbol{\theta})$, where $\boldsymbol{\theta}$ denotes the model parameters.
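As an illustration of this setup, the sketch below (hypothetical helper names, not from the paper's implementation) selects the top-$k$ unobserved items for one user from a vector of predicted scores:

```python
import numpy as np

def top_k_items(scores, interacted, k):
    # scores: 1-D array of predicted interaction scores r_hat_ui for one user
    # interacted: set of item indices in I_u^+ (must never be recommended)
    masked = scores.astype(float)            # astype returns a copy
    masked[list(interacted)] = -np.inf       # exclude observed items
    order = np.argsort(-masked)              # item indices, highest score first
    return order[:k].tolist()

scores = np.array([0.9, 0.1, 0.8, 0.4, 0.6])
print(top_k_items(scores, interacted={0}, k=2))  # -> [2, 4]
```

Item 0 has the highest score but is excluded because the user already interacted with it.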
Most model-based CF methods \cite{s+x+k+2009} differ from each other in the formulation of the scoring function $f$ or in the objective functions used for parameter learning. There are various formulations of the function $f$, ranging from deep networks \cite{s_z+l_y+a_s+y_t+2019} to matrix factorization \cite{koren2008factorization}. In general, the objective functions fall into two categories. \emph{Pointwise loss} \cite{x_h+l_l+h_z+2017,y_h+y_k+c_v+2008}, by treating an unobserved user-item interaction as a negative example, minimizes the error (or distance) between the predicted interaction score $\hat{r}_{ui}$ and its actual value $r_{ui}$. In contrast to pointwise loss, \emph{pairwise loss} \cite{s_r+c_f+z_g+l_s+2012,he2016vbpr} directly optimizes the ranking of user-item interactions, assuming that users prefer observed items to unobserved items. \subsection{Variational Autoencoder (VAE)} Our model uses the variational autoencoder (VAE) \cite{d+p+w+m+2014} as a building block. The VAE is a deep generative model for learning complex distributions. Each VAE, similar to classical autoencoders, consists of encoder and decoder networks. The encoder first encodes the inputs to latent representations, and then the decoder reconstructs the original inputs from the latent representations. However, the VAE differs from classical autoencoders by encoding an input as a distribution over latent representations (rather than a single point). This choice of probabilistic representation not only makes the VAE a generative model, but also reduces overfitting by forcing smoother latent representation transitions.
The encoder network of the VAE encodes the input $\mathbf{x}$ to a $d$-dimensional latent representation $\mathbf{z}$, which is a multivariate random variable with a prior distribution $p(\mathbf{z})$.\footnote{The common practice is to assume that $p(\mathbf{z})$ is a standard multivariate normal distribution: $\mathbf{z} \sim \mathcal{N}(\mathbf{0},\mathbf{I})$.} One can view the encoder as the posterior distribution $p_{\pmb{\phi}}(\mathbf{z}|\mathbf{x})$ parametrized by ${\pmb{\phi}}$. Since this posterior distribution is intractable, it is usually approximated by a variational distribution \cite{b+d+k+2017}: \begin{equation} q_{\pmb{\phi}}(\mathbf{z}|\mathbf{x}) = \mathcal{N}(\mu_{\pmb{\phi}}(\mathbf{x}),\sigma_{\pmb{\phi}}^2(\mathbf{x})\mathbf{I}), \end{equation} where the two multivariate functions $\mu_{\pmb{\phi}}(\mathbf{x})$ and $\sigma_{\pmb{\phi}}(\mathbf{x})$ map the input $\mathbf{x}$ to the mean and standard deviation vectors, respectively. In the VAE, $\mu_{\pmb{\phi}}(\mathbf{x})$ and $\sigma_{\pmb{\phi}}(\mathbf{x})$ are jointly formulated by the \emph{inference network} $f_{\pmb{\phi}}(\mathbf{x}) = [\mu_{\pmb{\phi}}(\mathbf{x}), \sigma_{\pmb{\phi}}(\mathbf{x})]$. The decoder network $p_{\pmb{\psi}}(\mathbf{x}|\mathbf{z})$, also known as the \emph{generative network}, takes $\mathbf{z}$ and outputs the probability distribution over the (reconstructed) input data $\mathbf{x}$.
Putting together the encoder and decoder networks, one can lower bound the log-likelihood of the input $\mathbf{x}$ by \begin{align*} \log p(\mathbf{x})&\geq \int q_{\pmb{\phi}}(\mathbf{z}|\mathbf{x}) \log \frac{p_{\pmb{\psi}}(\mathbf{x}|\mathbf{z}) p(\mathbf{z})}{q_{\pmb{\phi}}(\mathbf{z}|\mathbf{x})}d\mathbf{z}\\ &=E_{q_{{\pmb{\phi}}}(\mathbf{z}|\mathbf{x})}\left[\log p_{\pmb{\psi}}(\mathbf{x}|\mathbf{z})\right] - \kld{q_{\pmb{\phi}}(\mathbf{z}|\mathbf{x})}{p(\mathbf{z})}, \end{align*} where $\mathit{KL}$ is the Kullback-Leibler divergence, measuring the difference between the distribution $q_{\pmb{\phi}}(\mathbf{z}|\mathbf{x})$ and the unit Gaussian distribution $p(\mathbf{z})$. This lower bound, known as the \emph{evidence lower bound (ELBO)}, is maximized for learning the parameters of the encoder and decoder, ${\pmb{\phi}}$ and $\pmb{\psi}$, respectively. Equivalently, for learning the VAE parameters, one can minimize the negation of the ELBO as a loss function (see Eq.~\ref{eq:vae_loss}) by stochastic gradient descent with the reparameterization trick \cite{d+p+w+m+2014}: \begin{equation} L_{\scriptscriptstyle \mathtt{VAE}}(\mathbf{x}|\pmb{\theta}) = - E_{q_{\pmb{\phi}}(\mathbf{z}|\mathbf{x})}[\log p_{\pmb{\psi}} (\mathbf{x}|\mathbf{z})] + \kld{q_{\pmb{\phi}}(\mathbf{z}|\mathbf{x})}{p(\mathbf{z})}, \label{eq:vae_loss} \end{equation} where $\pmb{\theta}=[\pmb{\psi},\pmb{\phi}]$. This loss function can be viewed as a linear combination of the \emph{reconstruction loss} and the KL divergence, which serves as a regularization term.
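For a diagonal-Gaussian posterior and a standard-normal prior, both the KL term and the reparameterization trick have simple closed forms. The NumPy sketch below is a minimal illustration (not the paper's implementation):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    # the regularization term of the VAE loss above.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I), so that gradients can
    # flow through mu and log_var during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# KL vanishes exactly when the posterior equals the prior N(0, I)
print(kl_to_standard_normal(np.zeros(4), np.zeros(4)))  # -> 0.0
```

In practice, `mu` and `log_var` would be the outputs of the inference network $f_{\pmb{\phi}}$ for a given input.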
In this light, recent research \cite{l+d+k+r+2018,s+i+a+a+t+2019} has introduced a regularization hyperparameter $\alpha$ for controlling the trade-off between the regularization term and the reconstruction loss: \begin{equation} L_{\scriptscriptstyle \mathtt{VAE}}(\mathbf{x}|\pmb{\theta}, \alpha) = - E_{q_{\pmb{\phi}}}[\log p_{\pmb{\psi}} (\mathbf{x}|\mathbf{z})] + \alpha \kld{q_{\pmb{\phi}}(\mathbf{z}|\mathbf{x})}{p(\mathbf{z})}. \label{eq:vae_loss-beta} \end{equation} As our input data $\mathbf{x}$ is a binary vector (i.e., implicit feedback), we consider a logistic likelihood for the output of the VAE decoder. Defining $f_{\pmb{\psi}}(\mathbf{z}) = [o_i]$ as the output of the generative function of the decoder, the logistic log-likelihood for input $\mathbf{x}$ is \begin{equation} \log p_{\pmb{\psi}}(\mathbf{x}|\mathbf{z}) = \sum_i x_i \log \sigma (o_i) + (1 - x_i) \log (1 - \sigma (o_i)). \label{eq:logliklihood} \end{equation} Here, $\sigma (x) = 1/(1+\exp(-x))$ is the logistic function. This logistic likelihood renders the reconstruction loss as the cross-entropy loss. \section{Joint Variational Autoencoder (JoVA)} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{JoVA.pdf} \caption{Illustration of the JoVA model. User and item VAEs recover the input matrix independently. The final output is the average of these two reconstructed matrices.} \label{fig:jova-model} \vskip -0.125in \end{figure} We here detail our proposed \emph{joint variational autoencoder (JoVA)} framework and its variant \emph{JoVA-Hinge} for top-k recommendation with implicit feedback. We first discuss the model architecture of JoVA and then the objective functions used for parameter learning. \subsection{Model}\label{sec:JVA} Our model consists of two separate variational autoencoders: a \emph{user VAE} and an \emph{item VAE} (see Figure \ref{fig:jova-model}).
Given the implicit feedback matrix $\mathbf{R}$, the user VAE aims to reconstruct the matrix row-by-row, whereas the item VAE reconstructs it column-by-column. In other words, the user VAE takes and reconstructs each user vector $\mathbf{R}_u$ (i.e., a row of the matrix). Similarly, the item VAE takes and reconstructs each item vector $\mathbf{R}^T_i$ (i.e., a column of the matrix). These two VAEs independently and simultaneously complete the implicit feedback matrix. The final output of our model is the average of the two predicted implicit matrices: \begin{equation} \hat{\mathbf{R}} = \frac{1}{2} (\hat{\mathbf{R}}^{user} + \hat{\mathbf{R}}^{item}), \end{equation} where $\hat{\mathbf{R}}^{user}$ and $\hat{\mathbf{R}}^{item}$ are the implicit matrices predicted (or completed) by the user VAE and the item VAE, respectively. We note that $\hat{\mathbf{R}} \in [0,1]^{n\times m}$, where each $\hat{r}_{ui}$ represents the predicted likelihood that user $u$ interacts with item $i$. This natural probabilistic interpretation originates from our choice of logistic likelihood for the output of the VAEs (see Eq.~\ref{eq:logliklihood}). The parameters of the user VAE and item VAE are learned jointly with a joint objective function (see Sec. \ref{sec:obj}). The JoVA model is carefully designed to capture both user-user and item-item correlations. The item VAE encodes similar items close to each other in its latent representations to preserve their correlations, while the user VAE does the same for similar users. The joint optimization of these two VAEs helps calibrate them so that they complement each other in their predictions. Together, the item and user VAEs can learn complementary information from user-item interactions beyond what each could learn separately. This richer learning is a valuable asset for sparse datasets, as confirmed by our experiments in Sec.~\ref{sec:exp}. One can readily observe the connections between JoVA and ensemble learning.
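In code, the prediction step above is just elementwise averaging of the two reconstructions (a minimal sketch with illustrative names):

```python
import numpy as np

def jova_predict(r_hat_user, r_hat_item):
    # Average the user-VAE and item-VAE reconstructions of the
    # implicit feedback matrix; entries stay in [0, 1].
    return 0.5 * (r_hat_user + r_hat_item)

r_user = np.array([[0.8, 0.2], [0.4, 0.6]])
r_item = np.array([[0.6, 0.0], [0.2, 1.0]])
print(jova_predict(r_user, r_item))  # elementwise averages: 0.7, 0.1, 0.3, 0.8
```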
Similar to ensemble learning, JoVA combines the user VAE and item VAE into one learning framework for the final prediction. From this perspective, each VAE independently predicts the rating matrix, and the final prediction is the aggregation of the VAEs' predictions. The aggregation in JoVA is unweighted averaging, which has been shown to be a reliable aggregation method in ensembles of deep learning models \cite{j+c+b+a+2018}. This unweighted averaging can easily be extended to weighted averaging at the cost of tuning more hyper-parameters for each dataset, but with the promise of increased accuracy.\footnote{We have confirmed this in some experiments not reported in this paper.} Averaging the user VAE and item VAE predictions can reduce the expected variance of the neural network models and consequently the risk of overfitting, thus improving model accuracy. \subsection{Objective functions}\label{sec:obj} To learn the model parameters of JoVA, we consider two variants of the loss function. One naturally arises from the combination of the user and item variational autoencoders: \begin{equation} L_{\scriptscriptstyle \mathtt{JoVA}}(\mathbf{R}|\pmb{\theta},\alpha) = \sum_{u\in U} L_{\scriptscriptstyle \mathtt{VAE}}(\mathbf{R}_u|\pmb{\theta}_{U},\alpha) + \sum_{i \in I} L_{\scriptscriptstyle \mathtt{VAE}}( \mathbf{R}^T_i|\pmb{\theta}_{I}, \alpha). \end{equation} Here, $\pmb{\theta}_{U}$ and $\pmb{\theta}_{I}$ represent the model parameters of the user and item VAEs, respectively, and $L_{\scriptscriptstyle \mathtt{VAE}}$ is computed by Eq.~\ref{eq:vae_loss-beta} with the logistic likelihood of Eq.~\ref{eq:logliklihood}. To further specialize the JoVA model for top-k recommendation, we incorporate a pairwise ranking loss in its joint loss function.
Specifically, we introduce the \emph{JoVA-Hinge (JoVA-H)} loss function: \begin{equation} L_{\scriptscriptstyle \mathtt{JoVA-H}}(\mathbf{R}|\pmb{\theta},\alpha, \beta , \lambda) = L_{\scriptscriptstyle \mathtt{JoVA}}(\mathbf{R}|\pmb{\theta},\alpha) + \beta L_{\scriptscriptstyle \mathtt{H}}(\mathbf{R}|\pmb{\theta},\lambda), \label{eq:jova-hinge} \end{equation} where $$ L_{\scriptscriptstyle \mathtt{H}}(\mathbf{R}|\pmb{\theta}, \lambda) = \sum_{u\in U}\sum_{i\in I^+_u} \sum_{j\in I^-_u} \max (0, \hat{r}_{uj} - \hat{r}_{ui} +\lambda) $$ is the \emph{hinge loss function}, widely and successfully used as a pairwise ranking loss \cite{z+z+w+j+2019,w+j+b+s+2011,y+t+m+t+2016}. Here, $\hat{r}_{ui}$ is the predicted interaction score (or likelihood) of user $u$ for item $i$, and $\lambda$ is the margin hyperparameter for the hinge loss. The hinge loss is built upon the assumption that user $u$ prefers an interacted item $i \in I^+_u$ over an uninteracted item (or negative example) $j \in I^-_u$ by a margin of at least $\lambda$.\footnote{In practice, the hinge loss is usually computed over a sample of negative examples.} We introduce the hyperparameter $\beta$ to control the influence of the hinge loss on JoVA's objective function. \section{Experiments}\label{sec:exp} Our empirical experiments assess the effectiveness of our proposed methods JoVA and JoVA-Hinge for top-k recommendation with implicit feedback. We compare the accuracy of our methods (under various evaluation metrics) with an extensive set of state-of-the-art methods on three real-world datasets. We further study the effectiveness of our methods in handling cold-start users.
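As a concrete reference, the hinge term $L_{\scriptscriptstyle \mathtt{H}}$ defined in the previous section can be sketched as follows (a minimal illustration with hypothetical names; in practice the negative items are sampled):

```python
def hinge_loss(r_hat, pos, neg, margin):
    # Pairwise hinge loss for one user: penalize each pair of an observed
    # item i and an unobserved item j whose predicted scores are not
    # separated by at least `margin`.
    total = 0.0
    for i in pos:                # i in I_u^+
        for j in neg:            # j in I_u^- (a sample, in practice)
            total += max(0.0, r_hat[j] - r_hat[i] + margin)
    return total

scores = [0.9, 0.3, 0.2]         # items 0, 1 observed; item 2 unobserved
print(hinge_loss(scores, pos=[0, 1], neg=[2], margin=0.15))
```

Only the (1, 2) pair contributes here, since item 1's score exceeds item 2's by less than the margin.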
The source code is available online.\footnote{\url{https://github.com/bahareAskari/JoVA-Hinge.git}} \begin{table}[tb] \setlength\tabcolsep{5pt} \begin{center} \begin{tabular}{|l|l|l|l|l|} \hline \textbf{Dataset} &\textbf{\#User}& \textbf{\#Item}&\textbf{\#Interaction}&\textbf{Sparsity}\\ \hline \hline MovieLens&6,027&3,062 &574,026 &96.89\%\\ Yelp&12,705 &9,245 &318,314 &99.73\%\\ Pinterest&55,187&9,911&1,500,806&99.73\%\\ \hline \end{tabular} \vskip -0.05in \caption{The summary statistics of our tested datasets.} \label{tab:PPer} \end{center} \vskip -0.125in \end{table} \subsection{Evaluation Datasets} We report results obtained on three real-world datasets: MovieLens-1M (ML1M)\footnote{\url{http://files.grouplens.org/datasets/movielens/ml-1m.zip}.}, Pinterest\footnote{\url{https://sites.google.com/site/xueatalphabeta/academic-projects}}, and Yelp\footnote{\url{https://www.yelp.com/dataset/challenge}.}. Pinterest is a dataset with implicit feedback in which the interaction of a user with an image (i.e., item) is 1 if the user has pinned the image to their own board. Following previous work \cite{x_h+l_l+h_z+2017}, we kept only users with at least 20 interactions (pins). ML1M and Yelp originally include five-star user-item ratings. A user-item rating was converted to 1 if it was greater than or equal to 4, and to 0 otherwise. This method of converting explicit feedback to implicit feedback is a common practice; see, for example, \cite{l+h+w+j+2019,z+z+w+j+2019,l+d+k+r+2018}. Table \ref{tab:PPer} provides the summary statistics of these datasets after pre-processing. For each dataset, we randomly selected 80\% of all user-item interactions as the training set and equally split the remaining 20\% into testing and validation sets.
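The pre-processing described above can be sketched as follows (a minimal illustration of the binarization and 80/10/10 split; the names are ours, not from the released code):

```python
import random
import numpy as np

def binarize(ratings, threshold=4):
    # Explicit -> implicit feedback: 1 if rating >= threshold, else 0.
    return (np.asarray(ratings) >= threshold).astype(int)

def split_interactions(pairs, seed=0):
    # Randomly split user-item pairs: 80% train, 10% validation, 10% test.
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n = len(pairs)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return pairs[:n_train], pairs[n_train:n_train + n_val], pairs[n_train + n_val:]

print(binarize([5, 4, 3, 1]).tolist())  # -> [1, 1, 0, 0]
```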
\begin{figure*}[tb] \centering \begin{tabular}{@{\hspace{-7pt}}c@{\hspace{-1pt}}c@{\hspace{-1pt}}c} \includegraphics[width=0.33\textwidth]{mlpre.pdf} & \includegraphics[width=0.33\textwidth]{yelpp.pdf} & \includegraphics[width=0.33\textwidth]{pinterestp.pdf}\\ (a) Precision, MovieLens. &(b) Precision, Yelp. & (c) Precision, Pinterest.\\ \includegraphics[width=0.33\textwidth]{mlr.pdf} & \includegraphics[width=0.33\textwidth]{yelpr.pdf} & \includegraphics[width=0.33\textwidth]{pinterestr.pdf}\\ (d) Recall, MovieLens. & (e) Recall, Yelp. & (f) Recall, Pinterest.\\ \end{tabular} \vskip -0.05in \caption{Precision (a--c) and Recall (d--f) for all methods and three datasets of MovieLens, Yelp, and Pinterest.} \vskip -0.05in \label{fig:pre-rec-all} \end{figure*} \begin{table*}[tb] \small \setlength\tabcolsep{2.34pt} \begin{center} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \cline{2-19} \multicolumn{1}{c|}{}&\multicolumn{6}{c|}{\textbf{ML1M}}& \multicolumn{6}{c|}{\textbf{Yelp}}&\multicolumn{6}{c|}{\textbf{Pinterest}}\\ \cline{2-19} \multicolumn{1}{c|}{}&\multicolumn{3}{c|}{\textbf{F1-score}}& \multicolumn{3}{c|}{\textbf{NDCG}}& \multicolumn{3}{c|}{\textbf{F1-score}}& \multicolumn{3}{c|}{\textbf{NDCG}}& \multicolumn{3}{c|}{\textbf{F1-score}}& \multicolumn{3}{c|}{\textbf{NDCG}}\\ \cline{2-19} \multicolumn{1}{c|}{}&\textbf{@1} & \textbf{@5}& \textbf{@10}& \textbf{@1}& \textbf{@5}&\textbf{@10}&\textbf{@1}&\textbf{@5}&\textbf{@10}&\textbf{@1}& \textbf{@5}& \textbf{@10}& \textbf{@1}& \textbf{@5}& \textbf{@10}& \textbf{@1}& \textbf{@5}& \textbf{@10} \\ \hline \textbf{BPR}&.0410&.1285&.1698&.2843&.2549&.2434&.0065&.0180&.0219&.0166&.0223&.0301&.0120&.0292&.0333&.0328&.0312&.0414\\ \hline \textbf{NCF}&.0513&.1487&.1883&.2955&.2727&.2709&.0153&.0325&.0350&.0367&.0392&.0497&.0123&.0306&.0375&.0375&.0348&.0479\\ \hline \textbf{CDAE}&.0518&.1474&.1873&.3428&.2896&.2728&.0159&.0315&.0356&.0378&.0390&.0471&.0154&.0349&.0401&.0415&.0387&.0506\\ \hline 
\textbf{Mult-VAE}&.0518&.1420&.1801&.3428&.2886&.2695&.0148&.0317&.0344&.0350&.0381&.0465&.0153&.0349&.0402&.0466&.0397&.0504\\ \hline \textbf{FAWMF}&.0595&\textbf{.1661}&.2068&\textbf{.3775}&\textbf{.3176}&\textbf{.2991}&.0152&.0290&.0305&.0358&.0358&.0425&.0131&.0310&.0360&.0416&.0359&.0450\\ \hline \textbf{JCA} &\textbf{.0602}&.1634 & \textbf{.2080}&.3699&.3125&.2976&\textbf{.0160}&\textbf{.0350}&\textbf{.0376}&\textbf{.0405}&\textbf{.0440}&\textbf{.0537}&\textbf{.0150}&\textbf{.0383}&\textbf{.0456}&\textbf{.0448}&\textbf{.0424}&\textbf{.0557}\\ \hline \textbf{JoVA-H}&\textbf{.0624}&\textbf{.1665}&\textbf{.2115}& \textbf{.3718}&\textbf{.3143}&\textbf{.3013}&\textbf{.0201}&\textbf{.0391}&\textbf{.0401}&\textbf{.0449}&\textbf{.0483}&\textbf{.0581}&\textbf{.0200}&\textbf{.0471}&\textbf{.0542}&\textbf{.0604}& \textbf{.0532}&\textbf{.0678}\\ \hline\hline \textbf{\% improve}&3.65&0.24&1.68&-1.53&-1.04&0.73&25.62&11.71&6.64&10.86&9.77&8.19&33.33&22.97& 18.85&34.82&25.47&21.72\\ \hline \end{tabular} \caption{Performance of the baselines and JoVA-Hinge on three datasets under F1@k and NDCG@k metrics. 
The results of JoVA-Hinge and the best baselines are in bold.} \label{tab:my-table} \end{center} \vskip -0.125in \end{table*} \begin{table*} \small \setlength\tabcolsep{4.1pt} \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|} \cline{2-13} \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{\textbf{ML1M}}&\multicolumn{4}{c|}{\textbf{Yelp}}&\multicolumn{4}{c|}{\textbf{Pinterest}}\\ \hline &\textbf{P@1}&\textbf{R@1}&\textbf{F1@1}&\textbf{NDCG@1}&\textbf{P@1}&\textbf{R@1}&\textbf{F1@1}&\textbf{NDCG@1}&\textbf{P@1}&\textbf{R@1}&\textbf{F1@1}&\textbf{NDCG@1}\\ \hline \textbf{JoVA}&\cellcolor{lgreen}{0.3730}&0.0329&0.0605&\cellcolor{lgreen}{0.3730}&0.0420&0.0120&0.0180&0.0433&0.0571&0.0113&0.0189&0.0571\\ \hline \textbf{JoVA-H}& {0.3718}& {\cellcolor{lgreen}{0.0340}}&{\cellcolor{lgreen}{0.0624}}&{0.3718}&{\cellcolor{lgreen}{0.0449}} & {\cellcolor{lgreen}{0.0130}}&{\cellcolor{lgreen}{0.0201}}&{\cellcolor{lgreen}{{0.0449}}}&{\cellcolor{lgreen}{0.0604}}&{\cellcolor{lgreen}{0.0120}}& {\cellcolor{lgreen}{0.0200}}&{\cellcolor{lgreen}{0.0604}}\\ \hline \hline &\textbf{P@5}&\textbf{R@5}&\textbf{F1@5}&\textbf{NDCG@5}&\textbf{P@5}&\textbf{R@5}&\textbf{F1@5}&\textbf{NDCG@5}&\textbf{P@5}&\textbf{R@5}&\textbf{F1@5}&\textbf{NDCG@5}\\ \hline \textbf{JoVA}&0.2845&0.1169&0.1657&0.3135&0.0320&0.0430&0.0360&0.0449&0.0464&0.0459&0.0461&0.0516\\ \hline \textbf{JoVA-H}&{\cellcolor{lgreen}{0.2853}}&{\cellcolor{lgreen}{0.1176}}&{\cellcolor{lgreen}{0.1665}}&{\cellcolor{lgreen}{0.3143}}&{\cellcolor{lgreen}{0.0337}}&{\cellcolor{lgreen}{0.0464}}&{\cellcolor{lgreen}{0.0391}}&{\cellcolor{lgreen}{0.0483}}&{\cellcolor{lgreen}{0.0474}}&{\cellcolor{lgreen}{0.0468}}&{\cellcolor{lgreen}{0.0471}}&{\cellcolor{lgreen}{0.0532}}\\ \hline \hline &\textbf{P@10}&\textbf{R@10}&\textbf{F1@10}&\textbf{NDCG@10}&\textbf{P@10}&\textbf{R@10}&\textbf{F1@10}&\textbf{NDCG@10}&\textbf{P@10}&\textbf{R@10}&\textbf{F1@10}&\textbf{NDCG@10} \\ \hline 
\textbf{JoVA}&\cellcolor{lgreen}{0.2382}&0.1864&0.2092&0.2990&0.0272&0.0722&0.0395&0.0553&0.0406&0.0799&0.0538&0.0666\\ \hline \textbf{JoVA-H}&{0.2340}&{\cellcolor{lgreen}{0.1890}}&{\cellcolor{lgreen}{0.2115}}&{\cellcolor{lgreen}{0.3013}}&{\cellcolor{lgreen}{0.0281}}&{\cellcolor{lgreen}{0.0758}}&{\cellcolor{lgreen}{0.0401}}&{\cellcolor{lgreen}{0.0581}}&{\cellcolor{lgreen}{0.0409}}&{\cellcolor{lgreen}{0.0805}}&{\cellcolor{lgreen}{0.0542}}&{\cellcolor{lgreen}{0.0678}}\\ \hline \end{tabular} \end{center} \vskip -0.05in \caption{Performance of JoVA and JoVA-Hinge for various k, datasets, and metrics. Shaded cells show the best value.} \vskip -0.05in \label{tab:rankingLoss} \end{table*} \subsection{Evaluation Metrics} We utilize four commonly-used metrics to assess the quality of the ranked list $\omega_u$ predicted for user $u$. \emph{Precision@k (P@k)} quantifies what fraction of $u$'s recommended ranked list $\omega_u$ consists of $u$'s true preferred items: \begin{equation} P@k(\omega_u,I^*_u)= \frac{1}{k} \sum_{i=1}^k \ind{\omega_u(i) \in I^*_u}, \end{equation} where $\ind{.}$ is the indicator function, $\omega_u(i)$ is the $i^{th}$ ranked item in $\omega_u$, and $I^*_u$ is the set of $u$'s true preferred items in the held-out data. Similarly, \emph{Recall@k (R@k)} measures what fraction of $u$'s true preferred items $I^*_u$ are present in $u$'s recommended ranked list $\omega_u$: \begin{equation} R@k(\omega_u,I^*_u)= \frac{1}{|I^*_u|} \sum_{i=1}^k \ind{\omega_u(i) \in I^*_u}. \end{equation} \emph{F1-score@k (F1@k)}, by computing the harmonic mean of precision and recall, captures both of these metrics. It reaches its maximum of 1 if both precision and recall are perfect (i.e., have a value of 1): \begin{equation} F1@k(\omega_u,I^*_u) = \frac{2 \cdot P@k(\omega_u,I^*_u) \cdot R@k(\omega_u,I^*_u)}{P@k(\omega_u,I^*_u) + R@k(\omega_u,I^*_u)}. \end{equation} One shortcoming of P@k, R@k, and F1@k is giving the same importance to all items ranked within the first k.
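The first three metrics can be sketched directly from their definitions (a minimal illustration; `ranked` plays the role of $\omega_u$ and `relevant` of $I^*_u$):

```python
def precision_at_k(ranked, relevant, k):
    # Fraction of the top-k recommended items that are truly preferred.
    return sum(1 for item in ranked[:k] if item in relevant) / k

def recall_at_k(ranked, relevant, k):
    # Fraction of the truly preferred items that appear in the top k.
    return sum(1 for item in ranked[:k] if item in relevant) / len(relevant)

def f1_at_k(ranked, relevant, k):
    # Harmonic mean of precision@k and recall@k.
    p, r = precision_at_k(ranked, relevant, k), recall_at_k(ranked, relevant, k)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

ranked, relevant = [3, 1, 7, 2], {1, 2}
print(precision_at_k(ranked, relevant, 2),  # 1 of the top 2 is relevant -> 0.5
      recall_at_k(ranked, relevant, 2))     # 1 of 2 relevant items found -> 0.5
```

Note that all three weigh every position within the top $k$ equally.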
To address this, NDCG@k gives higher weight to higher-ranked items: \begin{equation*} NDCG@k(\omega_u,I^*_u) = \frac{1}{IDCG@k} \sum_{i=1}^k \frac{2^{\ind{\omega_u(i) \in I^*_u}} - 1}{\log_2(i+1)}, \end{equation*} where $IDCG@k = \sum_{i=1}^{k}(1/\log_2 (i+1))$ normalizes NDCG to a maximum of 1. In our experiments, we report the average of these metrics, where the average is taken over all testing users. \subsection{Baselines} To evaluate the effectiveness of our methods, we compare them against various state-of-the-art recommendation methods. \vskip 1.5mm \noindent \textbf{BPR} \cite{s_r+c_f+z_g+l_s+2012} optimizes a matrix factorization model with a pairwise ranking loss. \vskip 1.5mm \noindent \textbf{CDAE} \cite{w+y+d+c+2016}, by extending the denoising autoencoder, assumes that observed ratings are corrupted versions of users' preferences. \vskip 1.5mm \noindent \textbf{Mult-VAE} \cite{l+d+k+r+2018} is a model with only one VAE, which deploys a multinomial distribution for the output of the decoder. \vskip 1.5mm \noindent \textbf{NCF} \cite{x_h+l_l+h_z+2017} learns the user-item interaction function by combining MF and multi-layer perceptrons with a binary cross-entropy loss function. \vskip 1.5mm \noindent \textbf{JCA} \cite{z+z+w+j+2019} deploys two classical autoencoders for modeling users and items, and uses only a pairwise hinge loss function. \vskip 1.5mm \noindent \textbf{FAWMF} \cite{c+j+w+c+2020} is an adaptive weighted matrix factorization method based on a variational autoencoder. For all baselines, we used the implementations and optimal parameter settings reported in the original papers. \subsection{Setup and Parameter Settings} For learning all the models, we used the Adam optimizer with a learning rate of 0.003. For our models, as in \cite{z+z+w+j+2019}, we decomposed the training rating matrix into several small matrices, each of which is treated as a mini-batch. Each mini-batch is set to encompass 1500 rows and 1500 columns.
We ran a grid search on the hyperparameters and evaluated the candidates on the validation sets. We set $\lambda=0.15$ and $\alpha=0.01$ for all experiments, but picked $\beta$ individually for each dataset: $\beta = 0.001$ for Yelp, and $\beta = 0.01$ for both MovieLens and Pinterest. We randomly sampled one negative instance per positive instance in each epoch. For each encoder and decoder, we used two hidden layers, each with 320 dimensions and tanh activation functions, while the sigmoid activation function was used for the output layers. For both VAEs, we set the dimension of the latent representation $d$ to 80. \subsection{Performance Comparison} \begin{figure*}[tb] \centering \begin{tabular}{@{\hspace{-7pt}}c@{\hspace{-1pt}}c@{\hspace{-1pt}}c} \includegraphics[width=0.33\textwidth]{p5less.pdf} & \includegraphics[width=0.33\textwidth]{f15less.pdf} & \includegraphics[width=0.33\textwidth]{ndcg5less.pdf}\\ (a) Precision@5 &(b) F1-Score@5& (c) NDCG@5\\ \end{tabular} \vskip -0.05in \caption{The average performance of users with a varying number of positive examples in their training data, on the MovieLens dataset, under various evaluation metrics: (a) P@5, (b) F1@5, and (c) NDCG@5.} \label{fig:cold} \vskip -0.125in \end{figure*} We compare the top-k recommendation performance of our models and the baselines with various $k \in \{1, 5, 10\}$ on the different datasets. Figure \ref{fig:pre-rec-all} illustrates the performance of all methods under precision@k and recall@k for various $k$ and datasets. We first notice that the neural network-based methods outperform the traditional ranking approach BPR, which suggests the effectiveness of non-linear features in learning the user-item interaction function. JoVA-Hinge outperforms all methods (with a few small exceptions) on all datasets for both precision@k and recall@k. For a more holistic performance analysis of all methods, Table \ref{tab:my-table} reports the F1-score and NDCG for all datasets and methods.
Our JoVA-Hinge model outperforms the others in the F1 measure on all three datasets and various k. Compared with the best baseline (JCA), F1-score@k is improved by up to 3.65\% on ML1M, 25.62\% on Yelp, and 33.33\% on Pinterest. For NDCG, JoVA-Hinge also outperforms the others significantly on the two datasets of Yelp and Pinterest. On Yelp, the minimum improvement is 8.19\% (for $k=10$) and the maximum improvement is 10.86\% (for $k=1$). JoVA-Hinge shows even higher improvement on Pinterest, with a minimum of 21.72\% (for $k=10$) and a maximum of 34.82\% (for $k=1$). For ML1M and NDCG, the performance of JoVA-Hinge is comparable to that of the best baseline, FAWMF. Cross-examination of Tables \ref{tab:PPer} and \ref{tab:my-table} suggests that our JoVA-H model significantly improves on the accuracy of the state-of-the-art methods, in terms of both F1 and NDCG, for the sparser datasets (i.e., Yelp and Pinterest). Our results also suggest that JoVA-Hinge offers more significant improvement for smaller $k$ (e.g., $k=1$ or $k=5$), which is of special practical interest for reducing the cognitive burden on users when the recommendation slate is small. \subsection{Effect of Pairwise Ranking Loss} We aim to understand whether both the variational autoencoders and the pairwise loss function have contributed to the success of JoVA-Hinge. In other words, we are interested in assessing the effectiveness of the pairwise ranking loss in improving our model's accuracy. Thus, we compare the performance of JoVA-Hinge with JoVA in Table \ref{tab:rankingLoss} on the three datasets and under four evaluation metrics. Our experiments illustrate that the pairwise ranking loss combined with the VAE losses improves accuracy in almost every case (except for P@1, P@10, and NDCG@1 on MovieLens). This finding suggests that, by combining the hinge loss function with VAEs, one can take advantage of capturing the relative distance between positive and negative items to learn more informative latent representations.
We believe this successful marriage of VAEs and pairwise loss functions can be extended to other models built on VAE building blocks, even in other applications (e.g., vision, speech, etc.). \subsection{Cold-Start and Data Sparsity} Data sparsity and cold-start problems---dealing with users and items with few or no interactions---are practical challenges in recommender systems. We aim to understand how the accuracy of recommendation changes for users with different numbers of user-item interactions (i.e., positive examples). We study the average accuracy of users with at most $L$ user-item interactions in the training data while increasing $L$. This setting allows us to study not only cold-start users with small $L$ (e.g., $L=10$), but also how greater availability of user-item interactions affects the accuracy of recommendation. Figure \ref{fig:cold} shows the performance of the top four methods of the previous experiments (i.e., Mult-VAE, FAWMF, JCA, and JoVA-Hinge) under different metrics as $L$ increases.\footnote{The results for $k=1$ and $k=10$ were qualitatively similar; they are not included due to space constraints.} Unsurprisingly, the performance of all methods increases with greater availability of user-item interactions. However, JoVA-Hinge outperforms the other methods not only for users with a low number of user-item interactions (i.e., cold-start users), but also for well-established users. This suggests that the overall success of JoVA-Hinge is not limited to a specific class of users, and that users with various numbers of user-item interactions can all benefit from its predictive power. \section{Concluding Remarks and Future Work} We have introduced the joint variational autoencoder (JoVA) for top-k recommendation with implicit feedback. JoVA, as an ensemble of two VAEs, simultaneously and jointly learns user-user and item-item correlations.
A variant of JoVA, referred to as JoVA-Hinge, includes a pairwise ranking loss in addition to the VAE losses to further specialize JoVA for recommendation with implicit feedback. Our empirical experiments on three real-world datasets show that JoVA-Hinge improves recommendation accuracy compared to a broad set of state-of-the-art methods, under various evaluation metrics. Additionally, our experiments demonstrate that JoVA-Hinge's advantage holds across all users, regardless of their number of observed interactions. Our JoVA model provides a solid framework for the broader investigation of ensembles of VAEs equipped with pairwise ranking losses in recommender systems, or possibly in other applications (e.g., vision, robotics, etc.). One could explore extending JoVA-Hinge to incorporate user and item features (e.g., descriptions, demographic information, etc.), side information (e.g., social networks), context (e.g., time, location, etc.), or non-stationary user preferences.
\section{Introduction}\label{sec:one} The parton distribution function (PDF) content of the nucleon is usually determined from global fits to experimental data at large momentum transfer Q$^2$. Over the past decade, our knowledge of the quark and gluon substructure of the nucleon has been extensively improved thanks to the high-energy scattering data from fixed-target experiments, the data from the $ep$ collider HERA~\cite{Abt:2016zth,Abramowicz:2015mha,South:2016cmx}, and also from high-energy $p \bar p$ scattering at the Tevatron~\cite{Abazov:2008ae,Aaltonen:2008eq}. More recently, the data taken from various channels in $pp$ collisions at the CERN LHC play a major role in constraining the sea-quark and gluon distributions in the proton~\cite{Rojo:2015acz}. In recent years, various up-to-date efforts have been made to extract more complete information about the nucleon's quark and gluon structure in the form of parton distribution functions, both for the unpolarized PDFs~\cite{Alekhin:2017kpj,Ball:2014uwa,Harland-Lang:2014zoa,Jimenez-Delgado:2014twa,Dulat:2015mca,Khanpour:2016uxh,MoosaviNejad:2016ebo,Goharipour:2017rjl} and for the polarized PDFs~\cite{Jimenez-Delgado:2014xza,Sato:2016tuz,Shahri:2016uzl,Khanpour:2017cha,Ethier:2017zbq,Khanpour:2017fey,AtashbarTehrani:2013qea}. These analyses are mainly focused on the extraction of the parton distribution functions at small and large values of $x$ up to next-to-next-to-leading order (NNLO) accuracy. Similar efforts have also been performed for the cases of fragmentation functions (FFs)~\cite{Nejad:2015fdh,deFlorian:2017lwf,MoosaviNejad:2016qdx,Ethier:2017zbq,Bertone:2017tyb,Zarei:2015jvh,Boroun:2015aya,Boroun:2016zql}, nuclear PDFs~\cite{Klasen:2017kwb,Eskola:2016oht,Khanpour:2016pph,Kovarik:2015cma,Goharipour:2017uic} and generalized parton distributions (GPDs)~\cite{Kumericki:2016ehc,Khanpour:2017slc,Kumericki:2009uq,Mueller:2013caa}. 
Since the Gottfried sum rule~\cite{Gottfried:1967kk} was proposed in 1967, many experimental and theoretical studies have been performed to test its validity or violation, and also to study the antiquark flavor asymmetry $ \bar d-\bar u $ in the nucleon sea (see Ref.~\cite{Chang:2014jba} and references therein). If we assume that the $\bar u$ and $\bar d$ distributions in the nucleon are equal and that isospin invariance holds, the Gottfried sum rule is obtained by integrating the difference between the $F_2$ structure functions of the proton and neutron over $ x $ as $I_G\equiv \int^1_0 [F^p_2(x)-F^n_2(x)]/x~dx = 1/3$, where $ x $ is the Bjorken scaling variable. However, assuming a flavor asymmetry of the nucleon sea, the Gottfried sum rule is violated by an extra term, $2/3\int^1_0 [\bar u(x) - \bar d(x)] dx$. In this way, if there is a $ \bar d $ excess over $ \bar u $ in the nucleon, we expect a value for the Gottfried sum smaller than $ 1/3 $. In 1991, the New Muon Collaboration (NMC) obtained the value $I_G = 0.235 \pm 0.026$ by measuring the proton and deuteron $ F_2 $ structure functions~\cite{Amaudruz:1991at} in deep-inelastic muon scattering on hydrogen and deuterium targets, which is approximately 28\% smaller than the Gottfried sum. This measurement provided the first clear evidence for the breaking of this sum rule. In addition to the deep-inelastic scattering (DIS) experiments, the violation of the Gottfried sum rule can be investigated from semi-inclusive DIS (SIDIS) and Drell-Yan cross section measurements. Such a study was performed by the HERMES collaboration~\cite{Ackerstaff:1998sr} in the SIDIS case, reporting a measurement of $\bar d(x) - \bar u(x)$ over the range $0.02 < x < 0.3$, but with a rather large experimental uncertainty. 
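The arithmetic connecting the measured Gottfried sum to the integrated light-antiquark asymmetry is a one-line check; the following minimal sketch inverts the modified sum rule quoted above using the NMC value:

```python
# Modified Gottfried sum rule:
#   I_G = 1/3 + (2/3) * Int_0^1 [ubar(x) - dbar(x)] dx
#       = 1/3 - (2/3) * Int_0^1 [dbar(x) - ubar(x)] dx
# Solving for the integrated asymmetry implied by the NMC measurement.
I_G_symmetric = 1.0 / 3.0   # expectation for a flavor-symmetric sea
I_G_NMC = 0.235             # NMC 1991 measurement

asym_integral = 1.5 * (I_G_symmetric - I_G_NMC)
print(f"implied Int [dbar(x) - ubar(x)] dx ~ {asym_integral:.3f}")
```

An integrated $\bar d$ excess of roughly $0.15$ thus accounts for the measured deficit relative to $1/3$.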
On the other hand, the NA51~\cite{Baldit:1994jk} and FNAL E866/NuSea~\cite{Hawker:1998ty} collaborations studied this violation by measuring $pp$ and $pd$ Drell-Yan processes and again established that there is a $\bar d$ excess over $\bar u$ in the nucleon sea. However, the ratio $\bar d / \bar u$ was measured only at the mean value $\langle x\rangle = 0.18$ in the NA51 experiment. The $x$-dependence of this ratio and the $\bar d(x) - \bar u(x)$ flavor asymmetry have also been measured over the kinematic region $0.015 < x < 0.35$ in the Fermilab E866 experiment. In addition to the violation of the Gottfried sum rule and the existence of the $\bar d-\bar u$ flavor asymmetry in the nucleon sea, another important result can be drawn from the Fermilab E866 data. In fact, the last data point, despite its large uncertainty, suggests a sign-change of the $\bar d(x) - \bar u(x)$ distribution at $x \sim 0.3$. To be more precise, it indicates that this distribution must be negative at $x$ values approximately larger than $0.3$. This is a very important issue, because the perturbative regime of quantum chromodynamics (QCD) cannot lead to a remarkable flavor asymmetry in the nucleon sea. Furthermore, according to the studies performed so far (for a review see Refs.~\cite{Chang:2014jba,Kumano:1997cy,Garvey:2001yq,Peng:2014hta}), the current theoretical models, regardless of their ability to describe an enhancement of $\bar d$ over $\bar u$, cannot predict a negative value for the $\bar d(x) - \bar u(x)$ distribution at any value of $x$. 
These theoretical studies are based on various models, such as the Pauli-blocking~\cite{Field:1976ve,Schreiber:1991tc,Buccella:1992zs,Steffens:1996bc}, meson-cloud~\cite{Thomas:1983fh,Speth:1996pz,Alwall:2005xd,Traini:2013zqa}, chiral-quark~\cite{Szczurek:1996tp,Song:2011fc,Salamu:2014pka}, chiral-quark soliton~\cite{Pobylitsa:1998tk,Wakamatsu:2003wg,Wakamatsu:2009fn,Wakamatsu:2014asa}, intrinsic sea~\cite{Chang:2011vx,Chang:2011du,Chang:2014lea} and statistical~\cite{Bourrely:1994nm,Bourrely:2005kw,Zhang:2008nr} models. Except for the Pauli-blocking model, which invokes a perturbative mechanism to describe the enhancement of $\bar d$ over $\bar u$, these models attribute the effect to a nonperturbative origin and are largely successful. The Pauli-blocking model, however, fails to reproduce the $\bar d(x) - \bar u(x)$ distribution when compared with the experimental data. Recently, Peng \textit{et al.}~\cite{Peng:2014uea} have presented independent evidence for the $\bar d(x) - \bar u(x)$ sign-change at $x \sim 0.3$ by analyzing DIS data. They showed that, in addition to the Drell-Yan data, the analysis of the NMC DIS data for $F^p_2 - F^n_2$~\cite{Amaudruz:1991at} and $F_2^d/F_2^p$~\cite{Arneodo:1996qe} can also lead to a negative value of $\bar d(x) - \bar u(x)$ at $x \gtrsim 0.3$. They have also discussed the significance of this sign-change and the fact that none of the current theoretical models can predict this asymmetry. The future Drell-Yan experiments at J-PARC (P04)~\cite{jpar} and Fermilab (E906)~\cite{fermi} will provide more accurate information on the $\bar d-\bar u$ flavor asymmetry, especially at larger values of $x$. This motivates the present study. In this paper, following the studies performed by Peng \textit{et al.} for the extraction of $\bar d(x) - \bar u(x)$, we first investigate whether such behavior can be seen in the analysis of data from other experiments. 
If so, we determine the approximate position in $x$ of the $\bar d(x) - \bar u(x)$ sign-change and estimate the magnitude of its negative area. In addition, since our study involves low Q$^2$ and high values of $x$, where the target mass corrections (TMCs) and higher-twist (HT) effects are significant, we extend our analysis by including these nonperturbative contributions. We then calculate the $\bar d(x) - \bar u(x)$ distribution using the Brodsky, Hoyer, Peterson, and Sakai (BHPS) model~\cite{Brodsky:1980pb} and show that the available experimental data for this quantity suggest a smaller mass for the down quark than for the up quark in the BHPS formalism. Note that this is in contrast to the previous studies in this context~\cite{Chang:2011vx,Chang:2011du,Chang:2014lea}, where equal masses were assumed for the down and up quarks in the proton. This mass difference leads to a sign-change of $\bar d(x) - \bar u(x)$ when we evolve this quantity to the scale of the experimental data~\cite{Hawker:1998ty}. The paper is organized as follows: we compare the Fermilab E866~\cite{Hawker:1998ty} data with the predictions of the latest parton distribution functions from various groups and also extract $\bar d(x) - \bar u(x)$ using the updated CLAS collaboration data for the $ F_2^n/F_2^p $ ratio in Sec.~\ref{sec:two}. This section also includes detailed discussions of the nuclear corrections as well as the effects arising from the nonperturbative TMC and HT terms. In Sec.~\ref{sec:three}, we briefly introduce the BHPS model and explain the idea of choosing a smaller mass for the down quark than for the up quark in the BHPS formalism. Then, we support our claim and determine the masses of the down and up sea quarks by fitting the available experimental data for $\bar d(x) - \bar u(x)$. Finally, we summarize our results and present our conclusions in Sec.~\ref{sec:four}. 
The Appendix presents our {\tt FORTRAN} package containing the $\bar d$ and $\bar u$ intrinsic distributions obtained from the BHPS model. \section{$\bar d(x) - \bar u(x)$ from recent CLAS data}\label{sec:two} In recent years, our knowledge of the nucleon structure has developed to a large extent, but it is still not complete. In this respect, an updated global analysis of PDFs including a broad range of experimental data from various observables, together with theoretical improvements, can play an important role. In such theoretical studies, an independent parametrization is generally chosen for the $\bar d(x) - \bar u(x)$ distribution in the global analysis of PDFs at the initial scale $Q_0$. Fig.~\ref{fig:fig1} shows the $\bar d(x) - \bar u(x)$ data from HERMES and Fermilab E866 at Q$^2 = 2.5$ and $54$ GeV$^2$, respectively, compared with the NNLO theoretical predictions of the JR14~\cite{Jimenez-Delgado:2014twa}, NNPDF3.0~\cite{Ball:2014uwa}, MMHT14~\cite{Harland-Lang:2014zoa} and CT14~\cite{Dulat:2015mca} PDFs at Q$^2 = 54$ GeV$^2$. Although all predictions are in good agreement with these data, they differ substantially from one another. For example, unlike the other PDF sets, the JR14 parametrization does not allow a sign-change of $\bar d(x) - \bar u(x)$ at large $x$, while the CT14 parametrization predicts $\bar d(x) - \bar u(x) < 0$ in the small-$x$ region. Another important conclusion can be drawn from the E866 data. As is clear from Fig.~\ref{fig:fig1}, the last data point, despite its large uncertainty, indicates that $\bar d(x) - \bar u(x)$ must be negative at $x$ values approximately larger than $0.3$. Recently, Peng \textit{et al.}~\cite{Peng:2014uea} showed that, in addition to the Drell-Yan data, there is independent evidence for the $\bar d(x) - \bar u(x)$ sign-change at $x \sim 0.3$. 
Their results were obtained by analyzing the NMC DIS data for $F^p_2 - F^n_2$~\cite{Amaudruz:1991at} and $F_2^d/F_2^p$~\cite{Arneodo:1996qe}. In this section, we investigate whether such behavior can be seen in the analysis of data from other experiments, such as the Barely Off-shell Nucleon Structure (BONuS) experiment at Jefferson Lab. In this way, we can determine the position of the $\bar d(x) - \bar u(x)$ sign-change in $x$, and it is also possible to estimate the magnitude of its negative area. \begin{figure}[t!] \centering \vspace{0.5 cm} \includegraphics[width=8.0cm]{fig1.eps} \caption{ A comparison between the HERMES~\cite{Ackerstaff:1998sr} and Fermilab E866~\cite{Hawker:1998ty} data for $\bar d(x) - \bar u(x)$ and the NNLO theoretical predictions of the JR14~\cite{Jimenez-Delgado:2014twa}, NNPDF3.0~\cite{Ball:2014uwa}, MMHT14~\cite{Harland-Lang:2014zoa} and CT14~\cite{Dulat:2015mca} PDFs at Q$^2 = 54 \, {\rm GeV}^2$. } \label{fig:fig1} \end{figure} From the parton model, one knows that the $F_2^{p,n}$ structure function of the nucleon at leading order (LO) in the strong coupling constant $\alpha_s$ is expressed as an expansion in the parton distributions $f_i(x)$, $F_2^{p,n}(x) = \sum_i e_i^2 \, xf_i(x) $, where $i$ denotes the flavor of the quarks and $e_i$ is the charge of the $i$th quark. It should be noted that, in general, the parton distributions, and consequently the structure functions, depend on the four-momentum transfer squared $Q^2$. Now, if we adopt the charge symmetry of the parton distributions in the proton and neutron and also assume that the perturbatively generated $s, c, b$ quark distributions are equal in the different nucleons, the following relation is obtained for $F^p_2 - F^n_2$ at LO \begin{equation} F^p_2(x) - F^n_2(x) = \frac{1}{3} x[u(x) + \bar u(x) - d(x) - \bar d(x)]. 
\label{eq1} \end{equation} Consequently, using the definition of the valence-quark distribution, $q_v = q - \bar q$, the above relation can be used to extract $\bar d(x) - \bar u(x)$ as follows \begin{equation} \bar d(x) - \bar u(x) = \frac{1}{2} [u_v(x) - d_v(x)] - \frac{3}{2x}[F^p_2(x) - F^n_2(x)]. \label{eq2} \end{equation} According to Eq.~\eqref{eq2}, given the two quantities $u_v(x) - d_v(x)$ and $F^p_2(x) - F^n_2(x)$ at a given value of $x$, one can extract the $\bar d(x) - \bar u(x)$ flavor asymmetry. For the first term in Eq.~(\ref{eq2}), we can use the related parameterizations from the various PDFs~\cite{Jimenez-Delgado:2014twa,Ball:2014uwa,Harland-Lang:2014zoa,Dulat:2015mca}, and the last term ($F^p_2(x) - F^n_2(x)$) can be calculated, for example, from the new CLAS Collaboration data reported for $F_2^n/F_2^p$~\cite{Tkachenko:2014byy}. Since we are looking for a possible sign-change of $\bar d(x) - \bar u(x)$ at large $x$, in this work we use the NNLO JR14 parametrization~\cite{Jimenez-Delgado:2014twa} for $u_v - d_v$, whose prediction for $\bar d(x) - \bar u(x)$ is clearly positive for all $x$, as seen in Fig.~\ref{fig:fig1}. In this way, if the sign-change occurs, we ensure that it does not result from the selected PDFs. On the other hand, the CLAS Collaboration~\cite{Tkachenko:2014byy} has recently published data for the neutron structure function $F_2^n$ and its ratio to the inclusive deuteron structure function $(F_2^n/F_2^d)$, as well as an updated extraction of Ref.~\cite{Baillie:2011za} for the ratio $R(x) = F_2^n/F_2^p$ from the BONuS experiment at Jefferson Lab. The data cover both the resonance and deep-inelastic regions, including a wide range of $x$ for Q$^2$ between $0.7$ and $5$ GeV$^2$ and invariant mass $W$ between $1$ and $2.7$ GeV. 
In this way, the term $F^p_2(x) - F^n_2 (x)$ in Eq.~\eqref{eq2} can be calculated from the data for the ratio $R(x)$ by using the parametrization of $F_2^d(x)$ from Ref.~\cite{Arneodo:1995cq}, according to the following relation \begin{equation} F^p_2 - F^n_2 = 2F_2^d(1 - F_2^n/F_2^p)/(1 + F_2^n/F_2^p). \label{eq3} \end{equation} \begin{figure}[t!] \centering \vspace{0.5 cm} \includegraphics[width=8.0cm]{fig2.eps} \caption{ The $\bar d(x) - \bar u(x)$ flavor asymmetry as a function of $x$. The results are obtained from the NNLO JR14 parametrization~\cite{Jimenez-Delgado:2014twa} and the CLAS data~\cite{Tkachenko:2014byy} for three lower cuts on the final-state invariant mass $W^*$. A detailed explanation is given in the text. } \label{fig:fig2} \end{figure} Fig.~\ref{fig:fig2} shows our final results for the $\bar d(x) - \bar u(x)$ distribution for three lower cuts on the final-state invariant mass: $W^*>1.4$ GeV (blue circles), $W^*>1.6$ GeV (red squares) and $W^*>1.8 $ GeV (green diamonds). Note that, since the CLAS data are $Q^2$-dependent rather than given at a fixed value of Q$^2$, we have allowed all quantities in Eqs.~\eqref{eq2} and \eqref{eq3} to be $Q^2$-dependent as well. Therefore, the extracted $\bar d (x)- \bar u(x)$ data points in $x$ correspond to different $Q^2$ values, approximately between $1$ and $4.5$ GeV$^2$. For example, for the case $W^*>1.6 $ GeV, the first and last data points correspond to $Q^2=1.086$ and $4.259$ GeV$^2$, respectively. Alternatively, one could choose an average value for all data, i.e. $Q^2=2.1$ GeV$^2$. We examined this simplification and found that it leads to an overall reduction in the magnitude of $\bar d(x)- \bar u(x)$, specifically at small and large values of $x$. The related results are shown in Fig.~\ref{fig:fig2} as black triangles. 
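The extraction chain of Eqs.~\eqref{eq2} and \eqref{eq3} is straightforward to implement; the sketch below encodes both relations, with illustrative placeholder inputs rather than actual CLAS or JR14 values:

```python
def f2p_minus_f2n(R, F2d):
    """Eq. (3): F2^p - F2^n from the ratio R = F2^n/F2^p and the deuteron F2^d."""
    return 2.0 * F2d * (1.0 - R) / (1.0 + R)

def dbar_minus_ubar(x, uv_minus_dv, R, F2d):
    """Eq. (2): flavor asymmetry from the valence difference u_v - d_v
    and the structure-function difference of Eq. (3)."""
    return 0.5 * uv_minus_dv - 1.5 / x * f2p_minus_f2n(R, F2d)

# Illustrative placeholder inputs at a single x point (not real data):
x, uv_minus_dv, R, F2d = 0.3, 0.60, 0.95, 0.17
print(f"dbar - ubar at x={x}: {dbar_minus_ubar(x, uv_minus_dv, R, F2d):.4f}")
```

Note that for $R = 1$ (equal proton and neutron structure functions) the second term vanishes and the asymmetry reduces to half the valence difference, as Eq.~\eqref{eq2} requires.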
To estimate the uncertainties, we have included both the uncertainties of $F_2^n/F_2^p$ and $F_2^d$ in our calculation of $F^p_2 - F^n_2$ via Eq.~\eqref{eq3}, as well as the JR14 PDF uncertainties in the extraction of $\bar d (x)- \bar u(x)$ using Eq.~\eqref{eq2}. As can be seen from Fig.~\ref{fig:fig2}, the high-quality data from the BONuS experiment lead to rather small uncertainties. It should be noted that Eq.~(\ref{eq2}) is derived in the LO approximation, but in our analysis, shown in Fig.~\ref{fig:fig2}, we used the NNLO PDF parametrization for higher accuracy. However, as we show in Fig.~\ref{fig:dbarmubarCLAS}, if one uses the LO PDF parameterization from CT14~\cite{Dulat:2015mca}, the results show a sign-change as well. \begin{figure}[t!] \centering \vspace{0.5 cm} \includegraphics[width=8.0cm]{dbarmubarCLASCT14.eps} \caption{ As in Fig.~\ref{fig:fig2}, but obtained from the LO CT14 parametrization~\cite{Dulat:2015mca} using the CLAS data~\cite{Tkachenko:2014byy}. The plot corresponds to three lower cuts on the final-state invariant mass $W^*$. } \label{fig:dbarmubarCLAS} \end{figure} The last important issue to be considered in our analysis is the effect of the nonperturbative target mass corrections (TMCs) and higher-twist (HT) terms. In the low-Q$^2$ region, the nucleon mass correction cannot be neglected, and the power-suppressed corrections to the structure functions can make an important contribution in some kinematical regions. In addition to the TMCs, which are of purely kinematical origin, the structure functions also receive remarkable contributions from HT terms, whose contributions become increasingly important at large values of $x$. In this respect, we examine the stability and reliability of our results by including the TMCs as well as the HT terms, which are particularly important in the large-$x$ region and at low Q$^2$. 
Actually, since the CLAS measurements belong to the kinematical regions of $W \approx 2.7$ GeV and $Q^2 \approx 1 - 5$ GeV$^2$, and Eq.~\eqref{eq1} might be too naive for the data points at such low $W$ and $Q^2$, we should check the validity of our results by considering both the TMCs and the HT term. In this regard, we follow the formalism presented in Refs.~\cite{Schienbein:2007gr} and \cite{Accardi:2009br} in order to take into account the TMC and HT corrections in the structure functions of Eq.~\eqref{eq1}. It should also be noted that, for calculating the HT effect, we use the results presented in Table 3 of Ref.~\cite{Martin:2003sk}. Our final results are shown in Fig.~\ref{fig:dbarmubarCLAScorecnum}, again for three lower cut values on $W^*$. Comparing Figs.~\ref{fig:fig2} and \ref{fig:dbarmubarCLAScorecnum}, one can conclude that the TMCs and HT effects overall increase the values in the positive region, while the data points that were in the negative region become more negative. Although the TMCs and HT effects have pushed some negative points into the positive region, several data points still undergo the sign-change. As a last point, note that if one uses the results obtained in Ref.~\cite{Botje:1999dj} for calculating the HT term, similar results are achieved. \begin{figure}[t!] \centering \vspace{0.5 cm} \includegraphics[width=8.0cm]{dbarmubarCLAScorecnum.eps} \caption{The $\bar d(x) - \bar u(x)$ asymmetry considering the TMC and HT corrections. 
} \label{fig:dbarmubarCLAScorecnum} \end{figure} The most important conclusion of our analysis in this section is that the sign-change of $\bar d(x) - \bar u(x)$ occurs at large $x$, as suggested by Peng \textit{et al.}~\cite{Peng:2014uea} in their analysis of the NMC DIS data for $F^p_2 - F^n_2$~\cite{Amaudruz:1991at} and $F_2^d/F_2^p$~\cite{Arneodo:1996qe}, and as also seen in the Drell-Yan data measured in the Fermilab Experiment (E866)~\cite{Hawker:1998ty}. Although this sign-change occurs at $x \sim 0.5$, which is larger than the $ x\sim 0.3 $ of the Drell-Yan data (as shown in Fig.~\ref{fig:fig1}), this seems reasonable because the CLAS data correspond to much smaller values of $Q^2$ than the E866 data. As another noteworthy point, note that in the definition of Eq.~\eqref{eq3} the nuclear effects in the deuteron, quantified by $R_{\rm EMC}^d = F_2^d/(F_2^p + F_2^n)$, have been ignored. Actually, the nuclear corrections in the deuteron structure function are small and are usually neglected in calculations. This fact was checked in the recent study of the EMC effect in the deuteron by Griffioen \textit{et al.}~\cite{Griffioen:2015hxa} through an analysis of the recently published CLAS data at Jefferson Lab~\cite{Tkachenko:2014byy}. Nevertheless, we recalculated $\bar d(x) - \bar u(x)$ considering the nuclear corrections in the deuteron, but only for the last data point, whose related $R_{\rm EMC}^d$ ($=1.07$) is comparatively large, see Ref.~\cite{Griffioen:2015hxa}. We found that it changes the result by 10\%, so that the negativity of the data at large $x$ still remains. \section{$\bar d(x) - \bar u(x)$ from BHPS model}\label{sec:three} In this section, we present the results of our study of $\bar d(x) - \bar u(x)$ on the basis of the BHPS model. 
As already mentioned in the Introduction, since the violation of the Gottfried sum rule by the NMC measurement~\cite{Amaudruz:1991at}, many theoretical studies based on various models have been developed to explain the $\bar d(x) - \bar u(x)$ flavor asymmetry. Similar efforts have also been made in the case of the strange-antistrange asymmetry of the nucleon sea (for instance, see Refs.~\cite{Cao:2003ny,Salajegheh:2015xoa,Vega:2015hti}). In recent years, Chang and Peng~\cite{Chang:2011vx} demonstrated that a good description of the Fermilab E866 data for $\bar d(x) - \bar u(x)$ can also be achieved using the BHPS model~\cite{Brodsky:1980pb} for the intrinsic quark distributions in the nucleon. In the past three decades, intrinsic quarks have been a subject of interest in many studies, including both intrinsic light- and heavy-quark components (see Refs.~\cite{Salajegheh:2015xoa,Brodsky:2015fna} and references therein). According to the BHPS model, which is formulated in the light-cone framework, the existence of the five-quark Fock states $\vert uudq \bar{q} \rangle$ in the proton wave function is natural, and the momentum distributions of the constituent quarks are given by \begin{equation} P(x_1, \cdots ,x_5) = N \frac{\delta \left(1-\sum\limits_{i=1}^5 x_i\right)}{\left(m_p^2- \sum\limits_{i=1}^5 \frac{m_i^2}{x_i}\right)^2}, \label{eq4} \end{equation} where $m_p$ and $m_i$ refer to the masses of the proton and quark $i$, and $x_i$ stands for the momentum fraction carried by quark $i$. It should be noted that in Eq.~\eqref{eq4} the effect of the transverse momentum in the five-quark transition amplitudes is neglected, and the normalization factor $N$ is determined through the following condition \begin{equation} \int dx_1 \cdots dx_5 P(x_1, \cdots ,x_5)\equiv \mathcal{P}^{q\bar{q}}_5, \label{eq5} \end{equation} where $\mathcal{P}^{q \bar{q}}_5$ is the probability to find the $\vert uud q\bar{q}\rangle$ Fock state in the proton. 
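The marginalization of Eq.~\eqref{eq4} over four of the five momentum fractions can be carried out numerically; a Monte Carlo sketch (with illustrative, equal constituent masses rather than any fitted values) is:

```python
import math
import random

random.seed(0)
m_p = 0.938                              # proton mass (GeV)
masses = [0.33] * 5                      # illustrative constituent masses (GeV)

def bhps_weight(xs):
    """Unnormalized weight of Eq. (4); the delta function is enforced
    by sampling on the simplex sum_i x_i = 1."""
    denom = m_p ** 2 - sum(m * m / x for m, x in zip(masses, xs))
    return 1.0 / denom ** 2

def sample_simplex(n=5):
    """Uniform point on the simplex sum_i x_i = 1 (Dirichlet(1,...,1))."""
    e = [-math.log(1.0 - random.random()) for _ in range(n)]
    s = sum(e)
    return [v / s for v in e]

# Weighted histogram of x_5 approximates the normalized qbar distribution
nbins = 20
hist = [0.0] * nbins
for _ in range(200_000):
    xs = sample_simplex()
    hist[min(int(xs[4] * nbins), nbins - 1)] += bhps_weight(xs)

total = sum(hist)
qbar = [h / total * nbins for h in hist]  # density per unit x, integrates to 1
```

The resulting histogram peaks at small $x$ and vanishes as $x \to 1$, as expected for a light intrinsic sea-quark distribution; only the shape is meaningful here, since the overall normalization $\mathcal{P}^{q\bar{q}}_5$ of Eq.~\eqref{eq5} is fixed separately.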
Considering Eq.~\eqref{eq4}, one can integrate over $x_1, x_2, x_3$ and $x_4$ to obtain the $ \bar q $ distribution in the proton. As mentioned in Ref.~\cite{Brodsky:1980pb}, the probability of the five-quark Fock state is proportional to $1/m_q^2$, where $m_q$ is the mass of $q(\bar{q})$ in the Fock state $\vert uudq \bar{q} \rangle$. Although the BHPS model prediction for $\mathcal{P}^{q \bar{q}}_5$ is most reliable when the quarks are heavy, we expect the light five-quark states to have a larger probability than the heavy five-quark states. It is worth noting that the BHPS model was first applied to calculate the intrinsic charm distribution~\cite{Brodsky:1980pb}. However, Chang and Peng~\cite{Chang:2011vx} generalized it to the light five-quark states to calculate their intrinsic distributions in the proton and also to extract their probabilities ($\mathcal{P}^{q \bar{q}}_5$) using available experimental data. Interestingly, they obtained different values for $\mathcal{P}^{d \bar{d}}_5$ and $\mathcal{P}^{u \bar{u}}_5$ and thereby extracted the $\bar d(x) - \bar u(x)$ distribution. This leads us to a new idea: choosing different masses for the down and up quarks in the BHPS formalism. To make this point clearer, note that, on the one hand, $\mathcal{P}^{q\bar{q}}_5$ is proportional to $1/m_q^2$ and, on the other hand, Eq.~\eqref{eq4} depends entirely on the constituent quark masses; these facts inevitably lead to different masses for the up and down quarks. Moreover, from Ref.~\cite{Chang:2011vx}, since $\mathcal{P}^{d \bar{d}}_5 (=0.294) $ is larger than $\mathcal{P}^{u \bar{u}}_5 (=0.176)$, one can conclude that $m_{d, \bar{d}}$ should be smaller than $m_{u,\bar{u}}$. Under this assumption, if one evolves the $\bar d(x) - \bar u(x)$ distribution to the scale of the experimental data~\cite{Hawker:1998ty}, it produces a sign-change at large values of $x$, $x>0.3$. 
To support our claim, we should determine the actual masses of the down and up sea quarks by fitting the available experimental data for $\bar d(x) - \bar u(x)$. To this end, considering the definition of the $\chi^2$-function~\cite{Martin:2009iq} \begin{equation}\label{maral} \chi^2 = \sum _i \frac{(\Delta_i^{\rm data} - \Delta_i^{\rm theory})^2}{(\sigma_i^{\rm data})^2} \,, \end{equation} we minimize it to obtain the optimum values for the up and down quark masses. Here, $\Delta_i^{\rm data}$ denotes the experimental data for $\bar d(x) - \bar u(x)$. In our analysis we use the HERMES~\cite{Ackerstaff:1998sr} and E866~\cite{Hawker:1998ty} data, which are the only available data for this quantity. In Eq.~(\ref{maral}), the theoretical result for the $\bar d(x) - \bar u(x)$ distribution ($\Delta_i^{\rm theory}$) is obtained from the BHPS model, and $\sigma_i^{\rm data}$ is the experimental error combining the statistical and systematic errors as $(\sigma_i^{\rm data})^2 = (\sigma_i^{\rm stat})^2 + (\sigma_i^{\rm syst})^2$. In our calculation of the theoretical result $\Delta_i^{\rm theory}$, the required probabilities of the $\vert uud u\bar{u}\rangle $ and $\vert uud d\bar{d} \rangle $ states in the proton are taken from the recent analysis of Chang and Peng~\cite{Chang:2014lea}, who performed their analysis considering the new HERMES Collaboration measurements~\cite{Airapetian:2013zaw} of $ x(s+\bar s) $. The related values are $\mathcal{P}^{u\bar{u}}_5= 0.229$ and $\mathcal{P}^{d\bar{d}}_5= 0.347$ for $\mu = 0.3$ GeV, and $\mathcal{P}^{u\bar{u}}_5 = 0.178$ and $\mathcal{P}^{d \bar{d}}_5 = 0.296 $ for $ \mu = 0.5 $ GeV, where $\mu$ is the initial scale for the evolution of the non-singlet $\bar d(x) - \bar u(x)$ distribution to the scale of the experimental data. In this analysis, we merely extract the value of $m_{d,\bar d}$ by performing a fit to the experimental data. 
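The minimization of Eq.~\eqref{maral} with a single free parameter can be illustrated with a brute-force scan; the theory curve and data points below are synthetic stand-ins invented for illustration (the real analysis uses the evolved BHPS prediction and the MINUIT minimizer):

```python
def chi2(m_d, data, theory):
    """Eq. (6): chi^2 of a one-parameter theory against (x, value, error) points."""
    return sum((d - theory(x, m_d)) ** 2 / sig ** 2 for x, d, sig in data)

# Hypothetical stand-in for the evolved BHPS prediction of dbar(x) - ubar(x):
# the asymmetry grows as m_d decreases below an assumed 0.35 GeV reference.
def toy_theory(x, m_d):
    return (0.35 - m_d) * (1.0 - x) ** 4 / x ** 0.3

# Synthetic (x, dbar-ubar, error) points, generated for illustration only
synthetic_data = [(0.05, 0.14, 0.02), (0.10, 0.09, 0.02), (0.20, 0.05, 0.02)]

# Brute-force scan over m_d; MINUIT performs this minimization more efficiently
grid = [0.15 + 0.001 * i for i in range(200)]
best_m_d = min(grid, key=lambda m: chi2(m, synthetic_data, toy_theory))
print(f"best-fit m_d on this toy setup: {best_m_d:.3f} GeV")
```

The scan recovers the mass that generated the synthetic points; in the actual fit, the gradient-based MIGRAD routine of {\tt MINUIT} replaces the grid.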
In fact, it is not necessary to extract $m_{u, \bar u}$ from the data analysis, because one can determine this quantity using the following equation \begin{equation}\label{eq7} m_{u,\bar u} = \frac{m_p - m_{d, \bar d}}{2}. \end{equation} The equation above follows from the fact that the proton consists of two up quarks and one down quark in the ground state.\\ To minimize the $\chi^2$-function (\ref{maral}), we employ the CERN program {\tt MINUIT}~\cite{James:1975dr} and perform our analysis in the LO and next-to-leading order (NLO) approximations. For both LO and NLO, our results are evolved from the initial scales $\mu=0.3$ GeV and $\mu=0.5$ GeV to the experimental data scales (Q$^2 = 54$ GeV$^2$ for the E866 data and Q$^2=2.5$ GeV$^2$ for the HERMES data). In Table \ref{tab1}, our results for $ m_{d, \bar d}$, along with the corresponding $\chi^2/{\rm d.o.f}$ values, are presented for four scenarios, depending on the order of perturbative QCD and the initial scale applied. \begin{table}[h!] \caption{The optimum values for the $d$-quark mass (in GeV) along with the corresponding $\chi^2/{\rm d.o.f}$ values.} \centering \begin{tabular}{ l c c } \hline Approach & $\chi^2/{\rm d.o.f}$ & $m_{d,\bar d}$ \\ \hline \hline LO ($\mu=0.3$) & 6.3145 & \qquad 0.2020 $\pm$ 7.3357 $\times10^{-5}$ \\ NLO ($\mu=0.3)$ & 1.0682 & \qquad 0.2779 $\pm$ 4.7401 $\times10^{-3}$ \\ LO ($\mu=0.5$) & 11.2947 & \qquad 0.2020 $\pm$ 5.1204 $\times10^{-5}$ \\ NLO ($\mu=0.5)$ & 4.4402 & \qquad 0.2020 $\pm$ 8.3806 $\times10^{-5}$ \\ \hline \end{tabular} \label{tab1} \end{table} According to Table~\ref{tab1} and Eq.~(\ref{eq7}), the possible values of $m_{d,\bar d}$ are smaller than $m_{u,\bar u}$ in all scenarios applied. As can be seen from Table~\ref{tab1}, the value of $\chi^2/{\rm d.o.f}$ for the NLO approach with the initial scale $\mu=0.3$~GeV is better than for the other approaches. 
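Eq.~\eqref{eq7}, combined with the fitted $m_{d,\bar d}$ values of Table~\ref{tab1}, fixes the up-quark mass; a quick check reproduces the quoted numbers:

```python
# Eq. (7): with two up quarks and one down quark sharing the proton mass
# in the ground state, m_p = 2 m_u + m_d, hence m_u = (m_p - m_d) / 2.
m_p = 0.938  # proton mass in GeV

# Fitted down-quark masses from Table 1 (GeV)
m_u = {m_d: (m_p - m_d) / 2.0 for m_d in (0.2779, 0.2020)}
for m_d, mu in m_u.items():
    print(f"m_d = {m_d:.4f} GeV  ->  m_u = {mu:.3f} GeV")
```

This yields $m_{u,\bar u} \simeq 0.330$ GeV for $m_{d,\bar d} = 0.2779$ GeV and $m_{u,\bar u} = 0.368$ GeV for $m_{d,\bar d} = 0.2020$ GeV.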
Another interesting point, shown in Table~\ref{tab1}, is that the values obtained for $m_{d,\bar d}$ are the same for the scenarios LO ($\mu=0.3$ GeV), LO ($\mu=0.5$ GeV) and NLO ($\mu=0.5$ GeV). Considering Table~\ref{tab1} and Eq.~(\ref{eq7}), the corresponding up-quark mass is $m_{u, \bar u}=0.330$~GeV for the second scenario, where $\mu=0.3$~GeV is considered at NLO, and $m_{u,\bar u}=0.368$~GeV for the other three scenarios. We have provided a code that gives the $\bar{d}$ and $\bar{u}$ intrinsic quark distributions in the proton for any arbitrary down-quark mass and momentum fraction $x$ (see Appendix). Now, we can recalculate the BHPS model prediction for the $\bar d(x) - \bar u(x)$ distribution using the new masses extracted for the up and down sea quarks. Because the minimum value of $\chi^2/{\rm d.o.f}$ appears in the NLO scenario for $\mu=0.3$ GeV, we expect this scenario to lead to the best consistency with the experimental data. Fig.~\ref{fig:fig3} shows a comparison between the experimental data and the results obtained for $\bar d(x) - \bar u(x)$ in the four scenarios, using the BHPS model with the masses listed in Table~\ref{tab1}. These results show that our assumption is correct, so choosing a smaller mass for the down quark is justified. \begin{figure}[t!] \centering \vspace{0.5 cm} \includegraphics[width=8cm]{fig3.eps} \caption{ A comparison between the experimental data from the HERMES~\cite{Ackerstaff:1998sr} and E866~\cite{Hawker:1998ty} collaborations and the theoretical results obtained for $\bar d(x) - \bar u(x)$ in the four scenarios, using the BHPS model with the masses listed in Table~\ref{tab1}. } \label{fig:fig3} \end{figure} \begin{figure}[b!] \centering \vspace{0.5 cm} \includegraphics[width=8cm]{Plotdoveru.eps} \caption{ $\bar{d}(x)/\bar{u}(x)$ versus $x$, obtained in the four scenarios using the BHPS model with the masses listed in Table~\ref{tab1}. 
} \label{fig:Plotdoveru} \end{figure} Another interesting finding of our analysis is that the evolved distributions exhibit a sign-change at large values of $x$. The difference between $\bar d(x)$ and $\bar u(x)$ observed in this study at large values of $x$ is not visually significant in Fig.~\ref{fig:fig3}. Therefore, to exhibit this sign-change, we have plotted the ratio $\bar{d}(x)/\bar{u}(x)$ as a function of $x$ for the four analyzed scenarios. According to Fig.~\ref{fig:Plotdoveru}, at $x \gtrsim 0.33$ and for all approaches, the ratio $\bar{d}(x)/\bar{u}(x)$ is smaller than 1. From Fig.~\ref{fig:Plotdoveru} one can also conclude that, for the NLO scenario with $\mu = 0.3$ GeV, the corresponding curve falls faster than the others. The sign-change presented in this study has a number of important implications for future practice, and hence any future studies of $\bar d(x) - \bar u(x)$ using new and up-to-date experimental setups are most welcome. \section{Summary and Conclusion}\label{sec:four} The experimental data taken from the Drell-Yan experiment of the FNAL E866/NuSea collaboration~\cite{Hawker:1998ty} can be recognized as the cleanest evidence for the violation of the Gottfried sum rule and the existence of the $\bar d(x) - \bar u(x)$ flavor asymmetry in the nucleon sea. Furthermore, these data suggest a sign-change of $\bar d(x) - \bar u(x)$ at $x \sim 0.3$. Recently, by analyzing the DIS data, Peng \textit{et al.}~\cite{Peng:2014uea} have presented independent evidence for the $\bar d(x) - \bar u(x)$ sign-change at $x \sim 0.3$. They showed that, in addition to the Drell-Yan data, the analysis of the NMC DIS data for $F^p_2 - F^n_2$~\cite{Amaudruz:1991at} and $F_2^d/F_2^p$~\cite{Arneodo:1996qe} can also lead to a negative value of $\bar d(x) - \bar u(x)$ at $x \gtrsim 0.3$. 
They have also discussed the significance of this sign-change and the fact that none of the current theoretical models can predict this effect. Following their studies, we have investigated this behavior in the DIS data analysis from other experiments. Then, we have tried to find the $x$-position at which the sign-change of $\bar d(x) - \bar u(x)$ occurs. We have then estimated the magnitude of the negative area of the $\bar d(x) - \bar u(x)$ distribution. We have also enriched our formalism by considering the nonperturbative TMCs and HT terms. As a result, we found that, using the updated CLAS collaboration data for the structure function ratio $F_2^n/F_2^p$~\cite{Tkachenko:2014byy}, the extracted $\bar d(x) - \bar u(x)$ undergoes a sign-change and becomes negative at large values of $x$, as suggested by the Drell-Yan E866 data. We have then used the BHPS model~\cite{Brodsky:1980pb} to calculate the $\bar d(x) - \bar u(x)$ distribution. According to the BHPS prediction, we assumed that the probability of the Fock state $\vert uudq\bar{q}\rangle$ in the proton wave function is proportional to $1/m_q^2$, where $m_q$ is the mass of $q(\bar{q})$ in the five-quark Fock state. Under this assumption, the $d (\bar d)$ quark has a smaller mass than the $u (\bar u)$ quark in the proton. To test this, we obtained the masses of the down and up sea quarks by fitting the available experimental data. We considered the $\chi^2$-function and minimized it to obtain the optimum down and up sea quark masses. Our calculations have been done in four scenarios: leading- and next-to-leading order approximations with two different initial scales, $\mu = 0.3$ GeV and $\mu = 0.5$ GeV. The results of our data analysis confirm the correctness of our assumption. In summary, the present results are significant in at least two major respects. 
First, we have found that the $\bar d(x) - \bar u(x)$ distribution with the newly extracted masses is in good agreement with the available and up-to-date experimental data. In addition, unlike the previously performed theoretical studies~\cite{Kumano:1997cy,Garvey:2001yq,Peng:2014hta}, our results show a sign-change in the $\bar d(x) - \bar u(x)$ distribution. The latter is the most significant finding emerging from this study. Any further information, both from theory and from experimental observables, on the $\bar d(x) - \bar u(x)$ asymmetry would help us to establish a greater degree of accuracy on this matter. These are important issues for future research, and further studies with more focus on the $\bar d(x) - \bar u(x)$ asymmetry are therefore suggested. \section*{Acknowledgments} The authors are thankful to the School of Particles and Accelerators, Institute for Research in Fundamental Sciences (IPM), for the financial support of this project. Hamzeh Khanpour also gratefully acknowledges the University of Science and Technology of Mazandaran for the financial support provided for this research.
\section{Introduction} This paper aims to investigate the possible effects of cognitive biases on human understanding of machine learning models, in particular inductively learned rules. We use the term ``cognitive bias'' as a representative for various cognitive phenomena that materialize themselves in the form of occasionally irrational reasoning patterns, which are thought to allow humans to make fast judgments and decisions. Their cumulative effect on human reasoning should not be underestimated, as ``cognitive biases seem reliable, systematic, and difficult to eliminate'' \citep{kahneman1972subjective}. The effect of some cognitive biases is more pronounced when people do not have well-articulated preferences \citep{tversky1993context}, which is often the case in explorative machine learning. Previous works have analysed the impact of cognitive biases on multiple types of human behavior and decision making. A specific example is the seminal book ``Social cognition'' by \citet{kunda1999social}, which is concerned with the impact of cognitive biases on social interaction. Another, more recent work by \citet{serfas2011cognitive} focused on the context of capital investment. Closer to the domain of machine learning, in their article entitled ``Psychology of Prediction'', \citet{kahneman1973psychology} warned that cognitive biases can lead to violations of the Bayes theorem when people make fact-based predictions under uncertainty. These results directly relate to inductively learned rules, since these are associated with measures such as confidence and support expressing the (un)certainty of the prediction they make. Despite some early works \citep{michalski1969quasi,michalski1983theory} showing the importance of studying cognitive phenomena for rule induction and machine learning in general, there has been a paucity of follow-up research. 
In previous work \citep{CognitiveBiases-Rules}, we have evaluated a selection of cognitive biases in the very specific context of whether minimizing the complexity or length of a rule will also lead to increased interpretability, which is often taken for granted in machine learning research. In this paper, we attempt to systematically relate cognitive biases to the interpretation of machine learning results. To that end, we review twenty cognitive biases that can distort interpretation of inductively learned rules. The review is intended to help to answer questions such as: \emph{How do cognitive biases affect human understanding of symbolic machine learning models? What could help as a ``debiasing antidote''?} This paper is organized as follows. Section~\ref{sec:background} provides a brief review of related work published at the intersection of rule learning and psychology. Section~\ref{sec:example} motivates our study on the example of the insensitivity to sample size effect. Section~\ref{sec:criteria} describes the criteria that we applied to select a subset of cognitive biases into our review, which eventually resulted in twenty biases. These biases and their disparate effects and causes are covered in detail in Section~\ref{sec:review}. Section~\ref{sec:recommendations} provides a concise set of recommendations aimed at developers of rule learning algorithms and user interfaces. In Section~\ref{sec:limitations} we state the limitations of our review and outline directions for future work. The conclusions summarize the contributions of the paper. \section{Background and Related Work} \label{sec:background} We selected individual rules as learnt by many machine learning algorithms as the object of our study. Focusing on simple artefacts---individual rules---as opposed to entire models such as rule sets or rule lists allows a deeper, more focused analysis since a rule is a small self-contained item of knowledge. 
Making a small change in one rule, such as adding a new condition, allows testing the effect of an individual factor. In this section, we first motivate our work by putting it into the context of prior research on related topics. Then, we proceed with a brief introduction to inductive rule learning (Section~\ref{ss:decrule}) and a brief recapitulation of previous work in cognitive science on the subject of decision rules (Section~\ref{ss:rules-cs}). Finally, we introduce cognitive biases (Section~\ref{ss:cbandib}) and rule plausibility (Section~\ref{ss:measures}), which is a measure of rule comprehension. \subsection{Motivation} \label{sec:motivation} In the following three paragraphs, we discuss our motivation for this review, and summarize why we think this work is relevant to the larger artificial intelligence community. \paragraph{Explaining ``black box'' models with rules} While neural networks and ensembles of decision trees are increasingly becoming the prevalent type of representation used in machine learning, it might at first be surprising that our review focuses almost exclusively on decision rules. The reason is that rules are widely used as a means for communicating explanations of a variety of machine learning approaches, since many types of models can be converted to rules, or rules can be extracted from them \citep{ADT95}. Decision trees can be represented as a rule model by generating one rule for each path from the root of the tree to a leaf. \citet{guidotti2018survey} provides a review of research on using rules for explaining some ``black-box models'', including neural networks, support vector machines and tree ensembles. \paragraph{Embedding cognitive biases into learning algorithms} The applications of cognitive biases go beyond explaining existing machine learning models. 
For example, \citet{taniguchi2018machine} demonstrate how a cognitive bias can be embedded into a machine learning algorithm, achieving superior performance on small datasets compared to commonly used machine learning algorithms with ``generic" inductive bias. \paragraph{Paucity of research on cognitive biases in artificial intelligence} Several recent position and review papers on explainability in Artificial Intelligence (xAI) recognize that cognitive biases play an important role in explainability research \citep{miller2018explanation,paez2019pragmatic}. To our knowledge, the only systematic treatment of psychological phenomena applicable to machine learning is provided by the review of \citet{miller2018explanation}, which focuses on reasons and thought processes that people apply during explanation selection, such as causality, abnormality and the use of counterfactuals. This authoritative review observes that there are currently no studies that look at cognitive biases in the context of selecting explanations. Because of the paucity of applicable research focusing on machine learning, the review of \citet{miller2018explanation} --- same as the present paper --- takes the first step of applying influential psychological studies to explanation in the xAI context without accompanying experimental validation specific to machine learning. While \citet{miller2018explanation} summarizes main reasoning processes that drive generation and understanding of explanations, our review focuses specifically on cognitive biases as psychological phenomena that can distort interpretation of machine learning models, if not properly accounted for. 
\subsection{Decision Rules in Machine Learning} \label{ss:decrule} While neural networks and ensembles of decision trees are increasingly becoming the prevalent type of representation used in machine learning, our review focuses almost solely on decision rules because many types of models can be converted to rules, or rules can be extracted from them, and rules are therefore widely used as a means for communicating explanations of a variety of machine learning approaches. \begin{figure}[t] \begin{Verbatim}[frame=single] IF A AND B THEN C confidence=c and support=s IF veil is white AND odor is foul THEN mushroom is poisonous confidence = 0.9 \end{Verbatim} \caption{Inductively learned rule} \label{fig:rule} \end{figure} An example of an inductively learned decision rule, which is the subject of the presented review, is shown in Figure~\ref{fig:rule}. Following the terminology of \citet{jf:Book-Nada}, $A, B, C$ represent \emph{literals}, i.e., Boolean expressions which are composed of an attribute name (e.g., \texttt{veil}) and its value (e.g., \texttt{white}). The conjunction of literals on the left side of the rule is called \emph{antecedent} or \emph{rule body}; the single literal predicted by the rule is called \emph{consequent} or \emph{rule head}. Literals in the body are sometimes referred to as \emph{conditions} throughout the text, and the consequent as the \emph{target}. While this rule definition is restricted to conjunctive rules, other definitions, e.g., the formal definition given by \cite{slowinski2006application}, also allow for negation and disjunction as connectives. Rules output by rule learning algorithms are most commonly characterized by two parameters, confidence and support. 
The \emph{confidence} of a rule---sometimes also referred to as \emph{precision}---is defined as $\mathit{a}/(\mathit{a}+\mathit{b})$, where $\mathit{a}$ is the number of objects that match both the conditions of the rule as well as the consequent, and $\mathit{b}$ is the number of objects that match the antecedent but not the consequent. The \emph{support} of a rule is either defined as $\mathit{a}/N$, where $N$ is the number of all objects (relative support), or simply as $a$ (absolute support). A related measure is \emph{coverage}, which is the total number of objects that satisfy the body of the rule ($a+b$). In the special case of learning rules for the purpose of building a classifier, the consequent of a rule consists only of a single literal, the so-called \emph{class}. In this case, $a$ is also known as the number of \emph{true positives}, and $b$ as the number of \emph{false positives}. Some rule learning frameworks, in particular association rule learning \cite{APriori,AssociationRules-Book}, require the user to set thresholds for minimum confidence and support. Only rules with confidence and support values meeting or exceeding these thresholds are included in the output of rule learning and presented to the user. \subsection{Decision Rules in Cognitive Science} \label{ss:rules-cs} Rules are used in commonly embraced models of human reasoning in cognitive science \citep{smith1992case,nisbett1993rules,pinker2015words}. They also closely relate to Bayesian inference, which frequently occurs in models of human reasoning. Consider the first rule of Figure~\ref{fig:rule}. This rule can be interpreted as a hypothesis corresponding to the logical implication $A \land B \Rightarrow C$. We can express the plausibility of such a hypothesis in terms of Bayesian inference as the conditional probability $\Pr(C \vert A,B)$. 
This corresponds to the confidence of the rule, as used in machine learning and as defined above, and to the \emph{strength of evidence}, a term used by cognitive scientists \citep{Tversky27091974}. Given that $\Pr(C \vert A,B)$ is a probability estimate computed on a sample, another relevant piece of information for determining the plausibility of the hypothesis is the robustness of this estimate. This corresponds to the number of instances for which the rule has been observed to be true. The size of the sample (typically expressed as a ratio) is known as rule support in machine learning and as the \emph{weight of the evidence} in cognitive science \citep{Tversky27091974}.\footnote{ Interestingly, balancing the likelihood of the judgment and the weight of the evidence in the assessed likelihood was already studied by \citet{keynes1922treatise} (according to \citet{Camerer1992}).} Psychological research on hypothesis testing in rule discovery tasks has been performed in cognitive science at least since the 1960s. The seminal article by \citet{wason1960failure} introduced what is widely referred to as \emph{Wason's 2-4-6} task. Participants are given the sequence of numbers 2, 4 and 6 and asked to find out the rule that generated this sequence. In search for the hypothesized rule, they provide the experimenter with other sequences of numbers, such as 3-5-7, and the experimenter answers whether the provided sequence conforms to the rule or not. While the target rule is the simple ``ascending sequence'', people find it difficult to discover this specific rule, presumably because they use the \emph{positive test strategy}, a strategy of testing a hypothesis by examining evidence confirming the hypothesis at hand rather than searching for disconfirming evidence \citep{klayman1987confirmation}. 
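The rule quality measures defined above (confidence, support, and coverage) and the threshold-based filtering used in association rule learning can be sketched in a few lines of code. The following is a minimal illustration; the counts, the total number of objects, and the threshold values are hypothetical and chosen purely for demonstration:

```python
# Minimal sketch of the rule quality measures defined above.
# a = objects matching both antecedent and consequent (true positives),
# b = objects matching the antecedent but not the consequent (false positives),
# n = total number of objects in the data set.

def rule_stats(a, b, n):
    """Return (confidence, relative support, coverage) of a rule."""
    confidence = a / (a + b)   # strength of evidence
    support = a / n            # weight of evidence (absolute support is simply a)
    coverage = (a + b) / n     # fraction of objects satisfying the rule body
    return confidence, support, coverage

def passes_thresholds(a, b, n, min_conf, min_supp):
    """Association-rule style filtering on minimum confidence and support."""
    confidence, support, _ = rule_stats(a, b, n)
    return confidence >= min_conf and support >= min_supp

# Hypothetical rule: 100 true positives, 25 false positives, 1000 objects.
confidence, support, coverage = rule_stats(100, 25, 1000)
print(confidence, support, coverage)  # -> 0.8 0.1 0.125

# A rule with 90% confidence but only 9 covered objects fails a 5% support threshold.
print(passes_thresholds(100, 25, 1000, min_conf=0.7, min_supp=0.05))  # -> True
print(passes_thresholds(9, 1, 1000, min_conf=0.7, min_supp=0.05))     # -> False
```

Note how the second filtered rule mirrors the trade-off discussed above: high strength of evidence (confidence 0.9) but low weight of evidence (support 0.009).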
\subsection{Cognitive Bias} \label{ss:cbandib} According to the Encyclopedia of Human Behavior \citep{Wilke2012531}, the term cognitive bias was introduced in the 1970s by Amos Tversky and Daniel Kahneman \citep{Tversky27091974}, and is defined as a \begin{quote} ``systematic error in judgment and decision-making common to all human beings which can be due to cognitive limitations, motivational factors, and/or adaptations to natural environments.'' \end{quote} The narrow initial definition of cognitive bias as a shortcoming of human judgment was criticized by the German psychologist Gerd Gigerenzer, who in the late 1990s started the ``Fast and frugal heuristics'' program to emphasize the ecological rationality (validity) of judgmental heuristics \cite{gigerenzer1999fastandfrugal}. According to this research program, cognitive biases often result from an application of a heuristic in an environment for which it is not suited, rather than from problems with the heuristics themselves, which work well in their usual contexts. In the present review, we define cognitive biases and associated phenomena broadly. We include cognitive biases related to thinking, judgment, and memory. We also include descriptions of thinking strategies and judgmental heuristics that may result in cognitive biases, even if they are not necessarily biases themselves. \paragraph{Debiasing} An important aspect related to the study of cognitive biases is the validation of strategies for mitigating their effects in cases when they lead to incorrect judgment. A number of such \emph{debiasing} techniques have been developed, with researchers focusing intensely on the clinical and judicial domains (cf. e.g. \citep{lau2009can,croskerry2013cognitive,martire2014interpretation}), apparently due to the costs associated with erroneous judgment in these domains. Nevertheless, general debiasing techniques can often be derived from such studies. 
The choice of an appropriate debiasing technique typically depends on the type of error induced by the bias, since this implies an appropriate debiasing strategy \citep{arkes1991costs}. \citet{larrick2004debiasing} recognizes the following three categories: psychophysically-based error, association-based error, and strategy-based error. The first two are attributable to unconscious, automatic processes, sometimes referred to as ``System~1''. The last one is attributed to reasoning processes (System~2) \cite{evans2013dual}. For biases attributable to System~1, the most generic debiasing strategy is to shift processing to the conscious System~2 \citep{lilienfeld2009giving}, \citep[p. 491]{shafir2013behavioral}. \enlargethispage*{12pt} Another perspective on debiasing is provided by \citet{croskerry2013cognitive}, who organize debiasing techniques by their way of functioning, rather than by the bias they address, into the following three categories: educational strategies, workplace strategies and forcing functions. While \citet{croskerry2013cognitive} focused on clinicians, our review of debiasing aims to be used as a starting point for analogous guidelines for an audience of machine learning practitioners. For example, the general workplace strategies applicable in the machine learning context include group decision making, personal accountability, and planning time-out sessions to help slowing down. \paragraph{Function and validity of cognitive biases} \label{sec:functions} In the introduction, we briefly characterized cognitive biases as seemingly irrational reasoning patterns that are thought to allow humans to make fast and risk-averse decisions. In fact, the function of cognitive biases is a subject of scientific debate. According to the review of functional views by \citet{pohl2017cognitive}, there are three fundamental positions among researchers. 
The first group considers them as dysfunctional errors of the system, the second group as faulty by-products of otherwise functional processes, and the third group as adaptive and thus functional responses. According to \citet{pohl2017cognitive}, most researchers are in the second group, where cognitive biases are considered to be ``built-in errors of the human information-processing systems''. In this work, we consider cognitive biases as strategies that evolved to improve the fitness and chances of survival of the individual in particular situations or are consequences of such strategies. This defense of biases is succinctly expressed by \citet{haselton2006paranoid}: ``Both the content and direction of biases can be predicted theoretically and explained by optimality when viewed through the long lens of evolutionary theory. Thus, the human mind shows good design, although it is design for fitness maximization, not truth preservation.'' According to the same paper, empirical evidence shows that cognitive biases are triggered or strengthened by environmental cues and context \citep{haselton2006paranoid}. Given that the interpretation of machine learning results is a task unlike the simple automatic cognitive processes to which a human mind is adapted, cognitive biases are likely to have an influence upon it. \subsection{Measures of Interpretability, Perceived and Objective Plausibility} \label{ss:measures} We claim that cognitive biases can affect the interpretation of rule-based models. However, how does one measure interpretability? According to our literature review, there is no generally accepted measure of interpretability of machine learning models. Model size, which was used in several studies, has recently been criticized \citep{freitas2014comprehensible,DBLP:conf/dis/StecherJF16,CognitiveBiases-Rules} primarily on the grounds that the model's syntactic size does not capture any aspect of the model's semantics. 
A particular problem related to semantics is the compliance to pre-existing expert knowledge, such as domain-specific monotonicity constraints. In our work, we embrace the concept of \emph{plausibility} to measure interpretability \cite{furnkranz2018cognitive}. The word `plausible' is defined according to the Oxford Dictionary of US English as ``seeming reasonable or probable'' and according to the Cambridge Dictionary of UK English as ``seeming likely to be true, or able to be believed''. We can link the inductively learned rule to the concept of ``hypothesis'' used in cognitive science. There is a body of work in cognitive science on analyzing the perceived plausibility of hypotheses \citep{gettys1978hypothesis,gettys1986plausibility,anderson2016analytical}. In a recent review of interpretability definitions by \citet{bibal2016interpretability}, the term plausibility is not explicitly covered, but the closely related concept of \emph{justifiability} is stated to depend on interpretability. \citet{martens2011performance} define justifiability as ``intuitively correct and in accordance with domain knowledge''. By adopting plausibility, we address the concern expressed by \citet{freitas2014comprehensible} regarding the need to reflect domain semantics when interpretability is measured. We are aware of the fact that if a decision maker finds a rule plausible, it does not necessarily mean that the rule is correctly understood; it can be quite the contrary in many cases. Nevertheless, we believe that the \emph{alignment of the perceived plausibility with the objective, data-driven plausibility of a hypothesis} should be at the heart of an effort that strives for interpretable machine learning. \section{Motivational Example} \label{sec:example} It is well known in machine learning that chance rules with a deceptively high confidence can appear in the output of rule learning algorithms \citep{azevedo2007comparing}. 
For this reason, the rule learning process typically outputs both confidence and support for the analyst to make an informed choice about the merits of each rule. \mybox{ \noindent\textbf{Example.} \begin{itemize} \item \texttt{IF a film is released in 2006 AND the language of the film is English THEN Rating is good, \\ confidence = 80\%, support = 10\%}. \item \texttt{IF a film is released in 2006 AND the director was John Smith THEN Rating is good, \\ confidence = 90\%, support = 1\%}. \end{itemize} } In the example above, both rules are associated with values of confidence and support to inform about the strength and weight of evidence for both rules. While the first rule is less strong (80\% vs 90\% correct), its weight of evidence is ten times higher than that of the second rule. According to the \emph{insensitivity to sample size effect} \citep{Tversky27091974}, there is a systematic bias in human thinking that makes humans overweigh the strength of evidence (confidence) and underweigh the weight of evidence (support). The bias has also been shown in psychologists knowledgeable in statistics \citep{tversky1971belief} and thus is likely to be applicable to the widening number of professions that use rule learning to obtain insights from data. The analysis of relevant literature from cognitive science not only reveals applicable biases, but also sometimes provides methods for limiting their effect (debiasing). The standard way used in rule learning software for displaying rule confidence and support metrics is to use percentages, as in our example. Extensive research in psychology has shown that if frequencies are used instead, the number of errors in judgment drops \citep{gigerenzer1996reasoning,gigerenzer1995improve}. Reflecting these suggestions, the first rule in our example could be presented as follows: \mybox{ \noindent\textbf{Example.} \begin{itemize} \item \texttt{IF a film is released in 2006 AND the language of the film is English THEN Rating is good. 
\\ In our data, there are 100 movies which match the conditions of this rule. Out of these, 80 are predicted correctly as having good rating.} \end{itemize} } Rules can be presented in different ways (as shown), and depending on the way the information is presented, humans may perceive their plausibility differently. In this particular example, confidence is no longer conveyed as a percentage (``80\%'') but using the expression ``80 out of 100''. Support is presented as an absolute number (100) rather than as a percentage (10\%). A correct understanding of machine learning models can be difficult even for experts. In this section, we tried to motivate why addressing cognitive biases can play an important role in making the results of inductive rule learning more understandable. In the remainder of this paper, the bias applied to our example will be revisited in greater depth, along with 19 other biases. \section{Selection Criteria} \label{sec:criteria} A number of cognitive biases have been discovered, experimentally studied, and extensively described in the literature. As \citet{pohl2017cognitive} states in a recent authoritative book on cognitive illusions: ``There is a plethora of phenomena showing that we deviate in our thinking, judgment and memory from some objective and arguably correct standard.'' We first selected a subset of biases to be reviewed. 
To select applicable biases, we considered those that have some relation to the following properties of inductively learned rules: \begin{inparaenum} \item rule length (the number of literals in an antecedent), \item rule interest measures (especially support and confidence), \item position (ordering) of conditions in a rule and ordering of rules in the rule list, \item specificity and predictive power of conditions (correlation with a target variable), \item use of additional logical connectives (conjunction, disjunction, negation), \item treatment of missing information (inclusion of conditions referring to missing values), and \item conflict between rules in the rule list. \end{inparaenum} Through selection of appropriate learning heuristics, the rule learning algorithm can influence these properties. For example, most heuristics implement some form of a trade-off between the coverage or support of a rule, and its implication strength or confidence \cite{furnkranz2005roc,jf:Book-Nada}. \section{Review of Cognitive Biases} \label{sec:review} In this section, we cover a selection of twenty cognitive biases. For all of them, we include a short description including an example of a study demonstrating the bias and its proposed explanation. We pay particular attention to their potential effect on the interpretability of rule learning results, which has not been covered in previous works. For all cognitive biases we also suggest a debiasing technique that could be effective in aligning the perceived plausibility of the rule with its objective plausibility. In a recent scientometric survey of research on cognitive biases in information systems \citep{DBLP:conf/ecis/FleischmannABH14}, no papers are mentioned that aim at machine learning. 
For general information systems research, the authors claim that ``most articles' research goal [is] to provide an explanation of the cognitive bias phenomenon rather than to develop ways and strategies for its avoidance or targeted use". In contrast, our review aims at advancement of the field beyond explanation of applicable phenomena, by discussing specific debiasing techniques. An overview of the main features of the reviewed cognitive biases is presented in Table~\ref{tbl:bias-overview}. Note that the debiasing techniques that we describe have only limited grounding in applied psychological research and require further validation, since as \citet{lilienfeld2009giving} observe, there is a general paucity of research on debiasing in psychological literature, and the existing techniques suffer from a lack of theoretical coherence and a mixed research evidence concerning their efficacy. \begin{sidewaystable} \small \begin{center} \begin{tabular}{p{4cm}p{8cm}p{8cm}} \toprule phenomenon & implications for rule-learning & debiasing technique \\ \midrule Representativeness Heuristic & Overestimate the probability of condition representative of consequent& Use natural frequencies instead of ratios or probabilities\\ Averaging Heuristic & Probability of antecedent as the average of probabilities of conditions & Reminder of probability theory \\ Disjunction Fallacy & Prefer more specific conditions over less specific & Inform on taxonomical relation between conditions; explain benefits of higher support\\ Base-rate Neglect & Emphasis on confidence, neglect for support & Express confidence and support in natural frequencies \\ Insensitivity to Sample Size & Analyst does not realize the increased reliability of confidence estimate with increasing value of support & Present support as absolute number rather than percentage; use support to compute confidence (reliability) intervals for the value of confidence\\ Availability Heuristic & Ease of recollection of instances 
matching the rule & Explain to analyst why instances matching the particular rule are (not) easily recalled \\ Reiteration Effect & Presentation of redundant rules or conditions increases plausibility & rule pruning; clustering; explaining overlap \\ Confirmation Bias & Rules confirming analyst's prior hypothesis are ``cherry picked'' & Explicit guidance to consider evidence for and against hypothesis; education about the bias; interfaces making users slow down \\ Mere Exposure Effect & Repeated exposure (even subconscious) results in increased preference & Changes to user interfaces that limit subliminal presentation of rules\\ Overconfidence and underconfidence & Rules with small support and high confidence are ``overrated'' & Present less information when not relevant via pruning, feature selection, limiting rule length; actively present conflicting rules/knowledge. \\ Recognition Heuristic & Recognition of attribute or its value increases preference & More time; knowledge of attribute/value \\ Information Bias & belief that more information (rules, conditions) will improve decision making even if it is irrelevant & Communicate attribute importance \\ Ambiguity Aversion & Prefer rules without unknown conditions & Increase user motivation; instruct users to provide textual justifications \\ Confusion of the Inverse & Confusing the difference between the confidence of the rule $\Pr(\textrm{consequent} \vert \textrm{antecedent})$ with $\Pr(\textrm{antecedent} \vert \textrm{consequent})$ & Training in probability theory; unambiguous wording\\ Misunderstanding of ``and'' & ``and'' is understood as disjunction & Unambiguous wording; visual representation \\ Context and Tradeoff Contrast & Preference for a rule is influenced by other rules & Removal of rules, especially of those that are strong, yet irrelevant\\ Negativity Bias & Words with negative valence in the rule make it appear more important & Review words with negative valence in data, and possibly replace with 
neutral alternatives \\ Primacy Effect & Information presented first has the highest impact & Education on the bias; resorting; rule annotation \\ Weak Evidence Effect & Condition only weakly perceived as predictive of target decreases plausibility & Numerical expression of strength of evidence; omission of weak predictors (conditions) \\ Unit Bias & Conditions are perceived to have same importance & Inform on discriminatory power of conditions\\ \bottomrule \end{tabular} \end{center} \caption{Summary of analysis of cognitive biases.} \label{tbl:bias-overview} \end{sidewaystable} \subsection{Conjunction Fallacy and Representativeness Heuristic} \label{ss:reprh} The conjunction fallacy refers to a judgment that is inconsistent with \emph{the conjunction rule}---the probability of a conjunction, $\Pr(A, B)$, cannot exceed the probability of its constituents, $\Pr(A)$ and $\Pr(B)$. It is often illustrated with the ``Linda'' problem in the literature \citep{tversky1983extensional}. In the Linda problem, depicted in Figure~\ref{fig:Linda}, subjects are asked to compare the conditional probabilities $\Pr(F,B \vert L)$ and $\Pr(B \vert L)$, where $B$ refers to ``bank teller'', $F$ to ``active in feminist movement'' and $L$ to the description of Linda \citep{bar1991commentary}. \begin{figure}[ht!] \begin{Verbatim}[frame=single] Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which is more probable? (a) Linda is a bank teller. (b) Linda is a bank teller and is active in the feminist movement. \end{Verbatim} \caption{Linda problem} \label{fig:Linda} \end{figure} Multiple studies have shown that people tend to consistently select the second hypothesis as more probable, which is in conflict with the conjunction rule.
In other words, it always holds for the Linda problem that $$\Pr(F,B \vert L) \leq \Pr(B \vert L).$$ Preference for the alternative $F \wedge B$ (option (b) in Figure~\ref{fig:Linda}) is thus always a logical fallacy. For example, \citet{tversky1983extensional} report that 85\% of their subjects indicated (b) as the more probable option for the Linda problem. The conjunction fallacy has been shown across multiple settings (hypothetical scenarios, real-life domains), as well as for various kinds of subjects (university students, children, experts, as well as statistically sophisticated individuals) \citep{tentori2012conjunction}. The conjunction fallacy is often explained by use of the representativeness heuristic \citep{kahneman1972subjective}. The representativeness heuristic refers to the tendency to make judgments based on similarity, based on the rule ``like goes with like'', which is typically used to determine whether an object belongs to a specific category. When people use the representativeness heuristic, \emph{``probabilities are evaluated by the degree to which A is representative of B, that is, by the degree to which A resembles B''} \citep{Tversky27091974}. This heuristic provides people with means for assessing the probability of an uncertain event. It is used to answer questions such as ``What is the probability that object A belongs to class B? What is the probability that event A originates from process B?'' \citep{Tversky27091974}. The representativeness heuristic is not the only explanation for the results of the conjunction fallacy experiments. \citet{hertwig2008conjunction} hypothesized that the fallacy is caused by ``a misunderstanding about conjunction'', in other words, by a different interpretation of ``probability'' and ``and'' by the subjects than assumed by the experimenters.
The validity of this alternative hypothesis has been subject to criticism \citep{tentori2012conjunction}; nevertheless, some empirical evidence suggests that the problem of correct understanding of ``and'' is of particular importance to rule learning \citep{furnkranz2018cognitive}. Recent research has provided several explanations for conjunctive and disjunctive (cf.\ Section~\ref{ss:disjfal}) fallacies, such as configural weighting and adding (CWA) theory \citep{nilsson2009linda}, applying principles of quantum cognition \citep{bruza2015quantum} and inductive confirmation theory \citep{tentori2013determinants}. In the following, we will focus on the CWA theory. CWA essentially assumes that the causes of conjunctive and disjunctive fallacies relate to the fact that decision makers perform a weighted average instead of a multiplication of the component probabilities. For conjunctions, weights are set so that more weight is assigned to the lower component probability. For disjunctive probabilities, more weight is assigned to the more likely component. This assumption was verified in at least one study \citep{fisk2002judgments}. For more discussion of the related averaging heuristic, cf. Section~\ref{ss:averaging}. \paragraph{Implications for rule learning} Rules are not composed only of conditions, but also of an outcome (the value of a target variable in the consequent). A higher number of conditions generally allows the rule to filter a purer set of objects with respect to the value of the target variable than a smaller number of conditions. Application of the representativeness heuristic can affect the human perception of rule plausibility in that rules that are more ``representative'' of the user's mental image of the concept may be preferred even in cases when their objective discriminatory power is lower.
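The conjunction rule as it applies to rules can also be checked mechanically: since a rule antecedent is a conjunction, adding a condition can never increase support. A minimal plain-Python sketch with hypothetical data (attribute names are illustrative only):

```python
# Hypothetical illustration: adding a condition to a rule's antecedent can
# never increase its support -- the rule-learning analogue of the
# conjunction rule Pr(A, B) <= Pr(A).
instances = [
    {"age": 30, "student": True,  "buys": True},
    {"age": 45, "student": False, "buys": True},
    {"age": 22, "student": True,  "buys": False},
    {"age": 35, "student": False, "buys": False},
]

def support(conditions, data):
    """Absolute support: number of instances satisfying all conditions."""
    return sum(all(cond(x) for cond in conditions) for x in data)

short_rule = [lambda x: x["student"]]
long_rule = short_rule + [lambda x: x["age"] < 25]  # one extra condition

print(support(short_rule, instances))  # 2
print(support(long_rule, instances))   # 1 -- never more than the short rule
```

A more ``representative'' long rule can therefore only cover a subset of the instances covered by its shortening, regardless of how plausible it appears.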
\paragraph{Debiasing techniques} A number of factors that decrease the proportion of subjects exhibiting the conjunction fallacy have been identified: \citet{charness2010conjunction} found that the number of participants committing the fallacy is reduced under a monetary incentive. Such an addition was reported to drop the fallacy rate in their study from 58\% to 33\%. The reduced rate under a monetary incentive suggests that the problem is less pronounced for consequential real-life decisions. \citet{zizzo2000violation} found that unless the decision problem is simplified, neither monetary incentives nor feedback can ameliorate the fallacy rate. A reduced task complexity is thus a precondition for monetary incentives and feedback to be effective. \citet{stolarz1996conjunction} observed that the rate of fallacies is reduced but still strongly present when the subjects receive training in logic. \citet{gigerenzer1996reasoning} as well as \citet{gigerenzer1995improve} showed that the rate of fallacies can be reduced or even eliminated by presenting the problems in terms of frequencies rather than probabilities. \citet{nilsson2009linda} present a computer simulation showing that when the component probabilities are not precisely known, averaging often provides an equally good alternative to the normative computation of probabilities (cf. also \citet{juslin2009probability}). This computational model could possibly be adopted to detect a high risk of fallacy, corresponding to the case when the deviation between the perceived probability and the normative probability is high. \subsection{Misunderstanding of ``and''} The misunderstanding of ``and'' refers to a phenomenon affecting the syntactic comprehensibility of the logical connective ``and''.
As discussed by \citet{hertwig2008conjunction}, ``and'' in natural language can express several relationships, including temporal order, causal relationship, and most importantly, can also indicate a collection of sets\footnote{As in ``He invited friends and colleagues to the party''} as well as their intersection. People can therefore interpret ``and'' in a different meaning than intended. For example, according to the two experiments reported by \citet{hertwig2008conjunction}, the conjunction ``bank teller and active in the feminist movement'' used in the Linda problem (cf.\ Section~\ref{ss:reprh}) was found by about half of subjects as ambiguous---they explicitly asked the experimenter how ``and'' was to be understood. Furthermore, when participants indicated how they understood ``and'' by shading Venn diagrams, it turned out that about a quarter of them interpreted ``and'' as union rather than intersection, which is usually assumed by experimenters using the Linda problem. \paragraph{Implications for rule learning} The formation of conjunctions via ``and'' is a basic building block of rules. Its correct understanding is thus important for effective communication of results of rule learning. Existing studies suggest that the most common type of error is understanding ``and'' as a union rather than intersection. In such a case, a rule containing multiple ``ands'' will be perceived as having a higher support than it actually has. Each additional condition will be incorrectly perceived as increasing the coverage of the rule. This implies higher perceived plausibility of the rule. Misunderstanding of ``and'' will thus generally increase the preference of rules with more conditions. \paragraph{Debiasing techniques} According to \citet{sides2002reality} ``and'' ceases to be ambiguous when it is used to connect propositions rather than categories.
The authors give the following example of a sentence which is not prone to misunderstanding: ``IBM stock will rise tomorrow and Disney stock will fall tomorrow.'' A similar wording of rule learning results may be, despite its verbosity, preferred. \citet{mellers2001frequency} showed that using ``bank tellers who are feminists'' or ``feminist bank tellers'' rather than ``bank tellers and feminists'' as a category in the Linda problem (Figure~\ref{fig:Linda}) might reduce the likelihood of committing the conjunction fallacy. It follows that using different wording such as ``and also'' might also help reduce the danger of a misunderstanding of ``and''. Representations that visually express the semantics of ``and'', such as decision trees, may be preferred over rules, which do not provide such visual guidance.\footnote{We find limited grounding for this proposition in the following: Conditions connected with an arch in a tree are to be interpreted as simultaneously valid (i.e., arch means conjunction). A recent empirical study on the comprehensibility of decision trees \citep{Piltaver2016333} does not consider ambiguity of this notation to be a systematic problem among the surveyed users.} \subsection{Averaging Heuristic} \label{ss:averaging} While the conjunction fallacy is most commonly explained by the operation of the representativeness heuristic, the averaging heuristic provides an alternative explanation: it suggests that people evaluate the probability of a conjunction of events as the average of the probabilities of the component events \citep{fantino1997conjunction}.
As reported by \citet{fantino1997conjunction}, in their experiment ``approximately 49\% of variance in subjects' conjunctions could be accounted for by a model that simply averaged the separate component likelihoods that constituted a particular conjunction.'' \paragraph{Implications for rule learning} When applying the averaging heuristic, an analyst may not fully realize the consequences of the presence of a low-probability condition for the overall likelihood of the set of conditions in the antecedent of the rule. Consider the following example: Let us assume that the learning algorithm only adds independent conditions that have a probability of $0.8$, and we compare a 3-condition rule to a 2-condition rule. Averaging would evaluate both rules equally, because both have an average probability of $0.8$. A correct computation of the joint probability, however, shows that the longer rule is considerably less likely ($0.8^3$ vs.\ $0.8^2$ because all conditions are assumed to be independent). Averaging can also affect same-length rules. \citet{fantino1997conjunction} derive from their experiments on the averaging heuristic that humans tend to judge ``unlikely information [to be] relatively more important than likely information.'' Continuing our example, if we compare the above 2-condition rule with another 2-condition rule whose conditions have more diverse probabilities, e.g., one condition has $1.0$ and the other has $0.6$, then averaging would again evaluate both rules the same, but in fact the correct interpretation would be that the rule with equal probabilities is more likely than the other ($0.8^2 > 1.0 \times 0.6$). In this case, the low $0.6$ probability in the new rule would ``knock down'' the normative conjoint probability below the one of the rule with two $0.8$ conditions.
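The arithmetic of this example can be checked directly; the following plain-Python sketch contrasts the averaging heuristic with the normative product rule for independent conditions:

```python
# Averaging heuristic vs. normative joint probability for independent
# conditions, using the probabilities from the example above.
def average(ps):
    return sum(ps) / len(ps)

def joint(ps):
    result = 1.0
    for p in ps:
        result *= p
    return result

rule_two   = [0.8, 0.8]       # two independent conditions
rule_three = [0.8, 0.8, 0.8]  # three independent conditions
rule_mixed = [1.0, 0.6]       # same average as rule_two, different joint

# Averaging judges all three rules alike ...
print([round(average(r), 2) for r in (rule_two, rule_three, rule_mixed)])
# -> [0.8, 0.8, 0.8]
# ... but the normative joint probabilities differ considerably:
print([round(joint(r), 3) for r in (rule_two, rule_three, rule_mixed)])
# -> [0.64, 0.512, 0.6]
```

An interface estimating the risk of this bias could flag rules for which the averaged and the multiplied condition probabilities diverge strongly.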
\paragraph{Debiasing techniques} Experiments conducted by \citet{zizzo2000violation} showed that prior knowledge of probability theory, and a direct reminder of how probabilities are combined, are effective tools for decreasing the incidence of the conjunction fallacy, which is the hypothesized consequence of the averaging heuristic. A specific countermeasure for the biases caused by linear additive integration (weighted averaging) is the use of logarithm formats. Experiments conducted by \citet{juslin2011reducing} show that recasting probability computation in terms of logarithm formats, thus requiring additive rather than multiplicative integration, improves probabilistic reasoning. \subsection{Disjunction Fallacy} \label{ss:disjfal} The disjunction fallacy refers to a judgment that is inconsistent with the disjunction rule, which states that the probability $\Pr(X)$ cannot be higher than the probability $\Pr(Z)$, where $Z = X\cup Y$ is a union of event $X$ with another event $Y$. In experiments reported by \citet{bar1993alike}, $X$ and $Z$ were nested pairs of categories, such as Switzerland and Europe. Subjects read descriptions of people such as ``Writes letter home describing a country with snowy wild mountains, clean streets, and flower decked porches. Where was the letter written?'' It follows that since Europe contains Switzerland, Europe must be more likely than Switzerland. However, Switzerland was chosen as the more likely place by about 75\% of the participants \citep{bar1993alike}. The disjunction fallacy is considered another consequence of the representativeness heuristic \citep{bar1993alike}: ``Which of two events---even nested events---will seem more probable is better predicted by their representativeness than by their scope, or by the level in the category hierarchy in which they are located.''
The description in the example is more representative of Switzerland than of Europe, so when people use representativeness as the basis for their judgment, they judge Switzerland to be a more likely answer than Europe, even though this judgment breaks the disjunction rule. \paragraph{Implications for rule learning} In the context of data mining, it can be the case that the feature space is hierarchically ordered. The analyst can thus be confronted with rules containing attributes (literals) on multiple levels of granularity. Following the disjunction fallacy, the analyst will generally prefer rules containing more specific attributes, which can result in a preference for rules with fewer backing instances and thus in weaker statistical validity. \paragraph{Debiasing techniques} When asked to assign categories to concepts (such as land of origin of a letter) under conditions of certainty, people are known to prefer a specific category to a more general category that subsumes it, but only if the specific category is considered representative \citep{bar1993alike}: ``whenever an ordering of events by representativeness differs from their ordering by set inclusion, there is a potential for an extension fallacy to occur.'' From this observation a possible debiasing strategy emerges: making the analysts aware of the taxonomical relation of the individual attributes and their values. For example, the user interface can work with the information that Europe contains Switzerland, possibly actively notifying the analyst of the risk of falling for the disjunction fallacy. This intervention can be complemented by ``training in rules'' \citep{larrick2004debiasing}. In this case, analysts should be made aware of the benefits of the larger supporting sample associated with more general attributes. \subsection{Base-rate Neglect} \label{ss:baseratefallacy} People tend to underweigh the evidence provided by base rates, which results in the so-called \emph{base-rate neglect}.
For example, \citet{kahneman1973psychology} gave participants a description of a person who was selected randomly from a group and asked them whether the person is an engineer or a lawyer. Participants based their judgment mostly on the description of the person and paid little consideration to the occupational composition of the group, even though the composition was provided as part of the task and should play a significant role in the judgment. \citet{kahneman1973psychology} view the base-rate neglect as a possible consequence of the representativeness heuristic \citep{kahneman1972subjective}. When people base their judgment of an occupation of a person mostly on similarity of the person to a prototypical member of the occupation, they ignore other relevant information such as base rates, which results in the base-rate neglect. \paragraph{Implications for rule learning} The application of the base-rate neglect suggests that when facing two otherwise identical rules with different values of confidence and support metrics, an analyst's preferences will be primarily shaped by the confidence of the rule. Support corresponds to the ``base rate'', which is sometimes almost completely ignored \citep{kahneman1973psychology}. It follows that by increasing preference for higher confidence, the base-rate neglect will generally contribute to a positive correlation between rule length and plausibility, since longer rules can better adapt to a particular group in data and thus have a higher confidence than more general, shorter rules. This is in contrast to the general bias for simple rules that is implemented by state-of-the-art rule learning algorithms, because simple rules tend to be more general, have a higher support, and are thus statistically more reliable. \paragraph{Debiasing techniques} \citet{gigerenzer1995improve} show that representations in terms of natural frequencies, rather than conditional probabilities, facilitate the computation of a cause's probability.
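Both the bias and the natural-frequency remedy can be illustrated with a small worked example in the spirit of the engineer/lawyer task (the numbers below are hypothetical):

```python
# Hypothetical numbers: out of 100 people, 30 are engineers and 70 are
# lawyers; the description matches 90% of engineers but also 30% of lawyers.
engineers, lawyers = 30, 70
p_match_eng, p_match_law = 0.9, 0.3

# Judging by representativeness alone ignores the base rate:
no_base_rate = p_match_eng / (p_match_eng + p_match_law)

# Natural-frequency reading: 27 engineers and 21 lawyers match.
matching_eng = engineers * p_match_eng
matching_law = lawyers * p_match_law
with_base_rate = matching_eng / (matching_eng + matching_law)

print(round(no_base_rate, 2))    # 0.75 -- intuitive but wrong
print(round(with_base_rate, 4))  # 0.5625 -- base rate taken into account
```

Phrased as counts per 100 people, the correct posterior is almost immediate, which is the essence of the natural-frequency debiasing strategy.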
Confidence is typically presented as a percentage in current software systems. The support rule quality metric is sometimes presented as a percentage and sometimes as a natural number. It would foster correct understanding if analysts were consistently presented with natural frequencies in addition to percentages. \subsection{Insensitivity to Sample Size} People tend to underestimate the increased benefit of higher robustness of estimates that are made on a larger sample, which is called insensitivity to sample size. The insensitivity to sample size effect can be illustrated by the so-called hospital problem. In this problem, subjects are asked which hospital is more likely to record more days in which more than 60 percent of the newborns are boys. The options are a larger hospital, a smaller hospital, or both hospitals with about a similar probability. The correct expected answer---the smaller hospital---was chosen only by 22\% of participants in an experiment reported by \citet{Tversky27091974}. Insensitivity to sample size may be another bias resulting from the use of the representativeness heuristic \citep{kahneman1972subjective}. When people use the representativeness heuristic, they compare the proportion of newborns who are boys to the proportion expected in the population, ignoring other relevant information. Since the proportion is similarly representative of the whole population for both hospitals, most of the participants believed that both hospitals are equally likely to record days in which more than 60 percent of the newborns are boys \citep{Tversky27091974}. \paragraph{Implications for rule learning} This effect implies that analysts may be unable to appreciate the increased reliability of the confidence estimate with an increasing value of support, i.e., they may fail to appreciate that the strength of the connection between the antecedent and consequent of a rule rises with an increasing number of observations.
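The statistics behind the hospital problem can be verified directly, assuming each birth is a boy with probability $0.5$ (a plain binomial model; the birth counts per day are illustrative):

```python
# Probability of a day on which more than 60% of newborns are boys,
# for a small hospital (15 births/day) vs. a large one (45 births/day).
from math import comb

def p_exceeds(n, cutoff, p=0.5):
    """P(X > cutoff) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(cutoff + 1, n + 1))

# 60% of the day's births: cutoff 9 of 15, and 27 of 45.
small = p_exceeds(15, 9)
large = p_exceeds(45, 27)
print(round(small, 3), round(large, 3))  # the small sample deviates far more often
```

The small hospital exceeds the 60\% mark on roughly 15\% of days, the large one considerably less often, which is exactly the sample-size effect that subjects fail to appreciate.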
If confronted with two rules, where one of them has a slightly higher confidence and the second rule a higher support, this cognitive bias suggests that the analyst will prefer the rule with the higher confidence (all other factors equal). In the context of this bias, it is important to realize that population size is statistically irrelevant for the determination of sample size for large populations \citep{cochran2007sampling}. However, previous research \citep{bar1979role} has shown that the perceived sample accuracy can incorrectly depend on the sample-to-population ratio rather than on the absolute sample size. For a small population, a 10\% sample can be considered more reliable than a 1\% sample drawn from a much larger population. This observation has substantial consequences for the presentation of rule learning results. The support of a rule is typically presented as a percentage of the dataset size. Assuming that support relates to sample size and the number of instances in the dataset to population size, it follows that the presentation of support as a percentage (relative support) induces the insensitivity to sample size effect. The recommended alternative is to present support as an absolute number (absolute support). \paragraph{Debiasing techniques} There have been successful experiments with providing decision aids to overcome the insensitivity to sample size bias. In particular, \citet{kachelmeier1990investigation} experimented with providing auditors with a formula for computing the appropriate sample size for substantive tests of details based on the description of a case and tolerable error. Provision of the aid resulted in larger sample sizes being selected by the auditors in comparison to intuitive judgment without the aid. Just as the auditor can choose the sample size, a user of an association rule learning algorithm can specify the minimum support threshold.
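How the reliability of the confidence estimate grows with absolute support can be quantified, for instance, with a 95\% Wilson score interval; the sketch below is a plain-Python illustration, not taken from any particular rule learning tool:

```python
# Sketch: 95% Wilson score interval for rule confidence, computed from the
# absolute support (instances covered by the antecedent) and the number of
# hits (covered instances where the consequent also holds).
import math

def wilson_interval(hits, support, z=1.96):
    p = hits / support
    denom = 1 + z * z / support
    center = (p + z * z / (2 * support)) / denom
    half = z * math.sqrt(p * (1 - p) / support
                         + z * z / (4 * support * support)) / denom
    return center - half, center + half

# Identical 80% confidence, very different reliability:
lo, hi = wilson_interval(8, 10)
print(round(lo, 2), round(hi, 2))    # 0.49 0.94 -- wide interval
lo, hi = wilson_interval(800, 1000)
print(round(lo, 2), round(hi, 2))    # 0.77 0.82 -- narrow interval
```

Displaying such an interval next to the confidence value makes the consequence of a low support threshold directly visible to the analyst.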
To leverage the debiasing strategy validated by \citet{kachelmeier1990investigation}, the rule learning interface should also inform the user of the effects of the chosen support threshold on the accuracy of the confidence estimate of the resulting rules. For algorithms and workflows where the user cannot influence the support of a discovered rule, relevant information should be available as part of the rule learning results. In particular, the value of rule support can be used to compute a confidence interval for the value of confidence. Such supplementary information is already provided by Bayesian decision lists \citep{letham2015interpretable}, a recently proposed algorithmic framework positively evaluated with respect to interpretability (cf., e.g., \citep{de2017algorithmic}). \subsection{Confirmation Bias and Positive Test Strategy} Confirmation bias refers to the notion that people tend to look for evidence supporting the current hypothesis, disregarding conflicting evidence. According to \citet[p.\ 552]{evans1989bias}, confirmation bias is ``the best known and most widely accepted notion of inferential error of human reasoning.''\footnote{Cited according to \citet{nickerson1998confirmation}.} Research suggests that even neutral or unfavorable evidence can be interpreted to support existing beliefs, or, as \citet[p.\ 115--116]{trope1997wishful} put it, ``the same evidence can be constructed and reconstructed in different and even opposite ways, depending on the perceiver's hypothesis.'' A closely related phenomenon is the \emph{positive test strategy} (PTS) described by \citet{klayman1987confirmation}. This reasoning strategy suggests that when trying to test a specific hypothesis, people examine cases which they expect to confirm the hypothesis rather than the cases which have the best chance of falsifying it.
The difference between PTS and confirmation bias is that PTS is applied to test a candidate hypothesis, while confirmation bias is concerned with hypotheses that are already established \citep[p.\ 93]{pohl2004cognitive}. The experimental results of \citet{klayman1987confirmation} show that under realistic conditions, PTS can be a very good heuristic for determining whether a hypothesis is true or false, but it can also lead to systematic errors if applied to an inappropriate task. \paragraph{Implications for rule learning} This bias can have a significant impact depending on the purpose for which the rule learning results are used. If the analyst has some prior hypothesis before she obtains the rule learning results, the confirmation bias suggests that she will tend to ``cherry-pick'' rules confirming this prior hypothesis and disregard rules that contradict it. Given that some rule learners may output contradicting rules, the analyst may tend to select only the rules conforming to the hypothesis, disregarding applicable rules with the opposite conclusion, which could otherwise turn out to be more relevant. \paragraph{Debiasing techniques} \label{ss:confbias-debias} Delaying final judgment and slowing down work has been found to decrease confirmation bias in several studies \citep{spengler1995scientist,parmley2006effects}. User interfaces for rule learning should thus give the user not only the opportunity to save or mark interesting rules, but also allow the user to review and edit the model at a later point in time. An example of a rule learning system with this specific functionality is EasyMiner \citep{vojivr2018easyminer}. \citet{wolfe2008locus} successfully experimented with providing subjects with explicit guidelines for considering evidence both for and against a hypothesis.
Provision of ``balanced" instructions to search evidence for and against a given hypothesis reduced the incidence of myside bias, an effect closely related to confirmation bias, from 50\% exhibited by the control group to a significantly lower 27.5\%. Similarly, providing explicit guidance combined with modifications of the user interface of the system presenting the rule learning results could also be considered. The assumption that educating users about cognitive illusions can be an effective debiasing technique for positive test strategy has been empirically validated on a cohort of adolescents by \citet{barberia2013implementation}. \subsection{Availability Heuristic} The availability heuristic is a judgmental heuristic in which a person evaluates the frequency of classes or the probability of events by the ease with which relevant instances come to mind. This heuristic is explained by its discoverers, \citet{tversky1973availability}, as follows: ``That associative bonds are strengthened by repetition is perhaps the oldest law of memory known to man. The availability heuristic exploits the inverse form of this law, that is, it uses the strength of the association as a basis for the judgment of frequency.'' The availability heuristic is not itself a bias, but it may lead to biased judgments when availability is not a valid cue. In one of the original experiments, participants were asked whether the letter ``R'' appears more frequently on the first or third position in English texts \citep{tversky1973availability}. About 70\% of participants answered incorrectly that it appears more frequently on the first position, presumably because they estimated the frequency by recalling words containing ``R'' and it is easier to recall words starting with R than words with R on the third position. 
While the original research did not distinguish between the number of recollected instances and the ease of the recollection, later studies showed that to determine availability, it is sufficient to assess the ease with which instances or associations could be brought to mind; it is not necessary to count all the instances one is able to come up with \citep{schwarz1991ease}. \paragraph{Implications for rule learning} An application of the availability heuristic in rule learning would be based on the ease of recollection of instances (examples) matching the \emph{complete} rule (all conditions and consequent) by the analyst. Rules containing conditions for which instances can be easily recalled would be found more plausible compared to rules not containing such conditions. As an example, consider the rule pair \medskip \begin{tabular}{cl} $R_1$: & \texttt{IF latitude $\leq$ 44.189 AND longitude $\leq$ 6.3333} \\ &\texttt{AND longitude $>$ 1.8397 THEN Unemployment is high}\\ $R_2$: & \texttt{IF population $\leq$ 5 million THEN Unemployment is high}. \end{tabular} \medskip \noindent It is arguably easier to recall specific countries matching the second rule than countries matching the conditions of the first rule. It is conceivable that the availability heuristic could also be applied in cases where the easily recalled instances match \emph{only some of the conditions} in the antecedent of the rule, such as only latitude in the example above. The remaining conditions would be ignored. On the other hand, such a preference can also be deliberately implemented as a bias in rule learning algorithms. Often, in particular in cases where many candidate conditions are available, such as datasets with features derived from the semantic web \citep{Ristoski2016}, the same information can be encoded in rules that use different sets of conditions. For example, \citet{gabriel2014learning} proposed an algorithm that gives preference to selecting conditions that are semantically coherent.
A similar technique could be used for realizing a preference for attributes that are easier to recall for human analysts. \paragraph{Debiasing techniques} Several studies have found that people use ease of recollection in judgment only when they cannot attribute it to a source that should not influence their judgment \citep{schwarz2004metacognitive}. Alerting an analyst to the reason why instances matching the conditions in the rule under consideration are easily recalled should therefore reduce the impact of the availability heuristic, as long as the reason is deemed irrelevant to the task at hand. \subsection{Reiteration Effect, Effects of Validity and Illusory Truth} \label{ss:reiteration} The reiteration effect describes the phenomenon that repeated statements tend to become more believable \citep{hertwig1997reiteration,pachur2011recognition}. For example, in one experiment, \citet{hasher1977frequency} presented subjects with general statements and asked them to assess their validity. Some of the statements were false and some were true. The experiment was conducted in several sessions, where some of the statements were repeated in subsequent sessions. The average perceived validity of both true and false repeated statements rose between the sessions, while for non-repeated statements it dropped slightly. The effect is usually explained by the use of processing fluency in judgment. Statements that are processed fluently (easily) tend to be judged as true, and repetition makes processing easier. A recent alternative account argues that repetition makes the referents of statements more coherent and people judge truth based on coherency \citep{unkelbach2017referential}. The reiteration effect is also known under different labels, such as ``frequency-validity'' or ``illusory truth'' \citep[p.~195]{hertwig1997reiteration}. However, some research suggests that these are not identical phenomena.
For example, the \emph{truth effect} ``disappears when the actual truth status is known'' \citep[p.~253]{pohl2017cognitive}, which does not hold for the validity effect in general. There is also a clear distinction between the effects covered here and the mere exposure effect covered in Section~\ref{ss:mereexp}: the truth effect has been found largely independent of the duration of stimulus exposure \citep[p.~245]{dechene2010truth}. \paragraph{Implications for rule learning} In the rule learning context, a repeating statement which becomes more believable corresponds to the entire rule or possibly a ``subrule'' consisting of the consequent of the rule and a subset of conditions in its antecedent. A typical rule learning result contains multiple rules that are substantially overlapping. If the analyst is exposed to multiple similar statements, the reiteration effect will increase the analyst's belief in the \emph{repeating} subrule. Especially in the area of association rule learning, a very large set of redundant rules---covering the same, or nearly the same, set of examples---is routinely included in the output. \citet{schwarz2007metacognitive} suggest that a mere 30 minutes of delay can be enough for information originally seen as negative to have a positive influence. Applying this to a data exploration task, consider an analyst who is presented with a large number of ``weak'' rules corresponding to highly speculative patterns in the data. Even if the analyst rejects the rule---for example, based on the presented metrics, pre-existing domain knowledge or common sense---the validity and truthfulness effects will make the analyst more prone to accept a similar rule later. \paragraph{Debiasing techniques} The reiteration effect can be suppressed already on the algorithmic level by ensuring that the rule learning output does not contain redundant rules. This can be achieved by pruning algorithms \citep{furnkranz1997pruning}.
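The pruning idea can be sketched as a simple overlap filter on rule coverage sets; the threshold and function names below are illustrative rather than taken from a published pruning algorithm:

```python
# Illustrative redundancy filter: each rule is represented by the set of
# instance ids it covers; a rule is dropped when its coverage is nearly
# identical (Jaccard similarity >= threshold) to an already kept rule.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def prune_redundant(rules, covers, threshold=0.7):
    """rules: rule ids sorted by quality, best first;
    covers: dict mapping rule id -> set of covered instance ids."""
    kept = []
    for rule in rules:
        if all(jaccard(covers[rule], covers[k]) < threshold for k in kept):
            kept.append(rule)
    return kept

covers = {"r1": {1, 2, 3, 4}, "r2": {1, 2, 3}, "r3": {7, 8, 9}}
print(prune_redundant(["r1", "r2", "r3"], covers))  # ['r1', 'r3']
```

Here `r2` is dropped because its coverage overlaps that of the stronger `r1` almost completely, so the analyst is not repeatedly exposed to what is essentially the same pattern.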
Another possible technique is presenting the result of rule learning in several layers, where only clusters of rules (``rule covers") summarizing multiple subrules are presented at first \citep{ordonez2006constraining}. The user can expand the cluster to obtain more similar rules. A more recent algorithm that can be used for summarizing multiple rules is the meta-learning method proposed by \citet{berka2018comprehensive}. Several lessons can be learnt from \citet{hess2006psychological}, who studied the role of the reiteration effect in the spreading of gossip. Interestingly, already simple reiteration was found to increase gossip veracity, but only for those who found the gossip relatively uninteresting. Multiple sources of gossip were found to increase its veracity, especially when these sources were independent. Information that explained the gossip by providing a benign interpretation decreased the veracity of gossip. These findings suggest that it is important to explain to the analyst which rules share the same source, i.e., what the overlap in their coverage is in terms of specific instances. Furthermore, explanations can be improved by utilisation of recently proposed techniques that use domain knowledge to filter or explain rules, such as the expert deduction rules proposed by \citet{Rauch2018}. The research related to debiasing validity and truth effects has been largely centered around the problem of debunking various forms of misinformation (cf., e.g., \citep{schwarz2007metacognitive,lewandowsky2012misinformation,ecker2017reminders}). The currently largely accepted recommendation is that to correct misinformation, it is best to address it directly -- repeat the misinformation along with arguments against it \citep{lewandowsky2012misinformation,ecker2017reminders}. This can be applied, for example, in incremental machine learning settings, when the results of learning are revised as new data arrive, or when mining with formalized domain knowledge.
Generally, when the system has knowledge of the analyst having previously been presented a rule (a hypothesis) that is falsified by the current state of knowledge, the system can explicitly notify the analyst, listing the rule in question and explaining why it does not hold. \subsection{Mere Exposure Effect} \label{ss:mereexp} According to the mere exposure effect, repeated exposure to an object results in an increased preference (liking, affect) for that object. When a concrete stimulus is repeatedly exposed, the preference for that stimulus increases logarithmically as a function of the number of exposures \citep{bornstein1989exposure}. The size of the mere exposure effect also depends on whether the stimulus the subject is exposed to is exactly the same as in the prior exposure or only similar to it \citep{monahan2000subliminal}---identical stimuli are associated with a larger mere exposure effect. The mere exposure effect is another consequence of the increased fluency of processing associated with repeated exposure (cf.\ Section~\ref{ss:reiteration}) \citep{winkielman2003hedonic}. While the reiteration effect referred to the use of processing fluency in judgments of truth, the mere exposure effect relates to the positive feeling that is associated with fluent processing. Exposure durations below 1 second produce the strongest effects; with increasing exposure time the effect drops, and repeated exposures decrease the mere exposure effect. The liking induced by the effect drops more quickly with increasing exposures when the presented stimulus is simple (e.g., an ideogram) as opposed to complex (e.g., a photograph) \citep{bornstein1989exposure}. A recent meta-analysis suggests that there is an inverted-U shaped relation between exposure and affect \citep{montoya2017re}.
\paragraph{Implications for rule learning} The extent to which the mere exposure effect can affect the interpretation of rule learning results is limited by the fact that its magnitude decreases with extended exposure to the stimuli. It can be expected that analysts inspect the rule learning results for a much longer period of time than the 1 second below which exposure results in the strongest effects \citep{bornstein1989exposure}. However, it is not unusual for rule-based models to be composed of several thousand rules \citep{alcala2011fuzzy}. When the user scrolls through a list of rules, each rule can be shown only for a fraction of a second. The analyst is not aware of having seen the rule, yet the rule can influence the analyst's judgment through the mere exposure effect. The mere exposure effect can also play a role when rules from the text mining or sentiment analysis domains are interpreted. The initial research on the mere exposure effect by \citet{zajonc1968attitudinal} included experimental evidence on the positive correlation between word frequency and the affective connotation of the word. From this it follows that a rule containing frequently occurring words can induce the mere exposure effect. \paragraph{Debiasing techniques} While there is a considerable body of research focusing on the mere exposure effect, our literature survey did not result in any directly applicable debiasing techniques. Only recently, \citet{becker2016reversing} reported the first reversal of the mere exposure effect. This was achieved by presenting threatening materials (spider pictures) to people fearful of spiders in an unpleasant detection situation. This result, although interesting, is difficult to transpose to the domain of rules. Nevertheless, there are some conditions known to decrease the mere exposure effect that can be utilized in machine learning interfaces. The effect is strongest for repeated, ``flash-like" presentation of information.
A possible workaround is to avoid subliminal exposure completely, by changing the mode of operation of the corresponding user interfaces. One attempt at a user interface to rule learning respecting these principles is the EasyMiner system \citep{easyminer12}. In EasyMiner, the user precisely formulates the mining task as a query against the data. This restricts the number of rules that are discovered and that the user is consequently exposed to. \subsection{Overconfidence and Underconfidence} \label{ss:effdif} A decision maker's judgment is normally associated with a belief that the judgment is true, i.e., with confidence in the judgment. \citet{griffin1992weighing} argue that confidence in judgment is based on a combination of the strength of evidence and its weight (credibility). According to their studies, people tend to combine strength with weight in suboptimal ways, resulting in the decision maker being more or less confident about the hypothesis at hand than would be normatively appropriate given the available information. This discrepancy between the normative confidence and the decision maker's confidence is called \emph{overconfidence} or \emph{underconfidence}. People use the provided data to assess a hypothesis, but they insufficiently regard the quality of the data.
\citet{griffin1992weighing} describe this manifestation of bounded rationality as follows: ``If people focus primarily on the warmth of the recommendation with insufficient regard for the credibility of the writer, or the correlation between the predictor and the criterion, they will be overconfident when they encounter a glowing letter based on casual contact, and they will be underconfident when they encounter a moderately positive letter from a highly knowledgeable source.'' \paragraph{Implications for rule learning} Research has revealed systematic patterns of overconfidence and underconfidence \citep[p.~426]{griffin1992weighing}: If the estimated difference between two hypotheses is large, it is easy to say which one is better, and there is a pattern of underconfidence. As the degree of difficulty rises (the difference between the normative confidence of two competing hypotheses is decreasing), there is an increasing pattern of overconfidence. The strongest overconfidence was recorded for problems where the \emph{weight of evidence is low and the strength of evidence is high}. This directly applies to rules with a high value of confidence and a low value of support. The empirical results related to the effect of difficulty therefore suggest that the predictive ability of such rules will be substantially overrated by analysts. This is particularly interesting because rule learning algorithms often suffer from a tendency to unduly prefer overly specific rules that have a high confidence on small parts of the data to more general rules that have a somewhat lower confidence, a phenomenon also known as overfitting. The above-mentioned results seem to indicate that humans suffer from a similar problem (albeit presumably for different reasons), which, e.g., implies that a human-in-the-loop solution may not alleviate this problem.
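As a small illustration of this pattern (the rule strings and thresholds are invented for the example), rules that combine high confidence with low support, i.e., exactly the statistics most prone to analyst overconfidence, can be flagged automatically:

```python
# Sketch: flag rules matching the overconfidence-prone pattern --
# high strength of evidence (confidence), low weight of evidence (support).

def flag_overconfidence_prone(rules, min_conf=0.9, max_supp=0.05):
    """Return rules with high confidence but low relative support."""
    return [r for r in rules
            if r["confidence"] >= min_conf and r["support"] <= max_supp]

rules = [
    {"rule": "IF a AND b AND c THEN y", "confidence": 0.95, "support": 0.01},
    {"rule": "IF a THEN y",             "confidence": 0.70, "support": 0.40},
]
flagged = flag_overconfidence_prone(rules)
print([r["rule"] for r in flagged])  # only the narrow, specific rule
```

Such a flag could be surfaced in the user interface as a warning next to the affected rules rather than used to remove them outright.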
\paragraph{Debiasing techniques} Research applicable to debiasing of overconfidence originated in the 1950s, but most initial efforts to reduce overconfidence have failed \citep{fischoff1981debiasing,arkes1987two}. Some recent research focuses on the hypothesis that the feeling of confidence reflects factors indirectly related to choice processes \citep{fleisig2011adding,hall2007illusion}. For example, in a sport betting experiment performed by \citet{hall2007illusion}, participants underweighted statistical cues while betting when they knew the names of players. This research leads to the conclusion that ``more knowledge can decrease accuracy and simultaneously increase prediction confidence" \citep{hall2007illusion}. Applying this to debiasing in the rule learning context, presenting less information can be achieved by reducing the number of rules and removing some conditions in the remaining rules. This can be achieved by a number of methods, ranging from feature selection to externally setting the maximum antecedent length, which is permitted by some algorithms. Also, rules and conditions that do not pass a statistical significance test can be removed from the output. As with other biases, research on debiasing overconfidence points at the importance of educating the experts on principles of subjective probability judgment and the associated biases \citep{clemen2002debiasing}. \citet[p. 487]{shafir2013behavioral} recommends debiasing overconfidence (in policy making) by making the subject hear both sides of an argument. In the rule learning context, this would correspond to the user interface making rules and knowledge easily accessible which are in an ``unexpectedness'' or ``exception'' relation with the rule in question, as, e.g., experimented with in frameworks postprocessing association rule learning results \citep{Kliegr:2011:SSA:2070639.2070657}.
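The significance-based filtering mentioned above can be sketched as follows (a simplified illustration with hypothetical counts; a real system would also correct for multiple testing): each rule carries a 2x2 contingency table of its antecedent versus the predicted class, and a one-sided Fisher exact test decides whether the association is strong enough to keep the rule.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test p-value for the 2x2 table
    [[a, b], [c, d]]: probability, under independence, of an
    association at least as strong as the one observed."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    return sum(comb(row1, x) * comb(n - row1, col1 - x)
               for x in range(a, min(row1, col1) + 1)) / denom

def significant_rules(rules, alpha=0.05):
    """Keep rules whose antecedent/class association passes the test.
    a = covered & correct, b = covered & wrong,
    c = uncovered & class,  d = uncovered & other class."""
    return [rule for rule, table in rules
            if fisher_one_sided(*table) <= alpha]

rules = [("IF a THEN y", (9, 1, 1, 9)),   # strong association
         ("IF b THEN y", (3, 2, 2, 3))]   # indistinguishable from chance
print(significant_rules(rules))
```

The hypergeometric computation uses only the standard library; in practice one would typically call an existing statistics package instead.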
\subsection{Recognition Heuristic} \citet{pachur2011recognition} define the recognition heuristic as follows: ``For two-alternative choice tasks, where one has to decide which of two objects scores higher on a criterion, the heuristic can be stated as follows: If one object is recognized, but not the other, then infer that the recognized object has a higher value on the criterion.'' In contrast with the availability heuristic, which is based on ease of recall, the recognition heuristic is based only on the fact that a given object is recognized. The two heuristics could be combined. When only one object in a pair is recognized, then the recognition heuristic would be used for judgment. If both objects are recognized, then the speed of the recognition could influence the choice \citep{hertwig2008fluency}. The use of this heuristic can be seen in an experiment performed by \citet{goldstein1999recognition}, which focused on estimating which of two cities in a presented pair is more populated. People using the recognition heuristic would say that the city they recognize has the higher population. The median proportion of judgments complying with the recognition heuristic was 93\%. It should be noted that the application of this heuristic is in this case ecologically justified, since recognition will be related to how many times the city appeared in newspaper reports, which in turn is related to the city size \citep{beaman2006does}. \paragraph{Implications for rule learning} The recognition heuristic can manifest itself in a preference for rules containing a recognized attribute name or value in the antecedent of the rule. Analysts processing rule learning results are typically shown many rules, contributing to time pressure. This can further increase the impact of the recognition heuristic.
Empirical results reported by \citet{michalkiewicz2018smarter} indicate that people with higher cognitive ability use the recognition heuristic more when it is successful and less when it is not. The work of \citet{pohl2017use} shows that people adapt their decision strategy with respect to the more general environment rather than the specific items they are faced with. Considering that the application of the recognition heuristic can in some situations lead to better results than the use of available knowledge, the recognition heuristic may not necessarily have overly negative impacts on the interpretation of rule learning results. \enlargethispage*{12pt} \paragraph{Debiasing techniques} Under time pressure, people assign a higher value to recognized objects than to unrecognized objects. This happens also in situations when recognition is a poor cue \citep{pachur2006psychology}. Changes to user interfaces that induce ``slowing down" could thus help to address this bias. As to the alleviation of the effects of the recognition heuristic in situations where it is ecologically unsuitable, \citet{pachur2006psychology} note that suspension of the heuristic requires additional time or direct knowledge of the ``criterion variable''. In typical real-world machine learning tasks, the data can include a high number of attributes that even experts are not acquainted with in detail. When these are recognized (but not understood), even experts may be liable to the recognition heuristic. When information on the meaning of individual attributes and literals is made easily accessible, we conjecture that the application of the recognition heuristic can be suppressed. \subsection{Information Bias} \label{ss:information} Information bias refers to the tendency to seek more information to improve the perceived validity of a statement even if the additional information is not relevant or helpful.
The typical manifestation of the information bias is evaluating questions as worth asking even when the answer cannot affect the hypothesis that will be accepted \citep{baron1988heuristics}. For example, \citet{baron1988heuristics} asked subjects to assess to what degree a medical test is suitable for deciding which of three diseases to treat. The test detected a chemical, which was with a certain probability associated with each of the three diseases. These probabilities varied across the cases. Even though in some of the cases an outcome of the test would not change the most likely disease and thus the treatment, people tended to judge the test as worth doing. While information bias is primarily researched in the context of information acquisition \citep{nelson2010experience,nelson2005finding}, some scientists interpret this more generally as judging features with zero probability gain as useful, as having the potential to change one's belief \citep[p. 158]{nelson2008towards}. \paragraph{Implications for rule learning} Many rule learning algorithms allow the user to select the size of the generated model -- in terms of the number of rules that will be presented, as well as by setting the maximum length of the conditions of the generated rules. Either as part of the feature selection, or when defining constraints for the learning, the users decide which attributes are relevant. These can then appear among the conditions of the discovered rules. According to the information bias, people will be prone to set up the task so that they receive more information -- resulting in a larger rule list with longer rules containing attributes of little information value. It is unclear whether the information bias also applies to the case when the user is readily presented with more information, rather than being given the possibility to request it.
Given the proximity of these two scenarios, we conjecture that the information bias (or some related bias) will make people prefer more information to less, even if it is obviously not relevant. According to the information bias, a rule containing an additional (redundant) condition may be preferred to a rule not containing this condition. \paragraph{Debiasing techniques} While informing people about the diagnosticity of considered questions does not completely remove the information bias, it reduces it \citep{baron1988heuristics}. To this end, communicating attribute importance can help guide the analyst in the task definition phase. Although existing algorithms and systems already provide ways for determining the importance of individual rules, for example via values of confidence, support, and lift, cues on the importance of individual conditions in the rule antecedent are typically not provided. While feature importance is computed within many learning algorithms, it is often used only internally. Exposing this information to the user can help counter the information bias. \subsection{Ambiguity Aversion} \label{ss:ambiguity} Ambiguity aversion refers to the tendency to prefer known risks over unknown risks. This is often illustrated by the Ellsberg paradox \citep{ellsberg1961risk}, which shows that humans tend to systematically prefer a bet with a known probability of winning over a bet with a not precisely known probability of winning, even if it means that their choice is systematically influenced by irrelevant factors. As argued by \citet{Camerer1992}, ambiguity aversion is related to the information bias: the demand for information in cases when it has no effect on the decision can be explained by the aversion to ambiguity --- people dislike having missing information. \paragraph{Implications for rule learning} The ambiguity aversion may have profound implications for rule learning.
The typical data mining task will contain a number of attributes the analyst has no or very limited knowledge of. The ambiguity aversion will manifest itself in a preference for rules that do not contain ambiguous conditions. \paragraph{Debiasing techniques} An empirically proven way to reduce ambiguity aversion is accountability -- ``the expectation on the side of the decision maker of having to justify her decisions to somebody else" \citep{vieider2009effect}. This debiasing technique is hypothesized to work through the higher cognitive effort that is induced by accountability. This can be applied in the rule learning context by requiring the analysts to provide justifications for why they evaluated a specific discovered rule as interesting. Such an explanation can be textual, but it can also have a structured form. To decrease the demands on the analyst, the explanation may be required only if a conflict with existing knowledge has been automatically detected, for example, using the approach proposed by \citet{Rauch2018}. Since the application of the ambiguity aversion can partly stem from a lack of knowledge of the conditions included in the rule, it is conceivable that this bias would be alleviated if a description of the meaning of the conditions is made easily accessible to the analyst, as demonstrated, e.g., in \citep{Kliegr:2011:SSA:2070639.2070657}. \subsection{Confusion of the Inverse} This effect corresponds to confusing the probability of cause and effect, or, formally, the confidence of an implication $A \rightarrow B$ with that of its inverse $B \rightarrow A$, i.e., $\Pr(B \mid A)$ is confused with the inverse probability $\Pr(A \mid B)$. For example, \citet{villejoubert2002inverse} showed in an experiment that about half of the participants estimating the probability of membership in a class gave estimates that mostly corresponded to the inverse probability.
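The size of the discrepancy can be made concrete with a small numeric sketch (the counts are invented for illustration): computing the confidence of a rule $A \rightarrow B$ and of its inverse from the same counts gives sharply different values.

```python
# Sketch: confidence of A -> B versus its inverse B -> A, estimated
# from (hypothetical) counts over the same data set.

def rule_confidences(n_a, n_b, n_ab):
    """Return (Pr(B|A), Pr(A|B)) for a rule A -> B, where n_a and n_b
    count examples satisfying A and B, and n_ab counts those with both."""
    return n_ab / n_a, n_ab / n_b

# E.g., 50 animals have a trunk (A), 60 are elephants (B), 50 are both.
conf_forward, conf_inverse = rule_confidences(n_a=50, n_b=60, n_ab=50)
print(conf_forward)   # Pr(elephant | trunk) = 1.0
print(conf_inverse)   # Pr(trunk | elephant) ~ 0.83
```

An interface that displays both values side by side would make the asymmetry of the implication explicit to the analyst.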
\paragraph{Implications for rule learning} The confusion of the direction of an implication sign has significant consequences for the interpretation of a rule. Already \citet{michalski1983theory} noted that there are two different kinds of rules, discriminative and characteristic. \emph{Discriminative rules} can quickly discriminate an object of one category from objects of other categories. A simple example is the rule $$\texttt{IF trunk THEN elephant}$$ which states that an animal with a trunk is an elephant. This implication provides a simple but effective rule for recognizing elephants among all animals. \emph{Characteristic rules}, on the other hand, try to capture \emph{all} properties that are common to the objects of the target class. A rule for characterizing elephants could be $$\texttt{IF elephant THEN heavy, large, grey, bigEars, tusks, trunk.}$$ Note that here the implication sign is reversed: we list all properties that are implied by the target class, i.e., by an animal being an elephant. From the point of view of understandability, characteristic rules are often preferable to discriminative rules. For example, in a customer profiling application, we might prefer not only to list a few characteristics that discriminate one customer group from the other, but may be interested in all characteristics of each customer group. Characteristic rules are very much related to \emph{formal concept analysis} \citep{FCA,FCA-Foundations}. Informally, a concept is defined by its intent (the description of the concept, i.e., the conditions of its defining rule) and its extent (the instances that are covered by these conditions). A \emph{formal concept} is then a concept where the extension and the intension are Pareto-maximal, i.e., a concept where no conditions can be added without reducing the number of covered examples. In Michalski's terminology, a formal concept is both discriminative and characteristic, i.e., a rule where the head is equivalent to the body.
The confusion of the inverse thus seems to imply that humans will not clearly distinguish between these types of rules and, in particular, tend to interpret an implication as an equivalence. From this, we can infer that characteristic rules, which add all possible conditions even if they do not have additional discriminative power, may be preferable to short discriminative rules. This confusion may manifest itself most strongly in the area of association rule learning, where an attribute can be of interest to the analyst both in the antecedent and in the consequent of a rule. \paragraph{Debiasing techniques} \citet{edgell2004learned} studied the effect of training analysts in probability theory, with the conclusion that it is not effective in addressing the confusion of the inverse fallacy. \citet[p.~195]{werner2018eliciting} point at a concern regarding the use of language liable to misinterpretation in statistical textbooks teaching fundamental concepts such as independence. The authors illustrate the misinterpretation with the statement \emph{whenever Y has no effect on X}: ``This statement is used to explain that two variables, X and Y, are independent and their joint distribution is simply the product of their margins. However, for many experts, the term 'effect' might imply a causal relationship." From this it follows that representations of rules should strive for an unambiguous wording of the implication construct. The specific recommendations provided by \citet{diaz2010teaching} for teaching probability can also be considered in the next generation of textbooks aimed at the data science audience. \subsection{Context and Tradeoff Contrast Effects} People evaluate objects in relation to other available objects, which may lead to various effects of the context of presentation of a choice.
For example, in one of the experiments described by \citet{tversky1993context}, subjects were asked to choose between two microwave ovens (Panasonic priced at 180 USD and Emerson priced at 110 USD), both a third off the regular price. The number of subjects who chose Emerson was 57\%, and 43\% chose Panasonic. Another group of subjects was presented the same problem with the following manipulation: A more expensive Panasonic valued at 200 USD (10\% off the regular price) was added to the list of possible options. The newly added device was described as looking inferior to the other Panasonic, but not to the Emerson device. After this manipulation, only 13\% chose the more expensive Panasonic, but the number of subjects choosing the less expensive Panasonic rose from 43\% to 60\%. That is, even though the additional option was dominated by the cheaper Panasonic device and should therefore have been irrelevant to the relative preference of the other ovens, its addition changed the preference in favor of the better Panasonic device. The experiment thus shows that the selection of one of the available alternatives, such as products or job candidates, can be manipulated by the addition or deletion of alternatives that are otherwise irrelevant. \citet{tversky1993context} attribute the tradeoff contrast effect to the fact that ``people often do not have a global preference order and, as a result, they use the context to identify the most 'attractive' option.'' It should be noted that, according to \citet{tversky1993context}, if people have well-articulated preferences, the background context has no effect on the decision. \paragraph{Implications for rule learning} The effect can be illustrated at the inter-rule comparison level. In the base scenario, a constrained rule learning task yields only a rule $R_1$ with a confidence value of $0.7$. Due to the relatively low value of confidence, the user does not find the rule very plausible.
By lowering the minimum confidence threshold, multiple other rules predicting the same target class are discovered and shown to the user. These other rules, inferior to $R_1$, would increase the plausibility of $R_1$ through the tradeoff contrast effect. \paragraph{Debiasing techniques} Marketing professionals sometimes introduce more expensive versions of the main product, which induces the tradeoff contrast. The presence of a more expensive alternative with little added value increases sales of the main product \citep{simonson1992choice}. Somewhat similarly, a rule learning algorithm can output rules with very high confidence, sometimes even 1.0, but very low values of support. Removal of such rules can help to debias the analysts. The influence of context can in some cases improve communication \citep[p. 293]{simonson1992choice}. An attempt at making contextual attributes explicit in the rule learning context was made by \citet{SD-CHD}, who introduced \emph{supporting factors} as a means for complementing the explanation delivered by conventional learned rules. Essentially, supporting factors are additional attributes that are not part of the learned rule, but nevertheless have very different distributions with respect to the classes of the application domain. In line with the results of \citet{NB-Rules-Application}, medical experts found that these supporting factors increase the plausibility of the found rules. \subsection{Negativity Bias} According to the negativity bias, negative evidence tends to have a greater effect than neutral or positive evidence of equal intensity \citep{rozin2001negativity}. For example, the experiments by \citet{pratto2005automatic} investigated whether the valence of a word (desirable or undesirable trait) has an effect on the time required to identify the color in which the word appears on the screen. The results showed that the subjects took longer to name the color of an undesirable word than that of a desirable word.
The authors argued that the response time was higher for undesirable words because undesirable traits get more attention. Information with negative valence is given more attention partly because people seek diagnostic information, and negative information is more diagnostic \citep{skowronski1989negativity}. Some research suggests that negative information is better memorized and subsequently recognized \citep{robinson1996role,ohira1998effects}. \paragraph{Implications for rule learning} An interesting applicable discovery shows that negativity is an ``attention magnet'' \citep{fiske1980attention,ohira1998effects}. This implies that a rule predicting a class phrased with negative valence will get more attention than a rule predicting a class phrased with words with positive valence. \paragraph{Debiasing techniques} Putting a higher weight on negative information may in some situations be a valid heuristic. What needs to be addressed are cases when the relevant piece of information is positive and a less relevant piece of information is negative \citep{huber2010mindless,tversky1981framing}. It is therefore advisable that any such suspected cases are detected in the data preprocessing phase, and the corresponding attributes or values are replaced with more neutral-sounding alternatives. \subsection{Primacy Effect} Once people form an initial assessment of the plausibility (favorability) of an option, its subsequent evaluations will reflect this initial disposition. \citet{bond2007information} investigated to what extent changing the order of the information presented to a potential buyer affects the propensity to buy. For example, in one of the experiments, if the positive information (product description) was presented first, the number of participants indicating they would buy the product was 48\%. When the negative information (price) was presented first, this number decreased to 22\%.
\citet{bond2007information} argue that the effect is caused by a distortion of the interpretation of new information in the direction of the already held opinion. The information presented first not only disproportionately influences the final opinion, but it also influences the interpretation of novel information. \paragraph{Implications for rule learning} Following the primacy effect, the analyst will favor rules that are presented first in the rule model. The largest negative effects of this bias are likely to occur when no such ordering is observed, for example, when rules are presented in the order in which they were discovered by a breadth-first algorithm. In this case, \emph{mental contamination} is another applicable bias related to the primacy effect (or order effects in general). This refers to the case when a presented hypothesis can influence subsequent decision making by its content, even if the subject is fully aware of the fact that the presented information is purely speculative \citep{fitzsimons2001nonconscious}. Note that our application scenario differs from \citep{fitzsimons2001nonconscious} and some other related research, in that cognitive psychology mostly investigated the effect of \emph{asking a hypothetical question}, while we are concerned with considering the plausibility of a presented hypothesis (an inductively learnt rule). \citet{fitzsimons2001nonconscious} found that respondents are not able to prevent the contamination effects of the hypothetical questions and that the bias increases primarily when the hypothetical question is relevant. This bias is partly attributed to the application of expectations related to conversational maxims \citep{gigerenzer1999overcoming}. \paragraph{Debiasing techniques} Three types of debiasing techniques were examined by \citet{mumma1995procedural} in the context of clinical-like judgments.
The \emph{bias inoculation} intervention involves direct training on the applicable bias or biases, consisting of information on the bias, strategies for adjustment, as well as completing several practical assignments. The second technique was the \emph{consider-the-opposite} debiasing strategy, which sorts the information according to diagnosticity before it is reviewed. The third strategy evaluated was simply \emph{taking notes} when reviewing each cue before the final judgment was made. Interestingly, bias inoculation, a representative of direct debiasing techniques, was found to be the least effective. Consider-the-opposite and taking notes were found to work equally well. To this end, a possible debiasing strategy can be founded on presenting the most relevant rules first. Similarly, the conditions within the rules can be ordered by predictive power. Some rule learning algorithms, such as CBA \citep{Liu98integratingclassification}, readily take advantage of the primacy effect, since they naturally create rule models that contain rules sorted by their strength. Other algorithms order rules so that more general rules (i.e., rules that cover more examples) are presented first. This typically also corresponds to the order in which rules are learned with the commonly used separate-and-conquer or covering strategies \citep{furnkranz1999separate}. Simply reordering the rules output by these algorithms may not work in situations when the rules form a rule list that is automatically processed for prediction purposes.\footnote{One technique that can positively influence the comprehensibility of the rule list is prepending (adding to the beginning) a new rule to the previously learned rules \citep{Prepend}. The intuition behind this argument is that there are often simple rules that would cover many of the positive examples, but also cover a few negative examples that have to be excluded as exceptions to the rule.
Placing the simple general rule near the end of the rule list allows us to handle exceptions with rules that are placed before the general rule and to keep the general rule simple.} In order to take advantage of the note-taking debiasing strategy, the user interface can support the analyst in annotating individual rules. \citet{lau2009can} provide a reason for optimism concerning the debiasing effect stemming from the proposed changes to user interfaces of machine learning tools. Their paper showed a debiasing effect of similar changes implemented in the user interface of an information retrieval system used by consumers to find health information. Three versions of the system were compared: a baseline ``standard'' search interface; an \emph{anchor debiasing interface}, which asked the users to annotate the read documents as providing evidence for/against/neutral to the proposition in question; and an \emph{order debiasing interface}, which reordered the documents to neutralize the primacy bias by creating a ``counteracting order bias''. This was done by randomly reshuffling a part of the documents. When participants used the baseline and anchor debiasing interfaces, the order effect was present. On the other hand, the use of the order debiasing interface eliminated the order effect \citep{lau2009can}.

\subsection{Weak Evidence Effect}

According to the weak evidence effect, presenting weak evidence in favor of an outcome can actually decrease the probability that a person assigns to the outcome. For example, in an experiment in the area of forensic science reported by \citet{martire2013expression}, it was shown that participants presented with evidence weakly supporting guilt tended to ``invert'' the evidence, thereby counterintuitively reducing their belief in the guilt of the accused.
\citet{fernbach2011good} argue that the effect occurs because people give undue weight to the weak evidence and fail to take into account alternative evidence that more strongly favors the hypothesis at hand.

\paragraph{Implications for rule learning} The weak evidence effect can be directly applied to rules: the evidence is represented by the rule antecedent, and the consequent corresponds to the outcome. The analyst can intuitively interpret each of the conditions in the antecedent as a piece of evidence in favor of the outcome. Typical of many machine learning problems is the uneven contribution of individual attributes to the prediction. Let us assume that the analyst is aware of the predictive strength of the individual attributes. If the analyst is to choose between a rule containing only one strong condition (predictor) and another rule containing a strong predictor and a weak predictor (weak enough to trigger this effect), then according to the weak evidence effect the analyst will tend to prefer the shorter rule with one predictor.

\paragraph{Debiasing techniques} \citet{martire2014interpretation} performed an empirical study aimed at evaluating which mode of communicating the strength of evidence is most resilient to the weak evidence effect. The surveyed modes of expression were numerical, verbal, a table, and a visual scale. It should be noted that the study was performed in the specific field of assessing evidence by a juror in a trial, and the verbal expressions followed standards proposed by the Association of Forensic Science Providers \citep{willis2010standards}.\footnote{These provide guidelines on the translation of numerical likelihood ratios into verbal formats. For example, a likelihood of ``$>1-10$'' is translated as ``weak or limited'', and a likelihood of ``$1000-10,000$'' as ``strong''.} The results clearly suggested that numerical expressions of evidence are most suitable for expressing uncertainty.
Likelihood ratios studied by \citet{martire2014interpretation} are conceptually close to the lift metric used to characterize association rules. While lift is still typically presented as a number in machine learning user interfaces, there has been research on communicating rule learning results in natural language since at least 2005 \citep{strossa2005reporting}. With the recent resurgence of interest in interpretable models, the use of natural language has been taken up by commercial machine learning services, such as BigML, which allows predictions to be generated via spoken questions and answers using the Amazon Alexa voice service.\footnote{\url{https://bigml.com/tools/alexa-voice}} Similarly, machine learning interfaces increasingly rely on visualizations. The research on debiasing the weak evidence effect suggests that when conveying machine learning results using modern means, such as transformation to natural language or visualizations, care must be taken when numerical information is communicated. \citet{martire2014interpretation} also observe a high level of miscommunication associated with low-strength verbal expressions. In these instances, it is ``appropriate to question whether expert opinions in the form of verbal likelihood ratios should be offered at all'' \citep{martire2014interpretation}. Transposing this result to the machine learning context, we suggest considering the intentional omission of weak predictors from rules, either directly by the rule learner or as part of feature selection.

\subsection{Unit Bias}

The unit bias refers to the tendency to give each unit similar weight while ignoring or underweighing the size of the unit \citep{geier2006unit}. \citet{geier2006unit} offered people various food items in two different sizes on different days and observed how this affected consumption of the food. They found that people ate a larger amount of food when the size of a single unit of the food item was big than when it was small.
A possible explanation is that people ate one unit of food at a time without taking into account how big it was. Because the food was not consumed in large amounts at any single occasion, but was rather eaten intermittently, this behavior led to higher consumption when a unit of food was larger.

\paragraph{Implications for rule learning} Unit bias has so far been studied primarily for purposes quite different from machine learning. Nevertheless, as we will argue in the following, it can be very relevant for the domain of rule learning. From a technical perspective, the number of conditions in rules is not important; what matters is the actual discriminatory power of the individual conditions, which can vary substantially. However, under the unit bias, people can view conditions as units of similar importance, disregarding their sometimes vastly different discriminatory and predictive power.

\paragraph{Debiasing techniques} One common way for regulators to address unhealthy food consumption patterns related to varying package sizes is the introduction of mandatory labelling of size and calorie content. By analogy to clearly communicating the size of a food item, informing analysts about the discriminatory power of the individual conditions may alleviate unit bias. Such an indicator can be generated automatically, for example, by listing the number of instances in the entire dataset that meet the condition.

\section{Recommendations for Rule Learning Algorithms and Software}
\label{sec:recommendations}

This section provides a concise list of considerations aimed at raising awareness among machine learning practitioners of measures that could potentially suppress the effect of cognitive biases on the comprehension of rule-based models. We expect part of the list to be useful also for other symbolic machine learning models, such as decision trees.
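As a concrete (and purely hypothetical) illustration of the coverage indicator suggested above, the following sketch annotates each condition of a rule with the number of dataset instances satisfying it, displayed as a natural frequency. The dataset and attribute names are invented for illustration only.

```python
from typing import Dict, List, Tuple

# Toy dataset of categorical instances; attribute names are illustrative, not real data.
DATA: List[Dict[str, str]] = [
    {"bmi": "high", "smoker": "yes", "dbp": "high"},
    {"bmi": "high", "smoker": "no",  "dbp": "high"},
    {"bmi": "high", "smoker": "no",  "dbp": "normal"},
    {"bmi": "low",  "smoker": "no",  "dbp": "normal"},
    {"bmi": "low",  "smoker": "yes", "dbp": "normal"},
]

Condition = Tuple[str, str]  # an attribute-value pair

def coverage(conditions: List[Condition], data: List[Dict[str, str]]) -> int:
    """Number of instances satisfying every attribute-value pair."""
    return sum(all(row.get(a) == v for a, v in conditions) for row in data)

def annotate_rule(antecedent: List[Condition], consequent: Condition,
                  data: List[Dict[str, str]]) -> str:
    """Render a rule with per-condition coverage shown as natural frequencies."""
    n = len(data)
    parts = [f"{a}={v} [{coverage([(a, v)], data)} of {n}]" for a, v in antecedent]
    covered = coverage(antecedent, data)          # instances matching the antecedent
    correct = coverage(antecedent + [consequent], data)  # of those, matching the consequent
    return (" AND ".join(parts)
            + f" => {consequent[0]}={consequent[1]}"
            + f" ({correct} of {covered} covered instances)")

print(annotate_rule([("bmi", "high"), ("smoker", "yes")], ("dbp", "high"), DATA))
# -> bmi=high [3 of 5] AND smoker=yes [2 of 5] => dbp=high (1 of 1 covered instances)
```

Displaying ``3 of 5 instances'' rather than a bare probability follows the natural-frequency recommendation discussed in this section; the per-condition counts make the unequal discriminatory power of conditions visible at a glance.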
In our recommendations, we focus on systems that present the rule model to a human user, whom we refer to as the analyst. We consider two basic roles the analyst can have in the process: approval of the complete classification model (``interpretable classification task'') and selection of interesting rules (``nugget discovery'').

\subsection{Representation of a rule}

The interpretation of the natural language expressions used to describe a rule can lead to systematic distortions. Our review revealed the following recommendations applicable to individual rules:
\begin{enumerate}
\item \textbf{Syntactic elements.} There are several cognitive studies indicating that AND is often misunderstood \citep{hertwig2008conjunction}, \citep[pp.~95--96]{gigerenzer2001content}. The results of our experiments \citep{furnkranz2018cognitive} support the conclusion that AND needs to be presented unambiguously in the rule learning context. Research has shown that ``and'' ceases to be ambiguous when it is used to connect propositions rather than categories. Similarly, the communication of the implication construct IF--THEN connecting antecedent and consequent should be made unambiguous. Another important syntactic construct is negation (NOT). While the processing of negation has not been included among the surveyed biases, our review of the literature (cf.\ Section~\ref{ss:beyond}) suggests that its use should be discouraged on the grounds that its processing requires more cognitive effort, and because the fact that a specific piece of information was negated may not be remembered in the long term.
\item \textbf{Conditions.} The attribute-value pairs comprising conditions are typically formed either of words with semantics meaningful to the user, or of codes that are not directly meaningful. When conditions contain words with negative valence, these need to be reviewed carefully, since negative information is known to receive more attention and to be associated with higher weight than positive information.
A number of biases can be triggered or strengthened by a lack of understanding of the attributes and values appearing in rules. Providing easily accessible information on the conditions in the rules, including their predictive power, can thus prove to be an effective debiasing technique. People have the tendency to put higher emphasis on information they are exposed to first. By ordering the conditions by strength, machine learning software can conform to human conversational maxims. The output could also visually delimit conditions in the rules based on their significance or predictive strength.
\item \textbf{Interestingness measures.} The values of interestingness measures should be communicated using numerical expressions. Alternative verbal expressions, with wordings such as ``strong relationship'' replacing specific numerical values, are discouraged because there is some evidence that such verbal expressions are prone to miscommunication. Currently, rule interest measures are typically represented as probabilities (confidence) or ratios (lift), whereas results in cognitive science indicate that natural frequencies are better understood. The tendency of humans to ignore base rates and sample sizes (which closely relate to rule support) is a well-established fact in cognitive science. The results of our experiments on inductively learned rules also provide evidence for this conclusion \citep{furnkranz2018cognitive}. Our proposition is that this effect can be addressed by presenting confidence (reliability) intervals for the values of measures of interest, where applicable.
\end{enumerate}

\subsection{Rule models}

In many cases, rules are not presented to the analyst in isolation, but within a collection of rules comprising a rule model. Here, we relate the results of our review to the following aspects of rule models:
\begin{enumerate}
\setcounter{enumi}{3}
\item \textbf{Model size.}
An experiment by \citet{poursabzi2018manipulating} found that people are better able to simulate the results of a smaller regression model composed of two coefficients than of a larger model composed of eight coefficients. These results indicate that the removal of unnecessary variables could improve model interpretability, even though the experiment did not find a difference in trust in the model based on the number of coefficients it consisted of. Similarly to regression models, rule models often incorporate output that is only marginally relevant. This can take the form of (nearly) redundant rules or (nearly) redundant conditions within a rule. Our analysis shows that such redundancies can induce a number of biases, which may be accountable for misinterpretation of the model. The size of a rule model can be reduced by utilizing various pruning techniques, or by using learning algorithms that allow the user to set or influence the size of the resulting model. Examples of such approaches include those proposed by \citet{letham2015interpretable,lakkarajuinterpretable,wang2017bayesian}. The Interpretable Decision Sets algorithm \citep{lakkarajuinterpretable} can additionally optimize for diversity and non-overlap of the discovered rules, directly countering the reiteration effect. Another potentially effective approach to discarding some rules is to use domain knowledge or constraints set by the user to remove strong (e.g., highly confident), yet ``obvious'' rules confirming common knowledge.\footnote{For example, it is well known that diastolic blood pressure rises with body mass index (DBP$\uparrow\uparrow$BMI). Rules confirming this relationship might be removed \citep{Kliegr:2011:SSA:2070639.2070657}.} The removal of weak rules could help to address the tradeoff contrast as well as the weak evidence effect.
\item \textbf{Rule grouping.} The rule learning literature has seen multiple attempts to develop methods for grouping similar rules, often by clustering.
Our review suggests that presenting clusters of similar rules can help to reduce cognitive biases caused by reiteration. Algorithms that learn rule lists impose a mandatory ordering of rules, while the rule order in rule-set learning algorithms is not important. In either case, the rule order as presented to the user will affect the perception of the model due to conversational maxims and the primacy effect. It is recommended to sort the presented rules by strength. However, due to the paucity of applicable research, it is unclear which particular definition of rule strength would lead to the best results in terms of bias mitigation.
\end{enumerate}

\subsection{User Engagement}

Some results of our review suggest that increasing user interaction can help counter some biases. Some specific suggestions for machine learning user interfaces (UIs) follow:
\begin{enumerate}
\setcounter{enumi}{6}
\item \textbf{Domain knowledge.} Selectively presenting domain knowledge ``conflicting'' with the considered rule can help to invoke the ``consider-the-opposite'' debiasing strategy. Other research has shown that the plausibility of a model depends on its compliance with monotonicity constraints \citep{freitas2014comprehensible}. We thus suggest that UIs make background information on discovered rules easily accessible.
\item \textbf{Eliciting rule annotation.} Activating the deliberate ``System 2'' is one of the most widely applicable debiasing strategies. One way to achieve this is to require accountability, e.g., through visual interfaces motivating users to annotate selected rules, which would induce the ``note taking'' debiasing strategy. Giving people additional time to consider the problem has in some cases been shown to be an effective debiasing strategy. This can be achieved by making the selection process (at least) two-stage, allowing the user to revise the selected rules.
\item \textbf{User search for rules rather than scrolling.} Repeating rules can affect users via the mere exposure effect even if they are exposed to them only for a short moment, e.g., when scrolling through a rule list. User interfaces should thus deploy alternatives to scrolling through discovered rules, such as search facilities.
\end{enumerate}

\subsection{Bias inoculation}

In some studies, basic education about specific biases, such as brief tutorials, decreased the fallacy rate. This debiasing strategy has been called \emph{bias inoculation} in the literature.
\begin{enumerate}
\setcounter{enumi}{9}
\item \textbf{Education on specific biases.} Several studies have shown that providing explicit guidance and education on formal logic, hypothesis testing, and the critical assessment of information can reduce fallacy rates in some tasks. However, the effect of psychoeducational methods is still a subject of dispute \citep{lilienfeld2009giving}, and thus cannot be recommended as a sole or sufficient measure.
\end{enumerate}

\section{Limitations and Future Work}
\label{sec:limitations}

Our goal was to examine whether cognitive biases can affect the interpretation of machine learning models and to propose possible remedies if they do. Since this field is untapped from the machine learning perspective, we tried to approach the problem holistically. Our work yielded a number of partial contributions, rather than a single profound result. We mapped applicable cognitive biases, identified prior works on their suppression, and proposed how these could be transferred to machine learning. In the following, we outline some promising directions for future work.

\subsection{Validation through human-subject experiments}

All the identified shortcomings of human judgment pertaining to the interpretation of inductively learned rules are based on empirical cognitive science research. For each cognitive bias, we provided a justification of how it relates to machine learning.
Due to the absence of applicable prior research at the intersection of cognitive science and machine learning, this justification is mostly based on the authors' experience in machine learning. A critical next step is the empirical validation of the selected cognitive biases. We have already described several user experiments aimed at validating selected cognitive biases in \citet{furnkranz2018cognitive}. Some other machine learning researchers have reported human-subject experiments that do not explicitly refer to cognitive biases, yet the cognitive phenomena they investigate may correspond to a known cognitive bias. One example is a study by \citet{lage2018evaluation} (cf.\ also the extended version in \citep{narayanan2018humans}), which investigated the effect of the number of cognitive chunks (conditions) in a rule on response time. While the main outcome confirms the intuition that higher complexity results in higher response times, this study also revealed several unexpected patterns, such as that defining a new concept and reusing it leads to a higher response time than repeating the description whenever that concept implicitly appears, even though this repetition means that subjects have to read more lines. These findings could possibly be attributed to fluency in judgement, a cognitive phenomenon assumed to underlie multiple cognitive biases. Despite the existence of several early studies, a much more concentrated and systematic effort is needed to yield insights into the size of the effect individual biases can have on the understanding of machine learning models.

\subsection{Role of Domain Knowledge}

It has long been recognized that external knowledge plays an important role in the rule learning process. \citet{mitchell1980need} already recognized at least two distinct roles external knowledge can play in machine learning: it can constrain the search for appropriate generalizations, and it can guide learning based on the intended use of the learned generalizations.
Interaction with domain knowledge has played an important role in multiple stages of the machine learning process. For example, it can improve semi-supervised learning \citep{carlson2010toward}, and in some applications it is vital to convert discovered rules back into domain knowledge \citep[p.~288]{jf:Book-Nada}. Some results also confirm the common intuition that compliance with constraints valid in the given domain increases the plausibility of the learned models \citep{freitas2014comprehensible}. Our review shows that domain knowledge can be one of the important instruments in the toolbox for debiasing the interpretation of discovered rules. To give a specific example, the presence or strength of the validity effect depends on the familiarity of the subject with the topic area from which the information originates \citep{boehm1994validity}. Future work should focus on a systematic review of the role of domain knowledge in the activation or inhibition of cognitive phenomena applicable to the interpretability of rule learning results.

\subsection{Individual Differences}

The presence of multiple cognitive biases and their strength have been linked to specific personality traits. For example, overconfidence and the rate of the conjunction fallacy have been shown to be inversely related to numeracy \citep{winman2014role}. According to \citet{juslin2011reducing}, the application of the averaging heuristic rather than the normative multiplication of probabilities seems to depend on working memory capacity and/or high motivation. Some research can even be interpreted as indicating that data analysts may be more susceptible to the myside bias than the general population.
An experiment reported by \citet{wolfe2008locus} shows that subjects who defined good arguments as those that can be ``proved by facts'' (a stance that, we assume, would also apply to many data analysts) were more prone to exhibiting the myside bias.\footnote{This tendency is explained by \citet{wolfe2008locus} as follows: ``For people with this belief, facts and support are treated uncritically. \ldots More importantly, arguments and information that may support another side are not part of the schema and are also ignored.''} \citet{stanovich2013myside} show that the incidence of the myside bias is, surprisingly, not related to general intelligence. This suggests that even highly intelligent analysts can be affected. \citet{albarracin2004role} propose that susceptibility to the confirmation bias can depend on one's personality traits. They also present a diagnostic tool called the ``defense confidence scale'' that can identify individuals who are prone to confirmational strategies. Further research into the personality traits of users of machine learning outputs, as well as into the development of appropriate personality tests, would help to better target education focused on debiasing.

\subsection{Incorporating Additional Biases}

About 24 cognitive biases are covered in \emph{Cognitive Illusions}, the authoritative overview of cognitive biases by \citet{pohl2017cognitive}, and as many as 51 different biases are covered by \citet{evans2007hypothetical}. In our initial selection of cognitive biases to study, we tried to identify those matching our criteria that are most relevant for machine learning research. In the end, our review focused on a selection of 20 cognitive biases (effects, illusions). Future work might focus on expanding the review with additional relevant biases, such as the labelling and overshadowing effects \citep{pohl2017cognitive}.
\subsection{Extending Scope Beyond Biases}
\label{ss:beyond}

There are a number of cognitive phenomena affecting the interpretability of rules that are not classified as cognitive biases. Remarkably, since 1960 there has been a consistent line of work by psychologists studying cognitive processes related to rule induction, centred around the so-called \emph{Wason's 2-4-6 problem} \citep{wason1960failure}. Cognitive science research on rule induction in humans has so far gone unnoticed in the rule learning subfield of machine learning.\footnote{Based on our analysis of a cited-reference search in Google Scholar for \citep{wason1960failure}.} It was outside the scope of this review to conduct an analysis of the significance of these results for rule learning; nevertheless, we believe that such an investigation could bring interesting insights for the cognitively-inspired design of rule learning algorithms.

Another promising direction for further work is research focused on the interpretation of negation (``not''). Experiments conducted by \citet{jiang2014affective} show that the mental processes involved in processing negation slow down reasoning. Negation can also sometimes be ignored or forgotten \citep{deutsch2009fast}, as it decreases the likelihood that the information is correctly remembered in the long term. Most rule learning algorithms are capable of generating rules containing negated literals. For example, a healthy company can be represented as \texttt{status = not(bankrupt)}. Our precautionary suggestion, based on the interpretation of results obtained in general studies performed in experimental psychology \citep{deutsch2009fast} and neurolinguistics \citep{jiang2014affective}, is that artificial learning systems should refrain, wherever feasible, from the use of negation in the discovered rules that are to be presented to the user.
Due to the adverse implications of the use of negation for cognitive load and remembrance, empirical research focused on the interpretability of negation in machine learning is urgently needed.

\section{Conclusion}

To our knowledge, cognitive biases have not yet been discussed in relation to the interpretability of machine learning results. We thus initiated this review of research published in cognitive science with the intent of providing a psychological basis for further research on inductive rule learning algorithms and on the way their results are communicated. Our review covered twenty cognitive biases, heuristics, and effects that can give rise to systematic errors when inductively learned rules are interpreted. For most of the biases and heuristics included in our review, psychologists have proposed ``debiasing'' measures. The application of prior empirical results obtained in cognitive science allowed us to propose several methods that could be effective in suppressing these cognitive phenomena when machine learning models are interpreted. Overall, in our review we processed only a fraction of the potentially relevant psychological studies of cognitive biases, yet we were unable to locate a single such study focused on machine learning. Future research should thus focus on the empirical evaluation of the effects of cognitive biases in the machine learning domain.

\section*{Acknowledgments}

TK was supported by long-term institutional support for research activities. \v{S}B and TK were supported by grant IGA 33/2018 of the Faculty of Informatics and Statistics, University of Economics, Prague. An initial version of this review was published as part of TK's PhD thesis at Queen Mary University of London.

\section*{References}
\section{Introduction}

After the discovery of the Higgs boson at the LHC \cite{atlas, cms}, couplings of the Higgs boson to certain other Standard Model (SM) particles have been measured, and the best fit is very close to the SM expectation~\cite{higgs}. However, the Higgs boson self-coupling, a key parameter for testing the structure of the Higgs potential and electroweak symmetry breaking, has not yet been measured. At the LHC, Higgs boson pair production is known to be the primary process that one can use to determine this coupling~\cite{Djouadi:1999rca,baglio, ggf_hh, hh-sm, bbaa_hh, 4b_hh}. Nonetheless, it is expected to be a challenging measurement due to the low production cross section predicted in the SM, $\sigma(p p \rightarrow h h)_{\rm SM} \sim 40~ \rm{fb}$ at the 14-TeV LHC~\cite{LHCXS, deFlorian:2016spz, Borowka:2016ehy, plehn}. In the SM, the tree-level Higgs trilinear and quartic self-couplings are given as
\begin{eqnarray}
g_{hhh}^{\rm SM} = \frac{3m_h^2}{v} ~,~~ g_{hhhh}^{\rm SM} = \frac{3m_h^2}{ v^2} ~,
\label{g_SM}
\end{eqnarray}
where $m_h$ is the Higgs boson mass, and are related by a factor of the vacuum expectation value (VEV) $v = 246~ \rm{ GeV}$. Physics beyond the SM (BSM) can easily affect the Higgs pair production cross section at the LHC through either a modification of the top Yukawa coupling and/or new colored particles running in the triangle and box loops (non-resonance effects), or through the existence of new heavy scalars decaying into Higgs pairs (resonance effect). The enhancement in the production cross section can reach a few orders of magnitude in some cases~\cite{hh-bey, hh-susy, Godunov:2014waa,chen_low, hh-mi, hh-jc}.
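For orientation, a back-of-the-envelope evaluation (ours, not quoted from any reference) of Eq.~(\ref{g_SM}) with $m_h \simeq 125$~GeV and $v = 246$~GeV gives
\begin{eqnarray*}
g_{hhh}^{\rm SM} = \frac{3\,(125~{\rm GeV})^2}{246~{\rm GeV}} \simeq 191~{\rm GeV} ~, \qquad
g_{hhhh}^{\rm SM} = \frac{3\,(125~{\rm GeV})^2}{(246~{\rm GeV})^2} \simeq 0.77 ~,
\end{eqnarray*}
so the dimensionless quartic coupling is $\mathcal{O}(1)$, while the trilinear coupling carries one power of the electroweak scale.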
Currently, the ATLAS and CMS Collaborations have imposed upper limits on the production cross section ($\gamma\gamma bb$) and on the production cross section times branching ratios ($4b$, $\gamma\gamma W W^*$ and $\tau\tau bb$) for various categories of signal final states in Higgs pair searches at the 13-TeV LHC~\cite{atlas_8, cms_8, atlas_13, atlas_13_2, atlas_13_3, cms_13, cms_13_2}: $3.9 ~\rm{pb}$, $330~\rm{ fb}$, $25~\rm{ pb}$ and $508~\rm{ fb}$ for the $\gamma\gamma bb$, $4b$, $\gamma\gamma W W^*$ and $\tau\tau bb$ channels, respectively.

The Georgi-Machacek (GM) model, proposed in the mid-1980s~\cite{gm1, gm2}, provides a good way to generate Majorana masses for neutrinos through the type-II seesaw mechanism while preserving the custodial symmetry at tree level. In addition to the SM-like Higgs boson $h$, the extended Higgs sector has three more neutral scalars, among which two are CP-even ($H^0_1$ and $H^0_5$) while the other is CP-odd ($H^0_3$), where the subscripts denote their representations under the custodial $SU(2)$ symmetry. One distinctive feature of this model is that the couplings between $h$ and the SM weak gauge bosons, $g_{hVV}$, can be larger than their SM values. The phenomenology of this and similar models, including their supersymmetric and dark matter extensions, at both hadron and lepton colliders has been extensively studied~\cite{Gunion:1989ci,Gunion:1990dt,Haber:1999zh,Aoki:2007ah,Godfrey:2010qb,Logan:2010en,Falkowski:2012vh,Chiang:2012cn,Englert:2013zpa,Englert:2013wga,Chiang:2013rua,Hartling:2014zca,Chiang:2014hia,Chiang:2014bia,Godunov:2014waa,Hartling:2014aga,Chiang:2015kka,Godunov:2015lea,Chang:2003zn,Cort:2013foa,Garcia-Pepin:2014yfa,Chiang:2015rva,Chiang:2015amq,Campbell:2016zbp}. With the GM scalars also in the Higgs potential, the SM-like Higgs trilinear coupling and its couplings to the SM fermions are modified, with the possibility of enhancing the non-resonant Higgs boson pair production cross section.
Furthermore, $H^0_1$ can also mediate Higgs boson pair production, and the $gg\to H^{0}_{1}\to hh$ channel virtually dominates at the LHC when $H^{0}_{1}$ can be produced on shell. Constraints on the GM model have already been studied from the unitarity of scalar field scattering amplitudes, the tree-level stability of the Higgs potential, and Higgs boson precision measurements~\cite{Hartling:2014zca,Chiang:2012cn, Chiang:2015amq, Chiang:2015kka}. The most stringent constraint allows only a small window for the interaction between the Higgs boson and the weak gauge bosons, $\kappa_V \equiv g_{hWW}/g_{hWW}^{\rm SM} = 0.94^{+0.11}_{-0.12}$~\cite{higgcision}. Ref.~\cite{Chiang:2015amq} studied the constraints on the $\alpha$-$v_\Delta$ plane using a $\chi^2$ fit to the data of Higgs boson production at LHC Run-I, including both gluon-gluon fusion (GGF) and vector boson fusion processes with the tree-dominated $b\bar{b},\ \tau^+\tau^-, ZZ$ and $WW$ decay channels. Within the $2\sigma$ contour, the mixing angle $\alpha$ and the VEV of the Higgs triplet fields $v_\Delta$ are found to roughly fall within the ranges $-50^\circ\lesssim \alpha \lesssim 40^\circ$ and $0 \le v_\Delta \lesssim 50$~GeV, as shown explicitly in Fig.~1 of Ref.~\cite{Chiang:2015amq}.

In this work, we focus on 125-GeV Higgs boson pair production via the non-resonant $p p \rightarrow h h$ channel and the resonant $p p \rightarrow H^{0}_{1} \rightarrow h h$ channel in the GM model. The rest of this paper is organized as follows. In Section~\ref{sec:model}, we review the GM model and show the relevant couplings. The pair production of Higgs bosons in the model is discussed in Section~\ref{sec:hhprod}. Section~\ref{sec:result} shows our numerical results and the direct search constraints from the 13-TeV LHC. Finally, we give a summary of our work in Section~\ref{sec:con}.
\section{Georgi-Machacek model} \label{sec:model} In the GM model, two $SU(2)_L$ triplet scalar fields, $\chi$ with hypercharge $Y=1$ and $\xi$ with $Y=0$, are introduced to the Higgs sector in addition to the $SU(2)_L$ doublet $\Phi$ with $Y=1/2$ already in the SM. In this paper, we use the convention that $Q=T_3+Y$ with $Q$ and $T_3$ being the electric charge and the third component of the weak isospin, respectively. Writing in an $SU(2)_L\times SU(2)_R$ covariant form, we have \begin{eqnarray} \Phi = \begin{pmatrix}\phi^{0*} & \phi^+ \\ -(\phi^+)^* & \phi^0\end{pmatrix},\ \Delta = \begin{pmatrix} \chi^{0*}&\xi^+&\chi^{++} \\ -(\chi^+)^* & \xi^0 & \chi^+ \\ (\chi^{++})^* & -(\xi^+)^* & \chi^0\end{pmatrix}, \end{eqnarray} where we use the following phase convention for the scalar field components: $\phi^- = (\phi^+)^*,\chi^{--} = (\chi^{++})^*, \chi^- = (\chi^+)^*, \xi^- = (\xi^+)^*$. As in the SM, due to the instability of the Higgs potential, the neutral component of $\Phi$ spontaneously develops a VEV to break the electroweak symmetry and to induce VEVs for the neutral components of $\Delta$. We can parameterise these neutral fields as \begin{eqnarray} \phi^0={1\over\sqrt{2} } (v_\phi+\phi_r+i\phi_i) ~,~ \chi^0 = v_\chi + {1\over\sqrt{2} }(\chi_r + i\chi_i) ~,~ \xi^0 = v_\xi + \xi_r ~, \end{eqnarray} where $v_\phi$, $v_\chi$ and $v_\xi$ denote the VEVs of $\phi$, $\chi$ and $\xi$, respectively. In the case of vacuum alignment $v_\chi = v_\xi \equiv v_\Delta$, we have $v^2\equiv v^2_\phi + 8 v^2_\Delta = (246~\mbox{GeV})^2$, and define $\tan \beta \equiv v_\phi / (2\sqrt{2} v_\Delta)$. 
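As a quick numerical check of the vacuum-alignment relations above (our own minimal sketch, not code from the paper), one can solve for $v_\phi$ and $\tan\beta$ given $v_\Delta$:

```python
import math

V_EW = 246.0  # electroweak VEV in GeV, with v^2 = v_phi^2 + 8 v_Delta^2

def vev_split(v_delta):
    """Return (v_phi, tan_beta) for a given triplet VEV v_Delta in the
    vacuum-aligned case v_chi = v_xi = v_Delta."""
    v_phi = math.sqrt(V_EW**2 - 8.0 * v_delta**2)
    tan_beta = v_phi / (2.0 * math.sqrt(2.0) * v_delta)
    return v_phi, tan_beta

# e.g. v_Delta = 30 GeV still leaves most of the VEV in the doublet:
v_phi, tan_beta = vev_split(30.0)
```

For $v_\Delta = 30$~GeV this gives $v_\phi \approx 231$~GeV and $\tan\beta \approx 2.7$.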
More explicitly, the Higgs potential in the GM model is given by \begin{eqnarray} V(\Phi,\Delta) = && {1\over 2} m^2_1 {\rm tr} [\Phi^\dagger \Phi]+{1\over 2} m^2_2 {\rm tr} [\Delta^\dagger \Delta] + \lambda_1 ({\rm tr} [\Phi^\dagger \Phi])^2+\lambda_2 (\rm{tr} [\Delta^\dagger \Delta])^2 \nonumber \\ && + \lambda_3 {\rm tr} [(\Delta^\dagger \Delta)^2]+\lambda_4 \rm{tr} [\Phi^\dagger \Phi]\rm{tr} [\Delta^\dagger \Delta] + \lambda_5 {\rm tr} \left[ \Phi^\dagger {\sigma^a\over 2}\Phi {\sigma^b\over 2} \right] \rm{tr} \left[ \Delta^\dagger T^a\Delta T^b \right] \nonumber\\ && + \mu_1 {\rm tr} \left[ \Phi^\dagger {\sigma^a\over 2}\Phi {\sigma^b\over 2} \right] (P^\dagger \Delta P)_{ab} + \mu_2 {\rm tr} \left[ \Delta^\dagger T^a\Delta T^b \right] (P^\dagger \Delta P)_{ab} ~, \end{eqnarray} where $\sigma$'s and $T$'s are the $2\times 2$ and $3\times 3$ matrix representations of the $SU(2)$ generators, and \begin{align} P = \frac{1}{\sqrt2} \begin{pmatrix} -1 & i & 0 \\ 0 & 0 & \sqrt2 \\ 1 & i & 0 \end{pmatrix} ~. \end{align} After the $SU(2)_L\times SU(2)_R$ symmetry is broken down to the diagonal $SU(2)_L$, the scalar fields in the GM model can be classified into different representations under the custodial symmetry transformation: $\Phi$ is decomposed into a $\bf 3$-plet and a singlet and $\Delta$ into a $\bf 5$-plet, a $\bf 3$-plet and a singlet. Among the neutral fields, we have two CP-even singlets $H_\Phi^1=\phi_r$ and $H_\Delta^1=\sqrt{1/ 3}\xi_r + \sqrt{2/3} \chi_r$ that mix through a mixing angle $\alpha$ to render two physical Higgs bosons: \begin{eqnarray} h = \cos\alpha H_\Phi^1-\sin\alpha H_\Delta^1,\ \ \ H_1^0 = \sin\alpha H_\Phi^1 + \cos\alpha H_\Delta^1 ~, \end{eqnarray} and one CP-even $H^0_5$ given by \begin{eqnarray} H^0_5 = \sqrt{1\over 3} \chi_r - \sqrt{2\over 3} \xi_r ~. \end{eqnarray} Here, we take $h$ to be the SM-like Higgs boson of mass $125$~GeV. 
The two CP-odd $\bf 3$-plet fields mix via a mixing angle $\beta$ to produce a physical $H^0_3 = -\cos\beta \phi_i + \sin\beta \chi_i$ and a Goldstone boson that becomes the longitudinal component of the $Z$ boson. Because of the custodial symmetry, the different charged states within each representation are almost degenerate in mass, subject to small mass splitting $\sim {\cal O}(100)$~MeV due to electromagnetic corrections. In the following, we will ignore such small mass differences and denote the Higgs masses by $m_{H_5}$, $m_{H_3}$, $m_{H_1}$, and $m_h$ for the physical $\bf 5$-plet, $\bf 3$-plet, heavy singlet, and SM-like Higgs boson. The five dimensionless scalar couplings $\lambda_1-\lambda_5$ in the GM model can be expressed in terms of the physical Higgs masses and the mixing angles $\alpha$ and $\beta$ as \begin{eqnarray} && \lambda_1 = {1\over 8v^2s^2_\beta} \left( m^2_hc^2_\alpha + m^2_{H_1^0}s^2_\alpha \right), \nonumber \\ && \lambda_2 = {1\over 6v^2c^2_\beta} \left[ 2m^2_{H_1^0}c^2_\alpha+2m^2_h s^2_\alpha+3M_2^2-2m^2_{H^0_5}+6s^2_\beta(m^2_{H^0_3}-M^2_1) \right], \nonumber\\ && \lambda_3 = {1\over v^2 c^2_\beta} \left[ s^2_\beta \left( 2M^2_1-3m^2_{H^0_3} \right) + m^2_{H^0_5}-M^2_2 \right], \nonumber \\ && \lambda_4 = {1\over 6v^2c_\beta s_\beta} \left[ \sqrt{6} s_{\alpha} c_\alpha \left( m^2_h-m^2_{H_1^0} \right) + 3c_\beta s_\beta \left( 2m^2_{H^0_3}-M^2_1 \right) \right], \nonumber \\ && \lambda_5={2\over v^2} \left( M^2_1-m^2_{H^0_3} \right), \end{eqnarray} where $c_\theta$ and $s_\theta$ are abbreviations for $\cos\theta$ and $\sin\theta$ for $\theta = \alpha, \beta$, respectively, and $M_1$ and $M_2$ are defined as \begin{eqnarray} M_1^2 = -\frac{v}{\sqrt{2} c_\beta}\mu_1 ~,~ \ \ M_2^2 =-3\sqrt{2} c_\beta v \mu_2 ~. 
\end{eqnarray} The Higgs boson trilinear self-coupling in the model is therefore modified approximately as \begin{eqnarray} g_{hhh}\simeq \left\{ 1-{\mu_1^2v^2\over m_2^4} \left[ {7\over 8} -{3\over 2} {v^2\over m^2_h} \left( (2\lambda_4+\lambda_5)+{\mu_1\mu_2\over m^2_2} \right) \right] \right\} g_{hhh}^{\rm SM} ~, \end{eqnarray} where $g_{hhh}^{\rm SM}$ denotes the SM Higgs trilinear coupling shown in Eq.~(\ref{g_SM}). On the other hand, the coupling between one $H^0_1$ and two $h$ is \begin{eqnarray} g_{H^0_1hh}=&& 24\lambda_1 c^2_\alpha s_\alpha v_\phi + 2 \left[ \sqrt{3} c_\alpha v_\Delta (3c^2_\alpha -2) + s_\alpha v_\phi (1-3c_\alpha^2) \right] (2\lambda_4+\lambda_5) \nonumber \\ && + 8\sqrt{3} c_\alpha s_\alpha^2 v_\Delta (\lambda_3+3\lambda_2) + {\sqrt{3}\over 2} \mu_1 c_\alpha (3c_\alpha^2 -2) + 4\sqrt{3} \mu_2 c_\alpha s^2_\alpha ~. \nonumber \end{eqnarray} Couplings of neutral Higgs bosons to fermions and gauge bosons relevant to this analysis are expressed in terms of the corresponding SM values as: \begin{align} \begin{split} & g_{hf\bar{f}} = {c_\alpha \over s_\beta} g_{hf\bar{f}}^{\rm SM} ~, \qquad g_{hVV} = \left(s_\beta c_\alpha - \sqrt{8\over 3} c_\beta s_\alpha \right)g_{hVV}^{\rm SM} ~, \\ & g_{H_1^0f\bar{f}} = {s_\alpha \over s_\beta} g_{hf\bar{f}}^{\rm SM} ~, \qquad g_{H_1^0VV} = \left( s_\beta s_\alpha + \sqrt{8\over 3} c_\beta c_\alpha \right)g_{hVV}^{\rm SM} ~. \end{split} \end{align} \section{Higgs boson pair production} \label{sec:hhprod} As shown in Fig.~\ref{FR}, SM-like Higgs boson pair production in the GM model at the LHC receives contributions from both the non-resonant process (plot (a)), mainly through top and bottom quark loops, and the resonant process through the heavy $H_1^0$ decay (plot (b)). \begin{figure}[t] \centering \includegraphics[width=3in]{GM_hh_box.pdf} \includegraphics[width=3in]{GM_hh_tri.pdf} \\ \vspace{-15mm} (a) \hspace{75mm} (b) \caption{\label{FR} Feynman diagrams for Higgs boson pair production in the GM model.
} \end{figure} The differential cross section for the process $g(p_1) g(p_2) \rightarrow h(p_3) h(p_4)$ is given by~\cite{plehn} \begin{eqnarray} \frac{d\hat\sigma(gg\to hh)}{d\hat{t}} =& \displaystyle \frac{G_F^2 \alpha_s^2}{512(2\pi)^3} \left[ \left| \lambda_{hhh}\kappa_{F_h} D(\hat{s}) F_\triangle + \lambda_{H^0_1hh}\kappa_{F_{H^0_1}} \bar{D}(\hat{s}) F_\triangle + \kappa^2_{F_h}F_\Box \right|^2 + \left| \kappa^2_{F_h}G_\Box \right|^2 \right] ~, \nonumber \\ & \mbox{with}~ \displaystyle D(\hat{s})= \frac{3m_h^2}{\hat{s}-m_h^2+im_h\Gamma_h} ~,~ \bar{D}(\hat{s})= \frac{3m_h^2}{\hat{s}-m_{H^0_1}^2+im_{H^0_1}\Gamma_{H^0_1}} ~, \end{eqnarray} where $\kappa_{F_{h}}=g_{hf\bar{f}}/g^{SM}_{hf\bar{f}}$, $\kappa_{F_{H^0_1}}=g_{H^0_1f\bar{f}}/g^{SM}_{hf\bar{f}}$, $\lambda_{hhh} = g_{h hh}/g^{SM}_{hhh}$, $\lambda_{H^0_1hh} = g_{H^0_1 hh}/g^{SM}_{hhh}$ and $\hat{s}=(p_1+p_2)^2$, $\hat{t}=(p_1-p_3)^2$, and $\hat{u}=(p_2-p_3)^2$ with $p_1+p_2=p_3+p_4$. The loop functions $F_\triangle$, $F_\Box$, and $G_\Box$ are given in Appendix A.1 of Ref.~\cite{plehn}. More explicitly, \begin{eqnarray} \frac{d\hat\sigma(gg\to hh)}{d\hat{t}} \propto&& \lambda_{hhh}^2|D(\hat{s})|^2[|F_\triangle|^2\kappa^2_{F_h}]+\lambda_{H^0_1hh}^2|\bar{D}(\hat{s})|^2[|F_\triangle|^2\kappa^2_{F_{H^0_1}}]\nonumber \\ &&+2\lambda_{hhh}\lambda_{H^0_1hh}\kappa_{F_h}\kappa_{F_{H^0_1}}\Re {\rm e}(D(\hat{s})\bar{D}^{*}(\hat{s}))|F_\triangle|^2\nonumber \\ &&+2[\lambda_{hhh}\kappa_{F_h}^3 \Re {\rm e}(D(\hat{s}) F_\triangle F^*_\Box)+\lambda_{H^0_1hh}\kappa_{F_{H^0_1}} \kappa_{F_h}^2 \Re {\rm e}(\bar{D}(\hat{s}) F_\triangle F^{*}_\Box)]\nonumber\\ &&+ [ |F_\Box|^2+|G_\Box|^2 ] \kappa_{F_h}^4 ~. \label{eq:explicit} \end{eqnarray} In the following, we will focus on the scenario where $m_{H_1^0} > 2m_h$ and a pair of SM-like Higgs bosons can be produced via the production and decay of $H_1^0$. In this case, we divide the total cross section into resonant and nonresonant contributions.
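The resonance structure of this cross section is driven by the propagator factors $D(\hat s)$ and $\bar D(\hat s)$. A minimal Python sketch (ours; the mass and width below are purely illustrative numbers):

```python
def prop_factor(s_hat, m, gamma, m_h=125.0):
    """Breit-Wigner propagator factor 3 m_h^2 / (s_hat - m^2 + i m Gamma)
    entering the triangle-diagram amplitudes (complex-valued)."""
    return 3.0 * m_h**2 / complex(s_hat - m**2, m * gamma)

# Exactly on an H_1^0 resonance the factor is purely imaginary and large,
# which is what makes the on-shell s-channel contribution dominate:
m_H1, gamma_H1 = 400.0, 8.0  # illustrative mass and width in GeV
on_peak = prop_factor(m_H1**2, m_H1, gamma_H1)
```

On the peak, $|\bar D| = 3 m_h^2/(m_{H_1^0}\Gamma_{H_1^0})$, i.e. a large enhancement for a narrow resonance.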
For the resonant production of the Higgs boson pair, we employ the narrow width approximation and calculate the production cross section of $H_1^0$, $\sigma(g g \rightarrow H_1^0)$, times its decay branching ratio to two Higgs bosons, $BR(H^0_1\rightarrow h h)$. Consider the dominant $H^{0}_{1}$ production by GGF at the LHC~\footnote{Here and in the following, we tacitly consider only the dominant GGF production mechanism. The vector boson fusion production mechanism is generally smaller by one order of magnitude~\cite{baglio, loop}. This also makes our later production rate estimates more conservative.}. Since the production of $H_1^0$ takes the same form as the SM Higgs boson production, the production cross section can be obtained by rescaling the result for the SM Higgs boson with the modified Yukawa couplings and different masses. We then have the resonant production of Higgs boson pairs as \begin{align}\label{xsec_r} \sigma(p p \rightarrow H^0_1 \rightarrow h h ) = \sigma(g g \rightarrow h)_{m_h \to m_{H_1^0}}\times \kappa_{F_{H_1^0}}^2\times BR(H^{0}_{1} \rightarrow h h) ~. \end{align} In view of the scaling of couplings in different parts of Eq.~(\ref{eq:explicit}), the nonresonant production cross section of a pair of Higgs bosons can be parameterized as \begin{align}\label{xsec_nr} \sigma(gg \to hh) =& \sigma_{\rm SM}(gg\to hh) \Big[ \lambda_{hhh}^2\kappa^2_{F_h}c_1(s) + \lambda_{hhh}\kappa_{F_h}^3c_2(s) + \kappa_{F_h}^4c_3(s) \nonumber \\ & \qquad + \lambda_{hhh}\lambda_{H^0_1hh}\kappa_{F_h}\kappa_{F_{H^0_1}}c_4(s) + \lambda_{H^0_1hh}\kappa_{F_{H^0_1}} \kappa_{F_h}^2\bar{c}_2(s) \Big] ~, \end{align} where we have removed the $H_1^0$ resonant production channel from the above expression to avoid double counting with Eq.~(\ref{xsec_r}). The coefficients are $c_1 = 0.263$, $c_2 = -1.310$, $c_3 = 2.047$, and $c_4 = -0.001$ for $\sqrt{s} = 13$~TeV. We also adopt the good approximation that $\bar{c}_2 = c_2$ when the production is off the resonance.
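The nonresonant parameterization above is straightforward to evaluate. The sketch below (our own illustration) encodes the quoted 13-TeV coefficients and reproduces the SM limit $\sigma/\sigma_{\rm SM}=1$, as well as the enhancement from flipping the sign of $\lambda_{hhh}$ alone:

```python
# 13-TeV coefficients quoted in the text; c2bar = c2 off resonance
C1, C2, C3, C4 = 0.263, -1.310, 2.047, -0.001

def sigma_over_sm(lam_hhh, lam_H1hh, kF_h, kF_H1):
    """sigma(gg->hh)/sigma_SM(gg->hh) from the nonresonant parameterization."""
    c2bar = C2
    return (lam_hhh**2 * kF_h**2 * C1
            + lam_hhh * kF_h**3 * C2
            + kF_h**4 * C3
            + lam_hhh * lam_H1hh * kF_h * kF_H1 * C4
            + lam_H1hh * kF_H1 * kF_h**2 * c2bar)

sm_limit = sigma_over_sm(1.0, 0.0, 1.0, 0.0)  # c1 + c2 + c3 = 1 in the SM limit
flipped = sigma_over_sm(-1.0, 0.0, 1.0, 0.0)  # sign-flipped lambda_hhh alone
```

Note that flipping only the sign of $\lambda_{hhh}$ already gives $\sigma/\sigma_{\rm SM} = c_1 - c_2 + c_3 \approx 3.6$, the constructive box-triangle interference discussed below.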
Our estimates of the resonant production cross section to be given in the next section are scaled from the GGF single Higgs boson production cross section calculated at NNLO+NNLL QCD+NLO EW~\cite{deFlorian:2016spz}. The SM Higgs boson pair production appearing in Eq.~(\ref{xsec_nr}) is calculated at NLO~\cite{Borowka:2016ehy}. In this work, we use GMCALC~\cite{GMCALC} to calculate the Higgs mass spectrum, couplings and branching ratios in the GM model. Both theoretical and experimental constraints are taken into account, including tree-level unitarity, stability of the Higgs potential, a check of the electroweak vacuum, and data on $b\rightarrow s \gamma$ and $B^0_s\rightarrow \mu^+ \mu^-$ decays. We have scanned 140,000 points in the parameter space of $-90^\circ < \alpha <90^\circ$, $0<v_\Delta <60~{\rm GeV}$ and $m_{H^0_1} \lesssim 1000~{\rm GeV}$. We find that in a restricted region in the $\alpha$-$v_\Delta$ plane, $m_{H^0_1}$ can be as heavy as 1~TeV, while most of the remaining parameter space allows a maximum of around $700$~GeV. It is a general feature that as $H^0_1$ becomes heavier, the range of $BR(H^0_1\rightarrow h h)$ becomes narrower and closer to 1, meaning that a heavy $H^0_1$ preferentially decays to a pair of SM-like Higgs bosons. \begin{figure}[t] \centering \includegraphics[width=3in]{vc_a_l3h_4h.png} ~~ \includegraphics[width=3in]{vc_a_lH1hh_4h.png} \\ \vspace{-3mm} (a) \hspace{7.5cm} (b) \\ \includegraphics[width=3in]{vc_a_kfh.png} ~~ \includegraphics[width=3in]{vc_a_kfH1.png} \\ \vspace{-3mm} (c) \hspace{7.5cm} (d) \caption{\small \label{GM_ps1} Couplings of $h$ and $H_1^0$ in the $\alpha$-$v_\Delta$ plane, with $m_{H_1^0} > 125$~GeV. Plots (a) and (b) show respectively $\lambda_{hhh}$ and $\lambda_{H_1^0hh}$ with maximally allowed absolute value. Plots (c) and (d) give respectively $\kappa_{F_{h}}$ and $\kappa_{F_{H_1^0}}$. }\end{figure} Fig.~\ref{GM_ps1} shows the couplings of $h$ and $H_1^0$.
Since each point in the $\alpha$-$v_\Delta$ plane allows certain ranges of $\lambda_{hhh}$ and $\lambda_{H_1^0hh}$, we show in plots (a) and (b) only those with the maximal absolute values. As shown in the plots, $\lambda_{hhh}$ varies roughly in the range of $-20$ to $20$, $\lambda_{H^0_1hh}$ varies roughly between $-12$ and 6, $\kappa_{F_{h}}\lesssim 1.2$, and $|\kappa_{F_{H_1^0}}| \lesssim 1$. In the plots of $\lambda_{hhh}$ and $\lambda_{H^0_1hh}$, one can clearly see a region (roughly from the origin to $\alpha \sim -40^\circ$ and $v_\Delta \sim 50$~GeV) in which both couplings attain large absolute values. In particular, when $\lambda_{hhh}$ is negative (or $\lambda_{H^0_1hh}$ is positive), constructive interference between the box and triangle Feynman diagrams in Fig.~\ref{FR} would occur for that coupling and, in addition to the resonance effect, result in a larger Higgs boson pair production rate. If $H^0_1$ is lighter than twice the SM-like Higgs boson mass, $m_{H_1^0} \lesssim 2m_h$, or the decay branching ratio of $H^0_1$ into two $h$'s is small, $BR(H^0_1\rightarrow h h)\sim 0$, the non-resonant production cross section, given by Eq.~(\ref{xsec_nr}), becomes more important and can be either enhanced or reduced in comparison with the SM prediction. \begin{figure}[t] \centering \includegraphics[width=3in]{vc_a_xsec.png} ~~ \includegraphics[width=3in]{vc_a_xsec_mH.png} \caption{\small \label{Hhh_BR} Maximum production cross section (left) and the corresponding $m_{H^0_1}$ (right) in the $\alpha$-$v_\Delta$ plane, assuming $BR(H^{0}_{1} \rightarrow h h) > 0$ and $m_{H^{0}_{1}} > 250$~GeV.} \end{figure} In Fig.~\ref{Hhh_BR}, we show the maximum resonant production cross section $\sigma(p p \rightarrow H^0_1 \rightarrow h h )$ (left plot) and the corresponding $m_{H^0_1}$ (right plot) in the $\alpha$-$v_\Delta$ plane.
Here we have further imposed the condition that $m_{H_1^0} > 2 m_h$ so that the $H_1^0 \to hh$ decay is kinematically allowed, resulting in fewer points in the parameter space than in Fig.~\ref{GM_ps1}. More scattered points accumulate in the region of $\alpha < 0$, and the maximum cross section can reach about 6~pb within the red contour (for $\alpha \sim - 30^\circ$ and $v_\Delta \sim 30$~GeV). \section{Numerical results and direct search constraints} \label{sec:result} In this section, we select eight benchmark points on the $(\alpha,v_\Delta)$ parameter plane, chosen within the $2 \sigma$ bound from the Higgs data given in Ref.~\cite{Chiang:2015amq}: $(10,30)$, $(-10,50)$, $(-10,20)$, $(-30,20)$, $(-40,30)$, $(-45,20)$, $(-28,33)$ and the close-to-decoupling limit $(-1,1)$. Here and afterwards, $\alpha$ and $v_\Delta$ are in units of degree and GeV, respectively. The coupling scale factors and ranges of $m_{H^0_1}$ and $BR(H^0_1\rightarrow h h)$ for these benchmark points are listed in Table~\ref{benchmark point_set}. Most benchmark points are located outside the heavy $m_{H_1^0}$ region, and $m_{H^0_1}\lesssim 500~{\rm GeV}$. Only benchmark points C and G predict that $m_{H_1^0}$ can be as heavy as $\sim 1~{\rm TeV}$. Note that the couplings of $H_1^0$ to quarks, $\kappa_{F_{H^0_1}}$, are larger in magnitude for benchmark points D, E, F and G. Combined with the sizeable decay branching ratio of $H_1^0 \to h h$, the resonant production of SM-like Higgs boson pairs can be significant. In the close-to-decoupling limit, $(\alpha,v_\Delta)=(-1, 1)$, the pair production of $h$ becomes virtually the same as the SM prediction.
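The coupling scale factors in Table~\ref{benchmark point_set} follow directly from the formulas of Section~\ref{sec:model}. As a cross-check (our own sketch, not the authors' code), the values for benchmark point A can be reproduced from $(\alpha, v_\Delta)$ alone:

```python
import math

V_EW = 246.0  # electroweak VEV in GeV

def scale_factors(alpha_deg, v_delta):
    """Scale factors kappa_F and kappa_V of h and H_1^0 in terms of
    (alpha, v_Delta), using s_b = v_phi/v and c_b = 2*sqrt(2)*v_Delta/v."""
    a = math.radians(alpha_deg)
    ca, sa = math.cos(a), math.sin(a)
    sb = math.sqrt(V_EW**2 - 8.0 * v_delta**2) / V_EW
    cb = 2.0 * math.sqrt(2.0) * v_delta / V_EW
    r = math.sqrt(8.0 / 3.0)
    return {"kF_h": ca / sb, "kV_h": sb * ca - r * cb * sa,
            "kF_H1": sa / sb, "kV_H1": sb * sa + r * cb * ca}

bp_A = scale_factors(10.0, 30.0)  # benchmark point A: (alpha, v_Delta) = (10, 30)
```

This reproduces the Table~\ref{benchmark point_set} entries for benchmark point A: $\kappa_{F_h} \approx 1.049$, $\kappa_{V_h} \approx 0.827$, $\kappa_{F_{H^0_1}} \approx 0.185$ and $\kappa_{V_{H^0_1}} \approx 0.718$.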
{\squeezetable \begin{table}[t] \begin{tabular}{c| c c c c c c c c} \hline\hline benchmark point & A & B & C & D & E & F & G & H \\ \hline $(\alpha,v_\Delta)$ & $(10,30)$ & $(-10,50)$ & $(-10,20)$ & $(-30,20)$ & $(-40,30)$ & $(-45,20)$ & $(-28,33)$ & $(-1,1)$ \\ \hline $\kappa_{F_h}$ & 1.049 & 1.204 & 1.012 & 0.889 & 0.816 & 0.727 & 0.954 & 0.999 \\ $\kappa_{F_{H^0_1}}$ & 0.185 & $-0.212$ & $-0.178$ & $-0.514$ & $-0.685$ & $-0.727$ & $-0.507$ & $-0.018$ \\ $\kappa_{V_h}$ & 0.827 & 0.969 & 1.024 & 1.031 & 1.081 & 0.954 & 1.108 & 1.00 \\ $\kappa_{V_{H^0_1}}$ & 0.718 & 0.782 & 0.201 & $-0.161$ & $-0.172$ & $-0.423$ & 0.113 & $1.32\times 10^{-3}$ \\ $m_{H^0_1}$ & 250--301 & 250--455 & 250--954 & 250--315 & 250--402 & 250--273 & 250--1373 & 250--492 \\ $BR(H^0_1\rightarrow h h)$ & 0.004--0.16 & 0.0014--0.133 & 0.009--0.186 & 0.244--0.954 & $2\times10^{-4}$--0.96 & $2\times10^{-5}$--0.5 & $7\times10^{-3}$--0.81 & 0.6--0.99 \\ \hline\hline \end{tabular} \caption{\label{benchmark point_set} Coupling scale factors, the range of $m_{H^0_1} ~(\gtrsim 2m_h)$ and the range of $BR(H^0_1\rightarrow h h)$ for 8 benchmark points. We have scanned 3000 points for each benchmark point set, where $\alpha$ is in units of degree and $v_\Delta$ and $m_{H^0_1}$ are in units of GeV.} \end{table}} In addition to the couplings that are fixed by the chosen values of $(\alpha,v_\Delta)$ shown in Table~\ref{benchmark point_set}, the scalar self-couplings are also crucial for the production of $hh$ pairs. We show in Fig.~\ref{hhh_mH} the scatter plots of $\lambda_{h hh}$ (left plot) and $\lambda_{H^{0}_{1} hh}$ (right plot) for each benchmark point. The trilinear self-coupling of $h$ can significantly deviate from the SM value, and even flip its sign in benchmark points D, E, F and G, resulting in a wide range of possible values. 
For the coupling of $H_1^0$ to two light Higgs bosons $h$, benchmark points A, D, E, F, and G predict values with an opposite sign to the SM Higgs self-coupling, with the latter four having particularly wide ranges. Only benchmark points B and C predict a positive sign and $\sim {\cal O}(1)$ for the coupling. \begin{figure}[t] \centering \includegraphics[width=3in]{lhhh_mH1.png} \includegraphics[width=3in]{lH1hh_mH1.png} \caption{\small \label{hhh_mH}Scatter plots of scalar couplings $\lambda_{h hh}$ (left) and $\lambda_{H^{0}_{1} hh}$ (right) as functions of $m_{H_1^0}$. }\end{figure} Before presenting our simulations, let us summarize the current situation of the search for Higgs boson pairs at the LHC. Here we only focus on the $bb\gamma\gamma$ and $4b$ final states since these two channels impose stronger constraints and are complementary when a resonance $H_1^0$ exists. The $bb\gamma\gamma$ channel serves as a good search channel in the lower mass regime as it has a cleaner signature, particularly for the non-resonant Higgs boson pair production in the SM. In the case of resonant production via a heavy resonance ($M_X \gtrsim 500$~GeV), its efficiency becomes lower than that of the $4b$ channel. This is because the photon pair coming from the more boosted Higgs boson decay will be very collinear. Experimentally, separating the two photons in this case significantly lowers the efficiency. At ATLAS, the search for a light $H_1^0$ with mass $275~{\rm GeV} \leq m_{H^0_1}\leq 400~{\rm GeV}$ is constrained by the $bb\gamma\gamma$ channel~\cite{atlas_13,bbaa_hh}. The efficiencies for signal events to pass the selection criteria are about $5$--$8\%$, depending on the mass of $H_1^0$. It is shown that the distribution of invariant mass of the $h$ pair, $M_{hh}$, in the SM peaks around $400$ GeV at the LHC~\cite{hh-sm}, and the peak position does not shift much as the collision energy varies from 8~TeV to 100~TeV.
Therefore, a light resonance can contribute to the $h$ pair production rate through both interference effects and on-shell production. The $4b$ search channel used by the ATLAS Collaboration~\cite{atlas_13_2,4b_hh}, on the other hand, gives a cross section upper limit for a heavy scalar resonance in the mass range of $500 \ {\rm GeV} \leq m_{H^0_1}\leq 1000\ {\rm GeV}$ using the resolved analysis, and $1000 \ {\rm GeV} \leq m_{H^0_1}\leq 3000\ {\rm GeV}$ using the boosted analysis. The event selection efficiencies in the resolved analysis, where different cuts are applied for different masses of the heavy resonance, are given by {\[ \begin{tabular}{c | c c c c c c} Mass (GeV) & 500 & 600 & 700 & 800 & 900 & 1000\\ \hline Efficiency~\cite{atlas_13_2} &0.95$\%$&1.91$\%$&2.55$\%$&2.86$\%$&3.14$\%$&3.45$\%$\\ \end{tabular}\]} Here the calculation of efficiency assumes a 100\% branching ratio for the heavy scalar resonance to a pair of SM-like Higgs bosons and a fixed total decay width of 1~GeV. In our simulations, events of Higgs boson pair production are generated with the loop-induced mode in {\tt Madgraph5 aMC@NLO}~\cite{Alwall:2014hca} with $m_h=125$~GeV. The model file is adopted from the model database of {\tt FeynRules}~\cite{GM_MG5:NLO, FR}. The decays of the Higgs boson into $b\bar{b}$ and $\gamma\gamma$ are performed with {\tt MadSpin}~\cite{spin}. The events are then passed to {\tt Pythia8}~\cite{Sjostrand:2007gs} for parton showering and hadronization, and the fast detector simulation in {\tt Delphes3} (ATLAS settings)~\cite{delphes3} is used to include the detector effects.
\begin{figure}[t] \includegraphics[width=3in]{m4030_M.png} \includegraphics[width=3in]{m4030_dRa.png} \\ (a) \hspace{75mm} (b) \\ \includegraphics[width=3in]{m4030_dRb.png} \\ (c) \caption{\small \label{kin_m4030} Kinematic distributions of the $bb\gamma\gamma$ channel for (a) the invariant mass $M_{\gamma\gamma b b}$, (b) the opening angle $\Delta R_{\gamma\gamma}$ and (c) the opening angle $\Delta R_{b b}$ for benchmark point E with different $m_{H_1^0}$ in comparison with the SM expectations at the 13-TeV LHC. } \end{figure} In the case of light $H_1^0$ in the mass range $250~{\rm GeV} \le m_{H^0_1} \le 500~{\rm GeV}$, we follow the cuts used in the ATLAS $bb\gamma\gamma$ channel analysis~\cite{atlas_13}: \begin{eqnarray}\label{ac_a} &&N_\gamma \ge 2,\ N_b = 2 ~,\ P_T(j)> 25~{\rm GeV} ~,\ P_T(b)^{\rm lead, subl}>55,~ 35~{\rm GeV} ~,\nonumber \\ && 105~{\rm GeV} < M_{\gamma \gamma} < 160~{\rm GeV},\ 95~{\rm GeV} < M_{b b} < 135~{\rm GeV} ~. \end{eqnarray} Here and in the following, $N_p$ refers to the number of particles $p$, $P_T(h)$ is the transverse momentum of particle or system $h$, the superscripts ``lead'' and ``subl'' denote respectively the leading and subleading jets, and $M_{xx}$ ($x = b,\gamma$) is the invariant mass of the system. The kinematic distributions in the invariant mass $M_{\gamma\gamma b b}$ and the opening angles $\Delta R$ of the two photons and of the two $b$ jets are shown in Fig.~\ref{kin_m4030}, where we illustrate with different masses of $H_1^0$ in benchmark point E. Unlike the broad invariant mass distributions peaked around 400~GeV in the SM, a clear resonance at the mass of $H_1^0$ can be readily identified in plot~(a). The opening angle of the Higgs decay products $\Delta R \approx 2m_h/P_{T}(h)$, where $P_{T}(h)$ denotes the transverse momentum of the decaying $h$.
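The estimate $\Delta R \approx 2 m_h / P_T(h)$ gives a quick feel for the numbers (a back-of-the-envelope sketch of ours, assuming each $h$ from an on-shell $H_1^0 \to hh$ decay carries momentum $\sqrt{m_{H_1^0}^2/4 - m_h^2}$):

```python
import math

M_H = 125.0  # SM-like Higgs boson mass in GeV

def delta_r_estimate(m_H1):
    """Rough opening angle of the h decay products, dR ~ 2 m_h / pT(h),
    for an h coming from a two-body H_1^0 -> hh decay."""
    p_h = math.sqrt(m_H1**2 / 4.0 - M_H**2)
    return 2.0 * M_H / p_h

dr_light = delta_r_estimate(400.0)  # lighter resonance -> wider opening angle
dr_heavy = delta_r_estimate(900.0)  # heavier resonance -> more collimated decay
```

A 400-GeV resonance gives $\Delta R \approx 1.6$, while a 900-GeV one gives $\Delta R \approx 0.6$, matching the trend seen in the $\Delta R$ distributions.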
Since Higgs boson pair production via a lighter resonance generally yields less boosted $h$'s, the opening angle of the Higgs decay products tends to be wider in this case, as seen in both plots~(b) and (c) of Fig.~\ref{kin_m4030}. It is also noted that the SM curve has smaller $\Delta R$ in these two plots because the SM Higgs pair production mainly comes from non-resonant production ({\it i.e.}, the box diagram), which produces more Higgs bosons with larger $p_T$. \begin{figure}[t] \centering \includegraphics[width=3in]{M_4b.png} ~~ \includegraphics[width=3in]{dR_4b.png} \caption{\small \label{kin_4b} Distributions in the invariant mass $M_{bbbb}$ (left) and in $\Delta R$ between the second and third most energetic $b$ jets (right) for benchmark point G with different $m_{H_1^0}$ in the $4b$ channel.} \end{figure} In the case of heavy $H_1^0$ with mass larger than $500$~GeV, the ATLAS $4b$ search using the resolved analysis is employed. We take benchmark point G as an example to show the distribution in the invariant mass $M_{bbbb}$ and that in $\Delta R$ of the second and third most energetic $b$ jets. The curves in the plots are the results after imposing the preselection cuts used by ATLAS for the $4b$ channel analysis: \begin{eqnarray}\label{ac_b} && N_b \ge 4 ~,\ |\eta(j)| < 2.5 ~,\ P_T(b)> 40~{\rm GeV} ~,\nonumber \\ && \Delta R(jj) < 1.5,\ P_T(j j)^{\rm lead, subl}>200, 150~{\rm GeV} ~. \end{eqnarray} We observe that as $m_{H_1^0}$ becomes heavier, the peak in the distribution of $M_{bbbb}$ becomes broader as its total width increases. The $\Delta R$ distribution also moves to smaller values, as expected.
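For concreteness, the preselection above can be phrased as a simple event filter. This is a schematic sketch with a hypothetical event dictionary (jets already paired into Higgs-candidate dijets), not the actual ATLAS analysis code:

```python
def passes_4b_preselection(event):
    """Schematic 4b-channel preselection: >= 4 b-tagged jets with
    |eta| < 2.5 and pT > 40 GeV; both Higgs-candidate dijets with
    dR(jj) < 1.5; leading (subleading) dijet pT above 200 (150) GeV."""
    bjets = [j for j in event["bjets"]
             if abs(j["eta"]) < 2.5 and j["pt"] > 40.0]
    if len(bjets) < 4:
        return False
    lead, subl = event["dijets"][0], event["dijets"][1]
    if lead["dR"] >= 1.5 or subl["dR"] >= 1.5:
        return False
    return lead["pt"] > 200.0 and subl["pt"] > 150.0
```

The field names (`bjets`, `dijets`, `pt`, `eta`, `dR`) are our own illustrative choices; in practice such cuts would be applied on reconstructed objects inside {\tt MadAnalysis5} or a similar framework.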
In order to make a comparison with experimental constraints measured by the ATLAS Collaboration, we further follow their analysis to impose the additional mass-dependent cuts in our numerical simulations: \begin{align}\label{ac} P_T^{\rm lead}(jj) > & \begin{cases} 400~{\rm GeV} \qquad\qquad\qquad~ \mbox{if } M_{4j} > 910~{\rm GeV} ~,\\ 200~{\rm GeV} \qquad\qquad\qquad~ \mbox{if } M_{4j} < 600~{\rm GeV} ~,\\ 0.65M_{4j}-190~{\rm GeV} \qquad \mbox{otherwise} ~;\\ \end{cases} \nonumber \\ P_T^{\rm subl}(jj) > & \begin{cases} 260~{\rm GeV} \qquad\qquad\qquad \mbox{if } M_{4j} > 990~{\rm GeV} ~,\\ 150~{\rm GeV} \qquad\qquad\qquad \mbox{if } M_{4j} < 520~{\rm GeV} ~,\\ 0.23M_{4j}+30~{\rm GeV} \qquad \mbox{otherwise} ~;\\ \end{cases} \nonumber \\ |\Delta\eta(jj)| < & \begin{cases} 1.0 \qquad\qquad\qquad\qquad\quad \mbox{ if } M_{4j} < 820~{\rm GeV} ~,\\ 1.6\times 10^{-3} M_{4j}-0.28 \qquad \mbox{otherwise} ~. \end{cases} \end{align} \begin{table}[t] \begin{tabular}{c | c c c c | c c c c c c | c} \hline\hline Benchmark point & \multicolumn{4}{c|}{E} & \multicolumn{6}{c|}{G} & SM \\ \hline $(\alpha, v_\Delta)$& \multicolumn{4}{c|}{$(-40^\circ,\ 30~{\rm GeV})$} & \multicolumn{6}{c|}{$(-28^\circ,\ 33~{\rm GeV})$} & \\ $m_{H^0_1}~{\rm (GeV)}$& 250 & 300 & 350 & 400 & 500 & 600 & 700 & 800 & 900 & 1000 & \\ $\Gamma_{H_1^0}$ (GeV)&0.68&5.37&10.62&8.05&6.75&9.04&18.91&27.83&34.67&51.00 \\ $BR(H^0_1\rightarrow h h)$&0.82&0.954&0.955&0.76&0.57&0.45&0.62&0.66&0.65&0.71 & \\ $\sigma(pp \to hh)_{13-{\rm TeV}}$ (pb)&3.62&3.28&3.32&2.68&0.56&0.25&0.18&0.11&0.11&0.078 \\ Efficiency &5.6$\%$&6.4$\%$&7.2$\%$&8.8$\%$&2.57$\%$&4.15$\%$&3.65$\%$&2.45$\%$&0.86$\%$&0.97$\%$ &9.2$\%$ \\ \hline\hline \end{tabular} \caption{\label{benchmark point_e} Mass of $H_1^0$, its total decay width, its decay branching ratio and production rate to a pair of SM-like Higgs bosons, and the selection efficiency for benchmark point E in the $\gamma\gamma b b$ channel, benchmark point G in the $4b$ channel, and SM 
in the $b b \gamma\gamma$ channel at the 13-TeV LHC.} \end{table} The efficiencies for different masses of $H_1^0$ and the decay branching ratio to $hh$ for benchmark points E and G are listed in Table~\ref{benchmark point_e}. Here we choose the other parameters to maximize the resonant Higgs pair production rate via GGF (and thus the branching ratio of $H_1^0 \to hh$), whose value is also given in the table. The efficiency for the $bb\gamma\gamma$ channel in the SM is also given for comparison. The efficiency in our cases depends on the mass of $H_1^0$, its production rate, and its branching ratio to a pair of SM-like Higgs bosons. For the $bb\gamma\gamma$ channel in the lower mass regime, the experimental cuts are designed to be optimal for the non-resonant production that is peaked around 400~GeV. Therefore, we find that the efficiency in benchmark point E decreases as $m_{H_1^0}$ becomes smaller. For the $4b$ channel in the higher mass regime, on the other hand, the cuts are designed for resonant production and will cut away non-resonant events if $m_{H_1^0}$ is sufficiently large. \begin{figure}[t] \centering \includegraphics[width=3in]{xsec_400.png} ~~ \includegraphics[width=3in]{xsec_1000.png} \caption{\label{xsec}Estimated resonant cross section $\sigma(p p \rightarrow H^{0}_{1}\rightarrow h h)=\sigma(p p \rightarrow H_1^0) \times \kappa_{F_{H_1^0}}^2 \times BR(H^0_1 \rightarrow h h)$ versus $m_{H_1^0}$ for each benchmark point set at the 13-TeV LHC, with the luminosities of $3.2$~fb$^{-1}$ (red solid curves), $30$~fb$^{-1}$ (red dashed curves) and $100$~fb$^{-1}$ (red dotted curves). The left plot is for the $\gamma\gamma b b$ channel in the lower mass regime, and the right plot is for the $4b$ channel in the higher mass regime. Also shown are scaled constraints of the 8-TeV data (blue solid curves) with the luminosities of $20$~fb$^{-1}$ (left plot) and $19.5$~fb$^{-1}$ (right plot).}
\end{figure} Fig.~\ref{xsec} plots our estimates of Higgs pair production cross sections for the eight benchmark points, including both resonant and non-resonant contributions [from Eq.~(\ref{xsec_r}) and Eq.~(\ref{xsec_nr})]. For each benchmark point set, we have scanned 3000 points\footnote{Note that if we sample more points, the cross section ranges may only go slightly wider.}. Most of the parameter space in benchmark points D, E, F, and G predicts larger cross sections at the level of a few picobarns, in comparison with the other benchmark points. This is because the Higgs boson trilinear coupling $g_{hhh}$ in these four benchmark points can go negative, resulting in a constructive interference between the box and triangle Feynman diagrams in Fig.~\ref{FR}. It is noted that at the same time in these benchmark points, $g_{H_1^0 hh}$ is also negative, resulting in a destructive interference that cancels part of the aforementioned constructive interference. The left plot shows scattered points for all the benchmark points in the mass range of $250$~GeV $\le m_{H_1^0} \le 500$~GeV. The right plot shows scattered points for benchmark points C and G in the mass range of $500$~GeV $\le m_{H_1^0} \le 1$~TeV as only they allow larger $m_{H_1^0}$ among the benchmark points considered here. We also show the current constraints (red solid curves) on the searches for $H_1^0$ from the $\gamma\gamma b b$ channel~\cite{atlas_13} and the $4b$ channel~\cite{atlas_13_2} obtained by the ATLAS Collaboration using the $3.2~{\rm fb^{-1}}$ dataset at the 13-TeV LHC. For comparison, we also show the constraints (blue curves) of the corresponding searches from LHC Run-I \cite{atlas_8} after taking into account the acceptances and rescaling of the parton luminosity. It is seen that benchmark point E is close to the constraint of the $\gamma\gamma b b$ channel.
The parameter space of $500~{\rm GeV}\lesssim m_{H_1^0}\lesssim 650~{\rm GeV}$ for benchmark point G is already excluded by the $4b$ channel search. We also estimate the projected exclusion limits (red dashed curves for an integrated luminosity of $30$~fb$^{-1}$ and red dotted curves for $100$~fb$^{-1}$) when more data are collected. With $30$~fb$^{-1}$, the LHC has sensitivity to most of the parameter space with the $H_1^0$ mass heavier than twice the Higgs boson mass for benchmark points D, E, F and G. The parameter space of heavier $H_1^0$ with mass larger than $500~{\rm GeV}$ for benchmark point C can be probed as well. We note that the ATLAS $\gamma\gamma b b$ and $4b$ constraints are rescaled with the efficiencies for benchmark points E and G, respectively (see Table~\ref{benchmark point_e}). Different benchmark points would have slightly different efficiencies. Among the eight scenarios considered here, benchmark points E and G predict the largest cross sections in the lower and higher mass regimes, respectively, and benchmark points C and G allow wider mass ranges for $H^{0}_{1}$. The pink scattered points for benchmark point H have production rates approaching the SM prediction. \section{Conclusion} \label{sec:con} In this paper, we have studied in the Georgi-Machacek (GM) model the SM-like Higgs boson pair production through the gluon-gluon fusion (GGF) process at the 13-TeV LHC. We find that under various theoretical and experimental constraints, the Higgs boson couplings (self and with other SM particles) can have some deviations from the SM values. In particular, the model and current data even allow an interesting possibility that the Higgs boson self-coupling $g_{hhh}$ can flip its sign from the SM value.
In addition, the existence of the heavier Higgs singlet $H_1^0$ in the model gives an additional contribution to the di-Higgs production cross section through its mixing with the SM-like Higgs boson. The mass of $H_1^0$ can in some cases be as large as $1$~TeV, especially in some parameter regions with a negative mixing angle $\alpha$. When $H_1^0$ is sufficiently heavy to decay into a pair of SM-like Higgs bosons, the production rate can be significantly enhanced, particularly when the Higgs trilinear coupling $g_{hhh}$ becomes negative, as constructive interference then occurs. We also note that at the same time the other Higgs trilinear coupling $g_{H_1^0hh}$ is negative as well, resulting in a smaller, partially compensating destructive interference. For illustration, we select eight benchmark points and perform a detailed numerical study. The Higgs boson pair production rate is estimated and compared with the current and projected search bounds given by the ATLAS Collaboration. A couple of the scenarios considered here can be probed or ruled out by the LHC experiments in the near future. \section*{Acknowledgments} The authors are grateful to J.~Baglio for pointing out useful references. This work was supported in part by the Ministry of Science and Technology of Taiwan under Grant Nos.~MOST-105-2112-M-003-010-MY3 (CRC) and MOST-104-2628-M-002-014-MY4 (CWC).
\section*{Acknowledgements} This work was supported in part by the key projects of the Chinese Academy of Sciences and by the National Science Foundation of China (NSFC) under Grants No.~10475105 and No.~10491306. This research was also supported in part by the National Science Foundation under Grant No.~PHY99-07949. B.H.'s work was also supported by the NSFC under Grants No.~10505011 and No.~10663001, by the Jiangxi Provincial Department of Education under Science and Technology Research Project Grant No.~2006-17, and by the Program for Innovative Research Team of Nanchang University.
\section{Introduction} In this paper, we study the following inverse spectral problem: \begin{eqnarray}\label{1.1} \left\{% \begin{array}{ll} \Delta u+k^2u=0, & \hbox{ in }D ,\,k^2\in\mathbb{R}^+;\vspace{3pt}\\\vspace{3pt} \frac{\partial u}{\partial \nu}=0,& \hbox{ on }\partial D;\\ u=1,&\hbox{ on }\partial D, \end{array}% \right. \end{eqnarray} where $\nu$ is the unit outer normal; $D$ is a fixed starlike domain in $\mathbb{R}^3$ containing the origin with Lipschitz boundary $\partial D$. We interpret the model as plane waves perturbed by the boundary condition specified by $D$, satisfying the Helmholtz equation outside $D$. Let $u$ be a non-trivial eigenfunction with some $k^2\in\mathbb{R}^+$. We want to show that $D$ is actually a ball centered at the origin. \par Here we prove the result as a special case of the interior transmission problem \cite{Aktosun,Cakoni2,Chen,Chen2,Chen3,Chen5,Colton4,Colton,Colton3,Colton2, Colton5,Kirsch86,Kirsch,L,La,Liu,Mc,Rynne}. In interior transmission problems, we look for a frequency so that a perturbed stationary wave behaves, at least in some region, like a spherical Bessel function outside the perturbation. In Schiffer's conjecture, we ask if there is a frequency so that a perturbed wave can retain its initial shape while traveling to infinity at constant speed. We refer to \cite{A,Liu} and the references therein for the connections of the interior transmission problem to other questions in mathematical science. \par To give a point of view from scattering theory on~(\ref{1.1}), we take the incident wave field to be the time-harmonic acoustic plane wave of the form $$u^i(x):=e^{ikx\cdot d},$$ $k\in\mathbb{R}^+$, $x\in\mathbb{R}^3$, and $d\in\mathbb{S}^2$ is the incident direction. The inhomogeneity is defined by the index of refraction $n\in\mathcal{C}^2(\mathbb{R}^3)$ of~(\ref{1.1}), and the wave propagation is governed by the following equation.
\begin{eqnarray}\label{122} \left\{% \begin{array}{ll} \Delta u(x)+k^2n(x)u(x)=0,\,x\in\mathbb{R}^3;\vspace{4pt}\\\vspace{3pt} u(x)=u^i(x)+u^s(x),\,x\in\mathbb{R}^3\setminus D; \\ \lim_{|x|\rightarrow\infty}|x|\{\frac{\partial u^s(x)}{\partial |x|}-iku^s(x)\}=0, \end{array}% \right. \end{eqnarray} in which the third equation is the Sommerfeld radiation condition. In particular, we have the following asymptotic expansion of the scattered wave field \cite{Colton2, Isakov}. \begin{equation}\label{U} u^s(x)=\frac{e^{ik|x|}}{|x|}u_\infty(\hat{x};d,k)+O(\frac{1}{|x|^{\frac{3}{2}}}),\,|x|\rightarrow\infty, \end{equation} which holds uniformly for all $\hat{x}:=\frac{x}{|x|}$, $x\in\mathbb{R}^3$, and $u_\infty(\hat{x};d,k)$ is known as the scattering amplitude or far-field pattern in the literature \cite{Colton2,Kirsch86}. It has an expansion in spherical harmonics \cite[p.\,35,\,Theorem 2.15]{Colton2} \begin{equation} u_\infty(\hat{x};d,k)=\frac{1}{k}\sum_{n=0}^\infty\frac{1}{i^{n+1}}\sum_{m=-n}^na_n^mY_n^m(\hat{x}), \end{equation} where we follow the notation in the reference. \par Let us start with Rellich's representation in scattering theory. We expand the possible solution $u$ of~(\ref{1.1}) in a series of spherical harmonics near infinity by Rellich's lemma \cite[p.\,32, p.\,227]{Colton2}: \begin{eqnarray}\label{1.2} u(x;k)=\sum_{l=0}^{\infty}\sum_{m=-l}^{m=l}a_{l,m}(r)Y_l^m(\hat{x}), \end{eqnarray} where $r:=|x|$, $r\geq R_0$ with a sufficiently large $R_0$; $\hat{x}=(\theta,\varphi)\in\mathbb{S}^2$. The summations converge uniformly and absolutely on suitable compact subsets away from $D$.
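Since the expansion~(\ref{1.2}) rests on the orthonormality of the spherical harmonics in $\mathcal{L}^2(\mathbb{S}^2)$, a quick numerical sanity check may be helpful. The sketch below assumes SciPy's \texttt{sph\_harm} convention (order $m$ first, azimuthal angle before polar angle) and verifies $\int_{\mathbb{S}^2}|Y_l^m|^2\,d\Omega=1$ by quadrature:

```python
import numpy as np
from scipy.special import sph_harm

# Check \int_{S^2} |Y_l^m|^2 dOmega = 1 for a sample (l, m).
l, m = 3, 2
ntheta, nphi = 400, 401
theta = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)  # azimuthal
phi = np.linspace(0.0, np.pi, nphi)                            # polar
T, P = np.meshgrid(theta, phi, indexing="ij")

# SciPy convention: sph_harm(m, l, azimuthal, polar).
Y = sph_harm(m, l, T, P)
integrand = np.abs(Y) ** 2 * np.sin(P)   # surface measure sin(phi) dphi dtheta

dtheta = 2.0 * np.pi / ntheta
dphi = np.pi / (nphi - 1)
norm = integrand.sum() * dtheta * dphi   # composite quadrature over the sphere
```

The sum over the periodic azimuthal grid is spectrally accurate, and the integrand vanishes at the poles, so a plain Riemann sum suffices here.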
The spherical harmonics \begin{equation}\label{S} Y_l^m(\theta,\varphi):=\sqrt{\frac{2l+1}{4\pi}\frac{(l-|m|)!}{(l+|m|)!}} P_l^{|m|}(\cos\theta)e^{im\varphi}, \,m=-l,\ldots,l;\,l=0,1,2,\ldots, \end{equation} form a complete orthonormal system in $\mathcal{L}^2(\mathbb{S}^2)$, in which \begin{equation} P_n^m(t):=(1-t^2)^{m/2}\frac{d^mP_n(t)}{dt^m},\,m=0,1,\ldots,n, \end{equation} where the Legendre polynomials $P_n$, $n=0,1,\ldots,$ form a complete orthogonal system in $L^2[-1,1]$. We refer to \cite[p.\,25]{Colton2}. By the orthogonality of the spherical harmonics, the family of functions \begin{eqnarray}\label{1.5} \{u_{l,m}(x;k)\}_{l,m}:=\{a_{l,m}(r)Y_l^m(\hat{x})\}_{l,m} \end{eqnarray} satisfy the first equation in~(\ref{1.1}) independently for each $(l,m)$ in $r\geq R_0$ for sufficiently large $R_0$. \par Now we consider the boundary condition given by the second and third equations in~(\ref{1.1}), and then extend the solutions $u_{l,m}(x;k)$ into $r\leq R_0$ as follows. Let $\hat{x}_0\in\mathbb{S}^2$ be any given incident direction that intersects $\partial D$ at $(\hat{R},\hat{x}_0)\in \mathbb{R}^+\times\mathbb{S}^2$. For the given $\hat{x}_0$, we impose the differential operator \begin{equation}\nonumber \Delta=\frac{1}{r^2}\frac{\partial}{\partial r} r^2\frac{\partial}{\partial r}+\frac{1}{r^2\sin{\varphi}}\frac{\partial}{\partial \varphi}\sin\varphi\frac{\partial}{\partial \varphi} +\frac{1}{r^2\sin^2{\varphi}}\frac{\partial^2}{\partial \theta^2} \end{equation} on $u_{l,m}(x;k)$ and, accordingly, we have the following ODE: \begin{eqnarray} \frac{d^2 a_{l,m}(r)}{dr^2}+\frac{2}{r}\frac{d a_{l,m}(r)}{dr}+(k^2-\frac{l(l+1)}{r^2})a_{l,m}(r)=0, \end{eqnarray} which is solved by spherical Bessel functions and spherical Neumann functions. Let \begin{equation}\label{118} y_{l,m}(r):=ra_{l,m}(r), \end{equation} so we obtain \begin{eqnarray}\label{1.10} \left\{% \begin{array}{ll} y_{l,m}''(r)+(k^2-\frac{l(l+1)}{r^2})y_{l,m}(r)=0;\vspace{5pt}\\ y_{l,m}(0)=0.
\end{array}% \right. \end{eqnarray} \par To give an initial condition, we apply the boundary conditions in~(\ref{1.1}) to $u_{l,m}(x;k)$ near the intersection points $\hat{R}$ along $\hat{x}_0$. We replace the boundary condition $\frac{\partial u}{\partial \nu}=0$ by $\nabla u=0$. Hence, \begin{eqnarray}\label{1.11} &&[\frac{y_{l,m}(r;k)}{r}]'|_{r=\hat{R}}=0;\\ &&[\frac{y_{l,m}(r;k)}{r}]|_{r=\hat{R}}=1,\label{1.12} \end{eqnarray} in which we assume there is no tangent point. The solutions $y_{l,m}(r;k)$ are independent of $m$, so we write $y_{l,m}(r;k)$ as $y_{l}(r;k)$. Hence, we now have the following boundary conditions. \begin{eqnarray}\label{1.13} &&\frac{y_l'(\hat{R};k)}{\hat{R}}-\frac{y_l(\hat{R};k)}{\hat{R}^2}=0;\\ &&y_l(\hat{R};k)- \hat{R}=0.\label{1.14} \end{eqnarray} If $k$ satisfies~(\ref{1.13}) and~(\ref{1.14}), then $y_l'(\hat{R};k)=1$. Thus,~(\ref{1.11}) and~(\ref{1.12}) are equivalent to \begin{eqnarray}\label{113} &&F_l(k;\hat{R}):=y_l(\hat{R};k)- \hat{R}=0;\\ &&G_l(k;\hat{R}):=y_l'(\hat{R};k)-1=0.\label{114} \end{eqnarray} In the initial state, $y_l(\hat{R};k)$ is exactly the boundary defining function of $D$. Combining~(\ref{1.10}),~(\ref{113}), and~(\ref{114}), we consider the following eigenvalue problem at $\hat{R}$ for each fixed $\hat{x}\in\mathbb{S}^2$ and all $l\geq0$: \begin{eqnarray}\label{15} \left\{% \begin{array}{ll} y_l''(r;k)+(k^2-\frac{l(l+1)}{r^2})y_l(r;k)=0,\,0<r<\infty;\vspace{4pt}\\ \vspace{3pt} y_{l}(0;k)=0;\\ \vspace{3pt} F_l(k;\hat{R})=0;\\ \vspace{3pt} G_l(k;\hat{R})=0. \end{array}% \right. \end{eqnarray} This is a two-way initial value problem starting at $r=\hat{R}$, both inward and outward. The eigenvalue $k$ propagates to infinity by the uniqueness of the ODE and defines the far-field patterns near infinity. There is a one-to-one correspondence between the far-field patterns and the radiating solutions of the Helmholtz equation. The $y_l(r;k)$ depends on the incident angle $\hat{x}$.
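As noted above, the radial ODE is solved by spherical Bessel functions; with the substitution~(\ref{118}), $y(r)=r\,j_l(kr)$ is the solution regular at the origin. A small finite-difference check of this fact (a sketch assuming SciPy's \texttt{spherical\_jn}):

```python
import numpy as np
from scipy.special import spherical_jn

# Verify that y(r) = r * j_l(k r) solves y'' + (k^2 - l(l+1)/r^2) y = 0.
l, k = 2, 3.0

def y(r):
    return r * spherical_jn(l, k * r)

h = 1e-4
r = np.linspace(0.5, 5.0, 200)
ypp = (y(r + h) - 2.0 * y(r) + y(r - h)) / h**2        # central second difference
residual = ypp + (k**2 - l * (l + 1) / r**2) * y(r)    # should vanish
max_res = np.max(np.abs(residual))
```

The residual is at the level of the finite-difference truncation error, confirming that the regular radial solution is the spherical Bessel function.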
Most importantly, we will examine the zero set of $y_{l}(0;k)$, which constitutes the eigenvalues of~(\ref{15}). The solutions $\{y_l(r;k)\}_{l\geq0}$ form a family of entire functions of exponential type \cite{Carlson,Carlson2,Carlson3,Po}. For each $l\geq0$, the solution behaves like a sine function in the complex plane, with a zero set asymptotically approaching that of a sine function for each incident direction. The Weyl law for the eigenvalues of~(\ref{15}) in many settings is found in \cite{Chen,Chen3,Chen5} as a direct consequence of the Cartwright-Levinson theory in value distribution theory \cite{Boas,Cartwright,Cartwright2,Koosis,Levin,Levin2}. In particular, we find that the density of the zero set for each incident direction is related to the radius $\hat{R}$ as a spectral invariant. Rellich's lemma indicates that all perturbations behave like spherical waves near infinity, by which we prove \textbf{a special case} of Schiffer's conjecture. \begin{theorem}\label{11} Let $D$ be a starlike domain as assumed in~(\ref{1.1}) under radiation condition~(\ref{122}). If there is an eigenvalue $k_0^2\in\mathbb{R}^+$, $k_0^2\geq1$ of~(\ref{1.1}), then $D$ is an open ball centered at the origin. \end{theorem} \section{Singular Sturm-Liouville Theory} Here we collect the asymptotic behaviors of $y_l(r;k)$ and $y_l'(r;k)$. For $l\geq0$, we apply the results from \cite{Carlson,Carlson2,Carlson3,Po}. Let $z_l(\xi;k)$ be the solution of \begin{eqnarray}\label{21} \left\{ \begin{array}{ll} -z_l''(\xi)+\frac{l(l+1)z_l(\xi)}{\xi^2}+p(\xi)z_l(\xi)=k^2z_l(\xi);\vspace{9pt}\\ z_l(1;k)=-b;\,z_l'(1;k)=a,\,a,\,b\in\mathbb{R}, \end{array} \right. \end{eqnarray} where $p(\xi)$ is square integrable; the real number $l\geq-1/2$.
In general, \begin{eqnarray}\label{123} |z_l(\xi;k)+b\cos{k(1-\xi)}+a\frac{\sin{k(1-\xi)}}{k}|\leq \frac{K(\xi)}{|k|}\exp\{|\Im k|[1-\xi]\},\,|k|\geq1, \end{eqnarray} where \begin{equation}\label{124} K(\xi)\leq\exp\{\int_\xi^1\frac{|l(l+1)|}{t^2}+|p(t)|dt\},\,0\leq\xi\leq 1. \end{equation} This explains the behaviors of the solutions $z_l(\xi;k)$ and $z_l'(\xi;k)$ for all $l$ in the unit interval. For its application to~(\ref{113}) and~(\ref{114}), we take $$b=-\hat{R};\,a=1$$ for each incident direction, and consider the problem~(\ref{21}) on the interval $[0,\hat{R}]$. \par Outside the domain $D$, we consider~(\ref{15}) as an initial value problem starting at $\hat{R}$ and extending to infinity. If $p(\xi)\equiv0$, then we consider the following special case: \begin{eqnarray}\label{2.4} v_l''(\xi)+[k^2-\frac{l(l+1)}{\xi^2}]v_l(\xi)=0. \end{eqnarray} The solutions of~(\ref{2.4}) are essentially Bessel functions, with a basis of two elements. The variation of parameters formula leads to the following asymptotic expansions: For $\xi>0$ and $\Re k\geq0$, there is a constant $C$ so that \begin{eqnarray} &&|v_l(\xi,k)-\frac{\sin\{k\xi-l\frac{\pi}{2}\}}{k^{l+1}}|\leq C|k|^{-(l+1)}\frac{\exp\{|\Im k|\xi\}}{|k\xi|};\label{2.5}\\ &&|v_l'(\xi,k)-\frac{\cos\{k\xi-l\frac{\pi}{2}\}}{k^{l}}|\leq C|k|^{-l}\frac{\exp\{|\Im k|\xi\}}{|k\xi|}.\label{2.6} \end{eqnarray} We refer to \cite[Lemma\,3.2,\,Lemma\,3.3]{Carlson2} for these estimates, and then we find that a solution of the initial value problem for~(\ref{2.4}) is a linear combination of~(\ref{2.5}) and~(\ref{2.6}). \section{Cartwright-Levinson Theory} We take the following vocabulary from entire function theory \cite{Boas,Cartwright,Cartwright2,Koosis,Levin,Levin2} to describe the asymptotic behavior of the eigenvalues of~(\ref{15}). \begin{definition} Let $f(z)$ be an integral function of order $\rho$, $N(f,\alpha,\beta,r)$ be the number of the zeros of $f(z)$ inside the angle $[\alpha,\beta]$, and $|z|\leq r$.
We define the density function as \begin{equation}\label{Den} \Delta_f(\alpha,\beta):=\lim_{r\rightarrow\infty}\frac{N(f,\alpha,\beta,r)}{r^{\rho}}, \end{equation} and \begin{equation} \Delta_f(\beta):=\Delta_f(\alpha_0,\beta), \end{equation} with some fixed $\alpha_0\notin E$, in which $E$ is at most a countable set \cite{Levin,Levin2}. \end{definition} Let us define \begin{equation} \hat{\Delta}(\xi):=\Delta_{z_l(\xi;k)}(-\epsilon,\epsilon),\,b=-\hat{R}, \end{equation} as the density of the zero set along $\hat{x}$. \begin{lemma} The entire functions $y_l(\xi;k)$ and $y_l'(\xi;k)$ are of order one and of type $\hat{R}-\xi$. \end{lemma} \begin{proof} From~(\ref{123}), we have \begin{equation}\label{3.4} y_l(\xi;k)=-\hat{R}\cos{k(\hat{R}-\xi)}-\frac{\sin{k(\hat{R}-\xi)}}{k}+O( \frac{K(\xi)}{|k|}\exp\{|\Im k|[\hat{R}-\xi]\}),\,|k|\geq1. \end{equation} To find the type of an entire function, we use Lindel\"{o}f's indicator function \cite{Levin,Levin2}, defined as follows. \begin{definition} Let $f(z)$ be an integral function of finite order $\rho$ in the angle $[\theta_1,\theta_2]$. We call the following quantity the indicator of the function $f(z)$. \begin{equation} h_f(\theta):=\lim_{r\rightarrow\infty}\frac{\ln|f(re^{i\theta})|}{r^{\rho}}, \,\theta_1\leq\theta\leq\theta_2. \end{equation} \end{definition} We find that if $k=|k|e^{i\theta}$, then \begin{equation}\label{3.6} h_{y_l(\xi;k)}(\theta)=|(\hat{R}-\xi)\sin\theta|,\,\theta\in[0,2\pi], \,0<\xi<\hat{R}. \end{equation} We refer to \cite{Chen,Chen3,Chen5,Cartwright2,Levin,Levin2} for more details; further examples can be found in \cite[p.\,70]{Cartwright2}. The maximal value of $h_{y_l(\xi;k)}(\theta)$ gives the type of an entire function \cite[p.\,72]{Levin}, which here is $(\hat{R}-\xi)$. A similar proof holds for $y_l'(\xi;k)$. \end{proof} More importantly, the indicator function~(\ref{3.6}) leads to the following Cartwright's theory \cite[p.\,251]{Levin}.
\begin{lemma}\label{34} We have the following asymptotic behavior of the zero set of $y_l(\xi;k)$. $$\hat{\Delta}(\xi)=\frac{\hat{R}-\xi}{\pi}.$$ \end{lemma} \begin{proof} We observe in~(\ref{3.4}) that $|y_l(\xi;k)|$ is bounded on the real axis. Hence, it is in Cartwright's class. All of the properties in \cite[p.\,251]{Levin} hold. \end{proof} Letting $\xi=0$, we obtain the eigenvalue density of~(\ref{15}) in $\mathbb{C}$. Moreover, the eigenvalues are all real. \begin{lemma} The eigenvalues $k$ of~(\ref{15}) are all real. \end{lemma} \begin{proof} For $l=0$, the result is classic \cite{Carlson2,Po}. In our case, $y_l(\xi;k)$ is real for $k\in0i+\mathbb{R}$. Furthermore, the asymptotic behavior of~(\ref{3.4}) proves the lemma, which is a special case of Bernstein's theorem in entire function theory \cite[Theorem 1]{Duffin}. A step-by-step proof is provided in \cite[Lemma 2.6]{Chen5}. \end{proof} \section{Proof of Theorem \ref{11}} \begin{proof} Let $k_0^2$ be an eigenvalue of~(\ref{1.1}), as assumed in Theorem \ref{11}. In particular, from~(\ref{1.2}) we have \begin{eqnarray}\label{4.1} &&u(x;k_0)=\sum_{l=0}^{\infty}\sum_{m=-l}^{m=l} a_{l,m}(r;k_0)Y_l^m(\hat{x});\\ &&u_{l,m}(x;k_0)=a_{l,m}(r;k_0)Y_l^m(\hat{x}), \,\hat{x}\in\mathbb{S}^2,\label{4.2} \end{eqnarray} in which the coefficient $a_{l,m}(r;k_0)$ does not depend on the incident direction $\hat{x}\in\mathbb{S}^2$ for sufficiently large $r:=|x|$; the functions in~(\ref{4.1}) solve the Helmholtz equation in $r\geq R_0$. As a result of the uniqueness of the ODE~(\ref{15}), the solutions $y_l(r;k_0)$ extend both outward to infinity and inward to the origin for all $l\geq0$. For the given eigenvalue $k_0^2$, the equation~(\ref{15}) holds for all incident directions $\hat{x}\in\mathbb{S}^2$ and for all $l\geq0$.
\par The representation in~(\ref{4.1}) is unique in $\mathbb{R}^3$: If there is another eigenvalue $k'$ of~(\ref{15}) from an incident angle $x'\neq \hat{x}\in\mathbb{S}^2$ with the solution $$u_{l,m}'(x;k'):=a_{l,m}'(r;k')Y_l^m(\hat{x}),$$ then the analytic continuation of the Helmholtz equation \cite[p.\,18]{Colton2} implies that \begin{equation}\label{43} a_{l,m}'(r;k')=a_{l,m}(r;k_0). \end{equation} With the uniqueness of the ODE~(\ref{1.10}), $k_0$ or $k'$ satisfies~(\ref{15}) individually along its own incident direction, inward to the origin. Therefore, Lemma \ref{34} provides an eigenvalue density $$\hat{\Delta}(0)=\frac{\hat{R}}{\pi},\,\hat{x}\in\mathbb{S}^2.$$ \par The ODE~(\ref{15}) holds for all $\xi\geq\hat{R}$ and $l\geq 0$. In particular, we apply the estimates~(\ref{2.5}) and~(\ref{2.6}): \begin{eqnarray*} &&|v_l(\xi,k)-\frac{\sin\{k(\xi-\hat{R})-l\frac{\pi}{2}\}}{k^{l+1}}|\leq C|k|^{-(l+1)}\frac{\exp\{|\Im k|\xi\}}{|k\xi|};\\ &&|v_l'(\xi,k)-\frac{\cos\{k(\xi-\hat{R})-l\frac{\pi}{2}\}}{k^{l}}|\leq C|k|^{-l}\frac{\exp\{|\Im k|\xi\}}{|k\xi|}. \end{eqnarray*} Therefore, the initial value problem~(\ref{21}), with $p\equiv 0$, $b=\hat{R}$, and $a=1$, provides the asymptotic behavior for the solution: \begin{equation}\nonumber y_l(\xi;k)=\hat{R}\frac{\cos\{k(\xi-\hat{R})-l\frac{\pi}{2}\}}{k^l} +O\{\frac{\exp\{|\Im k|\xi\}}{k^{l}(k\xi)}\}. \end{equation} That is, \begin{equation}\label{4.6} k^ly_l(\xi;k)=\hat{R}\cos\{k(\xi-\hat{R})-l\frac{\pi}{2}\} [1+O\{\frac{1}{k\xi}\}],\,\hat{R}<\xi<\infty, \end{equation} outside the zeros of $\cos\{k(\xi-\hat{R})-l\frac{\pi}{2}\}$. This is classic in Sturm-Liouville theory \cite{Carlson,Carlson2,Po}. \par The given eigenvalue $k_0$ satisfies~(\ref{15}), for all $l\geq0$ and all $\hat{x}\in\mathbb{S}^2$, and~(\ref{4.6}). Therefore, \begin{equation} k_0^ly_l(\xi;k_0)=\hat{R}\cos\{k_0(\xi-\hat{R})-l\frac{\pi}{2}\} [1+O\{\frac{1}{k_0\xi}\}],\,0<\xi<\infty.
\end{equation} We choose $l\uparrow\infty$ and so $\xi\uparrow\infty$ such that $\xi=\hat{R}+\frac{l\pi}{2k_0}>R_0$ for any large $R_0$. Thus, for large $l$, \begin{equation} k_0^ly_l(\xi;k_0)=\hat{R}+O(\frac{1}{\xi}),\, \xi=\hat{R}+\frac{l\pi}{2k_0},\,|k_0|\geq1. \end{equation} Using~(\ref{118}),~(\ref{1.2}) and the uniqueness of the Helmholtz equation, as shown in~(\ref{43}), the far-field patterns \cite[(2.49)]{Colton2} are asymptotically the same periodic functions for each $\hat{x}\in\mathbb{S}^2$. In particular, the boundary defining function $\hat{R}$ is constant in $\hat{x}\in\mathbb{S}^2$, and Theorem \ref{11} is thus proven. \end{proof} \begin{compliance} The author declares that there are no conflicts of interest regarding the publication of this paper. The research does not involve any human participants or animals, and no further informed consent is required. \end{compliance}
\section{ Introduction } In the Standard Model (SM), the flavor-changing neutral current (FCNC) decays \ensuremath{D^0\to\ell^+\ell^-}\ are strongly suppressed by the Glashow-Iliopoulos-Maiani (GIM) mechanism. % Long-distance processes bring the predicted branching fractions up to the order of $10^{-23}$ and $10^{-13}$ for \ensuremath{D^0\to e^+e^-}\ and \ensuremath{D^0\to \mu^+\mu^-}\ decays, respectively~\cite{burdman}. % These predictions are well below current experimental sensitivities. % The lepton-flavor violating (LFV) decay \ensuremath{D^0\to e^\pm\mu^\mp}\ is forbidden in the SM. % Several extensions of the SM predict \ensuremath{D^0\to\ell^+\ell^-}\ branching fractions that are enhanced by several orders of magnitude compared with the SM expectations~\cite{burdman}. % The connection between \ensuremath{D^0\to\ell^+\ell^-}\ and $D^0 - \bar{D^0}$ mixing in new physics models has also been emphasized~\cite{golowich}. We search for \ensuremath{D^0\to\ell^+\ell^-}\ decays using approximately 468~fb$^{-1}$ of data produced by the PEP-II asymmetric-energy $e^+e^-$ collider~\cite{pepii} and recorded by the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ detector. % The center-of-mass energy of the machine was at, or 40~MeV below, the $\Upsilon(4S)$ resonance for this dataset. % The \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ detector is described in detail elsewhere~\cite{babar-nim}. % We give a brief summary of the main features below. The trajectories and decay vertices of long-lived hadrons are reconstructed with a 5-layer, double-sided silicon strip detector (SVT) and a 40-layer drift chamber (DCH), which are inside a 1.5 T solenoidal magnetic field. % Specific ionization ($dE/dx$) measurements are made by both the SVT and the DCH. 
% The velocities of charged particles are inferred from the measured Cherenkov angle of radiation emitted within fused silica bars, located outside the tracking volume and detected by an array of phototubes (DIRC). % The $dE/dx$ and Cherenkov angle measurements are used in particle identification. % Photon and electron energy, and photon position, are measured by a CsI(Tl) crystal calorimeter (EMC). % The steel of the flux return for the solenoidal magnet is instrumented with layers of either resistive plate chambers or limited streamer tubes~\cite{menges}, which are used to identify muons (IFR). \section{ Event reconstruction and selection } We form \ensuremath{D^0}\ candidates by combining pairs of oppositely charged tracks and consider the following final states: \ensuremath{e^+e^-}\ , \ensuremath{\mu^+\mu^-}\ , \ensuremath{e^\pm \mu^\mp}\ , \ensuremath{\pi^+\pi^-}\ , and $K^-\pi^+$. % We use the measured \ensuremath{D^0\to\pi^+\pi^-}\ yield and the known \ensuremath{D^0\to\pi^+\pi^-}\ branching fraction to normalize our \ensuremath{D^0\to\ell^+\ell^-}\ branching fractions. % We also use the \ensuremath{D^0\to\pi^+\pi^-}\ candidates, as well as the \ensuremath{D^0\to K^-\pi^+}\ candidates, to measure the probability of misidentifying a $\pi$ as either a $\mu$ or an $e$. % Combinatorial background is reduced by requiring that the \ensuremath{D^0}\ candidate originate from the decay $D^*(2010)^+ \ensuremath{\rightarrow}\xspace \ensuremath{D^0}\ \pi^+$~\cite{chrgconj}. % We select \ensuremath{D^0}\ candidates produced in continuum $e^+e^- \ensuremath{\rightarrow}\xspace c \bar c$ events by requiring that the momentum of the \ensuremath{D^0}\ candidate be above 2.4 GeV in the center-of-mass (CM) frame, which is close to the kinematic limit for $B \ensuremath{\rightarrow}\xspace D^*\pi$, $D^{*+}\ensuremath{\rightarrow}\xspace D^0\pi^+$. % This reduces the combinatorial background from $e^+e^- \ensuremath{\rightarrow}\xspace \BB$ events. 
Backgrounds are estimated directly from data control samples. % Signal \ensuremath{D^0}\ candidates with a reconstructed \ensuremath{D^0}\ mass above 1.9~GeV consist of random combinations of tracks. % We use a sideband region above the signal region in the \ensuremath{D^0}\ mass ([1.90, 2.05] GeV) in a wide $\Delta m \equiv m(\ensuremath{D^0}\ \pi^+) - m(D^0)$ window ([0.141, 0.149] GeV) to estimate the amount of combinatorial background. % The \ensuremath{D^0}\ and $\Delta m$ mass resolutions, measured in the \ensuremath{D^0\to\pi^+\pi^-}\ sample, are 8.1~MeV and 0.2~MeV, respectively. % We estimate the number of \ensuremath{D^0\to\pi^+\pi^-}\ background events selected as \ensuremath{D^0\to\ell^+\ell^-}\ candidates by scaling the observed \ensuremath{D^0\to\pi^+\pi^-}\ yield, with no particle identification criteria applied, by the product of pion misidentification probabilities and a misidentification correlation factor $G$. % The misidentification correlation factor $G$ is estimated with the \ensuremath{D^0\to K^-\pi^+}\ data control sample. The tracks for the \ensuremath{D^0}\ candidates must have momenta greater than 0.1 GeV and have at least 6 hits in the SVT. % The slow pion track from the $D^{*+} \ensuremath{\rightarrow}\xspace \ensuremath{D^0}\ \pi^+$ decay must have at least 12 position measurements in the DCH. % A fit of the $D^{*+} \ensuremath{\rightarrow}\xspace \ensuremath{D^0}\ \pi^+; D^0 \ensuremath{\rightarrow}\xspace t^+t^-$ decay chain is performed where the \ensuremath{D^0}\ tracks $(t)$ are constrained to come from a common vertex and the \ensuremath{D^0}\ and slow pion are constrained to form a common vertex within the beam interaction region. % The $\chi^2$ probabilities of the \ensuremath{D^0}\ and $D^*$ vertices from this fit must be at least $1$\%. % The reconstructed \ensuremath{D^0}\ mass \ensuremath{m(D^0)}\ must be within $[1.65, 2.05]$~GeV and the mass difference $\Delta m$ must be within $[0.141, 0.149]$~GeV. 
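The peaking-background scaling described above is a simple product of the pion-pair yield, the squared per-track misidentification probability, and the correlation factor $G$. With hypothetical numbers (not the measured BaBar values), it reads:

```python
# Peaking background from D0 -> pi+ pi- faking D0 -> mu+ mu-:
#   N_peak = N_pipi * f(pi -> mu)^2 * G
# All numbers below are hypothetical placeholders for illustration.
n_pipi = 30000.0   # D0 -> pi+ pi- yield before particle identification
f_pi_mu = 0.01     # per-track pi -> mu misidentification probability
G = 1.2            # misID correlation factor from the K pi control sample

n_peaking = n_pipi * f_pi_mu**2 * G
```

The quadratic dependence on the per-track misidentification rate is what keeps this background small even for sizable pion yields.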
% We subtract a data-Monte-Carlo difference of $0.91 \pm 0.06$~MeV, measured in the \ensuremath{D^0\to\pi^+\pi^-}\ sample, from the reconstructed \ensuremath{D^0}\ mass in the simulation. We use an error-correcting output code (ECOC) algorithm~\cite{ecoc} with 36 input variables to identify electrons and pions. % The ECOC combines multiple bootstrap aggregated~\cite{bagging} decision tree~\cite{decision-tree} binary classifiers trained to separate $e, \pi, K,$ and $p$. % The most important inputs for electron identification are the EMC energy divided by the track momentum, several EMC shower shape variables, and the deviation from the expected value divided by the measurement uncertainty for the Cherenkov angle and $dE/dx$ for the $e, \pi, K,$ and $p$ hypotheses. % For tracks with momentum greater than 0.5 GeV, the electron identification has an efficiency of 95\% for electrons and a pion misidentification probability of less than 0.2\%. % Neutral clusters in the EMC that are consistent with Bremsstrahlung radiation are used to correct the momentum and energy of electron candidates. % The efficiency of the pion identification is above 90\% for pions, with a kaon misidentification probability below 10\%. Muons are identified using a bootstrap aggregated decision tree algorithm with 30 input variables. Of these, the most important are the number and positions of the hits in the IFR, the difference between the measured and expected DCH $dE/dx$ for the muon hypothesis, and the energy deposited in the EMC. % For tracks with momentum greater than 1~GeV, the muon identification has an efficiency of around 60\% for muons, with a pion misidentification probability of between 0.5\% and 1.5\%. The reconstruction efficiencies for the different channels after the above particle identification requirements are about 18\% for $e^+e^-$, 9\% for $\mu^+\mu^-$, 13\% for $e^\pm\mu^\mp$, and 26\% for $\pi^+\pi^-$. 
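The ECOC construction can be sketched in a few lines: each class is assigned a binary codeword, the binary classifiers supply one bit each, and a track is assigned to the class whose codeword is nearest in Hamming distance, which tolerates a limited number of misfiring classifiers. The $4\times5$ code matrix below is an illustrative choice, not the one used in the analysis:

```python
import numpy as np

# Toy error-correcting output code (ECOC) for four particle classes.
# The code matrix is an illustrative example with minimum Hamming
# distance 3 between codewords, so a single flipped bit is corrected.
classes = ["e", "pi", "K", "p"]
code = np.array([
    [1, 1, 0, 0, 1],   # e
    [0, 1, 1, 0, 0],   # pi
    [0, 0, 1, 1, 1],   # K
    [1, 0, 0, 1, 0],   # p
])

def ecoc_decode(bits):
    """Assign the class whose codeword is nearest in Hamming distance."""
    dists = np.sum(code != np.asarray(bits), axis=1)
    return classes[int(np.argmin(dists))]

# Even with one flipped classifier output, decoding recovers "e":
label = ecoc_decode([1, 1, 0, 0, 0])   # last bit differs from e's codeword
```

In the real analysis the bits come from the bootstrap-aggregated decision-tree classifiers separating $e$, $\pi$, $K$, and $p$.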
% The background candidates that remain are either random combinations of two leptons (combinatorial background), or \ensuremath{D^0\to\pi^+\pi^-}\ decays where both pions pass the lepton identification criteria (peaking background). % The \ensuremath{D^0\to\pi^+\pi^-}\ background is most important for the \ensuremath{D^0\to \mu^+\mu^-}\ channel. % Figure~\ref{fig:d0m} shows the reconstructed invariant mass distributions from Monte Carlo (MC) simulated samples for the three \ensuremath{D^0\to\ell^+\ell^-}\ signal channels. % Also shown are the distributions from \ensuremath{D^0\to\pi^+\pi^-}\ reconstructed as \ensuremath{D^0\to\ell^+\ell^-}\ and \ensuremath{D^0\to K^-\pi^+}\ reconstructed as \ensuremath{D^0\to\ell^+\ell^-}\ for each signal channel. % The overlap between the \ensuremath{D^0\to\ell^+\ell^-}\ and \ensuremath{D^0\to\pi^+\pi^-}\ distributions is largest for the \ensuremath{D^0\to \mu^+\mu^-}\ channel, while the \ensuremath{D^0\to\ell^+\ell^-}\ and \ensuremath{D^0\to K^-\pi^+}\ distributions are well separated. % \begin{figure} \begin{center} \includegraphics[width=0.49\linewidth]{d0m2-prd.eps} \includegraphics[width=0.49\linewidth]{dm-prd.eps} \caption{ Reconstructed \ensuremath{D^0}\ mass (left) and \ensuremath{\Delta m}\ \ (right) for the three signal channels: \ensuremath{D^0\to e^+e^-}\ (top), \ensuremath{D^0\to \mu^+\mu^-}\ (middle), and \ensuremath{D^0\to e^\pm\mu^\mp}\ (bottom). % The solid (black) histogram is the signal MC, the dashed (blue) histogram is \ensuremath{D^0\to\pi^+\pi^-}\ MC reconstructed as \ensuremath{D^0\to\ell^+\ell^-}\ , and the dotted (red) histogram is \ensuremath{D^0\to K^-\pi^+}\ MC reconstructed as \ensuremath{D^0\to\ell^+\ell^-}\ . % The \ensuremath{D^0\to\ell^+\ell^-}\ and \ensuremath{D^0\to\pi^+\pi^-}\ distributions have been normalized to unit area. % The \ensuremath{D^0\to K^-\pi^+}\ normalization is arbitrary. 
} \label{fig:d0m} \end{center} \end{figure} The combinatorial background originates mostly from events with two semileptonic $B$ and/or $D$ decays. % The sample of events selected by the above criteria are dominantly from $e^+e^- \ensuremath{\rightarrow}\xspace B\bar B$ events, rather than events from the $e^+e^- \ensuremath{\rightarrow}\xspace q\bar q,\ (q=u,d,s,c)$ continuum. % We use a linear combination (Fisher discriminant~\cite{fisher}) of the following five variables to reduce the combinatorial \ensuremath{B\bar{B}}\ background: % \begin{itemize} % \item The measured \ensuremath{D^0}\ flight length divided by its uncertainty. % \item The value of \ensuremath{|\cos \theta_{\rm hel} |}, where $\theta_{\rm hel}$ is defined as the angle between the momentum of the positively-charged \ensuremath{D^0}\ daughter and the boost direction from the lab frame to the \ensuremath{D^0}\ rest frame, all in the \ensuremath{D^0}\ rest frame. % \item The missing transverse momentum with respect to the beam axis. % \item The ratio of the $2^{\rm nd}$ and $0^{\rm th}$ Fox-Wolfram moments~\cite{r2}. % \item The \ensuremath{D^0}\ momentum in the CM frame. % \end{itemize} % The flight length for combinatorial background is symmetric about zero, while the signal has an exponential distribution. % The \ensuremath{|\cos \theta_{\rm hel} |}\ distribution is uniform for signal but peaks at zero for combinatorial \ensuremath{B\bar{B}}\ background. % The neutrinos from the semileptonic decays in \ensuremath{B\bar{B}}\ background events create missing transverse momentum, while there is none for signal events. % The ratio of Fox-Wolfram moments uses general event-shape information to separate \ensuremath{B\bar{B}}\ and continuum $q\bar q$ events. % Finally, the signal has a broad \ensuremath{D^0}\ CM momentum spectrum that peaks at around 3 GeV, while combinatorial background peaks at the minimum allowed value of 2.4 GeV. 
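A Fisher discriminant of this kind projects the five input variables onto the single direction $w \propto S_w^{-1}(\mu_{\rm sig}-\mu_{\rm bkg})$ that maximizes the separation between the class means relative to the within-class scatter. The following self-contained sketch uses synthetic Gaussian toy data in place of the real analysis variables (the means below are illustrative, not the measured distributions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the five inputs (flight-length significance,
# |cos theta_hel|, missing pT, Fox-Wolfram ratio, D0 CM momentum).
n = 5000
sig = rng.normal([2.0, 0.5, 0.0, 0.30, 3.0], 1.0, size=(n, 5))
bkg = rng.normal([0.0, 0.1, 1.0, 0.15, 2.5], 1.0, size=(n, 5))

mu_s, mu_b = sig.mean(axis=0), bkg.mean(axis=0)
S_w = np.cov(sig, rowvar=False) + np.cov(bkg, rowvar=False)  # within-class scatter
w = np.linalg.solve(S_w, mu_s - mu_b)                        # Fisher direction

F_sig, F_bkg = sig @ w, bkg @ w
cut = 0.5 * (F_sig.mean() + F_bkg.mean())                    # midpoint cut
sig_eff = (F_sig > cut).mean()                               # signal efficiency
bkg_rej = (F_bkg < cut).mean()                               # background rejection
```

The single cut on the projected value $\mathcal{F} = x \cdot w$ then plays the same role as the minimum $\mathcal{F}$ requirement optimized per channel in the analysis.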
Figure~\ref{fig:fisher} shows distributions of the Fisher discriminant (${\cal F}$) for samples of \ensuremath{B\bar{B}}\ MC, \ensuremath{D^0\to \mu^+\mu^-}\ signal MC, and continuum background MC. % The separation between signal and \ensuremath{B\bar{B}}\ background distributions is large, while the signal and continuum background distributions are similar. % For example, requiring ${\cal F}$ to be greater than 0 removes about 90\% of the \ensuremath{B\bar{B}}\ background while keeping 85\% of the signal. % The minimum ${\cal F}$ value is optimized for each signal channel as described below. \begin{figure} % \begin{center} \includegraphics[width=\linewidth]{fisher5-3.eps} \caption{ Fisher discriminant, ${\cal F}$, distributions for samples of $B\bar{B}$ MC (dashed blue), \ensuremath{D^0\to \mu^+\mu^-}\ signal MC (solid black), and continuum MC (dotted red). The ${\cal F}$ distributions for \ensuremath{D^0\to e^+e^-}\ and \ensuremath{D^0\to e^\pm\mu^\mp}\ are similar to those of \ensuremath{D^0\to \mu^+\mu^-}\ . } \label{fig:fisher} \end{center} \end{figure} We use the \ensuremath{|\cos \theta_{\rm hel} |}\ variable directly to remove continuum combinatorial background. % Figure~\ref{fig:coshel} shows distributions of \ensuremath{|\cos \theta_{\rm hel} |}\, before making a minimum ${\cal F}$ requirement, for \ensuremath{B\bar{B}}\ background, continuum background, and signal. % The drop-off for \ensuremath{|\cos \theta_{\rm hel} |}\ near $1.0$ in the signal distributions is caused by the selection and particle identification requirements. % The \ensuremath{B\bar{B}}\ background peaks near zero, while the continuum background peaks sharply near one. 
% \begin{figure*} % \begin{center} \includegraphics[width=0.31\linewidth]{coshel-2.eps} \includegraphics[width=0.31\linewidth]{coshel-3.eps} \includegraphics[width=0.31\linewidth]{coshel-4.eps} \caption{ Distributions of $|\cos(\theta_{\rm hel})|$ for the three signal channels: \ensuremath{D^0\to e^+e^-}\ (left), \ensuremath{D^0\to \mu^+\mu^-}\ (center), and \ensuremath{D^0\to e^\pm\mu^\mp}\ (right). The top distributions show Monte Carlo distributions for the combinatorial \BB\ (dashed, blue) and continuum (dotted, red) backgrounds. The bottom distributions show the signal Monte Carlo with arbitrary normalization. } \label{fig:coshel} \end{center} \end{figure*} The selection criteria for each signal channel were chosen to give the lowest expected signal branching fraction upper limit for the null hypothesis (a true branching fraction of zero) using the MC samples. % The Fisher discriminant coefficients were determined before applying the \ensuremath{|\cos \theta_{\rm hel} |}, \ensuremath{D^0}\ mass, and \ensuremath{\Delta m}\ \ requirements. % We then tested a total of 2700 configurations of \ensuremath{|\cos \theta_{\rm hel} |}, ${\cal F}$, \ensuremath{D^0}\ mass, and \ensuremath{\Delta m}\ \ criteria. % Table~\ref{tab:optimal-cuts} summarizes the resulting best values for the maximum \ensuremath{|\cos \theta_{\rm hel} |}, minimum ${\cal F}$, \ensuremath{m(D^0)}\ \ signal window, and \ensuremath{\Delta m}\ \ interval. \begin{table} % % \begin{center} \caption{ Selection criteria for the three \ensuremath{D^0\to\ell^+\ell^-}\ signal decay modes. The parameter in the last row is defined as $\delta \Delta m \equiv \Delta m - \Delta m_0$, where $\Delta m_0$ is the nominal $D^{*+}-D^0$ mass difference~\cite{PDG}. 
} \label{tab:optimal-cuts} \begin{tabular}{cccc} \hline\hline % Parameter & \ \ \ \ \ \ \ \ensuremath{e^+e^-}\ \ \ \ \ \ \ & \ \ \ \ \ \ \ \ensuremath{\mu^+\mu^-}\ \ \ \ \ \ \ & \ \ \ \ \ \ensuremath{e^\pm \mu^\mp}\ \ \ \\ \hline % \ensuremath{|\cos \theta_{\rm hel} |} & $<0.85$ & $<0.90$ & $<0.85$ \\ ${\cal F}$ & $>0.00$ & $>-0.25$ & $>0.00$ \\ \ensuremath{m(D^0)}\ (GeV) & $[1.815,1.890]$ & $[1.855,1.890]$ & $[1.845,1.890]$ \\ $|\delta\Delta m|$ (MeV) & $<0.5$ & $<0.5$ & $<0.4$ \\ % \hline\hline \end{tabular} % % \end{center} \end{table} After the selection criteria in Table~\ref{tab:optimal-cuts} were determined, the data yields in the sideband region were compared to the expectations from Monte Carlo samples. % The \ensuremath{D^0\to \mu^+\mu^-}\ and \ensuremath{D^0\to e^\pm\mu^\mp}\ data yields were consistent with the expectations from the Monte Carlo samples. % However, the \ensuremath{D^0\to e^+e^-}\ sideband yield showed a substantial excess of events; 90 events were observed when $5.5 \pm 1.6$ were expected. The excess of data sideband events over the expected background from Monte Carlo was investigated and found to have several distinct features: low track multiplicity, continuum-like event shape characteristics, tracks consistent with electrons produced in photon conversions, low \ensuremath{D^0}\ daughter track momenta, and undetected energy along the beam axis. % We found that such events result from hard initial state radiation events or two-photon interaction processes that are not simulated in the continuum MC samples used in the analysis. % The following selection criteria were added in order to remove such background contributions: % \begin{itemize} % \item Events must have at least 5 tracks for the \ensuremath{D^0\to e^+e^-}\ channel and at least 4 tracks for the \ensuremath{D^0\to \mu^+\mu^-}\ and \ensuremath{D^0\to e^\pm\mu^\mp}\ channels. % \item Events can have at most 3 electron candidates. 
% \item The longitudinal boost of the event, reconstructed from all tracks and neutral clusters, along the high-energy beam direction $p_z/E$ in the CM frame must be greater than -0.5 for all three \ensuremath{D^0\to\ell^+\ell^-}\ channels. % \item For \ensuremath{D^0\to \mu^+\mu^-}\ and \ensuremath{D^0\to e^\pm\mu^\mp}\ candidates, the pion track from the $D^{*+}$ decay and the leptons must be inconsistent with originating from a photon conversion. % \end{itemize} % The signal efficiencies for the \ensuremath{D^0\to e^+e^-}\ , \ensuremath{D^0\to \mu^+\mu^-}\ , and \ensuremath{D^0\to e^\pm\mu^\mp}\ channels for these additional criteria are 91.4\%, 99.3\%, and 96.8\%, respectively. % The \ensuremath{D^0\to e^+e^-}\ sideband yield in the data with these criteria applied is reduced to 8 events where $4.5\pm 1.3$ are expected, based on the Monte Carlo samples. \subsection{ \boldmath Peaking \ensuremath{D^0\to\pi^+\pi^-}\ background estimation } The amount of \ensuremath{D^0\to\pi^+\pi^-}\ peaking background within the \ensuremath{m(D^0)}\ signal window is estimated from data and calculated separately for each \ensuremath{D^0\to\ell^+\ell^-}\ channel using % \begin{equation} N_{\pi\pi}^{BG} = \left( \sum_i N_{\pi\pi,i}^{NP} \cdot \langle p_{f,i}^+\rangle \langle p_{f,i}^-\rangle \right) \cdot \epsilon_{m(D^0)} \cdot G \end{equation} % where the sum $i$ is over the six data-taking periods, % $N_{\pi\pi,i}^{NP}$ is the number of \ensuremath{D^0\to\pi^+\pi^-}\ events that pass all of the \ensuremath{D^0\to\ell^+\ell^-}\ selection criteria except for the lepton identification and \ensuremath{m(D^0)}\ signal window requirements, % $\langle p_{f,i}^+\rangle \langle p_{f,i}^-\rangle$ is the product of the average probability that the $\pi^+$ and the $\pi^-$ pass the lepton identification criteria, % $\epsilon_{m(D^0)}$ is the efficiency for \ensuremath{D^0\to\pi^+\pi^-}\ background to satisfy the \ensuremath{m(D^0)}\ signal window requirement, and $G$ takes into account a 
positive correlation in the probability that the $\pi^+$ and the $\pi^-$ pass the muon identification criteria. % The value of $\langle p_{f,i}^+\rangle$ $(\langle p_{f,i}^-\rangle)$ is measured using the ratio of the \ensuremath{D^0\to\pi^+\pi^-}\ yield requiring that the $\pi^+$ $(\pi^-)$ satisfy the lepton identification requirements to the \ensuremath{D^0\to\pi^+\pi^-}\ yield with no lepton identification requirements applied. % The $\langle p_{f,i}^+\rangle$ and $\langle p_{f,i}^-\rangle$ are measured separately for each of the six major data-taking periods due to the changing IFR performance with time. % The values of $\langle p_{f,i}^+\rangle$ and $\langle p_{f,i}^-\rangle$ vary between 0.5\% and 1.5\%. % The probability that the $\pi^+$ and $\pi^-$ both pass the muon identification criteria is enhanced when the two tracks curve toward each other, instead of away from each other, in the plane perpendicular to the beam axis. % We use $G = 1.19 \pm 0.05$ for the \ensuremath{D^0\to \mu^+\mu^-}\ channel and $G = 1$ for the \ensuremath{D^0\to e^+e^-}\ and \ensuremath{D^0\to e^\pm\mu^\mp}\ channels. % The $G$ factor is measured using a high-statistics \ensuremath{D^0\to K^-\pi^+}\ sample where the $K$ is required to have a signature in the IFR that matches that of a $\pi$ which passes the $\mu$ identification criteria. % This is in good agreement with the MC estimate of the $G$ factor value, $1.20 \pm 0.10$. \subsection{ Combinatorial background estimation } The combinatorial background is estimated by using the number of observed events in a sideband region and the expected ratio of events $R_{\rm cb}$ in the signal and sideband regions, determined from MC simulation. % The sideband is above the signal region in the \ensuremath{D^0}\ mass ([1.90, 2.05] GeV) in a wide \ensuremath{\Delta m}\ window ([0.141, 0.149] GeV). % We fit the \ensuremath{D^0}\ mass and \ensuremath{\Delta m}\ projections of the combinatorial background MC using $2^{\rm nd}$-order polynomials. 
% A two-dimensional probability density function (PDF) is formed by multiplying the one-dimensional PDFs, assuming the variables are uncorrelated. % The combinatorial background signal-to-sideband ratio $R_{\rm cb}$ is then computed from the ratio of the integrals of the two-dimensional PDF. \section{ Results } % The distribution of \ensuremath{\Delta m}\ vs \ensuremath{D^0}\ mass as well as projections of \ensuremath{\Delta m}\ and the \ensuremath{D^0}\ mass for the data events for the three signal channels are shown in Fig.~\ref{fig:dmvsd0m}. % Peaks from \ensuremath{D^0\to K^-\pi^+}\ and \ensuremath{D^0\to\pi^+\pi^-}\ are visible at 1.77~GeV and 1.85~GeV in the \ensuremath{D^0}\ mass distribution for \ensuremath{D^0\to \mu^+\mu^-}\ candidates. % We observe 1, 8, 2 events in the \ensuremath{D^0\to e^+e^-}\ , \ensuremath{D^0\to \mu^+\mu^-}\ , and \ensuremath{D^0\to e^\pm\mu^\mp}\ signal regions, respectively. \begin{figure*} % \begin{center} \includegraphics[width=0.31\linewidth]{unblind-2d-data-2.eps} \includegraphics[width=0.31\linewidth]{unblind-2d-data-3.eps} \includegraphics[width=0.31\linewidth]{unblind-2d-data-4.eps} \includegraphics[width=0.31\linewidth]{data-d0m-proj-ee.eps} \includegraphics[width=0.31\linewidth]{data-d0m-proj-mm.eps} \includegraphics[width=0.31\linewidth]{data-d0m-proj-em.eps} \includegraphics[width=0.31\linewidth]{data-dm-proj-ee.eps} \includegraphics[width=0.31\linewidth]{data-dm-proj-mm.eps} \includegraphics[width=0.31\linewidth]{data-dm-proj-em.eps} \caption{ Data distributions of $\Delta m$ vs the reconstructed $D^0$ mass (top row) and projections of the \ensuremath{D^0}\ mass (middle row) and $\Delta m$ (bottom row). % The columns contain the distributions for the \ensuremath{D^0\to e^+e^-}\ (left), \ensuremath{D^0\to \mu^+\mu^-}\ (center), and \ensuremath{D^0\to e^\pm\mu^\mp}\ (right) decay modes. 
% The shaded \ensuremath{D^0}\ mass ($\Delta m$) distributions represent the subset of events that fall in the $\Delta m$ (\ensuremath{D^0}\ mass) signal window. % In the top row, the dotted (black) box indicates the signal region and the dashed (red) box indicates the sideband region. % In the middle and bottom rows, the vertical dotted black lines indicate the boundaries of the signal region. } \label{fig:dmvsd0m} \end{center} \end{figure*} \subsection{ \boldmath \ensuremath{D^0\to\ell^+\ell^-}\ Branching fractions } The yield of \ensuremath{D^0\to\pi^+\pi^-}\ decays in the $\pi\pi$ control sample, selected with the same ${\cal F}$ and $|\cos \theta_{\rm hel}|$ criteria for each \ensuremath{D^0\to\ell^+\ell^-}\ signal mode (see Table~\ref{tab:optimal-cuts}), is used to normalize the \ensuremath{D^0\to\ell^+\ell^-}\ signal branching fraction. % For each \ensuremath{D^0\to\ell^+\ell^-}\ signal channel, the \ensuremath{D^0\to\pi^+\pi^-}\ yield $N_{\pi\pi}^{\rm fit}$ is determined by fitting the \ensuremath{D^0}\ mass spectrum of the \ensuremath{D^0\to\pi^+\pi^-}\ control sample in the range [1.7, 2.0] GeV. % The fit has three components: \ensuremath{D^0\to\pi^+\pi^-}\ , \ensuremath{D^0\to K^-\pi^+}\ , and combinatorial background. % The PDF for the \ensuremath{D^0\to\pi^+\pi^-}\ component is the sum of a Crystal Ball function and two Gaussians. % The Crystal Ball function is a Gaussian modified to have an extended, power-law tail on the low side~\cite{CB-function}. % The PDF for the \ensuremath{D^0\to K^-\pi^+}\ component is the sum of a Crystal Ball function and an exponential function. % The combinatorial background PDF is an exponential function. 
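The Crystal Ball form used for the \ensuremath{D^0\to\pi^+\pi^-}\ and \ensuremath{D^0\to K^-\pi^+}\ components is standard: a Gaussian core matched to a power-law low-side tail so that the function is continuous at the junction $t = -|\alpha|$. A minimal, unnormalized sketch (the parameter values below are illustrative, not the fitted ones):

```python
import numpy as np

def crystal_ball(x, alpha, n, mean, sigma):
    """Unnormalized Crystal Ball shape: Gaussian core with a power-law
    tail on the low side, matched at t = (x - mean)/sigma = -|alpha|."""
    t = (np.asarray(x, dtype=float) - mean) / sigma
    abs_a = abs(alpha)
    a = (n / abs_a) ** n * np.exp(-0.5 * abs_a ** 2)
    b = n / abs_a - abs_a
    return np.where(t > -abs_a, np.exp(-0.5 * t ** 2), a * (b - t) ** (-n))

# Illustrative parameters for a D0 mass peak: mean near 1.865 GeV, a few
# MeV resolution, and a radiative low-side tail.
alpha, n, mean, sigma = 1.5, 3.0, 1.865, 0.007
junction = mean - alpha * sigma
left = float(crystal_ball(junction - 1e-12, alpha, n, mean, sigma))
right = float(crystal_ball(junction + 1e-12, alpha, n, mean, sigma))
print(left, right)   # the tail and the Gaussian core agree at the junction
```

In the analysis the normalization and shape parameters are determined by the fit; varying the tail parameters is what drives the $N^{\rm fit}_{\pi\pi}$ systematic discussed below.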
The \ensuremath{D^0\to\ell^+\ell^-}\ branching fraction is given by % \begin{equation} {\cal B}_{\ell\ell} = \left(\frac{N_{\ell\ell}}{N_{\pi\pi}^{\rm fit}}\right) \ \left(\frac{\epsilon_{\pi\pi}}{\epsilon_{\ell\ell}}\right) \ {\cal B}_{\pi\pi} \ = \ S_{\ell\ell} \ \cdot \ N_{\ell\ell} \end{equation} % where $N_{\ell\ell}$ is the number of \ensuremath{D^0\to\ell^+\ell^-}\ signal candidates, $N_{\pi\pi}^{\rm fit}$ is the number of \ensuremath{D^0\to\pi^+\pi^-}\ candidates from the fit, $\epsilon_{\pi\pi}$ and $\epsilon_{\ell\ell}$ are the efficiencies for the corresponding decay modes, ${\cal B}_{\pi\pi} = (1.400 \pm 0.026) \times 10^{-3}$ is the \ensuremath{D^0\to\pi^+\pi^-}\ branching fraction~\cite{PDG}, and $S_{\ell\ell}$ is defined by % \begin{equation} \label{eq:S} S_{\ell\ell} \ \equiv \ \frac{ {\cal B}_{\pi\pi} } { N_{\pi\pi}^{\rm fit} } \frac{ \epsilon_{\pi\pi} }{ \epsilon_{\ell\ell} }. \end{equation} % The expected observed number of events in the signal region is given by % \begin{equation} N_{\rm obs} \ = \ {\cal B}_{\ell\ell} / S_{\ell\ell} + N_{BG}. \end{equation} % The uncertainties on $S_{\ell\ell}$ and $N_{BG}$ are incorporated into a likelihood function by convolving a Poisson PDF in $N_{\rm obs}$ with Gaussian PDFs in $S_{\ell\ell}$ and $N_{BG}$. % We determine 90\% confidence level intervals using the likelihood ratio ordering principle of Feldman and Cousins~\cite{feldman} to construct the confidence belts. % The estimated branching fractions and one standard deviation uncertainties are determined from the values of ${\cal B}_{\ell\ell}$ that maximize the likelihood and give a change of 0.5 in the log likelihood relative to the maximum, respectively. \subsection{ Systematic uncertainties } Table~\ref{tab:syst} summarizes the systematic uncertainties. 
% Several of the uncertainties in $\epsilon_{\pi\pi} / \epsilon_{\ell\ell}$ cancel, including tracking efficiency for the \ensuremath{D^0}\ daughters, slow pion efficiency, and the efficiencies of the ${\cal F}$ and \ensuremath{D^0}\ momentum requirements. % The uncertainty on $\epsilon_{\pi\pi}/\epsilon_{\ell\ell}$ due to particle identification is 4\%. % Bremsstrahlung creates a low-side tail in the \ensuremath{D^0}\ mass distributions for the \ensuremath{D^0\to e^+e^-}\ and \ensuremath{D^0\to e^\pm\mu^\mp}\ decay modes. % The uncertainty on $\epsilon_{\ell\ell}$ due to the modeling of this tail is 3\% for \ensuremath{D^0\to e^+e^-}\ and 2\% for \ensuremath{D^0\to e^\pm\mu^\mp}\ . % The Crystal Ball shape parameters that describe the low-side tail of the \ensuremath{D^0}\ mass distribution were varied, leading to an uncertainty of 1.1\% to 1.3\% on $N^{\rm fit}_{\pi\pi}$. % We use the world average for the \ensuremath{D^0\to\pi^+\pi^-}\ branching fraction~\cite{PDG}, which has an uncertainty of 1.9\%. % We combine the above relative uncertainties in quadrature resulting in 4.6\% to 5.4\% systematic uncertainties on $S_{\ell\ell}$. The \ensuremath{D^0}\ mass range for the fit used to determine the combinatorial background PDF was varied from [1.70, 2.05] GeV to [1.80, 2.05] GeV. The difference in the resulting signal-to-sideband ratio $R_{\rm cb}$ is taken as a systematic uncertainty. % The pion misidentification probabilities for $e$ and $\mu$ measured in data are in good agreement with the MC simulation. % We use the larger of either the difference between the data and the MC or the statistical uncertainty on the MC misidentification probabilities as a systematic uncertainty. % For the \ensuremath{D^0\to \mu^+\mu^-}\ decay mode, we take the uncertainty on the MC estimate for the $G$ factor of 8\% as a systematic uncertainty on the $G$ estimate from the \ensuremath{D^0\to K^-\pi^+}\ data control sample. 
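The quadrature combination quoted above is plain arithmetic and can be reproduced from the per-channel components of Table~\ref{tab:syst} (values copied from the table; the small residual differences against the quoted totals reflect rounding of the tabulated inputs):

```python
import math

def add_in_quadrature(components):
    """Combine independent relative uncertainties (in percent) in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# Relative uncertainties on S_ll, in %: particle ID, bremsstrahlung
# modeling (absent for the dimuon mode), N_pipi fit, and B(D0 -> pi pi).
channels = {
    "ee":   [4.0, 3.0, 1.2, 1.9],
    "mumu": [4.0, 1.3, 1.9],
    "emu":  [4.0, 2.0, 1.1, 1.9],
}
for name, comps in channels.items():
    print(f"{name}: {add_in_quadrature(comps):.1f}%")
```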
\begin{table*} % \begin{center} % \caption{ Systematic uncertainties. The uncertainty on $S_{\ell\ell}$ results from the uncertainties on $\epsilon_{\pi\pi}/\epsilon_{\ell\ell}$, $N^{\rm fit}_{\pi\pi}$, and ${\cal B}_{\pi\pi}$ added in quadrature. The systematic uncertainty on the overall background $N_{BG}$ is obtained from the uncertainties on $N^{BG}_{\pi\pi}$ and $N_{\rm cb}$ added in quadrature. } \label{tab:syst} % \begin{tabular}{lccc} \hline\hline & $D^0\ensuremath{\rightarrow}\xspace e^+e^-$ & $D^0\ensuremath{\rightarrow}\xspace \mu^+\mu^-$ & $D^0\ensuremath{\rightarrow}\xspace e^\pm\mu^\mp$ \rule[-0.15cm]{0cm}{0.5cm} \\ \hline\hline % $\epsilon_{\pi\pi}/\epsilon_{\ell\ell}$, particle ID & 4\% & 4\% & 4\% \rule[-0.15cm]{0cm}{0.5cm} \\ % \hline $\epsilon_{\pi\pi}/\epsilon_{\ell\ell}$, Bremsstrahlung \ \ \ \ & 3\% & --- & 2\% \rule[-0.15cm]{0cm}{0.5cm} \\ % \hline $N^{\rm fit}_{\pi\pi}$ & 1.2\% & 1.3\% & 1.1\% \rule[-0.15cm]{0cm}{0.5cm} \\ % \hline ${\cal B}_{\pi\pi}$ & 1.9\% & 1.9\% & 1.9\% \rule[-0.15cm]{0cm}{0.5cm} \\ % \hline $S_{\ell\ell}$ & 5.4\% & 4.6\% & 5.0\% \rule[-0.15cm]{0cm}{0.5cm} \\ % \hline \hline % $N^{BG}_{\pi\pi}$ & \ \ \ \ 11\% (0.004 events) \ \ \ \ & \ \ \ \ 16\% (0.43 events) \ \ \ \ & \ \ \ \ 5\% (0.02 events) \ \ \ \ \rule[-0.15cm]{0cm}{0.5cm} \\ % \hline $N_{\rm cb}$, $R_{\rm cb}$ & 36\% (0.35 events) & 20\% (0.25 events) & 19\% (0.20 events) \rule[-0.15cm]{0cm}{0.5cm} \\ % \hline $N_{BG}$ & 0.35 events & 0.50 events & 0.20 events \\ % \hline\hline \end{tabular} \end{center} \end{table*} \subsection{ Branching Fraction Results } \begin{table*} % % \begin{center} % \caption{ Results for the observed event yields ($N_{\rm obs}$), estimated background ($N_{BG}$), and signal branching fractions (${\cal B}_{\ell\ell}$). The first uncertainty is statistical and the second systematic. 
% $N_{SB}$ is the observed number of events in the sideband, $R_{\rm cb}$ is the signal-to-sideband ratio for combinatorial background, $N_{\rm cb}$ and $N_{\pi\pi}^{BG}$ are the estimated combinatorial and \ensuremath{D^0\to\pi^+\pi^-}\ backgrounds in the signal region, $N^{\rm fit}_{\pi\pi}$ is the fitted yield in the \ensuremath{D^0\to\pi^+\pi^-}\ control sample, $\epsilon_{\pi\pi}$ and $\epsilon_{\ell\ell}$ are the $\pi\pi$ control sample and signal selection efficiencies, determined from Monte Carlo samples, which have negligible statistical uncertainties. % The systematic uncertainty on $\epsilon_{\pi\pi}/\epsilon_{\ell\ell}$ is included in the systematic uncertainty on $S_{\ell\ell}$, which is defined in Eqn.~(\ref{eq:S}). } \label{tab:results} % \begin{tabular}{lccc} \hline\hline & $D^0\ensuremath{\rightarrow}\xspace e^+e^-$ & $D^0\ensuremath{\rightarrow}\xspace \mu^+\mu^-$ & $D^0\ensuremath{\rightarrow}\xspace e^\pm\mu^\mp$ \rule[-0.15cm]{0cm}{0.5cm} \\ \hline\hline $N_{SB}$ & 8 & 27 & 24 \rule[-0.15cm]{0cm}{0.5cm} \\ \hline $R_{\rm cb}$ & \ \ \ \ $0.121 \pm 0.023 \pm 0.044$ \ \ \ \ & \ \ \ \ $0.046 \pm 0.005 \pm 0.009$ \ \ \ \ & \ \ \ \ $0.042 \pm 0.006 \pm 0.008$ \ \ \ \ \rule[-0.15cm]{0cm}{0.5cm} \\ \hline $N_{\rm cb}$ & \ \ \ \ $0.97 \pm 0.39 \pm 0.35$ \ \ \ \ & \ \ \ \ $1.24 \pm 0.27 \pm 0.25$ \ \ \ \ & \ \ \ \ $1.00 \pm 0.25 \pm 0.20$ \ \ \ \ \rule[-0.15cm]{0cm}{0.5cm} \\ \hline $N_{\pi\pi}^{BG}$ & $0.037 \pm 0.012 \pm 0.004$ & $2.64 \pm 0.22 \pm 0.43 $ & $0.42 \pm 0.08 \pm 0.02 $ \rule[-0.15cm]{0cm}{0.5cm} \\ \hline $N_{BG}$ & $1.01 \pm 0.39 \pm 0.35$ & $3.88 \pm 0.35 \pm 0.50$ & $1.42 \pm 0.26 \pm 0.20$ \rule[-0.15cm]{0cm}{0.5cm} \\ \hline\hline $N^{\rm fit}_{\pi\pi}$ & $39930 \pm 210 \pm 490$ & $51800 \pm 240 \pm 660$ & $39840 \pm 210 \pm 430$ \rule[-0.15cm]{0cm}{0.5cm} \\ \hline $\epsilon_{\pi\pi}$ & 14.4\% & 18.7\% & 14.6\% \rule[-0.15cm]{0cm}{0.5cm}\\ \hline $\epsilon_{\ell\ell}$ & 9.48\% & 6.29\% & 6.97\% \rule[-0.15cm]{0cm}{0.5cm} \\ \hline 
$S_{\ell\ell} \ (\times 10^{-9})$ & $53.4 \pm 0.2 \pm 2.9$ & $80.6 \pm 0.4 \pm 3.7$ & $73.9 \pm 0.4 \pm 3.7$ \rule[-0.15cm]{0cm}{0.5cm} \\ \hline\hline $N_{\rm obs}$ & 1 & 8 & 2 \rule[-0.15cm]{0cm}{0.5cm} \\ \hline ${\cal B}_{\ell\ell} \ (\times 10^{-7})$ & $0.1\ ^{+0.7}_{-0.4}$ & $3.3\ ^{+2.6}_{-2.0}$ & $0.5\ ^{+1.3}_{-0.9}$ \rule[-0.15cm]{0cm}{0.5cm} \\ \hline ${\cal B}_{\ell\ell} \ (\times 10^{-7})$ 90\% C.I. \ \ \ \ & $<1.7$ & $[0.6,8.1]$ & $<3.3$ \rule[-0.15cm]{0cm}{0.5cm} \\ \hline\hline \end{tabular} % % % \end{center} \end{table*} Table~\ref{tab:results} presents the results, where $N_{SB}$ is the number of events in the upper sideband, $N_{\rm cb}$ is the expected number of combinatorial background events in the signal window, $N_{\pi\pi}^{BG}$ is the number of events from the \ensuremath{D^0\to\pi^+\pi^-}\ peaking background, and $N_{BG}$ (data) is the expected number of total background events in the data. For the \ensuremath{D^0\to e^+e^-}\ and \ensuremath{D^0\to e^\pm\mu^\mp}\ channels, the event yield in the signal region is consistent with background only. % We observe 1 and 2 events with expected backgrounds of $1.0 \pm 0.5$ and $1.4 \pm 0.3$ events for the \ensuremath{D^0\to e^+e^-}\ and \ensuremath{D^0\to e^\pm\mu^\mp}\ channels, respectively. % The 90\% confidence interval upper limits for the branching fractions are $<1.7\times 10^{-7}$ for \ensuremath{D^0\to e^+e^-}\ and $<3.3 \times 10^{-7}$ for \ensuremath{D^0\to e^\pm\mu^\mp}\ . For the \ensuremath{D^0\to \mu^+\mu^-}\ channel, we observe 8 events in the signal region, where we expect $3.9 \pm 0.6$ background events. % There is a cluster of \ensuremath{D^0\to \mu^+\mu^-}\ candidate events in Fig.~\ref{fig:dmvsd0m} just above and below the lower \ensuremath{D^0}\ mass edge of the signal region, where the \ensuremath{D^0\to\pi^+\pi^-}\ background is expected. 
% We expect $7.5 \pm 0.8$ \ensuremath{D^0\to\pi^+\pi^-}\ events in the entire [1.7, 2.05] GeV \ensuremath{D^0}\ mass range, with 93\% of these events falling within the narrower [1.830,1.875] GeV range. % The combinatorial background in the [1.830,1.875] GeV \ensuremath{D^0}\ mass interval is expected to be $1.8 \pm 0.6$ events, giving a total expected background of $8.8 \pm 1.1$ events. % In this interval, we observe 15 events. % The probability of observing 15 or more events when $8.8 \pm 1.1$ events are expected is 4.6\%, which corresponds to a 1.7 standard deviation upward fluctuation from the mean for a Gaussian distribution (i.e. $({\rm Erf}(1.7/\sqrt{2})+1)/2 = 1 - 0.046$). % The probability of observing 8 events when $3.9 \pm 0.6$ events are expected is 5.4\%. % We conclude that the excess over the expected background is not statistically significant. % The Feldman-Cousins method results in a two-sided 90\% confidence interval for the \ensuremath{D^0\to \mu^+\mu^-}\ branching fraction of $[0.6, 8.1]\times 10^{-7}$. In summary, we have searched for the leptonic charm decays \ensuremath{D^0\to e^+e^-}\ , \ensuremath{D^0\to \mu^+\mu^-}\ , and \ensuremath{D^0\to e^\pm\mu^\mp}\ using 468~fb$^{-1}$ of integrated luminosity recorded by the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ experiment. % We find no statistically significant excess over the expected background. % These results supersede our previous results~\cite{prev-babar} and are consistent with the results of the Belle experiment~\cite{belle}, which has set 90\% confidence level upper limits of $<0.79 \times 10^{-7}$, $<1.4 \times 10^{-7}$, and $<2.6 \times 10^{-7}$, for the \ensuremath{D^0\to e^+e^-}\ , \ensuremath{D^0\to \mu^+\mu^-}\ , and \ensuremath{D^0\to e^\pm\mu^\mp}\ branching fractions, respectively. 
% The LHCb experiment has recently presented preliminary search results~\cite{lhcb} for \ensuremath{D^0\to \mu^+\mu^-}\ , where they find no evidence for this decay and set an upper limit on the branching fraction of $< 1.3 \times 10^{-8}$ at 95\% C.L. \section*{ Acknowledgments } \par \input{acknowledgements.tex}
\section{Logarithmic lower bound} \label{sec:LB-log} \ascomment{just copied from the "old" LB section.} Our first lower bound concerns the dependence on $\Dmin$, and holds for any given value thereof. \begin{theorem}[dependence on $\Dmin$]\label{thm:mainLB} Posit an arbitrary time horizon $T$, budget $B = \nicefrac{T}{2}$, and $d=2$ resources (including time). For any $\eps\in(\nicefrac{1}{\sqrt{T}},\nicefrac{1}{4})$, there exist two $3\times 2$ problem instances $\mI$, $\mI'$ with $\Dmin=\eps/2$ such that any algorithm incurs regret $\OPTFD-\E[\term{REW}] \geq \Omega\rbr{\Dmin^{-2}}$ on either $\mI$ or $\mI'$. In fact, both problem instances have deterministic resource consumption. \end{theorem} The problem instances for Theorem~\ref{thm:mainLB} are constructed as follows. For arm $A_1$, resource consumption is deterministically $\nicefrac{1}{2}$, and reward is drawn from $\term{Bernoulli}(\nicefrac{1}{4})$. For arm $A_2$, the consumption is deterministically $1$, and the reward is drawn from $\term{Bernoulli}(\nicefrac{1}{2} \pm \nicefrac{\eps}{2})$.\footnote{That is, the reward for arm $A_2$ is $\term{Bernoulli}(\nicefrac{1}{2} + \nicefrac{\eps}{2})$ for one instance, and $\term{Bernoulli}(\nicefrac{1}{2} - \nicefrac{\eps}{2})$ for another.} That $\Dmin$ is as claimed follows immediately from the construction. Because of the small difference between the two instances, any algorithm will choose a ``wrong'' arm at least $\Omega(\eps^{-2})$ times on one of the instances, which in turn implies $\Omega(\eps^{-2})$ regret. The full proof is in Section~\ref{sec:LB-gap}. \ascomment{the rest is the proof} Let $\vec{X}^*_\mJ$ be the optimal distribution over arms for the linear relaxation \eqref{lp:primalAbstract} of a given problem instance $\mJ$. $\vec{X}^*_\mI$ chooses arm $A_1$ with probability $1$, giving an instantaneous expected reward of $\nicefrac{1}{4}$ on instance $\mI$. $\vec{X}^*_{\mI'}$ randomizes uniformly between arm $A_2$ and the null arm. 
Its instantaneous expected reward on instance $\mI'$ is $\nicefrac{1}{4} + \nicefrac{\eps}{2}$. In both problem instances, $\Dmin = \nicefrac{\eps}{2}$. Consider an arbitrary algorithm \term{ALG}. Apply Lemma~\ref{lem:bestArm} with quasi-rewards\xspace as rewards, and \[ H := \{\text{distributions } \vec{X}: \| \vec{X}- \vec{X}^*_\mI \|_1 \geq \tfrac{3}{4} \}.\] \kaedit{Let $\vec{Y}_t$ denote the distribution chosen by the algorithm at time $t$. If $\vec{Y}_t \in H$, by definition this implies that $\| \vec{Y}_t- \vec{X}^*_\mI \|_1 \geq \tfrac{3}{4}$. If $\vec{Y}_t \notin H$, by definition this implies that $\| \vec{Y}_t- \vec{X}^*_\mI \|_1 < \tfrac{3}{4}$. This can be rewritten as $\| \vec{Y}_t- \vec{X}^*_{\mI'} + \vec{X}^*_{\mI'} - \vec{X}^*_\mI \|_1 < \tfrac{3}{4}$. Using the triangle inequality and the definitions of the distributions $\vec{X}^*_\mI$ and $\vec{X}^*_{\mI'}$, this implies that $\tfrac{5}{4} \leq \| \vec{Y}_t- \vec{X}^*_{\mI'} \|_1 \leq \tfrac{11}{4}$.} \kaedit{We also use the following algebra in the proof. First, note that for any two distributions $\vec{X}, \vec{Y} \in \Delta_K$, we have $\| \vec{X} - \vec{Y} \|_1 = 2 \cdot \sum_{j \in [K]: X_j \geq Y_j} (X_j - Y_j)$. Next, if $\| \vec{X} - \vec{Y} \|_1 \geq \alpha$, this implies that there exists a $j \in [K]$ such that $X_j - Y_j \geq \frac{\alpha}{2K}$. } We now use the above observations to complete the proof. \asedit{We pick $\eps$ such that $T = \frac{c}{\eps^2}$. We apply the lemma to each round $t\in [T]$, and let $\mJ_t \in \{ \mI,\, \mI' \}$ be the corresponding problem instance guaranteed by the lemma. 
If $\mI$ appears in this sequence at least $T/2$ times, we have the following.} The expected regret incurred by $\term{ALG}$ is at least \begin{align*} &\sum_{t \in [T]:\; \mJ_t = \mI} \E\sbr{ \mathbb{I}[\vec{Y}_t \in H]} \cdot \langle \vec{X}^*_\mI - \vec{Y}_t, \vec{r} \rangle \\ &\qquad\geq \sum_{t \in [T]:\; \mJ_t = \mI} \E\sbr{ \mathbb{I}[\vec{Y}_t \in H]} \cdot \| \vec{X}^*_\mI - \vec{Y}_t \|_1 \cdot \left( \min_{j \in [3]} r_j \right) & \EqComment{by triangle inequality} \\ &\qquad\geq \frac{c}{2 \eps^2} \cdot \nicefrac{1}{4} \cdot \nicefrac{3}{4} \cdot \rbr{\nicefrac{1}{4} - \eps} & \EqComment{by defn. of $H$ and the construction}\\ &\qquad\geq \Omega\rbr{\eps^{-2}} = \Omega\rbr{\Dmin^{-2}}. \end{align*} Likewise, if $\mI'$ appears in this sequence at least $T/2$ times, then the expected regret incurred by $\term{ALG}$ is at least \begin{align*} &\sum_{t \in [T]:\; \mJ_t = \mI'} \E\sbr{ \mathbb{I}[\vec{Y}_t \notin H]} \cdot \langle \vec{X}^*_{\mI'} - \vec{Y}_t, \vec{r} \rangle \\ &\qquad\geq \sum_{t \in [T]:\; \mJ_t = \mI'} \E\sbr{ \mathbb{I}[\vec{Y}_t \notin H]} \cdot \| \vec{X}^*_{\mI'} - \vec{Y}_t \|_1 \cdot \left( \min_{j \in [3]} r_j \right) & \EqComment{by triangle inequality} \\ &\qquad\geq \frac{c}{2 \eps^2} \cdot \nicefrac{1}{4} \cdot \nicefrac{5}{4} \cdot \rbr{\nicefrac{1}{4} - \eps} & \EqComment{by defn. of $H$ and the construction}\\ &\qquad\geq \Omega\rbr{\eps^{-2}} = \Omega\rbr{\Dmin^{-2}}. \end{align*} \subsection{Results and intuition} We consider problem instances with three arms $\{A_1,A_2,\term{null}\}$, Bernoulli rewards, and $d\geq 2$ resources, one of which is time; call them \emph{$3\times d$ instances}. 
Each lower bound constructs two similar problem instances $\mI,\mI'$ such that any algorithm incurs high regret on at least one of them.% \footnote{A standard approach for lower-bounding regret in multi-armed bandits is to construct multiple problem instances. A notable exception is the celebrated $\Omega(\log T)$ lower bound in \citet{Lai-Robbins-85}, which considers one (arbitrary) problem instance, but makes additional assumptions on the algorithm.} The two instances have the same parameters $T,K,d,B$, and the mean reward and the mean consumption for each arm and each resource differ by at most $\eps$; we call them \emph{$\eps$-perturbations} of each other. We start with an ``original'' problem instance $\mI_0$ and construct problem instances $\mI,\mI'$ that are small perturbations of $\mI_0$. This is a fairly general result: unlike many bandit lower bounds that focus on a specific pair $\mI,\mI'$, we allow a wide range for $\mI_0$, as per the assumption below. \begin{assumption}\label{ass:LBAss} There exists an absolute constant $c_{\mathtt{LB}}\in (0,\nicefrac13)$ such that: \begin{OneLiners} \item[1.] \label{boundReward} $r(A_i),\,c_j(A_i)\in [c_{\mathtt{LB}},\,1-c_{\mathtt{LB}}]$ for each arm $i\in \{1,2\}$ and each resource $j$. \item[2.] \label{rewardDiff} $r(A_2) - r(A_1) \geq c_{\mathtt{LB}}$ and $c_j(A_2) - c_j(A_1) \geq c_{\mathtt{LB}} + \Dmin$ for every resource $j \in [d]$. \item[3.] \label{boundOPT} $B \leq c_{\mathtt{LB}} \cdot T \leq \OPTFD$. \item[4.] \label{boundLagrange} The Lagrangian gap is not extremely small: $\Dmin \geq c_{\mathtt{LB}}/\sqrt{T}$. \end{OneLiners} \end{assumption} For a concrete example, let us construct a family of $3\times d$ problem instances that satisfy these assumptions. Fix some absolute constants $\eps,c_{\mathtt{LB}} \in (0, \nicefrac13)$ and time horizon $T$. 
The problem instance is defined as follows: budget $B= c_{\mathtt{LB}}\,T$, mean rewards $r(A_1) = \tfrac{1-c_{\mathtt{LB}}}{2}$ and $r(A_2) = 1- c_{\mathtt{LB}} - \eps$, mean consumptions $c(A_1) = c_{\mathtt{LB}} - \epsilon$ and $c(A_2) = 2 c_{\mathtt{LB}}$. Parts (1-4) of Assumption \ref{ass:LBAss} hold trivially. One can work out that $\Dmin = \eps$, so part (4) holds as long as $\eps \geq c_{\mathtt{LB}}/\sqrt{T}$. \begin{theorem}\label{thm:LB-root} Posit an arbitrary time horizon $T$, budget $B$, and $d$ resources (including time). Fix any $3\times d$ problem instance $\mI_0$ which satisfies Assumption~\ref{ass:LBAss}. In part (a), assume that $d=2$ and $\mI_0$ is far from being best-arm-optimal, in the sense that \begin{align}\label{eq:cor:bestArmOptimal} \text{There exists an optimal solution $\vec{X}^*$ such that $X(A_1)> 2 c_{\mathtt{LB}}^4/\sqrt{T}$ and $X(A_2) \geq c_{\mathtt{LB}}$}. \end{align} In part (b), assume that $d>2$. For both parts, there exist problem instances $\mI,\mI'$, which are $\mathcal{O}\rbr{\nicefrac{1}{\sqrt{T}}}$-perturbations of $\mI_0$, such that \begin{align}\label{eq:LB-guarantee} \text{Any algorithm incurs regret $\OPTFD-\E[\term{REW}] \geq \Omega(\; c_{\mathtt{LB}}^{4}\; \sqrt{T} \;)$ on $\mI$ or $\mI'$} \end{align} \end{theorem} For part (a), instance $\mI$ has the same expected outcomes as $\mI_0$ (but possibly different outcome distributions); we call such problem instances \emph{mean-twins}. For part (b), one can take $\mI_0$ to be best-arm-optimal. For both parts, the problem instances $\mI,\mI'$ require randomized resource consumption. Both parts follow from a more generic lower bound which focuses on linear independence of per-resource consumption vectors $\vec{c}_j := \rbr{ c_j(A_1),\, c_j(A_2),\, c_j(\term{null})} \in [0, 1]^3$, resources $j\in[d]$. \begin{theorem}\label{thm:generalLB} Posit an arbitrary time horizon $T$, budget $B$, and $d\geq 2$ resources (including time). 
Fix any $3\times d$ problem instance $\mI_0$ that satisfies Assumption~\ref{ass:LBAss} and \refeq{eq:cor:bestArmOptimal}.
Assume that the consumption vectors $\vec{c}_j$, $j\in[d]$, are linearly independent.
Then there are instances $\mI,\mI'$ which are $\eps$-perturbations of $\mI_0$, with $\eps = 2\,c_{\mathtt{LB}}^2 / \sqrt{T}$, which satisfy \eqref{eq:LB-guarantee}.
In fact, $\mI$ is a mean-twin of $\mI_0$.
\end{theorem}
\begin{myproof}[Sketch \textnormal{(see Appendix~\ref{sec:LB-generic} for full proof).}]
Let $r(a)$ and $\vec{c}(a)\in [0,1]^d$ be, resp., the mean reward and the mean resource consumption vector for each arm $a$ for instance $\mI_0$.
Let $\eps = c_{\mathtt{LB}}/\sqrt{T}$.
Problem instances $\mI,\mI'$ are constructed as follows.
For both instances, the rewards of each non-null arm $a\in \{A_1,A_2\}$ are deterministic and equal to $r(a)$.
The resource consumption vector for arm $A_1$ is deterministic and equals $\vec{c}(A_1)$.
The resource consumption vector of arm $A_2$ in each round $t$, denoted $\vec{c}_{(t)}(A_2)$, is a carefully constructed random vector whose expectation is $\vec{c}(A_2)$ for instance $\mI$, and slightly less for instance $\mI'$.
Specifically, $\vec{c}_{(t)}(A_2) = \vec{c}(A_2)\cdot W_t/(1-c_{\mathtt{LB}}) $, where $W_t$ is an independent Bernoulli random variable which correlates the consumption of all resources.
We posit $\E[W_t] = 1-c_{\mathtt{LB}}$ for instance $\mI$, and $\E[W_t] = 1-c_{\mathtt{LB}}-\eps$ for instance $\mI'$.
Because of the small differences between $\mI,\mI'$, any algorithm will choose a sufficiently ``wrong'' distribution over arms sufficiently often.
The assumption in \refeq{eq:cor:bestArmOptimal} and the linear independence condition are needed to ensure that ``wrong'' choices by the algorithm result in large regret.
\end{myproof}
The two parts of Theorem~\ref{thm:LB-root} are obtained as follows.
For Theorem~\ref{thm:LB-root}(a), problem instance $\mI_0$ trivially satisfies all preconditions in Theorem~\ref{thm:generalLB}.
Indeed, letting time be resource $1$, the per-resource vectors are $\vec{c}_1 = (0,0,1)$ and $\vec{c}_2 = (\,\cdot\,,\,\cdot\,,\,0)$, hence they are linearly independent.
For Theorem~\ref{thm:LB-root}(b), we use some tricks from the literature to transform the original problem instance $\mI_0$ to another instance $\widetilde{\mI}_0$ which satisfies \refeq{eq:cor:bestArmOptimal} and the linear independence condition.
The full proof is in Section~\ref{sec:LB-cor}.
\OMIT{
\kaedit{The problem instances for Theorem~\ref{cor:multipleResources} are constructed as follows. Define
\[ \zeta_1 := \min \left \{ \tfrac{1}{\sqrt{T}}, \{c_j(A_i)\}_{j \in [d], i \in [2]}, \frac{1}{(d!)^2} \right \}, \qquad \text{and} \]
\[ \zeta_2 := \min \left\{ \{c_j(A_i)\}_{j \in [d], i \in [2]}, \tfrac{1}{\sqrt{T}} \right \}. \]
Given instance $\mI_0$, we construct instance $\mI$ by decreasing the mean consumption on arm $A_i$ and resource $j$ (except time) by $\zeta_1^j + u_j(A_i)$ where $u_j(a) \sim [-\zeta_2, \zeta_2]$ uniformly at random.
We keep the mean rewards the same.
As before $\mI'$ is obtained from $\mI$ by decreasing the mean consumption on all resources, except time, of one arm (say $A_2$) by $\eps = \mathcal{O}\left(\frac{1}{\sqrt{T}}\right)$ while keeping all other mean rewards/consumptions the same as in $\mI$.}
}
\section{Logarithmic regret bounds [OBSOLETE!]}
\label{sec:algorithm}
We analyze a version of \term{UcbBwK} which ``prunes out'' the null arm, call it \term{PrunedUcbBwK}.%
\footnote{This modification can only improve regret, so it retains the near-optimal worst-case performance of \term{UcbBwK}.}
We prove that this algorithm enjoys logarithmic instance-dependent regret bounds under two assumptions: $d=2$ (only one resource other than time) and best-arm-optimality (Definition~\ref{def:best-arm-optimal}).
These assumptions are essentially necessary to improve over $\Omega(\sqrt{T})$ regret (see Section~\ref{sec:LB}).
\asedit{We make these two assumptions throughout this section, unless specified otherwise.
The best arm is denoted $a^*$.
We use $c(a)$ to denote the mean consumption of the non-time resource on arm $a$.
Our results have two cases, depending on whether $c(a^*)$ is very close to $\nicefrac{B}{T}$.}
\OMIT{To simplify presentation, we first define an algorithm that uses the null arm, and then transform it so that the null arm is ``pruned''.
To make this well-defined, the algorithm runs indefinitely until it is stopped.
All regret bounds are proved for the ``pruned'' version.}
\OMIT{
Our algorithm is the $\term{UcbBwK}$ algorithm from \cite{AgrawalDevanur-ec14}, defined in Section~\ref{sec:prelims-conf}, with additional exploration steps that are crucial for obtaining logarithmic regret.
In a given round $t$, we compute the solution $\vec{X}_t$ to the optimistic LP~\eqref{lp:UCBBwK}, and sample from this distribution.
See Algorithm~\ref{alg:UCBBwK}.
\begin{algorithm2e}[!h]
\caption{Algorithm \term{UcbBwK}}
\label{alg:UCBBwK}
\DontPrintSemicolon
\SetKwInOut{Input}{input}
\Input{parameters $B,K,d,T$.}
Initialize $t = 1$. \;
\While{the algorithm is not stopped}{
\begin{minipage}{15cm}
\begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item \label{alg:LPSolving} \underline{\textbf{Solve LP.}} Obtain optimal solution $\vec{X} = \vec{X}_t$ to the optimistic LP~\eqref{lp:UCBBwK}.\;
\item \label{alg:mainSample} \underline{\textbf{UCB-round.}} Sample arm $a_t$ from distribution $\vec{X}$. \;
\item Advance round $t\leftarrow t+1$ \;
\end{enumerate}
\end{minipage}
}
\end{algorithm2e}
}
Algorithm \term{PrunedUcbBwK} is formally defined as follows: in each round $t$, call \term{UcbBwK} as an oracle, repeat until it chooses a non-null arm $a$, and set $a_t=a$.
(In one ``oracle call'', \term{UcbBwK} outputs an arm and inputs an outcome vector for this arm.)
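As an illustration only (not part of the formal development), this repeat-until-non-null wrapper can be sketched in Python; `ucb_bwk_step` below is a hypothetical stand-in for one oracle call to \term{UcbBwK}, with a fixed toy distribution in place of the LP solution $\vec{X}_t$.

```python
import random

NULL_ARM = 0  # the null arm: zero reward, zero consumption

def ucb_bwk_step(rng):
    """Hypothetical stand-in for one oracle call to UcbBwK: returns an
    arm sampled from the current LP solution (here: a fixed toy distribution)."""
    return rng.choices([NULL_ARM, 1, 2], weights=[0.3, 0.4, 0.3])[0]

def pruned_round(rng, max_calls=10_000):
    """One round of the pruned algorithm: repeat oracle calls until a
    non-null arm comes up, then play that arm."""
    for _ in range(max_calls):  # cap on oracle calls (cf. the overall call budget)
        a = ucb_bwk_step(rng)
        if a != NULL_ARM:
            return a
    return NULL_ARM  # call budget exhausted: play the null arm from now on

rng = random.Random(0)
arms = [pruned_round(rng) for _ in range(100)]
assert all(a != NULL_ARM for a in arms)  # pruning never plays the null arm
```

The cap on oracle calls mirrors the overall bound on the number of oracle calls after which the algorithm may simply output the null arm.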
The algorithm stops after it has made a total of $\Nmax$ oracle calls,%
\footnote{Equivalently, the algorithm can output the null arm in all subsequent rounds.}
where $\Nmax = \Theta(T^2\;\log T)$ (with a sufficiently large absolute constant).
\begin{theorem}\label{thm:logRegretUpper}
Suppose there is only one resource other than time (\ie $d=2$), and $\eta_{\textsc{lp}}\leq \tfrac12$ in \eqref{eq:prelims-eta}.
Consider a best-arm-optimal instance $\mI$ with Lagrangian gap $\Dmin > 0$.
\begin{itemize}
\item[(i)] Algorithm \term{PrunedUcbBwK} achieves regret
\begin{align}\label{eq:logRegretSecondEq}
\OPTFD - \E[\term{REW}] \textstyle \leq \mO\rbr{ \sum_{a \neq a^*} G_\term{LAG}^{-2}(a)\; \logThm}.
\end{align}
\item[(ii)] Moreover, if $|c(a^*)-\nicefrac{B}{T}| > \mO\rbr{\nicefrac{B}{T}\cdot \sqrt{\nicefrac{K}{B}\cdot \logThm}} $ then
\begin{align}\label{eq:logRegretMainEq}
\OPTFD - \E[\term{REW}] \textstyle \leq \mO\rbr{ \sum_{a \neq a^*} G_\term{LAG}^{-1}(a)\; \logThm\; + \OPTFD/B }.
\end{align}
\end{itemize}
\end{theorem}
\OMIT{
\begin{theorem}[worst-case]\label{thm:worst-case}
Assume $\eta_{\textsc{lp}}\leq \tfrac12$ in \eqref{eq:prelims-eta}.
Then \term{PrunedUcbBwK} achieves
\begin{align}\label{eq:WSRegret}
\OPTDP - \E[\term{REW}] \leq \mO\left( \sqrt{\logThm} \left( \OPTDP\sqrt{\nicefrac{K}{B}} + \sqrt{K\,\OPTDP} \right) \right).
\end{align}
\end{theorem}
\begin{remark}\label{rem:logRegret-worstCase}
\refeq{eq:WSRegret} is the worst-case optimal regret bound \eqref{eq:regret-opt} and follows directly from the analysis in \cite{AgrawalDevanur-ec14}.
\kacomment{We don't need to prove anything here.}
\end{remark}
}
\subsection{Preliminaries: information-theoretic impossibility}
We rely on a well-known information-theoretic result for multi-armed bandits: essentially, no algorithm can reliably tell apart two bandit instances at time $T$ if they differ by at most $O(1/\sqrt{T})$.%
\footnote{This strategy for proving lower bounds in multi-armed bandits goes back to \citet{bandits-exp3}. Lemma~\ref{lem:bestArm} is implicit in \citet{bandits-exp3}, see \citet[Lemma 2.9]{slivkins-MABbook} for exposition.}
We formulate this result in a way that is most convenient for our applications.
\begin{lemma} \label{lem:bestArm}
Consider multi-armed bandits with Bernoulli rewards.
Fix $\eps>0$ and two problem instances $\mI,\mI'$ such that the mean reward of each arm differs by at most $\eps$ between $\mI$ and $\mI'$.
Suppose some bandit algorithm outputs distribution $\vec{Y}_t$ over arms at time $t \leq \nicefrac{c}{\eps^2}$, for a sufficiently small absolute constant $c$.
Let $H$ be an arbitrary Lebesgue-measurable set of distributions over arms.
Then either $\Pr[\vec{Y}_t \in H \mid \mJ_{t} = \mI] > \nicefrac{1}{4}$ or $\Pr[\vec{Y}_t \notin H \mid \mJ_{t} = \mI'] > \nicefrac{1}{4}$ holds.
\end{lemma}
Applying Lemma~\ref{lem:bestArm} to bandits with knapsacks requires some subtlety.
First, the rewards in the lemma will henceforth be called \emph{quasi-rewards\xspace}, as they may actually correspond to consumption of a particular resource.
Second, while a \term{BwK} algorithm receives multi-dimensional feedback in each round, the feedback other than the quasi-rewards\xspace will be the same (in distribution) for both problem instances, and hence can be considered a part of the algorithm.
Third, distribution $\vec{Y}_t$ will be the conditional distribution over arms chosen by the \term{BwK} algorithm in round $t$ given the algorithm's observations so far; we will assume this without further mention.
Fourth, we will need to specify the set $H$ of distributions (which will depend on a particular application).
Consider the rescaled $\term{LP}$~\eqref{lp:rescaledLP} with $\eta_{\textsc{lp}} := 6 \cdot \OPT_{\term{LP}} \sqrt{\frac{\log dT}{B}}$; we use this $\eta_{\textsc{lp}}$ throughout this proof.
Let $\OPT_{\term{LP}}^{\term{sc}}$ be the value of this LP.
We prove the lower bound using $\OPT_{\term{LP}}^{\term{sc}}$ as a benchmark.
This suffices by the following claim from prior work:
\footnote{Claim~\ref{claim:OPTFDLB} is a special case of Lemma 8.6 in \citet{AdvBwK-focs19} for $\tau^* = T$ and the reward/consumption for each arm, each resource and each time-step replaced with the mean reward/consumption.}
\begin{claim}[\citet{AdvBwK-focs19}]\label{claim:OPTFDLB}
$\OPT_{\term{LP}}^{\term{sc}} \leq \OPTFD$ for $\eta_{\textsc{lp}} := 6 \cdot \OPT_{\term{LP}} \sqrt{\frac{\log dT}{B}}$.
\end{claim}
\xhdr{Problem instances.}
Let $r(a)$ and $\vec{c}(a)\in [0,1]^d$ be, resp., the mean reward and the mean resource consumption vector for each arm $a$ for instance $\mI_0$.
Let $\eps = c_{\mathtt{LB}}/\sqrt{T}$.
Problem instances $\mI,\mI'$ are constructed as specified in the proof sketch; we repeat the construction here for convenience.
For both instances, the rewards of each non-null arm $a\in \{A_1,A_2\}$ are deterministic and equal to $r(a)$.
The resource consumption vector for arm $A_1$ is deterministic and equals $\vec{c}(A_1)$.
The resource consumption vector of arm $A_2$ in each round $t$, denoted $\vec{c}_{(t)}(A_2)$, is a carefully constructed random vector whose expectation is $\vec{c}(A_2)$ for instance $\mI$, and slightly less for instance $\mI'$.
Specifically, $\vec{c}_{(t)}(A_2) = \vec{c}(A_2)\cdot W_t/(1-c_{\mathtt{LB}}) $, where $W_t$ is an independent Bernoulli random variable which correlates the consumption of all resources.
We posit $\E[W_t] = 1-c_{\mathtt{LB}}$ for instance $\mI$, and $\E[W_t] = 1-c_{\mathtt{LB}}-\eps$ for instance $\mI'$.
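As a quick numerical sanity check (with illustrative values, not from the paper), the scaled-Bernoulli construction above leaves the mean consumption of $A_2$ unchanged under instance $\mI$ and decreases each coordinate by at most $\eps$ under $\mI'$:

```python
import math

c_lb = 0.2                      # the absolute constant c_LB (illustrative)
T = 10_000
eps = c_lb / math.sqrt(T)       # perturbation size eps = c_LB / sqrt(T)
c_A2 = [0.4, 0.3, 0.5]          # mean consumption vector c(A2) (illustrative)

def mean_consumption(p_w):
    """E[c_(t)(A2)] = c(A2) * E[W_t] / (1 - c_LB), where W_t ~ Bernoulli(p_w)."""
    return [c * p_w / (1 - c_lb) for c in c_A2]

mean_I  = mean_consumption(1 - c_lb)        # instance I
mean_I2 = mean_consumption(1 - c_lb - eps)  # instance I'

# Instance I is a "mean-twin" of I_0: expectations are unchanged.
assert all(abs(m - c) < 1e-12 for m, c in zip(mean_I, c_A2))
# Instance I' shifts every coordinate down, by at most eps (an eps-perturbation).
assert all(0 < c - m <= eps for m, c in zip(mean_I2, c_A2))
```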
\xhdr{Main derivation.}
From the premise of the theorem (\refeq{eq:cor:bestArmOptimal}), problem instance $\mI$ admits an optimal solution $\vec{X}^*$ that is substantially supported on both non-null arms.
Let $\vec{X}^*_{\mI}$, $\vec{X}^*_{\mI'}$ denote the optimal solutions to the scaled LP, instantiated for instances $\mI, \mI'$ respectively.
The proof proceeds as follows.
We first prove certain properties of distributions $\vec{X}^*_{\mI}$ and $\vec{X}^*_{\mI'}$.
We then use these properties and apply Lemma~\ref{lem:bestArm} with suitable quasi-rewards to complete the proof of the lower bounds.
Since we modify the mean consumption of all resources for one arm in $\mI'$, it follows that $\vec{X}^*_{\mI} \neq \vec{X}^*_{\mI'}$.
From Assumption~\ref{ass:LBAss}-\eqref{boundLagrange} we have that $\Dmin \geq c_{\mathtt{LB}}/\sqrt{T}$.
From the premise of the theorem, we have that the mean consumption vectors for the resources $j \in [d]$ are all linearly independent.
Thus, we can apply the sensitivity result in Theorem~\ref{thm:perturbedColumn} to conclude that the support of the solution $\vec{X}^*_{\mI'}$ is the same as that of $\vec{X}^*_{\mI}$.
Moreover, from the linear independence of the consumption vectors and \refeq{eq:cor:bestArmOptimal}, combined with standard $\term{LP}$ theory (see Chapter 4 on duality in \mycite{bertsimas1997introduction}), we have that there exists a resource $j^* \in [d]$ such that the optimal solution $\vec{X}^*_{\mI}$ satisfies the resource constraint with equality.
In what follows, we denote the vector $\vec{c}$ as a shorthand for $\vec{c}_{j^*}$ (\emph{i.e.,} we drop the index $j^*$).
Note that from the perturbation we have that $c(A_1) < c(A_2)$.
Thus, for some $\delta > 0$ we have $X^*_{\mI'}(A_1) = X^*_{\mI}(A_1) - \delta$ and $X^*_{\mI'}(A_2) = X^*_{\mI}(A_2) + \delta$.
Let $\| \vec{X} \|$ denote the $\ell_1$-norm of a given distribution $\vec{X}$.
Thus, we have
\begin{equation} \label{eq:diffDist}
\| \vec{X}^*_{\mI} - \vec{X}^*_{\mI'} \| = 2 \delta.
\end{equation}
Given any distribution $\vec{Y}$ over the arms, let
\begin{align}\label{eq:LPgap-defn}
\scaledV(\vec{Y}) := \textstyle (1-\eta_{\textsc{lp}})\cdot \nicefrac{B}{T}\;\cdot r(\vec{Y}) / \rbr{\max_{j\in [d]} c_j(\vec{Y})}.
\end{align}
This is the value of $\vec{Y}$ in the rescaled LP~\eqref{lp:rescaledLP}, where $\vec{Y}$ itself is rescaled to make it LP-feasible (and as large as possible).
Note that $\scaledV(\vec{Y}) = (1-\eta_{\textsc{lp}})\,V(\vec{Y})$, where $V(\vec{Y})$ is the corresponding value for the original (unscaled) LP.
Also, $\OPT_{\term{LP}}^{\term{sc}} = \sup_{\vec{Y}} \scaledV(\vec{Y})$.
By a slight abuse of notation, let $\scaledV(\vec{Y}), \scaledV'(\vec{Y})$ be the value of $\scaledV(\vec{Y})$ corresponding to instances $\mI$ and $\mI'$ respectively.
We use the following two claims in the proof of our lower bound.
Claim~\ref{clm:badForI} states that if a distribution is close to the optimal distribution for instance $\mI$, then it is far from the optimal distribution for $\mI'$.
Claim~\ref{clm:distToValue} states that if a distribution is far from the optimal distribution, then playing from that distribution also incurs large instantaneous regret.
Neither claim depends on any particular algorithm.
\begin{claim}\label{clm:badForI}
Fix distribution $\vec{Y} \in \Delta^3$ and $\epsilon < 1$.
If $\| \vec{X}^*_{\mI} - \vec{Y} \| < \epsilon \cdot \cLB^2$ then $\| \vec{X}^*_{\mI'} - \vec{Y} \| \geq \epsilon \cdot \cLB^2$.
\end{claim}
\begin{claim}\label{clm:distToValue}
Fix distribution $\vec{Y} \in \Delta^3$ and $\epsilon < 1$.
If $\| \vec{X}^*_{\mI} - \vec{Y} \| \geq \epsilon \cdot \cLB^2$ then $\scaledV(\vec{X}^*_{\mI}) - \scaledV(\vec{Y}) \geq \epsilon \cdot \tfrac{\cLB^3}{2}$.
Likewise, if $\| \vec{X}^*_{\mI'} - \vec{Y} \| \geq \epsilon \cdot \cLB^2$ then $\scaledV'(\vec{X}^*_{\mI'}) - \scaledV'(\vec{Y}) \geq \epsilon \cdot \tfrac{\cLB^3}{2}$.
\end{claim}
We now invoke Lemma~\ref{lem:bestArm} with the quasi-rewards at each time-step determined by the consumption of the resource $j^*$.
Define the set
\begin{equation} \label{eq:defnH}
\mH := \left \{ \vec{Y}: \| \vec{X}^*_{\mI} - \vec{Y} \| \geq \epsilon \cdot \cLB^2 \right \},
\end{equation}
to complete the proof of Theorem~\ref{thm:generalLB}.
Consider an arbitrary algorithm $\term{ALG}$.
We consider two cases, $\mJ = \mI$ and $\mJ = \mI'$, where $\mJ$ denotes the instance for which the conclusion of Lemma~\ref{lem:bestArm} holds for at least $\frac{T}{2}$ rounds, with $T := \frac{c_{\mathtt{LB}}}{\eps^2}$.
Let $\mJ = \mI$.
Let $\mathcal{T}$ denote the set of time-steps $t \in [T]$ such that $\mJ_t = \mI$ and $\vec{Y}_t \in \mH$.
Then, the expected regret of \term{ALG} can be lower-bounded by
\begin{align*}
\E\sbr{ \sum_{t \in \mathcal{T}} \scaledV(\vec{X}^*_\mI) - \scaledV(\vec{Y}_t) }
&= \E\sbr{ \sum_{t \in \mathcal{T}:\; \| \vec{X}^*_{\mI} - \vec{Y}_t \| \geq \epsilon \cdot \cLB^2} \scaledV(\vec{X}^*_\mI) - \scaledV(\vec{Y}_t) } & \EqComment{by \refeq{eq:defnH}} \\
& \geq \textstyle \E\sbr{ \sum_{t \in \mathcal{T}} \; \epsilon \cdot \tfrac{\cLB^3}{2} } & \EqComment{by Claim~\ref{clm:distToValue}} \\
& \geq \nicefrac{T}{4} \cdot \epsilon \cdot \tfrac{\cLB^3}{2} & \EqComment{by Lemma~\ref{lem:bestArm}}\\
& = \Omega\rbr{ c_{\mathtt{LB}}^4 \cdot \sqrt{T}}. & \EqComment{since $\epsilon = \tfrac{c_{\mathtt{LB}}}{\sqrt{T}}$}
\end{align*}
We use a similar argument when $\mJ = \mI'$.
Let $\mathcal{T}'$ denote the set of time-steps $t \in [T]$ such that $\mJ_t = \mI'$ and $\| \vec{X}^*_{\mI'} - \vec{Y}_t \| \geq \epsilon \cdot \cLB^2$.
The expected regret of \term{ALG} can be lower-bounded by
\begin{align*}
\E \sbr{ \sum_{t \in \mathcal{T}'} \scaledV'(\vec{X}^*_{\mI'}) - \scaledV'(\vec{Y}_t) }
& = \E\sbr{ \sum_{t \in \mathcal{T}':\; \| \vec{X}^*_{\mI'} - \vec{Y}_t \| \geq \epsilon \cdot \cLB^2} \scaledV'(\vec{X}^*_{\mI'}) - \scaledV'(\vec{Y}_t) } & \\
& \geq \E\sbr{ \sum_{t \in \mathcal{T}':\; \| \vec{X}^*_{\mI} - \vec{Y}_t \| < \epsilon \cdot \cLB^2} \scaledV'(\vec{X}^*_{\mI'}) - \scaledV'(\vec{Y}_t) } & \EqComment{by Claim~\ref{clm:badForI}} \\
& = \E\sbr{ \sum_{t \in \mathcal{T}':\; \vec{Y}_t \notin \mH} \scaledV'(\vec{X}^*_{\mI'}) - \scaledV'(\vec{Y}_t) } & \EqComment{by \refeq{eq:defnH}} \\
& \geq \E\sbr{ \sum_{t\in[T]:\; \vec{Y}_t \notin \mH} \epsilon \cdot \tfrac{\cLB^3}{2} } & \EqComment{by Claim~\ref{clm:distToValue}} \\
& \geq \nicefrac{T}{4} \cdot \epsilon \cdot \tfrac{\cLB^3}{2} & \EqComment{by Lemma~\ref{lem:bestArm}}\\
& = \Omega\rbr{ c_{\mathtt{LB}}^4 \cdot \sqrt{T}}. & \EqComment{since $\epsilon = \tfrac{c_{\mathtt{LB}}}{\sqrt{T}}$}
\end{align*}
\xhdr{Proof of Claim~\ref{clm:badForI}.}
Let $c(A_1), c(A_2)$ denote the expected consumption of arms $A_1$ and $A_2$ respectively in instance $\mI$.
Define $\zeta := \frac{\eps\, c(A_2)}{1-c_{\mathtt{LB}}}$.
By definition, this implies that the expected consumption of arm $A_2$ in instance $\mI'$ is $c(A_2) - \zeta$.
Additionally, since the support contains two arms, we have that the following holds:
$c(A_1) X^*_{\mI}(A_1) + c(A_2) X^*_{\mI}(A_2) = \nicefrac{B}{T}\cdot(1-\eta_{\textsc{lp}})$ and $c(A_1) X^*_{\mI'}(A_1) + c(A_2) X^*_{\mI'}(A_2) - \zeta X^*_{\mI'}(A_2) = \nicefrac{B}{T}\cdot(1-\eta_{\textsc{lp}})$.
Thus, we have
\[
c(A_1)X^*_{\mI}(A_1) + c(A_2) X^*_{\mI}(A_2) = c(A_1) X^*_{\mI}(A_1) + c(A_2) X^*_{\mI}(A_2) + \delta ( c(A_2) - c(A_1) - \zeta) - \zeta X^*_{\mI}(A_2).
\]
Rearranging and using Assumption~\ref{ass:LBAss}, we get that
\begin{equation} \label{eq:deltaDef}
\delta = \frac{\zeta X^*_{\mI}(A_2)}{c(A_2) - c(A_1)- \zeta} \geq \frac{\epsilon c_{\mathtt{LB}}}{1-c_{\mathtt{LB}}} \cdot \frac{c_{\mathtt{LB}}}{1-2c_{\mathtt{LB}} - \tfrac{\epsilon \cdot c_{\mathtt{LB}}}{1-c_{\mathtt{LB}}}} \geq \epsilon \cdot \cLB^2.
\end{equation}
Consider $\| \vec{X}^*_{\mI'} - \vec{Y} \|$.
This can be rewritten as
\begin{align*}
& = \| \vec{X}^*_{\mI'} - \vec{Y} - \vec{X}^*_{\mI} + \vec{X}^*_{\mI} \| & \\
& \geq | \| \vec{X}^*_{\mI'} - \vec{X}^*_{\mI} \| - \| \vec{X}^*_{\mI} - \vec{Y} \| | & \EqComment{Triangle inequality}\\
& \geq 2 \delta - \epsilon \cdot \cLB^2 & \EqComment{Premise of the claim and \refeq{eq:diffDist}}\\
& \geq \epsilon \cdot \cLB^2. & \EqComment{From \refeq{eq:deltaDef}}
\end{align*}
\xhdr{Proof of Claim~\ref{clm:distToValue}.}
We will prove the statement $\| \vec{X}^*_{\mI} - \vec{Y} \| \geq \epsilon \cdot \cLB^2 \implies \scaledV(\vec{X}^*_{\mI}) - \scaledV(\vec{Y}) \geq \epsilon \cdot \tfrac{\cLB^3}{2}$.
The exact same argument holds after replacing $\vec{X}^*_{\mI}$ with $\vec{X}^*_{\mI'}$ and $\scaledV(\cdot)$ with $\scaledV'(\cdot)$.
Consider $ \scaledV(\vec{X}^*_{\mI}) - \scaledV(\vec{Y})$.
By definition, this equals
\begin{equation} \label{eq:diffReg}
r(\vec{X}^*_{\mI}) - \frac{r(\vec{Y})}{\max \{\tfrac{B'}{T}, c(\vec{Y}) \}} \cdot \frac{B'}{T},
\end{equation}
where $B'$ is the scaled budget.
We have two cases.
In case 1, let $\max \{\tfrac{B'}{T}, c(\vec{Y}) \} = \frac{B'}{T}$.
Thus, \refeq{eq:diffReg} simplifies to
\begin{align*}
& = r(\vec{X}^*_{\mI}) - r(\vec{Y}) & \\
& = r(A_1) [X^*_{\mI}(A_1) - Y(A_1)] + r(A_2) [X^*_{\mI}(A_2) - Y(A_2)] &
\end{align*}
Note that since $\max \{\tfrac{B'}{T}, c(\vec{Y}) \} = \frac{B'}{T}$, this implies that $Y(\term{null}) = 0$.
Since $\vec{X}^*_{\mI}$ is an optimal solution and $r(A_2) > r(A_1)$, we have $Y(A_1) = X^*_{\mI}(A_1) + \zeta$ and $Y(A_2) = X^*_{\mI}(A_2) - \zeta$ for some $\zeta \geq \tfrac12 \| \vec{X}^*_{\mI} - \vec{Y} \|$.
Thus, we have
\begin{align*}
r(A_1) [X^*_{\mI}(A_1) - Y(A_1)] + r(A_2) [X^*_{\mI}(A_2) - Y(A_2)] & \geq [r(A_2) - r(A_1)] \zeta \\
& \geq c_{\mathtt{LB}} \cdot \| \vec{X}^*_{\mI} - \vec{Y} \|/2 \\
& \geq \epsilon \cdot \tfrac{\cLB^3}{2}.
\end{align*}
Consider case 2 where $\max \{\tfrac{B'}{T}, c(\vec{Y}) \} = c(\vec{Y})$.
Then, \refeq{eq:diffReg} simplifies to
\begin{align*}
& = r(\vec{X}^*_{\mI}) - \tfrac{B'}{T} \cdot \tfrac{r(\vec{Y})}{c(\vec{Y})} & \\
& \geq r(\vec{X}^*_{\mI}) - \max_{\vec{Y} \in \Delta_3: \| \vec{X}^*_{\mI} - \vec{Y} \| \geq \epsilon \cdot \cLB^2} \tfrac{B(1-\eta_{\textsc{lp}})}{T} \cdot \tfrac{r(\vec{Y})}{c(\vec{Y})}
\end{align*}
The maximum is attained when the distribution $\vec{Y}$ is such that $Y(A_1) = X^*_{\mI}(A_1) - \epsilon \cdot \cLB^2/2$ and $Y(A_2) = X^*_{\mI}(A_2) + \epsilon \cdot \cLB^2/2$.
Plugging this into the expression we get that the RHS is at least
\begin{align*}
& \geq r(\vec{X}^*_{\mI}) - \tfrac{B(1-\eta_{\textsc{lp}})}{T} \cdot \frac{r(\vec{X}^*_{\mI}) + \epsilon \cdot \cLB^2/2 \cdot (r(A_2) - r(A_1))}{c(\vec{X}^*_{\mI}) + \epsilon \cdot \cLB^2/2 \cdot (c(A_2) -c(A_1))} & \\
& \geq r(\vec{X}^*_{\mI}) - c_{\mathtt{LB}} (1-\eta_{\textsc{lp}}) \cdot \frac{r(\vec{X}^*_{\mI}) + \epsilon \cdot \cLB^2/2 \cdot (r(A_2) - r(A_1))}{c(\vec{X}^*_{\mI}) + \epsilon \cdot \cLB^2/2 \cdot (c(A_2) -c(A_1))} & \\
& \geq r(\vec{X}^*_{\mI}) - (1-\eta_{\textsc{lp}}) \cdot \frac{r(\vec{X}^*_{\mI}) + \epsilon \cdot \cLB^2/2 \cdot (r(A_2) - r(A_1))}{1 + \epsilon \cdot \cLB^2/2 } & \\
& \geq \tfrac{\eta_{\textsc{lp}}}{2} \cdot r(\vec{X}^*_{\mI}) \geq \epsilon \cdot \tfrac{c_{\mathtt{LB}}^3}{2}.
\end{align*}
The last two inequalities follow from Assumption~\ref{ass:LBAss}-\eqref{boundOPT}, the value of $\eta_{\textsc{lp}}$, and the fact that $\epsilon = \tfrac{c_{\mathtt{LB}}}{\sqrt{T}}$, respectively.
Combining the two cases we get the claim.
\subsection{A general reduction from \term{BwK} to bandits}
To state the general result, let us define an abstract notion of ``confidence radius''.
For each round $t$, a \emph{formal confidence radius} is a mapping $\operatorname{Rad}_t(a)$ from the algorithm's history and arm $a$ to $[0,1]$ such that with probability at least $1-O(T^{-4})$ it holds that
\[ |r(a)-\hat{r}_t(a)| \leq \operatorname{Rad}_t(a) \quad\text{and}\quad |c_j(a)-\hat{c}_{j,t}(a)| \leq \operatorname{Rad}_t(a) \]
for each resource $j$, where $\hat{r}_t(a)$ and $\hat{c}_{j, t}(a)$ denote the average reward and resource consumption, as defined in \refeq{eq:ave-defn}.
Such $\operatorname{Rad}_t(a)$ induces a version of $\term{UcbBwK}$ with confidence bounds
\[ r_t^{+}(a) = \min(1, \hat{r}_t(a) + \operatorname{Rad}_t(a) \;) \quad\text{and}\quad c_{j, t}^{-}(a) = \max(\;0, \hat{c}_{j, t}(a) - \operatorname{Rad}_t(a) \;).\]
We allow the algorithm to observe auxiliary feedback before and/or after each round, depending on a particular problem formulation, and this feedback may be used to compute the confidence radii.
We replace \refeq{eq:ActConfSum-UB} with a generic bound on the action-confidence sum, for some $\beta$ that can depend on the parameters of the problem instance, but not on $S$:
\begin{align}\label{eq:ConfSumBound-generic}
\textstyle \sum_{t\in S} \operatorname{Rad}_t(a_t) \leq \sqrt{|S|\, \beta}, \quad\text{for any algorithm and any subset $S\subset[T]$}.
\end{align}
\begin{theorem}\label{thm:abstractReduceMain}
Consider an instance of $\term{BwK}$ with time horizon $T$.
Let $\operatorname{Rad}_t(\cdot)$ be a formal confidence radius which satisfies \eqref{eq:ConfSumBound-generic} for some $\beta$.
Consider the induced algorithms $\term{UcbBwK}$ and $\term{PrunedUcbBwK}$ with rescaling parameter $\eta_{\textsc{lp}} = \frac{2}{B} \sqrt{\beta T}$. \begin{OneLiners} \item[(i)] Both algorithms obtain regret $\OPTDP -\E[\term{REW}] \leq O(\sqrt{\beta T})(1+\OPTDP/B)$. \item[(ii)] Theorem~\ref{thm:logRegretUpper} holds with $\Psi = \beta\,\Dmin^{-2}$ and regret $\mO\rbr{ \beta\,\Dmin^{-1} }$ in part (ii). \item[(iii)] Theorem~\ref{thm:UCBSmallNonArms} holds with $N_\eps = \mO\left( \beta\,\eps^{-2} \right)$. \end{OneLiners} \end{theorem} \begin{myproof}[Sketch] For part (i), the analysis in \mycite{AgrawalDevanur-ec14} explicitly relies on \eqref{eq:ActConfSum-UB}. For part (ii), we modify the proof of Theorem~\ref{thm:logRegretUpper} so as to use \eqref{eq:ActConfSum-UB} instead of Claim~\ref{cl:ConfSum-main}. For part (iii), our proof of Theorem~\ref{thm:UCBSmallNonArms} uses \eqref{eq:ActConfSum-UB} explicitly. In all three parts, we replace \eqref{eq:ActConfSum-UB} with \eqref{eq:ConfSumBound-generic}, and trace how the latter propagates through the respective proof. \end{myproof} We apply this general result to three specific scenarios: linear contextual bandits with knapsacks (\term{LinCBwK}) \citep{agrawal2015linear}, combinatorial semi-bandits with knapsacks (\term{SemiBwK}) \citep{Karthik-aistats18}, and multinomial-logit bandits with knapsacks (\term{MnlBwK}) \citep{Cheung-MNLBwK-arxiv17}. In all three applications, the confidence-sum bound \eqref{eq:ConfSumBound-generic} is implicit in prior work on the respective problem without resources. The guarantees in part (i) match those in prior work referenced above, up to logarithmic factors, and are optimal when $B = \Omega(T)$; in fact, we obtain an improvement for \term{MnlBwK}. Parts (ii) and (iii) -- the results for logarithmic regret and simple regret -- did not appear in prior work. 
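For concreteness, a Hoeffding-style radius is one simple example of the formal confidence radius defined above; the sketch below (hypothetical Python, not the paper's implementation) shows such a radius together with the induced clipped bounds $r_t^{+}$ and $c_{j,t}^{-}$.

```python
import math

def hoeffding_radius(n_samples, T):
    """Hoeffding-style confidence radius: with n samples of a [0,1]-bounded
    variable, |true mean - empirical mean| <= radius w.p. >= 1 - O(T^-4)."""
    if n_samples == 0:
        return 1.0
    return min(1.0, math.sqrt(2 * math.log(T) / n_samples))

def induced_bounds(r_hat, c_hat, rad):
    """Induced bounds: UCB on the reward, LCB on each consumption,
    both clipped to [0, 1]."""
    r_plus = min(1.0, r_hat + rad)
    c_minus = [max(0.0, c - rad) for c in c_hat]
    return r_plus, c_minus

rad = hoeffding_radius(n_samples=100, T=10_000)
r_plus, c_minus = induced_bounds(r_hat=0.6, c_hat=[0.5, 0.05], rad=rad)
assert 0.6 <= r_plus <= 1.0            # UCB never falls below the estimate
assert all(0.0 <= c for c in c_minus)  # LCB clipped at zero
```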
\subsection{Linear Contextual Bandits with Knapsacks (\term{LinCBwK})}
In \emph{Contextual Bandits with Knapsacks} (\term{CBwK}), we have $K$ actions, $d$ resources, budget $B$ and time horizon $T$, like in \term{BwK}, and moreover we have a set $\mX$ of possible contexts.
At each round $t \in [T]$, the algorithm first obtains a context $\vec{x}_t\in \mX$.
The algorithm then chooses an action $a_t \in [K]$ and obtains an outcome $\vec{o}_t(a_t) \in [0, 1]^{d+1}$ like in \term{BwK}.
The tuple $\rbr{ \vec{x}_t; \vec{o}_t(a):\, a \in [K] }$ is drawn independently from some fixed but unknown distribution.
The algorithm continues until some resource, including time, is exhausted.
One compares against a given set $\Pi$ of \emph{policies}: mappings from contexts to actions.
We can formally interpret \term{CBwK} as an instance of \term{BwK} in which actions correspond to policies in $\Pi$.
This interpretation defines the benchmarks $\OPTDP$ and $\OPTFD$ that we compete with.
\term{LinCBwK} is a special case of \term{CBwK} in which the context space is $\mX = [0,1]^{K\times m}$, for some parameter $m\in\N$, so that each context $\vec{x}_t$ is in fact a tuple $\vec{x}_t = \rbr{\vec{x}_t(a)\in [0,1]^m:\,a\in[K] }$.
We have a linearity assumption: for some unknown matrix $\vec{W}_* \in [0, 1]^{m \times (d+1)}$ and each arm $a \in [K]$,
\[ \E\sbr{ \vec{o}_t(a) \mid \vec{x}_t(a) } = \vec{W}_*^\textrm{T} \cdot \vec{x}_t(a). \]
The policy set $\Pi$ consists of all possible policies.
\emph{Linear contextual bandits}, studied in prior work \citep[\eg][]{Auer-focs00,DaniHK-colt08,Langford-www10,Reyzin-aistats11-linear,Csaba-nips11}, is the special case without resources.
Much of the complexity of linear contextual bandits (resp., \term{LinCBwK}) is captured by the special case of \emph{linear bandits} (resp., \emph{linear \term{BwK}}) where the context is the same in each round.
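To illustrate the linearity assumption numerically (toy values only; `W_star` and `x` play the roles of $\vec{W}_*$ and $\vec{x}_t(a)$ and are not from the paper):

```python
def matvec_T(W, x):
    """Compute W^T x, for W an m x (d+1) matrix and x an m-vector."""
    m, cols = len(W), len(W[0])
    return [sum(W[i][j] * x[i] for i in range(m)) for j in range(cols)]

m, d = 3, 2
W_star = [[0.2, 0.1, 0.3],   # m x (d+1): column 0 = reward weights,
          [0.4, 0.5, 0.1],   # columns 1..d = per-resource consumption weights
          [0.1, 0.2, 0.2]]
x = [0.5, 1.0, 0.25]         # context vector x_t(a) for some arm a

# Linearity assumption: E[o_t(a) | x_t(a)] = W_*^T x_t(a)
expected_outcome = matvec_T(W_star, x)
assert len(expected_outcome) == d + 1
assert all(0.0 <= o <= 1.0 for o in expected_outcome)
```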
The general theme in the work on linear bandits (contextual or not) is to replace the dependence on the number of arms $K$ in the regret bound with a dependence on the dimension $m$ and, if applicable, to avoid the dependence on $|\Pi|$.
This is what we accomplish, too.
\begin{corollary}\label{cor:linCBwK}
For \term{LinCBwK}, Theorem~\ref{thm:abstractReduceMain} holds with $\beta = \mO(m^2 d^2 \log(mTd))$.
\end{corollary}
\begin{proof}
Combining Lemma 13 of \mycite{Auer-focs00} and Theorem 2 of \mycite{AbbasiYPS-nips11}, it follows that the confidence-sum bound \refeq{eq:ConfSumBound-generic} holds with $\beta = \mO(m^2 d^2 \log(mTd))$.
\end{proof}
\subsection{Combinatorial Semi-bandits with Knapsacks (\term{SemiBwK})}
\term{SemiBwK} is a version of \term{BwK}, where actions correspond to subsets of some fixed ground set $[N]$ (whose elements are called \emph{atoms}).
There is a fixed family $\mF \subset 2^{[N]}$ of feasible actions.
In each round $t$, the algorithm chooses a subset $A_t \in \mF$ and observes the outcome $\vec{o}_t(a)\in [0,\nicefrac{1}{n}]^{d+1}$ for each atom $a\in A_t$, where $n = \max_{A \in \mF} |A|$.
The outcome for a given subset $A\in \mF$ is defined as the sum
\begin{align}\label{eq:SemiBwK-sum}
\vec{o}_t(A) = \textstyle \sum_{a\in A} \vec{o}_t(a) \in [0,1]^{d+1}.
\end{align}
The outcome matrix $\rbr{\vec{o}_t(a): a \in [N]}$ is drawn independently from some fixed but unknown distribution.
The algorithm continues until some resource, including time, is exhausted.
\emph{Combinatorial semi-bandits}, the problem studied in prior work \citep[\eg][]{Chen-icml13,Kveton-aistats15,MatroidBandits-uai14}, is the special case without resources.
Note that the number of feasible actions can be exponential in $N$.
The general theme in this line of work is to replace the dependence on $|\mF|$ in the regret bound with a dependence on $N$, or, even better, on $n$.
We extend this to \term{SemiBwK}.
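As a small worked example of \eqref{eq:SemiBwK-sum} (illustrative numbers only): since each per-atom coordinate lies in $[0,\nicefrac{1}{n}]$ and a feasible subset has at most $n$ atoms, the aggregate outcome stays in $[0,1]^{d+1}$.

```python
def aggregate_outcome(outcomes, A):
    """Aggregate outcome of a subset A of atoms: the coordinatewise sum
    of the per-atom outcome vectors, as in the SemiBwK model."""
    dims = len(next(iter(outcomes.values())))
    return [sum(outcomes[a][j] for a in A) for j in range(dims)]

n = 3  # maximum size of a feasible subset; per-atom outcomes lie in [0, 1/n]
outcomes = {  # atom -> (reward, consumption) vector, illustrative values
    1: [0.30, 0.10],
    2: [0.20, 0.33],
    3: [0.05, 0.25],
}
o_A = aggregate_outcome(outcomes, A={1, 2, 3})
# Each coordinate is in [0, 1/n] and |A| <= n, so the sum stays in [0, 1].
assert all(0.0 <= v <= 1.0 for v in o_A)
```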
\begin{corollary}\label{cor:semiBwK}
For \term{SemiBwK}, Theorem~\ref{thm:abstractReduceMain} holds with $\beta = \mO(n \log(NdT))$.
\end{corollary}
\begin{proof}
Using Lemma 4 in \mycite{wen2015efficient} we immediately obtain the confidence-sum bound \refeq{eq:ConfSumBound-generic} with $\beta = \mO(n \log(NdT))$.
\end{proof}
\subsection{Multinomial-logit Bandits with Knapsacks (\term{MnlBwK})}
In the \term{MnlBwK} problem, the setup starts like in \term{SemiBwK}.
There is a ground set of $N$ \emph{atoms}, and a fixed family $\mF \subset 2^{[N]}$ of feasible actions.
In each round, each atom $a$ has an outcome $\vec{o}_t(a)\in [0,1]^{d+1}$, and the outcome matrix $\rbr{\vec{o}_t(a): a \in [N]}$ is drawn independently from some fixed but unknown distribution.
The aggregate outcome is formed in a different way: when a given subset $A_t\in\mF$ is chosen by the algorithm in a given round $t$, at most one atom $a_t\in A_t$ is chosen stochastically by ``nature'', and the aggregate outcome is then $\vec{o}_t(A_t) := \vec{o}_t(a_t)$; otherwise, the algorithm skips this round.
A common interpretation is that the atoms correspond to products, the chosen action $A_t\in \mF$ is the bundle of products offered to the customer, and at most one product from this bundle is actually purchased.
As usual, the algorithm continues until some resource (incl. time) is exhausted.
The selection probabilities are defined via the multinomial-logit model.
For each atom $a$ there is a hidden number $v_a\in [0,1]$, interpreted as the customers' valuation of the respective product, and
\[\Pr\sbr{ \text{atom $a$ is chosen} \mid A_t} =
\begin{cases}
\tfrac{v_a}{1+\sum_{a' \in A_t} v_{a'}} & \text{if $a \in A_t$} \\
0 & \text{otherwise}.
\end{cases} \]
The set $\mF$ of possible bundles is
\[ \mF = \cbr{ A\subset [N]:\; \vec{M} \cdot x(A) \leq \vec{b} },\]
for some (known) totally unimodular matrix $\vec{M}\in \R^{N\times N} $ and a vector $\vec{b} \in \R^N$, where $x(A)\in \{0,1\}^N$ represents set $A$ as a binary vector over atoms.
\emph{Multinomial-logit bandits}, the problem studied in prior work \citep[\eg][]{Shipra-ec16,rusmevichientong2010dynamic,saure2013optimal,caro2007dynamic}, is the special case without resources.
We derive the following corollary from the analysis of MNL-bandits in \citet{Shipra-ec16}, which analyzes the confidence sum for the $v_a$'s.
\begin{corollary}\label{cor:MnlBwK}
Consider \term{MnlBwK} and denote $V := \sum_{a \in [N]} v_a$.
Theorem~\ref{thm:abstractReduceMain} holds with
\[ \beta = \textstyle \mO\rbr{\rbr{\frac{\ln T}{\ln (1 + \nicefrac{1}{V})}}^2 \rbr{N \sqrt{\ln (NT)} + \ln(NT) }} = \tildeO\rbr{N^3}.\]
\end{corollary}
\begin{proof}
The proof is implicit in the analysis in \citet{Shipra-ec16}.
As in their paper, let $n_\ell$ denote the number of time-steps in phase $\ell$.
Let $V_\ell = \sum_{a \in S_\ell} v_a$.
Recall that $n_\ell$ is a geometric random variable with success probability $\tfrac{1}{1 + V_\ell}$.
Using Chernoff-Hoeffding bounds we obtain that with probability at least $1 - \tfrac{1}{T^2}$, $n_\ell \leq \tfrac{\ln T}{\ln (1+\nicefrac{1}{V_\ell})}$.
Consider any subset $S$ of rounds.
Summing the LHS and RHS in Lemma 4.3, we get that $\sum_{t \in S} \operatorname{Rad}_t(a_t) \leq \sum_{a \in [N]} \sum_{\ell: t \in \mathcal{T}_a(\ell)} \tilde{R}_a(S_\ell)$.
Using Lemma 4.3 in \citep{Shipra-ec16} we have
$\sum_{a \in [N]} \sum_{\ell: t \in \mathcal{T}_a(\ell)} \tilde{R}_a(S_\ell) \leq \sum_{a \in [N]} \sum_{\ell: t \in \mathcal{T}_a(\ell)} n_{\ell} \sqrt{\frac{v_a \ln (\sqrt{N}\, T)}{T_a(\ell)}} + \frac{\ln (\sqrt{N}\, T)}{T_a(\ell)}$.
Note that $v_a \leq 1$.
Using the upper bound on $n_\ell$ derived above, combined with the argument used to obtain (A.19) in \citep{Shipra-ec16}, we get the desired value of $\beta$. \end{proof} The worst-case regret bound from Corollary~\ref{cor:MnlBwK} improves over prior work \mycite{Cheung-MNLBwK-arxiv17}. In particular, consider the worst-case dependence on $N$, the number of atoms. Our regret bound scales as $N^{3/2}$, whereas the regret bound in \citep{Cheung-MNLBwK-arxiv17} scales as $N^{7/2}$ (while both scale as $\sqrt{T}$). \subsection{Computational issues} We do not provide a generic computationally efficient implementation for \term{UcbBwK} in our reduction. The algorithm constructs and solves a linear program in each round, with one variable per arm in the reduction. So, even if the regret is fairly small, the number of LP variables may be very large: indeed, it may be exponential in the number of atoms in \term{SemiBwK} and \term{MnlBwK}, arbitrarily large compared to the other parameters in linear \term{BwK}, or even infinite as in \term{LinCBwK}. The corresponding LPs have a succinct representation in all these applications, and such (or very similar) linear programs may be computationally tractable via application-specific implementations; indeed, this is the case for \term{LinCBwK} \citep{agrawal2015linear} and \term{SemiBwK} \citep{Karthik-aistats18}. In the prior work on \term{MnlBwK} \citep{Cheung-MNLBwK-arxiv17}, the $\sqrt{T}$-regret algorithm is not computationally efficient, like ours; there is, however, a computationally efficient algorithm with regret $T^{2/3}$. \subsection{Proof of \refeq{cl:change-B}} \label{sec:Wald} Let $\tau$ denote the stopping time of the algorithm that chooses arm $a^*$ in every time-step, given budgets $B_0$ and $T_0$ on the two resources (the non-time resource and time, respectively). By definition, we have $\term{REW}(a^*\mid B_0, T_0) = \sum_{t \in [\tau]} r_t(a^*)$.
Using Wald's identity (Theorem~\ref{thm:wald}), we have that $\E[\term{REW}(a^*\mid B_0, T_0)] = \E[\tau]\; r(a^*)$. By definition of the stopping time, either $\tau \geq T_0$ or $\sum_{t \in [\tau]} c_t(a^*) \geq B_0$. Using Wald's identity (Theorem~\ref{thm:wald}) again, we have that $\E[\sum_{t \in [\tau]} c_t(a^*)] = \E[\tau] c(a^*)$. Thus, we have $\E[\tau] \geq \min \left\{ T_0, \tfrac{B_0}{c(a^*)} \right\} \geq \min \left\{ T_0, B_0 \right\}$. Therefore, we obtain the following. \begin{equation} \label{eq:lowerBoundBpSt} \E[\term{REW}(a^*\mid B_0, T_0)] = \E[\tau] r(a^*) > \left( \frac{\min \left\{ T_0, B_0 \right\}}{\max\{ \tfrac{B}{T}, c(a^*) \}} \right) r(a^*), \quad \text{and} \end{equation} \begin{equation} \label{eq:upperBoundBpSt} \E[\term{REW}(a^*\mid B)] = \E[\tau_{B}] r(a^*) \leq \left( \frac{B}{\max\{ \tfrac{B}{T}, c(a^*) \}} \right) r(a^*). \end{equation} Combining Equations~\eqref{eq:lowerBoundBpSt} and \eqref{eq:upperBoundBpSt}, we get \refeq{cl:change-B}. \subsection{Proof of \refeq{cl:change-B-new}} \label{appx:logRegretSectionStronger} We now modify the above proof to get the tighter lower bound in Eq.~\eqref{cl:change-B-new}. Let $T_0$, $B_0$ denote the expected remaining time and budget (respectively), and let $\tau$ denote the (random) stopping time of the algorithm that chooses arm $a^*$ in every time-step given $T_0$ time-steps and $B_0$ budget. By definition of the stopping time, either $\mathbb{E}[\sum_{t \in [\tau]} c_t(a^*)] \geq B_0$ or $\E[\tau] \geq T_0$. From Theorem~\ref{thm:wald}, this implies that either $\E[\tau] c(a^*) \geq B_0$ or $\E[\tau] \geq T_0$, and hence $\E[\tau] \geq \min\{ T_0, \tfrac{B_0}{c(a^*)} \}$. Similarly to Eq.~\eqref{eq:lowerBoundBpSt} and Eq.~\eqref{eq:upperBoundBpSt}, we obtain the following.
\begin{equation} \label{eq:lowerBoundBpStTight} \E[\term{REW}(a^*\mid B_0, T_0)] = \E[\tau] r(a^*) > \min\{ T_0, \tfrac{B_0}{c(a^*)} \} r(a^*), \quad \text{and} \end{equation} \begin{equation} \label{eq:upperBoundBpStTight} \E[\term{REW}(a^*\mid B_0 = B, T_0 = T)] = \OPTFD \leq \left( \frac{B}{\max\{ \tfrac{B}{T}, c(a^*) \}} \right) r(a^*). \end{equation} Combining Equations~\eqref{eq:lowerBoundBpStTight} and \eqref{eq:upperBoundBpStTight}, we get \refeq{cl:change-B-new}. \subsection{Lower bound on Lagrange gap: Proof of \refeq{eq:gLagP}} \label{app:gLagP} We will use \refeq{eq:gLagSimplified} and some standard properties of linear programming. Assume $c(a^*) < \tfrac{B}{T}$. By the complementary slackness theorem applied to $\term{LP}$~\eqref{lp:primalAbstract}, we have $\lambda^*_1 = 0$. Moreover, note that the objective in the dual of $\term{LP}$~\eqref{lp:primalAbstract} is $\lambda^*_0 + \lambda^*_1 = \lambda^*_0$. The optimal value of the primal $\term{LP}$~\eqref{lp:primalAbstract} is $r(a^*)$, since $X(a^*) = 1$ is an optimal solution to the $\term{LP}$. This implies that $\lambda^*_0 = r(a^*) \geq \frac{\OPTFD}{T}$. Substituting this into \refeq{eq:gLagSimplified} gives the first inequality in \refeq{eq:gLagP}. Now assume $c(a^*) > \tfrac{B}{T}$. Again, by the complementary slackness theorem applied to $\term{LP}$~\eqref{lp:primalAbstract}, we have $\lambda^*_0 = 0$. Thus, $\Dmin(a) = \frac{T}{B} \cdot \lambda^*_1 \cdot c(a) - r(a)$. Since the dual objective is $\lambda^*_0 + \lambda^*_1 = \lambda^*_1$, strong duality implies that $\lambda^*_1 = \tfrac{\OPT_{\term{LP}}}{T} \geq \tfrac{\OPTFD}{T}$. Plugging this back into \refeq{eq:gLagSimplified} gives the second inequality in \refeq{eq:gLagP}.
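The two complementary-slackness cases above can be sanity-checked numerically. The sketch below uses a hypothetical two-arm instance and an assumed standard form of the BwK relaxation, $\max\, r\cdot X$ subject to $c\cdot X \leq B/T$ and $\sum_a X(a) \leq 1$ (the precise form of LP~\eqref{lp:primalAbstract} is defined elsewhere in the paper); the dual variables are read off as finite differences of the LP value in the two constraints' right-hand sides.

```python
# Numerical sanity check for the two cases above, on a hypothetical
# two-arm instance. We assume the standard BwK relaxation
#     max r.X   s.t.   c.X <= b,   X(1) + X(2) <= s,   X >= 0,
# with b = B/T and s = 1. lambda_1 (budget dual) and lambda_0 (time
# dual) are estimated by finite differences of the value function.

def lp_value(r, c, b, s):
    """Exact optimum of the 2-variable LP via vertex enumeration."""
    cand = [(0.0, 0.0), (b / c[0], 0.0), (0.0, b / c[1]), (s, 0.0), (0.0, s)]
    det = c[0] - c[1]
    if abs(det) > 1e-12:  # intersection of c.X = b with X(1) + X(2) = s
        x1 = (b - c[1] * s) / det
        cand.append((x1, s - x1))
    feasible = [(x1, x2) for x1, x2 in cand
                if x1 >= -1e-9 and x2 >= -1e-9
                and c[0] * x1 + c[1] * x2 <= b + 1e-9
                and x1 + x2 <= s + 1e-9]
    return max(r[0] * x1 + r[1] * x2 for x1, x2 in feasible)

r, c, step = [0.9, 0.5], [0.3, 0.8], 1e-6  # arm 0 plays the role of a*

# Case 1: c(a*) = 0.3 < B/T = 0.5, so lambda_1 = 0 and lambda_0 = r(a*).
lam1 = (lp_value(r, c, 0.5 + step, 1.0) - lp_value(r, c, 0.5, 1.0)) / step
lam0 = (lp_value(r, c, 0.5, 1.0 + step) - lp_value(r, c, 0.5, 1.0)) / step

# Case 2: c(a*) = 0.3 > B/T = 0.2, so the time constraint is slack
# and lambda_0 = 0.
lam0_2 = (lp_value(r, c, 0.2, 1.0 + step) - lp_value(r, c, 0.2, 1.0)) / step

print(lam1, lam0, lam0_2)
```

Here the finite-difference duals recover the proof's conclusions: in the first case the budget dual vanishes and the time dual equals $r(a^*) = 0.9$, and in the second case the time dual vanishes.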
\subsection{Martingale arguments: Proof of \refeq{eq:expectedBudgets}} \label{app:martingale-arguments} For the proof of \refeq{eq:expectedBudgets}, we use the well-known optional stopping theorem for martingales (Theorem~\ref{thm:optStop}). Fix an arm $a \in [K]$. For any subset $S \subseteq [T]$ of rounds, let $N_S(a)$, $r_S(a)$ and $c_S(a)$ denote the number of times arm $a$ is chosen, the total realized reward for arm $a$ and the total realized consumption of arm $a$, respectively. Let $\tau$ denote the (random) stopping time of a $\term{BwK}$ algorithm with budget $B$ and time horizon $T$. Then we have the following claim. \begin{claim} \label{clm:OSTClaim} For a random stopping time $\tau$, for every arm $a \in [K]$ we have the following. \begin{equation} \label{eq:OSTreward} \E\left[ r_{[\tau]}(a) \right] = r(a) \cdot \E[N_{[\tau]}(a)]. \end{equation} \begin{equation} \label{eq:OSTconsumption} \E\left[ c_{[\tau]}(a) \right] = c(a) \cdot \E[N_{[\tau]}(a)]. \end{equation} \end{claim} \begin{proof} We will prove the equality in \refeq{eq:OSTreward}; the one in \refeq{eq:OSTconsumption} follows analogously. Consider $r_{[\tau]}(a)$. By definition this is equal to $\sum_{t \in [\tau]} r_t(a) \cdot \mathbb{I}[a_t = a]$. Let $A_t := \mathbb{I}[a_t = a]$ denote the random variable corresponding to the event that arm $a$ is chosen at time $t$. Define the random variable \begin{align*} Y_t & := \sum_{t' \leq t} A_{t'} r_{t'}(a) - \E_{t'} \left[A_{t'} r_{t'}(a)\right], \end{align*} where $\E_t[.]$ denotes the conditional expectation given the random variables $A_1, A_2, \ldots, A_{t-1}$. It is easy to see that the sequence $\{ Y_t \} _{t \in [\tau]}$ is a martingale. Thus, applying the optional stopping theorem (Theorem~\ref{thm:optStop}) at time $\tau$, we have the following.
\begin{equation} \label{eq:optStopReward} \E\left[ Y_{\tau} \right] = \E\left[ \sum_{t' \leq \tau} A_{t'} r_{t'}(a) \right] - \E\left[ \sum_{t' \leq \tau} \E_{t'}\left[A_{t'} r_{t'}(a)\right]\right] = 0. \end{equation} Consider the term $\E\left[ \sum_{t' \leq \tau} \E_{t'}\left[A_{t'} r_{t'}(a)\right]\right]$ in \refeq{eq:optStopReward}. Since the reward $r_{t'}(a)$ is drawn independently of the choice $A_{t'}$, this term simplifies to $\E\left[ \sum_{t' \leq \tau} r(a) \cdot \Pr\nolimits_{t'}[a_{t'} = a]\right]$, where $\Pr_{t'}[\cdot]$ denotes probability conditional on the history up to round $t'$. Consider the following random variable \begin{align*} Z_t := \sum_{t' \leq t} \mathbb{I}[a_{t'} = a] - \Pr\nolimits_{t'}[a_{t'} = a]. \end{align*} Note that $\sum_{t' \leq t} \mathbb{I}[a_{t'} = a] = N_{[t]}(a)$, and that $\{Z_t\}$ is a martingale. Thus, applying Theorem~\ref{thm:optStop} to the sequence $Z_t$ at the stopping time $\tau$, we obtain $\E\left[ \sum_{t' \leq \tau} \Pr\nolimits_{t'}[a_{t'} = a]\right] = \E[N_{[\tau]}(a)]$. Thus, the term $\E\left[ \sum_{t' \leq \tau} \E_{t'}\left[A_{t'} r_{t'}(a)\right]\right]$ in \refeq{eq:optStopReward} simplifies to $r(a) \cdot \E[N_{[\tau]}(a)]$, which gives the required equality in \refeq{eq:OSTreward}. \end{proof} We will now use Claim~\ref{clm:OSTClaim} to prove \refeq{eq:expectedBudgets}. Recall that $\term{REW}(a\mid B(a), T(a))$ denotes the total contribution of arm $a$ to the reward of the $\term{BwK}$ algorithm, with a (random) resource consumption of $B(a)$ and a (random) number of time steps $T(a)$. Let $\tau$ be the (random) stopping time of this algorithm. By definition we have that $N_{[\tau]}(a) = T(a)$. Thus, $\E[N_{[\tau]}(a)] = \E[T(a)]$. From \refeq{eq:OSTconsumption}, we also have that $\E[N_{[\tau]}(a)] = \tfrac{\E\left[ c_{[\tau]}(a) \right]}{c(a)}$. From the definition of $B(a)$ we have $B(a) = c_{[\tau]}(a)$, and thus $\E[B(a)] = \E[c_{[\tau]}(a)]$. This implies that $\E[N_{[\tau]}(a)] = \min\{ \E[T(a)], \tfrac{\E[B(a)]}{c(a)} \}$. Consider $\E[\term{REW}(a)] = \E[\term{REW}(a\mid B(a), T(a))]$.
\begin{align} \E[\term{REW}(a\mid B(a), T(a))] & = \E\left[ r_{[\tau]}(a) \right] & \nonumber \\ & = r(a) \cdot \E[N_{[\tau]}(a)] & \EqComment{From \refeq{eq:OSTreward}} \nonumber \\ & = r(a) \cdot \min\{ \E[T(a)], \tfrac{\E[B(a)]}{c(a)} \} & \label{eq:plugInReward} \end{align} Now, consider $\term{LP}(a\mid \E[B(a)], \E[T(a)])$. This value is equal to \begin{align*} \term{LP}(a\mid \E[B(a)], \E[T(a)]) & = \E[T(a)] \cdot \frac{r(a)}{\max\{ \E[B(a)]/\E[T(a)], c(a)\}} \cdot \tfrac{\E[B(a)]}{\E[T(a)]} \\ & = r(a) \cdot \min \left\{ \E[T(a)], \tfrac{\E[B(a)]}{c(a)} \right\}. \end{align*} Note that the last expression is the same as the RHS in \refeq{eq:plugInReward}. \subsection{Confidence sums} \label{sec:logRegret-confSum} The following arguments depend only on the definition of the confidence radius, and work for any algorithm $\term{ALG}$. Suppose in each round $t$, this algorithm chooses a distribution $\vec{Y}_t$ over arms and samples arm $a_t$ independently from $\vec{Y}_t$. We upper-bound the number of rounds $t$ with large $\operatorname{Rad}_t(\vec{Y}_t)$: \begin{lemma}\label{lm:DistrConfSum-main} Fix the threshold $\theta_0> 0$, and let $S$ be the set of all rounds $t\in [T]$ such that $\operatorname{Rad}_t(\vec{Y}_t)\geq \theta_0$. Then $|S| \leq \mO\left( \theta_0^{-2}\cdot K \logThm \right)$ with probability at least $1-O(T^{-3})$. \end{lemma} To prove the lemma, we study \emph{confidence sums}: for a subset $S \subset [T]$ of rounds, define \begin{align*} \AcConfSum(S) &:= \textstyle\sum_{t \in S}\; \operatorname{Rad}_t(a_t) & \EqComment{action-confidence sum of \term{ALG}}, \\ \DisConfSum(S) &:= \textstyle\sum_{t \in S}\; \operatorname{Rad}_t(\vec{Y}_t) & \EqComment{distribution-confidence sum of \term{ALG}}.
\end{align*} First, a standard argument (\eg implicit in \mycite{bandits-ucb1}, see Section~\ref{app:confSum-standard}) implies that \begin{align}\label{eq:ActConfSum-UB} \AcConfSum(S) \leq \mO \rbr{ \sqrt{K\, |S|\, C_{\mathtt{rad}}} + K \cdot \ln |S| \cdot C_{\mathtt{rad}}} \quad\text{for any fixed subset $S \subset [T]$}. \end{align} Second, note that $\DisConfSum(S)$ is close to $\AcConfSum(S)$: for any fixed subset $S \subset [T]$, \begin{align}\label{eq:twoConfSums} \left| \DisConfSum(S) - \AcConfSum(S) \right| \leq \mathcal{O}(\sqrt{|S|\,\log T}) \quad\text{with probability at least $1- T^{-3}$}. \end{align} This is by the Azuma-Hoeffding inequality, since $\rbr{\operatorname{Rad}_t(a_t) - \operatorname{Rad}_t(\vec{Y}_t):\; t\in S}$ is a martingale difference sequence. We extend this observation to \emph{random} sets $S$. A random set $S\subset [T]$ is called \emph{time-consistent} if the event $\{t\in S\}$ does not depend on the choice of arm $a_t$ or anything that happens afterwards, for each round $t$. (But it \emph{can} depend on the choice of distribution $\vec{Y}_t$.) \begin{claim}\label{cl:acConfDisConf} For any time-consistent random set $S\subset [T]$, \begin{align}\label{eq:cl:acConfDisConf} \left| \DisConfSum(S) - \AcConfSum(S) \right| \leq \mathcal{O}\left( \sqrt{|S|\,\log T} + \log T\right) \quad\text{with probability at least $1- T^{-3}$}. \end{align} \end{claim} \begin{proof} By definition of a time-consistent set, for each round $t$, \[ \E[\indicator{t\in S}\cdot\operatorname{Rad}_t(a_t) \mid (\vec{Y}_1, a_1) \LDOTS (\vec{Y}_{t-1}, a_{t-1}), \vec{Y}_t] = \indicator{t\in S}\cdot \operatorname{Rad}_t(\vec{Y}_t).\] Thus, $\indicator{t\in S}\rbr{\operatorname{Rad}_t(a_t) - \operatorname{Rad}_t(\vec{Y}_t)}$, $t\in[T]$, is a martingale difference sequence. Claim~\ref{cl:acConfDisConf} follows from a concentration bound from prior work (Theorem~\ref{thm:myAzuma}). \end{proof} We complete the proof of Lemma~\ref{lm:DistrConfSum-main} as follows. Fix $\delta>0$.
Since $S$ is a time-consistent random subset of $[T]$, by \refeq{eq:ActConfSum-UB} and Claim~\ref{cl:acConfDisConf}, with probability at least $1- \delta$ it holds that \[ \theta_0\cdot |S| \leq \DisConfSum(S) \leq \mathcal{O} \left( \sqrt{|S| K C_{\mathtt{rad}}} + K\,C_{\mathtt{rad}} + \sqrt{|S|\,\log T} + \log T \right). \] We obtain the lemma by simplifying and solving this inequality for $|S|$. \subsection{Connecting LP-gap and the confidence radius} \label{sec:simpleRegret-confRad} In what follows, let $B_{\mathtt{sc}} = B(1-\eta_{\textsc{lp}})$ be the budget in the rescaled LP. \begin{lemma} \label{lem:MainLemmaUCBBwK} Fix round $t \in [T]$, and assume the ``clean event" in \eqref{eq:cleanEvent}. Then \[ \Gap(\vec{X}_t) \leq \left( 2 + \nicefrac{T}{B_{\mathtt{sc}}} \right) \operatorname{Rad}_t(\vec{X}_t). \] \end{lemma} \begin{proof} Let $\alpha := B_{\mathtt{sc}}/T$. For any distribution $\vec{X}$, let \[ \ValP(\vec{X}) := \nicefrac{B_{\mathtt{sc}}}{T}\;\cdot r(\vec{X}) / \max_{j\in [d]} c^-_j(\vec{X}) \] denote the value of $\vec{X}$ in the optimistic $\term{LP}$~\eqref{lp:UCBBwK}, after proper rescaling. Let $\vec{X}^*$ be an optimal solution to the (original) LP~\eqref{lp:primalAbstract}. Then \begin{align} \label{eq:LPGap} \Gap(\vec{X}_t) = \Val(\vec{X}^*) - \Val(\vec{X}_t) - \ValP(\vec{X}_t) + \ValP(\vec{X}_t). \end{align} Since $\vec{X}_t$ is an optimal solution to the optimistic $\term{LP}$~\eqref{lp:UCBBwK}, \[ \ValP(\vec{X}_t) \geq \ValP(\vec{X}^*). \] Moreover, since $\vec{X}^*$ is feasible for the optimistic $\term{LP}$~\eqref{lp:UCBBwK} with the scaled budget $B_{\mathtt{sc}}$, \[ \ValP(\vec{X}^*) \geq \Val(\vec{X}^*). \] It follows that \refeq{eq:LPGap} can be upper-bounded as \begin{align}\label{eq:LPGap-1} \Gap(\vec{X}_t) \leq \ValP(\vec{X}_t) - \Val(\vec{X}_t). \end{align} We will now upper-bound the right-hand side in the above.
Denote \begin{align*} c_{\max}(\vec{X}_t) &:= \max_{j \in [d]} \sum_{a \in [K]} c_{j, t}(a) X_t(a)\\ c^{-}_{\max}(\vec{X}_t) &:= \max_{j \in [d]} \sum_{a \in [K]} c^{-}_{j, t}(a) X_t(a). \end{align*} By definition of the value of a linear program, we can continue \refeq{eq:LPGap-1} as follows: \begin{align} \Gap(\vec{X}_t) &\leq \ValP(\vec{X}_t) - \Val(\vec{X}_t) \nonumber \\ &\leq \alpha\cdot \frac{\hat{r}(\vec{X}_t) + \operatorname{Rad}_t(\vec{X}_t)}{c^{-}_{\max}(\vec{X}_t)} - \alpha \cdot \frac{r(\vec{X}_t)}{c_{\max}(\vec{X}_t)}. \label{eq:defnGap} \end{align} Under the clean event in \refeq{eq:cleanEvent}, we continue \refeq{eq:defnGap} as follows: \begin{align} & \leq \alpha \left( \frac{2 \operatorname{Rad}_t(\vec{X}_t) + r(\vec{X}_t) }{ c^{-}_{\max}(\vec{X}_t) } - \frac{r(\vec{X}_t)}{c_{\max}(\vec{X}_t)} \right). \label{eq:clEventLPGapUsage} \end{align} Since time is one of the resources, $c^{-}_{\max}(\vec{X}_t) \geq \frac{B_{\mathtt{sc}}}{T}$. Thus, we continue \refeq{eq:clEventLPGapUsage} as follows: \begin{align} & \leq 2 \operatorname{Rad}_t(\vec{X}_t) + \alpha r(\vec{X}_t) \left( \frac{1}{c^{-}_{\max}(\vec{X}_t)} - \frac{1}{c_{\max}(\vec{X}_t)} \right) \nonumber \\ & = 2 \operatorname{Rad}_t(\vec{X}_t) + \alpha r(\vec{X}_t) \left( \frac{\operatorname{Rad}_t(\vec{X}_t)}{c^{-}_{\max}(\vec{X}_t) \cdot c_{\max}(\vec{X}_t)} \right) \nonumber \\ & \leq 2 \operatorname{Rad}_t(\vec{X}_t) + \frac{\operatorname{Rad}_t(\vec{X}_t)}{c^{-}_{\max}(\vec{X}_t)} \label{eq:clEventLPGapUsage2} \\ & \leq \left( 2 + \tfrac{T}{B_{\mathtt{sc}}} \right) \operatorname{Rad}_t(\vec{X}_t) \label{eq:clEventLPGapUsage3} \end{align} \refeq{eq:clEventLPGapUsage2} uses the fact that $\alpha \frac{r(\vec{X}_t)}{c_{\max}(\vec{X}_t)} \leq \frac{B}{T} \frac{r(\vec{X}_t)}{c_{\max}(\vec{X}_t)} = V(\vec{X}_t) \leq 1$. \refeq{eq:clEventLPGapUsage3} uses the fact that time is one of the resources and thus, $c^{-}_{\max}(\vec{X}_t) \geq \frac{B_{\mathtt{sc}}}{T}$. 
\end{proof} \subsection{Finishing the proof of Theorem~\ref{thm:UCBSmallNonArms}} \begin{claim}\label{cl:simpleRegret-to-Gap} Fix round $t$, and assume the ``clean event" in \eqref{eq:cleanEvent}. Then \[ \OPTDP/T - r(\vec{X}_t) \leq \Gap(\vec{X}_t) + \eta_{\textsc{lp}}.\] \end{claim} \begin{proof} By \eqref{eq:cleanEvent} and because $\vec{X}_t$ is the solution to the optimistic LP, we have \[ \max_{j\in [d]} c_j(\vec{X}_t) \geq \max_{j\in [d]} c^-_j(\vec{X}_t) = \nicefrac{B}{T}\; (1-\eta_{\textsc{lp}}).\] It follows that $r(\vec{X}_t) \geq V(\vec{X}_t)(1-\eta_{\textsc{lp}})$. Finally, we know that $\OPT_{\term{LP}}\geq \OPTDP/T$. \end{proof} Condition on \eqref{eq:cleanEvent}, and on the high-probability event in Lemma~\ref{lm:DistrConfSum-main}. (Take the union bound in Lemma~\ref{lm:DistrConfSum-main} over all thresholds $\theta_0\geq 1/\sqrt{T}$, \eg over an exponential scale.) Fix $\eps>0$. By Claim~\ref{cl:simpleRegret-to-Gap} and Lemma~\ref{lem:MainLemmaUCBBwK}, any round $t$ with simple regret at least $\eps$ satisfies \[ \eps \leq \OPTDP/T-r(\vec{X}_t) \leq \eta_{\textsc{lp}} + \left( 2 + \nicefrac{T}{B_{\mathtt{sc}}} \right) \operatorname{Rad}_t(\vec{X}_t).\] Therefore, $\operatorname{Rad}_t(\vec{X}_t) \geq \theta_0$, where $\theta_0 = \frac{\epsilon - \eta_{\textsc{lp}}}{ \left( 2 + \nicefrac{T}{B_{\mathtt{sc}}} \right)} \geq \Theta(\epsilon)$ when $\epsilon \geq 2 \eta_{\textsc{lp}}$. Now, the theorem follows from Lemma~\ref{lm:DistrConfSum-main}. Note that when $\epsilon < 2 \eta_{\textsc{lp}}$, the bound on the number of rounds in the theorem exceeds $T$ and hence is vacuous. \subsection{The standard confidence-sum bound: proof of \refeq{eq:ActConfSum-UB}} \label{app:confSum-standard} Let us prove \refeq{eq:ActConfSum-UB} for the sake of completeness.
By definition of $\operatorname{Rad}_t(a_t)$ from \refeq{eq:MaxConfRadUB}, \[ \operatorname{Rad}_t(a_t) = f(N_t(a_t)), \qquad f(n) := \min\rbr{ 1,\;\sqrt{C_{\mathtt{rad}}/n} + C_{\mathtt{rad}}/n }, \] where $N_t(a)$ is the number of times arm $a$ was chosen before round $t$. Since $f$ is non-increasing, the sum below is maximized when all arms are chosen equally often. Therefore: \begin{align*} \sum_{t \in S} \operatorname{Rad}_t(a_t) &\leq \sum_{a \in [K]} \sum_{n=1}^{|S|/K} f(n) \\ &\leq \sum_{a \in [K]} \rbr{ f(1) + \int_{x=1}^{|S|/K} f(x)\, \mathrm{d} x } \leq 3 \rbr{ \sqrt{ K |S|\, C_{\mathtt{rad}}} + K\cdot\ln |S| \cdot C_{\mathtt{rad}} }. \end{align*} \section{Preliminaries} \label{sec:prelims} \input{prelims} \section{Logarithmic regret bounds} \label{sec:algorithm} \input{logRegret} \section{Lower Bounds} \label{sec:LB} \input{LB-sqrt} \newpage \section{Simple regret of \term{UcbBwK} algorithm} \label{sec:simple-regret} \input{simpleRegret} \input{appx_simple.tex} \section{Reduction from \term{BwK} to stochastic bandits} \label{sec:extensions} \input{reductions} \subsection{The general result} \input{appx_extensions} \section{Discussion: significance and novelty} \input{conclusions} \clearpage \begin{small} \bibliographystyle{plainnat} \section{Checklist} \begin{itemize} \item[(1a)] Q: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? A: Yes, also see Section~\ref{sec:significance} for additional discussion. \item[(1b)] Q: Have you read the ethics review guidelines and ensured that your paper conforms to them? A: Yes \item[(1c)] Q: Did you discuss any potential negative societal impacts of your work? A: This is a theoretical paper which elucidates regret bounds of an existing bandit algorithm, and advances theoretical understanding of multi-armed bandits. Thus, it is very unlikely to directly cause any negative societal impact. As this paper contributes to the field of bandit learning, it may indirectly contribute to whatever societal impact the entire field might have, both positive or otherwise.
While we are not aware of any concrete evidence of actual or likely societal harms caused by bandit algorithms, it is important to be mindful that such harms might exist in some application scenarios. By definition, bandit learning trades off exploration vs exploitation, which in many applications means worsening the experience of some users for the sake of improving the aggregate experience of all users (\ie with positive societal effects). However, \citet{externalities-colt18} suggest artificial constructions in which users are served by a bandit algorithm, and the presence of one population of users creates a negative externality upon another. \item[(1d)] Q: Did you describe the limitations of your work? A: Yes. Our theoretical results hold only for the model and assumptions as specified in the paper. Both the model and the assumptions are carefully spelled out (resp., in Introduction/Preliminaries and next to the respective theorems). Incidentally, the limitations of our positive results on logarithmic regret are in fact a ``feature", as we prove that such limitations are in fact inevitable for any algorithm. \item[(2a)] Q: Did you state the full set of assumptions of all theoretical results? A: yes, see above. \item[(2b)] Q: Did you include complete proofs of all theoretical results? A: Yes. We provide full proofs for all results, either in the body of the paper or in the supplement. If the full proof is in the supplement, we include a proof sketch in the body. \end{itemize} Checklist items 3, 4, 5 not applicable to this paper. \section{Introduction} \label{sec:intro} We study multi-armed bandit problems with supply or budget constraints. Multi-armed bandits is a simple model for \emph{exploration-exploitation tradeoff}, \ie the tension between acquiring new information and making optimal decisions. It is an active research area, spanning computer science, operations research, and economics. 
Supply/budget constraints arise in many realistic applications, \eg a seller who dynamically adjusts the prices or product assortment may have a limited inventory, and an algorithm that optimizes ad placement is constrained by the advertisers' budgets. Other motivating examples concern repeated auctions, crowdsourcing markets, and network routing. We consider a general model called \emph{Bandits with Knapsacks} (\emph{BwK}), which subsumes the examples mentioned above. There are $d\geq 2$ \emph{resources} that are consumed over time, one of which is time itself. Each resource $i$ starts out with budget $B_i$. In each round $t$, the algorithm chooses an action (\emph{arm}) $a=a_t$ from a fixed set of $K$ actions. The outcome is a vector in $[0,1]^{d+1}$: it consists of a reward and the consumption of each resource. This vector is drawn independently from some distribution over $[0,1]^{d+1}$, which depends on the chosen arm but not on the round, and is not known to the algorithm. The algorithm observes \emph{bandit feedback}, \ie only the outcome of the chosen arm. The algorithm stops at a known time horizon $T$, or when the total consumption of some resource exceeds its budget. The goal is to maximize the total reward, denoted $\term{REW}$. The presence of supply/budget constraints makes the problem much more challenging. First, the algorithm's choices constrain what it can do in the future. Second, the algorithm is no longer looking for arms with maximal expected per-round reward (because such arms may consume too much of some resource). Third, the best fixed distribution over arms can be much better than the best fixed arm. Accordingly, we compete with the \emph{best fixed distribution} benchmark: the total expected reward of the best distribution, denoted $\OPTFD$. All this complexity is already present even when $d=2$, \ie when there is only one resource other than time, and the minimal budget is $B = \min_i B_i = \Omega(T)$.
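The protocol above, and the advantage of a fixed distribution over any fixed arm, can be illustrated by a small simulation. This is a sketch with illustrative, assumed parameters: two arms with Bernoulli outcomes, $d=2$ with the time resource handled by the horizon $T$.

```python
# Minimal simulator for the BwK protocol described above (a sketch; the
# two-arm instance and Bernoulli outcomes are illustrative assumptions).
import random

def run_bwk(policy, mean_reward, mean_cons, B, T, rng):
    """One episode: stop at horizon T or when the non-time budget B runs out."""
    budget, rew = B, 0.0
    for t in range(T):
        a = policy(t)
        # outcome vector: (reward, consumption), drawn independently each round
        r = 1.0 if rng.random() < mean_reward[a] else 0.0
        c = 1.0 if rng.random() < mean_cons[a] else 0.0
        if c > budget:  # resource exhausted: the algorithm stops
            break
        budget, rew = budget - c, rew + r
    return rew

rng = random.Random(0)
mean_reward, mean_cons = [0.2, 0.9], [0.0, 1.0]
B, T = 100, 400

# Best fixed arm: arm 1 earns about 0.9*B = 90, arm 0 earns about 0.2*T = 80.
# Best fixed distribution: play arm 1 w.p. B/T = 1/4, so the budget lasts
# all T rounds, earning about (0.25*0.9 + 0.75*0.2)*T = 150.
mixed = lambda t: 1 if rng.random() < B / T else 0
print(run_bwk(mixed, mean_reward, mean_cons, B, T, rng))
```

On this instance the mixed distribution earns roughly $150$ in expectation, well above either fixed arm, which is exactly why the benchmark $\OPTFD$ is taken over distributions.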
The \term{BwK} problem was introduced in \cite{BwK-focs13-conf,BwK-focs13} and has been extensively studied since. The optimal worst-case regret rate is well-understood. In particular, it is $\tilde{\mO}(\sqrt{KT})$ when $B = \Omega(T)$. We present several results that go beyond the worst-case perspective: \fakeItem[1.] We provide a full characterization of instance-dependent regret rates. In stochastic bandits, one obtains regret $\mO\rbr{ \tfrac{K}{\Delta} \log T}$, where $\Delta$ is the \emph{reward-gap}: the gap in expected reward between the best and the second-best arm. We work out whether, when, and how such results extend to \term{BwK}. \fakeItem[2.] We show that \emph{simple regret}, which tracks the algorithm's performance in a given round, can be small in all but a few rounds. As in stochastic bandits, simple regret can be at least $\eps$ in at most $\tilde{\mO}(K/\eps^2)$ rounds, and this is achieved for all $\eps>0$ simultaneously. \fakeItem[3.] We improve all results mentioned above for a large number of arms, assuming some helpful structure. In fact, we provide a general ``reduction" from \term{BwK} to stochastic bandits, and apply this reduction to three well-studied scenarios from stochastic bandits. \vspace{2mm} Our algorithmic results focus on \term{UcbBwK}, a \term{BwK} algorithm from \mycite{AgrawalDevanur-ec14} which implements the ``optimism under uncertainty" paradigm and attains the optimal worst-case regret bound. We provide new analyses of this algorithm along the above-mentioned themes. \section{Lower Bounds [OBSOLETE!]} \label{sec:lowerBoundLog} We provide several lower bounds to complement Theorem~\ref{thm:logRegretUpper}. Essentially, we show that the $\Dmin^{-2}$ dependence is optimal, and that $d\leq 2$ resources and best-arm-optimality are necessary. We present the theorems and intuition first; the proofs are presented in the subsections.
\subsection{Results and intuition} We consider problem instances with three arms $\{A_1,A_2,\term{null}\}$, Bernoulli rewards, and $d\geq 2$ resources, one of which is time; call them \emph{$3\times d$ instances}. Each lower bound constructs a pair of problem instances $\mI,\mI'$ such that any algorithm incurs high regret on at least one of them. The two instances are similar: they have the same parameters $T,K,d,B$, and the mean reward and the mean consumption for each arm and each resource differ by at most $\eps$; then, one instance is called an \emph{$\eps$-perturbation} of another. In fact, each lower bound exhibits a \emph{family} of such $\mI,\mI'$ pairs. Our first lower bound concerns the dependence on $\Dmin$, and holds for any given value thereof. \begin{theorem}[dependence on $\Dmin$]\label{thm:mainLB} Posit an arbitrary time horizon $T$, budget $B = \nicefrac{T}{2}$, and $d=2$ resources (including time). For any $\eps\in(\nicefrac{1}{\sqrt{T}},\nicefrac{1}{4})$, there exist two $3\times 2$ problem instances $\mI$, $\mI'$ with $\Dmin=\eps/2$ such that any algorithm incurs regret $\OPTFD-\E[\term{REW}] \geq \Omega\rbr{\Dmin^{-2}}$ on either $\mI$ or $\mI'$. In fact, both problem instances have deterministic resource consumption. \end{theorem} The problem instances for Theorem~\ref{thm:mainLB} are constructed as follows. For arm $A_1$, resource consumption is deterministically $\nicefrac{1}{2}$, and reward is drawn from $\term{Bernoulli}(\nicefrac{1}{4})$. For arm $A_2$, the consumption is deterministically $1$, and the reward is drawn from $\term{Bernoulli}(\nicefrac{1}{2} \pm \nicefrac{\eps}{2})$. \footnote{That is, the reward for arm $A_2$ is $\term{Bernoulli}(\nicefrac{1}{2} + \nicefrac{\eps}{2})$ for one instance, and $\term{Bernoulli}(\nicefrac{1}{2} - \nicefrac{\eps}{2})$ for another.} That $\Dmin$ is as claimed follows immediately from the construction. 
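The instance pair just constructed can be written out as outcome samplers. This is a sketch: the null arm is assumed to have zero reward and zero consumption, which is the usual convention for the null arm.

```python
# Samplers for the instance pair described above (a sketch; the null arm
# is assumed to have zero reward and zero consumption).
import random

def make_instance(sign, eps, rng):
    """Outcome sampler for instance I (sign=+1) or I' (sign=-1)."""
    def outcome(arm):
        if arm == "A1":   # deterministic consumption 1/2, Bernoulli(1/4) reward
            return (1.0 if rng.random() < 0.25 else 0.0, 0.5)
        if arm == "A2":   # deterministic consumption 1, Bernoulli(1/2 +- eps/2)
            return (1.0 if rng.random() < 0.5 + sign * eps / 2 else 0.0, 1.0)
        return (0.0, 0.0)  # null arm
    return outcome

rng = random.Random(0)
I, I_prime = make_instance(+1, 0.1, rng), make_instance(-1, 0.1, rng)
# The two instances differ only in A2's mean reward, by eps = 0.1, so
# distinguishing them requires on the order of eps^-2 pulls of A2.
```

Note that resource consumption is deterministic in both instances, as claimed in the theorem statement.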
Because of the small difference between the two instances, any algorithm will choose a ``wrong" arm at least $\Omega(\eps^{-2})$ times on one of the instances, which in turn implies $\Omega(\eps^{-2})$ regret. The full proof is in Section~\ref{sec:LB-gap}. \vspace{2mm} Next, we argue that regret $\Omega(\sqrt{T})$ is essentially inevitable if a problem instance $\mI_0$ is far from best-arm-optimal \asedit{or if there are $d>2$ resources. For both results, we construct two problem instances $\mI,\mI'$ that are small perturbations of $\mI_0$, and prove that any algorithm suffers high regret on one of them.} This is a fairly general result: unlike many bandit lower bounds that focus on a specific problem instance, we allow a wide range of instances $\mI_0$, as per the following assumption. \newcommand{c_{\mathtt{LB}}}{c_{\mathtt{LB}}} \begin{assumption}\label{ass:LBAss} There exists an absolute constant $c_{\mathtt{LB}}\in (0,\nicefrac12)$ such that: \begin{enumerate} \item \label{boundReward} $r(A_i),\,c_j(A_i)\in (c_{\mathtt{LB}},\,1-c_{\mathtt{LB}})$ for each arm $i\in \{1,2\}$ and each resource $j$. \item \label{rewardDiff} $r(A_2) - r(A_1) \geq c_{\mathtt{LB}}$ and $c_j(A_2) - c_j(A_1) \geq c_{\mathtt{LB}}$ for every resource $j \in [d]$. \item \label{nonNegativity} Mean consumption is strictly positive: $c_j(A_i) > 0$ for each arm $A_i$ and resource $j \in [d]$. \item \label{boundOPT} $\term{OPT} - B \geq c_{\mathtt{LB}}\cdot T$. \item \label{boundLagrange} Lagrangian gap is not extremely small: $\Dmin \geq c_{\mathtt{LB}}/\sqrt{T}$. \end{enumerate} \end{assumption} \begin{theorem}\label{thm:LB-root} Posit an arbitrary time horizon $T$, budget $B$, and $d$ resources (including time). Fix any $3\times d$ problem instance $\mI_0$ which satisfies Assumption~\ref{ass:LBAss}. 
Assume either one of the following: \begin{OneLiners} \item[(a)] \label{cor:bestArmOptimal} $d=2$ and $\mI_0$ is far from being best-arm-optimal, in the sense that \begin{align}\label{eq:cor:bestArmOptimal} \text{There exists an optimal solution $\vec{X}^*$ such that $X(A_1)>0$ and $X(A_2) \geq c_{\mathtt{LB}}$.} \end{align} \item[(b)] \label{cor:multipleResources} $d>2$ \end{OneLiners} \noindent Then there exist problem instances $\mI,\mI'$, which are $\mathcal{O}\rbr{\nicefrac{1}{\sqrt{T}}}$-perturbations of $\mI_0$, such that \begin{align}\label{eq:LB-guarantee} \text{Any algorithm incurs regret $\OPTFD-\E[\term{REW}] \geq \Omega\rbr{ c_{\mathtt{LB}}^{-4}\; \sqrt{T}}$ on $\mI$ or $\mI'$} \end{align} \end{theorem} \begin{remark} The problem instances in the theorem require randomized resource consumption. This is crucial for part (b) given our positive result for deterministic consumption (Theorem~\ref{thm:deterministic}). \asedit{For part (a), instance $\mI$ has the same expected outcomes as $\mI_0$ (but possibly different outcome distributions); we call such problem instances \emph{mean-twins}.} \end{remark} Both parts of Theorem~\ref{thm:LB-root} are corollaries of a more generic lower bound for $3\times d$ problem instances which focuses on linear independence of per-resource consumption vectors \begin{align}\label{eq:per-resource-vector} \vec{c}_j := \rbr{ c_j(A_1),\, c_j(A_2),\, c_j(\term{null})} \in [0, 1]^3, \quad \text{resources $j\in[d]$}. \end{align} \begin{theorem}\label{thm:generalLB} Posit an arbitrary time horizon $T$, budget $B$, and $d\geq 2$ resources (including time). Fix any $3\times d$ problem instance $\mI_0$ that satisfies Assumption~\ref{ass:LBAss} and \refeq{eq:cor:bestArmOptimal}. Assume that the consumption vectors in \refeq{eq:per-resource-vector} are linearly independent. 
\asedit{Then there are instances $\mI,\mI'$ which are $\eps$-perturbations of $\mI_0$, with $\eps = \rbr{ \nicefrac{c_{\mathtt{LB}}^2}{(1-c_{\mathtt{LB}})} } / \sqrt{T}$, which satisfy \eqref{eq:LB-guarantee}. In fact, $\mI$ is a mean-twin of $\mI_0$.} \newcommand{\widetilde{\mI}_0}{\widetilde{\mI}_0} The corollaries are obtained as follows. For Theorem~\ref{thm:LB-root}(a), problem instance $\mI_0$ trivially satisfies all preconditions in Theorem~\ref{thm:generalLB}. Indeed, letting time be resource $1$, the per-resource vectors are $\vec{c}_1 = (0,0,1)$ and $\vec{c}_2 = (\,\cdot\,,\,\cdot\,,\,0)$, hence they are linearly independent. For Theorem~\ref{thm:LB-root}(b), we use some tricks from the literature to transform the original problem instance $\mI_0$ to another instance $\widetilde{\mI}_0$ which satisfies \refeq{eq:cor:bestArmOptimal} and the linear independence condition. The full proof is in Section~\ref{sec:LB-d}. The intuition for Theorem~\ref{thm:generalLB} is as follows. Given an instance $\mI_0$, instance $\mI'$ is constructed by increasing, for arm $A_2$, the mean consumption on all resources except time by $\mathcal{O}\rbr{\nicefrac{1}{\sqrt{T}}}$, keeping everything else the same. Instance $\mI$ is a mean-twin of $\mI_0$: it has the same expected outcomes as $\mI_0$, but possibly different outcome distributions. Because of the small difference between the two instances, any algorithm will choose a sufficiently ``wrong" distribution over arms sufficiently often. The assumption in \refeq{eq:cor:bestArmOptimal} and the linear independence condition are needed to ensure that the algorithm's ``wrong" choices result in large regret. The full proof is in Section~\ref{sec:LB-generic}. \OMIT{ \kaedit{The problem instances for Theorem~\ref{cor:multipleResources} is constructed as follows. Define \[ \zeta_1 := \min \left \{ \tfrac{1}{\sqrt{T}}, \{c_j(A_i)\}_{j \in [d], i \in [2]}, \frac{1}{(d!)^2} \right \}, \qquad and \] \[ \zeta_2 := \min \left\{ \{c_j(A_i)\}_{j \in [d], i \in [2]}, \tfrac{1}{\sqrt{T}} \right \}.
\] Given instance $\mI_0$, we construct instance $\mI$ by decreasing the mean consumption on arm $A_i$ and resource $j$ (except time) by $\zeta_1^j + u_j(A_i)$, where $u_j(a) \sim [-\zeta_2, \zeta_2]$ uniformly at random. We keep the mean rewards the same. As before, $\mI'$ is obtained from $\mI$ by decreasing the mean consumption on all resources, except time, of one arm (say $A_2$) by $\eps = \mathcal{O}\left(\frac{1}{\sqrt{T}}\right)$ while keeping all other mean rewards/consumptions the same as in $\mI$.} } \subsection{$O(\log T)$ regret analysis for \term{UcbBwK}} We analyze a version of \term{UcbBwK} which ``prunes out'' the null arm; call it \term{PrunedUcbBwK}. (This modification can only improve regret, so it retains the worst-case regret \eqref{eq:regret-opt} of \term{UcbBwK}.) We provide a new analysis of this algorithm for $d=2$ and best-arm-optimality. We analyze the sensitivity of the ``optimistic'' linear relaxation to small perturbations in the coefficients, and prove that the best arm is chosen in all but a few rounds. The key is to connect each arm's confidence term with its Lagrangian gap. This gives a regret rate of $\mO(K\Dmin^{-2}\,\log T)$. To improve it to $\mO(K\Dmin^{-1}\,\log T)$, we use a careful counting argument which accounts for the rewards and consumption of non-optimal arms. Algorithm \term{PrunedUcbBwK} is formally defined as follows: in each round $t$, call \term{UcbBwK} as an oracle, repeat until it chooses a non-null arm $a$, and set $a_t=a$. (In one ``oracle call'', \term{UcbBwK} outputs an arm and receives an outcome vector for this arm as input.) The total number of oracle calls is capped at $\Nmax = \alpha_0 \cdot T^2\;\log T$, with a sufficiently large absolute constant $\alpha_0$ which we specify later in Claim~\ref{cl:null-rounds}. Formally, after this many oracle calls the algorithm can only choose the null arm.
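The pruning wrapper is simple enough to sketch in code. Below is a minimal illustration (ours, not the authors' implementation): \texttt{oracle} stands in for a single oracle call to \term{UcbBwK}, which proposes an arm; feeding the observed outcome vector back to the oracle is elided.

```python
import math

def pruned_ucb_bwk(oracle, T, alpha0=10, null_arm=0):
    """Sketch of PrunedUcbBwK: in each of T rounds, repeatedly call the
    UcbBwK oracle until it proposes a non-null arm, which is then played.
    The total number of oracle calls across all rounds is capped at
    Nmax = alpha0 * T^2 * log T; once the cap is hit, only null is played."""
    n_max = int(alpha0 * T * T * math.log(max(T, 2)))
    calls = 0
    chosen = []
    for _ in range(T):
        arm = null_arm
        while calls < n_max:
            calls += 1
            proposal = oracle()      # one "oracle call" to UcbBwK
            if proposal != null_arm:
                arm = proposal       # first non-null proposal is played
                break
        chosen.append(arm)
    return chosen
```

For example, an oracle that proposes the null arm twice before each non-null proposal still yields a non-null arm in every round, at the cost of three oracle calls per round.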
\begin{definition}\label{def:best-arm-optimal} An instance of \term{BwK} is called \emph{best-arm-optimal} with best arm $a^*\in [K]$ if the following conditions hold: (i) $\OPT_{\term{LP}} = \tfrac{B}{T}\cdot r(a^*) / \max_{j\in [d]} c_j(a^*)$, (ii) the linear program \eqref{lp:primalAbstract} has a unique optimal solution $\vec{X}^*$ supported on $\{a^*,\term{null}\}$, and (iii) $X^*(a^*) > \frac{3\sqrt{B} \log(KdT)}{T}$. \end{definition} \noindent Part (ii) here is essentially w.l.o.g.;% \footnote{Part (ii) holds almost surely given part (i) if one adds tiny noise, \eg a mean-$0$ Gaussian with variance $\eps$, for any $\eps>0$, independently to each coefficient in the LP \eqref{lp:primalAbstract}, as per Prop.~3.1 in \mycite{megiddo1988perturbation}. To implement this, an algorithm can precompute the noise terms and add them consistently to the observed rewards and consumptions.} part (iii) states that the optimal value should not be tiny. We assume $d=2$ and best-arm-optimality throughout this section without further mention. In particular, the linear program~\eqref{lp:primalAbstract} has a unique optimal solution $\vec{X}^*$, and its support has only one arm $a^*\neq \term{null}$. We use $c(a)$ to denote the mean consumption of the non-time resource on arm $a$. We distinguish two cases, depending on whether $c(a^*)$ is very close to $\nicefrac{B}{T}$. \begin{theorem}\label{thm:logRegretUpper} Fix a best-arm optimal problem instance with only one resource other than time (\ie $d=2$). Consider Algorithm \term{PrunedUcbBwK} with parameter $\eta_{\textsc{lp}}\leq \tfrac12$ in \eqref{eq:prelims-eta}. Then \begin{itemize} \item[(i)] $ \OPTFD - \E[\term{REW}] \leq \mO\rbr{\tfrac{\OPTFD}{B} \cdot \Psi}$, where $\Psi := \mO\rbr{\nicefrac{K}{T} \cdot \Dmin^{-2} \cdot \logThm}$.
\item[(ii)] Moreover, if $|c(a^*)-\nicefrac{B}{T}| >\Omega(\Psi/T)$, then \begin{align}\label{eq:logRegretMainEq} \OPTFD - \E[\term{REW}] \textstyle \leq \mO(\; \sum_{\myArms} G_\term{LAG}^{-1}(a)\; \logThm\;). \end{align} \end{itemize} \end{theorem} The dependence of \refeq{eq:logRegretMainEq} on $G_\term{LAG}(\cdot)$ is optimal: indeed, it is already optimal in the unconstrained case, when the Lagrangian gap specializes to the reward gap, as per the lower bound in \mycite{Lai-Robbins-85}. In particular, \refeq{eq:logRegretMainEq} holds if $G_\term{LAG}>T^{-1/4}$ and $|c(a^*)-\nicefrac{B}{T}| >\mO(T^{-1/2})$. The constant in $\mO(\cdot)$ is $48$ in both parts of the theorem; the analysis only suppresses constants from concentration bounds and from Lemma~\ref{cl:sensitivity-body}. \OMIT{ \begin{remark} Theorem~\ref{thm:logRegretUpper} holds even if the budget $B$ is so small that the worst-case regret bound \eqref{eq:regret-opt} is vacuous. In particular, suppose all arms $\myArms$ have small mean reward $r(a)$ and large mean consumption $c(a)$. Then their Lagrangian gap is large, because from \eqref{eq:gLagSimplified} we have that it is an increasing function of $c(a)$ and a decreasing function of $r(a)$ and $B$. So they do not contribute much to regret. \end{remark} } \OMIT{ \begin{theorem}[worst-case]\label{thm:worst-case} Assume $\eta_{\textsc{lp}}\leq \tfrac12$ in \eqref{eq:prelims-eta}. Then \term{PrunedUcbBwK} achieves \begin{align}\label{eq:WSRegret} \OPTDP - \E[\term{REW}] \leq \mO\left( \sqrt{\logThm} \left( \OPTDP\sqrt{\nicefrac{K}{B}} + \sqrt{K\,\OPTDP} \right) \right). \end{align} \end{theorem} \begin{remark}\label{rem:logRegret-worstCase} \refeq{eq:WSRegret} is the worst-case optimal regret bound \eqref{eq:regret-opt} and follows directly from the analysis in \cite{AgrawalDevanur-ec14}.
\kacomment{We don't need to prove anything here.} \end{remark} } \subsubsection{Basic analysis: proof of Theorem~\ref{thm:logRegretUpper}(i)} We analyze \term{UcbBwK} in a relaxed version of \term{BwK}, where an algorithm runs for exactly $\Nmax$ rounds, regardless of the time horizon and the resource consumption; call it \emph{\relaxedBwK}. The algorithms are still parameterized by the original $B,T$, and observe the resource consumption. We sometimes condition on the high-probability event that \eqref{eq:cleanEvent} holds for all rounds $t\in [\Nmax]$; call it the ``clean event''. Recall that its probability is at least $1-\frac{\mO(\logThm)}{T^2}$. We prove that the best arm $a^*$ is chosen in all but a few rounds. The crux is an argument about the sensitivity of linear programs to perturbations. More specifically, we argue about the sensitivity of the support of the optimal solution of the linear relaxation \eqref{lp:primalAbstract}. \begin{lemma}[LP-sensitivity] \label{cl:sensitivity-body} Consider an execution of \term{UcbBwK} in \relaxedBwK. Under the ``clean event'', $\operatorname{Rad}_t(a)\geq \tfrac{1}{4}\,G_\term{LAG}(a)$ for each round $t$ and each arm $a\in \operatorname{supp}(\vec{X}_t)\setminus\{a^*,\term{null}\}$. \end{lemma} \begin{myproof}[Sketch] We use a standard result about LP-sensitivity; the details are spelled out in Appendix~\ref{app:LP-sensitivity}. We apply this result via the following considerations. We treat the optimistic LP \eqref{lp:UCBBwK} as a perturbation of (the rescaled version of) the original LP \eqref{lp:primalAbstract}. We rely on the perturbations being ``optimistic'' (\ie upper-bounding rewards and lower-bounding resource consumption). We use the clean event to upper-bound the perturbation size by the confidence radius.
Finally, we prove that \begin{align}\label{eq:gLagSimplified} \Dmin(a) =\textstyle \frac{T}{B} \sum_{j \in [d]} \;\lambda^*_j c_j(a) - r(a), \end{align} and use this characterization to connect the Lagrangian gap to the allowed perturbation size. \end{myproof} We rely on the following fact, which easily follows from the definition of the confidence radius: \begin{claim}\label{cl:ConfSum-main} Consider an execution of some algorithm in \relaxedBwK. Fix a threshold $\theta >0$. Then each arm $a \neq \term{null}$ can only be chosen in at most $\mO\rbr{ \theta^{-2}\logThm }$ rounds $t$ with $\operatorname{Rad}_t(a)\geq \theta$. \end{claim} \begin{corollary}\label{cor:support-optimal} Consider an execution of \term{UcbBwK} in \relaxedBwK. Under the clean event, each arm $a\not\in \{a^*,\term{null}\}$ is chosen in at most $N_0(a) := \mO\rbr{G_\term{LAG}^{-2}(a)\;\logThm}$ rounds. \end{corollary} \OMIT{ \begin{myproof}[of Corollary~\ref{cor:support-optimal}] Consider a time-step $t$ when an arm $a\in \operatorname{supp}(\vec{X}_t)\setminus\{a^*,\term{null}\}$ is sampled. Then $\operatorname{Rad}_t(a)\geq \tfrac{G_\term{LAG}(a)}{4}$ by Lemma~\ref{cl:sensitivity-body}. By Claim~\ref{cl:ConfSum-main}, there are at most $\mO\rbr{ G_\term{LAG}(a)^{-2}\logThm }$ such rounds for a given arm $a \in [K]$. Thus, we get the claim. \end{myproof} } This follows from Lemma~\ref{cl:sensitivity-body} and Claim~\ref{cl:ConfSum-main}. Next, the null arm is not chosen too often: \begin{claim}\label{cl:null-rounds} Consider an execution of \term{UcbBwK} in \relaxedBwK. With probability at least $1-\mO(T^{-3})$, the following happens: the null arm cannot be chosen in any $\alpha_0 \,T\,\log(T)$ consecutive rounds, for a large enough absolute constant $\alpha_0$. Consequently, a non-null arm is chosen in at least $T$ rounds. \end{claim} \begin{myproof}[Sketch] Fix round $t$, and suppose \term{UcbBwK} chooses the null arm in $N$ consecutive rounds, starting from $t$.
No new data is added, so the optimistic LP stays the same throughout. Consequently, the solution $\vec{X}_t$ stays the same, too. Thus, we have $N$ consecutive independent draws from $\vec{X}_t$ that return $\term{null}$. It follows that $r(\vec{X}_t) <\nicefrac{1}{T}$ with high probability, \eg by \eqref{eq:lem:radBound}. On the other hand, assume the clean event. Then $r(\vec{X}_t) \geq (1-\eta_{\textsc{lp}})\;\OPT_{\term{LP}}$ by definition of the optimistic LP, and consequently $r(\vec{X}_t) \geq (1-\eta_{\textsc{lp}})\, \OPTDP/T$. We obtain a contradiction. \end{myproof} Corollary~\ref{cor:support-optimal} and Claim~\ref{cl:null-rounds} imply a strong statement about the pruned algorithm. \begin{claim}\label{cl:FinalAlg} Consider an execution of \term{PrunedUcbBwK} in the (original) \term{BwK} problem. With probability at least $1-\mO(T^{-2})$, each arm $a\not\in\{a^*,\term{null}\}$ is chosen in at most $N_0(a)$ rounds, and arm $a^*$ is chosen in the $T-N_0$ remaining rounds, where $N_0 := \sum_{\myArms}N_0(a)$. \end{claim} We take a very pessimistic approach to obtain Theorem~\ref{thm:logRegretUpper}(i): we only rely on the rewards collected by arm $a^*$, and we treat suboptimal arms as if they bring no reward and consume the maximal possible amount of resource. We formalize this idea as follows (see Appendix~\ref{appx:logRegretSection} for details). For a given arm $a$, let $\term{REW}(a)$ be the total reward collected by arm $a$ in \term{PrunedUcbBwK}. Let $\term{REW}(a\mid B_0, T_0)$ be the total reward of an algorithm that always plays arm $a$ if the budget and the time horizon are changed to $B_0\leq B$ and $T_0 \leq T$, respectively. Note that \begin{align}\label{eq:logT-LPformula} \term{LP}(a\mid B_0,T_0) := \E[\term{REW}(a\mid B_0, T_0)] = r(a)\cdot \min(\; T_0,\tfrac{B_0}{c(a)} \;) \end{align} is the value of always playing arm $a$ in a linear relaxation with the same constraints. By best-arm-optimality, we have $\E[\term{REW}(a^*\mid B, T)] = \OPTFD$.
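In code, the single-arm value \eqref{eq:logT-LPformula} is a one-liner (a minimal sketch; the function name is ours): the arm is played until either the time horizon or the budget runs out, whichever comes first.

```python
def single_arm_lp_value(r, c, budget, horizon):
    """LP(a | B0, T0) = r(a) * min(T0, B0 / c(a)): the expected reward of
    always playing an arm with mean reward r and mean consumption c,
    until either the horizon T0 or the budget B0 is exhausted."""
    return r * min(horizon, budget / c)
```

For instance, with $r = 0.5$, $c = 0.2$, budget $10$, and horizon $100$, the budget binds after $50$ rounds, for a value of $25$.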
We observe that \begin{align}\label{cl:change-B} \E[\term{REW}(a^*\mid B_0, T_0)] \geq \tfrac{\min \left\{ T_0, B_0 \right\}}{B}\; \cdot \OPTFD. \end{align} By Claim~\ref{cl:FinalAlg}, there are at least $B_0 = B-N_0$ units of budget and at least $T_0 = T-N_0$ rounds left for arm $a^*$ with high probability. Consequently, \begin{align}\label{eq:logT-basic-reduce} \E[\term{REW}] \geq \E[\term{REW}(a^*)] \geq \E[\term{REW}(a^* \mid B_0,T_0) ] - \tilde{\mO}(\nicefrac{1}{T}). \end{align} We obtain Theorem~\ref{thm:logRegretUpper}(i) by plugging these $B_0, T_0$ into \refeq{cl:change-B}, and then using \eqref{eq:logT-basic-reduce}. \subsubsection{Tighter computation: proof of Theorem~\ref{thm:logRegretUpper}(ii)} We re-use the basic analysis via Claim~\ref{cl:FinalAlg}, but perform the final computation more carefully so as to account for the rewards and resource consumption of the suboptimal arms. Let's do some prep-work. First, we characterize $\term{REW}(a^*)$ in a more efficient way compared to \refeq{eq:logT-basic-reduce}. Let $B(a), T(a)$ denote, resp., the budget and time consumed by \term{PrunedUcbBwK} when playing a given arm $a$. We use the expectations of $B(a)$ and $T(a)$, rather than lower bounds: \begin{align} \E[\term{REW}(a)] & = r(a)\,\E[T(a)] = r(a)\,\tfrac{\E[B(a)]}{c(a)} & \nonumber \\ &= \term{LP}\rbr{ a\mid \E[B(a)], \E[T(a)] } &\quad\text{for each arm $a$}. \label{eq:expectedBudgets} \end{align} We prove \refeq{eq:expectedBudgets} via martingale techniques; see Appendix~\ref{app:martingale-arguments}. Second, we use a tighter version of \refeq{cl:change-B} (see Appendix~\ref{appx:logRegretSectionStronger}): for any $B_0\leq B$ and $T_0\leq T$, \begin{align}\label{cl:change-B-new} \term{LP}(a^*\mid B_0, T_0) \geq \OPTFD \cdot\tfrac{B_0}{B} \;/ \rbr{ \max\cbr{\tfrac{B}{T}, c(a^*)} \cdot \max\cbr{\tfrac{B_0}{T_0}, c(a^*)} }.
\end{align} Third, we lower-bound $G_\term{LAG}(a)$ in a way that removes the Lagrange multipliers $\lambda^*$: \begin{equation} \label{eq:gLagP} \Dmin(a) \geq \begin{cases} \OPTFD/T - r(a) &\text{if}\quad c(a^*) < \nicefrac{B}{T},\\ \OPTFD\cdot c(a)/B - r(a) &\text{if}\quad c(a^*) > \nicefrac{B}{T}. \end{cases} \end{equation} We derive this from \refeq{eq:gLagSimplified} and complementary slackness; see Appendix~\ref{app:gLagP}. Fourth, let $B_0 = \E[B(a^*)]$ and $T_0 =\E[T(a^*)]$ denote, resp., the expected budget and time consumed by arm $a^*$. Let $N(a) = \E[T(a)]$ be the expected number of pulls for each arm $a\not\in \{a^*,\term{null}\}$. In this notation, \refeq{eq:expectedBudgets} implies that \begin{align}\label{eq:expectedBudgets-total} \E[\term{REW}] =\textstyle \sum_{\myArms} N(a)\,r(a) + \term{LP}(a^*\mid B_0,T_0). \end{align} Now we are ready for the main computation. We consider four cases, depending on how $c(a^*)$ compares with $\nicefrac{B}{T}$ and $\nicefrac{B_0}{T_0}$. We prove the desired regret bound when $c(a^*)$ is either larger than both or smaller than both, and we prove that it cannot lie in between. The ``in-between'' cases are the only place in the analysis where we use the assumption that $c(a^*)$ is bounded away from $\nicefrac{B}{T}$. \xhdr{Case 1: $c(a^*)< \min(\nicefrac{B}{T},\nicefrac{B_0}{T_0})$}. Plugging \refeq{cl:change-B-new} into \refeq{eq:expectedBudgets-total} and simplifying, \begin{align} \E[\term{REW}] &\geq \textstyle\sum_{\myArms} \; N(a)\, r(a) + \OPTFD\cdot \nicefrac{T_0}{T}. \end{align} \noindent Re-arranging, plugging in $T_0 = T-\sum_{a\neq a^*} N(a)$, and simplifying, we obtain \begin{align} \OPTFD - \E[\term{REW}] & \textstyle \leq \sum_{\myArms} N(a) \left( \frac{\OPTFD}{T} - r(a) \right) \\ & \textstyle \leq \sum_{\myArms} N(a)\, \Dmin(a) & \EqComment{by \refeq{eq:gLagP}} \nonumber \\ & \textstyle \leq \mO (\; \sum_{\myArms} G_\term{LAG}^{-1}(a)\; \logThm \;) &\EqComment{by Claim~\ref{cl:FinalAlg}}.
\nonumber \end{align} \xhdr{Case 2: $c(a^*)> \max(\nicefrac{B}{T},\nicefrac{B_0}{T_0})$}. Plugging \refeq{cl:change-B-new} into \refeq{eq:expectedBudgets-total} and simplifying, \begin{align} \E[\term{REW}] &\geq \textstyle\sum_{\myArms} \; N(a)\, r(a) + \OPTFD\cdot \nicefrac{B_0}{B}. \end{align} Re-arranging, plugging in $B_0 = B - \sum_{a\neq a^*} N(a)\,c(a)$, and simplifying, we obtain \begin{align*} \OPTFD - \E[\term{REW}] &\textstyle \leq \sum_{\myArms} N(a) \left( \frac{\OPTFD}{B} \cdot c(a) - r(a) \right) & \\ & \textstyle \leq \sum_{\myArms} N(a)\, \Dmin(a) & \EqComment{by \refeq{eq:gLagP}}, \end{align*} and we are done by Claim~\ref{cl:FinalAlg}, just like in Case 1. \xhdr{Case 3: $\nicefrac{B_0}{T_0} \leq c(a^*) \leq \nicefrac{B}{T}$.} Let us write out $B_0$ and $T_0$: \begin{align*} c(a^*) & \geq \frac{B_0}{T_0} = \frac{B - \sum_{\myArms} N(a)\, c(a)}{T - \sum_{\myArms} N(a)} \geq \frac{B}{T} \rbr{ 1 - \frac{1}{B} \cdot \textstyle \sum_{\myArms} N(a) } \\ & \geq \nicefrac{B}{T} - O(\Psi/T), \;\text{where $\Psi$ is as in Theorem~\ref{thm:logRegretUpper}} & \EqComment{by Claim~\ref{cl:FinalAlg}}. \end{align*} Since $c(a^*)\leq \nicefrac{B}{T}$, we have $0\leq \nicefrac{B}{T} - c(a^*) \leq O(\Psi/T)$, which contradicts the premise. \xhdr{Case 4: $\nicefrac{B}{T} \leq c(a^*) \leq \nicefrac{B_0}{T_0}$.} The argument is similar to Case 3. Writing out $B_0,T_0$, we have \begin{align*} c(a^*) & \leq \frac{B_0}{T_0} = \frac{B - \sum_{\myArms} N(a) c(a)}{T - \sum_{\myArms} N(a)} \leq \frac{B}{T(1 - \tfrac{1}{T} \cdot \sum_{\myArms} N(a))}. \end{align*} By Claim~\ref{cl:FinalAlg}, $c(a^*) \leq \nicefrac{B}{T}\;(1+O(\Psi/T))$. Therefore, $0\leq c(a^*) -\nicefrac{B}{T} \leq O(\Psi/T)$, a contradiction.
\section{Preliminaries: the problem, linear relaxation, \term{UcbBwK} algorithm} \label{sec:prelims} \input{prelims} \section{Logarithmic regret bounds} \label{sec:algorithm} \input{logRegret} \section{Lower Bounds} \label{sec:LB} \input{LB-sqrt} \section{Bounds on ``simple regret''} \label{sec:simple-regret} \input{simpleRegret} \input{appx_simple.tex} \section{Extensions via confidence-sum analysis} \label{sec:extensions} \input{reductions} \input{appx_extensions} \section{Discussion: significance and novelty} \label{sec:significance} \input{conclusions} \clearpage \bibliographystyle{plainnat} \section{Simple regret of \term{UcbBwK} algorithm [OBSOLETE!!!]} \asedit{We define \emph{simple regret} in a given round $t$ as $\OPTDP/T- r(\vec{X}_t)$, where $\vec{X}_t$ is the distribution over arms chosen by the algorithm. The benchmark $\OPTDP/T$ generalizes the best-arm benchmark from stochastic bandits. If each round corresponds to a user and the reward is this user's utility, then $\OPTDP/T$ is the ``fair share'' of the total reward. We prove that with \term{UcbBwK}, all but a few users receive close to their fair share. This holds if $B>\Omega(T) \gg K$, without any other assumptions.} \begin{theorem}\label{thm:UCBSmallNonArms} Consider \term{UcbBwK}. Assume $B \geq \Omega(T)$ and $\eta_{\textsc{lp}}\leq \tfrac12$. With probability $\geq 1-O(T^{-3})$, for each $\eps>0$, there are at most $N_\eps = \mathcal{O}\left( \frac{K}{\eps^2} \log (KTd) \right)$ rounds $t$ such that $\OPTDP/T - r(\vec{X}_t) \geq \eps$. \end{theorem} To prove Theorem~\ref{thm:UCBSmallNonArms}, we consider another generalization of the ``reward gap'', which measures the difference in LP-value compared to $\OPT_{\term{LP}}$. For a distribution $\vec{X}$ over arms, the \emph{LP-gap} of $\vec{X}$ is \begin{align}\label{eq:LPgap-defn} \Gap(\vec{X}) := \OPT_{\term{LP}} - \Val(\vec{X}), \;\text{where}\; \Val(\vec{X}) := \textstyle (\nicefrac{B}{T})\;\cdot r(\vec{X}) / \rbr{\max_{j\in [d]} c_j(\vec{X})}.
\end{align} Here, $\Val(\vec{X})$ is the value of $\vec{X}$ in the LP~\eqref{lp:primalAbstract} after rescaling. It suffices to study the LP-gap because $r(\vec{X}_t)\geq \Val(\vec{X}_t) (1-\eta_{\textsc{lp}})$ for each round $t$ with high probability. This holds under the ``clean event'' in \eqref{eq:cleanEvent}, because $\vec{X}_t$ being the solution to the optimistic LP implies $\max_j c_j(\vec{X}_t) \geq \nicefrac{B}{T}\; (1-\eta_{\textsc{lp}})$. Thus, we upper-bound the number of rounds $t$ in which $\Gap(\vec{X}_t)$ is large. We do this in two steps, focusing on the confidence radius $\operatorname{Rad}_t(\vec{X}_t)$ as defined in \eqref{eq:MaxConfRadUB}. First, we upper-bound the number of rounds $t$ with large $\operatorname{Rad}_t(\vec{X}_t)$. \asedit{A crucial argument concerns \emph{confidence sums}: \begin{align}\label{eq:confSums} \textstyle \sum_{t \in S}\; \operatorname{Rad}_t(a_t) \quad\text{and}\quad \sum_{t \in S}\; \operatorname{Rad}_t(\vec{X}_t), \end{align} the sums of confidence radii over a given subset of rounds $S\subset [T]$, for, resp., actions $a_t$ and distributions $\vec{X}_t$ chosen by the algorithm.} Second, we upper-bound $\Gap(\vec{X}_t)$ in terms of $\operatorname{Rad}_t(\vec{X}_t)$. The details are spelled out in Appendix~\ref{app:simple-regret}. \newpage \section{Reduction from \term{BwK} to stochastic bandits} \asedit{We improve all regret bounds for the \term{UcbBwK} algorithm, from worst-case regret to logarithmic regret to simple regret, when the problem instance has some helpful structure. In fact, we provide a general \emph{reduction} which translates insights from stochastic bandits into results on \term{BwK}.
This reduction works as follows: if prior work on a particular scenario in stochastic bandits provides an improved upper bound on the confidence sums \eqref{eq:confSums}, this improvement propagates throughout the analyses of \term{UcbBwK}.} Specifically, suppose $\sum_{t\in S} \operatorname{Rad}_t(a_t) \leq \sqrt{\beta\, |S|}$ for all algorithms, all subsets of rounds $S\subset [T]$, and some instance-dependent parameter $\beta\ll K$; then \term{UcbBwK} satisfies \begin{OneLiners} \item[(i)] worst-case regret $\OPTDP -\E[\term{REW}] \leq O(\sqrt{\beta T})(1+\OPTDP/B)$. \item[(ii)] Theorem~\ref{thm:logRegretUpper} holds with $\Psi = \beta\,\Dmin^{-2}$ and regret $\mO\rbr{ \beta\,\Dmin^{-1} }$ in part (ii). \item[(iii)] Theorem~\ref{thm:UCBSmallNonArms} holds with $N_\eps = \mO\left( \beta\,\eps^{-2} \right)$. \end{OneLiners} \asedit{Conceptually, this works because confidence sum arguments depend only on the confidence radii, not on the algorithm that chooses the arms, and are about stochastic bandits rather than \term{BwK}. The analyses of \term{UcbBwK} in \cite{AgrawalDevanur-ec14} and the previous sections use $\beta=K$, the number of arms. The confidence sum bound with $\beta=K$ and results (i, ii, iii) for stochastic bandits follow from the analysis in \cite{bandits-ucb1}.} We apply this reduction to three well-studied scenarios in stochastic bandits: combinatorial semi-bandits \citep[\eg][]{Chen-icml13,Kveton-aistats15,MatroidBandits-uai14}, linear contextual bandits \citep[\eg][]{Auer-focs00,DaniHK-colt08,Langford-www10,Reyzin-aistats11-linear,Csaba-nips11}, and multinomial-logit (MNL) bandits \citep{Shipra-ec16}. The confidence-sum bounds are implicit in prior work on stochastic bandits, and we immediately obtain the corresponding extensions for \term{BwK}. To put this in perspective, each scenario has led to a separate paper on \term{BwK} \citep[resp.,][]{Karthik-aistats18,agrawal2015linear,Cheung-MNLBwK-arxiv17}, for the worst-case regret bounds alone.
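As a numerical illustration of the premise with $\beta = K$ (our own sanity check, not part of the reduction): for UCB-style radii $\operatorname{Rad}_t(a_t) = \min(1, \sqrt{C_{\mathtt{rad}}/N_t(a_t)})$, the confidence sum over any pull schedule of $T$ rounds is at most $2\sqrt{C_{\mathtt{rad}}\,K\,T}$, since each arm contributes at most $2\sqrt{C_{\mathtt{rad}}\,N(a)}$ and Cauchy--Schwarz bounds $\sum_a \sqrt{N(a)} \leq \sqrt{KT}$.

```python
import math

def confidence_sum(pulls, C):
    """Sum of radii Rad_t(a_t) = min(1, sqrt(C / N_t(a_t))) along a pull
    sequence, where N_t(a) counts pulls of arm a up to and including t."""
    counts = {}
    total = 0.0
    for a in pulls:
        counts[a] = counts.get(a, 0) + 1
        total += min(1.0, math.sqrt(C / counts[a]))
    return total

def confidence_sum_bound(K, T, C):
    """Upper bound 2*sqrt(C*K*T), valid for every schedule of T pulls over
    K arms: each arm contributes at most sum_{n<=N(a)} sqrt(C/n), which is
    at most 2*sqrt(C*N(a))."""
    return 2.0 * math.sqrt(C * K * T)
```

The bound holds for every schedule, including the adversarial one that concentrates all pulls on a single arm.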
We match the worst-case regret bounds from prior work, and obtain new bounds on logarithmic regret and simple regret. The details are spelled out in Appendix~\ref{sec:extensions}. Another reduction from \term{BwK} to bandits, found in \cite{AdvBwK-focs19}, is very different from ours. It requires a much stronger premise (a regret bound against an adaptive adversary), and only yields worst-case regret bounds. Moreover, it reuses a bandit algorithm as a subroutine, whereas ours reuses a lemma. \subsection{Setting up the results} For $n\in \N$, let $[n] = \{1 \LDOTS n\}$ and $\Delta_n = \{\text{all distributions on $[n]$} \}$. Let $[K]$ and $[d]$ be, resp., the set of all arms and the set of all resources. For each arm $a$, let $r(a)$ and $c_j(a)$ be, resp., the mean reward and mean resource-$j$ consumption, \ie $(r(a); c_1(a) \LDOTS c_d(a)) := \E_{\vec{o}\sim \mD_a}[\vec{o}].$ We sometimes write $ \vec{r} = (r(a):\,a\in [K])$ and $\vec{c}_j = (c_j(a):\,a\in [K])$ as vectors over arms. Given a function $f:[K]\to \R$, we extend it to distributions $\vec{X}$ over arms as $f(\vec{X}) := \E_{a\sim\vec{X}}[f(a)] $. \xhdr{Linear Relaxation.} Following prior work, we consider a linear relaxation: \begin{equation} \label{lp:primalAbstract} \begin{array}{ll@{}ll} \text{maximize} \qquad & \vec{X} \cdot \vec{r} & \text{such that}\\ & \vec{X} \in [0,1]^K,\; \vec{X} \cdot \vec{1} = 1 &\\ \displaystyle \forall j \in [d] \qquad & \vec{X} \cdot \vec{c}_j \leq B/T . \end{array} \end{equation} Here $\vec{X}$ is a distribution over arms, the constraints state that the algorithm does not run out of resources in expectation, and the objective is the expected per-round reward. Let $\OPT_{\term{LP}}$ be the value of this linear program. Then $\OPT_{\term{LP}}\geq \OPTDP/T \geq \OPTFD/T$ \citep{BwK-focs13}.
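For intuition, the relaxation is easy to solve exactly when there is a single resource besides time ($d=2$): the feasible region is a simplex cut by one halfspace, so some optimal solution is supported on at most two arms, and one can enumerate all such vertices. A minimal pure-Python sketch (ours; the arm lists should include the null arm with $r = c = 0$):

```python
def opt_lp(r, c, budget_rate):
    """Value of the LP relaxation for d = 2 (one resource besides time):
        maximize sum_a X(a) r(a)
        s.t.     sum_a X(a) = 1,  sum_a X(a) c(a) <= budget_rate (= B/T),
                 X >= 0.
    The feasible region is a simplex intersected with one halfspace, so
    some optimal vertex has support of size <= 2; enumerate them all."""
    K = len(r)
    best = 0.0
    # single-arm vertices that satisfy the resource constraint
    for a in range(K):
        if c[a] <= budget_rate:
            best = max(best, r[a])
    # two-arm vertices where the resource constraint is tight
    for a in range(K):
        for b in range(K):
            if c[a] == c[b]:
                continue
            x = (budget_rate - c[b]) / (c[a] - c[b])  # weight on arm a
            if 0.0 <= x <= 1.0:
                best = max(best, x * r[a] + (1.0 - x) * r[b])
    return best
```

For example, with arms $r = (1, 0.9)$, $c = (1, 0.1)$ and $B/T = 0.2$, the optimum mixes both non-null arms and strictly beats every single-arm solution.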
The Lagrange function $\mL: \Delta_K \times \R^d_+ \rightarrow \mathbb{R}$ is defined as follows: \begin{align}\label{eq:LagrangianGeneral} \textstyle \mL(\vec{X}, \vec{\lambda}) := r(\vec{X}) + \sum_{j \in [d]} \lambda_j [\; 1-\nicefrac{T}{B}\; c_j(\vec{X}) \;], \end{align} where $\vec{\lambda}$ corresponds to the dual variables. Then (\eg by Theorem D.2.2 in \cite{ben2001lectures}): \begin{align}\label{eq:LagrangeMinMax} \min_{\vec{\lambda}\geq 0} \max_{\vec{X} \in \Delta_K} \mL(\vec{X}, \vec{\lambda}) = \max_{\vec{X} \in \Delta_K} \min_{\vec{\lambda}\geq 0} \mL(\vec{X}, \vec{\lambda}) = \OPT_{\term{LP}}. \end{align} The $\min$ and $\max$ in \eqref{eq:LagrangeMinMax} are attained, so that $(\vec{X}^*,\vec{\lambda}^*)$ is a maximin pair if and only if it is a minimax pair; such a pair is called a \emph{saddle point}. We'll use $\mL(\,\cdot\,,\vec{\lambda}^*)$ to generalize the reward gap to \term{BwK}. \xhdr{Algorithm \term{UcbBwK}.} We analyze an algorithm from \mycite{AgrawalDevanur-ec14}, defined as follows. In the LP \eqref{lp:primalAbstract}, rescale the last constraint, for each resource $j\neq \term{time}$, replacing the right-hand side $\nicefrac{B}{T}$ with $(\nicefrac{B}{T})(1-\eta_{\textsc{lp}})$, where \begin{align}\label{eq:prelims-eta} \eta_{\textsc{lp}} := 3 \cdot (\; \sqrt{\nicefrac{K}{B}\;\logThm} + \nicefrac{K}{B}\; (\logThm)^2 \;). \end{align} We call it the \emph{rescaled LP} (see~\eqref{lp:rescaledLP}). Its value is at least $(1-\eta_{\textsc{lp}})\;\OPT_{\term{LP}}$. At each round $t$, the algorithm forms an ``optimistic'' version of this LP, upper-bounding rewards and lower-bounding consumption: \begin{equation} \label{lp:UCBBwK} \begin{array}{ll@{}ll} \text{maximize} \qquad & \sum_{a\in [K]} X(a)\; r^{+}_t(a) & \text{such that}\\ & \vec{X}\in [0,1]^K,\quad\sum_{a\in [K]} X(a) = 1 \\ \displaystyle \forall j \in [d] \qquad & \sum_{a\in[K]} X(a)\; c^{-}_{j, t}(a) \leq B (1 - \eta_{\textsc{lp}})/T .
\end{array} \end{equation} \term{UcbBwK} solves~\eqref{lp:UCBBwK}, obtains distribution $\vec{X}_t$, and samples an arm $a_t$ independently from $\vec{X}_t$. The algorithm achieves the worst-case optimal regret bound in~\eqref{eq:regret-opt}. The upper/lower confidence bounds $r_t^+(a),\,c^{-}_{j, t}(a)\in[0,1]$ are computed in a particular way specified in Appendix~\ref{app:spec}. What matters to this paper is that they satisfy a high-probability event \begin{align}\label{eq:cleanEvent} 0\leq r_t^+(a) -r(a) \leq \operatorname{Rad}_t(a) \;\text{and}\; 0\leq c_j(a) - c^-_{j,t}(a)\leq \operatorname{Rad}_t(a), \end{align} for some \emph{confidence radius} $\operatorname{Rad}_t(a)$ specified below. This event holds, simultaneously for all arms $a$, resources $j$ and rounds $t$, with probability (say) at least $1-\frac{\logThm}{T^4}$. For $a\neq\term{null}$, we can take \begin{align}\label{eq:MaxConfRadUB} \operatorname{Rad}_t(a) =\min(\;1,\;\sqrt{C_{\mathtt{rad}}/N_t(a)} + C_{\mathtt{rad}}/N_t(a)\;), \end{align} where $C_{\mathtt{rad}} = 3 \cdot \log (KdT)$ and $N_t(a)$ is the number of rounds before $t$ in which arm $a$ has been chosen. There is no uncertainty on the time resource and the null arm, so we define $c_{\term{time},\,t}^{-}(\cdot) = B/T$ and $\operatorname{Rad}_t(\term{null}) = r^+_t(\term{null}) = c_{j,t}^{-}(\term{null}) = 0$ for all resources $j\neq \term{time}$.
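The confidence radius \eqref{eq:MaxConfRadUB} and the optimistic estimates can be sketched as follows (a minimal illustration of these definitions; the function names are ours, and the clipping to $[0,1]$ reflects that all outcomes lie in $[0,1]$):

```python
import math

def conf_radius(n_pulls, K, d, T):
    """Confidence radius Rad_t(a) = min(1, sqrt(C/N) + C/N), with
    C = C_rad = 3 * log(K * d * T) and N = N_t(a) pulls of the arm so far."""
    C = 3.0 * math.log(K * d * T)
    if n_pulls == 0:
        return 1.0                      # no data yet: vacuous radius
    return min(1.0, math.sqrt(C / n_pulls) + C / n_pulls)

def optimistic_bounds(r_hat, c_hat, n_pulls, K, d, T):
    """UCB on the mean reward and LCB on the mean consumption,
    clipped to [0, 1] as in the optimistic LP."""
    rad = conf_radius(n_pulls, K, d, T)
    r_plus = min(1.0, r_hat + rad)
    c_minus = max(0.0, c_hat - rad)
    return r_plus, c_minus
```

The radius is vacuous (equal to $1$) after few pulls and shrinks roughly as $\sqrt{C_{\mathtt{rad}}/N}$ once the arm has been pulled many times.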
1,314,259,995,206
arxiv
\subsection{\textsc{lingauss} datasets} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__chain_fit.png} \caption{Full results for FIT on the \textsc{lingauss} datasets.} \label{fig:result_chain_fit} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__chain_rcit.png} \caption{Full results for RCIT on the \textsc{lingauss} datasets.} \label{fig:result_chain_rcit} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__chain_chsic.png} \caption{Full results for CHSIC on the \textsc{lingauss} datasets.} \label{fig:result_chain_chsic} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__chain_kcit.png} \caption{Full results for KCIT on the \textsc{lingauss} datasets.} \label{fig:result_chain_kcit} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__chain_kcipt.png} \caption{Full results for KCIPT on the \textsc{lingauss} datasets.} \label{fig:result_chain_kcipt} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__chain_cci.png} \caption{Full results for CCI on the \textsc{lingauss} datasets.} \label{fig:result_chain_cci} \end{figure} \FloatBarrier \subsection{\textsc{chaos} datasets} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__chaos_fit.png} \caption{Full results for FIT on the \textsc{chaos} datasets.} \label{fig:result_chaos_fit} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__chaos_rcit.png} \caption{Full results for RCIT on the \textsc{chaos} datasets.} \label{fig:result_chaos_rcit} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__chaos_chsic.png} \caption{Full results for CHSIC on the \textsc{chaos} datasets.} \label{fig:result_chaos_chsic} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__chaos_kcit.png} \caption{Full results for KCIT on the \textsc{chaos} datasets.} \label{fig:result_chaos_kcit} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__chaos_kcipt.png} \caption{Full results for KCIPT on the \textsc{chaos} datasets.} \label{fig:result_chaos_kcipt} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__chaos_cci.png} \caption{Full results for CCI on the \textsc{chaos} datasets.} \label{fig:result_chaos_cci} \end{figure} \FloatBarrier \subsection{\textsc{hybrid} datasets} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__discrete_fit.png} \caption{Full results for FIT on the \textsc{hybrid} datasets.} \label{fig:result_discrete_fit}
\end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__discrete_rcit.png} \caption{Full results for RCIT on the \textsc{hybrid} datasets.} \label{fig:result_discrete_rcit} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__discrete_chsic.png} \caption{Full results for CHSIC on the \textsc{hybrid} datasets.} \label{fig:result_discrete_chsic} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__discrete_kcit.png} \caption{Full results for KCIT on the \textsc{hybrid} datasets.} \label{fig:result_discrete_kcit} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__discrete_kcipt.png} \caption{Full results for KCIPT on the \textsc{hybrid} datasets.} \label{fig:result_discrete_kcipt} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__discrete_cci.png} \caption{Full results for CCI on the \textsc{hybrid} datasets.} \label{fig:result_discrete_cci} \end{figure} \FloatBarrier \subsection{\textsc{pnl} datasets} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__pnl_fit.png} \caption{Full results for FIT on the \textsc{pnl} datasets.} \label{fig:result_pnl_fit} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__pnl_rcit.png} \caption{Full results for RCIT on the \textsc{pnl} datasets.} \label{fig:result_pnl_rcit} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__pnl_chsic.png} \caption{Full results for CHSIC on the \textsc{pnl} datasets.} \label{fig:result_pnl_chsic} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__pnl_kcit.png} \caption{Full results for KCIT on the \textsc{pnl} datasets.} \label{fig:result_pnl_kcit} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__pnl_kcipt.png} \caption{Full results for KCIPT on the \textsc{pnl} datasets.} \label{fig:result_pnl_kcipt} \end{figure} \begin{figure}[!h] \includegraphics[width=1.\textwidth]{figures/fullres__pnl_cci.png} \caption{Full results for CCI on the \textsc{pnl} datasets.} \label{fig:result_pnl_cci} \end{figure} \section{Introduction} Two random variables $X$ and $Y$ are conditionally independent given a third variable $Z$ if and only if $P(X, Y \mid Z) = P(X \mid Z) P(Y \mid Z)$. We denote this relationship by $X \independent Y \mid Z$, and its negation, the case of conditional dependence, by $X \notindependent Y \mid Z$.
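This factorization can be checked mechanically for a finite discrete distribution. The sketch below is entirely our illustration: it verifies $P(x, y \mid z) = P(x \mid z)\, P(y \mid z)$ on every cell of a joint distribution given as a table.

```python
def is_ci(p, tol=1e-12):
    """Check X _||_ Y | Z for a finite discrete joint distribution, given
    as a dict {(x, y, z): probability} (missing cells mean probability 0),
    by testing P(x, y | z) = P(x | z) * P(y | z) on the full grid."""
    xs = {k[0] for k in p}
    ys = {k[1] for k in p}
    zs = {k[2] for k in p}
    q = lambda x, y, z: p.get((x, y, z), 0.0)
    for z in zs:
        pz = sum(q(x, y, z) for x in xs for y in ys)   # P(Z = z)
        for x in xs:
            px = sum(q(x, y, z) for y in ys) / pz       # P(x | z)
            for y in ys:
                py = sum(q(xx, y, z) for xx in xs) / pz  # P(y | z)
                if abs(q(x, y, z) / pz - px * py) > tol:
                    return False
    return True
```

A joint built as $P(z)\,P(x \mid z)\,P(y \mid z)$ passes the check by construction, while a within-stratum XOR dependence fails it.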
This article develops and evaluates the Fast (conditional) Independence Test (FIT), a nonparametric conditional or unconditional independence test. Given a finite sample from the joint distribution $P(X, Y, Z)$, FIT returns the p-value under the null hypothesis that $X\independent Y \mid Z$. FIT applies to datasets with large sample sizes and with scalar- or vector-valued variables, and returns its result quickly. Extensive empirical comparison with existing alternatives shows that FIT is currently the only conditional independence test to achieve this goal. \subsection{Motivation} \label{sec:motivation} Independence tests are ubiquitous in scientific inference. They are the main workhorse of classical hypothesis testing (see~\citet{fisher1992}, Chapter 21), which underlies most experimental and many observational techniques of data analysis. A conditional independence test (CIT) extends this idea by testing for independence between two variables given the value of a third, or more generally, given the values of a set of further variables~\citep{dawid1979}. Probabilistic dependence -- conditional or not -- is often taken as an indicator of informational exchange (see e.g.~\citet{cover2012}, Chapter 2), causal connection~\citep{dawid1979,spirtes_causation_2000}, or simply as a tool for prediction and diagnosis~\citep{koller1996}. Consequently, knowledge of the (conditional) independence relations among a set of variables provides some of the most basic indicators for scientific relations of interest. Our motivation comes from the challenge of causal discovery. Many extant causal discovery algorithms use conditional independence tests to infer the underlying causal structure over a set of variables from observational data~\citep{pearl_causality_2000,spirtes_causation_2000,hoyer2009}. The most general of these methods do not make any specific assumptions about the functional form or parameterization of the underlying causal system.
Discovery of independence structure from data, rather than estimation of any specific parameters, allows these methods to establish causal structure. However, such generality can only be maintained if the CITs used to determine the independence structure in the first place are themselves non-parametric. That is, the CITs have to check for general dependence relations among the variables, and not just, say, for a non-zero (partial) correlation. A further motivation derives from the need for CITs that are both fast to compute and flexible in the variables they can handle. Generally, causal discovery algorithms have to perform a number of tests that is at least polynomial, if not exponential, in the number of variables~\citep{spirtes_causation_2000}. For genetic datasets where thousands of genes are measured, or neuroscience data containing tens of thousands of voxels, this implies millions of independence tests. Moreover, in social network data, for example, the variables may be categorical or continuous, and scalar- or vector-valued. These facts indicate a clear need for non-parametric tests that are fast and flexible with regard to the nature of the variables being tested. While our motivation derives from the domain of causal discovery, the test is completely domain-general and can be applied to any area in which (conditional) independence tests are used. \subsection{Previous work} \label{sec:alternatives} For continuous variables, extant non-parametric CITs have largely focused on kernel methods~\citep{scholkopf2002}. The methods differ in how the null distribution is estimated, which distance measure is used in the reproducing kernel Hilbert space (RKHS), or which test statistic is applied. Here we consider five methods we are aware of for comparison: The Conditional Hilbert-Schmidt Independence Criterion (CHSIC) \citep{fukumizu2008} maps the data into an RKHS.
It determines independence using the normalized cross-covariance operator, which can be thought of as an extension of standard correlations to higher order moments. In contrast, the Kernel-based Conditional Independence test (KCIT) \citep{zhang_kernel-based_2012} derives a test statistic based on the traces of the kernel matrices and approximates the null distribution using a Gamma distribution. This avoids the permutations required for CHSIC, but comes at the expense of significant matrix operations. The Kernel Conditional Independence Permutation test (KCIPT) \citep{doran_permutation-based_2014}, in many ways a variant of CHSIC, replaces the random sampling of permutations for the null distribution with a specifically learned permutation that satisfies, among other criteria, that it is representative of conditional independence. KCIPT also replaces the normalized cross-covariance operator with the maximum mean discrepancy (MMD) test statistic developed by \citet{gretton_kernel_2012}. Kernel-based tests do not scale well to large sample sizes due to the matrix operations required to kernelize the data and embed it in the RKHS. To address this limitation, the Randomized Conditional Independence test (RCIT)~\citep{strobl_approximate_2017} uses random Fourier features to approximate the kernel matrices and thereby speed up the matrix operations. For similar reasons, the Conditional Correlation Independence test (CCI) \citep{ramsey_scalable_2014} determines independence of $X$ and $Y$ by only checking whether $\mathrm{cov}(f(X), g(Y)) = 0$ for different choices of $f, g$ taken from an (in practice, truncated) set of basis functions. For conditional independence of $X \independent Y \mid Z$, the same test is performed on the non-parametric residuals of $X$ and $Y$ each regressed on $Z$ using a uniform kernel. In the case of categorical data, two variables are independent conditional on a third categorical variable $Z$ just in case they are independent given every specific categorical value $Z=z$.
Consequently, the simplest way to handle categorical data is to repeat whatever unconditional test is available for each subset of the data corresponding to a fixed value of $Z=z$. Unless some further structure among the categories (such as a particular order) holds important information about the probabilistic relations between variables, there is nothing further for a conditional independence test to exploit. As a result, tests conditioning on categorical variables are in the general case no different from unconditional tests, and inevitably sample-intensive. However, a particular -- and common -- challenge arises when the conditioning variable $Z$ is continuous and at least one of $X$ or $Y$ is categorical. In this case we have a ``hybrid'' situation. The methods mentioned above do not apply due to the categorical nature of $X$ or $Y$, and for a continuous conditioning variable $Z$, by definition one cannot explicitly check conditional independence by checking independence for each value of $Z$. The development of the above tests has enabled the construction of genuinely non-parametric methods of discovery, but has left several questions of practical importance open. First, it remains unclear how to choose between the tests, given that no systematic comparison is available. Second, the evaluation of tests has focused on a small set of specialized datasets. Third, there has been no evaluation of how these tests perform for high-dimensional variables and large sample sizes. In our evaluation in Sec.~\ref{sec:results} we have attempted to faithfully implement all the above tests\footnote{For all but the CCI test, we wrote Python wrappers around existing Matlab or R implementations written by the original authors. We implemented CCI from scratch in Numpy.} and have systematically evaluated them on all the previous datasets, in each case reporting the actual behavior of the resulting $p$-values.
In addition, we vastly extended the range of parameters explored in generating some of the existing datasets, and added datasets that explore the hybrid case with categorical and continuous variables described above. \subsection{Contributions} We address the challenge of testing for (conditional) independence when the variables are high-dimensional and sample sizes are high, but when the relation among the variables cannot be well described by a parametric model. Moreover, in order to enable applications for which it is necessary to run tens of thousands of conditional independence tests -- for example, establishing the causal graph over hundreds of variables -- the test has to run in a short amount of time. To our knowledge, none of the existing CITs applies with reasonable runtime in such situations. The main contributions of this article are as follows: \begin{compactenum} \item The Fast Independence Test (FIT), which can process hundred-dimensional conditional independence queries with $10^5$ samples in less than 60s. \item An extensive evaluation and comparison of FIT and alternatives (CHSIC, KCIT, KCIPT, RCIT, CCI) on a wide range of datasets. \item A parallelized Python implementation of FIT, easily installable through the Python Package Index\footnote{To install FIT on Linux, type \texttt{pip install fcit} in the terminal window.}. \end{compactenum} \section{Fast (conditional) Independence Test (FIT)} The intuition behind FIT is that if $X \notindependent Y \mid Z$, then prediction of $Y$ using both $X$ and $Z$ as covariates should be more accurate than prediction of $Y$ when only $Z$ is used as the covariate. On the other hand, if $X \independent Y \mid Z$, then the accuracy of the prediction of $Y$ should not change whether just $Z$ or both $X$ \emph{and} $Z$ are used as covariates.
While this is not always true (see Sec.~\ref{sec:discussion_failure}), it is a relatively weak assumption to make compared to, for example, assuming a specific parametric form of the involved distributions or linearity of the functional relationships. Alg.~\ref{alg:fit} contains pseudocode for the implementation of FIT. Our implementation uses decision tree regression (DTR,~\citet{breiman_classification_1984})\footnote{DTR is a fast multi-output regression algorithm. It splits the input space in two on each decision tree node until reaching a leaf. A leaf assigns a constant value to each output dimension. During training, the \texttt{min\_samples\_split} hyperparameter turns a node into a leaf when the number of samples in the node falls below the threshold.} to predict $Y$ using both $X, Z$, and also using $Z$ only. The rationale behind this choice and the alternatives are discussed in Sec.~\ref{sec:discussion_ml}. We measure the accuracy of the prediction in terms of the mean squared error (MSE). Central to our approach is the simplifying assumption that $X \independent Y \mid Z$ if and only if the MSE of the algorithm trained using both $X$ and $Z$ is not smaller than the MSE of the algorithm trained using $Z$ only. \subsection{Obtaining the p-value} Our implementation runs DTR \texttt{n\_perm} times to learn both the function $X, Z \mapsto Y$ as well as $Z \mapsto Y$. The train/test (or in-sample/out-of-sample) split is resampled for each of the \texttt{n\_perm} training repetitions. We store the \texttt{n\_perm} MSEs resulting from regressing $Y$ on $X, Z$ in a list \texttt{mses\_x}, and the \texttt{n\_perm} MSEs resulting from regressing $Y$ on $Z$ in a list \texttt{mses\_nox}. The remaining challenge is to determine the p-value of the null hypothesis that the first list contains on average values equal to or larger than the second list.
To do this, we run a one-tailed t-test~\citep{student1908}, with the null hypothesis that the corresponding values in the two arrays are on average equal. A small p-value rejects the null hypothesis, meaning that the \texttt{mses\_x} are significantly smaller than the \texttt{mses\_nox}, i.e.\ that $X \notindependent Y \mid Z$. As an alternative to using the t-test (which assumes the distribution of MSEs under the null hypothesis is Gaussian), we experimented with using the bootstrap to estimate the null distribution~\citep{efron1992}. Since the results were nearly identical across our evaluation tasks, for simplicity of presentation we use the t-test in the final method. \subsection{Speed of the Implementation} Two details of the implementation are worth mentioning: \begin{compactenum} \item Both the cross-validation procedure (Lines~\ref{lin:cv1},~\ref{lin:cv2}) and the training loop (Lines~\ref{lin:tr1}--\ref{lin:z3}) are parallelized to automatically use all available CPUs. This provides a multiplicative speedup for larger datasets. \item Using our algorithm as an unconditional independence test (testing whether $X \independent Y$) requires only a simple modification, which our implementation uses automatically when $Z$ is not given. Instead of training using only $Z$ as input (Lines~\ref{lin:cv2} and~\ref{lin:z2},~\ref{lin:z3}) we train using $X$ permuted randomly across the sampling dimension. \end{compactenum} Asymptotically the algorithm's speed scales as $\mathcal{O}(\mathtt{n\_samples}^2 \times \log{(\mathtt{n\_samples})} \times \mathtt{n\_features} \times \frac{1}{\mathtt{n\_cpus}})$. Sec.~\ref{sec:results} shows actual runtimes on a machine with 8 logical CPUs (Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz). The runtimes are bounded below by about 1s due to parallelization overhead, and grow to about 100s for a 1000-dimensional dataset with 100K samples.
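To make the regression-comparison logic concrete before presenting the pseudocode, the following self-contained toy sketch (ours, with deliberately simplified ingredients: a $k$-nearest-neighbour regressor stands in for the decision tree, and the mean MSEs are compared directly rather than via the t-test) mimics the core of Alg.~\ref{alg:fit}:

```python
import random

random.seed(1)

def knn_predict(train, query, k=5):
    # k-nearest-neighbour regression: average y over the k closest
    # training points (a slow, simple stand-in for the decision tree).
    nearest = sorted(
        train,
        key=lambda fy: sum((a - b) ** 2 for a, b in zip(fy[0], query)))
    return sum(y for _, y in nearest[:k]) / k

def mse_lists(data, n_perm=8, frac_test=0.2):
    # data: list of (x, y, z) triples. Returns the MSEs of predicting
    # y from (x, z) and from z alone, over n_perm random splits.
    mses_x, mses_nox = [], []
    n_test = int(frac_test * len(data))
    for _ in range(n_perm):
        random.shuffle(data)
        test, train = data[:n_test], data[n_test:]
        with_x = [((x, z), y) for x, y, z in train]
        no_x = [((z,), y) for x, y, z in train]
        mses_x.append(sum(
            (knn_predict(with_x, (x, z)) - y) ** 2
            for x, y, z in test) / n_test)
        mses_nox.append(sum(
            (knn_predict(no_x, (z,)) - y) ** 2
            for x, y, z in test) / n_test)
    return mses_x, mses_nox

# Chain Z -> X -> Y, so X is informative about Y even given Z.
data = []
for _ in range(400):
    z = random.gauss(0, 1)
    x = z + random.gauss(0, 1)
    y = x + random.gauss(0, 0.1)
    data.append((x, y, z))

mses_x, mses_nox = mse_lists(data)
mean_x = sum(mses_x) / len(mses_x)
mean_nox = sum(mses_nox) / len(mses_nox)
assert mean_x < mean_nox  # adding X improves prediction: X dep. on Y | Z
```

On the chain $Z \rightarrow X \rightarrow Y$ used here, adding $X$ as a covariate clearly lowers the out-of-sample MSE; converting the gap between the two MSE lists into a $p$-value is precisely the job of the t-test step in Alg.~\ref{alg:fit}.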
\begin{algorithm} \caption{Fast (conditional) Independence Test} \label{alg:fit}
\begin{lstlisting}[language=python,escapechar=&]
def cross_validate(covariate, regressand):
    # Find the best decision tree hyperparameters for
    # regressing `regressand` on `covariate`.
    # Return a trainable decision tree with the best hyperparameters.

def fit_test(x, y, z, n_perm=8, frac_test=.1):
    # Store total sample size and test set size.
    n_samples = x.shape[0]
    n_test = floor(frac_test * n_samples)

    # Find the best decision tree hyperparameters for learning y = f(x, z).&\label{lin:cv1}&
    best_tree_x = cross_validate(concat(x, z), y)
    # Find the best decision tree hyperparameters for learning y = f(z).
    best_tree_nox = cross_validate(z, y)&\label{lin:cv2}&

    # Obtain `n_perm` MSEs on random train-test splits.
    mses_x = list()
    mses_nox = list()
    # In our implementation, this loop is parallelized.&\label{lin:tr1}&
    for perm_id in range(n_perm):
        perm_ids = random.permutation(n_samples)
        x_test, x_train = x[perm_ids][:n_test], x[perm_ids][n_test:]
        y_test, y_train = y[perm_ids][:n_test], y[perm_ids][n_test:]
        z_test, z_train = z[perm_ids][:n_test], z[perm_ids][n_test:]

        # Train a decision tree to predict y using x and z.
        best_tree_x.train(concat(x_train, z_train), y_train)
        mses_x.append(
            mse(best_tree_x.predict(concat(x_test, z_test)), y_test))

        # Train a decision tree to predict y using only z.
        best_tree_nox.train(z_train, y_train)&\label{lin:z2}&
        mses_nox.append(mse(best_tree_nox.predict(z_test), y_test))&\label{lin:z3}&

    # Obtain the one-tailed p-value for the null hypothesis that
    # on average, mses_nox is the same as mses_x.
    t, pval = ttest_1samp(mses_nox - mses_x, 0)
    if t < 0:
        return 1 - pval / 2.
    else:
        return pval / 2.
\end{lstlisting}
\end{algorithm} \section{Evaluating Statistical Test Performance} An ideal nonparametric CIT should be fast, accurate, and generally applicable.
To provide a thorough analysis of these factors we found it necessary to deviate from previous work in the evaluation method, as described in this section. We discuss alternative criteria and the problems we found with them in Sec.~\ref{sec:discussion_eval}. Our evaluation criteria are as follows: \begin{compactenum} \item[\textbf{Power}:] The test should have high power. Whenever $X\notindependent Y \mid Z$, the p-value should be small and should tend to 0 as the sample size grows. \item[\textbf{Size}:] The size of the test should be equal to \emph{or lower than} its level $\alpha$. That is, whenever $X \independent Y \mid Z$, the p-value of the test should not be small for any sample size. Note that in some applications it might be important for the size of the test to be exactly equal to its level (neither lower nor higher). We discuss this further in Sec.~\ref{sec:discussion_eval}. \item[\textbf{Speed}:] The test should be fast. This is motivated by the use cases discussed in Sec.~\ref{sec:motivation}. \item[\textbf{Breadth of Application}:] The test should remain fast for large sample sizes and high-dimensional variables. It should have high power against any type of dependence, and low size for any distribution. \end{compactenum} \subsection{Evaluation method} In order to understand the speed, accuracy and breadth of FIT and its competitors, we first chose four settings, described in Sec.~\ref{sec:datasets}. These four settings cover a range of possible data: low- and high-dimensional data, simple and complex functional relationships between the variables, and continuous and mixed discrete-continuous systems\footnote{We did not consider systems where $X, Y, Z$ are all categorical, as this case is easily solved without special conditional independence tests, as discussed in Sec.~\ref{sec:alternatives}.} -- Table~\ref{tbl:datasets} summarizes the characteristics of each setting. We designed each setting in a way that enables us to vary its dimensionality or complexity.
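Operationally, the \textbf{Power} and \textbf{Size} criteria reduce to counting rejections at a chosen level $\alpha$. A minimal sketch (our own helper, with hypothetical names; not part of any test under study) of the error computation applied to a batch of $p$-values:

```python
def empirical_errors(pvals_indep, pvals_dep, alpha=0.05):
    # pvals_indep: p-values computed on data where X indep. Y | Z holds;
    # rejecting there (p < alpha) is a Type I error, and the rejection
    # rate estimates the size of the test.
    type1 = sum(p < alpha for p in pvals_indep) / len(pvals_indep)
    # pvals_dep: p-values on data where X dep. on Y | Z; failing to
    # reject (p >= alpha) is a Type II error, and 1 - type2 estimates
    # the power of the test.
    type2 = sum(p >= alpha for p in pvals_dep) / len(pvals_dep)
    return type1, type2

t1, t2 = empirical_errors([0.4, 0.9, 0.03, 0.7], [0.001, 0.2, 0.01, 0.04])
# t1 = 0.25 (one spurious rejection), t2 = 0.25 (one missed dependence)
```

Fig.~\ref{fig:minires} reports errors of exactly this kind at $\alpha = .05$.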
For convenience of plotting the results, we chose nine instantiations of the dimensionality/complexity parameters in each setting, and for each such instantiation, we generated a pair of datasets: one version where $X\independent Y \mid Z$ and one where $X\notindependent Y \mid Z$. We applied each candidate test described in Sec.~\ref{sec:alternatives} to each dataset, no matter the dimensionality or complexity. To further understand the change in speed and performance of the tests as the number of samples grows, we ran each test on each dataset for \texttt{n\_samples} (the number of samples) varying between $10^2$ and $10^5$ logarithmically. For each \texttt{n\_samples} we plot the $p$-value corresponding to the dependent and independent versions of the data, as well as the runtime of each test on each of the datasets and each \texttt{n\_samples}. This results in plots such as those shown in Figs.~\ref{fig:specresults_fit}--\ref{fig:specresults_cci}. These figures provide a very detailed picture of the tests' performance, which we discuss in Sec.~\ref{sec:results}. In addition, Fig.~\ref{fig:minires} reduces the results to answer the question ``If I give each test at most 1 minute, what is the best Type I and Type II error it can achieve on each dataset?''. This question follows naturally from our need to run large numbers of CITs for causal discovery (Sec.~\ref{sec:motivation}). \subsection{Datasets} \label{sec:datasets} To evaluate FIT and compare it to alternatives, we used four settings to generate synthetic datasets, \textsc{lingauss}, \textsc{chaos}, \textsc{hybrid} and \textsc{pnl}, that cover a broad variety of use cases -- see Table~\ref{tbl:datasets}. We here provide descriptions of these settings in some detail to give a better sense of what the data can look like.
Figures~\ref{fig:dataset_lingauss}--\ref{fig:dataset_pnl} show \emph{low dimensional examples} from these settings in order to provide at least some illustration of their difficulty and diversity. \begin{table} \centering \begin{tabular}{lllll}\toprule dataset&\textsc{lingauss}&\textsc{chaos}&\textsc{hybrid}&\textsc{pnl}\\ \midrule dimensionality&3-768&10&10-2080&3-258\\ type&x, y, z cont& x, y, z cont& x, y categorical; z cont&x, y, z cont\\ complexity&low&med-high&low-high&med\\ \bottomrule \vspace{.05in} \end{tabular} \caption{Summary of evaluation settings. Dimensionality is summed over x, y and z. Type differentiates between continuous (cont) and categorical variables. Complexity is a non-rigorous quantification of the complexity of the conditional distributions that generate the data. See Sec.~\ref{sec:datasets} for details.} \label{tbl:datasets} \end{table} \paragraph{\textsc{lingauss}:} Any reasonable statistical independence test should work on linear-Gaussian models. Surprisingly, previous work was not evaluated on this basic case. For the $X\independent Y \mid Z$ version of the data, given \texttt{dim} (the desired data dimensionality), we sample \texttt{n\_samples} samples from the linear-Gaussian graphical model $X \leftarrow Z \rightarrow Y$: \begin{compactenum} \item Let $Z$ be a \texttt{dim}-dimensional independent standard Gaussian, i.e.\ $Z \sim N(\mathbf{0}, \mathbf{I})$, where $\mathbf{I}$ is a $\texttt{dim}\times\texttt{dim}$ identity matrix. \item Sample \texttt{dim}$\times$\texttt{dim} independent standard normal variables and arrange them into a \texttt{dim}$\times$\texttt{dim} matrix $A$ (for the $ZX$ ``edge coefficient'' matrix). \item Sample \texttt{dim}$\times$\texttt{dim} independent standard normal variables and arrange them into a \texttt{dim}$\times$\texttt{dim} matrix $B$ (for the $ZY$ ``edge coefficient'' matrix). \item Let $X = AZ + $ a \texttt{dim}-dimensional independent standard Gaussian.
\item Let $Y = BZ + $ a \texttt{dim}-dimensional independent standard Gaussian. \end{compactenum} \begin{figure} \centering \includegraphics[width=.8\textwidth]{figures/dataset_lingauss.png} \caption{An example \textsc{lingauss} dataset. For both the dependent and independent (1-dimensional) instantiation of the dataset we created $10^5$ random samples, and plotted $x$ vs.\ $y$ that correspond to the cluster of $z$'s between the 50th and 51st percentile of $z$-values.} \label{fig:dataset_lingauss} \end{figure} For the $X\notindependent Y \mid Z$ case, given \texttt{dim} (the desired data dimensionality), we sample \texttt{n\_samples} samples from the linear-Gaussian graphical model $Z \rightarrow X \rightarrow Y$: \begin{compactenum} \item Let $Z$ again be a \texttt{dim}-dimensional independent standard Gaussian. \item Sample \texttt{dim}$\times$\texttt{dim} independent standard normal variables and arrange them into a \texttt{dim}$\times$\texttt{dim} matrix $A$ (for the $ZX$ ``edge coefficient'' matrix). \item Sample \texttt{dim}$\times$\texttt{dim} independent standard normal variables and arrange them into a \texttt{dim}$\times$\texttt{dim} matrix $B$ (for the $XY$ ``edge coefficient'' matrix). \item Let $X = AZ + $ a \texttt{dim}-dimensional independent standard Gaussian. \item Let $Y = BX + $ a \texttt{dim}-dimensional independent standard Gaussian. \end{compactenum} Note that we deliberately did \emph{not} create the (conditionally) dependent dataset by adding an extra ``edge'' between $X$ and $Y$ to the conditionally independent ``common cause model'' in order to ensure that the dimensionalities of the relation between $X$ and $Y$ conditional on $Z$ remain the same between the independent and dependent case. We evaluated the CITs using \textsc{lingauss} with \texttt{dim} = 1, 2, 4, \dots, 256. \paragraph{\textsc{chaos}:} \citet{doran_permutation-based_2014} used this dataset to evaluate their CIT.
The data is sampled from a highly nonlinear, chaotic dynamical system (see Fig.~\ref{fig:dataset_chaos}). The dimensionality of the data is fixed: $X$ and $Y$ both have \texttt{dim} = 4 and $Z$ has \texttt{dim} = 2. A scalar parameter $\alpha\in (0, 1)$ controls the complexity of the dataset. Let $A\in\mathbb{R}^2, B\in\mathbb{R}^2$ follow the equations (through time index $t$): \begin{align*} A(t, 0) &= 1.4 - A(t-1, 0)^2 + .3 A(t-1, 1)\\ A(t, 1) &= A(t-1, 0)\\ B(t, 0) &= 1.4 - (\alpha A(t-1, 0) B(t-1, 0) + (1-\alpha) B(t-1, 0)^2) + .1 B(t-1, 1)\\ B(t, 1) &= B(t-1, 0) \end{align*} To create the $X\independent Y \mid Z$ data, take \begin{align*} X(t) &= A(t+1),\\ Y(t) &= B(t),\\ Z(t) &= A(t). \end{align*} To create the $X\notindependent Y \mid Z$ data, take \begin{align*} X(t) &= B(t+1),\\ Y(t) &= A(t),\\ Z(t) &= B(t). \end{align*} Finally, to make the task more difficult, append two independent Gaussian variables distributed as $N(0, 0.5)$ as the third and fourth dimensions of $X$ and $Y$. To make the samples independent, we generated $10^6$ samples and chose a random subset of the desired size to form our datasets. Fig.~\ref{fig:dataset_chaos}a,b illustrate the choices of $X,Y,Z$ in the generative model. We evaluated the CITs using \textsc{chaos} with $\alpha$ = .01, .04, .16, .32, .5, .68, .84, .96, .99. \begin{figure} \centering \includegraphics[width=.8\textwidth]{figures/chaos_figure.png} \caption{Example \textsc{chaos} dataset when $\alpha = .5$. a) The generative model for the conditionally independent version of the dataset. Only three timesteps are shown and the third and fourth dimensions of $X$ and $Y$ are omitted, since they are independent noise variables. b) The generative model for the conditionally dependent version of the dataset. c) For the independent version of the dataset we created $10^5$ random samples.
We then clustered the z's into clusters of size roughly 1000 using K-Means, and plotted $X_1$ vs.\ $Y_1$ corresponding to a randomly chosen cluster of z's (since $Z$ is multivariate continuous, the distribution $P(X, Y \mid Z \in \text{cluster of z values})$ is an approximation of $P(X, Y \mid z)$ for one value of $z$.) d) Same as c) but for the dependent data.} \label{fig:dataset_chaos} \end{figure} \paragraph{\textsc{hybrid}:} We created this hybrid continuous-categorical dataset as we have not seen any previous methods evaluated on hybrid data (see Sec.~\ref{sec:motivation}). We deliberately constructed \emph{categorical} variables, as opposed to (e.g.\ numerical) discrete variables, whose values still have some order. Given a count parameter $\gamma$, each sample for a \textsc{hybrid}($\gamma$)-dataset, say $(x_i, y_i, z_i)$, was constructed using the following procedure: \begin{compactenum} \item Let $z_i$ be a \texttt{dim}-dimensional continuous vector sampled from the uniform Dirichlet distribution with $Z\sim \text{Dirichlet}(\mathtt{dim})$. \item Draw two independent samples $S_X$ and $S_Y$ from the multinomial distribution with probability parameter $z_i$ and count parameter $\gamma\in\mathbb{N}$. \item Since we want $x_i$ and $y_i$ to represent multi-dimensional categorical variables, convert each dimension of $S_X$ and $S_Y$ to the one-hot encoding. For example, take \texttt{dim}=3 and $\gamma = 2$. Suppose $z_i = [.1, .6, .3]$ for a particular sample. Then $S_X$ might be $(0,2,0)$, corresponding to zero draws in the first and third bucket and two draws in the second bucket of the sample from the $z_i$-multinomial. Then the one-hot encoding of $S_X$ makes $x_i = [0, 0, 0, 0, 0, 1, 0, 0, 0]$. The one-hot encoding enforces the discrete metric on the $X$-space. \item (If creating the dependent version of the dataset), toss an unbiased coin. If heads, then overwrite the value of $y_i$ with the value of $x_i$.
\end{compactenum} \begin{figure} \centering \includegraphics[width=.8\textwidth]{figures/dataset_discrete.png} \caption{Example \textsc{hybrid} dataset with $\gamma=2$ and \texttt{dim}$=32$. For both the dependent and independent version of the dataset we created $10^5$ random samples, and picked the $x, y$ values that correspond to $z\in(.49, .51)$. The plot shows the heatmap (more saturated color = more samples) of the number of samples falling into a given $x, y$ bin.} \label{fig:dataset_hybrid} \end{figure} Since $Z$ is a sufficient statistic of the distributions of $X$ and $Y$, we have $X \independent Y \mid Z$ in Steps 1--3, and Step 4 is used to create an obvious dependency (which nevertheless turns out to be non-trivial to detect by many of the algorithms, see Sec.~\ref{sec:results}). We evaluated the CITs using \textsc{hybrid} with $(\gamma, \texttt{dim})$ = (2, 2), (2, 8), (2, 32), (8, 2), (8, 8), (8, 32), (32, 2), (32, 8), (32, 32). \paragraph{\textsc{pnl}:} Various versions of this dataset have been used before in~\citep{zhang_kernel-based_2012,doran_permutation-based_2014,strobl_approximate_2017}. We used the version from the most recent article~\citep{strobl_approximate_2017}. Since this version offers only scalar $X, Y, Z$, we extended it to the multi-dimensional case in a straightforward way. To create the generating model for the \texttt{dim}-dimensional \textsc{pnl} dataset: \begin{compactenum} \item Let $A$ be a \texttt{dim}$\times$\texttt{dim}-dimensional matrix of independent standard normal scalars. \item Let $Z$ be a \texttt{dim}-dimensional Gaussian with covariance $A A^T$ and mean 0. \item Let $X$ be a random function of the first coordinate of $Z$ (see below). \item Let $Y$ be a (re-sampled) random function of the first coordinate of $Z$ (see below). \item (To create the dependent version of the dataset,) add a Gaussian variable with mean zero and standard deviation .5 to both $X$ and $Y$.
\end{compactenum} \begin{figure} \centering \includegraphics[width=.8\textwidth]{figures/dataset_pnl.png} \caption{Example \textsc{pnl} dataset. For both the dependent and independent (one-dimensional) version of the dataset we created $10^5$ random samples, and plotted $x$ vs.\ $y$ that correspond to the cluster of $z$'s between the 50th and 51st percentile of $z$-values.} \label{fig:dataset_pnl} \end{figure} This procedure uses ``random functions''. To obtain such functions, following~\citet{strobl_approximate_2017}, we sample with equal probability from the set of functions $\{v\mapsto v, v\mapsto v^2, v\mapsto v^3, v\mapsto \tanh{v}, v\mapsto \exp{(-|v|)}\}$. Note that in this dataset, $X$ and $Y$ are always one-dimensional. We evaluated the CITs using \textsc{pnl} with \texttt{dim} = 1, 2, 4, \dots, 256. \section{Empirical Results and Comparison with Previous Work} \label{sec:results} For each of the 72 dataset versions (4 settings times 9 parameter instantiations, for both the dependent and independent version), we obtain $p$-values for FIT and for each of the five conditional independence tests described in Sec.~\ref{sec:alternatives}. In each case, we varied the number of samples between $10^2$ and $10^5$ logarithmically. In addition, we stopped any test that exceeded the time threshold of about 100s. In this section, for each CIT we picked the results plot that best summarizes the broad behavior of the test across the datasets. In addition, Figure~\ref{fig:minires} provides a summary view of the full results. Due to their sheer volume we leave the detailed full results to Appendix~\ref{sec:full_res}. Nevertheless, the full results plots form an important part of the evidence that motivates the discussion in this section. \begin{figure}[h!] \centering \includegraphics[width=1.\textwidth]{figures/res_alpha_05.png} \caption{Type I and II errors at significance level $\alpha=.05$.
For each dataset setting (x-axis) and each dataset difficulty (narrow columns within each x-axis block), we computed the errors each method (y-axis) returns on the largest number of samples it can process within 60s (colors indicate error magnitude). Black regions correspond to the cases where an algorithm was not able to process even 100 samples within 100s. Black vertical lines separate the four settings, and for each setting the columns are arranged in increasing difficulty, e.g.\ for \textsc{lingauss} the first column corresponds to \texttt{dim}=1, the last column to \texttt{dim}=256. See Figures~\ref{fig:result_chain_fit}--\ref{fig:result_pnl_cci} for full results plots, covering all sample sizes and all $p$-values.} \label{fig:minires} \end{figure} \FloatBarrier For all methods, as expected, lower dataset difficulty results in smaller Type I and Type II errors. The average of the two error types, shown in the right column of Figure~\ref{fig:minires}, shows that FIT achieves the best performance across the board. The remaining methods either have high errors (CHSIC, RCIT, CCI) or are too slow to process a large proportion of the datasets (KCIT, KCIPT). More detailed remarks on each method's performance follow. \subsection{FIT} Figure~\ref{fig:specresults_fit} illustrates typical behavior of the test, in this case applied to the \textsc{pnl} dataset. The only data that cause problems for the FIT algorithm are the high-dimensional, sparse versions of the \textsc{hybrid} dataset. None of the other methods succeeds here either (see the full results plots in Appendix~\ref{sec:full_res}). The results suggest the following considerations for the FIT test: \begin{compactenum} \item The test can fail for small sample sizes. It becomes reliable once the sample size reaches $10^3$--$10^4$. \item The test is fast even for a large number of dimensions and sample sizes, most often returning in less than 10s.
\item Type I and Type II errors decrease consistently as the number of samples increases. \end{compactenum} \begin{figure}[h!] \centering \includegraphics[width=1.\textwidth]{figures/fullres__pnl_fit.png} \caption{Typical FIT results: all the $p$-values plotted against sample size on log-log scales. Here we show as example the results on \textsc{pnl} data, one plot for each dimensionality setting. Black dots correspond to dataset versions where $X\independent Y \mid Z$. To achieve low Type I error, they should be close to 1. White dots correspond to dataset versions where $X \notindependent Y \mid Z$. To achieve low Type II error, they should be close to 0. (Circled black dots result when the independent and dependent version of the dataset returned the same $p$-value.) The values are clipped at $10^{-5}$. The horizontal lines mark the $p$-values of $0.05$ and $0.01$, respectively. Lack of datapoints indicates that the method ran out of time (e.g.\ for high sample sizes in the highest dimensionality datasets). The last plot shows the runtime for different sample sizes on each of the datasets. We stopped the methods after 100s.} \label{fig:specresults_fit} \end{figure} \FloatBarrier \newpage \subsection{CHSIC} CHSIC performs well on low-dimensional, low-sample-size and low-difficulty data, but has trouble detecting conditional dependence in the more difficult cases. Fig.~\ref{fig:specresults_chsic} illustrates this using the \textsc{lingauss} datasets. Overall, our results suggest that CHSIC: \begin{compactenum} \item Achieves very good Type I and Type II error levels for most dataset settings, but only at the lowest difficulty levels. \item Fails to return meaningful results for sample sizes over 1000 and/or high-difficulty data. \end{compactenum} \begin{figure}[h!] \centering \includegraphics[width=1.\textwidth]{figures/fullres__chain_chsic.png} \caption{Typical CHSIC results (shown here on the \textsc{lingauss} datasets).
Black dots correspond to dataset versions where $X\independent Y \mid Z$. To achieve low Type I error, they should be close to 1. White dots correspond to dataset versions where $X \notindependent Y \mid Z$. To achieve low Type II error, they should be close to 0. The values are clipped at $10^{-5}$. Lack of datapoints indicates that the method ran out of time. Note the comparison to the runtime plot in Fig.~\ref{fig:specresults_fit}.} \label{fig:specresults_chsic} \end{figure} \FloatBarrier \subsection{KCIT} KCIT does well in most cases when it returns a result, but is often too slow to approach the harder datasets -- as illustrated in Fig.~\ref{fig:specresults_kcit}, again using the \textsc{lingauss} datasets. \begin{compactenum} \item KCIT achieves very low Type I and Type II errors for low-dimensionality / low-difficulty datasets. \item The method's Type I error increases as dataset difficulty increases, while its Type II error stays low, across all the settings. \item The method's speed only allows it to tackle low-dimensionality data with sample sizes smaller than 1000. \end{compactenum} \begin{figure}[h!] \centering \includegraphics[width=1.\textwidth]{figures/fullres__chain_kcit.png} \caption{Typical KCIT results (shown here on the \textsc{lingauss} datasets). Black dots correspond to dataset versions where $X\independent Y \mid Z$. To achieve low Type I error, they should be close to 1. White dots correspond to dataset versions where $X \notindependent Y \mid Z$. To achieve low Type II error, they should be close to 0. The values are clipped at $10^{-5}$. Lack of datapoints indicates that the method ran out of time -- almost always when the number of samples was over 1000.} \label{fig:specresults_kcit} \end{figure} \FloatBarrier \subsection{RCIT} In almost all cases (with the only exception of 1-dimensional \textsc{lingauss} and 1-dimensional \textsc{pnl} data), RCIT returns very low $p$-values as the number of samples increases, thus yielding high Type I errors.
Fig.~\ref{fig:specresults_rcit} illustrates this behavior. \begin{figure}[h!] \centering \includegraphics[width=1.\textwidth]{figures/fullres__chaos_rcit.png} \caption{Typical RCIT results (shown here on the \textsc{chaos} datasets). Black dots correspond to dataset versions where $X\independent Y \mid Z$. To achieve low Type I error, they should be close to 1. White dots correspond to dataset versions where $X \notindependent Y \mid Z$. To achieve low Type II error, they should be close to 0. The values are clipped at $10^{-5}$. Lack of datapoints indicates that the method ran out of time.} \label{fig:specresults_rcit} \end{figure} \FloatBarrier \subsection{KCIPT} KCIPT fails on a majority of the datasets, and is too slow to process any but the lowest-dimensional, smallest-sample cases -- as shown in Fig.~\ref{fig:specresults_kcipt}. \begin{figure}[h!] \centering \includegraphics[width=1.\textwidth]{figures/fullres__chain_kcipt.png} \caption{Typical KCIPT results (shown here on \textsc{lingauss} data). Black dots correspond to dataset versions where $X\independent Y \mid Z$. To achieve low Type I error, they should be close to 1. White dots correspond to dataset versions where $X \notindependent Y \mid Z$. To achieve low Type II error, they should be close to 0. The values are clipped at $10^{-5}$. Lack of datapoints indicates that the method ran out of time.} \label{fig:specresults_kcipt} \end{figure} \FloatBarrier \subsection{CCI} CCI fails on most datasets. In addition, instead of outputting a $p$-value, it outputs a binary decision: either 1 (indicating the data is conditionally independent) or 0 (for conditionally dependent data). Unfortunately, in our evaluation it has a tendency to output 0 for most of the continuous datasets, and 1 for the \textsc{hybrid} dataset (as shown in Fig.~\ref{fig:specresults_cci}). \begin{figure}[h!]
\centering \includegraphics[width=1.\textwidth]{figures/fullres__discrete_cci.png} \caption{Typical CCI results (shown here on \textsc{hybrid} data). Black dots correspond to dataset versions where $X\independent Y \mid Z$. To achieve low Type I error, they should be close to 1. White dots correspond to dataset versions where $X \notindependent Y \mid Z$. To achieve low Type II error, they should be close to 0. The values are clipped at $10^{-5}$. Lack of datapoints indicates that the method ran out of time. CCI returns only a binary result of 0 (for dependence) or 1 (for independence).} \label{fig:specresults_cci} \end{figure} \section{Discussion} \label{sec:discussion} We showed that FIT is a fast and accurate conditional independence test. Our evaluation covered a broad range of dataset sizes, dimensionalities and complexities. In this evaluation, FIT performs as well as or better than the alternatives. In addition, it is the only algorithm that applies successfully to large sample sizes. In this section, we discuss alternatives to our evaluation strategy, and clarify some aspects of FIT. \subsection{Alternative evaluation metrics} \label{sec:discussion_eval} Previous work used a variety of methods to evaluate the performance of CITs. \citet{fukumizu2008} and \citet{zhang_kernel-based_2012} report Type I and Type II errors on a number of datasets. This is similar to our condensed presentation in Fig.~\ref{fig:minires}. However, their evaluation considered fixed, small sample sizes of 200 and 400 datapoints (as the evaluated methods are too slow to handle larger datasets). In contrast, we allow each algorithm to process as many datapoints as possible within a fixed time limit. As~\citet{doran_permutation-based_2014} point out, however, Type I and Type II errors can bias the results: some algorithms might perform particularly well at one significance level, and some applications might require non-standard significance levels.
Thus,~\citet{doran_permutation-based_2014} proposed to condense the performance of a CIT across all significance levels into two statistics. First, their ``area under the power curve'' (AUPC) is the area under the cumulative distribution function of the $p$-values the algorithm returns on a conditionally dependent dataset. Second, they perform a test on the distribution of $p$-values: their ``KS-test $p$-value'' is the $p$-value of the Kolmogorov-Smirnov test~\citep{stephens_edf_1974} for the null hypothesis that the $p$-values of a CIT are uniformly distributed on a conditionally independent dataset. A good test has high AUPC and a non-low KS-test $p$-value. The problem with both of these approaches is that they do not consider how the behavior of a CIT changes as the number of samples grows. To us, an important requirement is that for any type of data, the test is ``correct'' in the limit of an infinite number of samples. Looking at a limited range of sample sizes can be deceptive. Consider Fig.~\ref{fig:result_pnl_rcit} with \texttt{dim}{}=128. RCIT seems to be doing very well for sample sizes between 100 and 1000. However, it fails for any larger number of samples. In reality, both for the dependent and independent versions of the dataset, RCIT's $p$-values follow a decaying trend as the number of samples grows, which is not correct behavior. Evaluating a CIT on a fixed sample size is deceptive in a similar way to evaluating it at a fixed significance level. The best (although energy-consuming) way we found to understand the behavior of a CIT and compare it with other algorithms is to carefully examine how its $p$-value changes with a changing sample size on a variety of datasets. Figures~\ref{fig:result_chain_fit}--\ref{fig:result_pnl_cci} show these $p$-value plots. We encourage any reader interested in a thorough comparison of the CIT methods to take the time to examine these plots.
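For concreteness, both summary statistics of \citet{doran_permutation-based_2014} can be computed directly from a vector of returned $p$-values. The sketch below is our own illustration, not code from any of the evaluated packages; it uses the identity that the area under the empirical CDF of the $p$-values equals $1 - \bar{p}$:

```python
import numpy as np
from scipy import stats

def aupc(pvalues):
    """Area under the empirical CDF of p-values returned on a
    conditionally dependent dataset: integral_0^1 F(t) dt = 1 - mean(p)."""
    return 1.0 - float(np.mean(pvalues))

def ks_uniformity_pvalue(pvalues):
    """KS-test p-value for H0: the p-values (returned on a conditionally
    independent dataset) are Uniform(0, 1)."""
    return stats.kstest(pvalues, "uniform").pvalue

rng = np.random.RandomState(0)
# A powerful test concentrates p-values near 0 on dependent data ...
print(aupc(rng.beta(0.1, 1.0, size=500)) > 0.8)   # True
# ... and a calibrated test returns uniform p-values on independent data,
# so its AUPC there is close to 0.5 and the KS p-value is not small.
print(round(aupc(rng.uniform(0.0, 1.0, size=500)), 2))
```

The closed form for the AUPC avoids any explicit CDF integration; evaluating the two statistics at every sample size, rather than once, recovers the trend information discussed above.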
\subsection{Why use a Decision Tree Regression?} \label{sec:discussion_ml} Algorithm~\ref{alg:fit} uses decision tree regression (DTR) as its machine learning component. Other regression algorithms could serve as a substitute for DTR. There are two crucial requirements for such algorithms (apart from good predictive power): 1) computational efficiency and 2) ability to handle structured multi-dimensional inputs. Initially, we implemented FIT using a neural network as the back-end, efficiently implemented in TensorFlow~\citep{abadi_tensorflow:_2016}, and trained on a GPU (Titan Xp). However, given that we want FIT to apply to both small and large, high- and low-dimensional data, the hyperparameter choice for the neural network regressor becomes a complex problem -- for example,~\citet{jaderberg_population_2017} propose a simple and effective hyperparameter search method that nevertheless assumes having access to a cluster of a hundred GPU machines as a reasonable setting. DTRs are fast and require us to choose the value of only one hyperparameter. \subsection{Failure cases} \label{sec:discussion_failure} Our Fast (Conditional) Independence Test assumes that if $X\notindependent Y \mid Z$, then the MSE obtained after regressing $Y$ on $X, Z$ is smaller than that obtained after regressing $Y$ on $Z$ only. This assumption does not always hold. Take the system where $X, Y, Z$ are one-dimensional variables and $Y\sim \mathcal{N}(Z, X)$. In other words, $Y$ is a Gaussian with mean $Z$ and variance $X$. Then, whether $X$ is known or not, the true regression line is simply $Y=Z$, and the MSE does not change. However, $Y$ clearly depends on $X$, even when $Z$ is given. This represents just one possible failure mode; we do not attempt to characterize all the cases in which nonlinear regression does not benefit from a conditionally dependent regressor and simply assume them away.
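To make the regression-comparison assumption concrete, the following sketch implements a simplified FIT-style test. It is our own simplification for illustration, not the exact Algorithm~\ref{alg:fit}: the 50/50 split, the tree depth, and the one-sided paired t-test on held-out squared errors are our choices.

```python
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

def fit_style_pvalue(x, y, z, max_depth=8, seed=0):
    """Compare held-out squared errors of regressing y on (x, z) versus
    y on z alone. Under H0 (x independent of y given z), the extra
    regressor should not systematically reduce the error."""
    y = y.ravel()
    idx_tr, idx_te = train_test_split(np.arange(len(y)), test_size=0.5,
                                      random_state=seed)
    def heldout_sq_err(features):
        reg = DecisionTreeRegressor(max_depth=max_depth, random_state=seed)
        reg.fit(features[idx_tr], y[idx_tr])
        return (reg.predict(features[idx_te]) - y[idx_te]) ** 2
    err_full = heldout_sq_err(np.hstack([x, z]))   # regress y on (x, z)
    err_reduced = heldout_sq_err(z)                # regress y on z only
    t, p = stats.ttest_rel(err_reduced, err_full)
    return p / 2 if t > 0 else 1 - p / 2           # one-sided p-value

rng = np.random.RandomState(1)
n = 2000
x, z = rng.uniform(-1, 1, (n, 1)), rng.uniform(-1, 1, (n, 1))
y_dep = x + z + 0.1 * rng.randn(n, 1)   # x not independent of y given z
y_ind = z + 0.1 * rng.randn(n, 1)       # x independent of y given z
print(fit_style_pvalue(x, y_dep, z))    # small: dependence detected
print(fit_style_pvalue(x, y_ind, z))    # not small: no evidence against H0
```

On the $Y\sim\mathcal{N}(Z,X)$ failure case described above, both regressions would achieve essentially the same held-out error, so this comparison would (incorrectly) not reject conditional independence.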
\subsection{Which test should I use?} Our experimental results suggest that in general, FIT is the best choice. The only situation where an alternative should be considered is low-dimensional data where it is impossible to obtain more than 1000 samples. In such cases, CHSIC or KCIT are likely to outperform FIT. RCIT and CCI do not work well in our evaluation. KCIPT is too slow to justify using it over FIT, KCIT or CHSIC. \FloatBarrier \bibliographystyle{plainnat}
\section{Introduction} The Sanskrit Computational Linguistics (SCL) community has shown growing interest in the dependency parsing task over the last two decades. Earlier attempts at developing a dependency parser for Sanskrit mainly focused on building rule-based systems~\cite{goyal2007analysis,kulkarni2010designing,kulkarni2013deterministic,kulkarni-etal-2019-dependency,Kulkarni2021SanskritPF} due to the lack of a labelled dataset. With the recent availability of task-specific labelled data, \citeN{amrith21,krishna-etal-2020-keep} propose hybrid systems that integrate a linguistically motivated data-driven system with rules from P\={a}\d{n}inian grammar \cite{panini} and report state-of-the-art performance for Sanskrit dependency parsing (SDP). These hybrid systems rely on a \textit{lexicon-driven} shallow parser called the Sanskrit Heritage Reader (SHR)\footnote{\url{https://sanskrit.inria.fr/DICO/reader.fr.html}} for computing linguistically motivated features. SHR does not use a domain-specific lexicon; hence it may fail to recognize some words of an input sentence. In realistic scenarios, failure to generate linguistically motivated features renders these systems unusable. Further, the morphologically rich nature of the Sanskrit language intensifies the possibility of such failures. Recently, \textit{purely data-driven} neural architectures~\cite{DBLP:conf/iclr/DozatM17,fernandez-gonzalez-gomez-rodriguez-2019-left,zhou-zhao-2019-head} have become the de-facto alternative in the Natural Language Processing community due to their ease of applicability and scalability. Thus, \citeN{krishna2020neural,amrith2019thesis} investigate the efficacy of these \textit{purely data-driven} architectures to obviate the need for linguistically motivated hand-crafted feature engineering; however, they do not match the performance of \textit{hybrid} dependency parsers \cite{krishna-etal-2020-keep,amrith21} due to data sparsity.
Sanskrit, being a relatively free-word-order and low-resourced morphologically rich language (MRL), demands a relatively large amount of labelled data to build a robust parsing solution. Human resources for developing additional labelled data are scarce because annotators need to be experts in Sanskrit, and native speakers cannot do the job \cite{hellwig-etal-2020-treebank}. In this work, we investigate: How far can we push a \textit{purely data-driven} system using various techniques proposed in the low-resource setting \cite{hedderich-etal-2021-survey}? To answer this question, we systematically explore five pragmatic strategies for low-resource settings, namely, data augmentation \cite{vania-etal-2019-systematic,sahin-steedman-2018-data,gulordava-etal-2018-colorless}, cross-lingual/mono-lingual pretraining \cite{sandhan-etal-2021-little,conneau-etal-2020-unsupervised,peters-etal-2018-deep,kondratyuk-straka-2019-75}, sequential transfer learning \cite{ruder-etal-2019-transfer}, multi-task learning \cite{nguyen-verspoor-2018-improved} and self-training \cite{rotman2019deep,clark-etal-2018-semi} (\S~\ref{5_strategies}). Our proposed ensembled system outperforms the state-of-the-art \textit{purely data-driven} system \cite{DBLP:conf/iclr/DozatM17} by 2.8/3.9 points (UAS/LAS) absolute gain. Notably, it also outperforms the hybrid system of \citeN{krishna-etal-2020-keep} by 1.2 points absolute gain in terms of the UAS metric and shows comparable performance in terms of the LAS metric (\S~\ref{experiments}). Finally, we investigate the robustness of the proposed system to the relatively free word order of Sanskrit. We also analyse what kinds of errors are reduced as a result of effectively integrating the various strategies (\S~\ref{Error_analysis}). \section{Background: Sanskrit dependency parsing} \label{related_work} The P\={a}\d{n}inian framework, the oldest dependency grammar, is generative and deterministically irreversible.
This leaves scope for ambiguity when the grammar is reversed for Sanskrit analysis. \citeN{goyal2007analysis} proposed a rule-based system and pointed out the need for a hybrid scheme where a data-driven statistical method is integrated with the rule-based system to resolve the ambiguity posed by the irreversibility of grammar rules. Later,~\citeN{DBLP:conf/sanskrit/Hellwig09} took this idea forward and improved performance using a hybrid approach: a purely statistical parser augmented with simple syntactic rules from the grammar. Earlier attempts at developing a dependency parser for Sanskrit mainly focused on building rule-based systems~\cite{goyal2007analysis,kulkarni2010designing,kulkarni2013deterministic,kulkarni-etal-2019-dependency,Kulkarni2021SanskritPF}. \citeN{kulkarni2010designing} proposed a graph-based approach where each word is taken as a node and relations between words are identified using grammar rules. Based on heuristics, they rank the exhaustive candidate space. Later, they improved the computational aspect of their method in follow-up work~\cite{kulkarni2013deterministic}. Earlier attempts at building dependency parsers targeted simple prose-order sentences, so~\citeN{kulkarni-etal-2019-dependency} extended their previous work to poetry sentences. However, this system enumerates all possible solutions. Due to the increasing popularity of dependency parsers across NLP,~\citeN{goyal2014converting} introduced a methodology to convert enriched constituency trees into unlabelled dependency trees. Recently,~\citeN{amrith2019thesis} proposed a structured prediction framework for dependency parsing. This linguistically motivated model uses a graph-based parsing approach and currently reports state-of-the-art performance for Sanskrit. Although the results of this system are very promising, it takes input from lexicon-driven shallow parsers. If the parser fails to identify a word, this system cannot produce a parsed tree.
Recently, \citeN{sandhan-etal-2021-little} proposed a pretraining approach for low-resource dependency parsing for Sanskrit. Due to the lack of a powerful morphological analyzer, it is challenging to have morphological information available at run time. Therefore, \citeN{sandhan-etal-2021-little} obviate the need for morphological information at run time with the help of their proposed pretraining method. Data-driven approaches have shown tremendous achievements in the field of NLP~\cite{hellwig-nehrdich-2018-sanskrit,sandhan-etal-2019-revisiting}. Specifically, for dependency parsing, neural approaches have attracted intense attention from researchers due to their state-of-the-art performance without explicit feature engineering~\cite{DBLP:conf/iclr/DozatM17,fernandez-gonzalez-gomez-rodriguez-2019-left,zhou-zhao-2019-head}. However,~\citeN{krishna2020neural} report that these \textit{purely data-driven} approaches do not match the performance of their \textit{hybrid} counterparts due to data scarcity. Thus, in this work, we investigate the following question: How far can we push a \textit{purely data-driven} approach using recently proposed strategies for low-resource settings? We experiment with five strategies, namely, data augmentation, sequential transfer learning, pretraining, multi-task learning and self-training. Similar to our work, \citeN{vania-etal-2019-systematic} also investigate data augmentation, cross-lingual training and transliteration for low-resource dependency parsing. However, we specifically focus on the Sanskrit language, with exhaustive experiments on five strategies. \section{Investigation of Strategies Tailored for Low-resource Settings} \label{5_strategies} In this section, we explore five strategies specially tailored for low-resource settings and ensemble the best performing strategy of each category in our proposed system (\S~\ref{final_system}).
We utilize the BiAFFINE parser \cite{DBLP:conf/iclr/DozatM17} as the base system for all the experiments, henceforth referred to as BiAFF.\\ \noindent\textbf{Dataset and metric:} We utilize around 4,000 dependency labelled trees from the Sanskrit Treebank Corpus \cite[STBC]{kulkarni2010designing} and the \textit{\'{S}i\'{s}up\={a}lavadha} \cite{ryalichallenges}. We use 1,700, 1,000 and 1,300 sentences as train, dev and test sets, respectively. We perform these investigations on the dev set to find the best strategy from each category. Following \citeN{amrith21}, we use sentence-level macro-averaged Unlabelled and Labelled Attachment Scores (UAS, LAS). \subsection{Data Augmentation} \label{data_augmentation} \begin{table*}[h] \centering \begin{tabular}{|c|c|c|} \hline \textbf{System} &\textbf{UAS} &\textbf{LAS}\\ \hline BiAFF & 84.07 & 76.87 \\ Mixed \cite{amrith21} & 64.71 & 53.99 \\ Cropping \cite{sahin-steedman-2018-data} & 71.48 & 66.43 \\ Rotation \cite{sahin-steedman-2018-data} & 83.76 & 76.54 \\ Nonce \cite{gulordava-etal-2018-colorless} & \textbf{84.74} & \textbf{77.83} \\ Nonce++ \cite{gulordava-etal-2018-colorless} & 84.67 & 77.47 \\ \hline \end{tabular} \vspace{3mm} \caption{Results on the dev set when different data augmentation strategies are combined with BiAFF.} \label{table:data_augmentation}\vspace{-5mm} \end{table*} \noindent\textbf{Systems:} Table~\ref{table:data_augmentation} reports the data augmentation strategies we experimented with for SDP. \textbf{Mixed:} \citeN{amrith21} use a mixture of multiple existing augmentation techniques, such as synonym replacement \cite{NIPS2015_250cf8b5}, sentence simplification \cite{vickrey-koller-2008-sentence} and sentence cropping \cite{sahin-steedman-2018-data}, together with linguistically motivated constraints to generate augmented data.
\citeN{sahin-steedman-2018-data} introduce \textbf{Cropping}, which deletes some parts of a sentence to create multiple short meaningful sentences, and \textbf{Rotation}, which permutes the siblings of a headword, restricted to a set of relations. Both operations modify a set of words or configurational information; however, they do not change the dependencies. \textbf{Nonce:} \citeN{gulordava-etal-2018-colorless} propose to create nonce sentences by substituting a few words that share the same syntactic labels. For each training instance, they stochastically replace content words with words having the same part-of-speech tag, morphological tag and dependency label as syntactic constraints.\footnote{Following \citeN{vania-etal-2019-systematic}, we additionally consider the dependency label as a constraint.} However, they pick replacement pairs from the same training data, which may not be the best choice for reducing OOV issues. \textbf{Nonce++:} Thus, on top of the \textbf{Nonce} setting, we experiment with replacement pairs drawn from outside the training data. To reduce OOV, we apply the rule-based parser \cite{kulkarni2013deterministic} to unlabelled data to obtain potential replacement candidates that also satisfy the syntactic constraints. \\ \noindent\textbf{Observations:} Except for \textbf{Nonce/Nonce++} in Table~\ref{table:data_augmentation}, all systems show low performance compared to BiAFF trained with gold data. \textbf{Rotation} seems useful for modelling data in the poetry domain; since our data is in the prose domain, permutations in word order resulted in a loss of potentially useful configurational information. \textbf{Cropping} shows a performance drop of 12.5/10.4 points (UAS/LAS), possibly due to the generation of shorter training samples relative to the test data. Similarly, the low performance of \textbf{Mixed} might be due to the presence of the cropping technique and the lack of syntactic constraints in its synonym replacement.
On the other hand, the \textbf{Nonce} setting shows a significant improvement over BiAFF, possibly because it follows hard syntactic constraints without altering configurational information or sentence length. \textbf{Nonce++} does not improve further. \subsection{Cross/mono-lingual Pretraining} \begin{table*}[h] \centering \begin{tabular}{|c|c|c|} \hline \textbf{System} &\textbf{UAS} &\textbf{LAS}\\ \hline BiAFF \cite{dozat2017stanford} & 84.07 & 76.87 \\ BiAFF+mBERT \cite{kondratyuk-straka-2019-75} & 80.29 & 69.65 \\ BiAFF+ELMo \cite{peters-etal-2018-deep} & 84.00 & 76.78 \\ BiAFF+XLM-R \cite{nguyen2021trankit} & 86.02 & 78.84 \\ BiAFF+LCM \cite{sandhan2021evaluating} & \textbf{86.28} & \textbf{80.17}\\ \hline \end{tabular} \vspace{3mm} \caption{Results on the dev set when BiAFF is integrated with cross-lingual/mono-lingual pretraining strategies.} \label{table:pretraining} \end{table*} \vspace{-5mm} \noindent\textbf{Systems:} We briefly elaborate on the pretraining approaches reported in Table~\ref{table:pretraining}. Due to the limited amount of unlabelled data for Sanskrit, training massive contextual pretraining models from scratch may not be helpful; hence, the natural choice is a multilingual pretraining approach. Thus, we experiment with two multilingual pretraining approaches, namely, a multilingual BERT \cite[\textbf{mBERT}]{devlin-etal-2019-bert} based system \cite{kondratyuk-straka-2019-75} and an XLM-RoBERTa \cite[\textbf{XLM-R}]{conneau-etal-2020-unsupervised} based system \cite{nguyen2021trankit}. Also, we investigate \textbf{ELMo} \cite{peters-etal-2018-deep} trained from scratch for Sanskrit \cite{sandhan2021evaluating}.
Also, we evaluate a pretraining approach that uses morphological-tagging-related auxiliary tasks \cite[\textbf{LCM}]{sandhan-etal-2021-little}.\\ \noindent\textbf{Observations:} \textbf{mBERT} shows a 3.8/7.2 points (UAS/LAS) drop over BiAFF (Table~\ref{table:pretraining}) due to the absence of Sanskrit during pretraining \cite{sandhan-etal-2021-little}. On the other hand, \textbf{XLM-R} (with Sanskrit included during pretraining) demonstrates 1.9/2.0 points absolute gain (UAS/LAS) over BiAFF. The on-par performance of \textbf{ELMo} trained from scratch might be attributed to insufficient unlabelled data. Finally, \citeN[\textbf{LCM}]{sandhan-etal-2021-little}, with the help of morphologically enriched encoders, shows 2.2/3.3 points (UAS/LAS) absolute gain.\footnote{In contrast to \citeN{sandhan-etal-2021-little}, we also use oracle morphological information as an input.} Notably, the gain obtained for the LAS metric is higher by 1.1 points, which helps to bridge the gap between the UAS and LAS metrics. \subsection{Self-training} \begin{table*}[h] \centering \begin{tabular}{|c|c|c|} \hline \textbf{System} &\textbf{UAS} &\textbf{LAS}\\ \hline BiAFF \cite{dozat2017stanford} & 84.07 & 76.87 \\ Self-Train & 84.15 & 77.03 \\ CVT \cite{clark-etal-2018-semi} & 82.40 & 73.62 \\ DCST \cite{rotman2019deep} & \textbf{85.61} & \textbf{78.85}\\ \hline \end{tabular}\vspace{3mm} \caption{Results on the dev set for self-training based systems.}\vspace{-5mm} \label{table:self_training} \end{table*} \noindent\textbf{Systems:} Another line of modelling focuses on self-training~\cite{goldwasser-etal-2011-confidence,clark-etal-2018-semi,rybak2018semi} to overcome the bottleneck of task-specific labelled data. Earlier attempts failed to prove the effectiveness of self-training for dependency parsing~\cite{rush-etal-2012-improved}. However, \citeN[\textbf{CVT}]{clark-etal-2018-semi} and \citeN[\textbf{DCST}]{rotman2019deep} show successful applications; thus, we consider these two systems.
Also, we generate dependency data by applying a pretrained BiAFF system to unlabelled data. We augment the gold data with this predicted data and retrain BiAFF in the \textbf{Self-Train} setting.\\ \noindent\textbf{Observations:} As shown in Table~\ref{table:self_training}, \textbf{DCST} outperforms the other systems due to its syntactically enriched contextual representations. The default setting of \textbf{CVT} does not consider the potentially useful morphological information for the SDP task; thus, it falls short in terms of performance. \textbf{Self-Train} does not show a significant improvement over BiAFF. \subsection{Sequential Transfer Learning} \begin{table*}[h] \centering \begin{tabular}{|c|c|c|} \hline \textbf{System} &\textbf{UAS} &\textbf{LAS}\\ \hline BiAFF & 84.07 & 76.87 \\ SeqTraL-FE & 80.03 & 73.60 \\ SeqTraL-FEA & 82.92 & 76.51 \\ SeqTraL-UF & 84.00 & 77.72 \\ SeqTraL-DL & 84.78 & 78.53 \\ SeqTraL-FT & \textbf{84.84} & \textbf{78.53}\\ \hline \end{tabular} \vspace{3mm} \caption{Results on the dev set for transfer learning using various optimization schemes.} \label{table:SeqTraL} \end{table*} \vspace{-5mm} \noindent\textbf{Systems:} Transfer learning has captivated the attention of many NLP researchers due to its wide applicability and ease of integration. It contains two stages, namely, pretraining and adaptation. In pretraining, a general representation is learned from related source tasks; in the latter stage, the learned knowledge is used for target tasks. However, the selection of the pretraining and target tasks is interrelated. Sequential Transfer Learning (\textbf{SeqTraL}) thus consists of two components: pretraining for learning general representations, and adaptation to facilitate sample efficiency \cite{ruder-etal-2019-transfer}.
We investigate adapting a pretrained morphological tagger trained on the same morphologically tagged dependency data.\footnote{The sequence tagger consists of an LSTM-based encoder similar to \cite{DBLP:conf/iclr/DozatM17} and a decoder with a fully connected layer followed by a softmax layer.} Here, we take the first three Bi-LSTM layers from this morphological tagger, integrate them into BiAFF, and investigate various optimization schemes proposed for reducing catastrophic forgetting~\cite{french1999catastrophic,MCCLOSKEY1989109}. As we move down Table~\ref{table:SeqTraL}, the freedom for adaptation increases. Table~\ref{table:SeqTraL} reports the baselines used to investigate the best optimization scheme for adapting the pretrained encoder. \textbf{SeqTraL-FE:} We treat the newly integrated layers as Feature Extractors (FE) by freezing them. \textbf{SeqTraL-FEA:} On top of SeqTraL-FE, we augment adaptor modules~\cite{pmlr-v97-houlsby19a,pmlr-v97-stickland19a} in between the newly added consecutive layers. \textbf{SeqTraL-UF:} We gradually Unfreeze (UF) the new layers in top-to-bottom order~\cite{howard-ruder-2018-universal,felbo-etal-2017-using}. \textbf{SeqTraL-DL:} A Discriminative Learning rate (DL) is used for the newly added layers~\cite{howard-ruder-2018-universal}: the learning rate is decreased from the top to the bottom layers. \textbf{SeqTraL-FT:} We Fine-Tune (FT) all the newly added layers with the default learning rate. \\ \noindent\textbf{Observations:} Table~\ref{table:SeqTraL} shows that performance improves as the freedom for adaptation increases. \textbf{SeqTraL-FT} (fine-tuning) reports the best result, with 1-2 points absolute gain over BiAFF.
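The two scheduling schemes, discriminative learning rates and gradual unfreezing, can be sketched as follows. This is purely illustrative: the layer names and the decay factor of 0.5 are toy choices of ours, and only the base learning rate of 0.002 matches our default setting.

```python
# Toy layer stack, ordered top (closest to the output) to bottom.
LAYERS = ["bilstm_3", "bilstm_2", "bilstm_1"]

def discriminative_lrs(layers, base_lr=2e-3, decay=0.5):
    """SeqTraL-DL style: decrease the learning rate from top to bottom."""
    return {name: base_lr * decay ** i for i, name in enumerate(layers)}

def unfreeze_epochs(layers, start_epoch=1):
    """SeqTraL-UF style: gradually unfreeze in top-to-bottom order;
    the i-th layer from the top becomes trainable at epoch start_epoch + i."""
    return {name: start_epoch + i for i, name in enumerate(layers)}

print(discriminative_lrs(LAYERS))
print(unfreeze_epochs(LAYERS))  # {'bilstm_3': 1, 'bilstm_2': 2, 'bilstm_1': 3}
```

In a full implementation these per-layer values would be fed to the optimizer as parameter groups; the sketch only shows how the schedules are derived.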
\subsection{Multi-task Learning} \begin{table*}[h] \centering \begin{tabular}{|c|c|c|} \hline \textbf{Tasks} &\textbf{UAS} &\textbf{LAS}\\ \hline BiAFF & 84.07 & 76.87 \\ MTL-Case & 84.81 & 77.47 \\ MTL-Label & \textbf{84.87} & 77.88 \\ MTL-Morph & 84.84 & \textbf{78.00} \\ \hline \end{tabular}\vspace{3mm} \caption{Results on the dev set for different auxiliary tasks in the multi-task learning setting.} \label{table:MTL} \end{table*} \vspace{-5mm} \noindent\textbf{Systems:} Multi-task learning helps to exploit the complementary signal from related auxiliary tasks, which enables better generalization. We simultaneously train BiAFF and a sequence-labelling-based auxiliary task in a multi-task setting (\textbf{MTL}). We experiment with the following auxiliary tasks: prediction of the morphological label (\textbf{MTL-Morph}), of the dependency relation between a word and its head (\textbf{MTL-Label}) and of the case label (\textbf{MTL-Case}\footnote{We use the POS label in the absence of case information.}). \\ \noindent\textbf{Observations:} Table~\ref{table:MTL} illustrates that all auxiliary tasks show significant gains over BiAFF, and \textbf{MTL-Morph} marks the highest LAS gain. We also experiment with a setting where all three tasks are considered simultaneously; however, this shows subpar performance compared to \textbf{MTL-Morph}.
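A minimal sketch of the joint loss in such a multi-task setting follows. It is purely illustrative: random matrices stand in for the shared Bi-LSTM states and the task heads, and the auxiliary-task weight is a hypothetical hyperparameter, not a value from our experiments.

```python
import numpy as np

rng = np.random.RandomState(0)
# Toy stand-in for the shared encoder output of 4 tokens (8-dim states).
H = rng.randn(4, 8)
# Two heads share H: arc scoring (main task) and morph tagging (auxiliary).
W_arc, W_morph = rng.randn(8, 4), rng.randn(8, 5)

def softmax(scores):
    scores = scores - scores.max(axis=-1, keepdims=True)
    e = np.exp(scores)
    return e / e.sum(axis=-1, keepdims=True)

def nll(probs, gold):
    # Mean negative log-likelihood of the gold labels.
    return -np.log(probs[np.arange(len(gold)), gold]).mean()

arc_probs = softmax(H @ W_arc)      # head-word distribution per token
morph_probs = softmax(H @ W_morph)  # morph-tag distribution per token
gold_heads, gold_tags = np.array([1, 0, 0, 2]), np.array([3, 1, 4, 0])

lam = 0.5  # hypothetical auxiliary-task weight
loss = nll(arc_probs, gold_heads) + lam * nll(morph_probs, gold_tags)
print(round(float(loss), 3))
```

Backpropagating this combined loss updates the shared encoder with gradients from both tasks, which is how the auxiliary morphological signal reaches the parser.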
\subsection{The Proposed Ensembled System} \label{final_system} \begin{table*}[h] \centering \begin{tabular}{|c|c|c|} \hline \textbf{System} &\textbf{UAS}&\textbf{LAS} \\\hline BiAFF & 84.07 & 76.87 \\ + SeqTraL-FT & 84.84 & 78.53 \\ + LCM & 86.22 & 79.93 \\ + MTL-Morph & \textbf{86.55} & \textbf{80.30} \\ \hline + Nonce & 85.91 & 79.68 \\ \hline + DCST & 85.86 & 79.46 \\ \hline \end{tabular} \vspace{3mm} \caption{Ablation analysis on the dev set for different components of the proposed system.} \vspace{-5mm} \label{table:ablation} \end{table*} \noindent\textbf{Ablation:} Table~\ref{table:ablation} reports the contribution of the different components of the proposed system. We observe consistent improvements with \textbf{SeqTraL-FT}, \textbf{LCM} and \textbf{MTL-Morph} integrated on top of BiAFF, due to their complementary signals. The most prominent contribution comes from the morphologically tailored pretraining method \cite[\textbf{LCM}]{sandhan-etal-2021-little}, which confirms the well-established findings on the effectiveness of morphological information for dependency parsing. However, \textbf{DCST} and the \textbf{Nonce} augmentation strategy do not add any complementary signal; hence, we do not consider them in the proposed system. \\ \noindent\textbf{The proposed system:} Figure~\ref{fig:model} shows the proposed ensembled system using a toy example from Sanskrit. It consists of two steps, namely, pretraining (\textbf{LCM} \crule[lightg]{0.25cm}{0.25cm}) and integration. As shown in Figure~\ref{fig:tagging_schem}, \textbf{LCM} pretrains three encoders $E^{(1)-(3)}$ using three independent auxiliary tasks, namely, morphological label prediction, case label prediction and relation label prediction. Thereafter, as shown in Figure~\ref{fig:gating}, these pretrained encoders are integrated with the BiAFF encoder $E^{(P)}$ using a gating mechanism as employed in~\citeN{sato-etal-2017-adversarial}.
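One common formulation of such a gate, shown below for two encoders, interpolates the encoder outputs dimension-wise. This is a sketch under our own assumptions; the exact parameterization in~\citeN{sato-etal-2017-adversarial}, and our four-encoder variant, may differ.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gate_combine(h_pre, h_task, W_g, b_g):
    """Dimension-wise gate: g decides how much of the pretrained encoder's
    representation to keep versus the task encoder's."""
    g = sigmoid(np.concatenate([h_pre, h_task]) @ W_g + b_g)
    return g * h_pre + (1.0 - g) * h_task

rng = np.random.RandomState(0)
d = 6
h_pre, h_task = rng.randn(d), rng.randn(d)        # toy encoder outputs
W_g, b_g = 0.1 * rng.randn(2 * d, d), np.zeros(d)  # learned gate parameters
h = gate_combine(h_pre, h_task, W_g, b_g)
# Each coordinate lies between the two encoders' values (convex combination).
print(np.all((h >= np.minimum(h_pre, h_task) - 1e-9)
             & (h <= np.maximum(h_pre, h_task) + 1e-9)))  # True
```

Because the gate is learned, the model can decide, per dimension and per token, how much pretrained signal to mix into the task-specific representation.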
We use the \textbf{SeqTraL-FT} \crule[lorg]{0.25cm}{0.25cm} optimization scheme to update the weights of these four encoders. Next, the \textbf{MTL-Morph} \crule[lpink]{0.25cm}{0.25cm} component adds morphological tagging as an auxiliary task to inject a complementary signal into the model. Finally, the combined representation of a pair of words is passed to \textbf{BiAFF} \crule[lvio]{0.25cm}{0.25cm} to calculate the arc score (S) and label (L) probabilities. \begin{figure*}[!tbh] \centering \subfloat[\label{fig:tagging_schem}]{\includegraphics[width=0.5\linewidth]{images/Tallip_LCM.pdf}} \subfloat[\label{fig:gating}]{\includegraphics[width=0.5\linewidth]{images/Tallip_BiAFF.pdf}} \caption{The proposed ensembled architecture with a toy example from Sanskrit. Translation: ``Oh V\={a}caspate (B\d{r}haspati)! Come again with divine mind''. (a) Pretraining step (LCM): We use a sequence labelling architecture with three independent auxiliary tasks, namely, morphological label (green), case label (red) and relation label (black). (b) Integrated parser: The pretrained encoders (\crule[lightg]{0.25cm}{0.25cm}) $E^{(1)-(3)}$ are integrated with the BiAFF encoder $E^{(P)}$ using the gating mechanism (G). We use the SeqTraL-FT (\crule[lorg]{0.25cm}{0.25cm}) optimization scheme to update the weights of these four encoders. Next, MTL-Morph (\crule[lpink]{0.25cm}{0.25cm}) performs morphological tagging in a multi-task setting to enrich the information in the encoders. Finally, the combined representation of a pair of words is passed to BiAFF (\crule[lvio]{0.25cm}{0.25cm}) to predict the arc score (S) and label (L) probabilities.} \label{fig:model} \end{figure*} \section{Experiments} \label{experiments} \noindent\textbf{Data and Metric:} We utilize around 4,000 dependency labelled trees from the Sanskrit Treebank Corpus \cite[STBC]{kulkarni2010designing} and the \textit{\'{S}i\'{s}up\={a}lavadha} \cite{ryalichallenges}. We use 1,700, 1,000 and 1,300 sentences as train, dev and test sets, respectively.
However, the final results on the test set are reported using systems trained on the combined gold train and dev sets. Additionally, we also evaluate on the recently proposed Vedic Sanskrit Treebank \cite[VST]{hellwig-etal-2020-treebank}, consisting of 2,524 and 1,473 sentences as training and test data, respectively. The STBC contains sentences from the prose domain alone; however, the VST contains sentences from both the poetry and prose domains. The STBC follows annotations based on K\={a}raka \cite{kulkarni-sharma-2019-paninian,kulkarni2010designing}, the grammatical tradition of Sanskrit, while the VST uses Universal Dependencies (UD). Following \citeN{amrith21}, we use sentence-level macro-averaged Unlabelled and Labelled Attachment Scores (UAS, LAS) and the t-test for statistical significance \cite{dror-etal-2018-hitchhikers}.\\ \noindent\textbf{Baselines:} There are two broad approaches proposed for the dependency parsing task, namely, transition-based and graph-based parsing. We use \citeN{more-etal-2019-joint}[\textbf{YAP}] and \citeN{chang2016}[\textbf{L2S}] from the transition-based dependency parsing family. \citeN{DBLP:conf/iclr/DozatM17}[\textbf{BiAFF}] is a graph-based approach with a BiAFFINE attention mechanism. \citeN{krishna-etal-2020-keep}[\textbf{MG-EBM}] extends \citeN{amrith21}[\textbf{Tree-EBM-F}] using a multi-graph formulation. We report their standalone numbers for a fair comparison. Systems marked with (*) are hybrid systems which leverage linguistic rules from P\={a}\d{n}inian grammar.\\ \noindent\textbf{Hyper-parameters:} \label{hyper-parameter} We use default hyper-parameters for all the competing baseline systems compared in this work. For the SeqTraL variants, we use the exact same encoder as \citeN{ma-etal-2018-stack}, with 2 Bi-LSTM layers, and a decoder with a fully connected layer followed by a softmax layer.
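The sentence-level macro-averaged UAS/LAS used for evaluation can be sketched in a few lines; the head/label list encoding of a tree (1-indexed heads, 0 for the root) is an assumption of this illustration:

```python
def sentence_uas_las(gold_heads, gold_labels, pred_heads, pred_labels):
    """Per-sentence scores: fraction of tokens with the correct head
    (UAS) and with the correct head and label (LAS)."""
    n = len(gold_heads)
    head_ok = [g == p for g, p in zip(gold_heads, pred_heads)]
    uas = sum(head_ok) / n
    las = sum(h and gl == pl
              for h, gl, pl in zip(head_ok, gold_labels, pred_labels)) / n
    return uas, las

def macro_uas_las(gold, pred):
    """Sentence-level macro average over a corpus of (heads, labels) pairs,
    reported as percentages."""
    scores = [sentence_uas_las(gh, gl, ph, pl)
              for (gh, gl), (ph, pl) in zip(gold, pred)]
    uas = 100.0 * sum(s[0] for s in scores) / len(scores)
    las = 100.0 * sum(s[1] for s in scores) / len(scores)
    return uas, las

# toy example: two of three heads are predicted correctly
gold = [([0, 1, 1], ["root", "karta", "karma"])]
pred = [([0, 1, 2], ["root", "karta", "karma"])]
uas, las = macro_uas_las(gold, pred)
```

Macro averaging weights every sentence equally, so short sentences count as much as long ones, in contrast to the token-level micro average.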
For the proposed system, we adopt the BiAFF codebase of \citeN{ma-etal-2018-stack} with the following hyper-parameter settings: batch size 16, 100 training iterations, dropout rate 0.33, 2 stacked Bi-LSTM layers, learning rate 0.002, and the remaining parameters the same as in \citeN{ma-etal-2018-stack}. \\ \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|} \hline &\multicolumn{2}{c|}{\textbf{STBC}} & \multicolumn{2}{c|}{\textbf{VST}} \\\hline \textbf{System} &\textbf{UAS}&\textbf{LAS} & \textbf{UAS} &\textbf{LAS} \\\hline YAP & 75.31 & 66.02&70.37&39.20\\ L2S & 81.97 & 74.14&72.44&62.76 \\ Tree-EBM-F & 82.65 & 79.28&-&- \\ BiAFF & 85.88 & 79.55&77.23&67.68 \\ Tree-EBM-F* & \textit{85.32} & \textit{83.93} &-&- \\ MG-EBM* & \textit{87.46} & \textit{84.70} &-&- \\ Ours & \textbf{88.67} & \textbf{83.47}&\textbf{79.71}&\textbf{69.89} \\ \hline \end{tabular} \vspace{3mm} \caption{Main results on the test set for SDP. \textit{Hybrid} systems, marked with (*), are not directly comparable with our system. Our results are statistically significant compared to the strong baseline BiAFF as per the t-test ($p < 0.01$). Results are averaged over 3 runs.} \vspace{-5mm} \label{table:san_results} \end{table} \noindent\textbf{Results:} Clearly, the proposed ensembled system outperforms the state-of-the-art \textit{purely data-driven} system (BiAFF) with an absolute gain of 2.8/3.9 points (UAS/LAS).\footnote{ The \textit{hybrid} systems, Tree-EBM-F* and MG-EBM*, use extra-linguistic knowledge; hence, they are not directly comparable.} Interestingly, it also surpasses the performance of the \textit{hybrid} state-of-the-art system \cite[\textbf{MG-EBM}]{krishna-etal-2020-keep} by 1.2 points (UAS) absolute gain and shows comparable performance on the LAS metric. We observe that the performance of the transition-based systems (\textbf{YAP/L2S}) is significantly lower than that of their graph-based counterparts (\textbf{BiAFF/Ours}). We also observe a similar performance trend on the VST data.
The VST data is a mixture of dependency labelled trees from both the poetry and prose domains. As a result, the overall performance on VST is lower than on STBC due to the loss of configurational information.\footnote{We do not evaluate Tree-EBM-F* and MG-EBM* on the VST data due to the unavailability of the codebase.}\\ \section{Error Analysis} \label{Error_analysis} \begin{figure*}[!tbh] \centering \subfloat[\label{fig:UAS_error}]{\includegraphics[width=0.3\linewidth]{images/predUASLength.pdf}} \subfloat[\label{fig:label-vocab}]{\includegraphics[width=0.3\linewidth]{images/labelwise-vocab.pdf}} \subfloat[\label{fig:label-OOV}]{\includegraphics[width=0.3\linewidth]{images/labelwise-OOV.pdf}}\\ \subfloat[\label{fig:dependency_length}]{\includegraphics[width=0.3\linewidth]{images/Dependency_length.pdf}} \subfloat[\label{fig:distance2root}]{\includegraphics[width=0.3\linewidth]{images/distancetoroot.pdf}} \subfloat[\label{fig:nonprojectivity}]{\includegraphics[width=0.3\linewidth]{images/Non-projectivity_Precision.pdf}} \caption{(a) Performance on the prose and poetry domains as sentence length increases. The dependency label-wise performance on (b) in-vocabulary and (c) out-of-vocabulary words. From left to right the label frequency increases.
Performance in terms of (d) dependency length, (e) distance to root and (f) degree of non-projectivity.} \label{fig:length_analysis} \end{figure*} \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|} \hline &\multicolumn{2}{c|}{\textbf{BiAFF}} & \multicolumn{2}{c|}{\textbf{Ours}} \\\hline \textbf{Domain} &\textbf{UAS}&\textbf{LAS} & \textbf{UAS} &\textbf{LAS} \\\hline Prose &85.88 & 79.55&\textbf{88.67} & \textbf{83.47}\\ Poetry&47.75&44.32&\textbf{52.86}&\textbf{50.45}\\\hline In Vocab & 86.25 & 81.76&\textbf{89.16} & \textbf{85.64} \\ OOV &79.01&69.97 &\textbf{80.37}&\textbf{72.26} \\ \hline \end{tabular} \vspace{3mm} \caption{Fine-grained analysis between the strong baseline BiAFF and the proposed system on the test set.} \vspace{-5mm} \label{table:OOV_analysis} \end{table} \noindent We compare the proposed system with the strong baseline BiAFF.\footnote{We could not compare with Tree-EBM-F* and MG-EBM* due to the unavailability of the codebase and system outputs.} We investigate the robustness of the systems with respect to the following language-specific phenomena: (1) the relatively free word order: here, we verify the robustness to configurational information by evaluating on the poetry and prose domains. Table~\ref{table:OOV_analysis} illustrates that the proposed system shows consistently superior performance over BiAFF; however, both systems are brittle on the poetry domain; on average the performance degrades by 36.8/34.0 points (UAS/LAS) compared to the prose counterpart. The prose/poetry data has 44.0/0.6\% word pairs with a dependency length of 1, 13.5/30.7\% word pairs with a dependency length of more than 5, and 7.1/19.2\% non-projective arcs. Clearly, these structural divergences explain the low performance on the poetry domain for systems trained on the prose domain. (2) the morphologically rich nature: this is investigated by focusing on the out-of-vocabulary (OOV) phenomenon. Since Sanskrit is morphologically rich, handling OOV words is crucial for building robust systems.
The test set of STBC consists of 34\% OOV words. The prominent contribution to the average drop (reported in Table~\ref{table:OOV_analysis}) of 7.9/12.6 points (UAS/LAS) comes from the failure to correctly predict rarely occurring dependency relations for OOV words. In order to verify the performance in the OOV scenario, we evaluate the dependency label-wise performance for out-of-vocabulary (OOV) words (Figure~\ref{fig:label-OOV}) and in-vocabulary words (Figure~\ref{fig:label-vocab}). The contrast between Figure~\ref{fig:label-vocab} and~\ref{fig:label-OOV} demonstrates that both systems perform poorly when a word is OOV and its dependency label is rarely occurring.\footnote{The opposite observation for the frequency marked with 1 is due to the fact that the total number of candidates belonging to this label class is very small, so the systems are not able to generalize well for this class.} Figure~\ref{fig:UAS_error} illustrates the empirical performance in terms of sentence length to validate the robustness of the proposed system against varied sentence lengths. We notice a downward trend in the performance as the sentence length increases. In particular, both systems degrade drastically in the poetry domain. Following \citeN{kulmizev-etal-2019-deep,mcdonald-nivre-2011-analyzing}, we analyze the performance in terms of dependency length\footnote{The dependency length is defined as the distance between a head-dependent pair when arranged linearly in a sentence.} (Figure~\ref{fig:dependency_length}), distance to root\footnote{The distance of a node to the root of its dependency tree.} (Figure~\ref{fig:distance2root}) and degree of non-projectivity\footnote{The degree of non-projectivity for a head ($x$)/dependent ($y$) pair is defined as the number of words occurring between $x$ and $y$ which are not descendants of $x$ and modify a word outside the $x$--$y$ window.} (Figure~\ref{fig:nonprojectivity}).
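The per-arc statistics used in this error analysis (dependency length, distance to root, and degree of non-projectivity as defined in the footnotes) can all be computed from the head array of a tree; a minimal sketch, assuming 1-indexed heads with 0 denoting the root:

```python
def arc_stats(heads):
    """Per-token statistics for a dependency tree given as a list of
    1-indexed head positions (0 denotes the root).
    Returns (dependency lengths, distances to root, non-projectivity degrees)."""
    n = len(heads)
    # linear distance between each dependent and its head (0 for the root)
    dep_len = [abs((i + 1) - h) if h > 0 else 0 for i, h in enumerate(heads)]

    def depth(i):                      # number of head links up to the root
        d = 0
        while heads[i] != 0:
            i = heads[i] - 1
            d += 1
        return d
    dist_root = [depth(i) for i in range(n)]

    def descends(j, x):                # is token j a descendant of x (or x itself)?
        while j != 0:
            if j == x:
                return True
            j = heads[j - 1]
        return False

    degree = []
    for dep0, h in enumerate(heads):
        if h == 0:
            degree.append(0)
            continue
        dep = dep0 + 1
        lo, hi = min(dep, h), max(dep, h)
        # words strictly inside the arc span that are not descendants of the
        # head and whose own head lies outside the span
        deg = sum(1 for j in range(lo + 1, hi)
                  if not descends(j, h) and not (lo <= heads[j - 1] <= hi))
        degree.append(deg)
    return dep_len, dist_root, degree

# toy non-projective tree: arcs 2->4 and 3->1 cross
dl, dr, deg = arc_stats([0, 4, 1, 1])
```

Here token 2 has non-projectivity degree 1, since token 3 sits inside the arc (2, 4), is not a descendant of 4, and attaches to token 1 outside the span.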
Contrasting the performance of the two systems on the poetry and prose domains, both show declining performance in terms of sentence length, distance to root and degree of non-projectivity. In Figure~\ref{fig:dependency_length}, we observe slightly improved performance for both systems in the prose domain as the dependency length increases. This can be attributed to the ability of graph-based parsers to capture long-range dependencies well. Figure~\ref{fig:length_analysis} only reports the performance in terms of the labelled score; however, we observe a similar trend for the unlabelled score. In summary, we conclude that both systems behave similarly in all aspects of the error analysis, with the proposed system showing consistent improvements. \section{Conclusion and Discussion} We focused on dependency parsing for Sanskrit, a low-resource, morphologically rich language. To tackle the limitations of the existing \textit{engineering-based} approaches, we investigated the efficacy of recently proposed strategies specially tailored for low-resource settings. Our results showed that the proposed ensembled system significantly improves by 2.8/3.9 points (UAS/LAS) over BiAFF. Interestingly, it also surpassed the performance of the state-of-the-art \textit{hybrid} system MG-EBM* by 1.2 points (UAS) absolute gain and showed comparable performance in terms of the LAS metric. We plan to extend this work by investigating ways to make the current system robust on the poetry domain. \begin{acks} We thank Amba Kulkarni for providing the Sanskrit dependency treebank data, Anupama Ryali for the \textit{\'{S}i\'{s}up\={a}lavadha} dataset and the anonymous reviewers for their constructive feedback towards improving this work. The work of the first author is supported by the TCS Fellowship under the Project TCS/EE/2011191P. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section*{Introduction} Identification of the intrinsic properties of magnetic nanostructures is central to the development of applications in a wide range of topics: information storage, biomedicine, permanent magnet development, and many more. The parameter identification techniques are at the heart of large scale material characterisation to quantify the properties of nanoscopic constituents of materials. For example, the optimisation of magnetic granular materials for the current and future hard disk drive technologies, such as heat assisted magnetic recording (HAMR), or the synthesis of magnetic nanoparticles for molecular sensing and detection, imaging, and cancer therapy in biomedicine relies on the possibility of efficient and accurate identification of the physical properties of billions of magnetic nanoparticles, which requires analysis in a high dimensional parameter space and the employment of a statistical approach. In such cases, direct measurements targeting individual particles become inefficient and infeasible. Instead, an indirect approach based on relating theoretical models to macroscopic experimental data and identifying the model parameters from the optimal fit becomes the most viable approach. This inverse problem solving methodology relies on the availability of a realistic model that i) reliably represents the physics of the elementary constituents of a physical system and ii) accurately reproduces the macroscopic measurement data (forward problem), and iii) on understanding the uniqueness properties of the inverse solutions of the model, i.e. whether the identified parameter set is the only set allowing the model to accurately reproduce the measurement data \cite{Tarantola2006, bertero1998introduction, sarvas1987basic, tarantola2005inverse}.
Unfortunately, inverse problems are often ill-posed and entire manifolds of parameters allow the models to reproduce the measurement data, which effectively translates to a significant error in the parameter identification. Such errors can only be reduced by providing new information from independent measurements. As a result, the development of identification techniques for materials characterisation remains a challenging problem. In this work we consider the exemplar problem of identification of the switching field distribution (SFD) in magnetic particulate and granular systems. The SFD carries information about the intrinsic conditions for magnetisation reversal of individual magnetic particles and, for example, is a crucial characteristic determining the quality of high density magnetic media for current and future hard disk technologies, such as those based on bit patterned media or the granular materials considered for HAMR \cite{Yang2015,Weller2014a}. This problem is especially challenging due to the strength and complexity of the interparticle interactions. The recently developed identification schemes to extract the SFD based on the inverse problem solving approach include applications to assemblies of magnetic nanoparticles\cite{Rut}, and the $\Delta H (M,\Delta M)$ method for granular materials relevant in magnetic recording \cite{Liu2008a,Hovorka2010a,Hovorka2012,Pisana2013} \cite{Hauet2014,Tabasum2013,Wang2008a}, which was motivated by the earlier $\Delta H_c$ methodology \cite{Tagawa1991}. Simpler approaches to identify the SFD with varying degrees of consistency were based on the differentiation of a hysteresis loop `de-sheared' to remove the contribution from magneto-static interactions \cite{Pike1999}, methods based on Preisach models \cite{Mayergoyz1991}, and methods based on analysing the transformed first order reversal curves (FORC), i.e.
magnetisation curves generated by reversing the external magnetic field starting from a point on a major hysteresis loop branch (Fig. \ref{fig1}). The FORC methods are equivalent to the classical Preisach modelling if the measurement data display microscopic memory of the states of magnetic particles after the external field excursion (wiping-out property) and minor hysteresis loop congruency \cite{Mayergoyz1985, Stancu2003}. The FORC methods are used broadly as a tool for qualitative, and in some cases quantitative, description of general magnetic characteristics of magnetic systems, such as distributions of magnetic properties, mixed magnetic phases\cite{Roberts2000}, clustering and long-range ferromagnetic states, magnetic characterisation of geological mixtures and minerals, and differences in magnetization reversal mechanisms\cite{Pike2001,Muxworthy2005,Roberts2014,Gilbert2014}. The attractiveness of the FORC method lies in its simplicity and its straightforward application to a wide range of systems displaying hysteresis. The accuracy of determining the SFD quantitatively in various classes of systems is presently under intensive critical discussion \cite{Dobrot??2013,Dobrot??2015}, and quantifying its range of validity, and understanding the microscopic reasons for its breakdown, are of broad interest with implications beyond the exemplar magnetic hysteresis considered here. In the simplest case of an assembly of bistable magnetic particles, the elementary hysteresis loop of a particle is rectangular and can be represented by a hysteron (Fig. \ref{fig1}a). In the absence of inter-particle interactions, when any magnetic correlations are irrelevant, the macroscopic hysteresis loop is simply a superposition of the projections of the magnetic moments of particles onto the field direction, ordered according to the switching events (hysteron thresholds) of individual particles.
Then the SFD can be determined by de-constructing the hysteresis loop into a distribution of hysterons uniquely linked with particle properties. In this case, the FORC method is an inherently accurate technique for its identification. On the other hand, the presence of thermal relaxation and significant inter-particle interactions gives rise to more complex magnetisation reversal mechanisms (Fig. \ref{fig1}b). The emergent magnetic correlations fundamentally transform a macroscopic hysteresis loop and mask the direct information about the intrinsic switching fields of individual particles. The accuracy of the FORC method becomes parameter region-dependent and requires validation against a systematic inverse problem solving framework. The purpose of the present article is to study the validity range of the FORC method against large-scale computational data generated from a fully featured model of hard disk drive (HDD) media, which incorporates details of the statistical nature of inter-granular interactions, intrinsic properties of individual grains, and thermal activation. Of particular importance is the role of magnetic correlations and the consequent departures from simple hysteron-based model predictions. Gilbert et al. \cite{Gilbert2014} have shown that the introduction of nearest-neighbour correlations strongly modifies the FORC diagram. Here we use a fully-featured kinetic Monte Carlo model with short- and long-ranged interactions to create a complete picture of interaction effects, most importantly the balance between exchange and magnetostatic interactions. We proceed with models of increasing complexity to demonstrate first the effects of thermal activation for a non-interacting system. We then proceed to the study of interaction effects using the kinetic Monte-Carlo (kMC) model and a simplified model of correlation effects.
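The thermally activated switching step at the heart of a kinetic Monte-Carlo sweep can be sketched as follows. This is a deliberately reduced illustration, assuming field-aligned uniaxial particles with the simplified Stoner-Wohlfarth barrier $E = KV(1 + sH/H_K)^2$ for state $s=\pm1$ and an Arrhenius rate with attempt frequency $f_0$; all numerical values below (barrier height, $f_0$, field step) are illustrative, not the parameters of the full model:

```python
import math
import random

def switching_prob(kv_over_kt, H, Hk, state, f0, dt):
    """Arrhenius probability that a field-aligned uniaxial particle in
    `state` (+1/-1) switches during the time step dt; the barrier to
    leave the current state is KV(1 + state*H/Hk)^2 in units of kT."""
    barrier = kv_over_kt * (1.0 + state * H / Hk) ** 2
    return 1.0 - math.exp(-f0 * dt * math.exp(-barrier))

def field_sweep(hk_list, fields, kv_over_kt, f0, dt, rng):
    """Sweep the applied field over an ensemble of particles (all starting
    in state +1) and return the magnetisation curve."""
    states = [1] * len(hk_list)
    m_curve = []
    for H in fields:
        for i, Hk in enumerate(hk_list):
            s = states[i]
            if s * H < 0 and abs(H) >= Hk:
                states[i] = -s      # barrier vanished: deterministic switch
            elif rng.random() < switching_prob(kv_over_kt, H, Hk, s, f0, dt):
                states[i] = -s      # thermally activated switch
        m_curve.append(sum(states) / len(states))
    return m_curve

rng = random.Random(0)
hks = [rng.gauss(20000.0, 500.0) for _ in range(200)]   # anisotropy fields (Oe)
fields = [23000.0 - 1000.0 * i for i in range(47)]      # +23 kOe -> -23 kOe
m = field_sweep(hks, fields, kv_over_kt=120.0, f0=1e9, dt=0.025, rng=rng)
```

Because the barrier shrinks as the reverse field grows, thermal activation lets particles switch well below their intrinsic threshold, producing the loop rounding discussed in the text; the slower the sweep (larger dt per field step), the stronger the effect.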
By evaluating the inter-granular magnetic correlation function, we demonstrate the direct relationship between the emergence of magnetic correlations and the failure of the FORC methodology to determine the SFD, and establish the criteria for the validity of the FORC method as a quantitative approach for accurate identification of the SFD in HDD magnetic media. \section*{Results} We apply the FORC method to large-scale computational data generated from a fully featured model of HDD media which incorporates details of the statistical nature of inter-granular interactions, intrinsic properties of individual grains, and thermal activation (Methods \ref{methods_fullmodel}). We use a kinetic Monte-Carlo (kMC) model (Methods \ref{methods_kmc}) to computationally reproduce the magnetisation behaviour of the HDD media, and the FORC method as the technique for identification of the underlying SFD. Here we consider realistic media with elongated grains with an aspect ratio ($h/d$) of 1.17, uniaxial anisotropy ($K$) with a mean value of $7 \cdot 10^6$ erg/cm$^3$ and a 3 degree dispersion of the anisotropy easy axis around the direction perpendicular to the grain plane. The grain height ($h$) is 10 nm, the mean grain size ($d$) is 8.5 nm and the saturation magnetization ($M_s$) is $700$ emu/cm$^3$. The calculations assume an external field rate of $4 \cdot 10^4$ Oe/s at room temperature (300K). In all cases studied, the intrinsic SFD can be easily calculated in the model by switching off all interactions (Methods \ref{methods_nonint}) and histogramming the switching fields of individual particles along a hysteresis loop. \subsection*{From reversal curve to FORC diagram and to SFD} The FORC method is used as a quantitative tool to investigate the SFD and the interaction field distribution in granular materials. It is typically applied to measurements of macroscopic hysteresis loops. The application of the method contains two main steps.
The first step requires the measurement of the first order reversal curves (FORC) and their transformation to the so-called FORC diagram (Fig. \ref{fig1}). In the second step, the FORC diagram is processed such that the undesirable contribution of the inter-particle interactions is removed, which then allows access to information about the intrinsic SFD. \begin{figure}[t!] \begin{center} \includegraphics[angle = 0,width = 11cm]{Figure1_vs2.pdf} \caption{ The ideal single particle hysteresis loop has a rectangular shape corresponding to the hysteron model (a), where the only change in magnetisation is due to switching events. The hysteresis loop has a more complex shape (b) in the presence of thermal effects and reversal components as included in our model. Example of hysteresis loops for an HDD media system (c) and the corresponding first order reversal curves in the $H_a H_b$ plane (d) or the $H_c H_u$ plane (e) for a non-interacting system of 10000 elongated grains (1.17 aspect ratio and D=8.5nm) simulated at 300K and a field rate of $4 \cdot 10^4$ Oe/s. The system parameters are: $M_s= 700$ $emu/cm^3$, K=$7 \cdot 10^6 erg/cm^3$ with a 3 degree dispersion of the easy axis. } \label{fig1} \end{center} \end{figure} \emph{FORC data, FORC diagram, and the SFD.} Fig. \ref{fig1}(c) illustrates the measurement protocol used to generate the FORC data. The starting point is the saturation of the sample by applying a large positive applied field. The field is then decreased towards the reversal field, $H_b$, when the field direction is reversed and increased from $H_b$ back to positive saturation. This process generates a FORC attached to the major hysteresis loop at the reversal point $H_b$ (blue line in Fig. \ref{fig1}(c)). The magnetisation point at an applied field $H_a > H_b$ along this FORC, denoted as $M(H_a ,H_b)$, is internal to the major hysteresis loop. As illustrated in Fig.
\ref{fig1}(c), at any value of $H_a$ in the hysteresis region, there is an entire family of such internal magnetisation points $M(H_a ,H_b)$ distinguished by the reversal field $H_b$ of their corresponding FORCs. The FORC data are then analysed by computing the numerical second-order derivative of the functional dependence $M(H_a, H_b)$ with respect to the applied fields $H_a$ and $H_b$: \begin{align} \rho_{ab}(H_a, H_b) = -\frac{1}{M_s}\frac{\partial^2 M(H_a, H_b)} {\partial H_a\partial H_b} \label{forcab} \end{align} where $M_s$ is the saturation magnetisation of the material. It is next conventional to transform $\rho_{ab}$ by introducing new variables $H_c$ and $H_u$ such that $H_a(H_c, H_u) = H_u + H_c$ and $H_b(H_c, H_u) = H_u - H_c$, which leads to the FORC distribution represented as: \begin{align} \rho_{ab}(H_a, H_b) = \rho_{ab}(H_a(H_c, H_u), H_b(H_c, H_u)) \equiv \rho(H_c, H_u) \label{forcuc} \end{align} from which the SFD can be obtained by a straightforward integration over the variable $H_u$:\cite{Zimanyi2006} \begin{align} \rho_{SFD}(H_c)=\int_{-\infty}^\infty \rho(H_c, H_u)\,dH_u \label{sfd} \end{align} The interpretation of these equations is as follows. The distribution $\rho_{ab}$ in Eq. \eqref{forcab} is defined in terms of the differentiation of the magnetisation $M(H_a, H_b)$ attained through general applied fields $H_a$, $H_b$ along the hysteresis loop (Fig. \ref{fig1}(c)), and it is not immediately obvious how it relates to microscopic material properties such as the distribution of intrinsic switching field thresholds of magnetic grains $\rho_{SFD}$. The key to establishing this link is the notion of a magnetic particle having an elementary rectangular hysteresis loop (RHL) as shown in Fig. \ref{fig1}(a), with the up and down switching thresholds corresponding to the fields $H_a$ and $H_b$. Then, Eq.
\eqref{forcab} can be interpreted as measuring the fraction of magnetic grains with the switching thresholds $H_a > H_b$ adding up to the cumulative magnetisation $M(H_a, H_b)$ at the field $H_a$ after the field excursion from $H_b$. The transformed variables $H_c = (H_a - H_b) / 2$ and $H_u = (H_a + H_b) / 2$ then represent the coercive and the bias fields of such RHLs (Fig. \ref{fig1}(a)), and the FORC distribution $\rho$ defined in Eq. \eqref{forcuc} is the joint probability distribution of $H_c$ and $H_u$. Consequently, the SFD defined by Eq. \eqref{sfd} is the distribution of the coercive fields of particles, i.e. their intrinsic switching thresholds. In the ideal system of isolated magnetic particles represented by RHLs, such as the non-thermal system of non-interacting Stoner-Wohlfarth particles with the anisotropy axes aligned along the field direction, the RHLs have symmetric up and down switching thresholds $\pm H_c$ and, due to the absence of interactions, $H_u=0$ for all particles. The macroscopic hysteresis loop is a superposition of the magnetic states of all particles and, due to the rectangular shape of the RHLs, any magnetisation change along the hysteresis loop can occur only at applied fields corresponding to the particle switching thresholds. The differentiation in Eq. \eqref{forcab} filters out the contribution from the `flat' parts of the RHLs and, as a residual, the distribution $\rho_{ab}$ carries an accurate representation of the switching thresholds of the particles. In this case, the transformed FORC distribution in Eq. \eqref{forcuc} can be shown to be $\rho(H_c, H_u)=\rho^*(H_c)\delta(H_u)$, where $\delta(H_u)$ is the Dirac delta-function and $\rho^*(H_c)$ the statistical distribution of the coercive fields of the RHLs of particles, which according to Eq. \eqref{sfd} gives the SFD directly as $\rho_{SFD}=\rho^*(H_c)$.
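Eqs. \eqref{forcab}--\eqref{sfd} can be applied numerically to a synthetic ensemble of ideal hysterons; the sketch below builds $M(H_a, H_b)$ on a regular grid, takes the mixed second derivative, and integrates over $H_u$ by binning the mass of each grid cell in $H_c$. The grid resolution and the two-threshold toy ensemble are illustrative choices, not the simulation data of the paper:

```python
import numpy as np

def hysteron_M(Ha, Hb, coercivities, weights):
    """M(Ha, Hb) for an ensemble of ideal symmetric hysterons
    (thresholds +/-c, Hu = 0), saturated at +1 before each FORC."""
    HA, HB = np.meshgrid(Ha, Hb)
    M = np.zeros_like(HA)
    for c, w in zip(coercivities, weights):
        down = (HB <= -c) & (HA < c)   # switched down at Hb, not yet back up
        M += w * np.where(down, -1.0, 1.0)
    return M

def forc_distribution(M, Ha, Hb, Ms=1.0):
    """rho_ab = -(1/Ms) d^2 M / (dHa dHb) on a regular grid;
    M[i, j] corresponds to reversal field Hb[i] and applied field Ha[j]."""
    dM_dHa = np.gradient(M, Ha, axis=1)
    return -np.gradient(dM_dHa, Hb, axis=0) / Ms

def sfd_from_forc(rho, Ha, Hb, bins):
    """Integrate rho over Hu by accumulating the mass of each grid cell
    into bins of Hc = (Ha - Hb)/2; returns (bin centers, density)."""
    HA, HB = np.meshgrid(Ha, Hb)
    Hc = (HA - HB) / 2.0
    mass = rho * (Ha[1] - Ha[0]) * (Hb[1] - Hb[0])
    hist, edges = np.histogram(Hc.ravel(), bins=bins, weights=mass.ravel())
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / np.diff(edges)

H = np.linspace(-3.0, 3.0, 61)
M = hysteron_M(H, H, coercivities=[1.0, 2.0], weights=[0.5, 0.5])
rho = forc_distribution(M, H, H)
centers, sfd = sfd_from_forc(rho, H, H, bins=np.linspace(0.0, 3.0, 31))
```

For this ideal ensemble the recovered SFD concentrates at the intrinsic thresholds $H_c = 1$ and $H_c = 2$, illustrating why the method is exact in the non-interacting RHL limit.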
Historically, an elementary RHL of a particle has been referred to as a hysteron in Preisach modelling,\cite{Mayergoyz1985,Dobrot??2015} which has served as a basis for developing the FORC method.\cite{Pike1999} The essence of Preisach models is to represent the macroscopic hysteresis loops of materials as a superposition of RHLs with the RHL threshold distribution, termed the Preisach distribution, defined identically to the $\rho_{ab}$ in Eq. \eqref{forcab} (Methods \ref{methods_preisach}). The uniqueness of the identification of the Preisach distribution has been shown to be guaranteed if the macroscopic magnetisation data satisfy the wiping-out and congruency properties \cite{Mayergoyz1985, Stancu2003}. Consequently, if the wiping-out and congruency properties are satisfied, the FORC distribution $\rho_{ab}$ is a valid and unique Preisach distribution. Unfortunately, the straightforward interpretation of Eqs. \eqref{forcab}-\eqref{sfd} as given above does not apply in realistic cases when the particles are represented by non-ideal RHLs, the inter-particle interactions are relevant, or in the presence of thermal fluctuations. Moreover, general systems with hysteresis do not always display the wiping-out and congruency properties, and the accuracy and uniqueness of the identification of the SFD from the FORC distributions needs to be established with respect to the relevant physical picture and by independent measurement methodologies. Such cases are analysed in detail below. \emph{Effects of imperfect RHLs on the FORC diagram}. To assess the effects of deviations of the elementary hysteresis loops of particles from the RHLs on the accuracy of determining the SFD, we applied the kMC model to study the hysteresis loop behaviour of a reduced system of isolated magnetic particles represented as Stoner-Wohlfarth particles (Methods \ref{methods_nonint} and \ref{methods_kmc}).
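The hysteron superposition underlying the Preisach picture above, including the wiping-out memory property, can be sketched in a few lines (the two-hysteron ensemble and field histories are toy illustrations):

```python
def hysteron_state(H_history, alpha, beta, s0=1):
    """Final state of a rectangular-loop hysteron (up threshold alpha,
    down threshold beta <= alpha) after an applied-field history."""
    s = s0
    for H in H_history:
        if H >= alpha:
            s = 1
        elif H <= beta:
            s = -1
    return s

def preisach_M(H_history, hysterons):
    """Macroscopic magnetisation as a weighted superposition of hysterons,
    each given as (alpha, beta, weight)."""
    return sum(w * hysteron_state(H_history, a, b) for a, b, w in hysterons)

ensemble = [(1.0, -1.0, 0.5), (2.0, -2.0, 0.5)]
assert preisach_M([3.0], ensemble) == 1.0             # positive saturation
assert preisach_M([3.0, -1.5, 0.0], ensemble) == 0.0  # only the softer hysteron flips
# wiping-out: an intermediate smaller excursion leaves no trace in the state
assert preisach_M([3.0, -1.5, 0.5, -1.5, 0.0], ensemble) == 0.0
```

The last assertion is the wiping-out property in miniature: the excursion to 0.5 crosses no thresholds, so the final magnetisation is identical to that after the history [3.0, -1.5, 0.0].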
The intrinsic magnetic properties of the particles in the model were set to represent a typical magnetic recording medium (Methods \ref{methods_parameters}), including a 3$^\circ$ misalignment of the particle anisotropy easy axes around the applied field direction, and the driving field rate set to $10^4$ Oe/s of a typical experimental MOKE setup, which determined the extent of thermal activation. The inter-particle interactions were turned off. Fig. \ref{fig1}(b) shows an example of the computed hysteresis loop of an ensemble of isolated particles, which clearly deviates from RHL behaviour (Fig. \ref{fig1}(a)). The rounding features are typical of a loop with a strong component from thermal activation. The computed macroscopic hysteresis loop of a system of 10000 non-interacting particles with representative FORCs is shown in Fig. \ref{fig1}(c). The FORC diagrams $\rho_{ab}$ and $\rho$, obtained from this loop by applying Eqs. \eqref{forcab} and \eqref{forcuc}, are shown in Figs. \ref{fig1}(d) and (e). Note that, given the nature of their transformation, the FORC distributions $\rho_{ab}$ and $\rho$ are related by the 45$^\circ$ rotation of the $(H_a, H_b)$ coordinate plane. Fig. \ref{fig1}(e) shows that the FORC distribution $\rho$ is no longer a straight line $\rho(H_c, H_u)=\rho^*(H_c)\delta(H_u)$ as in the case of a system of ideal non-interacting particles with RHLs discussed above, and instead has a significant $H_u$ component even though the inter-particle interactions are absent. This is due to the particle hysteresis loop rounding seen in Fig. \ref{fig1}(b), when the change of magnetisation along the macroscopic loop no longer occurs only at the switching thresholds of particles, as in the ideal RHL case, but in addition includes a smooth nonlinear component from the rounding effect. The magnetisation data transformation through Eq.
\eqref{forcab} then convolves the residual of the differentiation of this smooth component with the actual distribution of the switching thresholds of particles, which in the FORC diagram becomes manifested as a `fictitious' $H_u$ field distribution (Fig. \ref{fig1}(e)). This poses a difficulty in the interpretation of the FORC diagram, which appears to suggest the presence of interactions in the system of non-interacting particles. Nevertheless, we find that evaluating the underlying SFD in Eq. \eqref{sfd} based on this FORC diagram actually yields the accurate SFD, as a result of the reflection symmetry of the FORC diagram around the $H_c$ axis, when the $H_u$ component of the FORC distribution simply integrates to unity after factorisation of $\rho(H_c, H_u)$ in Eq. \eqref{sfd}. The slightly non-symmetric peak seen in Fig. \ref{fig1}(e) is often observed experimentally in systems with thermal activation. \begin{figure}[t!] \begin{center} \includegraphics[angle = 0,width = 0.9\textwidth]{Figure2_version2.pdf} \caption{ FORC diagram for the interacting case with only magnetostatic interactions calculated using the mean-field approach (a) and grain-grain interactions (c). The corresponding FORC diagrams after the mean interaction field is removed are given in (b) and (d) for the mean-field and grain-grain interaction models, respectively. The radial magnetization correlation function for the grain-grain interaction model is given as the inset in (c).} \label{fig2} \end{center} \end{figure} \emph{Interactions: Mean-field correction of the FORC diagram.} The effects of inter-particle interactions on the FORC diagram are illustrated in Fig. \ref{fig2}, which shows that the FORC diagram becomes considerably modified by interactions with respect to the non-interacting case in Fig. \ref{fig1}(e). Fig. \ref{fig2}(a) shows the FORC diagram of the model based on the mean-field approximation of the full granular model (Methods \ref{methods_meanfield}).
The mean-field interactions between grains have random strengths drawn from a Gaussian distribution, obtained by histogramming the interaction fields of a full granular system in saturation used as a reference, weighted by the overall magnetisation $M$ of the granular system (Methods \ref{methods_meanfield}). The observed FORC diagram has the shape of a rotated `V' or `L', as expected \cite{Gilbert2014}. To apply the FORC method to identify the underlying SFD, it is first necessary to extract the mean-field interaction and recover the non-interacting particle FORC diagram. This can be achieved by introducing a correction factor $\alpha$, variation of which allows one to symmetrise the FORC diagram, equivalently to subtracting the average interaction field acting on the system\cite{Papusoi2011}. Specifically, varying the mean-field correction factor $\alpha$ transforms the field axes $H_a$ and $H_b$ in the raw FORC diagram (Fig. \ref{fig2}(a)) to new axes $H_a\rightarrow H_a-\alpha M(H_a)$ and $H_b\rightarrow H_b - \alpha M(H_b)$, until obtaining the optimal value $\alpha\equiv\alpha_o$ at which the new FORC diagram $\rho$ becomes symmetric around the $H_c$ axis and any possible negative regions $\rho < 0$ that may result from an over- or under-estimated mean-field correction are eliminated. This procedure is equivalent to the hysteresis loop `de-shearing' procedure typically applied to extract the effects of demagnetising fields from experimental hysteresis loops. In the ideal case, the optimal value of the correction factor, $\alpha_o$, corresponds to the mean-field interaction strength $\langle H_{inter}\rangle$ of the mean-field granular model (Methods \ref{methods_meanfield}). After applying the mean-field correction, the resulting FORC diagram, shown in Fig. \ref{fig2}(b), resembles that of the non-interacting case in Fig. \ref{fig1}(e), which allows the SFD to be calculated by applying Eq. \eqref{sfd}.
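The mean-field correction can be illustrated on an analytically tractable toy: symmetric hysterons in a mean interaction field, $H_{\mathrm{eff}} = H + \alpha_{\mathrm{int}} M$, with switching fields recorded along the descending branch and along one FORC, and the correction factor scanned until the mean corrected bias field $H_u$ vanishes. In this sign convention the optimum is $\alpha_o = -\alpha_{\mathrm{int}}$. All numbers are illustrative assumptions, and this is a reduced single-criterion sketch, not the full diagram-symmetrisation procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400
Hc = np.sort(rng.normal(2.0, 0.3, N))   # intrinsic thresholds (arbitrary units)
a_int = -0.5                            # mean-field constant, Heff = H + a_int*M

# Descending branch from +saturation: the k-th hysteron (sorted by Hc)
# switches down when Heff = -Hc_k, i.e. at H = -Hc_k - a_int*M, where
# M = 1 - 2k/N is the magnetisation just before its switch.
k = np.arange(N)
M_down = 1.0 - 2.0 * k / N
Hb_obs = -Hc - a_int * M_down

# One FORC: reverse after k0 switches (M_rev = 1 - 2*k0/N), then sweep up;
# the j-th switched hysteron flips back when Heff = +Hc_j.
k0 = N // 2
M_rev = 1.0 - 2.0 * k0 / N
j = np.arange(k0)
M_up = M_rev + 2.0 * j / N              # magnetisation just before each up-switch
Ha_obs = Hc[:k0] - a_int * M_up

def mean_corrected_Hu(alpha):
    """Mean bias field after the correction H -> H - alpha*M(H)."""
    Ha_c = Ha_obs - alpha * M_up
    Hb_c = Hb_obs[:k0] - alpha * M_down[:k0]
    return np.mean((Ha_c + Hb_c) / 2.0)

alphas = np.linspace(-1.0, 1.0, 201)
alpha_o = alphas[np.argmin(np.abs([mean_corrected_Hu(a) for a in alphas]))]
Hc_rec = (Ha_obs - alpha_o * M_up - (Hb_obs[:k0] - alpha_o * M_down[:k0])) / 2.0
```

With the demagnetising-type mean field ($\alpha_{\mathrm{int}} < 0$) the ordering of switching events is stable, the uncorrected FORC carries a spurious mean bias $\langle H_u\rangle = -\alpha_{\mathrm{int}}(1 + M_{\mathrm{rev}})/2$, and the scan recovers both $\alpha_o$ and the intrinsic thresholds exactly.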
The values found are consistent with the non-interacting cases within a statistical uncertainty of 5\%. \emph{Interactions: magnetic clusters.} The mean-field interaction is expected to be an oversimplification as it does not account for the inter-granular magnetic correlations typically present in real systems. The presence of such correlations leads to the emergence of magnetic clusters, which influence the accuracy of the FORC method. To begin investigating the magnetic clustering effect, we first consider the mean-field model discussed above reduced to an ensemble of disconnected regions of $N_g$ grains. In this `toy model' the regions act as non-interacting clusters of $N_g$ grains interacting via equivalent mean-field-like interactions dependent on the average magnetisation within each cluster (Methods \ref{methods_clusters}). In this model, a switching grain affects only the magnetisation of its own cluster, while the magnetisation of all other clusters in the ensemble remains unaffected, and the hysteresis loop is a superposition of magnetisation jumps from individual clusters. Thus, when the cluster size $N_g$ is large, approaching the system size, the behaviour recovers that of the full mean-field system discussed above. On the other hand, as $N_g$ decreases the behaviour moves away from being mean-field-like and the macroscopic loop results from the combined contribution of an increasing number of elementary hysteresis loops of individual clusters available in the system. These elementary loops of individual clusters have shapes deviating from RHLs, which is expected to reduce the accuracy of the FORC method. Fig. \ref{fig22} shows the analysis of five ensembles of uniform clusters with $N_g=4$, 5, 10, 100, and 500. Applying the cluster model (Methods \ref{methods_clusters}) combined with the kinetic Monte-Carlo solver (Methods \ref{methods_kmc}), we first computed the macroscopic hysteresis loops with FORCs for every ensemble.
Then we transformed the FORC data into the underlying FORC diagram by applying Eqs. \eqref{forcab} and \eqref{forcuc}, applied the mean-field correction $\alpha_o$ to remove the interactions as discussed above (a standard procedure in the practical FORC method), and computed the SFD from the corrected FORC diagram using Eq. \eqref{sfd}. As expected, the results of extracting the SFD are accurate when $N_g$ approaches the full system size, while the accuracy of the FORC method decreases with decreasing cluster size $N_g$. When $N_g$ is small, the clusters contain only small numbers of grains relative to the full system size, and there are many clusters contributing to the overall macroscopic hysteresis loop. There are two main sources of error expected to contribute to the loss of accuracy of the FORC method: 1) the mean-field correction is no longer as accurate as in the mean-field model of a full granular system, and 2) the distorted RHL shapes of the elementary hysteresis loops of individual clusters, due to correlated behaviour inside each cluster. Examples of the obtained raw FORC diagrams prior to applying the mean-field correction are shown in the insets i-iv in Fig. \ref{fig22}. The mean-field-like nature of the interaction in the cluster model (Methods \ref{methods_clusters}) results in an equivalent effective shift of the switching thresholds of the grains in each cluster, leading to the observed segmentation of the `V'-like shape of the FORC diagram into distinct regions along each branch. The number of these regions per branch corresponds to the number of grains per cluster, the `V'-shaped arrangement of the segments reflects the interaction-induced symmetry breaking of the up and down intrinsic switching thresholds of grains, and the separation between the segments corresponds roughly to the magnitude of the mean interaction field $\langle H_{inter}\rangle$ in the model.
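Numerically, the step from FORC data to the FORC diagram in Eq. \eqref{forcab} is a mixed second derivative, which can be sketched with a central finite-difference stencil. The surface `M_test` below is a hypothetical smooth stand-in with a known mixed derivative, used only to check the stencil; it is not data from our model.

```python
# Sketch of Eq. (forcab): the FORC distribution as the mixed second derivative
# rho = -(1/Ms) d^2 M / (dHa dHb), evaluated by central finite differences.
# The (Hc, Hu) coordinates then follow from the rotation
# Hc = (Ha - Hb)/2, Hu = (Ha + Hb)/2, as in Eq. (forcuc).
import math

def M_test(Ha, Hb):                  # hypothetical smooth FORC surface
    return math.tanh(Ha) * math.exp(Hb)

def rho(M, Ha, Hb, dH=1e-4, Ms=1.0):
    # four-point stencil for the mixed partial derivative
    mixed = (M(Ha + dH, Hb + dH) - M(Ha + dH, Hb - dH)
             - M(Ha - dH, Hb + dH) + M(Ha - dH, Hb - dH)) / (4 * dH * dH)
    return -mixed / Ms

# Analytic mixed derivative of M_test is sech^2(Ha)*exp(Hb).
exact = -(1.0 / math.cosh(0.5) ** 2) * math.exp(-0.2)
assert abs(rho(M_test, 0.5, -0.2) - exact) < 1e-6
```

On experimental data the derivative is usually taken after local polynomial smoothing, since the stencil amplifies measurement noise.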
The interpretation of this FORC diagram is consistent with recent work in which analogous segmentation effects were studied using a different model with nearest-neighbour grain interactions \cite{Gilbert2014}. Increasing $N_g$ increases the density of segments in the FORC diagram, gradually reproducing the FORC diagram of the mean-field model in Fig. \ref{fig2}(a). \begin{figure}[h!] \begin{center} \includegraphics[angle = 0,width = 0.7\textwidth,scale=0.4]{Toy_model_version4.pdf} \caption{ The width of the SFD ($\sigma_{SFD}$) as a function of cluster size for the toy model (red). The cluster size is inversely proportional to the degree of correlation in the system. The larger the cluster size, the closer the model is to a mean-field-like model, i.e. a completely uncorrelated system for which the FORC method can be applied successfully. With increasing correlation, the FORC method underestimates $\sigma_{SFD}$. The result from the HDD model is also included (blue). Examples of FORC diagrams for different cluster sizes are illustrated in the insets i-iv. } \label{fig22} \end{center} \end{figure} \emph{Applicability of the FORC method to a full granular model of recording media.} To investigate the accuracy of the FORC method in determining the SFD in realistic magnetic recording materials, we use the full granular kinetic Monte-Carlo model with exchange and magnetostatic interactions (Methods \ref{methods_fullmodel}) to simulate the underlying hysteresis loops and FORCs. Such general interactions introduce magnetic correlations between grains, which lead to correlated behaviour with magnetic grains switching in unison in clusters of size equal to the characteristic correlation length. This leads to magnetisation jumps (Barkhausen noise) along the hysteresis loop, analogous to the case of the cluster model discussed above. Fig. \ref{fig2}(c) shows the corresponding FORC diagram.
The inset in the figure shows the radial correlation function, suggesting the presence of significant short-range grain-grain correlations in a typical recording medium, which are absent in the non-interacting and mean-field granular systems. To remove the contribution from interactions, we first subtract the mean-field correction after finding the optimal $\alpha_o$ as discussed above, as is typically done in practical applications of the FORC method. The corrected FORC diagram shown in Fig. \ref{fig2}(d) deviates from the non-interacting case shown in Fig. \ref{fig1}(e), because the mean-field interaction misrepresents the full exchange and magnetostatic interactions. Consequently, applying Eq. \eqref{sfd} we find that the FORC method underestimates $\sigma_{SFD}$ by as much as 60\%. Thus, the presence of significant magnetic correlations results in the loss of accuracy of the FORC method. The question of main interest is to understand the relationship between the extent of correlations and the accuracy of the SFD determined by the FORC method. To study this issue in simulations, we systematically varied the strength of exchange and magnetostatic interactions, in each case computing the underlying radial pair correlation function between the grains (Methods \ref{methods_correlations}), and evaluated the reference SFD directly by histogramming the intrinsic field thresholds of grains during switching for comparison with the SFD obtained by the FORC method through Eqs. \eqref{forcab}-\eqref{sfd}. Fig. \ref{fig_Diagram}(a) shows the dependence of the maximum value of the correlation function on the strength of exchange and magnetostatic fields. Representative FORC diagrams after applying the mean-field correction are shown in the insets (i)-(v).
Magnetic correlations increase as the strength of one interaction type increases relative to the other, while they remain negligible in the weakly interacting case or in the interaction-compensating region, where the total magnitudes of the exchange and magnetostatic interactions are similar. The contour lines quantify the correlation strength. Fig. \ref{fig_Diagram}(b) shows the corresponding relative accuracy of the SFD determined by the FORC method, measured relative to the SFD determined directly from the kMC model. The comparison of Figs. \ref{fig_Diagram}(a) and (b) reveals close agreement between the correlation strength and the accuracy of the FORC method for determining the SFD. Errors can also be attributed to the fact that the model is thermal and the RHLs are not perfect, or to the effects of the slight misalignment of the anisotropy axes of grains combined with the interactions. However, as shown in Fig. \ref{fig1}(e) for the thermal effects, these factors are relatively small and the largest discrepancy is caused by the interactions, specifically by the interaction-induced spatial correlations. The accuracy of the FORC method is highest in the weakly correlated interaction regions. Nevertheless, depending on the required accuracy of determination of the SFD, Fig. \ref{fig_Diagram}(b) indicates the range of parameter space in which this can be achieved. If a deviation of 10\% from the expected value of $\sigma_{SFD}$ is allowed, the FORC method is limited to very small interaction fields (up to 1200 Oe). Finally, we map the deviation of the SFD from FORC and combine the results with the magnetization correlation data to draw a validity diagram for using FORC as a quantitative tool. \begin{figure}[t!]
\begin{center} \includegraphics[angle = 0,width = 0.7\textwidth]{Figure4_version3.pdf} \caption{ Plots of correlation and error from the FORC calculations; (a) Correlation diagram: the magnetization correlation is calculated at coercivity and the maximum correlation is extracted. As the exchange and magnetostatic interactions increase, the coupling between grains also increases, leading to large correlation values. The values on the diagonal are minimal because the positive and negative contributions from the exchange and magnetostatic interactions compensate overall. (b) Validity diagram: diagram showing the deviation of $\sigma_{SFD}$ from the FORC method in comparison with the expected value. The contour lines for different correlations in (a) are used in (b) to guide the eye. (insets i-v) Examples of FORC diagrams for systems with only exchange interaction (ii: 500 Oe and i: 1125 Oe) and with only magnetostatic interaction (iv: 920 Oe, v: 1600 Oe). The non-interacting FORC diagram is illustrated for comparison in inset iii. } \label{fig_Diagram} \end{center} \end{figure} \section*{Discussion} Our study presents an analysis of the accuracy and the different modes of failure of the FORC method, using as a benchmark a succession of computational models of increasing complexity, starting from a system of non-interacting particles and progressing towards a realistic full model of granular media in magnetic recording, which includes exchange and magnetostatic interactions and various relevant sources of material disorder. In terms of the analysis, similarly to Pike et al. \cite{Pike1999} we make the distinction between the raw FORC data, the FORC diagram, and the usual interpretation of the FORC data based on the Preisach model. Using model calculations we show that, while the FORC diagram in principle contains information about the SFD and the interactions, application of the RHL interpretation does not reliably deconvolve the SFD and interaction effects.
This is attributed to the spatial magnetization correlations, which are an important feature of many materials, including magnetic recording media, and which are not included in the RHL approach. We reveal that the applicability of the FORC method for the quantitative analysis of the SFD is limited to the parameter range where inter-particle spatial correlations are insignificant, i.e. when exchange and magnetostatic interactions are weak or when they compensate. The accuracy of the FORC method decreases in the presence of significant correlations resulting from correlated switching, with multiple grains reversing their magnetic state in unison at a given field threshold. These correlated grains behave as a single entity, thus hiding any information about the intrinsic switching thresholds of individual particles within the correlated reversal. The information cannot be recovered by differentiating the first order magnetisation reversal curves in the way of the FORC method, simply because Eq. \eqref{forcab} no longer provides access to the switching thresholds of the individual RHLs of particles but instead to the fields corresponding to the magnetisation jumps along the first order reversal curves, which are equivalent to the switching thresholds of the correlated particle clusters as single entities and not of the individual particles in the clusters. Moreover, general hysteresis loops do not satisfy the wiping-out and congruency properties. The FORC diagrams then cannot be interpreted as Preisach distributions and no longer guarantee a unique SFD \cite{Dobrot??2015, Mayergoyz1985, Pike1999, Dobrot??2013}, which necessitates careful analysis to establish their physical relevance. Recovering the intrinsic switching thresholds of individual particles from the first order reversal curve data then requires further deconvolution based on refined models capable of accounting for the detailed structure of inter-particle interactions.
Our work applies such fine-scale models and, based on the evaluation of microscopic correlation functions, establishes quantitatively the range of validity of the FORC method for determining the SFD with relevance to magnetic recording media (Fig. \ref{fig_Diagram}). Generally speaking, identifying the SFD in the material parameter range beyond the applicability of the FORC method requires inverse-problem-solving techniques based on physically realistic models, which allow reproducing the relevant correlated switching of particles. Moreover, besides identifying accurate models suitable for interpreting the experimental data, such methods also require establishing the uniqueness properties of the identified solutions. We have implemented a direct approach employing optimisation techniques based on the grid-search method \cite{Hovorka2009} to fit the full recording model (Methods \ref{methods_fullmodel}) to the computed hysteresis loop data, and uniquely recovered the expected SFD in the entire parameter range. Thus, the most reliable, albeit computationally expensive, approach seems to be a direct fit to the experimental FORC data using a microscopic approach, including the detailed calculation of the interactions such as presented here for the specific example of perpendicular recording media. \section*{Methods} \subsection{Full interacting model of recording media}\label{methods_fullmodel} The system consists of $N$ Stoner-Wohlfarth grains, where the volume ($V$) and geometry of the grains are generated by using a Voronoi construction.
The energy of a system of $N$ grains is: \begin{equation}\label{energy} E = \sum_iK_i V_i(\hat k_i\times\hat m_i)^2 - \sum_{i} M_sV_i\hat m_i\cdot\vec {H}_{ap} - \frac{1}{2}\sum_{nn\,\,ij}E^{ij}_{exch} - \frac{1}{2}\sum_{i\ne j} E^{ij}_{mag} \end{equation} where the first term is the uniaxial anisotropy energy with $\vec K_i = K_i\hat k_i$ being the uniaxial anisotropy vector and $V_i$ the volume of a particle $i$, $M_s$ the saturation magnetisation, and $\hat m_i = \vec m_i/M_s$ the particle moment normalised to unity. The values of $K_i$, $\hat k_i$, and $V_i$ are drawn from random distributions relevant to modern granular magnetic recording materials, as described below. The second term is the Zeeman term describing the interaction of grains with the applied field $\vec H_{ap}$. The third term in Eq. \eqref{energy} describes the exchange interaction between nearest-neighbour grains. The exchange interaction in granular materials for magnetic recording depends on the extent of the grain boundary and is of randomised character, which can be expressed as $E^{ij}_{exch} = M_sV_i\hat m_i\cdot H^{ij}_{exch}$ with the locally varying exchange field $H^{ij}_{exch}$:\cite{Peng2011} \begin{equation} H^{ij}_{exch}=H_{exch} \left( \frac{J_{ij}}{\langle J_{ij}\rangle} \right) \left( \frac{L_{ij}}{\langle L_{ij}\rangle} \right) \left( \frac{\langle A_{i}\rangle}{A_{i}} \right) \label{exch} \end{equation} where $H_{exch}$ is the mean strength of the exchange interaction field, $J_{ij}$ is the fractional exchange constant between the adjacent grains $i$ and $j$ with $L_{ij}$ being the length of the connecting boundary, $A_i$ is the area of the grain $i$, and $\langle \cdot\rangle$ denotes averages over all pairs of grains. The last term in Eq. \eqref{energy} represents the magneto-static interaction between the grains and is represented as $E_{mag}^{ij} = M_sV_i\hat m_i\cdot H_{mag}^{ij}$.
The contribution to the magneto-static interaction field $H_{mag}^{ij}$ is computed by a direct integration of the magneto-static surface charge \cite{Newell1993}. The evaluation of $H_{mag}^{ij}$ by full integration over the surface charge accounts for the correction resulting from the dipolar approximation over-estimating the magneto-static interaction in the proximity of a grain.\cite{Liu2009a} Both exchange and magnetostatic interactions in Eq. \eqref{energy} depend on the size and shape of grains, and on the inter-granular distance. \subsection{Approximations of the full model} Various levels of reduction of the full model can be introduced as follows. \subsubsection{Non-interacting model approximation of recording media}\label{methods_nonint} In the non-interacting model, the definition of the system energy reduces from Eq. \eqref{energy} to: \begin{equation}\label{energy0int} E = \sum_iK_i V_i(\hat k_i\times\hat m_i)^2 - \sum_{i} M_sV_i\hat m_i\cdot\vec {H}_{ap} \end{equation} The kinetic Monte-Carlo modelling of this system allows one to study thermal relaxation in an ensemble of non-interacting Stoner-Wohlfarth particles, and may serve as a reference for gauging the effects of interactions in the full interacting model. \subsubsection{Mean-field model of recording media}\label{methods_meanfield} In the mean-field model the interactions between magnetic grains are introduced in a uniform way. The energy expression given in Eq. \eqref{energy} reduces to: \begin{equation}\label{energymf} E = \sum_iK_i V_i(\hat k_i\times\hat m_i)^2 - \sum_{i} M_sV_i\hat m_i\cdot\vec {H}_{ap} - \sum_i M_sV_iH^i_{inter} \langle\hat m_k\rangle\cdot\hat m_i \end{equation} where the symbol $\langle\hat m_k\rangle$ implies averaging over all grains in the system, i.e.
the average magnetisation at a given field $\vec {H}_{ap}$, and $H_{inter}^i$ is the random interaction field given by a Gaussian distribution with mean $\langle H_{inter}\rangle$ and standard deviation $\sigma_{inter}$. We found that a Gaussian distribution represents well the distribution of interaction fields in the full HDD model at saturating fields. This allows us to calibrate the mean-field interaction strength to be consistent with the full model at saturating fields, which is used as a reference point. \subsubsection{Cluster ensemble model of recording media}\label{methods_clusters} In the cluster model, the full model is divided into clusters of $N_g$ grains, with grains inside the $j$-th cluster interacting via a mean-field-like interaction, while the clusters themselves are non-interacting. The full energy expression given in Eq. \eqref{energy} reduces to a sum over individual clusters $j$ as $E = \sum_{j} E_j$ with: \begin{equation}\label{energycluster} E_j = \sum_{i\in j}^{N_g}K_i V_i(\hat k_i\times\hat m_i)^2 - \sum_{i\in j} M_sV_i\hat m_i\cdot\vec {H}_{ap} - \sum_{i\in j} M_sV_iH^i_{inter} \langle\hat m\rangle_j\cdot\hat m_i \end{equation} where the symbol $\sum_{i\in j}$ implies that the summations run over the magnetic grains within the cluster $j$, and $\langle\hat m\rangle_j$ is the average magnetisation of the cluster $j$. The interaction field $H_{inter}^i$ is defined identically as in the mean-field case in Section \ref{methods_meanfield}. \subsection{Modelling hysteresis with thermal activation: Kinetic Monte-Carlo approach}\label{methods_kmc} The thermal-fluctuation and external-field driven magnetisation behaviour of interacting magnetic particles, as described by the model in Methods \ref{methods_fullmodel}, is modelled by using the kinetic Monte-Carlo approach\cite{Chantrell2000,Ruta2015}. The effective local fields of particles are given by Eq. \eqref{energy} as $\vec {H}_{ap}+\vec{H}^{ij}_{mag} +\vec{H}^{ij}_{exch}$.
The time-dependent probability for a particle moment $\hat m_i$ to switch between the up (`1') and down (`2') states is $P_i = 1-\exp(-t/\tau_i)$, where the relaxation time constant $\tau_i$ is the reciprocal sum of the relaxation times $\tau^{1}_i$ and $\tau^{2}_i$ dependent on the energy barriers $\Delta E_i^{1,2}$ seen from the `1' and `2' states via the standard N\'eel-Arrhenius law\cite{Neel1949}: $\tau_i^{1,2} = \tau_0\exp(\Delta E_i^{1,2}/k_BT)$. Here $k_B$ is the Boltzmann constant and $T$ the temperature. According to Eq. \eqref{energy}, the barriers $\Delta E_i^{1,2}$ depend on the intrinsic particle properties, such as $V_i$ and $\vec K_i$. \subsection{Simulation parameters of realistic recording media}\label{methods_parameters} Throughout this study we consider a thin-film system with elongated grains (1.17 aspect ratio), a log-normal volume distribution (33\%) and a log-normal anisotropy distribution (5\%). The uniaxial anisotropy has a 3$^\circ$ dispersion of the easy axes around the axis perpendicular to the film. The system properties are: mean anisotropy $\langle K_i\rangle=7 \cdot 10^6$ erg/cm$^3$, saturation magnetisation $M_s = 700$ emu/cm$^3$, grain height $h = 10$ nm and mean grain size $d = 8.5$ nm. The calculations are done for an external field sweep rate of $4 \cdot 10^4$ Oe/s at room temperature (300 K).
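The N\'eel--Arrhenius switching rule above can be sketched as follows. This is a minimal illustration with illustrative parameter values ($\tau_0$, barriers, time step), not the solver used in this work.

```python
# Sketch of the kMC switching rule: a grain switches within a time step dt
# with probability P = 1 - exp(-dt/tau), where the relaxation time follows
# the Neel-Arrhenius law tau = tau0 * exp(dE / (kB*T)).
import math
import random

KB = 1.380649e-16      # Boltzmann constant in erg/K (CGS, as in the paper)
TAU0 = 1e-9            # illustrative attempt-time prefactor, s

def switch_probability(dE, T, dt):
    tau = TAU0 * math.exp(dE / (KB * T))
    # expm1 keeps precision when dt/tau is tiny (high barriers)
    return -math.expm1(-dt / tau)

def kmc_step(states, barriers, T, dt, rng=random.random):
    # states: list of +1/-1 moments; barriers[i] = energy barrier (erg)
    # seen by grain i from its current state (field/interaction dependent).
    return [-s if rng() < switch_probability(b, T, dt) else s
            for s, b in zip(states, barriers)]
```

For a barrier of $\sim 60\,k_BT$ the switching probability per microsecond is vanishingly small, while a vanishing barrier switches essentially instantly, reproducing the sharp thermally activated thresholds assumed above.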
\subsection{Rectangular hysteresis loop (RHL) model: Preisach modelling}\label{methods_preisach} If a granular system can be viewed as a collection of grains having rectangular hysteresis loops (RHLs), with coercive field $H_c$ and bias field $H_u$ given by a probability distribution, then the macroscopic hysteresis loop of the system can be obtained as a superposition of the RHLs, with the magnetisation $M(H_a, H_b)$ represented as: \begin{equation}\label{preisach} M(H_a, H_b) / M_s = \int_0^{AB}dH_c \int_{-A}^{A}\rho(H_c, H_u)dH_u + \int_{AB}^\infty dH_c \int_{-B}^{B}\rho(H_c, H_u)dH_u \end{equation} where $M_s$ is the saturation magnetisation, and $AB = (H_a - H_b)/2$, $A = H_a - H_c$, and $B = H_b + H_c$ are the integration limits dependent on the applied field $H_a$ along the FORC attached to the decreasing major hysteresis loop at the reversal field $H_b$, i.e. $H_a > H_b$. Applying the Leibniz integral rule to differentiate the integral we obtain: \begin{equation}\label{forc1} \frac{1}{M_s}\frac{\partial^2 M(H_a, H_b)}{\partial H_a\partial H_b} = -\rho\left(\frac{H_a-H_b}{2}, \frac{H_a+H_b}{2}\right) \end{equation} Given that Eq. \eqref{preisach} is inherently a superposition of switching events of individual grains, Eq. \eqref{forc1} establishes the relation between the applied fields $H_a$ and $H_b$ and the intrinsic switching thresholds of particles, which can be labeled equivalently as $H_a$ (threshold of a grain flipping up along the FORC at the field $H_a$) and $H_b$ (threshold of a grain flipping down before generating the FORC at the field $H_b$). Given that $H_c = (H_a - H_b)/2$ and $H_u = (H_a + H_b)/2$ (Fig. \ref{fig1}(a)), the above equation can be rewritten as: \begin{equation} \rho(H_c(H_a, H_b), H_u(H_a, H_b))\equiv\rho_{ab}(H_a, H_b) = -\frac{1}{M_s}\frac{\partial^2 M(H_a, H_b)}{\partial H_a\partial H_b} \end{equation} which agrees with the definition of the FORC distribution given in Eqs. \eqref{forcab} and \eqref{forcuc}.
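As a concrete illustration of the superposition in Eq. \eqref{preisach}, a minimal hysteron ensemble under the stated field history (saturate up, decrease to $H_b$, increase to $H_a \ge H_b$) can be written as follows; this is a sketch, not the code of this work.

```python
# Sketch of Eq. (preisach): M(Ha, Hb) as a superposition of rectangular
# hysteresis loops (hysterons). A hysteron with parameters (Hc, Hu)
# switches up at Hu + Hc and down at Hu - Hc.
def hysteron_state(Hc, Hu, Ha, Hb):
    stayed_up = Hb >= Hu - Hc          # never switched down on the descent
    switched_back = Ha >= Hu + Hc      # switched up again along the FORC
    return 1 if (stayed_up or switched_back) else -1

def M(hysterons, Ha, Hb, Ms=1.0):
    # hysterons: list of (Hc, Hu) pairs drawn from the distribution rho
    return Ms * sum(hysteron_state(Hc, Hu, Ha, Hb)
                    for Hc, Hu in hysterons) / len(hysterons)
```

Differencing this $M(H_a,H_b)$ over a grid of $(H_c,H_u)$ samples recovers the sampled density, which is the content of Eq. \eqref{forc1}.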
If the system displays the wiping-out and congruency properties, Eq. \eqref{preisach} can be shown to define a unique Preisach distribution associated with the granular system represented by the magnetisation $M(H_a, H_b)$. \subsection{Magnetic correlation}\label{methods_correlations} To investigate the coupling between grains due to correlated behaviour, we computed the radial correlation function as follows: \begin{align} C_j(r)=\frac{\langle m_j(R) m_j(R + r)\rangle - \langle m_j(R)\rangle\langle m_j(R + r)\rangle}{\sqrt{\langle m_j^2(R)\rangle - \langle m_j(R)\rangle^2 }\sqrt{\langle m_j^2(R+r)\rangle - \langle m_j(R+r)\rangle^2 } } , \end{align} where $j=x,y,z$ and $m_j(R)$, $m_j(R+r)$ are the $j$-components of the moments of pairs of grains separated by a distance $r$. The correlation data plotted in Fig. \ref{fig_Diagram}(b) show the correlation function $C_z(r)$. \subsection{FORC method} The measurement protocol to produce a first order reversal curve (FORC) begins by applying a large field to saturate the sample and then decreasing the field to a certain value $H_b$. From this point, the FORC is obtained by increasing the field back to saturation. The magnetisation is recorded at fields $H_a$ along the FORC with reversal field $H_b$, $H_a>H_b$. The FORC diagram is then evaluated using Eqs. \eqref{forcab} and \eqref{forcuc}, from which the SFD can be calculated using Eq. \eqref{sfd}.
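The estimator for $C_j(r)$ above can be sketched for a one-dimensional chain of moments with unit spacing; this is an illustrative simplification of the two-dimensional granular layer, not the code used here.

```python
# Sketch of the normalised radial correlation estimator C_j(r): covariance
# of moment components over all pairs at separation r, divided by the
# standard deviations of the two pair members (1-D chain for illustration).
import math

def correlation(mz, r):
    pairs = [(mz[i], mz[i + r]) for i in range(len(mz) - r)]
    a = [p[0] for p in pairs]
    b = [p[1] for p in pairs]
    n = len(pairs)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum(x * y for x, y in pairs) / n - ma * mb
    va = sum(x * x for x in a) / n - ma * ma
    vb = sum(y * y for y in b) / n - mb * mb
    return cov / math.sqrt(va * vb)
```

An alternating chain gives $C(1)=-1$ and $C(2)=+1$, as expected for perfect (anti)correlation.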
\section{Introduction} \label{sect1} Lian and Yau \cite{LY1,LY2} studied arithmetic properties of mirror maps of pencils of certain $K3$ surfaces, and further, they considered mirror maps of certain families of Calabi--Yau threefolds \cite{LY3}. Lian and Yau observed in a number of explicit examples a mysterious relationship (now the so-called {\it mirror moonshine phenomenon}) between mirror maps and the McKay--Thompson series (Hauptmoduls of one variable associated to a genus zero congruence subgroup of $SL_2(\mathbb R)$) arising from the Monster. Inspired by the work of Lian and Yau, Verrill--Yui \cite{VY} further computed more examples of mirror maps of one-parameter families of lattice polarized $K3$ surfaces with Picard number $19$. The outcome of Verrill--Yui's calculations suggested that the mirror maps themselves are not always Hauptmoduls, but they are commensurable with Hauptmoduls (referred to as the modularity of mirror maps). This fact was indeed established by Doran \cite{Dor} for $M_n$-lattice polarized $K3$ surfaces of Picard number $19$ with maximal unipotent monodromy (where $M_n=U\perp (-E_8)^2\perp\langle-2n\rangle$). More generally, Doran \cite{Dor2} considered the commensurability of ``maximal'' $n$-dimensional families of rank $20-n$ lattice polarized $K3$ surfaces, and he showed that all such families of $K3$ surfaces are commensurable to automorphic forms. The mirror maps were calculated via the Picard--Fuchs differential equations of the $K3$ families in question. Therefore, the determination of the Picard--Fuchs differential equations played the central role in their investigations. In this paper, we will address an inverse problem of a kind. That is, instead of starting with families of $K3$ surfaces or families of Calabi--Yau threefolds, we start with modular forms and functions of more than one variable associated to certain subgroups of $SL_2(\mathbb R)$.
More specifically, the main focus of our discussion in this paper is on modular forms and functions of two variables. Here is the precise definition. \smallskip \begin{defn} {\rm Let $\mathbb H$ denote the upper half-plane $\{\tau:\rm {Im}\tau>0\}$, and let $\mathbb H^\ast=\mathbb H\cup\mathbb Q\cup\{\infty\}$. Let $\Gamma_1$ and $\Gamma_2$ be two subgroups of $SL_2(\mathbb R)$ commensurable with $SL_2(\mathbb Z)$. We call a function $F:\mathbb H^\ast\times\mathbb H^\ast\longrightarrow\mathbb C$ of two variables a {\it modular form (of two variables) of weight $(k_1,k_2)$ on $\Gamma_1\times\Gamma_2$ with character $\chi$} if $F$ is meromorphic on $\mathbb H^\ast\times\mathbb H^\ast$ such that $$ F(\gamma_1\tau_1,\gamma_2\tau_2)=\chi(\gamma_1,\gamma_2) (c_1\tau_1+d_1)^{k_1}(c_2\tau_2+d_2)^{k_2}F(\tau_1,\tau_2) $$ for all $$ \gamma_1=\begin{pmatrix}a_1&b_1\\ c_1&d_1\end{pmatrix}\in\Gamma_1, \qquad \gamma_2=\begin{pmatrix}a_2&b_2\\ c_2&d_2\end{pmatrix}\in\Gamma_2. $$ If $F$ is a modular form (of two variables) of weight $(0,0)$ with trivial character, then we also call $F$ a {\it modular function (of two variables) on $\Gamma_1\times\Gamma_2$.}} \end{defn} \smallskip \noindent{\bf Notation.} We let $q_1=e^{2\pi i\tau_1}$ and $q_2=e^{2\pi i\tau_2}$. For a variable $t$ we let $D_t$ denote the differential operator $t\frac{\partial}{\partial t}$. \smallskip \begin{rem}\label{remark 1.1} {\rm Stienstra and Zagier \cite{SZ} have introduced the notion of {\it bi-modular forms} (of two variables). Let $\Gamma\subset \mbox{SL}_2(\mathbb R)$, and let $\tau_1,\,\tau_2\in \mathbb H$. Let $k_1,\, k_2$ be integers. A two-variable meromorphic function $F: \mathbb H\times \mathbb H\to\mathbb C$ is called a {\it bi-modular} form of weight $(k_1,k_2)$ on $\Gamma$ if for any $\gamma=\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in\Gamma$, it satisfies the transformation formula: $$ F(\gamma\tau_1,\gamma\tau_2)=(c\tau_1+d)^{k_1}(c\tau_2+d)^{k_2} F(\tau_1,\tau_2).
$$ For instance, $$ F(\tau_1, \tau_2)=\tau_1-\tau_2 $$ is a bi-modular form for $\mbox{SL}_2(\mathbb Z)$ of weight $(-1,-1)$. Another typical example is $$ F(\tau_1,\tau_2)=E_2(\tau_1)-\frac{1}{\tau_1-\tau_2}, $$ which is a bi-modular form of weight $(2,0)$ for $\mbox{SL}_2(\mathbb Z)$. For bi-modular forms of Stienstra--Zagier, the fundamental domain $\mathbb H\times\mathbb H/\Gamma$ is not of finite volume. On the other hand, for our modular forms (of two variables), the fundamental domain is $\mathbb H/\Gamma_1\times \mathbb H/\Gamma_2$, which is always of finite volume. We should emphasize that the two notions of two variable modular forms (namely, our modular forms and bi-modular forms of Stienstra and Zagier) are indeed different. Also we mention that our modular forms are not a special case of Hilbert modular forms.} \end{rem} \smallskip The problems that we will consider here are formulated as follows: {\it Given a modular form $F$ (of two variables), determine a differential equation it satisfies, and construct a family of $K3$ surfaces (or degenerations of a family of Calabi--Yau threefolds at some limit points) having the determined differential equation as its Picard--Fuchs differential equation.} This kind of problem may be called a {\it geometric realization problem}. In fact, a similar problem was already considered by Lian and Yau in their papers \cite{LY1, LY2}. They discussed the so-called ``modular relations'' involving power series solutions to second and third order differential equations of Fuchsian type (e.g., hypergeometric differential equations $_2F_1,\, _3F_2$) and modular forms of weight $4$ using mirror symmetry.
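Returning to the first example in Remark \ref{remark 1.1}, the weight $(-1,-1)$ claim for $F(\tau_1,\tau_2)=\tau_1-\tau_2$ follows from a one-line computation, included here for the reader's convenience:

```latex
% Check that F(tau_1, tau_2) = tau_1 - tau_2 is bi-modular of weight (-1,-1):
$$
\gamma\tau_1-\gamma\tau_2
 =\frac{a\tau_1+b}{c\tau_1+d}-\frac{a\tau_2+b}{c\tau_2+d}
 =\frac{(ad-bc)(\tau_1-\tau_2)}{(c\tau_1+d)(c\tau_2+d)}
 =(c\tau_1+d)^{-1}(c\tau_2+d)^{-1}(\tau_1-\tau_2),
$$
% since ad - bc = 1; the cross terms ac*tau_1*tau_2 and bd cancel
% in the numerator of the common fraction.
```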
More recently, van Enckevort and van Straten \cite{ES05} considered the following geometric realization problem: {\it Starting with a certain fourth order differential equation whose monodromy representation can be calculated, find a one-parameter family of Calabi--Yau threefolds (if it exists) whose associated Picard--Fuchs differential equation is the given one}. Also a recent article of Doran and Morgan \cite{DorMor05} addressed the geometric realization question in the context of an old question of Griffiths: {\it When does an integral variation of Hodge structure come from geometry?}. A rigorous answer was presented for one-parameter families of Calabi--Yau threefolds with $h^{2,1}=1$ and generalized Picard--Fuchs differential equations, relating mirror symmetry and integral variations of Hodge structure. \smallskip In this paper, we will focus our discussion on modular forms (of two variables) of weight $(1,1)$. We will determine the differential equations satisfied by modular forms (of two variables) of weight $(1,1)$ associated to $\Gamma_1\times \Gamma_2$, where the $\Gamma_i$ are genus zero subgroups of $SL_2(\mathbb R)$ of the form $\Gamma_0(N)$ and $\Gamma_0(N)^*$. Then the existence and the construction of particular modular forms of weight $(1,1)$ are discussed, using solutions of some hypergeometric differential equations. Moreover, we determine the differential equations they satisfy. Further, several examples of modular forms (of two variables) and their differential equations are discussed, aiming to realize these differential equations as the Picard--Fuchs differential equations of some families of $K3$ surfaces (or degenerations of families of Calabi--Yau threefolds) with large Picard numbers $19,18,17$ and $16$. It should be pointed out that our paper and our results have non-empty intersections with the results of Lian and Yau \cite{LY1, LY2}. Indeed, our approach rediscovers some of the examples of Lian and Yau.
Our contributions may be summarized as follows. From the geometric point of view, we give examples of two-parameter families of $K3$ surfaces which after pull-back along a morphism from $(t_1,t_2)$-space to $(x,y)$-space decouple as a direct product of two one-parameter families of elliptic curves. From the function-theoretic point of view, we give examples of non-trivial substitutions transforming certain (two-variable) GKZ hypergeometric functions into a product of two (one-variable) GKZ hypergeometric functions. Finally, from the moduli point of view, we give examples of moduli spaces for $K3$ surfaces with extra structure and show that these moduli spaces are quotients of ${\mathfrak H}\times{\mathfrak H}$. \section{Differential equations satisfied by modular forms (of two variables)} \label{sect2} We will now determine differential equations satisfied by modular forms (of two variables) of weight $(1,1)$ on $\Gamma_1\times \Gamma_2$. \smallskip \begin{thm} \label{DE satisfied by modular forms} {\sl Let $F(\tau_1,\tau_2)$ be a modular form (of two variables) of weight $(1,1)$, and let $x(\tau_1,\tau_2)$ and $y(\tau_1,\tau_2)$ be non-constant modular functions (of two variables) on $\Gamma_1\times\Gamma_2$, where $\Gamma_i$ ($i=1,2$) are subgroups of $SL_2(\mathbb R)$ commensurable with $SL_2(\mathbb Z)$. Then $F$, as a function of $x$ and $y$, satisfies a system of partial differential equations \begin{equation} \label{main} \begin{split} D_x^2F+a_0D_xD_yF+a_1D_xF+a_2D_yF+a_3F=0, \\ D_y^2F+b_0D_xD_yF+b_1D_xF+b_2D_yF+b_3F=0, \end{split} \end{equation} where $a_i$ and $b_i$ are algebraic functions of $x$ and $y$, and can be expressed explicitly as follows. For each function $t$ among $F$, $x$, and $y$, let $$ G_{t,1}=\frac{D_{q_1}t}t=\frac1{2\pi i}\frac{dt}{t\,d\tau_1}, \qquad G_{t,2}=\frac{D_{q_2}t}t=\frac1{2\pi i}\frac{dt}{t\,d\tau_2}. 
$$ Then we have $$ a_0=\frac{2G_{y,1}G_{y,2}}{G_{x,1}G_{y,2}+G_{y,1}G_{x,2}}, \qquad b_0=\frac{2G_{x,1}G_{x,2}}{G_{x,1}G_{y,2}+G_{y,1}G_{x,2}}, $$ $$ a_1=\frac{G_{y,2}^2(D_{q_1}G_{x,1}-2G_{F,1}G_{x,1}) -G_{y,1}^2(D_{q_2}G_{x,2}-2G_{F,2}G_{x,2})} {G_{x,1}^2G_{y,2}^2-G_{y,1}^2G_{x,2}^2}, $$ $$ b_1=\frac{-G_{x,2}^2(D_{q_1}G_{x,1}-2G_{F,1}G_{x,1}) +G_{x,1}^2(D_{q_2}G_{x,2}-2G_{F,2}G_{x,2})} {G_{x,1}^2G_{y,2}^2-G_{y,1}^2G_{x,2}^2}, $$ $$ a_2=\frac{G_{y,2}^2(D_{q_1}G_{y,1}-2G_{F,1}G_{y,1}) -G_{y,1}^2(D_{q_2}G_{y,2}-2G_{F,2}G_{y,2})} {G_{x,1}^2G_{y,2}^2-G_{y,1}^2G_{x,2}^2}, $$ $$ b_2=\frac{-G_{x,2}^2(D_{q_1}G_{y,1}-2G_{F,1}G_{y,1}) +G_{x,1}^2(D_{q_2}G_{y,2}-2G_{F,2}G_{y,2})} {G_{x,1}^2G_{y,2}^2-G_{y,1}^2G_{x,2}^2}, $$ $$ a_3=-\frac{G_{y,2}^2(D_{q_1}G_{F,1}-G_{F,1}^2) -G_{y,1}^2(D_{q_2}G_{F,2}-G_{F,2}^2)} {G_{x,1}^2G_{y,2}^2-G_{y,1}^2G_{x,2}^2}, $$ and $$ b_3=-\frac{-G_{x,2}^2(D_{q_1}G_{F,1}-G_{F,1}^2) +G_{x,1}^2(D_{q_2}G_{F,2}-G_{F,2}^2)} {G_{x,1}^2G_{y,2}^2-G_{y,1}^2G_{x,2}^2}. $$} \end{thm} In order to prove Theorem \ref{DE satisfied by modular forms}, we first need the following lemma, which is an analogue of Ramanujan's classical differential equations $$D_qE_2=\frac{E_2^2-E_4}{12}=-24\sum_{n\in\mathbb N}\frac{n^2q^n}{(1-q^n)^2},$$ $$ D_qE_4=\frac{E_2E_4-E_6}3=240\sum_{n\in\mathbb N}\frac{n^4q^n}{(1-q^n)^2},$$ $$ D_qE_6=\frac{E_2E_6-E_4^2}2=-504\sum_{n\in\mathbb N}\frac{n^6q^n}{(1-q^n)^2}$$ where \begin{equation} \label{Ek} E_k=1-\frac{2k}{B_k}\sum_{n\in\mathbb N}\frac{n^{k-1}q^n}{1-q^n} \end{equation} are the Eisenstein series of weight $k$ on $SL_2(\mathbb Z)$, and $B_k$ denotes the $k$-th Bernoulli number, e.g., $B_2=\frac{1}{6},\, B_4=-\frac{1}{30}$ and $B_6=\frac{1}{42}$. \smallskip \begin{lem} \label{lemma 2.2} {\sl We retain the notations of Theorem \ref{DE satisfied by modular forms}. 
Then (a) $G_{x,1}$ and $G_{y,1}$ are modular forms (of two variables) of weight $(2,0)$, (b) $G_{x,2}$ and $G_{y,2}$ are modular forms (of two variables) of weight $(0,2)$, (c) $D_{q_1}G_{x,1}-2G_{F,1}G_{x,1}$, $D_{q_1}G_{y,1}-2G_{F,1}G_{y,1}$ and $D_{q_1}G_{F,1}-G_{F,1}^2$ are modular forms (of two variables) of weight $(4,0)$, and (d) $D_{q_2}G_{x,2}-2G_{F,2}G_{x,2}$, $D_{q_2}G_{y,2}-2G_{F,2}G_{y,2}$ and $D_{q_2}G_{F,2}-G_{F,2}^2$ are modular forms (of two variables) of weight $(0,4)$.} \end{lem} \begin{proof} We shall prove (a) and (c); the proof of (b) and (d) is similar. By assumption, $x$ is a modular function (of two variables) on $\Gamma_1\times\Gamma_2$. That is, for all $\gamma_1=\begin{pmatrix}a_1&b_1\\ c_1&d_1\end{pmatrix}\in\Gamma_1$ and all $\gamma_2=\begin{pmatrix}a_2&b_2\\ c_2&d_2\end{pmatrix}\in\Gamma_2$, one has $$ x(\gamma_1\tau_1,\gamma_2\tau_2)=x(\tau_1,\tau_2) $$ Taking the logarithmic derivatives of the above equation with respect to $\tau_1$, we obtain $$ \frac1{(c_1\tau_1+d_1)^2}\frac{\dot x}x(\gamma_1\tau_1,\tau_2) =\frac{\dot x}x(\tau_1,\tau_2), $$ or \begin{equation} \label{temp1: lemma 2.2} G_{x,1}(\gamma_1\tau_1,\gamma_2\tau_2)=(c_1\tau_1+d_1)^2 G_{x,1}(\tau_1,\tau_2), \end{equation} where we let $\dot x$ denote the derivative of the two-variable function $x$ with respect to the first variable. This shows that $G_{x,1}$ is a modular form of weight $(2,0)$ on $\Gamma_1\times\Gamma_2$ with the trivial character. The proof for the case $G_{y,1}$ is similar. 
Likewise, taking the logarithmic derivative of the equation $$ F(\gamma_1\tau_1,\gamma_2\tau_2)=\chi(\gamma_1,\gamma_2) (c_1\tau_1+d_1)(c_2\tau_2+d_2)F(\tau_1,\tau_2) $$ with respect to $\tau_1$, we obtain $$ \frac1{(c_1\tau_1+d_1)^2}\frac{\dot F}F(\gamma_1\tau_1,\gamma_2\tau_2) =\frac{c_1}{(c_1\tau_1+d_1)}+\frac{\dot F}F(\tau_1,\tau_2), $$ or, equivalently, \begin{equation} \label{temp2: lemma 2.2} G_{F,1}(\gamma_1\tau_1,\gamma_2\tau_2)=\frac{c_1(c_1\tau_1+d_1)} {2\pi i}+(c_1\tau_1+d_1)^2G_{F,1}(\tau_1,\tau_2). \end{equation} Now, differentiating (\ref{temp1: lemma 2.2}) with respect to $\tau_1$ again, we obtain $$ \frac{\dot G_{x,1}}{(c_1\tau_1+d_1)^2} (\gamma_1\tau_1,\gamma_2\tau_2) =2c_1(c_1\tau_1+d_1)G_{x,1}(\tau_1,\tau_2) +(c_1\tau_1+d_1)^2\dot G_{x,1}(\tau_1,\tau_2), $$ or $$ D_{q_1}G_{x,1}(\gamma_1\tau_1,\gamma_2\tau_2) =\frac{c_1(c_1\tau_1+d_1)^3}{\pi i}G_{x,1}(\tau_1,\tau_2) +(c_1\tau_1+d_1)^4D_{q_1}G_{x,1}(\tau_1,\tau_2). $$ On the other hand, we also have, by (\ref{temp1: lemma 2.2}) and (\ref{temp2: lemma 2.2}), $$ G_{F,1}G_{x,1}(\gamma_1\tau_1,\gamma_2\tau_2) =\frac{c_1(c_1\tau_1+d_1)^3}{2\pi i}G_{x,1}(\tau_1,\tau_2) +(c_1\tau_1+d_1)^4G_{F,1}G_{x,1}(\tau_1,\tau_2). $$ From these two equations we see that $D_{q_1}G_{x,1}-2G_{F,1}G_{x,1}$ is a modular form (of two variables) of weight $(4,0)$ with the trivial character. Finally, differentiating (\ref{temp2: lemma 2.2}) with respect to $\tau_1$ and multiplying by $(c_1\tau_1+d_1)^2$ we have \begin{equation*} \begin{split} D_{q_1}G_{F,1}(\gamma_1\tau_1,\gamma_2\tau_2) &=\frac{c_1^2(c_1\tau_1+d_1)^2}{(2\pi i)^2} +\frac{c_1(c_1\tau_1+d_1)^3}{\pi i}G_{F,1}(\tau_1,\tau_2) \\ &\qquad\qquad+(c_1\tau_1+d_1)^4D_{q_1}G_{F,1}(\tau_1,\tau_2). \end{split} \end{equation*} Combining this with the square of (\ref{temp2: lemma 2.2}) we see that $D_{q_1}G_{F,1}-G_{F,1}^2$ is a modular form of weight $(4,0)$ on $\Gamma_1\times\Gamma_2$. This completes the proof of the lemma. 
\end{proof} \smallskip \begin{proof}[Proof of Theorem \ref{DE satisfied by modular forms}] In light of Lemma \ref{lemma 2.2}, the functions $a_k$, $b_k$ are all modular functions on $\Gamma_1\times\Gamma_2$, and thus can be expressed as algebraic functions of $x$ and $y$. Therefore, it suffices to verify (\ref{main}) as formal identities. By the chain rule we have $$ \begin{pmatrix} D_{q_1}F \\ D_{q_2}F\end{pmatrix} =\begin{pmatrix} x^{-1}D_{q_1}x & y^{-1}D_{q_1}y \\ x^{-1}D_{q_2}x & y^{-1}D_{q_2}y \end{pmatrix} \begin{pmatrix} D_x F \\ D_y F \end{pmatrix}. $$ It follows that $$ \begin{pmatrix} D_x F \\ D_y F \end{pmatrix} =\frac F{G_{x,1}G_{y,2}-G_{x,2}G_{y,1}} \begin{pmatrix} G_{y,2} & -G_{y,1} \\ -G_{x,2} & G_{x,1} \end{pmatrix} \begin{pmatrix} G_{F,1} \\ G_{F,2} \end{pmatrix} $$ Writing $$ \Delta=G_{x,1}G_{y,2}-G_{x,2}G_{y,1}, $$ and $$ \Delta_x=G_{F,1}G_{y,2}-G_{F,2}G_{y,1}, \qquad \Delta_y=-G_{x,2}G_{F,1}+G_{x,1}G_{F,2}, $$ we have \begin{equation} \label{DxF, DyF} D_xF=F\frac{\Delta_x}\Delta, \qquad D_yF=F\frac{\Delta_y}\Delta. \end{equation} Applying the same procedure on $D_xF$ again, we obtain \begin{equation*} \begin{split} \begin{pmatrix} D_x^2 F \\ D_yD_x F \end{pmatrix} &=\frac 1\Delta \begin{pmatrix} G_{y,2} & -G_{y,1} \\ -G_{x,2} & G_{x,1} \end{pmatrix} \begin{pmatrix} D_{q_1}(F\Delta_x/\Delta) \\ D_{q_2}(F\Delta_x/\Delta) \end{pmatrix} \\ &=\frac F\Delta \begin{pmatrix} G_{y,2} & -G_{y,1} \\ -G_{x,2} & G_{x,1} \end{pmatrix} \left\{\frac{\Delta_x}\Delta \begin{pmatrix} G_{F,1} \\ G_{F,2} \end{pmatrix}+ \begin{pmatrix} D_{q_1}(\Delta_x/\Delta) \\ D_{q_2}(\Delta_x/\Delta) \end{pmatrix}\right\}. 
\end{split} \end{equation*} That is, \begin{equation} \label{DxxF} D_x^2F=F\frac{\Delta_x^2}{\Delta^2}+\frac F\Delta\left( G_{y,2}D_{q_1}\frac{\Delta_x}\Delta-G_{y,1}D_{q_2} \frac{\Delta_x}\Delta\right) \end{equation} and \begin{equation} \label{DyxF} D_yD_xF=F\frac{\Delta_x\Delta_y}{\Delta^2}+\frac F\Delta \left(-G_{x,2}D_{q_1}\frac{\Delta_x}\Delta+G_{x,1}D_{q_2} \frac{\Delta_x}\Delta\right). \end{equation} We then substitute (\ref{DxF, DyF}), (\ref{DxxF}), and (\ref{DyxF}) into (\ref{main}) and find that (\ref{main}) indeed holds. (The details are tedious but straightforward calculations, which we omit here.) \end{proof} \section{Modular forms (of two variables) associated to solutions of hypergeometric differential equations} \label{sect4} Here we will construct modular forms (of two variables) of weight $(1,1)$ using solutions of some hypergeometric differential equations. Our main result of this section is the following theorem. \begin{thm} \label{Hypergeometric DE} {\sl Let $0<a<1$ be a real number. Let $f(t)=\,_2F_1(a,a;1;t)$ be a solution of the hypergeometric differential equation \begin{equation} \label{HG DE} t(1-t)f^{\prime\prime}+[1-(1+2a)t]f^\prime-a^2f=0. \end{equation} Let $$ F(t_1,t_2)=f(t_1)f(t_2)(1-t_1)^a(1-t_2)^a, $$ $$ x=\frac{t_1+t_2}{(t_1-1)(t_2-1)}, \quad y=\frac{t_1t_2}{(t_1+t_2)^2}. $$ Then $F$ is a modular form of weight $(1,1)$ for $\Gamma_1\times \Gamma_2$, provided that $t_1$ and $t_2$ are modular functions (of one variable) for $\Gamma_1$ and $\Gamma_2$, respectively. Furthermore, $F$, as a function of $x$ and $y$, is a solution of the partial differential equations \begin{equation} \label{HGDE x} D_x(D_x-2D_y)F+x(D_x+a)(D_x+1-a)F=0, \end{equation} and \begin{equation} \label{HGDE y} D_y^2F-y(2D_y-D_x+1)(2D_y-D_x)F=0, \end{equation} where $D_x=x\,\partial/\partial x$ and $D_y=y\,\partial/\partial y$ are the Euler operators. 
} \end{thm} \smallskip \begin{rem}\label{remark 3.1} {\rm Theorem 2.1 of Lian and Yau \cite{LY2} is essentially the same as our Theorem \ref{Hypergeometric DE}, though the formulation and proof are different. The condition that $t_1,\, t_2$ are modular functions (of one variable) for $\Gamma_1$ and $\Gamma_2$ is used to draw the conclusion that $F$ is a modular form (of two variables) for $\Gamma_1\times\Gamma_2$. However, the modular property of $t_1,\,t_2$ is not needed to derive (\ref{HGDE x}) and (\ref{HGDE y}) from (\ref{HG DE}).} \end{rem} \smallskip We now present our proof of Theorem \ref{Hypergeometric DE}. For this, we need one more ingredient, namely, the Schwarzian derivatives. \smallskip \begin{lem} \label{Schwarzian} {\sl Let $f(t)$ and $f_1(t)$ be two linearly independent solutions of a differential equation $$ f^{\prime\prime}+p_1f^\prime+p_2f=0. $$ Set $\tau:=f_1(t)/f(t)$. Then $t$, as a function of $\tau$, satisfies the Schwarzian differential equation $$ 2Q\left(\frac{dt}{d\tau}\right)^2+\{t,\tau\}=0, $$ where $\{t,\tau\}$ is the Schwarzian derivative $$ \{t,\tau\}=\frac{d^3t/d\tau^3}{dt/d\tau}-\frac32 \left(\frac{d^2t/d\tau^2}{dt/d\tau}\right)^2 $$ and $$ Q=\frac{4p_2-2p_1^\prime-p_1^2}4. $$} \end{lem} \begin{proof} This is standard, and a proof can be found, for instance, in Lian and Yau \cite{LY3}. \end{proof} \smallskip \begin{proof}[Proof of Theorem \ref{Hypergeometric DE}] Let $f_1$ be another solution of (\ref{HG DE}) linearly independent of $f$, and set $\tau=f_1/f$. Then a classical identity asserts that $$ f^2=c\exp\left\{-\int^t\frac{1-(1+2a)u}{u(1-u)}\,du\right\} \frac{dt}{d\tau} =\frac{cdt/d\tau}{t(1-t)^{2a}}, $$ where $c$ is a constant depending on the choice of $f_1$. Thus, letting $$ q_1=e^{2\pi if_1(t_1)/f(t_1)}\quad\mbox{and}\quad q_2=e^{2\pi if_1(t_2)/f(t_2)}, $$ the function $F$, with a suitable choice of $f_1$, is in fact $$ F(t_1,t_2)=\left(\frac{D_{q_1}t_1\cdot D_{q_2}t_2} {t_1t_2}\right)^{1/2}. 
$$ We now apply the differential identities in (\ref{main}), which hold for arbitrary $F$, $x$, and $y$. We have $$ G_{x,1}:=\frac{D_{q_1}x}x=\frac{(1+t_2)D_{q_1}t_1}{(t_1+t_2)(1-t_1)}, \quad G_{x,2}:=\frac{D_{q_2}x}x=\frac{(1+t_1)D_{q_2}t_2}{(t_1+t_2)(1-t_2)}, $$ $$ G_{y,1}:=\frac{D_{q_1}y}y=\frac{(t_2-t_1)D_{q_1}t_1}{t_1(t_1+t_2)}, \qquad G_{y,2}:=\frac{D_{q_2}y}y=\frac{(t_1-t_2)D_{q_2}t_2}{t_2(t_1+t_2)}, $$ $$ G_{F,1}:=\frac{D_{q_1}F}F=\frac{t_1D_{q_1}^2t_1-(D_{q_1}t_1)^2} {2t_1D_{q_1}t_1}, \quad G_{F,2}:=\frac{D_{q_2}F}F=\frac{t_2D_{q_2}^2t_2-(D_{q_2}t_2)^2} {2t_2D_{q_2}t_2}. $$ It follows that $$ a_0:=\frac{2G_{y,1}G_{y,2}}{G_{x,1}G_{y,2}+G_{y,1}G_{x,2}} =-\frac{2(t_1-1)(t_2-1)}{t_1t_2+1}=-\frac2{1+x}, $$ $$ b_0:=\frac{2G_{x,1}G_{x,2}}{G_{x,1}G_{y,2}+G_{y,1}G_{x,2}} =\frac{2t_1t_2(t_1+1)(t_2+1)}{(t_1-t_2)^2(t_1t_2+1)} =\frac{2y(1+2x)}{(1+x)(1-4y)}, $$ \begin{equation*} \begin{split} a_1:&=\frac{G_{y,2}^2(D_{q_1}G_{x,1}-2G_{F,1}G_{x,1}) -G_{y,1}^2(D_{q_2}G_{x,2}-2G_{F,2}G_{x,2})} {G_{x,1}^2G_{y,2}^2-G_{y,1}^2G_{x,2}^2} \\ &=\frac{t_1+t_2}{t_1t_2+1}=\frac x{1+x}, \end{split} \end{equation*} \begin{equation*} \begin{split} b_1:&=\frac{-G_{x,2}^2(D_{q_1}G_{x,1}-2G_{F,1}G_{x,1}) +G_{x,1}^2(D_{q_2}G_{x,2}-2G_{F,2}G_{x,2})} {G_{x,1}^2G_{y,2}^2-G_{y,1}^2G_{x,2}^2} \\ &=\frac{t_1t_2(t_1+1)(t_2+1)}{(t_1-t_2)^2(t_1t_2+1)} =\frac{y(1+2x)}{(1+x)(1-4y)}, \end{split} \end{equation*} $$ a_2:=\frac{G_{y,2}^2(D_{q_1}G_{y,1}-2G_{F,1}G_{y,1}) -G_{y,1}^2(D_{q_2}G_{y,2}-2G_{F,2}G_{y,2})} {G_{x,1}^2G_{y,2}^2-G_{y,1}^2G_{x,2}^2}=0, $$ \begin{equation*} \begin{split} b_2:&=\frac{-G_{x,2}^2(D_{q_1}G_{y,1}-2G_{F,1}G_{y,1}) +G_{x,1}^2(D_{q_2}G_{y,2}-2G_{F,2}G_{y,2})} {G_{x,1}^2G_{y,2}^2-G_{y,1}^2G_{x,2}^2} \\ &=-\frac{2t_1t_2}{(t_1-t_2)^2}=-\frac{2y}{1-4y}. 
\end{split} \end{equation*} Moreover, we have \begin{equation*} \begin{split} a_3:&=-\frac{G_{y,2}^2(D_{q_1}G_{F,1}-G_{F,1}^2) -G_{y,1}^2(D_{q_2}G_{F,2}-G_{F,2}^2)} {G_{x,1}^2G_{y,2}^2-G_{y,1}^2G_{x,2}^2} \\ &=\frac{(t_1-1)(t_2-1)(t_1+t_2)\left\{t_1^2\dot t_2^4 (2\dot t_1\dddot t_1-3\ddot t_1^2)-t_2^2\dot t_1^4 (2\dot t_2\dddot t_2-3\ddot t_2^2)\right\}} {4(t_1-t_2)(t_1^2t_2^2-1)\dot t_1^4\dot t_2^4}, \end{split} \end{equation*} where, for brevity, we let $\dot t_j$, $\ddot t_j$, $\dddot t_j$ denote the derivatives $D_{q_j}t_j$, $D_{q_j}^2t_j$, and $D_{q_j}^3t_j$, respectively. To express $a_3$ in terms of $x$ and $y$, we note that, by Lemma \ref{Schwarzian}, \begin{equation*} \begin{split} 2\dot t_j\dddot t_j-3\ddot t_j^2&=-\dot t_j^4\left( \frac{-4a^2}{t_j(1-t_j)}-2\frac d{dt_j}\frac{1-(1+2a)t_j}{t_j(1-t_j)} -\frac{(1-(1+2a)t_j)^2}{t_j^2(1-t_j)^2}\right) \\ &=-\frac{(t_j-1)^2+4a(1-a)t_j}{t_j^2(t_j-1)^2}\dot t_j^4. \end{split} \end{equation*} It follows that $$ a_3=a(1-a)\frac{t_1+t_2}{t_1t_2+1}=\frac{a(1-a)x}{1+x}. $$ Likewise, we have \begin{equation*} \begin{split} b_3:&=-\frac{-G_{x,2}^2(D_{q_1}G_{F,1}-G_{F,1}^2) +G_{x,1}^2(D_{q_2}G_{F,2}-G_{F,2}^2)} {G_{x,1}^2G_{y,2}^2-G_{y,1}^2G_{x,2}^2} \\ &=a(1-a)\frac{t_1t_2(t_1+t_2)}{(t_1-t_2)^2(t_1t_2+1)} =\frac{a(1-a)xy}{(1+x)(1-4y)}. \end{split} \end{equation*} Then, by (\ref{main}), the function $F$, as a function of $x$ and $y$, satisfies \begin{equation} \label{HG 1} D_x^2F-\frac2{1+x}D_xD_yF+\frac x{1+x}D_xF+\frac{a(1-a)x}{1+x}F=0 \end{equation} and \begin{equation} \label{HG 2} \begin{split} &D_y^2F+\frac{2y(1+2x)}{(1+x)(1-4y)}D_xD_yF +\frac{y(1+2x)}{(1+x)(1-4y)}D_xF \\ &\qquad\qquad -\frac{2y}{1-4y}D_yF+\frac{a(1-a)xy}{(1+x)(1-4y)}F=0. \end{split} \end{equation} Finally, we can deduce the claimed differential equations by taking (\ref{HG 1}) times $(1+x)$ and (\ref{HG 2}) times $(1-4y)$ minus (\ref{HG 1}) times $y$, respectively. 
\end{proof} \section{Examples} \begin{ex}\label{Example 4.1} {\rm Let $j$ be the elliptic modular $j$-function, and let $E_4(\tau)=1+240\sum_{n\in\mathbb N}\frac{n^3q^n}{1-q^n}$, $q=e^{2\pi i\tau}$, be the Eisenstein series of weight $4$ on $SL_2(\mathbb Z)$. Set $$ x=2\frac{1/j(\tau_1)+1/j(\tau_2)-1728/(j(\tau_1)j(\tau_2))}{1+ \sqrt{(1-1728/j(\tau_1))(1-1728/j(\tau_2))}}, \qquad y=\frac1{j(\tau_1)j(\tau_2)x^2}, $$ and $$F=(E_4(\tau_1)E_4(\tau_2))^{1/4}.$$ Then $F$ satisfies the system of partial differential equations: $$ (1-432x)D_x^2F-2D_xD_yF-432xD_xF-60xF=0, $$ $$ (1-4y)D_y^2F+4yD_xD_yF-yD_x^2F+yD_xF-2yD_yF=0. $$} We should remark that the functions $x$ and $y$ are modular functions (of two variables) for $\Gamma_1\times \Gamma_2$ where $\Gamma_1=\Gamma_2$ is a subgroup of $SL_2(\mathbb Z)$ of index $2$. On the other hand, in the sense of Stienstra--Zagier, $x$ and $y$ are bi-modular functions for the group $SL_2(\mathbb Z)$ (cf. Remark 1.1). \end{ex} We have noticed that this system of differential equations belongs to a general class of partial differential equations which involve solutions of the hypergeometric differential equations discussed in Theorem \ref{Hypergeometric DE}. Here we will prove the assertion of Example \ref{Example 4.1} using Theorem \ref{Hypergeometric DE}. \begin{proof}[Proof of Example \ref{Example 4.1}] We first make a change of variable $x\mapsto-\bar x/432$. For convenience, we shall denote the new variable $\bar x$ by $x$ again. Thus, we are required to show that the functions $$ x=-864\frac{1/j(\tau_1)+1/j(\tau_2)-1728/(j(\tau_1)j(\tau_2))}{1+ \sqrt{(1-1728/j(\tau_1))(1-1728/j(\tau_2))}}, \qquad y=\frac{432^2}{j(\tau_1)j(\tau_2)x^2}, $$ and $F=(E_4(\tau_1)E_4(\tau_2))^{1/4}$ satisfy $$ (1+x)D_x^2F-2D_xD_yF+xD_xF+\frac 5{36}xF=0, $$ and $$ (1-4y)D_y^2F+4yD_xD_yF-yD_x^2F+yD_xF-2yD_yF=0. $$ For brevity, we let $j_1$ denote $j(\tau_1)$ and $j_2$ denote $j(\tau_2)$. 
We now observe that the function $x$ can be alternatively expressed as \begin{equation*} \begin{split} x&=-864\frac{1/j_1+1/j_2-1728/(j_1j_2)}{1-(1-1728/j_1)(1-1728/j_2)} \left(1-\sqrt{(1-1728/j_1)(1-1728/j_2)}\right) \\ &=\frac12\left(\sqrt{(1-1728/j_1)(1-1728/j_2)}-1\right). \end{split} \end{equation*} Setting $$ t_1=\frac{\sqrt{1-1728/j_1}-1}{\sqrt{1-1728/j_1}+1}, \qquad t_2=\frac{\sqrt{1-1728/j_2}-1}{\sqrt{1-1728/j_2}+1}, $$ we have $$ x=\frac{t_1+t_2}{(t_1-1)(t_2-1)}. $$ Moreover, the functions $j_k$, written in terms of $t_k$, are $j_k=-432(t_k-1)^2/t_k$ for $k=1,2$. It follows that $$ y=\frac{432^2}{j_1j_2x^2}=\frac{t_1t_2}{(t_1+t_2)^2}. $$ In view of Theorem \ref{Hypergeometric DE}, setting $$ t=\frac{\sqrt{1-1728/j(\tau)}-1}{\sqrt{1-1728/j(\tau)}+1} $$ it remains to show that the function $f(t)=E_4(\tau)^{1/4}(1-t)^{-1/6}$ is a solution of the hypergeometric differential equation $$ t(1-t)f^{\prime\prime}+(1-4t/3)f^\prime-\frac1{36}f=0, $$ or equivalently, that $$ \frac{E_4(\tau)^{1/4}}{(1-t)^{1/6}}=\, _2F_1(1/6,1/6;1;t). $$ This, however, follows from the classical identity $$ E_4(\tau)^{1/4}=\,_2F_1\left(\frac1{12},\frac5{12};1;\frac{1728}{j(\tau)}\right) $$ and Kummer's transformation formula \begin{equation*} \begin{split} &\left(\frac{1+\sqrt{1-z}}2\right)^{2a}\, _2F_1\left(a,b;a+b+\frac12;z\right) \\ &\qquad\qquad=\, _2F_1\left(2a,a-b+\frac12;a+b+\frac12;\frac{\sqrt{1-z}-1} {\sqrt{1-z}+1}\right). \end{split} \end{equation*} This completes the proof of Example \ref{Example 4.1}. \end{proof} \begin{rem} {\rm The functions $x$ and $y$ in Example \ref{Example 4.1} (up to constant multiple) have also appeared in the paper of Lian and Yau \cite{LY3}, Corollary 1.2, as the mirror map of the family of $K3$ surfaces defined by degree $12$ hypersurfaces in the weighted projective space $\mathbb P^3[1,1,4,6]$. Further, this $K3$ family is derived from the square of a family of elliptic curves in the weighted projective space $\mathbb P^2[1,2,3]$. 
(The geometry behind this phenomenon is the so-called Shioda--Inose structure, which has been studied in detail by Long \cite{Long} for one-parameter families of $K3$ surfaces, and their Picard--Fuchs differential equations.) Lian and Yau \cite{LY3} proved that the mirror map of the $K3$ family can be given in terms of the elliptic $j$-function, and indeed, by the functions $x$ and $y$ (up to constant multiple). We will discuss more examples of families of $K3$ surfaces, their Picard--Fuchs differential equations and mirror maps in Section 6.} \end{rem} In the same vein, we obtain more examples of modular forms of weight $(1,1)$ and modular functions on $\Gamma_0(N)\times \Gamma_0(N)$ for $N=2,3,4$. \smallskip \begin{thm}\label{theorem 4.1} {\sl We retain the notations of Theorem \ref{Hypergeometric DE}. Then the solutions of the differential equations (\ref{HGDE x}) and (\ref{HGDE y}) for the cases $a=1/2,1/3,1/4,1/6$ can be expressed in terms of modular forms and modular functions on $\Gamma_0(N)\times \Gamma_0(N)$ for some $N$. (a) For $a=1/2$, they are given by $$ F(\tau_1,\tau_2)=\theta_4(\tau_1)^2\theta_4(\tau_2)^2, \qquad t=\theta_2(\tau)^4/\theta_3(\tau)^4, $$ which are modular on $\Gamma_0(4)\times\Gamma_0(4)$. (b) For $a=1/3$, they are $$ F(\tau_1,\tau_2)=\frac12(3E_2(3\tau_1)-E_2(\tau_1))^{1/2} (3E_2(3\tau_2)-E_2(\tau_2))^{1/2}, \quad t=-27\frac{\eta(3\tau)^{12}}{\eta(\tau)^{12}}, $$ which are modular on $\Gamma_0(3)\times\Gamma_0(3)$. (c) For $a=1/4$, they are $$ F(\tau_1,\tau_2)=(2E_2(2\tau_1)-E_2(\tau_1))^{1/2}(2E_2(2\tau_2)-E_2(\tau_2))^{1/2}, \quad t=-64\frac{\eta(2\tau)^{24}}{\eta(\tau)^{24}}, $$ which are modular on $\Gamma_0(2)\times\Gamma_0(2)$. (d) For $a=1/6$, they are given as in Example \ref{Example 4.1}. 
\medskip Here $$\eta(\tau)=q^{1/24}\prod_{n\in\mathbb N}(1-q^n), \qquad q=e^{2\pi i\tau}$$ is the Dedekind eta-function, and $$ \theta_2(\tau)=q^{1/4}\sum_{n\in\mathbb Z} q^{n(n+1)},\quad \theta_3(\tau)=\sum_{n\in\mathbb Z} q^{n^2},\quad \theta_4(\tau)=\sum_{n\in\mathbb Z} (-1)^nq^{n^2}$$ are theta-series.} \end{thm} \begin{lem} \label{lemma 4.2} Let $\Gamma$ be a subgroup of $SL_2(\mathbb R)$ commensurable with $SL_2(\mathbb Z)$. Let $f(\tau)$ be a modular form (of one variable) of weight $1$, and let $t(\tau)$ be a non-constant modular function (of one variable) on $\Gamma$. Then, setting $$ G_t=\frac{D_q t}t, \qquad G_f=\frac{D_q f}f, $$ we have $$ D_t^2f+\frac{D_qG_t-2G_fG_t}{G_t^2}D_tf- \frac{D_qG_f-G_f^2}{G_t^2}f=0. $$ \end{lem} \begin{proof}[Proof of Theorem \ref{theorem 4.1}] To prove part (a) we use the well-known identities $$ \theta_3^2=\,_2F_1\left(\frac12,\frac12;1; \frac{\theta_2^4}{\theta_3^4}\right) $$ (see \cite{Yg04} for a proof using Lemma \ref{lemma 4.2}) and $$ \theta_3^4=\theta_2^4+\theta_4^4. $$ Applying Theorem \ref{Hypergeometric DE} and observing that $$ \theta_3^2\left(1-\frac{\theta_2^4}{\theta_3^4}\right)^{1/2} =\theta_3^2\frac{\theta_4^2}{\theta_3^2}=\theta_4^2, $$ we thus obtain the claimed differential equation. For part (b), we need to show that the function $$ f(\tau)=\frac{(3E_2(3\tau)-E_2(\tau))^{1/2}}{(1-t)^{1/3}} $$ satisfies $$ t(1-t)\frac{d^2}{dt^2}f+(1-5t/3)\frac d{dt}f-\frac19f=0, $$ or, equivalently, \begin{equation} \label{temp1: theorem 4.1} (1-t)D_t^2f-\frac23tD_tf-\frac19tf=0. \end{equation} Let $G_t$ and $G_f$ be defined as in Lemma \ref{lemma 4.2}. For convenience we also let $g=(3E_2(3\tau)-E_2(\tau))/2$. We have $$ G_t=\frac12(3E_2(3\tau)-E_2(\tau))=g $$ and $$ G_f=\frac{D_q g}{2g}+\frac{D_q t}{3(1-t)}= \frac{D_q g}{2g}+\frac t{3(1-t)}g. $$ It follows that $$ \frac{D_qG_t-2G_fG_t}{G_t^2}=g^{-2} \left(D_q g-2\left(\frac{D_q g}{2g}+\frac t{3(1-t)}g\right)g\right) =-\frac{2t}{3(1-t)}. 
$$ Moreover, we can show that $(D_qG_f-G_f^2)/G_t^2$ is equal to $t/(9(1-t))$ by comparing enough Fourier coefficients. This establishes (\ref{temp1: theorem 4.1}) and hence part (b). The proof of part (c) is similar, and we shall skip the details here. \end{proof} \section{More examples} \label{sect5} We may also consider groups like $\Gamma_0(N)^*\times \Gamma_0(N)^*$ where $\Gamma_0(N)^*$ denotes the group generated by $\Gamma_0(N)$ and the Atkin--Lehner involution $w_N=\begin{pmatrix} 0 & -1 \\ N & 0\end{pmatrix}$ for some $N$. (Note that $\Gamma_0(N)^*$ is contained in the normalizer of $\Gamma_0(N)$ in $SL_2(\mathbb R)$.) Also the entire list of $N$ giving rise to genus zero groups $\Gamma_0(N)^*$ is known (cf.\cite{CMS04}), and we will be interested in some of those genus zero groups. We can determine differential equations satisfied by modular forms (of two variables) of weight $(1,1)$ on $\Gamma_0(N)^*\times \Gamma_0(N)^*$ for some $N$ (giving rise to genus zero subgroups $\Gamma_0(N)^*$). We first prove a generalization of Theorem \ref{Hypergeometric DE}. \begin{thm}\label{theorem 5.1} Let $0<a,b<1$ be real numbers. Let $f(t)=\,_2F_1(a,b;1;t)$ be a solution of the hypergeometric differential equation \begin{equation} \label{temp: Theorem 5.1} t(1-t)f^{\prime\prime}+[1-(1+a+b)t]f^\prime-abf=0. \end{equation} Set $$ F(t_1,t_2)=f(t_1)f(t_2)(1-t_1)^{(a+b)/2}(1-t_2)^{(a+b)/2}, $$ $$ x=t_1+t_2-2, \qquad y=(1-t_1)(1-t_2). $$ Then $F$, as a function of $x$ and $y$, satisfies \begin{equation}\label{5.2} D_x^2F+2D_xD_yF-\frac1{x+y+1}D_xF+\frac{x}{x+y+1}D_yF +\frac{(2ab-a-b)x}{2(x+y+1)}F=0 \end{equation} and \begin{equation}\label{5.3} \begin{split} &D_y^2F+\frac{2y}{x^2}D_xD_yF+\frac{y^2}{x^2(x+y+1)}D_xF +\frac{y-x-x^2}{x(x+y+1)}D_yF \\ &\qquad\qquad-\frac{(a+b)(a+b-2)(x^2+x)+(a-b)^2xy-(4ab-2a-2b)y} {4x(x+y+1)}F=0. \end{split} \end{equation} \end{thm} \begin{proof} The proof is very similar to that of Theorem \ref{Hypergeometric DE}. 
Let $f_1$ be another solution of the hypergeometric differential equation (\ref{temp: Theorem 5.1}), linearly independent of $f$, and set $\tau:=f_1/f$. We find $$ f^2=c\exp\left\{-\int^t\frac{1-(1+a+b)u}{u(1-u)}\,du\right\} \frac{dt}{d\tau} =\frac{cdt/d\tau}{t(1-t)^{a+b}} $$ for some constant $c$ depending on the choice of $f_1$. Thus, setting $$ q_1=e^{2\pi if_1(t_1)/f(t_1)}\quad\mbox{and}\quad q_2=e^{2\pi if_1(t_2)/f(t_2)}, $$ we have $$ F(t_1,t_2)=c^\prime\left(\frac{D_{q_1}t_1\cdot D_{q_2}t_2} {t_1t_2}\right)^{1/2} $$ for some constant $c^\prime$. We now apply the differential identities (\ref{main}). We have, for $j=1,2$, $$ G_{x,j}:=\frac{D_{q_j}x}x=\frac{D_{q_j}t_j}{t_1+t_2-2}, \qquad G_{y,j}:=\frac{D_{q_j}y}y=-\frac{D_{q_j}t_j}{1-t_j}, $$ and $$ G_{F,j}:=\frac{D_{q_j}F}F=\frac{t_jD_{q_j}^2t_j-(D_{q_j}t_j)^2} {2t_jD_{q_j}t_j}. $$ It follows that the coefficients in (\ref{main}) are $$ a_0=2, \qquad b_0 =\frac{2(1-t_1)(1-t_2)}{(t_1+t_2-2)^2}=\frac{2y}{x^2}, $$ $$ a_1 =-\frac1{t_1t_2}=-\frac1{x+y+1}, \qquad b_1 =\frac{(1-t_1)^2(1-t_2)^2}{t_1t_2(t_1+t_2-2)^2} =\frac{y^2}{x^2(x+y+1)}, $$ $$ a_2 =\frac{t_1+t_2-2}{t_1t_2}=\frac x{x+y+1}, $$ $$ b_2 =-\frac{t_1^2+t_1t_2+t_2^2-2t_1-2t_2+1}{t_1t_2(t_1+t_2-2)} =\frac{y-x-x^2}{x(x+y+1)}. $$ Moreover, we have \begin{equation*} \begin{split} a_3 &=\left\{-\frac{(1-t_1)^2(2\dot t_1\dddot t_1 -3\ddot t_1^2)}{4(t_1-t_2)\dot t_1^4} +\frac{(1-t_2)^2(2\dot t_2\dddot t_2-3\ddot t_2^2)} {4(t_1-t_2)\dot t_2^4} -\frac{2t_1t_2-t_1-t_2}{4t_1^2t_2^2}\right\} \\ &\qquad\times(t_1+t_2-2), \end{split} \end{equation*} where we, as before, employ the notations $\dot t_j$, $\ddot t_j$, $\dddot t_j$ for the derivatives $D_{q_j}t_j$, $D_{q_j}^2t_j$, and $D_{q_j}^3t_j$, respectively. Now, by Lemma \ref{Schwarzian}, we have $$ 2\dot t_j\dddot t_j-3\ddot t_j^2=\dot t_j^4 \frac{(a-b)^2t_j^2-(1-t_j)^2+(4ab-2a-2b)t_j}{t_j^2(1-t_j)^2}. $$ It follows that $$ a_3=\frac{(2ab-a-b)(t_1+t_2-2)}{2t_1t_2}=\frac{(2ab-a-b)x}{2(x+y+1)}. 
$$ A similar calculation shows that $$ b_3=-\frac{(a+b)(a+b-2)(x^2+x)+(a-b)^2xy-(4ab-2a-2b)y}{4x(x+y+1)}. $$ This proves the claimed result. \end{proof} \begin{rem} It should be pointed out that the first identity in our proof of Theorem \ref{theorem 5.1} is equivalent to the formula in Proposition 4.4 of Lian and Yau \cite{LY1}. \end{rem} We now obtain new examples of modular forms of weight $(1,1)$ on $\Gamma_0(N)^*\times \Gamma_0(N)^*$ for some $N$. \begin{thm} When the pairs of numbers $(a,b)$ in Theorem \ref{theorem 5.1} are given by $(1/12,5/12)$, $(1/12,7/12)$, $(1/8,3/8)$, $(1/8,5/8)$, $(1/6,1/3)$, $(1/6,2/3)$, $(1/4,1/4)$ and $(1/4,3/4)$, the solutions $F(t_1,t_2)$ of the differential equations \eqref{5.2} and \eqref{5.3} are modular forms of weight $(1,1)$ on $\Gamma_0(N)^\ast\times\Gamma_0(N)^\ast$ with $N=1,1,2,2,3,3,4,4$, respectively. \end{thm} \begin{proof} We shall prove only the cases $(a,b)=(1/6,1/3)$ and $(1/6,2/3)$; the other cases can be proved in the same manner. Let $$ s(\tau)=-27\frac{\eta(3\tau)^{12}}{\eta(\tau)^{12}}, \qquad E_2(\tau)=1-24\sum_{n=1}^\infty\frac{nq^n}{1-q^n}. $$ From the proof of part (b) of Theorem \ref{theorem 4.1} we know that $$ f(\tau)=\frac{(3E_2(3\tau)-E_2(\tau))^{1/2}}{(1-s)^{1/3}}, $$ as a function of $s$, is equal to $\sqrt2\,_2F_1(1/3,1/3;1;s)$. Now, applying the quadratic transformation formula $$ _2F_1(\alpha,\beta;\alpha-\beta+1;x)=(1-x)^{-\alpha}\,_2F_1 \left(\frac\alpha2,\frac{1+\alpha}2-\beta;\alpha-\beta+1; -\frac{4x}{(1-x)^2}\right) $$ for hypergeometric functions (see, for example, \cite[Theorem 3.1.1]{AAR}) with $\alpha=\beta=1/3$, we obtain $$ (3E_2(3\tau)-E_2(\tau))^{1/2}=\sqrt2\,_2F_1\left(\frac16,\frac13;1; -\frac{4s}{(1-s)^2}\right). $$ Observing that the action of the Atkin--Lehner involution $w_3$ sends $s$ to $1/s$, we find that the function $s/(1-s)^2$ is modular on $\Gamma_0(3)^\ast$. 
This proves that $F(t_1,t_2)$ is a modular form of weight $(1,1)$ for $\Gamma_0(3)^\ast\times\Gamma_0(3)^\ast$ in the case $(a,b)=(1/6,1/3)$. Furthermore, an application of another hypergeometric function identity $$ _2F_1(\alpha,\beta;\gamma;x)=(1-x)^{-\alpha}\,_2F_1\left(\alpha, \gamma-\beta;\gamma;\frac x{x-1}\right) $$ yields $$ (3E_2(3\tau)-E_2(\tau))^{1/2}=\sqrt2\left(\frac{1-s}{1+s}\right)^{1/3} \,_2F_1\left(\frac16,\frac23;1;\frac{4s}{(1+s)^2}\right). $$ This corresponds to the case $(a,b)=(1/6,2/3)$. Again, the function $4s/(1+s)^2$ is modular on $\Gamma_0(3)^\ast$. This implies that $F(t_1,t_2)$ is a modular form of weight $(1,1)$ for $\Gamma_0(3)^\ast\times \Gamma_0(3)^\ast$ for the case $(a,b)=(1/6,2/3)$. \end{proof} \begin{rem} For the remaining pairs $(a,b)$ in Theorem 5.2, we simply list the exact expressions of $F(t_1,t_2)$ in terms of modular forms, since the proofs are similar. For $(a,b)=(1/12,5/12)$ and $(1/12,7/12)$, they are $$ \left(\frac{E_6(\tau_1)E_6(\tau_2)}{E_4(\tau_1)E_4(\tau_2)}\right)^{1/2}, \qquad\mbox{and}\qquad \left(\frac{E_8(\tau_1)E_8(\tau_2)}{E_6(\tau_1)E_6(\tau_2)}\right)^{1/2}, $$ respectively, where $E_k$ are the Eisenstein series in (\ref{Ek}). For $(a,b)=(1/8,3/8)$ and $(1/8,5/8)$, they are $$ \prod_{j=1}^2\left(\frac{1+s_j}{1-s_j} (2E_2(2\tau_j)-E_2(\tau_j))\right)^{1/2}, \qquad\mbox{and}\qquad \prod_{j=1}^2\left(\frac{1-s_j}{1+s_j} (2E_2(2\tau_j)-E_2(\tau_j))\right)^{1/2}, $$ respectively, where $s_j=-64\eta(2\tau_j)^{24}/\eta(\tau_j)^{24}$. For $(a,b)=(1/6,1/3)$ and $(1/6,2/3)$, they are $$ \prod_{j=1}^2\left(\frac{1+s_j}{1-s_j} (3E_2(3\tau_j)-E_2(\tau_j))\right)^{1/2}, \qquad\mbox{and}\qquad \prod_{j=1}^2\left(\frac{1-s_j}{1+s_j} (3E_2(3\tau_j)-E_2(\tau_j))\right)^{1/2}, $$ respectively, where $s_j=-27\eta(3\tau_j)^{12}/\eta(\tau_j)^{12}$. 
For $(a,b)=(1/4,1/4)$ and $(1/4,3/4)$, they are $$ \prod_{j=1}^2\left(2E_2(2\tau_j)-E_2(\tau_j)\right)^{1/2}, \qquad\mbox{and}\qquad \prod_{j=1}^2\left(2E_2(2\tau_j)-E_2(\tau_j)\right)^{1/2} \frac{1-s_j}{1+s_j}, $$ respectively, where $s_j=\theta_2(\tau_j)^4/\theta_3(\tau_j)^4$. \end{rem} \bigskip \section{Picard--Fuchs differential equations of Families of $K3$ surfaces: Part I} \label{sect6} One of the motivations of our investigation is to understand the mirror maps of families of $K3$ surfaces with large Picard numbers, e.g., $19, 18, 17$ or $16$. Some examples of such families of $K3$ surfaces were discussed in Lian--Yau \cite{LY2}, Hosono--Lian--Yau \cite{HLY} and also in Verrill--Yui \cite{VY}. Some of these $K3$ families occurred as degenerations of families of Calabi--Yau threefolds. Our goal here is to construct families of $K3$ surfaces whose Picard--Fuchs differential equations are given by the differential equations satisfied by the modular forms (of two variables) we constructed in the earlier sections. In this section, we will look into the families of $K3$ surfaces that appeared in Lian and Yau \cite{LY1,LY2}. \smallskip Let $S$ be a $K3$ surface. We recall some general theory about $K3$ surfaces which is relevant to our discussion. We know that $$H^2(S,\mathbb Z)\simeq (-E_8)^2\perp U^3$$ where $U$ is the hyperbolic plane $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ and $-E_8$ denotes the even unimodular negative definite lattice of rank $8$. The Picard group of $S$, $\mbox{Pic}(S)$, is the group of linear equivalence classes of Cartier divisors on $S$. Then $\mbox{Pic}(S)$ injects into $H^2(S,\mathbb Z)$, and the image of $\mbox{Pic}(S)$ consists of the algebraic cycles in $H^2(S,\mathbb Z)$. As $\mbox{Pic}(S)$ is torsion-free, it may be regarded as a lattice in $H^2(S,\mathbb Z)$, called the {\it Picard lattice}, and its rank is denoted by $\rho(S)$. 
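\smallskip As a quick check on the numerology (these are standard facts, which we record for the reader's convenience), note that $$ \mbox{rank}\left((-E_8)^2\perp U^3\right)=2\cdot 8+3\cdot 2=22=b_2(S), $$ and that this lattice has signature $(3,19)$; in particular, for an algebraic $K3$ surface one has $1\le\rho(S)\le 20$. 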
According to Arnold--Dolgachev \cite{D96}, two $K3$ surfaces form a mirror pair $(S, \hat S)$ if $$\mbox{Pic}(S)^{\perp}_{H^2(S,\mathbb Z)}=\mbox{Pic}(\hat S)\perp U\quad \mbox{as lattices.}$$ In terms of ranks, a mirror pair $(S, \hat S)$ is related by the identity: $$22-\rho(S)=\rho(\hat S)+2\Leftrightarrow \rho(S)+\rho(\hat S)=20.$$ \begin{ex}\label{Example 6.1} {\rm We will be interested in mirror pairs of $K3$ surfaces $(S,\hat S)$ whose Picard lattices are of the form $$\mbox{Pic}(S)=U\quad\text{and}\quad \mbox{Pic}(\hat S)=U\perp (-E_8)^2.$$ We go back to our Example \ref{Example 4.1}, and discuss the geometry behind that example. Associated to this example, there is a family of $K3$ surfaces in the weighted projective $3$-space $\mathfrak p^3[1,1,4,6]$ with weight $(q_1,q_2,q_3,q_4)=(1,1,4,6)$. There is a mirror pair of $K3$ surfaces $(S,\hat S)$. Here we know (cf. Belcastro \cite{B02}) that $$\mbox{Pic}(S)=U\quad\mbox{so that $\rho(S)=2$},$$ and that $S$ has a mirror partner $\hat S$ whose Picard lattice is given by $$\mbox{Pic}(\hat S)=U\perp (-E_8)^2\quad\mbox{ so that $\rho(\hat S)=18$}.$$ The mirror $K3$ family can be defined by a hypersurface in the orbifold ambient space $\mathfrak p^3[1,1,4,6]/G$ of degree $12$. Here $G$ is the discrete symmetry group, given explicitly by $G=(\mathbb Z/3\mathbb Z)\times(\mathbb Z/2\mathbb Z)=\langle g_1\rangle\times\langle g_2\rangle$ where $g_1, g_2$ are generators whose actions are given by: $$\begin{matrix} g_1 : (Y_1,Y_2,Y_3,Y_4) &\mapsto &(\zeta_3 Y_1, Y_2, \zeta_3^{-1}Y_3,Y_4) \\ g_2 : (Y_1,Y_2,Y_3,Y_4) &\mapsto &(Y_1,-Y_2, Y_3,-Y_4)\end{matrix}$$ (Here $\zeta_3=e^{2\pi i/3}$.) The $G$-invariant monomials are $$Y_1^{12},\,Y_2^{12},\,Y_3^3,\, Y_4^2,\, Y_1^6Y_2^6,\, Y_1Y_2Y_3Y_4.$$ The matrix of exponents is the following $6\times 5$ matrix $$\begin{pmatrix} 12 & 0 & 0 & 0 & 1 \\ 0 & 12 & 0 & 0 & 1 \\ 0 & 0 & 3 & 0 & 1 \\ 0 & 0 & 0 & 2 & 1 \\ 6 & 6 & 0 & 0 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{pmatrix}$$ whose rows satisfy exactly two independent linear relations (the matrix itself has rank $4$). 
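The statement about the matrix of exponents is best read as counting relations: the six rows span a rank-$4$ subspace of $\mathbb Q^5$, so they satisfy exactly $6-4=2$ independent linear relations, matching the two deformation parameters. A short exact-arithmetic check (illustrative, not from the original text; the matrix entries are copied from the display above):

```python
from fractions import Fraction

# Exponent matrix of the six G-invariant monomials (rows), as displayed above
M = [[12, 0, 0, 0, 1],
     [ 0,12, 0, 0, 1],
     [ 0, 0, 3, 0, 1],
     [ 0, 0, 0, 2, 1],
     [ 6, 6, 0, 0, 1],
     [ 1, 1, 1, 1, 1]]

def rank(rows):
    # row rank via exact fraction Gaussian elimination
    rows = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

assert rank(M) == 4  # hence 6 - 4 = 2 independent relations among the rows
# The two relations: row5 = (row1 + row2)/2, and row6 = row1/12 + row2/12 + row3/3 + row4/2
for j in range(5):
    assert 2 * M[4][j] == M[0][j] + M[1][j]
    assert Fraction(M[5][j]) == (Fraction(M[0][j], 12) + Fraction(M[1][j], 12)
                                 + Fraction(M[2][j], 3) + Fraction(M[3][j], 2))
```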
Therefore we may conclude that the typical $G$-invariant polynomial depends on $2$ parameters, and $\hat S$ can be defined by the following $2$-parameter family of hypersurfaces of degree $12$ $$Y_1^{12}+Y_2^{12}+Y_3^3+Y_4^2+\lambda Y_1Y_2Y_3Y_4+\phi Y_1^6Y_2^6=0$$ in $\mathfrak p^3[1,1,4,6]/G$ with parameters $\lambda$ and $\phi$. How do we compute the Picard--Fuchs differential equation of this $K3$ family? Several physics articles are devoted to this question. For instance, Klemm--Lerche--Mayr \cite{KLM}, Hosono--Klemm--Theisen--Yau \cite{HKTY}, and Lian and Yau \cite{LY2} determined the Picard--Fuchs differential equation of the Calabi--Yau family using the GKZ hypergeometric system. Also it was noticed (cf. \cite{KLM}, \cite{LY2}) that the Picard--Fuchs system of this family of $K3$ surfaces can be realized as a degeneration of the Picard--Fuchs system of the Calabi--Yau family. The family of Calabi--Yau threefolds is a degree $24$ hypersurface in $\mathfrak p^4[1,1,2,8,12]$ with $h^{1,1}=3$. The defining equation for this family is given by $$Z_1^{24}+Z_2^{24}+Z_3^{12}+Z_4^3+Z_5^2 -12\psi_0Z_1Z_2Z_3Z_4Z_5$$ $$-6\psi_1(Z_1Z_2Z_3)^6-\psi_2(Z_1Z_2)^{12}=0.$$ Its Picard--Fuchs system is given by $$\begin{matrix} L_1&=&\Theta_x(\Theta_x-2\Theta_z)-12\,x(6\Theta_x+5)(6\Theta_x+1)\\ L_2&=& \Theta_y^2-y(2\Theta_y-\Theta_z+1)(2\Theta_y-\Theta_z)\\ L_3&=&\Theta_z(\Theta_z-2\Theta_y)-z(2\Theta_z-\Theta_x+1)(2\Theta_z-\Theta_x) \end{matrix} $$ where $$x=-\frac{2\psi_1}{1728^2\psi_0^6},\, y=\frac{1}{\psi_2^2}\quad\mbox{and} \quad z=-\frac{\psi_2}{4\psi_1^2}$$ are deformation coordinates. Now the intersection of this Calabi--Yau hypersurface with the hyperplane $Z_2-t\,Z_1=0$ gives rise to a family of $K3$ surfaces $$b_0Y_1Y_2Y_3Y_4+b_1Y_1^{12}+b_2Y_2^{12}+b_3Y_3^3+b_4Y_4^2+b_5Y_1^6Y_2^6=0$$ in $\mathfrak p^3[1,1,4,6]$ of degree $12$. Taking $(b_0,b_1,b_2,b_3,b_4,b_5)= (\lambda, 1,1,1,1,\phi)$ we obtain the $2$-parameter family of $K3$ surfaces described above. 
The Picard--Fuchs system of this $K3$ family is obtained by taking the limit $y=0$ in the Picard--Fuchs system for the Calabi--Yau family: $$\begin{matrix} L_1&=&\Theta_x(\Theta_x-2\Theta_z)-12\,x(6\Theta_x+5)(6\Theta_x+1)\\ L_3&=&\Theta_z^2-z(2\Theta_z-\Theta_x+1)(2\Theta_z-\Theta_x) \end{matrix} $$ Further, if we intersect this $K3$ family with the hyperplane $Y_2-s\,Y_1=0$, we obtain a family of elliptic curves: $$c_0W_1W_2W_3+c_1W_1^6+c_2W_2^3+c_3W_3^2=0$$ in $\mathfrak p^2[1,2,3]$, whose Picard--Fuchs equation is given by $$L=\Theta_x^2-12\,x(6\Theta_x+5)(6\Theta_x+1).$$ } \end{ex} Here we describe a relation of the Picard--Fuchs system of the above family of $K3$ surfaces to the differential equation discussed in Example 4.1. \begin{rem}\label{remark 6.1} {\rm We note that, in view of our proof of Example 4.1, the process of setting $z=0$ in the above Picard--Fuchs system $\{L_1,L_3\}$ is equivalent to setting $t_1=0$ or $t_2=0$ in $x$ and $y$ in Example 4.1. Our Theorem 3.1 then implies that $F(t)=(1-t)^{1/6}\,_2F_1(1/6,1/6;1;t)$ satisfies $$ (1+x)D_x^2F+xD_xF+\frac{5}{36}xF=0 $$ with $x=t/(1-t)$, or equivalently (making a change of variable $x\mapsto-x$), $$ x(1-x)F^{\prime\prime}+(1-2x)F^\prime-\frac5{36}F=0 $$ with $x=t/(t-1)$. That is, $$ (1-t)^{1/6}\,_2F_1\left(\frac16,\frac16;1;t\right) =\,_2F_1\left(\frac16,\frac56;1;\frac t{t-1}\right). $$ This is a special case of the hypergeometric series identity $$ (1-t)^a\,_2F_1(a,b;c;t)=\,_2F_1\left(a,c-b;c;\frac t{t-1}\right). $$ } \end{rem} \bigskip We will discuss more examples of Picard--Fuchs systems of Calabi--Yau threefolds and $K3$ surfaces, which have already been considered by several people. For instance, the articles \cite{HKTY}, \cite{HLY}, and \cite{KLM} obtained the Picard--Fuchs operators for Calabi--Yau hypersurfaces with $h^{1,1}\leq 3$. 
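The hypergeometric transformation at the end of Remark \ref{remark 6.1} is easy to test numerically. The sketch below (an illustration, not part of the original text) sums the Gauss series directly, truncated, which is valid for $|x|<1$ on both sides, and checks the identity both for the special case $(a,b,c)=(1/6,1/6,1)$ used above and for generic parameters:

```python
def hyp2f1(a, b, c, x, terms=250):
    # truncated Gauss series for 2F1(a,b;c;x); converges for |x| < 1
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return s

def euler_pair(a, b, c, t):
    # left and right sides of (1-t)^a 2F1(a,b;c;t) = 2F1(a,c-b;c;t/(t-1))
    return (1 - t) ** a * hyp2f1(a, b, c, t), hyp2f1(a, c - b, c, t / (t - 1))

for t in (0.1, 0.25, 0.4):
    lhs, rhs = euler_pair(1/6, 1/6, 1.0, t)   # the special case in Remark 6.1
    assert abs(lhs - rhs) < 1e-12
    lhs, rhs = euler_pair(0.3, 0.7, 1.2, t)   # generic parameters
    assert abs(lhs - rhs) < 1e-12
```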
The next two examples consider Calabi--Yau hypersurfaces with $h^{1,1}>3$, and the paper of Lian and Yau \cite{LY2} addressed the question of determining the Picard--Fuchs systems of the families of $K3$ surfaces in $\mathfrak p^3[1,1,2,2]$ of degree $6$ and in $\mathfrak p^3[1,1,2,4]$ of degree $8$. Their results are that \smallskip (1) there is an elliptic fibration on these $K3$ surfaces, and the Picard--Fuchs systems of the $K3$ families can be derived from the Picard--Fuchs systems of the elliptic pencils, and that \smallskip (2) the solutions of the Picard--Fuchs systems for the $K3$ families are given by ``squares'' of those for the elliptic families. \smallskip The system of partial differential equations considered by Lian and Yau \cite{LY2} is $$\begin{matrix} L_1&=& \Theta_x(\Theta_x-2\Theta_z)-\lambda\,x(\Theta_x+\frac{1}{2}+\nu)(\Theta_x +\frac{1}{2}-\nu) \\ L_2&=& \Theta_z^2-z(2\Theta_z-\Theta_x+1)(2\Theta_z-\Theta_x) \end{matrix} $$ together with an ordinary differential equation $$L=\Theta_x^2-\lambda\,x(\Theta_x+\frac{1}{2}+\nu)(\Theta_x+\frac{1}{2}-\nu)$$ where $\Theta_x=x\frac{\partial}{\partial x}$, etc., and $\lambda,\, \nu$ are complex numbers. They also noted that the $K3$ families correspond, respectively, to the families of Calabi--Yau threefolds $\mathfrak p^4[1,1,2,4,4]$ of degree $12$ and $\mathfrak p^4[1,1,2,4,8]$ of degree $16$. However, the Picard--Fuchs systems for the Calabi--Yau families are not explicitly determined. \begin{ex}\label{example 6.2} {\rm We now consider a family of $K3$ surfaces in $\mathfrak p^3[1,1,2,4]$ of degree $8$. This $K3$ family is realized as a degeneration of the family of Calabi--Yau hypersurfaces $\mathfrak p^4[1,1,2,4,8]$ of degree $16$ and $h^{1,1}=4$. 
The most generic defining equation for this family is given by $$a_0Z_1Z_2Z_3Z_4Z_5+a_1Z_1^{16}+a_2Z_2^{16}+a_3Z_3^8+a_4Z_4^4+a_5Z_5^2 +a_6Z_3^2Z_4Z_5+a_7Z_1^8Z_2^8=0.$$ Again the intersection with the hyperplane $Z_2-t\,Z_1=0$ gives rise to a family of $K3$ surfaces in $\mathfrak p^3[1,1,2,4]$: $$Y_1^8+Y_2^8+Y_3^4+Y_4^2+\lambda Y_1Y_2Y_3Y_4+\phi Y_1^4Y_2^4=0.$$ Let $S$ denote this family of $K3$ surfaces. Then $$\mbox{Pic}(S)=M_{(1,1),(1,1),0}\quad\mbox{with $\rho(S)=3$}.$$ The mirror family $\hat S$ exists and its Picard lattice is $$\mbox{Pic}(\hat S)=E_8\perp D_7\perp U\quad\mbox{with $\rho(\hat S)=17$}.$$ The Picard lattices are determined by Belcastro \cite{B02}. The intersection of this family of $K3$ surfaces with the hyperplane $Y_2-s\,Y_1=0$ gives rise to the pencil of elliptic curves $$c_0W_1W_2W_3+c_1W_1^4+c_2W_2^4+c_3W_3^2=0$$ in $\mathfrak p^2[1,1,2]$ of degree $4$. This means that this family of $K3$ surfaces has an elliptic fibration with a section. We now translate this ``inductive'' structure to the Picard--Fuchs systems. The Picard--Fuchs system for the $K3$ family is given by $$\begin{matrix} L_1&=& \Theta_x(\Theta_x-2\Theta_z)-64\,x(\Theta_x+\frac{1}{2}+\frac{1}{4}) (\Theta_x+\frac{1}{2}-\frac{1}{4})\\ L_2&=& \Theta_z^2-z(2\Theta_z-\Theta_x+1)(2\Theta_z-\Theta_x) \end{matrix}$$ and the Picard--Fuchs differential equation of the elliptic family is given by $$L=\Theta_x^2-64\,x(\Theta_x+\frac{1}{2}+\frac{1}{4})(\Theta_x+\frac{1}{2}-\frac{1}{4}).$$ The same remark as Remark 6.1 is valid for the Picard--Fuchs system $\{L_1,L_2\}$, which corresponds to Theorem 4.1 (b) with $a=1/3$.} \end{ex} \begin{ex}\label{example 6.3} {\rm We consider a family of $K3$ surfaces in $\mathfrak p^3[1,1,2,2]$ of degree $6$. 
This $K3$ family is realized as a degeneration of the family of Calabi--Yau hypersurfaces $\mathfrak p^4[1,1,2,4,4]$ of degree $12$ and $h^{1,1}=5$: $$a_0Z_1Z_2Z_3Z_4Z_5+a_1Z_1^{12}+a_2Z_2^{12}+a_3Z_3^6+a_4Z_4^3+a_5Z_5^3 +a_6Z_1^6Z_2^6=0.$$ The intersection of this Calabi--Yau hypersurface with the hyperplane $Z_2-t\,Z_1=0$ gives rise to the family of $K3$ hypersurfaces in $\mathfrak p^3[1,1,2,2]$: $$Y_1^6+Y_2^6+Y_3^3+Y_4^3+\lambda Y_1Y_2Y_3Y_4+\phi Y_1^3Y_2^3=0.$$ Let $S$ denote this family of $K3$ surfaces. Then $$\mbox{Pic}(S)=M_{(1,1,1),(1,1,1),0}\quad\mbox{with $\rho(S)=4$}.$$ There is a mirror family of $K3$ surfaces, $\hat S$, with $$\mbox{Pic}(\hat S)=E_8\perp D_4\perp A_2\perp U\quad\mbox {with $\rho(\hat S)=16$}.$$ The Picard lattices are determined by Belcastro \cite{B02}. The intersection of this $K3$ family with the hyperplane $Y_2-s\,Y_1=0$ gives rise to the family of elliptic curves $$c_0W_1W_2W_3+c_1W_1^3+c_2W_2^3+c_3W_3^3=0$$ in $\mathfrak p^2[1,1,1]$ of degree $3$. The Picard--Fuchs system of this $K3$ family is $$\begin{matrix} L_1&=& \Theta_x(\Theta_x-2\Theta_z)-27\,x(\Theta_x+\frac{1}{2}+\frac{1}{6})(\Theta_x +\frac{1}{2}-\frac{1}{6})\\ L_2&=& \Theta_z^2-z(2\Theta_z-\Theta_x+1)(2\Theta_z-\Theta_x) \end{matrix} $$ and the Picard--Fuchs differential equation for the elliptic family is given by $$L=\Theta_x^2-27\,x(\Theta_x+\frac{1}{2}+\frac{1}{6})(\Theta_x+\frac{1}{2}-\frac{1}{6}).$$ We note that the same remark is valid for the Picard--Fuchs system $\{L_1,L_2\}$, corresponding to $a=1/4$ in Theorem 4.1(c). } \end{ex} We will summarize the above discussions for the families of $K3$ surfaces in the following form. \begin{prop}\label{proposition 6.1} {\sl The Picard--Fuchs systems of families of $K3$ surfaces obtained by Lian and Yau \cite{LY2} can be reconstructed starting from the modular forms (of two variables) and then finding the differential equations satisfied by them. 
In other words, the differential equations satisfied by the modular forms (of two variables) are realized as the Picard--Fuchs differential equations of the families of $K3$ surfaces, establishing, in a sense, the ``modularity'' of the $K3$ families.} \end{prop} \section{Picard--Fuchs differential equations of families of $K3$ surfaces: Part II} \label{sect7} The purpose of this section is to study (one-parameter) families of $K3$ surfaces (some of which are realized as degenerations of some families of Calabi--Yau threefolds), whose mirror maps are expressed in terms of Hauptmodules for genus zero subgroups of the form $\Gamma_0(N)^*$, aiming to identify their Picard--Fuchs systems with differential equations associated to some modular forms (of two variables) (e.g., in Theorem 5.1). Dolgachev \cite{D96} has discussed several examples of families of $M_N$-polarized $K3$ surfaces corresponding to $\Gamma_0(N)^*$ for small values of $N$, e.g., $N=1, 2$ and $3$. Lian and Yau \cite{LY1} have given examples of families of $K3$ surfaces and their Picard--Fuchs differential equations of order $3$. The modular groups are genus zero subgroups of the form $\Gamma_0(N)^*$ with $N$ ranging from $1$ to $30$. Here we try to analyze their examples and their method in relation to our results in Section 5. \begin{ex}\label{example 7.1} {\rm We start with the hypergeometric equation: $$t(1-t)f^{\prime\prime}+[1-(1+a+b)t]f^{\prime}-abf=0$$ in Theorem 5.1. Take $a=b=\frac{1}{4}$ and consider a one-parameter deformation of this equation of the form: $$t(1-t)f^{\prime\prime}+(1-\frac{3}{2}t)f^{\prime}-\frac{1}{16}(1-4\nu ^2)f=0$$ with a deformation parameter $\nu$. This has a unique solution $f_0(t)$ near $t=0$ with $f_0(0)=1$, and a solution $f_1(t)$ with $f_1(t)=f_0(t)\mbox{log}\,t+O(t)$. The inverse $t(q)$ of the power series $q=\mbox{exp}(\frac{f_1(t)}{f_0(t)})=t + O(t^2)$ defines an invertible holomorphic function in a disc, and $t(q)$ is the so-called mirror map. 
Put $$x(q)=\frac{1}{\lambda}t(\lambda q)\quad\mbox{for a given $\lambda$}.$$ One of the main results of Lian and Yau \cite{LY1} is that for any complex numbers $\lambda,\,\nu$ with $\lambda\neq 0$, there is a power series identity: $$_3F_2(\frac{1}{2},\frac{1}{2}+\nu,\frac{1}{2}-\nu;1,1;\lambda\,x(q))^2 =\frac{x^{\prime\,2}}{x^2(1-\lambda\,x)}$$ in the common domain of definitions of both sides. As before, $x^{\prime}(q)=D_q x(q)$. For instance, take $(\lambda,\nu)=(2^63^3,\frac{1}{3}),\,(2^8,\frac{1}{4}),\, (2^23^3,\frac{1}{6})$ and $(2^6,0)$; then these relations are given below. The mirror maps in these examples are expressed in terms of Hauptmodules of genus zero modular groups of the form $\Gamma_0(N)^*$ ($\Gamma_0(1)^*=\Gamma$). $$\begin{matrix} \mbox{Label} & \mbox{Modular Relation} && & \mbox{Modular Group} \\ I &:\Bigg(\sum_{n=0}^{\infty}\frac{(6n)!}{(3n)!(n!)^3}\frac{1}{j(\tau)^n}\Bigg)^2& =& E_4(q) & \Gamma \\ II &:\Bigg(\sum_{n=0}^{\infty}\frac{(4n)!}{(n!)^4} x_2(\tau)^n\Bigg)^2& = &\frac{x_2^{\prime\,2}}{x_2^2(1-256x_2)} & \Gamma_0(2)^*\\ III &:\Bigg(\sum_{n=0}^{\infty}\frac{(2n)!(3n)!}{(n!)^5} x_3(\tau)^n\Bigg)^2&= &\frac{x_3^{\prime\,2}}{x_3^2(1-108x_3)} & \Gamma_0(3)^*\\ IV &:\Bigg(\sum_{n=0}^{\infty}\frac{(2n)!^3}{(n!)^6} x_4(\tau)^n\Bigg)^2&= &\frac{x_4^{\prime\,2}}{x_4^2(1-64x_4)} & \Gamma_0(4)^*. \end{matrix} $$ \noindent Here $j(\tau),\, x_2(\tau), x_3(\tau)$ and $x_4(\tau)$ are Hauptmodules for the genus zero subgroups $\Gamma,\, \Gamma_0(2)^*,\,\Gamma_0(3)^*$ and $\Gamma_0(4)^*$, respectively. Observe that in each modular relation, the right hand side is a modular form of weight $4$ on the corresponding genus zero subgroup. 
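Relation I can be tested numerically using only the classical $q$-expansions $E_4=1+240\sum_{n\ge1}\sigma_3(n)q^n$, $E_6=1-504\sum_{n\ge1}\sigma_5(n)q^n$, $\Delta=(E_4^3-E_6^2)/1728$ and $j=E_4^3/\Delta$. The sketch below is an independent check, not from the original text; the small real value of $q$ is chosen arbitrarily so that all truncated series converge quickly:

```python
import math

def sigma(k, n):
    # divisor power sum sigma_k(n)
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

q = 1e-4
E4 = 1 + 240 * sum(sigma(3, n) * q ** n for n in range(1, 12))
E6 = 1 - 504 * sum(sigma(5, n) * q ** n for n in range(1, 12))
Delta = (E4 ** 3 - E6 ** 2) / 1728
j = E4 ** 3 / Delta                     # j = 1/q + 744 + 196884 q + ...

lhs = sum(math.factorial(6 * n) / (math.factorial(3 * n) * math.factorial(n) ** 3) / j ** n
          for n in range(25))
assert abs(lhs ** 2 - E4) < 1e-8        # relation I: (sum ...)^2 = E_4
```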
We know that $_3F_2(\frac{1}{2},\frac{1}{2}+\nu,\frac{1}{2}-\nu;1,1;\lambda\,x)$ is the unique solution with the leading term $1+O(x)$ to the differential operator $$L=\Theta_x^3-\lambda\,x(\Theta_x+\frac{1}{2})(\Theta_x+\frac{1}{2} +\nu)(\Theta_x+\frac{1}{2}-\nu).$$ In these examples, this differential operator is identified with the Picard--Fuchs differential operator for a one-parameter family of $K3$ surfaces, which are obtained by degenerating Calabi--Yau families. (Cf. Lian and Yau \cite{LY1}, Klemm, Lerche and Mayr \cite{KLM}.) $$\begin{matrix} \quad & \mbox{CY family} & \mbox{$K3$ family} & \mbox{PF Operator} \\ I & X(1,1,2,2,2)[8] & X(1,1,1,3)[6] & \Theta^3-8x(6\Theta+5)(6\Theta+3)(6\Theta+1) \\ II & X(1,1,2,2,6)[12] & X(1,1,1,1)[4] & \Theta^3-4x(4\Theta+3)(4\Theta+2)(4\Theta+1) \\ III & X(1,1,2,2,2,2)[6,4] & X(1,1,1,1,1)[3,2] & \Theta^3-6x(2\Theta+1)(3\Theta+2)(3\Theta+1) \\ IV & X(1,1,2,2,2,2,2)[4,4,4] & X(1,1,1,1,1,1)[2,2,2]& \Theta^3-8x(2\Theta+1)^3 \end{matrix} $$ \smallskip The $K3$ families I and II have already been discussed in Lian--Yau \cite{LY2} (see also Verrill--Yui \cite{VY}) in relation to mirror maps. The Picard group of I (resp. II) is given by $$(-E_8)^2\oplus U_2\oplus \langle-4\rangle\quad\text{(resp. $(-E_8)^2\oplus U_2\oplus \langle-2\rangle$)}.$$ The Calabi--Yau family III can be realized as a complete intersection of the two hypersurfaces: $$\begin{matrix} Y_1^6+Y_2^6+Y_3^3+Y_4^3+Y_5^3+Y_6^3=0 \\ Y_1^4+Y_2^4+Y_3^2+Y_4^2+Y_5^2+Y_6^2=0 \end{matrix} $$ This Calabi--Yau family has $h^{1,2}=68$ and $h^{1,1}=2$. The $K3$ family is realized as the fiber space by setting $$Y_1=Z_1^{1/2},\quad Y_2=\lambda Z_1^{1/2},\quad\text{and}\quad Y_i=Z_i\quad\text{for $i=3,\cdots, 6$}$$ where $\lambda\in{\mathfrak p}^1$ is a parameter. 
That is, we obtain a family of complete intersection $K3$ surfaces $X(1,1,1,1,1)[3,2]$: $$\begin{matrix} (1+\lambda^6)Z_1^3+Z_3^3+Z_4^3+Z_5^3+Z_6^3=0\\ (1+\lambda^4)Z_1^2+Z_3^2+Z_4^2+Z_5^2+Z_6^2=0 \end{matrix} $$ {\bf Question: What is the Picard group of this K3 family?} \smallskip In a similar manner, the Calabi--Yau family IV can be realized as a complete intersection of the three hypersurfaces: $$\begin{matrix} Y_1^4+Y_2^4+Y_3^2+Y_4^2+Y_5^2+Y_6^2+Y_7^2=0\\ Z_1^4+Z_2^4+Z_3^2+Z_4^2+Z_5^2+Z_6^2+Z_7^2=0\\ W_1^4+W_2^4+W_3^2+W_4^2+W_5^2+W_6^2+W_7^2=0 \end{matrix}$$ The $K3$ family is realized as the fiber space by setting $$Y_1=Y_1^{\prime \frac{1}{2}},\, Y_2=\lambda Y_1^{\prime \frac{1}{2}}\quad\text{and}\quad Y_i=Y_i^{\prime}\quad\text{for $i=3,\cdots, 7$}$$ and similarly for $Z_1,\, Z_2$ and $W_1,\, W_2$, where $\lambda\in\mathfrak p^1$ is a parameter. This gives rise to the $K3$ family $X(1,1,1,1,1,1)[2,2,2]$: $$\begin{matrix} (1+\lambda^4)Y_1^{\prime 2}+Y_3^{\prime 2}+Y_4^{\prime 2}+Y_5^{\prime 2}+Y_6^{\prime 2} +Y_7^{\prime 2}=0\\ (1+\lambda^4)Z_1^{\prime 2}+Z_3^{\prime 2}+Z_4^{\prime 2}+Z_5^{\prime 2}+Z_6^{\prime 2} +Z_7^{\prime 2}=0\\ (1+\lambda^4)W_1^{\prime 2}+W_3^{\prime 2}+W_4^{\prime 2}+W_5^{\prime 2}+W_6^{\prime 2} +W_7^{\prime 2}=0 \end{matrix} $$ \medskip {\bf Question: What is the Picard group of this K3 family?} \medskip \noindent Here is the summary: (1) one starts with a Hauptmodule $x(=x(q))$ for a genus zero subgroup $\Gamma_0(N)^*$; (2) then one associates to it a modular form $\frac{x^{\prime\,2}}{x\,r(x)}$ of weight $4$, (3) and a power series solution $\omega_0(x)$ of an order three differential operator; (4) this differential operator coincides with the Picard--Fuchs differential operator of a one-parameter family of $K3$ surfaces in weighted projective spaces.} \end{ex} Lian and Yau \cite{LY1} further considered generalizations of the above phenomenon, constructing many more examples. 
Given a genus zero subgroup of the form $\Gamma_0(N)^*$ and a Hauptmodul $x(q)$, construct (by taking a Schwarzian derivative) a modular form $E$ of weight $4$ of the form $\frac{x^{\prime\,2}}{x\,r(x)}$ and a differential operator $L$ whose monodromy has maximal unipotency at $x=0$, such that $L\,E^{1/2}=0$. Further, identify $L$ as the Picard--Fuchs differential operator of a family of $K3$ surfaces. Let $\omega_0(x)$ denote the fundamental period of this manifold. Then it should be subject to the modular relation $$\omega_0(x)^2=\frac{x^{\prime\,2}}{x\,r(x)}.$$ \medskip How do we associate modular forms of weight $(1,1)$ corresponding to the groups $\Gamma_0(N)^*\times \Gamma_0(N)^*$ in this situation? Taking the square root of both sides of the modular relation, we obtain that $\omega_0(x)^{1/2}$ is a modular form (of one variable) of weight $1$ for the group $\Gamma_0(N)^*$. Then taking $\omega_0(q_1)\omega_0(q_2)$, we see that this is a modular form for $\Gamma_0(N)^*\times \Gamma_0(N)^*$ of weight $(1,1)$. Then this modular form (of two variables) satisfies a differential equation, which may be identified with the Picard--Fuchs differential equation of the $K3$ family considered above. We summarize the above discussion in the following proposition. \begin{prop}\label{proposition 7.1} {\sl The examples I--IV above are related to our Theorem 5.2. Indeed, the connection is established by the identity $$ _2F_1\left(a,b;a+b+\frac12;z\right)^2=\, _3F_2\left(2a,a+b,2b;a+b+\frac12,2a+2b;z\right). $$ More explicitly, the examples I--IV correspond to the cases $(1/12,5/12)$, $(1/8,3/8)$, $(1/6,1/3)$, and $(1/4,1/4)$, respectively.} \end{prop} Note that the generalized hypergeometric series $_3F_2(\alpha_1,\alpha_2,\alpha_3;1,1;z)$ satisfies a differential equation of the form: $$[\Theta_z^3-\lambda\,z(\Theta_z+\alpha_1)(\Theta_z+\alpha_2)(\Theta_z+\alpha_3)]f=0$$ for some $\alpha_1,\alpha_2,\alpha_3\in{\mathbb Q}$ and $\lambda\in{\mathbb Q},\, \neq 0$. 
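Both the identity in Proposition \ref{proposition 7.1} and the order-$3$ recurrences behind the table in Example \ref{example 7.1} can be checked with exact rational arithmetic. In the sketch below (an illustration, not from the original text), Clausen-type coefficients are compared term by term for $(a,b)=(1/4,1/4)$, the case corresponding to example IV, and each coefficient family from the table is checked against the factored form of its Picard--Fuchs operator as read off from the table:

```python
from fractions import Fraction
from math import factorial as f

def poch(a, n):
    # Pochhammer symbol (a)_n with exact rational arithmetic
    p = Fraction(1)
    for k in range(n):
        p *= a + k
    return p

# Proposition 7.1 for (a,b) = (1/4,1/4): the Cauchy square of 2F1(a,b;a+b+1/2;z)
# must reproduce 3F2(2a,a+b,2b;a+b+1/2,2a+2b;z) coefficient by coefficient.
a = b = Fraction(1, 4)
u = [poch(a, n) * poch(b, n) / (poch(a + b + Fraction(1, 2), n) * f(n)) for n in range(12)]
v = [poch(2 * a, n) * poch(a + b, n) * poch(2 * b, n)
     / (poch(a + b + Fraction(1, 2), n) * poch(2 * a + 2 * b, n) * f(n)) for n in range(12)]
for n in range(12):
    assert sum(u[k] * u[n - k] for k in range(n + 1)) == v[n]

# The operators Theta^3 - lambda x (...) force (n+1)^3 a_{n+1} = r(n) a_n on a series solution:
cases = [
    (lambda n: f(6*n) // (f(3*n) * f(n)**3), lambda n: 8 * (6*n+5) * (6*n+3) * (6*n+1)),  # I
    (lambda n: f(4*n) // f(n)**4,            lambda n: 4 * (4*n+3) * (4*n+2) * (4*n+1)),  # II
    (lambda n: f(2*n) * f(3*n) // f(n)**5,   lambda n: 6 * (2*n+1) * (3*n+2) * (3*n+1)),  # III
    (lambda n: f(2*n)**3 // f(n)**6,         lambda n: 8 * (2*n+1)**3),                   # IV
]
for coeff, r in cases:
    for n in range(20):
        assert (n + 1) ** 3 * coeff(n + 1) == r(n) * coeff(n)
```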
A natural question we may ask now is: Is it possible to construct families of $K3$ surfaces corresponding to Theorem 5.2 from this observation? When the order $3$ differential equation of this form becomes the symmetric square of an order $2$ differential equation, and if the order $2$ differential equation is realized as the Picard--Fuchs differential equation of a family of elliptic curves, we may be able to construct a family of $K3$ surfaces using the method of Long \cite{Long}, especially when the Picard number of the $K3$ family in question is $19$ or $20$. In fact, Rodriguez--Villegas \cite{RV03} has discussed $4$ families of $K3$ surfaces which fall into this class. However, at the moment, we do not know if there are readily available methods for constructing $K3$ families starting from differential equations. \begin{rem} {\rm If we consider the order $4$ generalized hypergeometric series, there are $14$ differential equations of the form $$[\Theta_z^4-\lambda\,z(\Theta_z+\alpha_1)(\Theta_z+\alpha_2)(\Theta_z+\alpha_3)(\Theta_z+\alpha_4)]f =0$$ for some $\alpha_i\in{\mathbb Q}$ and $\lambda\in {\mathbb Q},\, \neq 0$. These $14$ differential operators can be found in Almkvist--Zudilin \cite{AZ}. As Doran and Morgan \cite{DorMor05} explained, only 13 of the 14 such operators are known to be realizable as the Picard--Fuchs differential operator for a family of smooth Calabi--Yau threefolds with $h^{2,1}=1$. For the $13$ cases, Klemm and Theisen \cite{KT} (see also Rodriguez--Villegas \cite{RV03}) found the corresponding families of Calabi--Yau threefolds in weighted projective spaces. The missing case is $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)=(1/12, 5/12, 7/12, 11/12)$. For more thorough discussions on this topic, the reader should consult the article of Doran and Morgan \cite{DorMor05}, where they classify integral monodromy representations. 
} \end{rem} \section{Generalizations and open problems} \label{sect8} \begin{pr}\label{problem 8.1} {\rm We have determined differential equations satisfied by modular forms (of two variables) of weight $(1,1)$. The arguments can be generalized to modular forms (of two variables) of any weight $(k_1,k_2)$, using the result of Yang \cite{Yg04}. However, the differential equations satisfied by them become too big to display.} \end{pr} \begin{pr}\label{problem 8.2} {\rm A natural generalization is to consider modular forms of three (or more than three) variables $F(\tau_1,\tau_2,\tau_3)$ of weight $(k_1,k_2,k_3)$ on $\Gamma_1\times \Gamma_2\times \Gamma_3$. Examples of this kind should correspond to Picard--Fuchs differential equations of families of Calabi--Yau threefolds, or Picard--Fuchs differential equations of degenerate families of Calabi--Yau fourfolds.} \end{pr} \section*{Acknowledgments} The collaboration for this work started when Y. Yang visited N. Yui at Queen's University, Kingston, Canada, in August 2004. The work was completed at Tsuda College, Tokyo, Japan, in 2005, and revisions were incorporated at Tsuda College again in the summer of 2006. Both authors were visiting professors at that institution in June 2005 and in August 2006. We thank Tsuda College for its hospitality. We thank Jan Stienstra for his interest and helpful comments and suggestions, and Chuck Doran and Don Zagier for their comments on the earlier version(s) of this paper. \vskip 1.5cm
\section{Introduction \label{sec:Intro}} Applications of nonclassical states can only manifest the true power of quantum mechanics. This is so because the working of any technology that does not use nonclassical state(s) can be understood/explained classically (i.e., without using quantum mechanics). Recently, many applications of nonclassical states manifesting the power of quantum mechanics have been reported. Specifically, the squeezed vacuum state has been used in the detection of gravitational waves \cite{abbott2016observation,abbott2016gw151226} at the Laser Interferometer Gravitational-Wave Observatory (LIGO). Further, with the recent progress in the fields of quantum computation and communication, the importance and necessity of nonclassical states have been strongly established. For example, it has been established that entangled states are essential for the implementation of a set of schemes for quantum cryptography \cite{ekert1991quantum,thapliyal2015applications,thapliyal2017quantum}, quantum teleportation \cite{bennett1993teleporting}, and dense-coding \cite{bennett1992communication}; Bell nonlocal states are required for device independent quantum key distribution (DI-QKD) \cite{acin2006bell}; squeezed states are useful for continuous variable quantum cryptography \cite{hillery2000quantum}; and antibunched states are useful in building single photon sources \cite{yuan2002electrically,pathak2010recent}. These interesting applications of nonclassical states have motivated many groups to investigate the possibilities of generating nonclassical states using frequent and important physical processes. One such important physical process is the Raman process, which has several variants (e.g., spontaneous Raman scattering, stimulated Raman scattering, degenerate and non-degenerate hyper-Raman scattering, coherent anti-Stokes Raman scattering, coherent anti-Stokes hyper-Raman scattering); clubbing them together, we refer to them as Raman processes. 
Among these Raman processes, the non-degenerate hyper-Raman process is the most general in nature, as the Hamiltonians of spontaneous and stimulated Raman scattering and of the degenerate hyper-Raman process (in the simplest case, which can be viewed as a 3 photon analogue of stimulated Raman scattering \cite{sen2007squeezing}) can be obtained as limiting cases of it. Nonclassicality in spontaneous and stimulated Raman scattering (see \cite{sen2011sub,sen2007quantum,sen2005squeezed,sen2008amplitude} and references therein, Section 10.4 of \cite{perina1991quantum} and \cite{miranowicz1994quantum} for reviews) and in degenerate hyper-Raman processes \cite{perinova1979quantum2,szlachetka1980photon,perinova1984sub,sen2007squeezing} has already been studied in detail. However, the non-degenerate hyper-Raman process has not yet been investigated rigorously because of its inherent mathematical complexity and the potential difficulties associated with the experimental realization of this process. This is what motivated us to investigate the possibilities of observing nonclassical effects in the non-degenerate hyper-Raman process. We were further motivated by the fact that in Ref. \cite{olivik1995non}, Oliv{\'\i}k and Pe{\v{r}}ina noted that the higher order non-linearity present in the hyper-Raman process may lead to more significant nonclassical effects compared to the standard Raman process (at least in the context of the statistical properties of radiation fields and quadrature squeezing). In fact, hyper-Raman scattering represents a very interesting nonlinear optical process, as it allows self-interaction of the pump modes and thus leads to the generation of different types of nonclassicality. Specifically, the presence of antibunched, sub-Poissonian and squeezed light in the degenerate hyper-Raman processes has already been reported in the past \cite{perinova1979quantum2,szlachetka1980photon,perinova1984sub,sen2007squeezing}. 
However, only antibunching in the photon and phonon modes of its non-degenerate counterpart has been reported until now \cite{perinova1979quantum}. The fact that the limiting cases of the non-degenerate hyper-Raman process have found applications in various spheres of modern science has also motivated us to perform the present study. To be precise, quantum repeaters \cite{grangier2005quantum,duan2001long} have been built using the spontaneous Raman process; stimulated Raman scattering has been used to design devices for laser cooling of solids \cite{rand2013raman}, highly sensitive label-free biomedical imaging \cite{freudiger2008label}, imaging of a degenerate Bose-Einstein gas \cite{sadler2007coherence}, and to design a quantum random number generator (QRNG) \cite{bustard2011quantum}, which is a true random number generator having no classical analogue. The multi-photon processes, such as hyper-Raman processes, may reveal many-body correlation functions and thus useful information regarding the nonlinear medium (see \cite{kielich1993multi} for a review). A set of possibilities for experimentally observing these processes has long been discussed \cite{ziegler1990hyper}. One such possibility was reported in Ref. \cite{french1975versatile}, where the output of a hyper-Raman spectrometer was illustrated and analyzed. Theoretical proposals for studying hyper-Raman spectroscopy are still of prime interest \cite{valley2010theoretical,butet2015surface}. Specifically, these multi-photon processes possess a particular experimental advantage, as their signals are spectrally well separated from the input laser \cite{butet2015surface}. It has also been shown in the past that, due to the specific selection rules involved in these processes, they can reveal information not accessible by Raman and infrared spectroscopy \cite{butet2015surface}. 
Further, nanosensors based on the surface-enhanced hyper-Raman processes enable measurement over a wide range of pH, circumventing the use of multiple probes \cite{kneipp2007one}. Also, due to the wide applications of the hyper-Raman processes and other nonlinear optical phenomena in quantum information processing tasks, their analogues with single atoms and virtual photons have also been proposed \cite{kockum2017deterministic}. In addition, with the recent growth in experimental facilities, a set of experimental results using hyper-Raman scattering has been presented \cite{kneipp2007one,kozich2007non}. A brief review of the numerous applications and the future scope of hyper-Raman processes may be found in \cite{madzharova2017surface}. Motivated by the above, to investigate the possibilities of observing nonclassical features in the non-degenerate hyper-Raman process (illustrated in Figure \ref{fig:scheme}), a completely quantum mechanical description of the system is used here to construct a Hamiltonian of the system. To obtain a closed analytic expression for the time evolution of each mode involved here, we have used the Sen-Mandal perturbative technique (\cite{sen2005squeezed,thapliyal2014higher,thapliyal2014nonclassical,thapliyal2016linear} and references therein), which is known to be superior to the corresponding short-time technique \cite{perina1991quantum,szlachetka1979dynamics,szlachetka1980photon}. Further, we have established the general nature of the obtained solution by recovering the existing Sen-Mandal solutions of the Raman and degenerate hyper-Raman processes \cite{sen2005squeezed,sen2007quantum,sen2008amplitude,sen2011sub,sen2007squeezing} as limiting cases of the solution obtained here; these solutions were already reduced to the corresponding short-time solutions in the past \cite{perina1991quantum,szlachetka1979dynamics,szlachetka1980photon}. 
Subsequently, the obtained time evolution of all the photon and phonon modes has allowed us to use a finite set of moments-based criteria \cite{miranowicz2010testing} to establish the highly nonclassical behavior of the hyper-Raman processes. Specifically, the model (Hamiltonian) used here is capable of dealing with the stimulated, spontaneous and partially spontaneous non-degenerate hyper-Raman processes, with some or all of the modes treated as stimulated. In all these cases, we have analyzed the possibilities of generating lower- and higher-order single mode nonclassicality. Specifically, in what follows, we investigate the possibilities of observing single mode antibunched and squeezed states, as well as compound mode nonclassicality in the form of intermodal squeezing, antibunching and entanglement. Further, the feasibility of higher order entanglement in the hyper-Raman processes is examined. The remaining part of the paper is organized as follows. The model Hamiltonian for the non-degenerate hyper-Raman process and its solution are reported in Section \ref{sec:The-model-Hamiltonian}. A list of criteria to be used for the study of the nonclassical properties of the non-degenerate hyper-Raman process is given in Section \ref{sec:Criteria-of-nonclassicalities}. In Section \ref{sec:Nonclassicality-observed}, we summarize our results illustrating the presence and evolution of various types of nonclassicality and discuss the obtained results in detail, before finally concluding the paper in Section \ref{sec:Conclusion}. 
\section{The model Hamiltonian and its solution \label{sec:The-model-Hamiltonian}} The most general Hamiltonian of the hyper-Raman processes is \begin{equation} \begin{array}{lcl} H & = & \sum_{i=1}^{k}\omega_{i}a_{i}^{\dagger}a_{i}+\omega_{b}b^{\dagger}b+\omega_{c}c^{\dagger}c+\omega_{d}d^{\dagger}d\\ & - & \left(g\prod_{i=1}^{k}a_{i}b^{\dagger}c^{\dagger}+\chi^{*}\prod_{i=1}^{k}a_{i}cd^{\dagger}+{\rm h.c.}\right), \end{array}\label{eq:hamiltonian} \end{equation} where $a_{i}$ is the annihilation operator for the $i$th laser (pump) mode, and $b,\,c,$ and $d$ are the annihilation operators corresponding to the Stokes, phonon (vibration), and anti-Stokes modes, respectively. The Hamiltonian given in Eq. (\ref{eq:hamiltonian}) corresponds to $k$ pump modes in the non-degenerate hyper-Raman process (shown in Figure \ref{fig:scheme}). It is straightforward to obtain the Hamiltonian corresponding to the Raman or $k$-pump degenerate hyper-Raman process simply by considering $k=1$ or $\omega_{i}=\omega_{p}$, respectively. \begin{figure}[h] \begin{centering} \includegraphics[scale=0.8]{figure1}\caption{\label{fig:scheme}(Color online) The schematic energy diagram for multi-photon non-degenerate hyper-Raman processes. Here, we have shown $k$ pump $\left(a_{i}\right),$ Stokes $\left(b\right),$ vibrational (phonon) $\left(c\right),$ and anti-Stokes $\left(d\right)$ modes.} \par\end{centering} \end{figure} Specifically, if we choose $a_{1}=a_{2}$ (i.e., $\omega_{1}=\omega_{2}$) for $k=2$, we obtain the Hamiltonian of the degenerate hyper-Raman process, which has already been studied in reasonable detail in Ref. \cite{sen2007squeezing}. Apart from this, the 2-pump mode non-degenerate hyper-Raman process was discussed in Ref. \cite{perina1984relations}. The present Hamiltonian can be viewed as a generalization of this case to the multi-mode pump non-degenerate hyper-Raman process. However, the nonclassical properties of the multi-mode pump non-degenerate Hamiltonian have not been studied in comparable detail. 
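As a structural cross-check of the Hamiltonian (\ref{eq:hamiltonian}), the following sketch (our own illustration, not part of the analytic treatment; all parameter values and the Fock-space cutoff are arbitrary assumptions) builds the $k=1$ (Raman) special case on a truncated Fock space and verifies that it is Hermitian, as any Hamiltonian must be.

```python
# Sketch: the k = 1 (Raman) special case of the Hamiltonian in Eq. (1) on a
# truncated Fock space. All numerical values below are illustrative only.
import numpy as np

def destroy(dim):
    """Truncated single-mode annihilation operator."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

dim = 3                      # Fock-space cutoff per mode (illustrative)
a_ = destroy(dim)
I = np.eye(dim)

def embed(op, pos):
    """Embed a single-mode operator in the 4-mode space (pump, Stokes, phonon, anti-Stokes)."""
    ops = [I, I, I, I]
    ops[pos] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

a, b, c, d = (embed(a_, p) for p in range(4))
wa, wb, wc, wd = 5.0, 4.0, 1.0, 6.0   # illustrative mode frequencies
g, chi = 0.1, 0.05 + 0.02j            # illustrative coupling constants

# Free part: sum of omega * number operators for all four modes
H = (wa * a.conj().T @ a + wb * b.conj().T @ b
     + wc * c.conj().T @ c + wd * d.conj().T @ d)
# Interaction part: g a b† c† + chi* a c d†, plus its Hermitian conjugate
V = g * a @ b.conj().T @ c.conj().T + np.conj(chi) * a @ c @ d.conj().T
H = H - (V + V.conj().T)

assert np.allclose(H, H.conj().T)     # Hermitian by construction
```

The check confirms that the $+{\rm h.c.}$ term in Eq. (\ref{eq:hamiltonian}) is what makes the interaction Hermitian even for complex $\chi$.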
This is why we are interested in the operator solution of the Hamiltonian of non-degenerate multi-photon pump hyper-Raman process. To obtain the solution, first we write the Heisenberg's equations of motion for different modes as \begin{equation} \begin{array}{lcl} \dot{a_{j}} & = & -i\omega_{j}a_{j}+\prod_{i=1:i\neq j}^{k}\left(ig^{*}a_{i}^{\dagger}bc+i\chi a_{i}^{\dagger}c^{\dagger}d\right),\\ \dot{b} & = & -i\omega_{b}b+ig\prod_{i=1}^{k}a_{i}c^{\dagger},\\ \dot{c} & = & -i\omega_{c}c+\prod_{i=1}^{k}\left(iga_{i}b^{\dagger}+i\chi a_{i}^{\dagger}d\right),\\ \dot{d} & = & -i\omega_{d}d+i\chi^{*}\prod_{i=1}^{k}a_{i}c, \end{array}\label{eq:field operators} \end{equation} for which we derive the solution, using Sen-Mandal perturbative (\cite{thapliyal2014higher,thapliyal2014nonclassical,thapliyal2016linear,sen2005squeezed} and references therein) approach, as \begin{widetext} \begin{equation} \begin{array}{lcl} a_{j}(t) & = & \prod_{i=1:i\neq j}^{k}\left(f_{1_{j}}a_{j}(0)+f_{2_{j}}a_{i}^{\dagger}(0)b(0)c(0)+f_{3_{j}}a_{i}^{\dagger}(0)c^{\dagger}(0)d(0)\right.\\ & + & f_{4_{j}}a_{j}(0)A_{l}b^{\dagger}(0)b(0)c^{\dagger}(0)c(0)+f_{5_{j}}a_{i}^{\dagger}(0)a_{i}(0)a_{j}(0)b(0)b^{\dagger}(0)\\ & + & f_{6_{j}}a_{i}^{\dagger}(0)a_{i}(0)a_{j}(0)c^{\dagger}(0)c(0)+f_{7_{j}}a_{j}(0)A_{l}b(0)c^{2}(0)d^{\dagger}(0)\\ & + & f_{8_{j}}a_{j}^{\dagger}(0)a_{i}^{\dagger2}(0)b(0)d(0)+f_{9_{j}}a_{j}(0)A_{l}b^{\dagger}(0)c^{\dagger2}(0)d(0)\\ & + & f_{10_{j}}a_{j}(0)a_{i}^{\dagger}(0)a_{i}(0)d^{\dagger}(0)d(0)+f_{11_{j}}a_{j}(0)A_{l}c(0)c^{\dagger}(0)d^{\dagger}(0)d(0)\\ & + & \left.f_{12_{j}}a_{i}^{\dagger}(0)a_{i}(0)a_{j}(0)c^{\dagger}(0)c(0)\right),\\ b(t) & = & \prod_{i=1}^{k}\left(g_{1}b(0)+g_{2}a_{i}(0)c^{\dagger}(0)+g_{3}a_{i}^{2}(0)d^{\dagger}(0)+g_{4}a_{i}^{\dagger}(0)a_{i}(0)b(0)\right.\\ & + & \left.g_{5}A_{l}b(0)c^{\dagger}(0)c(0)+g_{6}A_{l}c^{\dagger2}(0)d(0)\right),\\ c(t) & = & 
\prod_{i=1}^{k}\left(h_{1}c(0)+h_{2}a_{i}(0)b^{\dagger}(0)+h_{3}a_{i}^{\dagger}(0)d(0)+h_{4}a_{i}^{\dagger}(0)a_{i}(0)c(0)\right.\\ & + & h_{5}A_{l}b^{\dagger}(0)b(0)c(0)+h_{6}A_{l}b^{\dagger}(0)c^{\dagger}(0)d(0)+h_{7}A_{l}c(0)d^{\dagger}(0)d(0)\\ & + & \left.h_{8}a_{i}^{\dagger}(0)a_{i}(0)c(0)\right),\\ d(t) & = & \prod_{i=1}^{k}\left(l_{1}d(0)+l_{2}a_{i}(0)c(0)+l_{3}a_{i}^{2}(0)b^{\dagger}(0)+l_{4}A_{l}b(0)c^{2}(0)\right.\\ & + & \left.l_{5}A_{l}c^{\dagger}(0)c(0)d(0)+l_{6}a_{i}(0)a_{i}^{\dagger}(0)d(0)\right), \end{array}\label{eq:solution} \end{equation} \end{widetext} \noindent where $A_{l}=\prod_{i}a_{i}(0)a_{i}^{\dagger}(0)-\prod_{i}a_{i}^{\dagger}(0)a_{i}(0)$ (which gives us $l=2^k-1$ terms in the $k$ pump mode case). For example, for the 2-pump non-degenerate hyper-Raman process, we obtain $l=3$ terms as follows: $A_{3}=\left(\left\{a_{1}^{\dagger}(0)a_{1}(0)+1\right\}\left\{a_{2}^{\dagger}(0)a_{2}(0)+1\right\}\right.-\left.a_{1}^{\dagger}(0)a_{1}(0)a_{2}^{\dagger}(0)a_{2}(0)\right)=\left(a_{1}^{\dagger}(0)a_{1}(0)+a_{2}^{\dagger}(0)a_{2}(0)+1\right).$ Further, the various terms in Eq. (\ref{eq:solution}) are given as Eqs. (\ref{eq:solutions of f})-(\ref{eq:eq:solutions of l}) in Appendix A. The details of obtaining the Sen-Mandal perturbative solution are given in Appendix B. It is also worth mentioning here that we have neglected all terms higher than quadratic in the coupling constants $\chi$ and $g$ while obtaining the present solution. The most general nature of the Hamiltonian describing the hyper-Raman processes used here has already been established. Moreover, the obtained solution is also quite general in nature, and all the existing solutions of the various Raman \cite{sen2005squeezed,sen2007quantum,sen2008amplitude,sen2011sub} and degenerate hyper-Raman \cite{sen2007squeezing} processes can be obtained as limiting cases of the present solution. 
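The explicit $k=2$ example quoted above, $A_{3}=a_{1}^{\dagger}(0)a_{1}(0)+a_{2}^{\dagger}(0)a_{2}(0)+1$, can be verified numerically on a truncated Fock space. The following sketch (our own check; the cutoff dimension is an arbitrary assumption) confirms the identity on Fock states away from the truncation edge, where the cutoff introduces no artifacts.

```python
# Sketch: for k = 2 pump modes, verify that
#   a1 a1† a2 a2† - a1† a1 a2† a2  =  n1 + n2 + 1
# on a truncated two-mode Fock space (dimension chosen for illustration).
import numpy as np

dim = 4
a_ = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # truncated annihilation operator
I = np.eye(dim)
a1 = np.kron(a_, I)
a2 = np.kron(I, a_)

lhs = (a1 @ a1.conj().T @ a2 @ a2.conj().T
       - a1.conj().T @ a1 @ a2.conj().T @ a2)

# Compare diagonal matrix elements on Fock states |n1, n2> with n1, n2 <= dim - 2,
# i.e., below the truncation edge where a a† is represented faithfully.
for n1 in range(dim - 1):
    for n2 in range(dim - 1):
        idx = n1 * dim + n2
        assert np.isclose(lhs[idx, idx], n1 + n2 + 1)
```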
It is also relevant to mention here that the solution obtained in \cite{sen2005squeezed}, which is a limiting case of the present solution, has already been shown to be reducible to the short-time solutions reported until then \cite{perina1991quantum,szlachetka1979dynamics,szlachetka1980photon}. It is also important to note that, in some of our recent works, it has been established that the Sen-Mandal perturbative solutions are more general than the corresponding short-time solutions for the same systems (\cite{sen2005squeezed,sen2007quantum,sen2008amplitude,sen2011sub,sen2007squeezing,thapliyal2014higher,thapliyal2014nonclassical,thapliyal2016linear} and references therein). To reduce our general solution to the solution for the degenerate hyper-Raman process reported in \cite{sen2007squeezing}, we need to consider $k=2$, with $a_{1}=a_{2}=a$ and $\omega_{1}=\omega_{2}=\omega_{a}$ (as the process is degenerate), and take $\chi$ and $g$ to be real (as they were considered real in Ref. \cite{sen2007squeezing}). The coupling constants $\chi$ and $g$ were treated as real in the case of the Raman process, too \cite{sen2005squeezed}, but to be consistent with the convention used in Ref. \cite{sen2005squeezed} and to reduce the solution reported here to the one reported there, we need to replace $g$ by $-g$. Specifically, the solution used in \cite{sen2005squeezed,sen2007quantum,sen2008amplitude,sen2011sub,sen2013intermodal,giri2016higher} can be reproduced using $k=1$ and $\omega_{1}=\omega_{a}$ in the present solution. The relation among the various time dependent functional coefficients in the evolution of the pump mode in the present case and in previous results \cite{sen2005squeezed,sen2007quantum,sen2008amplitude,sen2011sub,sen2007squeezing,sen2013intermodal,giri2016higher} is summarized in Table \ref{tab:pump}. 
A similar correspondence among the functions for the remaining modes is mentioned explicitly in Table \ref{tab:non-pump} (see Appendix B). \begin{table} \begin{centering} \begin{tabular}{|>{\centering}p{2.5cm}|>{\centering}p{2.5cm}|>{\centering}p{2.5cm}|} \hline Multi-photon pump hyper-Raman case & Degenerate hyper-Raman case \cite{sen2007squeezing} & Raman case \cite{sen2005squeezed}\tabularnewline \hline $f_{1_{i}}$ & $f_{1}^{'}$ & $f_{1}^{''}$\tabularnewline \hline $f_{2_{i}}$ & $f_{2}^{'}$ & $f_{2}^{''}$\tabularnewline \hline $f_{3_{i}}$ & $f_{3}^{'}$ & $f_{3}^{''}$\tabularnewline \hline $f_{4_{i}}$ & $f_{7}^{'}$ & \tabularnewline \hline $f_{5_{i}}$ & $f_{9}^{'}$ & $f_{5}^{''}$\tabularnewline \hline $f_{6_{i}}$ & $f_{8}^{'}$ & $f_{6}^{''}$\tabularnewline \hline $f_{7_{i}}$ & $f_{4}^{'}$ & \tabularnewline \hline $f_{8_{i}}$ & $f_{6}^{'}$ & $f_{4}^{''}$\tabularnewline \hline $f_{9_{i}}$ & $f_{5}^{'}$ & \tabularnewline \hline $f_{10_{i}}$ & $f_{11}^{'}$ & $f_{8}^{''}$\tabularnewline \hline $f_{11_{i}}$ & $f_{10}^{'}$ & \tabularnewline \hline $f_{12_{i}}$ & $f_{12}^{'}$ & $f_{7}^{''}$\tabularnewline \hline \end{tabular} \par\end{centering} \caption{\label{tab:pump}The present solution is general in nature, and the existing solutions for Raman and degenerate hyper-Raman processes can be obtained as special cases of the present solution for the functions in the evolution of pump modes. Here, we have used a (two) prime(s) in the superscript of the functions $f_{i}$s to distinguish the present solution from the degenerate hyper-Raman (Raman) process. } \end{table} \section{Criteria of nonclassicalities \label{sec:Criteria-of-nonclassicalities}} Once we have the closed form analytic expressions for the evolution of various field and phonon modes involved in the process (given in Eq. 
(\ref{eq:solution})), we can test the nonclassical properties of the process using various moments-based criteria (such as those listed in \cite{miranowicz2010testing}), which are essentially built from the expectation values of the annihilation and creation operators of the modes under consideration. Although an infinite set of moments-based criteria would be essential to form a necessary criterion of nonclassicality equivalent to the $P$-function \cite{richter2002nonclassicality}, here we only use a small subset of this infinite set, which is therefore only sufficient. However, this small set of nonclassicality criteria is found to be good enough to establish the highly nonclassical character of the non-degenerate hyper-Raman process. In this section, we list the set of criteria used in the present work to analyze the presence of lower and higher order nonclassicality. With the advent of sophisticated experimental techniques, some exciting experimental results involving a few of these moments-based higher order nonclassicality criteria have been reported in the recent past \cite{allevi2012measuring,allevi2012high,avenhaus2010accessing,hamar2014non}. More recently, experimental detection of higher order nonclassicality up to the ninth order has also been reported \cite{perina2017higher}. Further, in Section \ref{sec:Intro}, we have already mentioned several recent applications of nonclassical states and Raman processes. In view of the above, in what follows, we analyze the nonclassical properties of the process with specific attention to squeezed, antibunched, and entangled states. 
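Since every criterion below is built from moments of the form $\langle a^{\dagger m}a^{n}\rangle$, a minimal numerical helper may clarify how such moments are evaluated for a state vector on a truncated Fock basis. The sketch below is our own illustration (the function names and the cutoff are assumptions, not part of the paper's formalism).

```python
# Sketch: evaluate the normally ordered moment <a†^m a^n> for a pure state
# expanded in a truncated Fock basis. Cutoff and test state are illustrative.
import numpy as np

def destroy(dim):
    """Truncated annihilation operator on a dim-dimensional Fock space."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

def moment(psi, m, n, dim):
    """<psi| a†^m a^n |psi> for a single mode."""
    a = destroy(dim)
    op = np.linalg.matrix_power(a.conj().T, m) @ np.linalg.matrix_power(a, n)
    return psi.conj() @ op @ psi

# Sanity check on the Fock state |2>: <a† a> = 2 and <a†^2 a^2> = n(n-1) = 2.
dim = 6
psi = np.zeros(dim)
psi[2] = 1.0
assert np.isclose(moment(psi, 1, 1, dim), 2.0)
assert np.isclose(moment(psi, 2, 2, dim), 2.0)
```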
\subsection{Lower and higher order squeezing } In order to study the squeezing effects in the various modes, we define the quadrature operators \begin{equation} \begin{array}{lcl} X_{a} & = & \frac{1}{2}\left(a(t)+a^{\dagger}(t)\right),\\ Y_{a} & = & -\frac{i}{2}\left(a(t)-a^{\dagger}(t)\right), \end{array}\label{eq:quadrature} \end{equation} where $a\,(a^{\dagger})$ is the annihilation (creation) operator for a specific bosonic mode, satisfying $[a,a^{\dagger}]=1$. Squeezing in mode $a$ is possible if the fluctuation in one of the quadrature operators goes below the minimum uncertainty level, i.e., if \begin{equation} \left(\Delta X_{a}\right)^{2}<\frac{1}{4}\,\mathrm{or}\,\left(\Delta Y_{a}\right)^{2}<\frac{1}{4}.\label{eq:condition for squeezing} \end{equation} Similarly, we may study intermodal squeezing in the compound mode $ab$ using the following quadrature operators for the compound mode, introduced by Loudon and Knight \cite{loudon1987squeezed}: \begin{equation} \begin{array}{lcl} X_{ab} & = & \frac{1}{2\sqrt{2}}\left(a(t)+a^{\dagger}(t)+b(t)+b^{\dagger}(t)\right),\\ Y_{ab} & = & -\frac{i}{2\sqrt{2}}\left(a(t)-a^{\dagger}(t)+b(t)-b^{\dagger}(t)\right). \end{array}\label{eq:two-mode quadrature} \end{equation} Usually, the higher order counterpart of squeezing is studied using two different criteria, proposed independently by Hong and Mandel \cite{hong1985higher,hong1985generation} and by Hillery \cite{hillery1987amplitude}. Hong-Mandel-type squeezing takes into consideration the higher order moments of the usual quadratures defined in Eq. (\ref{eq:quadrature}), while Hillery's squeezing criterion deals with amplitude powered quadratures. Here, we have focused only on the latter type. 
For the latter, the amplitude powered quadratures are defined as \begin{equation} Y_{1,a}=\frac{a^{n}+\left(a^{\dagger}\right)^{n}}{2}\label{eq:quadrature-power1} \end{equation} and \begin{equation} Y_{2,a}=i\left(\frac{\left(a^{\dagger}\right)^{n}-a^{n}}{2}\right).\label{eq:quadrature-power2} \end{equation} As these quadratures fail to commute, we can obtain a criterion for amplitude powered squeezing from the uncertainty principle as \begin{equation} A_{i,a}=\left(\Delta Y_{i,a}\right)^{2}-\frac{1}{2}\left|\left\langle \left[Y_{1,a},Y_{2,a}\right]\right\rangle \right|<0\label{eq:HOS} \end{equation} for each quadrature $i\in\left\{ 1,2\right\}$, where $\left[A,B\right]=AB-BA$ is the commutator, and $\left[Y_{1,a},Y_{2,a}\right]\neq0\,\forall\, n$. \subsection{Lower and higher order antibunching} The criterion of higher order antibunching was introduced by Lee \cite{lee1990higher}. With time, several essentially equivalent variants of this criterion have been proposed. One such criterion, proposed by Pathak and Garcia \cite{pathak2006control}, reads \begin{equation} \begin{array}{lcl} D_{a}(n-1)=\left\langle a^{\dagger n}a^{n}\right\rangle -\left\langle a^{\dagger}a\right\rangle ^{n} & < & 0.\end{array}\label{hoa} \end{equation} Importantly, for $n=2$, it reduces to the lower order antibunching criterion, while for all $n\geq3$ we obtain its higher order counterpart. Therefore, here we have calculated and reported the $\left(n-1\right)$th order antibunching and also inferred the corresponding lower order results from it. 
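The witness in Eq. (\ref{hoa}) is straightforward to evaluate numerically. As a sanity check (our own sketch; dimensions and amplitudes are illustrative assumptions), a single-photon Fock state yields $D_{a}(1)=-1<0$, i.e., antibunching, while a coherent state sits on the classical boundary with $D_{a}(1)=0$.

```python
# Sketch: evaluate the antibunching witness D_a(n-1) = <a†^n a^n> - <a† a>^n
# of Eq. (12) for n = 2 on a truncated Fock space (cutoff is illustrative).
import numpy as np
from math import factorial

dim = 10
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)

def D(psi, n):
    """Witness D_a(n-1); negative values indicate (higher order) antibunching."""
    ad = a.conj().T
    m1 = psi.conj() @ np.linalg.matrix_power(ad, n) @ np.linalg.matrix_power(a, n) @ psi
    m2 = (psi.conj() @ ad @ a @ psi) ** n
    return (m1 - m2).real

# Single-photon Fock state |1>: <a†^2 a^2> = 0, <a† a>^2 = 1, so D = -1 < 0.
fock1 = np.zeros(dim)
fock1[1] = 1.0
assert np.isclose(D(fock1, 2), -1.0)

# Coherent state: <a†^n a^n> = |alpha|^(2n), so D vanishes (classical boundary).
alpha = 0.5
coh = np.array([np.exp(-abs(alpha)**2 / 2) * alpha**k / np.sqrt(factorial(k))
                for k in range(dim)])
assert abs(D(coh, 2)) < 1e-6
```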
Similarly, in order to study the intermodal antibunching, one can use the solution reported here and the following criterion: \begin{equation} D_{ab}=(\Delta N_{ab})^{2}=\left\langle a^{\dagger}b^{\dagger}ba\right\rangle -\left\langle a^{\dagger}a\right\rangle \left\langle b^{\dagger}b\right\rangle <0.\label{eq:In-ant} \end{equation} \subsection{Lower and higher order entanglement} Along the same lines, two higher order entanglement criteria were proposed by Hillery and Zubairy \cite{hillery2006entanglement,hillery2006entanglementapplications} as inseparability criteria. It may be noted that each of these criteria is sufficient but not necessary; thus, entanglement (nonclassicality) not detected by one criterion may be detected by the other, and in some situations both of them may fail to detect entanglement. A quantum state can be verified to be entangled using Hillery-Zubairy's HZ-I criterion \begin{equation} E_{ab}^{m,n}=\left\langle \left(a^{\dagger}\right)^{m}a^{m}\left(b^{\dagger}\right)^{n}b^{n}\right\rangle -\left\vert \left\langle a^{m}\left(b^{\dagger}\right)^{n}\right\rangle \right\vert ^{2}<0\label{hoe-criteria} \end{equation} or Hillery-Zubairy's HZ-II criterion \begin{equation} E_{ab}^{\prime m,n}=\left\langle \left(a^{\dagger}\right)^{m}a^{m}\right\rangle \left\langle \left(b^{\dagger}\right)^{n}b^{n}\right\rangle -\left\vert \left\langle a^{m}b^{n}\right\rangle \right\vert ^{2}<0.\label{hoe-criteria-2} \end{equation} An arbitrary quantum state is higher order entangled if it satisfies the HZ-I and/or HZ-II criterion for $m+n\geq3$. Importantly, lower order entanglement can also be verified from Eqs. (\ref{hoe-criteria}) and (\ref{hoe-criteria-2}) by considering $m=n=1$. \section{Nonclassicality observed \label{sec:Nonclassicality-observed}} All the nonclassicality criteria listed in Eqs. 
(\ref{eq:condition for squeezing})-(\ref{hoe-criteria-2}) contain average values of functions of the time evolved annihilation and creation operators given in Eq. (\ref{eq:solution}). To calculate these average values, we have to consider an initial state of the system. Without any loss of generality, the initial state is chosen to be a product state of coherent states in each mode \begin{equation} |\psi(0)\rangle=|\alpha_{i}\rangle\otimes|\beta\rangle\otimes|\gamma\rangle\otimes|\delta\rangle,\label{eq:initial state} \end{equation} where $|\alpha_{i}\rangle=|\alpha_{1}\rangle\otimes|\alpha_{2}\rangle\otimes\cdots|\alpha_{i}\rangle\cdots\otimes|\alpha_{k}\rangle$ is the initial state of the pump modes, i.e., a product of $k$ coherent states. Further, we have considered detunings of $\left|\frac{\Delta\omega_{1}}{g}\right|=10$ and $\left|\frac{\Delta\omega_{2}}{g}\right|=19$ in the Stokes and anti-Stokes hyper-Raman processes, respectively. In the stimulated case, we have considered non-zero photon numbers initially in each mode, i.e., $\left|\beta\right|=8,$ $\left|\gamma\right|=0.01,$ and $\left|\delta\right|=1$, while all these values are initially zero in the spontaneous case. Additionally, we have considered $\left|\alpha_{i}\right|=10$ for all the pump modes, unless stated otherwise, in both the stimulated and spontaneous cases. \subsection{Lower and higher order squeezing} Using Eqs. (\ref{eq:solution}) and (\ref{eq:initial state}) in the squeezing criterion (\ref{eq:condition for squeezing}), we have obtained closed form analytic expressions for the witnesses of squeezing in all the modes involved. 
Specifically, the witnesses for the single mode squeezing in an arbitrary pump mode is calculated to be \begin{subequations} \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} \left(\Delta X_{a_{j}}\right)^{2}\\ \left(\Delta Y_{a_{j}}\right)^{2} \end{array}\right] & = & \frac{1}{4}\left[1+2\left|f_{2}\right|^{2}\sigma_{l}\left|\beta\right|^{2}\left|\gamma\right|^{2}+2\left|f_{3}\right|^{2}\left|\delta\right|^{2}\right.\\ & \times & \left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\left(\left|\gamma\right|^{2}+1\right)\right)+2\left\{ f_{2}^{*}f_{3}\sigma_{l}\right.\\ & \times & \left.\left.\beta^{*}\gamma^{*2}\delta\mp f_{1}^{2}g_{3}^{*}g_{1}\alpha_{i}^{*2}\beta\delta+{\rm c.c.}\right\} \right], \end{array}\label{eq:Sq-a} \end{equation} while the witnesses of squeezing in Stokes and vibration (phonon) modes are obtained as \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} \left(\Delta X_{b}\right)^{2}\\ \left(\Delta Y_{b}\right)^{2} \end{array}\right] & = & \frac{1}{4}\left[1+2\left|g_{2}\right|^{2}\left|\alpha_{i}\right|^{2}\right]\end{array}\label{eq:Sq-b} \end{equation} and \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} \left(\Delta X_{c}\right)^{2}\\ \left(\Delta Y_{c}\right)^{2} \end{array}\right] & = & \frac{1}{4}\left[1+2\left|h_{2}\right|^{2}\left|\alpha_{i}\right|^{2}+2\left|h_{3}\right|^{2}\sigma_{l}\left|\delta\right|^{2}\right.\\ & \pm & \left.\left\{ 2h_{1}^{2}g_{1}^{*}g_{6}\sigma_{l}\beta^{*}\delta+{\rm c.c.}\right\} \right], \end{array}\label{eq:sq-c} \end{equation} respectively. Using the obtained Sen-Mandal perturbative solution, squeezing in anti-Stokes mode was not observed, i.e., \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} \left(\Delta X_{d}\right)^{2}\\ \left(\Delta Y_{d}\right)^{2} \end{array}\right] & = & \frac{1}{4}.\end{array}\label{eq:Sq-d} \end{equation} \end{subequations} Here, and in what follows we have used $\sigma_{l}=\langle A_{l}\rangle$. In Eq. 
(\ref{eq:Sq-b}), we can observe that a positive quantity is added to $\frac{1}{4}$; therefore, neither of these quadratures can show a variance less than $\frac{1}{4}$, and consequently, squeezing cannot be observed in them. However, unlike those for the Stokes and anti-Stokes modes, the compact expressions for squeezing in the remaining modes are too involved to draw conclusions from directly. To analyze the dependence of squeezing in all these modes on various parameters, we performed a rigorous numerical analysis of the obtained expressions. To do so, we have used $\underset{i}{\sum}\frac{\omega_{i}}{g}=1000.0001\times10^{5},$ $\frac{\omega_{b}}{g}=999.999\times10^{5},$ $\frac{\omega_{c}}{g}=0.001\times10^{5},$ and $\frac{\omega_{d}}{g}=1000.00091\times10^{5}.$ In the 2-pump mode non-degenerate hyper-Raman processes, $\frac{\omega_{1}}{g}=600.0001\times10^{5}$ and $\frac{\omega_{2}}{g}=400\times10^{5}$; while in the 3-pump mode non-degenerate hyper-Raman processes, $\frac{\omega_{1}}{g}=100.0001\times10^{5},$ $\frac{\omega_{2}}{g}=700\times10^{5},$ and $\frac{\omega_{3}}{g}=200\times10^{5}.$ Further, for the sake of simplicity, in the following discussion we have subtracted $\frac{1}{4}$ from both sides of all the expressions for squeezing. This allows us to plot the variation of the squeezing parameter in a manner consistent with the remaining illustrations, where the negative regions of the plots depict nonclassicality. The study revealed that squeezing in the stimulated case of the non-degenerate $k$-pump hyper-Raman process is observed only in the pump modes, and it varies with various parameters. Thus, in turn, these parameters can be used to control the amount of squeezing. Specifically, the witnesses of squeezing are found to be independent of the phases of the different pump modes. However, the amount of squeezing is found to depend on the frequencies of the pump modes. 
This fact can be established from Figure \ref{fig:SqA} (a)-(c), where we can observe different amounts of squeezing for modes with different frequencies, which become the same in the degenerate case. Further, different natures of squeezing for the degenerate and non-degenerate cases have been observed (cf. Figure \ref{fig:SqA} (a) and (d)). This point is also established in the context of the spontaneous case in Figure \ref{fig:Spont} (d), discussed later. Additionally, the amount of squeezing in a particular pump mode can also be controlled by the intensity of one of the other pump modes (cf. Figure \ref{fig:SqA} (a) and (b)). Note that we have shown the variation in quadrature squeezing over a relatively small time domain to establish its dependence on various independent parameters. For this reason alone, the amount of squeezing appears to be very small (of the order of $10^{-12}$ in Figure \ref{fig:SqA} (a)). However, we observed a relatively larger amount of squeezing for a larger rescaled time, as shown in the inset of Figure \ref{fig:SqA} (a) for the $X_{a_{1}}$ quadrature. A similar highly oscillatory nature is also observed in all other cases of single mode and intermodal squeezing, but, being repetitive, such illustrations are not included in the subsequent plots. Similarly, intermodal squeezing in the compound two-mode cases can be studied using Loudon and Knight's criterion given in Eq. (\ref{eq:two-mode quadrature}) together with Eqs. (\ref{eq:solution}) and (\ref{eq:initial state}). 
We are reporting here the analytic expressions of two-mode squeezing for compound pump-pump mode as \begin{subequations} \begin{widetext} \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} \left(\Delta X_{a_{j}a_{r}}\right)^{2}\\ \left(\Delta Y_{a_{j}a_{r}}\right)^{2} \end{array}\right] & = & \frac{1}{4}\left[2\left(\left[\begin{array}{c} \left(\Delta X_{a_{j}}\right)^{2}\\ \left(\Delta Y_{a_{j}}\right)^{2} \end{array}\right]+\left[\begin{array}{c} \left(\Delta X_{a_{r}}\right)^{2}\\ \left(\Delta Y_{a_{r}}\right)^{2} \end{array}\right]\right)+\left\{ f_{2}r_{2}^{*}\left|\beta\right|^{2}\left|\gamma\right|^{2}\alpha_{j}\alpha_{r}^{*}\sigma_{l}+f_{3}r_{2}^{*}\alpha_{j}\alpha_{r}^{*}\sigma_{l}\beta^{*}\gamma^{*2}\delta\right.\right.\\ & + & f_{2}r_{3}^{*}\alpha_{j}\alpha_{r}^{*}\sigma_{l}\beta\gamma^{2}\delta^{*}\pm\left\{ f_{1}r_{2}\alpha_{i}^{*}\beta\gamma+f_{1}r_{3}\alpha_{i}^{*}\beta^{*}\delta+f_{1}r_{4}\left|\beta\right|^{2}\left|\gamma\right|^{2}\alpha_{j}\alpha_{r}\sigma_{l}+f_{1}r_{5}\left|\alpha_{i}\right|^{2}\left(\left|\beta\right|^{2}+1\right)\alpha_{j}\alpha_{r}\right.\\ & + & \left(f_{1}r_{6}+f_{1}r_{12}\right)\left|\alpha_{i}\right|^{2}\left|\gamma\right|^{2}\alpha_{j}\alpha_{r}+f_{1}r_{10}\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\left(\left|\gamma\right|^{2}+1\right)\right)\left|\delta\right|^{2}\alpha_{j}\alpha_{r}+f_{1}r_{7}\alpha_{j}\alpha_{r}\sigma_{l}\beta\gamma^{2}\delta^{*}\\ & + & \left.\left.\left.\left(2f_{1}r_{8}+f_{2}r_{3}\right)\alpha_{j}^{*}\alpha_{r}^{*}\alpha_{i}^{*2}\beta\delta+f_{1}r_{9}\alpha_{j}\alpha_{r}\sigma_{l}\beta^{*}\gamma^{*2}\delta\right\} +f_{3}r_{3}^{*}\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\left(\left|\gamma\right|^{2}+1\right)\right)\left|\delta\right|^{2}\alpha_{j}\alpha_{r}^{*}+{\rm c.c.}\right\} \right], \end{array}\label{eq:Int-sq-aij} \end{equation} \begin{figure} \centering{}\includegraphics[scale=0.8]{hyRa-SqA}\caption{\label{fig:SqA}(Color online) Squeezing in $j$th pump mode. 
Squeezing in both modes of the 2-pump modes non-degenerate stimulated hyper-Raman processes with (a) $\left|\alpha_{1}\right|=\left|\alpha_{2}\right|=10$, and (b) $\left|\alpha_{1}\right|=10,$ $\left|\alpha_{2}\right|=12$. (c) Squeezing in all three modes of the 3-pump modes non-degenerate hyper-Raman processes with $\left|\alpha_{1}\right|=\left|\alpha_{2}\right|=\left|\alpha_{3}\right|=10$. (d) Squeezing in degenerate hyper-Raman processes for the cases with 2 and 3-pump modes is compared. Here, we have amplified the variation in the case of 2-pump modes 30 times to show it with the squeezing in 3-pump modes. In all the cases, the solid-blue, dot-dashed-magenta, and dotted-black lines represent the quadrature $\left(\Delta X_{a_{j}}\right)^{2}$; and dashed-red, purple-large-dot-dashed, and orange-double-dotted-dashed lines correspond to the quadrature $\left(\Delta Y_{a_{j}}\right)^{2}$.} \end{figure} \noindent which is applicable to any arbitrary pump modes $a_{j}$ and $a_{r}$. Compound pump-Stokes, pump-vibration, and pump-anti-Stokes modes squeezing are obtained as \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} \left(\Delta X_{a_{j}b}\right)^{2}\\ \left(\Delta Y_{a_{j}b}\right)^{2} \end{array}\right] & = & \frac{1}{4}\left[2\left(\left[\begin{array}{c} \left(\Delta X_{a_{j}}\right)^{2}\\ \left(\Delta Y_{a_{j}}\right)^{2} \end{array}\right]+\left[\begin{array}{c} \left(\Delta X_{b}\right)^{2}\\ \left(\Delta Y_{b}\right)^{2} \end{array}\right]\right)+\left\{ f_{3}g_{2}^{*}\alpha_{j}^{*}\alpha_{i}^{*2}\delta\mp f_{4}g_{1}\left|\alpha_{i}\right|^{2}\alpha_{j}\beta\mp f_{1}g_{4}\left|\gamma\right|^{2}\sigma_{l}\alpha_{j}\beta\right.\right.\\ & \pm & \left.\left.f_{1}g_{6}\sigma_{l}\alpha_{j}\gamma^{*2}\delta+{\rm c.c.}\right\} \right], \end{array}\label{eq:Int-sq-ab} \end{equation} \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} \left(\Delta X_{a_{j}c}\right)^{2}\\ \left(\Delta Y_{a_{j}c}\right)^{2} \end{array}\right] & = & 
\frac{1}{4}\left[2\left(\left[\begin{array}{c} \left(\Delta X_{a_{j}}\right)^{2}\\ \left(\Delta Y_{a_{j}}\right)^{2} \end{array}\right]+\left[\begin{array}{c} \left(\Delta X_{c}\right)^{2}\\ \left(\Delta Y_{c}\right)^{2} \end{array}\right]\right)+\left\{ f_{2}h_{3}^{*}\sigma_{l}\alpha_{j}\beta\gamma\delta^{*}+f_{3}h_{3}^{*}\left|\delta\right|^{2}\sigma_{l}\alpha_{j}\gamma^{*}\pm\left(f_{1}h_{7}-f_{4}h_{1}\right)\left|\alpha_{i}\right|^{2}\alpha_{j}\gamma\right.\right.\\ & \pm & \left.\left.f_{1}h_{3}\alpha_{i}^{*}\delta\pm f_{1}h_{4}\left|\beta\right|^{2}\sigma_{l}\alpha_{j}\gamma\pm f_{1}h_{6}\sigma_{l}\alpha_{j}\beta^{*}\gamma^{*}\delta\pm f_{1}h_{7}\left|\delta\right|^{2}\sigma_{l}\alpha_{j}\gamma+{\rm c.c.}\right\} \right], \end{array}\label{eq:Int-sq-ac} \end{equation} and \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} \left(\Delta X_{a_{j}d}\right)^{2}\\ \left(\Delta Y_{a_{j}d}\right)^{2} \end{array}\right] & = & \frac{1}{4}\left[2\left(\frac{1}{2}+\left[\begin{array}{c} \left(\Delta X_{a_{j}}\right)^{2}\\ \left(\Delta Y_{a_{j}}\right)^{2} \end{array}\right]\right)+\left\{ f_{1}l_{4}\sigma_{l}\alpha_{j}\beta\gamma^{2}\pm f_{1}l_{5}\alpha_{j}\delta\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\left(\left|\gamma\right|^{2}+1\right)\right)+{\rm c.c.}\right\} \right],\end{array}\label{eq:Int-sq-ad} \end{equation} respectively. 
We have also considered two-mode squeezing among Stokes-vibration mode \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} \left(\Delta X_{bc}\right)^{2}\\ \left(\Delta Y_{bc}\right)^{2} \end{array}\right] & = & \frac{1}{4}\left[2\left(\left[\begin{array}{c} \left(\Delta X_{b}\right)^{2}\\ \left(\Delta Y_{b}\right)^{2} \end{array}\right]+\left[\begin{array}{c} \left(\Delta X_{c}\right)^{2}\\ \left(\Delta Y_{c}\right)^{2} \end{array}\right]\right)+\left\{ g_{2}h_{2}^{*}\sigma_{l}\beta\gamma^{*}\pm\left(g_{1}h_{2}\alpha_{i}+g_{1}h_{5}\sigma_{l}\beta\gamma+2g_{6}h_{1}\sigma_{l}\gamma^{*}\delta\right)+{\rm c.c.}\right\} \right],\end{array}\label{eq:Int-sq-bc} \end{equation} \begin{figure} \centering{}\includegraphics[scale=0.8]{hyRa-IntSqA}\caption{\label{fig:IntSq}(Color online) Intermodal squeezing is observed in 2-pump modes non-degenerate stimulated hyper-Raman processes in (a) $j$th pump-Stokes mode, (b) $j$th pump-vibration mode, and (c) $j$th pump-anti-Stokes mode. In all three cases, the solid-blue (dot-dashed-magenta) and dashed-red (purple-large-dot-dashed) lines correspond to the quadratures $\left(\Delta X_{a_{j}K}\right)^{2}$ and $\left(\Delta Y_{a_{j}K}\right)^{2}$ for compound $a_{1}K$ ($a_{2}K$) mode, respectively. While in (d), intermodal squeezing in compound $a_{j}a_{k}$ mode for 3-pump modes non-degenerate stimulated hyper-Raman processes is shown for all three cases, i.e., $a_{1}a_{2}$ mode shown in solid-blue and dashed-red lines, $a_{2}a_{3}$ mode shown in dot-dashed-magenta and purple-large-dot-dashed lines, and $a_{1}a_{3}$ mode as dotted-black and orange-double-dotted-dashed lines for the quadratures $\left(\Delta X_{a_{j}a_{r}}\right)^{2}$ and $\left(\Delta Y_{a_{j}a_{r}}\right)^{2}$, respectively. In all the cases, $\left|\alpha_{i}\right|=10\,\forall i\in\left\{ 1,2,3\right\} $. 
} \end{figure} \noindent Stokes-anti-Stokes mode \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} \left(\Delta X_{bd}\right)^{2}\\ \left(\Delta Y_{bd}\right)^{2} \end{array}\right] & = & \frac{1}{4}\left[1+\frac{1}{2}\left[\begin{array}{c} \left(\Delta X_{b}\right)^{2}\\ \left(\Delta Y_{b}\right)^{2} \end{array}\right]\pm\left\{ g_{1}l_{3}\alpha_{i}^{2}+{\rm c.c.}\right\} \right],\end{array}\label{eq:Int-sq-bd} \end{equation} and vibration-anti-Stokes mode \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} \left(\Delta X_{cd}\right)^{2}\\ \left(\Delta Y_{cd}\right)^{2} \end{array}\right] & = & \frac{1}{4}\left[1+\frac{1}{2}\left[\begin{array}{c} \left(\Delta X_{c}\right)^{2}\\ \left(\Delta Y_{c}\right)^{2} \end{array}\right]\pm\left\{ h_{1}l_{5}\sigma_{l}\gamma\delta+{\rm c.c.}\right\} \right].\end{array}\label{eq:Int-sq-cd} \end{equation} \end{widetext}\end{subequations} In all the expressions obtained for two-mode squeezing (i.e., Eqs. (\ref{eq:Int-sq-aij})-(\ref{eq:Int-sq-cd})), the single mode squeezing witnesses (i.e., the variances $\left(\Delta X_{i}\right)^{2}$ and $\left(\Delta Y_{i}\right)^{2}$, with $i\in\left\{ a,b,c\right\} $) that appear on the right hand sides are to be substituted by the corresponding expressions reported for single mode squeezing in Eqs. (\ref{eq:Sq-a})-(\ref{eq:Sq-d}). Finally, we analyzed the expressions for the compound mode squeezing, and the variation is shown in Figure \ref{fig:IntSq}. Interestingly, intermodal squeezing is observed in all the compound modes involving a pump mode. In analogy with the quadrature squeezing illustrated in Figure \ref{fig:SqA}, the observed nonclassicality is shown to depend on the frequency of the pump mode. The same fact has been established here using the non-degenerate 2-pump and 3-pump hyper-Raman processes, where the amounts of intermodal squeezing are found to be different for the various pump modes. Hillery's amplitude powered squeezing for all the modes involved is calculated using Eqs. 
(\ref{eq:solution}) and (\ref{eq:initial state}) in the criterion of squeezing (\ref{eq:HOS}). Specifically, the analytic expression for an arbitrary pump mode is obtained as follows \begin{subequations} \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} A_{1,a_{j}}\\ A_{2,a_{j}} \end{array}\right] & = & \frac{1}{2}k^{2}\left|\alpha_{j}\right|^{2\left(k-1\right)}\left[\left|f_{2}\right|^{2}\sigma_{l}\left|\beta\right|^{2}\left|\gamma\right|^{2}\right.\\ & + & \left|f_{3}\right|^{2}\left|\delta\right|^{2}\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\left(\left|\gamma\right|^{2}+1\right)\right)\\ & + & \left.\left\{ f_{2}^{*}f_{3}\sigma_{l}\beta^{*}\gamma^{*2}\delta\mp f_{1}^{2}g_{3}^{*}g_{1}\alpha_{i}^{*2}\beta\delta+{\rm c.c.}\right\} \right]. \end{array}\label{eq:asq-aj} \end{equation} \begin{figure} \centering{}\includegraphics[scale=0.6]{AmPSq-a}\caption{\label{fig:AmSq}(Color online) Amplitude powered squeezing is observed in the $a_{1}$ pump mode in 2-pump modes non-degenerate hyper-Raman processes with $\left|\alpha_{1}\right|=\left|\alpha_{2}\right|=10$. The solid-blue and dashed-red lines, dot-dashed-magenta and purple-large-dot-dashed lines, and dotted-black and orange-double-dotted-dashed lines correspond to the amplitude powered quadratures $A_{1,a_{1}}$ and $A_{1,a_{2}}$ for $k=1,2,$ and 3, respectively. To accommodate all the variations in the same plot we have amplified the values for $k=1$ and 2 by $10^{4}$ and $10^{2}$, respectively.
} \end{figure} A similar study for the Stokes, vibration, and anti-Stokes modes resulted in \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} A_{1,b}\\ A_{2,b} \end{array}\right] & = & \frac{1}{2}\left[k^{2}\left|g_{2}\right|^{2}\left|\alpha_{i}\right|^{2}\left|\beta\right|^{2\left(k-1\right)}\right],\end{array}\label{eq:as-q-b} \end{equation} \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} A_{1,c}\\ A_{2,c} \end{array}\right] & = & \frac{1}{2}k^{2}\left|\gamma\right|^{2\left(k-1\right)}\left[\left(\left|h_{2}\right|^{2}\left|\alpha_{i}\right|^{2}+\left|h_{3}\right|^{2}\sigma_{l}\left|\delta\right|^{2}\right)\right.\\ & \pm & \left.\left\{ h_{1}^{2k}g_{1}^{*}g_{6}\sigma_{l}\beta^{*}\delta+{\rm c.c.}\right\} \right], \end{array}\label{eq:as-q-c} \end{equation} and \begin{equation} \begin{array}{lcl} \left[\begin{array}{c} A_{1,d}\\ A_{2,d} \end{array}\right] & = & 0,\end{array}\label{eq:as-d} \end{equation} respectively. \end{subequations} From the obtained expressions, the presence of amplitude powered squeezing in the pump mode has been observed. Similar to the quadrature squeezing (shown in Figure \ref{fig:SqA}), the nonclassicality is found to be absent in the remaining modes. Further, with an increase in the order of squeezing, the depth of the witness of amplitude powered squeezing is also observed to increase, which is in accordance with some of our recent observations (\cite{thapliyal2014higher,thapliyal2014nonclassical,giri2017nonclassicality} and references therein). \subsection{Lower and higher order antibunching} The higher order antibunching criterion given in Eq. (\ref{hoa}), used with Eqs.
(\ref{eq:solution}) and (\ref{eq:initial state}), leads to the closed analytic expressions for the pump, Stokes, vibration, and anti-Stokes modes as \begin{subequations} \begin{equation} \begin{array}{lcl} D_{a_{j}}(n-1) & = & n\left(n-1\right)\left[\left|\alpha_{j}\right|^{2\left(n-1\right)}\left(\left|f_{2}\right|^{2}\left|\beta\right|^{2}\left|\gamma\right|^{2}\sigma_{l}\right.\right.\\ & + & \left.\left|f_{3}\right|^{2}\left|\delta\right|^{2}\left\{ \left|\alpha_{i}\right|^{2}+\sigma_{l}\left(\left|\gamma\right|^{2}+1\right)\right\} \right)\\ & + & \left|\alpha_{j}\right|^{2\left(n-2\right)}\left\{ f_{2}^{*}f_{3}\sigma_{l}\beta^{*}\gamma^{*2}\delta\right.\\ & - & \left.\left.g_{1}^{*}g_{3}\alpha_{j}^{2}\alpha_{i}^{2}\beta^{*}\delta^{*}+{\rm c.c.}\right\} \right], \end{array}\label{eq:dan} \end{equation} \begin{equation} \begin{array}{lcl} D_{b}(n-1) & = & n\left(n-1\right)\left|g_{2}\right|^{2}\left|\alpha_{i}\right|^{2}\left|\beta\right|^{2\left(n-1\right)},\end{array}\label{eq:dbn} \end{equation} \begin{equation} \begin{array}{lcl} D_{c}(n-1) & = & n\left(n-1\right)\left[\left(\left|h_{2}\right|^{2}\left|\alpha_{i}\right|^{2}+\left|h_{3}\right|^{2}\sigma_{l}\left|\delta\right|^{2}\right)\right.\\ & \times & \left|\gamma\right|^{2\left(n-1\right)}+\left\{ g_{6}^{*}g_{1}\left|\gamma\right|^{2\left(n-2\right)}\sigma_{l}\beta\gamma^{2}\delta^{*}\right.\\ & + & \left.\left.{\rm c.c.}\right\} \right], \end{array}\label{eq:dcn} \end{equation} and \begin{equation} D_{d}(n)=0,\label{eq:ddn} \end{equation} respectively. \end{subequations} Using these expressions, we have analyzed the possibilities of observing both lower and higher order antibunching in all the modes except the anti-Stokes mode (as $D_{d}(n)$ is always zero). The presence of lower and higher order antibunching in the pump mode has been observed and shown in Figure \ref{fig:Ant} (a). Antibunching is not observed in any of the remaining (non-pump) modes.
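As a minimal numerical illustration of this witness (a sketch only, assuming the common convention $D\left(l-1\right)=\left\langle a^{\dagger l}a^{l}\right\rangle -\left\langle a^{\dagger}a\right\rangle ^{l}<0$ for the criterion in Eq. (\ref{hoa}); this convention, and the choice of a Fock state as test input, are assumptions, not the solution of the present Hamiltonian), the required moments of a Fock state $\left|m\right\rangle $ are available in closed form:

```python
from math import factorial

def fock_moment(m, l):
    """<a†^l a^l> for the Fock state |m>: the falling factorial m(m-1)...(m-l+1)."""
    return factorial(m) // factorial(m - l) if l <= m else 0

def hoa_witness(m, l):
    """D(l-1) = <a†^l a^l> - <a†a>^l for |m>; a negative value signals
    antibunching of order l-1 (l = 2 is the usual lower-order case)."""
    return fock_moment(m, l) - m ** l

# For a coherent state, <a†^l a^l> = |alpha|^{2l}, so the witness is exactly
# zero: the coherent state sits on the classical boundary, as expected.
```

For instance, $\left|m\right\rangle =\left|3\right\rangle $ with $l=2$ gives $6-9=-3<0$, i.e., antibunched, while $l=1$ trivially gives zero.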
It is also important to note that the second-order correlation computed here to obtain the signature of antibunching depends on the number of photons in a given mode, while it is independent of its frequency. \begin{subequations} Similarly, intermodal antibunching defined in Eq. (\ref{eq:In-ant}) can be calculated for all the possible two-mode cases, i.e., pump-pump mode \begin{widetext} \begin{figure}[t] \centering{}\includegraphics[scale=0.8]{hyRa-Ant}\caption{\label{fig:Ant}(Color online) Higher order antibunching and intermodal antibunching in $k$-pump modes non-degenerate hyper-Raman processes. In (a), the values for lower order antibunching ($n=2$) in 2-pump (3-pump) modes non-degenerate hyper-Raman processes are multiplied by $10^{4}$ ($2\times10^{2}$), and higher order antibunching in 2-pump modes non-degenerate hyper-Raman processes is amplified 50 times. Intermodal antibunching in (b) pump-pump, (c) pump-Stokes, (d) pump-anti-Stokes, (e) vibration-anti-Stokes, and (f) Stokes-anti-Stokes modes for 2-pump and 3-pump modes non-degenerate hyper-Raman processes. The variation in the case of 2-pump modes is amplified 10 times in (b), (c), and (f), while 100 times in (d) and (e).
In all the cases, we have used $\left|\alpha_{i}\right|=10$.} \end{figure} \begin{equation} \begin{array}{lcl} D_{a_{j}a_{r}} & = & \left|\alpha_{j}\right|^{2}\left|\alpha_{r}\right|^{2}\left|\alpha_{i}\right|^{2}\left\{ -\left|f_{2}\right|^{2}\left(\left|\beta\right|^{2}+\left|\gamma\right|^{2}+1\right)\right.+\left.\left|f_{3}\right|^{2}\left(3\left|\delta\right|^{2}-\left|\gamma\right|^{2}\right)\right\} +3\left|\alpha_{j}\right|^{2}\left|\alpha_{r}\right|^{2}\sigma_{l}\left\{ \left|f_{2}\right|^{2}\right.\\ & \times & \left|\beta\right|^{2}\left|\gamma\right|^{2}+\left|f_{3}\right|^{2}\left|\delta\right|^{2}\left(\left|\gamma\right|^{2}+1\right)+ \left.\left(f_{2}^{*}f_{3}\beta^{*}\gamma^{*2}\delta+{\rm c.c.}\right)\right\} +2\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\right)\\ & + & \left(\left|\alpha_{j}\right|^{2}+\left|\alpha_{r}\right|^{2}+1\right)\left\{ \left|f_{2}\right|^{2}\left|\beta\right|^{2}\left|\gamma\right|^{2}\right.+ \left.\left|f_{3}\right|^{2}\left|\delta\right|^{2}\left(\left|\gamma\right|^{2}+1\right)+\left(f_{2}^{*}f_{3}\beta^{*}\gamma^{*2}\delta+{\rm c.c.}\right)\right\} \\ & + & 2\left|\alpha_{j}\right|^{2}\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\right)\left\{ \left|f_{2}\right|^{2}\left|\beta\right|^{2}\left|\gamma\right|^{2}+\left|f_{3}\right|^{2}\left|\delta\right|^{2}\right. 
\left.\left(\left|\gamma\right|^{2}+1\right)+\left(f_{2}^{*}f_{3}\beta^{*}\gamma^{*2}\delta+{\rm c.c.}\right)\right\} +\left\{ f_{1}^{*}f_{2}\right.\\ & \times & \alpha_{j}^{*}\alpha_{r}^{*}\alpha_{i}^{*}\beta\gamma+\left(2f_{1}^{*}f_{8}+f_{1}^{*2}f_{2}f_{3}\right) \left.\alpha_{j}^{*2}\alpha_{r}^{*2}\alpha_{i}^{*2}\beta\delta+f_{1}^{*}f_{3}\alpha_{j}^{*}\alpha_{r}^{*}\alpha_{i}^{*}\gamma^{*}\delta+{\rm c.c.}\right\} , \end{array}\label{eq:dai-aj} \end{equation} \begin{figure}[t] \centering{}\includegraphics[scale=0.8]{hyRa-EntA}\caption{\label{fig:Ent-A}(Color online) The presence of lower and higher order entanglement in two pump modes is shown in (a) using HZ1 and HZ2 criteria. In (b) and (c), both lower and higher order entanglement between pump and vibration modes is established using HZ1 and HZ2 criteria, respectively. The lower order entanglement in (a) is amplified $10^{4}$ times, while higher order entanglement with $m=2,\,n=1$ (which is the same as for $m=1,\,n=2$) is amplified $10^{2}$ times. The dot-dashed-magenta and dotted-black lines in (b) and (c) correspond to the phase angle $\phi_{1}=\frac{\pi}{2}$ and $-\frac{\pi}{2}$, respectively, in $\alpha_{1}=\left|\alpha_{1}\right|\exp\left(i\phi_{1}\right)$. In (b) and (c), the lower order entanglement is amplified 100 times, whereas in (d), the lower order entanglement and higher order entanglement with $m=1,\,n=2$ are both amplified 100 times. In all the cases, $\left|\alpha_{i}\right|=10$ has been used. 
} \end{figure} \end{widetext} \noindent pump-Stokes mode \begin{equation} \begin{array}{lcl} D_{a_{j}b} & = & -\left|f_{2}\right|^{2}\left|\alpha_{j}\right|^{2}\left|\beta\right|^{2}\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\left|\gamma\right|^{2}\right)\\ & + & \left\{ \left(f_{1}^{*}f_{9}+g_{1}^{*}g_{2}f_{1}^{*}f_{3}\right)\left|\alpha_{j}\right|^{2}\sigma_{l}\beta^{*}\gamma^{*2}\delta+{\rm c.c.}\right\} , \end{array}\label{eq:dab} \end{equation} pump-vibration mode \begin{equation} \begin{array}{lcl} D_{a_{j}c} & = & -\left|\alpha_{j}\right|^{2}\left|\gamma\right|^{2}\left\{ \left|f_{2}\right|^{2}\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\left|\beta\right|^{2}\right)-\left|f_{3}\right|^{2}\right.\\ & \times & \left.\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\left|\delta\right|^{2}\right)\right\} +\left|f_{3}\right|^{2}\left|\delta\right|^{2}\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\right)\\ & \times & \left(2\left|\gamma\right|^{2}+1\right)+\left\{ f_{1}^{*}f_{3}\alpha_{j}^{*}\alpha_{i}^{*}\gamma^{*}\delta\right.\\ & + & h_{2}^{*}h_{1}f_{1}^{*}f_{3}\alpha_{j}^{*2}\alpha_{i}^{*2}\beta\delta+f_{2}^{*}f_{3}\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\right)\beta^{*}\gamma^{*2}\delta\\ & + & \left.\left(2f_{1}^{*}f_{9}+h_{1}^{*}h_{2}f_{1}^{*}f_{3}\right)\left|\alpha_{j}\right|^{2}\sigma_{l}\beta^{*}\gamma^{*2}\delta+{\rm c.c.}\right\} , \end{array}\label{eq:dac} \end{equation} pump-anti-Stokes mode \begin{equation} \begin{array}{lcl} D_{a_{j}d} & = & \left|f_{3}\right|^{2}\left|\alpha_{j}\right|^{2}\left|\delta\right|^{2}\left\{ \left|\alpha_{i}\right|^{2}+\sigma_{l}\left(\left|\gamma\right|^{2}+\left(\left|\delta\right|^{2}-1\right)\right)\right\} \\ & + & \left\{ \left(f_{1}^{*}f_{7}+l_{1}^{*}l_{2}f_{1}^{*}f_{2}\right)\left|\alpha_{j}\right|^{2}\sigma_{l}\beta\gamma^{2}\delta^{*}+{\rm c.c.}\right\} , \end{array}\label{eq:dad} \end{equation} Stokes-vibration mode \begin{equation} \begin{array}{lcl} D_{bc} & = & 
\left|g_{2}\right|^{2}\left|\alpha_{i}\right|^{2}\left(2\left|\gamma\right|^{2}+1\right)+\left\{ g_{1}^{*}g_{2}\alpha_{i}\beta^{*}\gamma^{*}\right.\\ & + & g_{1}^{*}g_{5}\left|\beta\right|^{2}\left|\gamma\right|^{2}\sigma_{l}+g_{1}^{*}g_{6}\sigma_{l}\beta^{*}\gamma^{*2}\delta\\ & + & \left.h_{2}^{*}h_{1}g_{1}^{*}g_{2}\left|\alpha_{i}\right|^{2}\left|\beta\right|^{2}+h_{3}^{*}h_{1}g_{1}^{*}g_{2}\alpha_{i}^{2}\beta^{*}\delta^{*}+{\rm c.c.}\right\} , \end{array}\label{eq:dbc} \end{equation} Stokes-anti-Stokes mode \begin{equation} \begin{array}{lcl} D_{bd} & = & \left\{ l_{1}^{*}l_{3}\alpha_{i}^{2}\beta^{*}\delta^{*}+{\rm c.c.}\right\} ,\end{array}\label{eq:dbd} \end{equation} and vibration-anti-Stokes mode \begin{equation} \begin{array}{lcc} D_{cd} & = & -\left|l_{2}\right|^{2}\left|\gamma\right|^{2}\left|\delta\right|^{2}\sigma_{l}.\end{array}\label{eq:dcd} \end{equation} \end{subequations} From the obtained expressions, the presence of intermodal antibunching in various compound modes is shown in Figure \ref{fig:Ant}. We could detect antibunching in all possible compound modes except the pump-vibration and Stokes-vibration modes. It is interesting to observe that the depth of the nonclassicality witness increases with the increase in the number of pump modes in the hyper-Raman process for the same values of the coupling constants. On top of that, two arbitrary pump modes are also found to possess intermodal antibunching, as shown in Figure \ref{fig:Ant} (b). \subsection{Lower and higher order entanglement} Inseparability of various modes can be analyzed using the HZ-I and HZ-II criteria of entanglement given in Eqs. (\ref{hoe-criteria}) and (\ref{hoe-criteria-2}).
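Before turning to the compact expressions for the hyper-Raman modes, it is instructive to evaluate the lowest-order ($m=n=1$) witness numerically for a simple benchmark. The sketch below is an illustration only, not the solution of the present Hamiltonian: it assumes the lowest-order HZ-II form $\left\langle N_{a}\right\rangle \left\langle N_{b}\right\rangle -\left|\left\langle ab\right\rangle \right|^{2}<0$ and uses a two-mode squeezed vacuum as a stand-in state, for which the witness has the closed-form value $-\sinh^{2}r$.

```python
from math import cosh, sinh, tanh

def hz2_witness_tmsv(r, cutoff=80):
    """Lowest-order HZ-II witness <N_a><N_b> - |<ab>|^2 for the two-mode
    squeezed vacuum |psi> = sech(r) * sum_n tanh(r)^n |n, n>, evaluated by
    truncating the Fock expansion. A negative value signals entanglement."""
    c = [tanh(r) ** n / cosh(r) for n in range(cutoff)]
    n_mean = sum(n * cn * cn for n, cn in enumerate(c))      # <N_a> = <N_b>
    ab = sum(n * c[n - 1] * c[n] for n in range(1, cutoff))  # ab|n,n> = n|n-1,n-1>
    return n_mean * n_mean - ab * ab
```

For any $r>0$ the witness is negative and converges to the analytic value $-\sinh^{2}r$ as the Fock-space cutoff grows, confirming that HZ-II detects this entangled state at all squeezing strengths.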
For the two arbitrary pump modes the compact expression is obtained as follows \begin{subequations} \begin{widetext} \begin{equation} \begin{array}{lcl} \left(\begin{array}{c} E_{a_{j},a_{r}}^{m,n}\\ E_{a_{j},a_{r}}^{\prime m,n} \end{array}\right) & = & \left|f_{2}\right|^{2}\left|\alpha_{j}\right|^{2\left(m-1\right)}\left|\alpha_{r}\right|^{2\left(n-1\right)}\left[\left|\alpha_{i}\right|^{2}\left\{ -mn\left|\alpha_{j}\right|^{2}\left|\alpha_{r}\right|^{2}\left(1+\left|\beta\right|^{2}+\left|\gamma\right|^{2}\right)\pm\left|\beta\right|^{2}\left|\gamma\right|^{2}\right.\right.\\ & \times & \left.\left(m^{2}n^{2}+m^{2}\left(2n\pm1\right)\left|\alpha_{r}\right|^{2}+n^{2}\left(2m\pm1\right)\left|\alpha_{j}\right|^{2}\right)\right\} +\left\{ m^{2}\left(1+\left|\alpha_{r}\right|^{2}\right)\left|\alpha_{r}\right|^{2}+n^{2}\left(1+\left|\alpha_{j}\right|^{2}\right)\left|\alpha_{j}\right|^{2}\right.\\ & \pm & \left.\left.mn\left|\alpha_{j}\right|^{2}\left|\alpha_{r}\right|^{2}\right\} \sigma_{l}\left|\beta\right|^{2}\left|\gamma\right|^{2}\right]+\left|f_{3}\right|^{2}\left|\alpha_{j}\right|^{2\left(m-1\right)}\left|\alpha_{r}\right|^{2\left(n-1\right)}\left[\left|\alpha_{i}\right|^{2}\left\{ \left|\delta\right|^{2}\left(m^{2}\left(1+\left|\alpha_{r}\right|^{2}\right)\left|\alpha_{r}\right|^{2}\right.\right.\right.\\ & + & \left.n^{2}\left(1+\left|\alpha_{j}\right|^{2}\right)\left|\alpha_{j}\right|^{2}+\left(m^{2}\left(1\pm2n\right)\left|\alpha_{r}\right|^{2}+n^{2}\left(1\pm2m\right)\left|\alpha_{j}\right|^{2}\pm m^{2}n^{2}\right)\left|\gamma\right|^{2}\right)\pm mn\left|\alpha_{j}\right|^{2}\left|\alpha_{r}\right|^{2}\\ & \times & \left.\left(\left|\delta\right|^{2}-\left|\gamma\right|^{2}\right)\right\} +\sigma_{l}\left(\left|\gamma\right|^{2}+1\right)\left|\delta\right|^{2}\left\{ m^{2}\left|\alpha_{r}\right|^{2}\left(1+\left|\alpha_{r}\right|^{2}\right)+n^{2}\left(1+\left|\alpha_{j}\right|^{2}\right)\left|\alpha_{j}\right|^{2}\pm mn\left|\alpha_{j}\right|^{2}\right.\\ 
& \times & \left.\left.\left|\alpha_{r}\right|^{2}\right\} \right]+F_{a\pm}\pm mn\left|\alpha_{j}\right|^{2\left(m-2\right)}\left|\alpha_{r}\right|^{2\left(n-2\right)}\left[f_{2}^{*}f_{1}\alpha_{j}\alpha_{r}\alpha_{i}\beta^{*}\gamma^{*}+f_{3}^{*}f_{1}\alpha_{j}\alpha_{r}\alpha_{i}\gamma\delta^{*}+\left(f_{2}^{*2}f_{1}^{2}\right.\right.\\ & \times & \left.\alpha_{j}^{2}\alpha_{r}^{2}\alpha_{i}^{2}\beta^{*2}\gamma^{*2}+f_{3}^{*2}f_{1}^{2}\alpha_{j}^{2}\alpha_{r}^{2}\alpha_{i}^{2}\gamma^{2}\delta^{*2}\right)\left(\left(m-1\right)\left|\alpha_{r}\right|^{2}+\left(n-1\right)\left|\alpha_{j}\right|^{2}+\frac{\left(m-1\right)\left(n-1\right)}{2}\right)\\ & + & \alpha_{j}^{2}\alpha_{r}^{2}\alpha_{i}^{2}\beta^{*}\delta^{*}\left\{ f_{8}^{*}f_{1}\left(\left(2\left|\alpha_{j}\right|^{2}+m-1\right)\left|\alpha_{r}\right|^{2}+\left(n-1\right)\left(\left|\alpha_{j}\right|^{2}+\frac{m-1}{2}\right)\right)+f_{1}^{2}f_{2}^{*}f_{3}^{*}\left(\left|\gamma\right|^{2}\left(n-1\right)\right.\right.\\ & \times & \left.\left.\left(2\left|\alpha_{j}\right|^{2}+m-1\right)+\left(n-1\right)\left(\left|\alpha_{j}\right|^{2}+\frac{\left(m-1\right)}{2}\right)+2\left(m-1\right)\left|\alpha_{r}\right|^{2}\left(2\left|\gamma\right|^{2}+1\right)+\left|\alpha_{r}\right|^{2}\left|\gamma\right|^{2}\right)\right\} \\ & + & f_{2}^{*}f_{3}\left|\alpha_{j}\right|^{2}\left|\alpha_{r}\right|^{2}\beta^{*}\gamma^{*2}\delta\left\{ \left|\alpha_{i}\right|^{2}\left(n^{2}\left(1\pm2m\right)\left|\alpha_{j}\right|^{2}+m^{2}\left(1\pm2n\right)\left|\alpha_{r}\right|^{2}\pm m^{2}n^{2}\right)\right.\\ & + & \left.\left.\left.\sigma_{l}\left(n^{2}\left(1+\left|\alpha_{j}\right|^{2}\right)\left|\alpha_{j}\right|^{2}\right.+m^{2}\left(1+\left|\alpha_{r}\right|^{2}\right)\left|\alpha_{r}\right|^{2}\pm mn\left|\alpha_{j}\right|^{2}\left|\alpha_{r}\right|^{2}\right)\right\} +{\rm c.c.}\right], \end{array}\label{eq:aiaj} \end{equation} \end{widetext} \noindent where \[ \begin{array}{lcl} F_{a+} & = & 
mn\left|\alpha_{j}\right|^{2\left(m-1\right)}\left|\alpha_{r}\right|^{2\left(n-1\right)}\left(2n\left|\alpha_{j}\right|^{2}+2m\left|\alpha_{r}\right|^{2}\right.\\ & + & \left.mn\right)\left\{ \left|f_{2}\right|^{2}\sigma_{l}+\left|f_{3}\right|^{2}\left|\delta\right|^{2}\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\left(\left|\gamma\right|^{2}+1\right)\right)\right.\\ & + & \left.\left(f_{2}^{*}f_{3}\sigma_{l}\beta^{*}\gamma^{*2}\delta+{\rm c.c.}\right)\right\} \end{array} \] and $F_{a-}=0$. The analytic expressions of HZ-I and HZ-II for the pump-Stokes mode are obtained as \begin{equation} \begin{array}{lcl} \left(\begin{array}{c} E_{a_{j},b}^{m,n}\\ E_{a_{j},b}^{\prime m,n} \end{array}\right) & = & \left|f_{2}\right|^{2}\left|\alpha_{j}\right|^{2\left(m-1\right)}\left|\beta\right|^{2\left(n-1\right)}\left(n\left|\alpha_{j}\right|^{2}\mp m\left|\beta\right|^{2}\right)\\ & \times & \left(n\left|\alpha_{j}\right|^{2}\left|\alpha_{i}\right|^{2}-m\sigma_{l}\left|\beta\right|^{2}\left|\gamma\right|^{2}\right)+m^{2}\left|f_{3}\right|^{2}\\ & \times & \left|\alpha_{j}\right|^{2\left(m-1\right)}\left|\beta\right|^{2n}\left|\delta\right|^{2}\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\left(\left|\gamma\right|^{2}+1\right)\right)\\ & + & m\left|\alpha_{j}\right|^{2\left(m-1\right)}\left|\beta\right|^{2\left(n-1\right)}\sigma_{l}\\ & \times & \left\{ \left(mf_{2}^{*}f_{3}\left|\beta\right|^{2}\pm ng_{1}^{*}g_{6}\left|\alpha_{j}\right|^{2}\right)\beta^{*}\gamma^{*2}\delta+{\rm c.c.}\right\} . 
\end{array}\label{ab} \end{equation} Similar results for the pump-vibration and pump-anti-Stokes modes are obtained as \begin{equation} \begin{array}{lcl} \left(\begin{array}{c} E_{a_{j},c}^{m,n}\\ E_{a_{j},c}^{\prime m,n} \end{array}\right) & = & \left|f_{2}\right|^{2}\left(n\left|\alpha_{j}\right|^{2}\left|\alpha_{i}\right|^{2}-m\sigma_{l}\left|\beta\right|^{2}\left|\gamma\right|^{2}\right)\\ & \times & \left(n\left|\alpha_{j}\right|^{2}\mp m\left|\gamma\right|^{2}\right)\left|\alpha_{j}\right|^{2\left(m-1\right)}\left|\gamma\right|^{2\left(n-1\right)}\\ & + & \left|f_{3}\right|^{2}\left|\alpha_{j}\right|^{2\left(m-1\right)}\left|\gamma\right|^{2\left(n-1\right)}\left[\left|\alpha_{i}\right|^{2}\right.\\ & \times & \left\{ n^{2}\left(1\pm2m\right)\left|\alpha_{j}\right|^{2}\left|\delta\right|^{2}+m^{2}\left(1\pm2n\right)\left|\gamma\right|^{2}\right.\\ & \times & \left.\left|\delta\right|^{2}\mp mn\left|\alpha_{j}\right|^{2}\left|\gamma\right|^{2}\pm m^{2}n^{2}\left|\delta\right|^{2}\right\} \\ & + & \left\{ n^{2}\left(1+\left|\alpha_{j}\right|^{2}\right)\left|\alpha_{j}\right|^{2}+m^{2}\left(1+\left|\gamma\right|^{2}\right)\left|\gamma\right|^{2}\right.\\ & \pm & \left.\left.mn\left|\alpha_{j}\right|^{2}\left|\gamma\right|^{2}\right\} \sigma_{l}\left|\delta\right|^{2}\right]+F_{c\pm}\\ & \pm & mn\left|\alpha_{j}\right|^{2\left(m-2\right)}\left|\gamma\right|^{2\left(n-2\right)}\left[f_{3}^{*}f_{1}\left|\alpha_{j}\right|^{2}\left|\gamma\right|^{2}\right.\\ & \times & \alpha_{j}\alpha_{i}\gamma\delta^{*}+f_{2}^{*}f_{3}\left|\alpha_{j}\right|^{2}\left|\alpha_{i}\right|^{2}\beta^{*}\gamma^{*2}\delta\\ & \times & \left\{ m\left|\gamma\right|^{2}-\left(n-1\right)\left|\alpha_{j}\right|^{2}\right\} +h_{3}^{*}h_{2}\left|\gamma\right|^{2}\alpha_{j}^{2}\alpha_{i}^{2}\\ & \times & \beta^{*}\delta^{*}\left\{ n\left|\alpha_{j}\right|^{2}-\left(m-1\right)\left|\gamma\right|^{2}\right\} +\left|\alpha_{j}\right|^{2}\sigma_{l}\\ & \times & \beta^{*}\gamma^{*2}\delta\left\{ 
\frac{m}{n}f_{2}^{*}f_{3}\left|\gamma\right|^{4}\pm h_{1}^{*}h_{6}\left|\alpha_{j}\right|^{2}\left|\gamma\right|^{2}\right.\\ & \pm & \left.\left.\left(n-1\right)\left|\alpha_{j}\right|^{2}\left(f_{1}^{*}f_{9}-f_{2}^{*}f_{3}\right)\right\} +{\rm c.c.}\right], \end{array}\label{ac} \end{equation} \begin{widetext} \begin{figure} \centering{}\includegraphics[scale=0.8]{hyRa-Ent}\caption{\label{fig:Ent}(Color online) The presence of lower and higher order entanglement in Stokes-anti-Stokes ((a)-(b)) and Stokes-vibration ((c)-(d)) modes is established using HZ1 ((a) and (c)) and HZ2 ((b) and (d)) criteria. The lower order entanglement in (a) is amplified 100 times, while higher order entanglement with $m=1,\,n=2$ is amplified 50 times. The dot-dashed-magenta and dotted-black lines in (b) and (c) correspond to the phase angle $\phi_{1}=\frac{\pi}{2}$ and $-\frac{\pi}{2}$, respectively, in $\alpha_{1}=\left|\alpha_{1}\right|\exp\left(i\phi_{1}\right)$. In (c), the lower order entanglement is shown after multiplying by 100, whereas in (d), the lower order entanglement is amplified 10 times and $m=1,\,n=2$ and $m=2,\,n=2$ are amplified $10^{5}$ and $10^{3}$ times, respectively. In all the cases, $\left|\alpha_{i}\right|=10$ has been used. } \end{figure} \begin{figure} \centering{}\includegraphics{hyRaSp}\caption{\label{fig:Spont}(Color online) The presence of lower and higher order nonclassicality in pump modes in the spontaneous case. Intermodal (a) squeezing, (b) antibunching, and (c) entanglement for compound mode $a_{j}a_{r}$ are shown for the same values as in the corresponding plots in Figures \ref{fig:SqA}-\ref{fig:Ent-A} for the stimulated case. It is important to note that the values for HZ1 and HZ2 are obtained to be the same. In (d), squeezing in quadrature $X_{a_{j}}$ for the 2-pump modes stimulated case with the value of frequency $\left(\frac{\omega_{i}}{g}\right)$ for the $a_{j}$ mode varied between $35\times10^{6}$ and $65\times10^{6}$ in steps of $2\times10^{6}$. 
The arrow indicates the variation in quadrature squeezing due to the increase in frequency.} \end{figure} \end{widetext} \noindent where \[ \begin{array}{lcl} F_{c+} & = & mn\sigma_{l}\left|\alpha_{j}\right|^{2\left(m-1\right)}\left|\gamma\right|^{2\left(n-1\right)}\left\{ \left(n\left|\alpha_{j}\right|^{2}+m\left|\gamma\right|^{2}\right)\right.\\ & \times & \left.2\left|f_{3}\right|^{2}\left|\delta\right|^{2}+\left(mf_{2}^{*}f_{3}\beta^{*}\gamma^{*2}\delta+{\rm c.c.}\right)\right\} \end{array} \] and $F_{c-}=0$; and \begin{equation} \begin{array}{lcl} \left(\begin{array}{c} E_{a_{j},d}^{m,n}\\ E_{a_{j},d}^{\prime m,n} \end{array}\right) & = & m\left|f_{3}\right|^{2}\left|\alpha_{j}\right|^{2\left(m-1\right)}\left(\left|\alpha_{i}\right|^{2}+\sigma_{l}\left(\left|\gamma\right|^{2}+1\right)\right)\\ & \times & \left|\delta\right|^{2n}\left(m\left|\delta\right|^{2}\mp n\left|\alpha_{j}\right|^{2}\right)+m^{2}\left|f_{2}\right|^{2}\left|\alpha_{j}\right|^{2\left(m-1\right)}\\ & \times & \left|\delta\right|^{2n}\left|\beta\right|^{2}\left|\gamma\right|^{2}\sigma_{l}+m\left|\alpha_{j}\right|^{2\left(m-1\right)}\left|\delta\right|^{2\left(n-1\right)}\sigma_{l}\\ & \times & \left\{ \left(mf_{2}^{*}f_{3}\left|\delta\right|^{2}+nl_{4}^{*}l_{1}\left|\alpha_{j}\right|^{2}\right)\beta^{*}\gamma^{*2}\delta+{\rm c.c.}\right\} , \end{array}\label{ad} \end{equation} respectively. The analysis of the obtained analytic expressions of entanglement of an arbitrary pump mode with all the remaining modes revealed some interesting results. Specifically, all the pump modes are found to be entangled with the vibration and anti-Stokes modes, as shown in Figure \ref{fig:Ent-A} (b), (c) and (d). Importantly, the bipartite entanglement between a pump mode and the vibration mode could only be ensured for the initial evolution of the system. However, as the criteria used here are only sufficient, not necessary, the separability of these two modes cannot be deduced.
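The sufficiency-only character of such witnesses can be made concrete with a small sketch. The toy states below are illustrations only (not the hyper-Raman solution), and the lowest-order $m=n=1$ forms $\left\langle N_{a}N_{b}\right\rangle -\left|\left\langle ab^{\dagger}\right\rangle \right|^{2}$ (HZ-I) and $\left\langle N_{a}\right\rangle \left\langle N_{b}\right\rangle -\left|\left\langle ab\right\rangle \right|^{2}$ (HZ-II) are assumed: a single-photon Bell-like state is caught by HZ-I but missed by HZ-II, while a truncated two-mode squeezed vacuum shows exactly the opposite behavior, so neither criterion alone can certify separability.

```python
from math import cosh, sqrt, tanh

def apply_a(state, mode):
    """Annihilation operator acting on a dict {(n_a, n_b): real amplitude}."""
    out = {}
    for (na, nb), amp in state.items():
        n = (na, nb)[mode]
        if n > 0:
            key = (na - 1, nb) if mode == 0 else (na, nb - 1)
            out[key] = out.get(key, 0.0) + sqrt(n) * amp
    return out

def apply_adag(state, mode):
    """Creation operator on the same representation."""
    out = {}
    for (na, nb), amp in state.items():
        key = (na + 1, nb) if mode == 0 else (na, nb + 1)
        out[key] = out.get(key, 0.0) + sqrt(key[mode]) * amp
    return out

def overlap(s1, s2):
    return sum(a * s2.get(k, 0.0) for k, a in s1.items())

def hz_witnesses(state):
    """Return (HZ-I, HZ-II) lowest-order witnesses; negative => entangled."""
    na_nb = sum(na * nb * amp * amp for (na, nb), amp in state.items())
    n_a = sum(na * amp * amp for (na, nb), amp in state.items())
    n_b = sum(nb * amp * amp for (na, nb), amp in state.items())
    ab_dag = overlap(state, apply_a(apply_adag(state, 1), 0))  # <a b†>
    ab = overlap(state, apply_a(apply_a(state, 1), 0))         # <a b>
    return na_nb - ab_dag ** 2, n_a * n_b - ab ** 2

# (|1,0> + |0,1>)/sqrt(2): HZ-I detects entanglement, HZ-II does not.
bell = {(1, 0): 1 / sqrt(2), (0, 1): 1 / sqrt(2)}
# Truncated two-mode squeezed vacuum: HZ-II detects, HZ-I does not.
r = 0.5
tmsv = {(n, n): tanh(r) ** n / cosh(r) for n in range(40)}
```

For the Bell-like state the witnesses evaluate to $\left(-1/4,\,+1/4\right)$, and for the two-mode squeezed vacuum their signs are reversed.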
One significant result, which would be absent in the Raman or degenerate hyper-Raman process due to the existence of a single pump mode, is entanglement between two pump modes (cf. Figure \ref{fig:Ent-A} (a)). The present results establish that two pump modes are always entangled in the non-degenerate hyper-Raman process. This interesting result adds to a set of systems able to produce always entangled pump modes \cite{thapliyal2014higher,thapliyal2014nonclassical} and bosonic modes \cite{giri2017nonclassicality}. A similar study for all the modes, except the pump modes, resulted in the following compact analytic expressions for the Stokes-vibration, vibration-anti-Stokes, and Stokes-anti-Stokes modes \begin{widetext} \begin{equation} \begin{array}{lcl} \left(\begin{array}{c} E_{b,c}^{m,n}\\ E_{b,c}^{\prime m,n} \end{array}\right) & = & \left|g_{2}\right|^{2}\left|\beta\right|^{2\left(m-1\right)}\left|\gamma\right|^{2\left(n-1\right)}\left\{ m^{2}\left(1\pm2n\right)\left|\alpha_{i}\right|^{2}\left|\gamma\right|^{2}+n^{2}\left(1\pm2m\right)\left|\alpha_{i}\right|^{2}\left|\beta\right|^{2}\right.\\ & \pm & \left.m^{2}n^{2}\left|\alpha_{i}\right|^{2}\mp mn\sigma_{l}\left|\beta\right|^{2}\left|\gamma\right|^{2}\right\} +n^{2}\left|h_{3}\right|^{2}\sigma_{l}\left|\beta\right|^{2m}\left|\gamma\right|^{2\left(n-1\right)}\left|\delta\right|^{2}\\ & \pm & mn\left|\beta\right|^{2\left(m-2\right)}\left|\gamma\right|^{2\left(n-2\right)}\left[g_{2}^{*}g_{1}\left|\beta\right|^{2}\left|\gamma\right|^{2}\alpha_{i}^{*}\beta\gamma+nh_{3}^{*}h_{2}\left|\beta\right|^{2}\left|\gamma\right|^{2}\alpha_{i}^{2}\beta^{*}\delta^{*}\right.\\ & + & g_{6}^{*}g_{1}\sigma_{l}\left|\beta\right|^{2}\beta\gamma^{2}\delta^{*}\left(2\left|\gamma\right|^{2}+n-1\right)+\left(n-1\right)h_{1}^{*2}h_{2}h_{3}\left|\alpha_{i}\right|^{2}\left|\beta\right|^{2}\beta^{*}\gamma^{*2}\delta\\ & + & \left.g_{2}^{*2}g_{1}^{2}\alpha_{i}^{*2}\beta^{2}\gamma^{2}\left\{ 
\frac{1}{2}\left(m-1\right)\left(n-1\right)+\left(n-1\right)\left|\beta\right|^{2}+\left(m-1\right)\left|\gamma\right|^{2}\right\} +{\rm c.c.}\right], \end{array}\label{bc} \end{equation} \end{widetext} \begin{equation} \begin{array}{lcl} \left(\begin{array}{c} E_{c,d}^{m,n}\\ E_{c,d}^{\prime m,n} \end{array}\right) & = & m^{2}\left|h_{2}\right|^{2}\left|\alpha_{i}\right|^{2}\left|\gamma\right|^{2\left(m-1\right)}\left|\delta\right|^{2n}\\ & + & \left|l_{2}\right|^{2}\left|\gamma\right|^{2\left(m-1\right)}\left|\delta\right|^{2n}\sigma_{l}\left[m^{2}\left|\delta\right|^{2}\mp mn\left|\gamma\right|^{2}\right], \end{array}\label{cd} \end{equation} and \begin{equation} \begin{array}{lcl} \left(\begin{array}{c} E_{b,d}^{m,n}\\ E_{b,d}^{\prime m,n} \end{array}\right) & = & m^{2}\left|g_{2}\right|^{2}\left|\alpha_{i}\right|^{2}\left|\beta\right|^{2\left(m-1\right)}\left|\delta\right|^{2n}\\ & \pm & mn\left|\beta\right|^{2\left(m-1\right)}\left|\delta\right|^{2\left(n-1\right)}\left[l_{1}^{*}l_{3}\alpha_{i}^{2}\beta^{*}\delta^{*}+{\rm c.c.}\right], \end{array}\label{bd} \end{equation} respectively. \end{subequations} Entanglement between the modes other than the pump mode is also a topic of prime interest in some of the recent studies on the Raman and degenerate hyper-Raman processes \cite{sen2013intermodal,giri2016higher}. The present results clearly reestablish that the non-separability criteria are only sufficient, as one of the criteria (either HZ1 or HZ2) detects entanglement while the other one fails to detect entanglement in the same regimes of various parameters (cf. Figure \ref{fig:Ent} (a)-(b) or (c)-(d)). The present results show that the Stokes-anti-Stokes and Stokes-vibration modes are both lower and higher order entangled. Finally, before we conclude the paper, it is customary to check the possibility of nonclassical behavior that can be observed even under the spontaneous condition.
The present results show that intermodal squeezing, antibunching, and entanglement between different pump modes can be observed in the spontaneous case, too (cf. Figure \ref{fig:Spont}). In Figure \ref{fig:Spont} (d), we also establish the effect of a change in the frequency of the input pump beams in the spontaneous case, but it should be noted that a similar nature can be observed in the stimulated case as well. It is also worth noting that in the partially spontaneous case, when one (or two) of the modes other than the pump modes initially has a non-zero number of photons, all the nonclassicality observed in the spontaneous case will also survive. On top of that, certain other nonclassical behaviors may appear. Specifically, for non-zero photons in the Stokes mode, intermodal squeezing and antibunching in the pump-Stokes compound mode can also be observed. \section{Conclusion \label{sec:Conclusion}} Here, we have obtained a completely quantum mechanical solution of the most general case of the hyper-Raman process, i.e., with $k$ non-degenerate pump modes. Our endeavor to obtain the Sen-Mandal perturbative solution for this most general Hamiltonian, describing the multi-mode non-degenerate hyper-Raman process, resulted in a solution quite general in nature. This general nature of the present Hamiltonian and the corresponding solution allowed us to deduce all the existing Sen-Mandal and short-time solutions for the Raman and 2-pump mode degenerate hyper-Raman processes. This reduction establishes the wide applicability of the present results for all the Raman and hyper-Raman processes. Further, the present study also revealed various interesting results. Specifically, the most significant property of the present system is the presence of more than one non-degenerate pump mode. Therefore, the nonclassical features reported in an arbitrary single pump mode and in the compound two-pump modes constitute our most significant contribution.
The present study revealed that an arbitrary single pump mode shows both lower and higher order squeezing and antibunching, while the compound pump-pump mode possesses all the nonclassical properties studied here, i.e., intermodal squeezing, antibunching, and entanglement. The pump mode also shows compound mode nonclassicalities with the Stokes, vibration, and anti-Stokes modes. Specifically, the compound pump-Stokes mode shows both intermodal squeezing and antibunching; the compound pump-vibration mode exhibits intermodal squeezing and lower and higher order entanglement; and intermodal squeezing, antibunching, and lower and higher order entanglement are present in the compound pump-anti-Stokes mode. Higher order entanglement in terms of multimode entanglement (as studied in Refs. \cite{thapliyal2014higher,thapliyal2014nonclassical}) is not studied here, as it is already shown that, due to self-interaction, two arbitrary pump modes are always entangled. Therefore, it is expected that all the pump modes would form a $k$-partite entangled state. In addition, all the nonclassical properties observed in Raman or degenerate hyper-Raman processes (\cite{sen2005squeezed,sen2007quantum,sen2007squeezing,sen2008amplitude,perina1991quantum} and references therein) are also found to be present in the multi-mode non-degenerate hyper-Raman process. Precisely, the presence of intermodal antibunching and of both lower and higher order entanglement in the compound vibration-anti-Stokes and Stokes-anti-Stokes modes has been established. Interestingly, most of the nonclassical properties of the hyper-Raman process under consideration survive even in the spontaneous case. To be specific, intermodal squeezing, antibunching, and lower and higher order entanglement between two arbitrary pump modes are observed in the spontaneous case. Further, squeezing and intermodal squeezing involving the pump mode are found to depend on the frequency and the number of photons in the pump mode under consideration.
It is also observed to vary with the number of non-degenerate pump modes. Additionally, intermodal antibunching and entanglement are phase-dependent properties and can be controlled by the phases of the pump modes. The nonclassical behavior of hyper-Raman processes can also be established with the help of quasidistribution functions \cite{thapliyal2015quasiprobability}, an analysis that will be performed in the near future. We conclude this paper with the hope that the growing experimental facilities and techniques will lead to the experimental realization of the single mode and intermodal nonclassical properties observed in the pump and other modes in the present work. \textbf{Acknowledgment:} KT acknowledges support from the Council of Scientific and Industrial Research, Government of India. AP thanks the Department of Science and Technology (DST), India for the support provided through the project number EMR/2015/000393. JP thanks the support from project LO1305 of the Ministry of Education, Youth and Sports of the Czech Republic. \bibliographystyle{apsrev4-1}
\section{Introduction} \label{intro} The context of smart cities relates to the capability of analyzing and responding to specific requests from residents. In addition to social media networks such as Twitter and Chinese Weibo, an important source of city events down to the county level is the service requests reported directly by metropolitan residents. These requests arrive through phone calls, emails, and chatbots at metropolitan residential service centers, where human agents manually dispatch them to the corresponding service sectors. However, the processing capacity of call centers is bounded by the limitations of human operation. An artificial intelligence-enabled system therefore helps to automatically convert audio-based requests into text and dispatch each request to the corresponding service sector, improving service responsiveness. The core of such an automated request analysis and dispatching system is a machine learning model that classifies a request expressed in natural language to the organization responsible for handling the case. To scope the research problem, this paper assumes that all requests are text-based; any audio request is first converted to text. Classifying these text-based request data raises two issues. First, producing suitable labels for classification requires processing the raw dataset. One could assume that the field called the responsible department in the original dataset should provide the labels. However, the responsible departments are also given as natural language descriptions that contain abbreviations, location information, and imprecise renamings of departments. In addition, future requests may even introduce new department names that are not part of the historical labels. 
The second issue is that, by nature, requests are not evenly distributed over the corresponding responsible departments. Some departments receive only a small number of requests, which makes the overall dataset unbalanced. In this paper, we propose a hybrid method to address these two issues. Our method consists of (1) an NLP processing workflow for feature extraction; (2) a clustering algorithm that generates meta-classes for hierarchical training; and (3) multiple classifiers, with a Naive Bayes model, a fully connected neural network model, and a residual neural network model, to produce an optimal inference for a given request. By training the classification with generated meta-classes, we gain insights into the classification errors. A use case of our hybrid method has been developed on over 80,000 sample requests in Chinese that involve 157 responsible departments. Our method achieves a testing precision of 76.42\% and a log loss of 1.192 on 35,663 test samples collected in a different time period than the training datasets. Our code and samples of the dataset are open-sourced on Github\footnote{https://github.com/OneClickDeepLearning/classificationOfResidentialRequests}. The \textbf{contribution} of this paper is three-fold as follows: \begin{itemize} \item We build an NLP-based feature engineering workflow with a rigorous measure of feature prediction power using information values; \item We demonstrate a hybrid machine learning method that combines unsupervised and supervised machine learning to classify residential requests with an unbalanced data sample distribution; \item We develop three classifiers and ensemble the best classification result, comparing the classification performance with two benchmarking models. \end{itemize} The structure of this paper is organized as follows: Section 2 presents the related work. In Section~\ref{sub:featureextraction}, we introduce our dataset and feature engineering techniques. 
Section~\ref{sec:Hierarchical} describes a hierarchical classification method that generates meta-classes for classifying a large number of classes. The hybrid machine learning models are presented in Section~\ref{sec:hybrid}. Finally, we present our assessment metrics and experimental results on classification performance in Section~\ref{sec:evaluation}, and conclude the paper in Section~\ref{sec:conclusion}. \section{Related Work} \label{sec:1} \textbf{Learning from few labeled data.} Kamal et al. proposed an Expectation-Maximization based method~\cite{nigam2000text} that trains a classifier with few labeled data, uses the classifier to label high-confidence unlabelled data, then trains a new classifier on the enlarged labeled set, repeating this process until the classifier converges. Blum et al.~\cite{blum1998combining} proposed a co-training method that allows learning from two views of the features. Rajat et al. proposed a Self-taught method that uses an Auto-Encoder to learn higher-level representations from unlabelled data~\cite{raina2007self}. \textbf{Imbalance dataset.} Nitesh et al. presented SMOTE (Synthetic Minority Over-sampling Technique) \cite{chawla2002smote}, which over-samples the minority class by creating synthetic minority samples that fall between the minority data and their nearest neighbors. If the distance between the minority data and the neighbors is large, the synthetic data will have a large range of noise. Hui et al. presented Borderline-SMOTE \cite{han2005borderline}, which ensures that sampling happens only when the majority of the selected neighbors are minority data, lowering the randomness of the noise. \textbf{Ensemble of Classifiers.} Breiman proposed a bootstrap aggregating method \cite{breiman1996bagging} that uses bootstrap to extract several sub training sets from the original training set and trains a classifier for each sub training set. Each classifier then gives a weighted vote for the classification. 
Yoav and Robert proposed a boost-based method called AdaBoost \cite{freund1997decision} that uses weighted data to train weak classifiers. The data misclassified by a weak classifier gains more weight when training the next weak classifier, and the weak classifiers are weighted to form a confident classifier. Based on the AdaBoost method, Wei et al. proposed cost-sensitive Boosting \cite{fan1999adacost}, which increases the weight of misclassified training data that have a higher cost. \textbf{Hierarchical Classification.} Pei-Yi et al. proposed a hierarchical SVM text classification method that splits a problem into sub-problems in the classification tree that can be solved accurately and efficiently \cite{hao2007hierarchically}. A hierarchical softmax architecture \cite{Bengio2015} was proposed by Morin and Bengio for dealing with a huge number of output classes. It significantly speeds up training compared to feed-forward networks. \textbf{Convolutional Neural Networks on NLP Tasks.} Convolutional Neural Network (CNN) models have been demonstrated to perform well in the NLP task of sentence classification~\cite{Cybenko1989}~\cite{Zhang:2015:CCN:2969239.2969312}. A CNN model consists of layers of convolutions with a non-linear activation function such as \textit{ReLU} or \textit{tanh}. In a CNN model, convolutions over the input layer are used to compute the output. As illustrated by Zhang~\cite{DBLP:journals/corr/ZhangW15b}, each region of the input is connected to a neuron in the output. Each layer then applies a filter to detect higher-level features. These features further go through a pooling layer to form a univariate feature vector for the penultimate layer. The final softmax layer receives this feature vector as input and uses it to classify the sentence. Kim applied a single convolutional layer on top of Word2Vec embeddings \cite{kim2014convolutional} of words in a sentence to perform classification tasks. 
His work shows that convolutional neural networks, given a good representation of the words, can provide promising results on NLP problems. Conneau et al. proposed a ResNet-like deep convolutional neural network \cite{vdconvfortex} that can learn different hierarchical representations of the input text. They use small convolutions and let the network learn the best combination of the extracted features, which allows the network to be deeper and finer-grained. Prabowo and Thelwall proposed a series of hybrid classifiers \cite{sentimentanalysis} that combine different rule-based classifiers and SVM as a pipeline to solve sentiment analysis problems, achieving good performance. \section{Feature Engineering}\label{sub:featureextraction} Feature engineering processes and transforms the textual dataset into word vectors that serve as inputs to the machine learning models. The original dataset contains eight features: \textit{id}, \textit{time stamp}, four \textit{categories} of responsible departments, \textit{request description}, and \textit{responsible department description}. The feature engineering workflow is depicted in Figure~\ref{fig:featureeng-workflow}. Feature engineering first removes invalid data samples in which the values of the responsible department description are missing or originally marked as non-available. The training dataset contains 849,861 records, including 145,542 records originally marked as invalid and 58,394 records without a responsible department description. The resulting valid dataset contains 645,924 records. \vspace{-0.3in} \begin{figure*}[h] \includegraphics[width=1.0\textwidth]{./figure/featureeng-workflow.png} \caption{The major steps of feature engineering on two features of request description and responsible department description} \label{fig:featureeng-workflow} \end{figure*} \vspace{-0.4in} \subsection{Data Preprocessing} Sentences are segmented into tokens before feature extraction. 
The two major methodologies for tokenization and segmentation are dictionary-based and statistics-based. Dictionary-based methods recognize words based on a maintained vocabulary \cite{mikolov2013efficient}, while statistics-based approaches use a corpus as a resource to build a word-segmentation model. In this paper, we perform tokenization and segmentation with the second method using LTP, a Chinese language technology platform~\cite{LTP}. LTP consists of six Chinese processing modules: 1) Word Segmentation (WordSeg); 2) Part-of-Speech Tagging (POSTag); 3) Named Entity Recognition (NER); 4) Word Sense Disambiguation (WSD); 5) Syntactic Parsing (Parser); and 6) Semantic Role Labeling (SRL). The segmented tokens are then filtered by the lexical analysis modules of LTP to eliminate tokens that include digits, words in other languages, punctuation, and stop words. A combined list of publicly available stop words is applied with the LTP tool. In addition, verbs, adjectives, and adverbs are excluded. Organization names and location-relevant nouns consist of more than one token; the NER module of LTP recognizes and merges these nouns into single words. \subsection{Data Distribution} \label{sub:datadistribution} The feature that contains the labeling information is the \textit{responsible department description}. A simplified illustrating sample is depicted in Figure~\ref{fig:datasample}. The description needs further processing to generate labels for training and inference, because it usually contains location phrases at various levels of metropolitan granularity: national, provincial, county, and local community. This means that records with the same responsible department but different location nouns become separate classes. As a result, the dataset distribution over the labels spreads widely over a large number of classes, and the density of classes is diluted. 
Such a circumstance degrades the training quality and inference accuracy. \vspace{-0.3in} \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{./figure/datasample.png} \caption{One data sample with features of request description and responsible department description} \label{fig:datasample} \end{figure} To solve the above problem, we separate the location nouns from department names and titles; only the organization names and titles remain to generate training labels. For cases where the department names and titles are abbreviated, we set up a dictionary and manually create an entry mapping between the standard full name and any form of variation, including abbreviations. This leads to 157 unique department names and titles, which are considered as the labels for classification. We further plot the data sample distribution over the 157 classes in Figure~\ref{fig:dataDistribution}. Among them, 101 classes have fewer than 1000 samples each (1000 samples correspond to roughly 0.15\% of the valid dataset). \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{./figure/imbalancedata.jpeg} \caption{Data distribution statistic for 157 classes, the x-axis is the index of the classes and the y-axis is the total number of data of a class} \label{fig:dataDistribution} \end{figure} We further explore the dataset characteristics based on these observations of the data distribution. We develop an inverted index of the words in each request description record. The Bag-of-Words (BoW) algorithm \cite{harris1954distributional} is used to build word pairs with their word counts per record; each word and its per-sample counts form a stored vector that allows us to trace the word's occurrences across samples. We then compute the statistics of sample counts grouped by word occurrence frequency. Figure~\ref{fig:wordfrequencydistribution} plots the word frequency distribution. 
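The inverted index and the rare-word statistic described above can be sketched in Python with standard containers; the function names and the exact form of the statistic are our own illustrative choices, not the paper's implementation:

```python
from collections import Counter, defaultdict

def build_inverted_index(tokenized_requests):
    """Map word -> {sample_id: count}, built from Bag-of-Words counts
    per tokenized request; lets us trace word occurrences in samples."""
    index = defaultdict(dict)
    for sample_id, tokens in enumerate(tokenized_requests):
        for word, count in Counter(tokens).items():
            index[word][sample_id] = count
    return index

def samples_with_rare_words(index, max_freq):
    """IDs of samples containing at least one word whose total corpus
    frequency is at most max_freq (the kind of statistic plotted above)."""
    hit = set()
    for word, postings in index.items():
        if sum(postings.values()) <= max_freq:
            hit.update(postings)
    return hit
```

With this index, grouping sample counts by word frequency bands (occurs once, 2 to 5 times, over 50 times) is a matter of calling `samples_with_rare_words` with different thresholds and taking set differences.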
The plot in Figure~\ref{fig:wordfrequencydistribution} is interpreted as follows: 9,382 data samples contain words that occur only once in the whole request description text, and 6,275 data samples contain words that occur 2 to 5 times. Words of high frequency (such as over 50 occurrences) appear in only a limited number of samples (such as 807 samples). Since stop words are already filtered, the plot indicates that words with low frequency should be further measured by their relevance to other words in the feature space. Likewise, we produce the plot of word frequency and distribution over the 157 labels, as shown in Figure~\ref{fig:labelfrequencydistribution}. Our solution is to train a Word2Vec model to generate word embeddings that represent the relations of word tokens in a high-dimensional space. The details are presented in Section~\ref{subsec:featureextraction}. \begin{figure*} \centering \subfigure[Data sample distribution over word frequency in request description]{ \includegraphics[width=0.35\textwidth]{./figure/wordcountdistribution.png}% \label{fig:wordfrequencydistribution} }\hspace{0.2cm} \subfigure[Label distribution over word frequency]{% \includegraphics[width=0.4\textwidth]{./figure/labeldistribution.png} \label{fig:labelfrequencydistribution} } \caption{Data and label distribution} \end{figure*} \subsection{Information Values of Features} \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{./figure/categorysample.png} \caption{Data sample of categories} \label{fig:categorysample} \end{figure} Information values and Weight-of-Evidence (WoE) are feature selection techniques; information values measure the prediction power of a feature. The decision to select the four categories of responsible departments as features is evaluated by their information values. The values of each category are tags edited by customer service operators. A data sample is depicted in Figure~\ref{fig:categorysample}. 
The bottom textbox shows the corresponding responsible department. The four categories represent four levels of tags that together capture the context of the request. The tokenization and segmentation workflow is applied to each category, and we combine the tags of the four categories into a combo for the information value analysis. By statistics, there are 418 unique values of the category tag combo: \[ [tag_{1}, tag_{2}, \ldots, tag_{n} ], \quad n = 418 \] The information values (IVs) are calculated to measure the prediction power of the category tag combos for the class label, i.e., the responsible department. The 157 responsible departments and the 418 category tag combos therefore form a $157 \times 418$ matrix. Each entry of this matrix is denoted $TagCombo_{i,j}$, which represents the count of data samples with tag combo $j$ that belong to responsible department $i$. The notation $NonTagCombo_{i,j}$ then represents the total count of data samples of department $i$ with any tag combo $l \ne j$, that is \[ NonTagCombo_{i,j} = \sum_{l=1, l \ne j}^{418} TagCombo_{i,l} \] The value of WoE is calculated as \[WOE_{i,j} = \ln\left(\frac{TagCombo_{i,j}}{NonTagCombo_{i,j}}\right)\] Hence, we calculate the information value for each category tag combo $j$ and responsible department $i$ as shown in Figure~\ref{fig:ivcalculation}. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{./figure/ivcalculation.png} \caption{Sample Calculation of IV} \label{fig:ivcalculation} \end{figure} Finally, the IV values of each category tag combo are summed over all 157 responsible departments to measure the prediction power of each tag combo: \[ IV_{i,j} = (TagCombo_{j}\% - NonTagCombo_{j}\%) \times WOE_{i,j}\] \[ IV_{TagCombo_{j}} = \sum_{i=1}^{157} IV_{i,j} \] According to the common rule of thumb for interpreting information values, only 13 out of the 418 category combos are weak predictors, i.e., have an IV value in the range (0.02, 0.10]. 
The remaining category combos have IV values below 0.02, which indicates that they are not useful for prediction. In other words, the category combo is not an important feature for classifying responsible departments, so it is not selected as an input feature for the classifiers. \subsection{Feature Extraction} \label{subsec:featureextraction} Feature extraction turns the word tokens into word vectors with numerical values. In this paper, we apply two methods, namely Term frequency-inverse document frequency (TF-IDF) \cite{sparck1972statistical} and Word2Vec \cite{mikolov2013efficient}. \subsubsection{Generating word vectors using TF-IDF} TF-IDF measures the relevance of words rather than their raw frequency. That is, the word counts in the inverted index discussed in Section~\ref{sub:datadistribution} are replaced with TF-IDF relevance weights across the whole dataset. With TF-IDF, the more samples a word appears in, the less valuable that word is as a signal to differentiate a given request. The relevance weight of a word is calculated in Eq.~\ref{eq:weight}: \begin{equation}\label{eq:weight} w_{i,j} = tf_{i,j} \times \log (\frac{N}{df_{i}}) \end{equation} where $N$ is the number of samples, $tf_{i,j}$ represents the number of occurrences of word token $i$ in request sample $j$, and $df_{i}$ represents the number of request samples that contain token $i$. The vector of $w_{i,j}$ is normalized so that it adds up to one over all samples. This TF-IDF vector is used as input to the Naive Bayes classifier discussed in Section~\ref{sub:bayesian}. \subsubsection{Word embedding using Word2Vec}\label{word2vec} Word embedding maps word tokens of varied length to fixed-length word vectors that serve as inputs to the machine learning models. In nature, word embedding reconstructs linguistic contexts of words and produces a vector space. 
The Word2Vec algorithm uses a group of related models, two-layer neural networks that are trained over a large corpus of text to produce a vector space, typically of several hundred dimensions. Each unique word in the corpus is assigned a corresponding vector in the vector space, so that words sharing common contexts in the corpus are located in close proximity to one another \cite{mikolov2013efficient}. We have trained the Continuous Bag-of-Words (CBOW) structure of the Word2Vec model with a corpus collected from the whole dataset. We segment the whole dataset of 659,421 samples into data blobs of every five consecutive words as input, with the central word as the training target. Empirical experiments indicate that using the training dataset as the corpus produces better classification performance; the details of these experiments are not central to the research scope of this paper and are omitted. By statistics, we observe that the word length of a request description ranges from 1 to 780 tokens with a weighted average of 46, and over 90\% of the request descriptions have fewer than 100 word tokens. Thus we set the word vector dimension to 100 and pad with zeros if the word length is less than 100. \section{Hierarchical Classification Method}\label{sec:Hierarchical} A hierarchical classification method handles classification with a large number of possible classes \cite{Silla2011}. The current training dataset contains 157 unique labels and thus 157 classes. We develop a hierarchical classification method to evaluate whether it works for our dataset. There are two kinds of hierarchical classification. One uses meta-classes in a two-level hierarchy where leaf classes are grouped by similarity into intermediate classes (the meta-classes)~\cite{hao2007hierarchically}. The other copes with a pre-defined class hierarchy, a type of supervised learning. 
In this paper, our method follows the former case: we build the hierarchy during training with a clustering method, and then classify a sample from the meta-classes down to the leaf classes. \subsection{Meta-class Generation using K-means and GMM} To create meta-classes for the 157 labels according to similarity, we first apply the K-Means clustering algorithm~\cite{kanungo2002efficient}. The K-Means algorithm first assigns each sample to the cluster whose mean has the least squared Euclidean distance. Secondly, it recomputes the means as the centroids of the observations in the new clusters. The algorithm converges when the assignments no longer change. K-Means applies hard clustering: a data sample is assigned to the closest cluster. K-Means is simple to train but does not guarantee convergence to the global optimum, which degrades the clustering accuracy. The Gaussian Mixture Model (GMM)~\cite{bilmes1998gentle} is a finite mixture probability distribution model whose parameters are estimated iteratively using the Expectation-Maximization (EM) algorithm~\cite{bilmes1998gentle}. GMM/EM determines a sample's probability of belonging to a cluster, which is a soft clustering process. Each cluster can also constrain its covariance in different ways, such as spherical, diagonal, tied, or full covariance, instead of only spherical as in K-Means, so the clustering assignment is more flexible in GMM than in K-Means. The EM algorithm has its limitations: the number of mixtures affects the overall performance of EM, and this number is unknown a priori. Therefore, choosing the optimal number of mixtures is important to ensure an efficient and accurate estimation. Our solution is to apply K-Means to obtain the centroid values (geometric centers), and then initialize GMM with these centroids \cite{figueiredo2002unsupervised}.
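The K-Means-seeded EM procedure can be sketched in NumPy as follows. This is a simplified illustration with spherical covariances and a deterministic farthest-point initialization, not the exact implementation used in our experiments:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain K-Means (farthest-point init); returns the centroids
    that seed the GMM means."""
    centroids = [X[0]]
    for _ in range(k - 1):  # pick each next seed far from existing seeds
        d = np.min([((X - c) ** 2).sum(1) for c in centroids], axis=0)
        centroids.append(X[int(d.argmax())])
    centroids = np.array(centroids)
    for _ in range(iters):  # standard Lloyd iterations
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    return centroids

def gmm_em(X, k, iters=100):
    """Spherical-covariance GMM fitted by EM, means initialized
    from the K-Means centroids."""
    n, dim = X.shape
    means = kmeans(X, k)
    variances = np.full(k, X.var())
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: soft responsibilities of each component for each sample
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        log_p = (np.log(weights) - 0.5 * dim * np.log(2 * np.pi * variances)
                 - d2 / (2 * variances))
        log_p -= log_p.max(1, keepdims=True)  # numerical stability
        resp = np.exp(log_p)
        resp /= resp.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, and spherical variances
        nk = resp.sum(0)
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        variances = (resp * d2).sum(0) / (dim * nk)
    return means, resp.argmax(1)
```

Seeding the EM means with K-Means centroids is what makes the soft clustering start near a reasonable local optimum instead of a random one.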
The input to K-Means is the word vectors with 100 dimensions of the 157 class labels, produced by the feature extraction process described in Section~\ref{sub:featureextraction}. We apply silhouette coefficients to select the optimal value of $K$. A silhouette coefficient in the range $[-1, 1]$ indicates that (1) a sample is far away from the neighboring clusters when the value is near +1; (2) a sample is on or very close to the decision boundary between two neighboring clusters when the value is 0; or (3) a sample might have been assigned to the wrong cluster when the value is negative. We derive the silhouette coefficients for $K$ values in the range $[3,10]$; after the silhouette analysis, we consider $K=5$ the optimal value for K-Means/GMM/EM clustering. Figure~\ref{fig:kmeans} plots the 5 clusters of the 157 labels. \begin{figure}[h] \centering \includegraphics[scale=0.2]{./figure/Kmeans.jpeg} \caption{The plot of K-means and GMM clustering} \label{fig:kmeans} \end{figure} \subsection{Meta-class Generation using Topic-based Clustering} We develop another clustering method that treats the 157 labels as topics and clusters them into groups with similar themes. In this method, we apply the clustering method OPTICS (Ordering Points To Identify the Clustering Structure) \cite{ankerst1999optics}, which finds core samples of high density and expands clusters from them. The output from OPTICS provides $K$ clusters. We then apply LDA (Latent Dirichlet Allocation) to output $K$ topics, each consisting of a fixed number of words and their associated weights. LDA assumes a Dirichlet prior distribution of topics; in practice, this prior assumes that documents (labels in our case) cover only a small set of topics and that topics frequently use only a small set of words. Finally, we assign labels to a cluster with an entropy-based algorithm. 
The algorithm is listed below. The input to this topic-based algorithm is the word vector representation of the 157 labels in $157 \times 10$ dimensions; the output is $K$ clusters of the 157 labels. The clustering result is shown in Figure~\ref{fig:lda}. \begin{algorithm} \DontPrintSemicolon \SetAlgoLined \KwInput{$\hat{k} \gets \{k_{j}\}, j \in [1,157]$ word vector of 157 labels} \KwOutput{Clusters of 157 labels} \begin{algorithmic} \STATE $\varsigma \gets$ number of clusters output from OPTICS on Input\; \STATE $num\_topics \gets \varsigma$ to initialize LDA; \STATE $ \hat{v_{t}} \gets $ output from LDA; \textcolor{blue} {\COMMENT{a topic vector of $\varsigma \times 10$; each entry is a tuple $t$ of [word: weight]} } \ForEach{$\hat{v_{t}}[i], i \in [1,\varsigma]$}{ \STATE $l \gets |\hat{v_{t}}[i]|$; \textcolor{blue} {\COMMENT{number of tuples} } \STATE ${c_{i}}= \frac{1}{\left | l \right |} \sum_{t \in \hat{v_{t}}[i]} t.weight$ \textcolor{blue}{\COMMENT{calculate the centroid}} \ForEach{$k_{j}, j \in [1,157]$}{ \STATE $ p(k_{j}) = \frac{sim(k_{j},{c}_{i})}{\sum_{r=1}^{ \varsigma} sim(k_{j},{c}_{r})}$ \STATE $e^{c_i}_{k_{j}} = - p(k_{j}) \cdot \log(p(k_{j}))$ \textcolor{blue}{\COMMENT{calculate the entropy of $k_{j}$ to a topic $\hat{v_{t}}[i]$}} } } \STATE Assign $k_{j}$ to the topic cluster $m$ whose entropy $e^{c_m}_{k_{j}}$ is minimal; \end{algorithmic} \caption{Topics and Entropy Based Clustering} \end{algorithm} \begin{figure}[h] \centering \includegraphics[scale=0.4]{./figure/LDA.jpg} \caption{Topic based clustering result} \label{fig:lda} \end{figure} \vspace{-0.3in} \subsection{Hierarchical Classification} The clustering algorithms generate the meta-classes of the 157 labels for training a classifier. The classification has the structure of a two-level tree, as depicted in Figure~\ref{fig:hierarchytree}: the leaves are the 157 labels and the non-leaf nodes are the $K$ meta-classes. 
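The assignment step of the entropy-based clustering can be sketched in Python. Cosine similarity stands in for $sim(\cdot,\cdot)$, and nonnegative word-weight vectors are assumed; both are illustrative choices, not details fixed by the algorithm above:

```python
import numpy as np

def assign_labels_to_topics(label_vecs, topic_centroids):
    """Assign each label vector to the topic cluster whose entropy term
    -p*log(p) is minimal, where p is the label's similarity to that topic
    normalized over all topics."""
    def cos(a, b):  # cosine similarity stands in for sim(., .)
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    assignments = []
    for k_j in label_vecs:
        sims = np.array([cos(k_j, c) for c in topic_centroids])
        p = sims / sims.sum()                  # p(k_j) per topic
        entropy = -p * np.log(p + 1e-12)       # entropy term per topic
        assignments.append(int(entropy.argmin()))
    return assignments
```

For two topics, the minimal-entropy criterion picks the topic with the dominant normalized similarity, since $-p\log p$ shrinks as $p$ approaches 1.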
\begin{figure}[h] \centering \includegraphics[scale=0.5]{./figure/hierarchytree.png} \caption{The two-level hierarchy of classes} \label{fig:hierarchytree} \end{figure} A classifier, noted $Model_{meta}$, is first trained on the meta-classes. This classification decides which meta-class a sample belongs to; as a result, the original data samples are grouped into $K$ clusters. Each meta-class $i$ contains $L_{i}$ labels, with $\sum_{i=1}^{K} L_{i} = 157$. The data samples classified into one specific meta-class are used again to train a sub-model that further classifies these samples into the leaf classes. The hierarchical training thus produces one meta-class model, $Model_{meta}$, and $K$ leaf-class models, one per cluster, for $K+1$ models in total. For inference, there are two methods. The first method inputs the test sample to $Model_{meta}$; based on the meta-class result, the test sample is then fed to one of the leaf-class models. The second method directly inputs the test sample to each of the leaf-class models, and the classification with the highest probability is selected as the result. The first method has shorter inference time, as it runs two models, while the second method runs $K$ models. In terms of accuracy, the second method tends to be more accurate, as it avoids propagating errors from the meta-class level. \subsubsection{Hierarchical Naive Bayes Classifier} \label{sub:bayesian} Both the meta-class model and the leaf-class model use the Naive Bayes classifier. The difference is that the meta-class model classifies over five classes, the result of topic clustering of the labels. In the context of the inputs, $X$ contains the word tokens of the request description. We adopt the Bernoulli Naive Bayes classifier to encode the input of word tokens; it assumes each feature has only binary values. 
In this case, each word token is encoded in a one-hot bag-of-words model: $1$ indicates the presence of the term and $0$ indicates its absence. This leads to 19,607 dimensions of input features from the request descriptions of the 40,000-sample training set. The limitation of this approach is that when the training set changes, the input needs to be encoded again; for example, an 80,000-sample training set leads to 32,698 dimensions of one-hot word token encoding. To avoid a zero value of $P(x_{i}|y_{j})$ forcing the posterior probability to zero, a smoothing value of 0.2 is added to each conditional probability $P(x_{i}|y_{j})$. $y$ represents the set of 157 responsible departments \{$y_{1}, ...y_{157}$\}. The classification selects the class $y_{j}$ that produces the maximum probability given the feature set $X$: \begin{equation}\label{eq:bayesinappro} \begin{aligned} P(\widehat y|x_{0},...x_{n}) \propto \max_{j=1}^{157} P(y_{j}) \cdot \prod_{i=1}^{n}P(x_i|y_{j}) \\ \widehat{y} = \arg\max_{j=1}^{157} \prod_{i=1}^{n}P(x_i |y_{j}) \cdot P(y_{j}) \end{aligned} \end{equation} \subsubsection{Hierarchical MLP Neural Network} By applying a neural network model to the task of text classification, we assume that the complex function created by the network of neurons is close to the true relationship between the input features (the word tokens of the request description) and the output (in our case, the 157 classes of responsible departments). In other words, a neural network with a certain number of hidden layers should be able to approximate any function between the input and the output, according to the Universal Approximation Theorem~\cite{Cybenko1989}. We develop a Multiple Layer Perceptron (MLP) neural network classifier in the hierarchical model. 
In this model, both the meta-class model and the leaf-class model have the same network structure, listed in Table~\ref{tab:fullyconnectednetwork}, except for the output layer: the meta-class model outputs 5 classes (as shown in the table, with weight size $128 \times 5$), while the leaf-class model outputs 157 classes. Within this structure, each input neuron is connected to each output neuron of the next layer, referred to as a Dense layer. Given that the input to the first Dense layer has size $10,000 = 100 \times 100$ and its output is $512$, the weights of this Dense layer have size $10,000 \times 512$. We stack two Dense layers followed by one output layer. \vspace{-0.3in} \begin{table}[h] \centering \caption{The Structure of Fully Connected Neural Network} \begin{tabular}{l|c|c|c} \hline Layers & Input Size& Output Size & Kernel (Weight) Size \\ \hline \multirow{2}{*}{Dense} &\multirow{2}{*}{10,000}&\multirow{2}{*}{512} &\multirow{2}{*}{$ 10,000\times512 $ }\\ &&&\\ \hline \multirow{2}{*}{Dense} &\multirow{2}{*}{512}& \multirow{2}{*}{128}&\multirow{2}{*}{$ 512\times128 $ }\\ &&&\\ \hline \multirow{2}{*}{Output} &\multirow{2}{*}{128}& \multirow{2}{*}{5}&\multirow{2}{*}{$ 128\times5 $ }\\ &&&\\ \hline \end{tabular} \label{tab:fullyconnectednetwork} \end{table} \vspace{-0.4in} \section{Hybrid Machine Learning}\label{sec:hybrid} \vspace{-0.1in} The hierarchical classification method is useful for dealing with a large number of classes, since it trains a classifier for a much smaller number of classes (at the meta-class level) while keeping comparable levels of confidence. The main issue with hierarchical classification is error propagation: an error in the meta-class level classification propagates directly to the leaf-class classification. 
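The contrast between the two hierarchical inference methods, and why the second avoids meta-level error propagation, can be sketched with models represented as callables returning class-probability vectors (a hypothetical interface for illustration only):

```python
import numpy as np

def infer_two_stage(x, meta_model, leaf_models):
    """Method 1: route through the meta-class model, then one leaf model.
    Fast (two model calls), but a meta-level error propagates downward."""
    meta = int(np.argmax(meta_model(x)))
    leaf = int(np.argmax(leaf_models[meta](x)))
    return meta, leaf

def infer_all_leaves(x, leaf_models):
    """Method 2: run every leaf model and keep the most confident answer.
    K model calls, with no dependence on the meta-class decision."""
    best_meta, best_leaf, best_prob = 0, 0, -1.0
    for m, model in enumerate(leaf_models):
        probs = np.asarray(model(x))
        if probs.max() > best_prob:
            best_meta = m
            best_leaf = int(probs.argmax())
            best_prob = float(probs.max())
    return best_meta, best_leaf
```

If the meta-class model misroutes a sample, method 1 cannot recover, while method 2 still lets the correct leaf model win on confidence, at the cost of more inference time.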
\begin{figure}[h] \centering \includegraphics[scale=0.25]{./figure/hybridmodel.png} \caption{Overview of hybrid machine learning models with word embeddings} \label{fig:hybrid} \end{figure} To improve the classification performance, we further develop a residual neural network classifier inspired by ResNet~\cite{ResNet}. By ensembling the hierarchical models and the residual neural network model, we obtain the hybrid learning method whose structure is depicted in Figure~\ref{fig:hybrid}. We add two basic models as benchmarks, a Naive Bayes classifier and an MLP classifier, giving five classifier outputs in total on the same datasets. A simple ensembling method selects the best performing classification results according to the \textit{log loss} metric. The Naive Bayes-based classifiers take TF-IDF word embeddings as 19,607-dimension word vectors, while the other three neural network models use Word2Vec embeddings as $100 \times 100$-dimension vectors. \subsection{Residual Convolutional Neural Network} In this paper, we apply the \textit{full pre-activation} structure proposed by He et al.~\cite{he2016identity} to build our convolutional layers. To mitigate the vanishing gradient problem as a CNN grows deeper, residual skip connections, or identity mappings, are added to the convolutional layers. The input $X$ of a layer is added to its convolutional output to form the input of the next layer, $y=F(X, \{W_{i}\}) + W_{s}X$. This structure allows the gradient to be propagated without loss of representations. The skip connection and the convolutional layers together form a Residual Block. In our model, a Residual Block contains two convolutional layers, as shown in Figure~\ref{fig:residualblock}.
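A minimal sketch of such a full pre-activation block follows. The naive \texttt{conv3x3}, the inference-style batch normalization without learned scale and shift, the $1 \times 1$ skip implemented as per-pixel channel mixing, and the illustrative weight shapes are all simplifying assumptions, not our training implementation.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Per-channel normalization (inference-style, no learned scale/shift)
    mu = x.mean(axis=(0, 1), keepdims=True)
    var = x.var(axis=(0, 1), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def conv3x3(x, w):
    # Naive 'same'-padded 3x3 convolution; x: (H, W, C_in), w: (3, 3, C_in, C_out)
    H, W, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, W, w.shape[-1]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + 3, j:j + 3, :]
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def residual_block(x, w1, w2, w_skip):
    """Full pre-activation: BN-ReLU-conv, BN-ReLU-conv, plus a 1x1 skip."""
    h = conv3x3(np.maximum(batch_norm(x), 0.0), w1)
    h = conv3x3(np.maximum(batch_norm(h), 0.0), w2)
    skip = x @ w_skip        # 1x1 convolution == per-pixel channel mixing
    return h + skip          # y = F(x, {W_i}) + W_s x

rng = np.random.default_rng(1)
x = rng.random((8, 8, 32))   # toy 8x8 feature map with 32 channels
y = residual_block(x,
                   rng.normal(0.0, 0.1, (3, 3, 32, 32)),
                   rng.normal(0.0, 0.1, (3, 3, 32, 32)),
                   np.eye(32))
```

With zero convolution weights the block reduces to the skip path alone, which is exactly the property that keeps gradients flowing in deep stacks.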
\begin{figure}[h] \centering \includegraphics[scale=0.4, angle=90]{./figure/residualblock.png} \caption{Structure of the Residual Block} \label{fig:residualblock} \end{figure} A \textit{Batch Normalization-Activation} layer is placed between the two convolutional layers. As discussed by He et al.~\cite{he2016identity}, a $1 \times 1$ convolution can be useful when there are fewer layers; we therefore choose the $1 \times 1$ convolution layer as the skip connection. Based on the Residual Block structure, we build the 20-layer residual convolutional neural network shown in Table~\ref{tab:architecture}. \begin{table}[h] \centering \caption{Structure of Residual CNN} \begin{tabular}{l|c|c} \hline Layers& Kernel & Output Size \\ \hline \multirow{2}{*}{Convolution} &\multirow{2}{*}{$ \left[ \begin{matrix} \ 3\times 3, 32 \ \end{matrix} \right] \times 1 $ }& \multirow{2}{*}{100$\times$100 }\\ & & \\ \hline \multirow{4}{*}{Residual Block} &\multirow{4}{*}{$ \left[ \begin{matrix} \ 3\times 3, 32 \ \\ \ 3\times 3, 32 \ \end{matrix} \right] \times 1 $ }& \multirow{4}{*}{100$\times$100 }\\ & & \\ & & \\ & & \\ \hline \multirow{4}{*}{Residual Block} &\multirow{4}{*}{$ \left[ \begin{matrix} \ 3\times 3, 64 \ \\ \ 3\times 3, 64 \ \end{matrix} \right] \times 2 $ }& \multirow{4}{*}{50$\times$50 }\\ & & \\ & & \\ & & \\ \hline \multirow{4}{*}{Residual Block} &\multirow{4}{*}{$ \left[ \begin{matrix} \ 3\times 3, 128 \ \\ \ 3\times 3, 128 \ \end{matrix} \right] \times 2 $ }& \multirow{4}{*}{25$\times$25 }\\ & & \\ & & \\ & & \\ \hline \multirow{4}{*}{Residual Block} &\multirow{4}{*}{$ \left[ \begin{matrix} \ 3\times 3, 256 \ \\ \ 3\times 3, 256 \ \end{matrix} \right] \times 2 $ }& \multirow{4}{*}{13$\times$13 }\\ & & \\ & & \\ & & \\ \hline \multirow{4}{*}{Residual Block}&\multirow{4}{*}{$ \left[ \begin{matrix} \ 3\times 3, 512 \ \\ \ 3\times 3, 512 \ \end{matrix} \right] \times 2 $ } & \multirow{4}{*}{7$\times$7 }\\ & & \\ & & \\ & & \\ \hline
\multirow{2}{*}{Dense}&\multirow{2}{*}{ } & \multirow{2}{*}{157 }\\ & & \\ \hline \end{tabular} \label{tab:architecture} \end{table} \vspace{-0.1in} \section{The Evaluation}\label{sec:evaluation} The evaluation focuses on assessing the classification performance. We compare the metrics obtained by training the five models with different sizes of training data and by testing the models with data collected at different time spans. \subsection{The Data Setup} The dataset contains 659,421 samples. We split the data with a ratio of 80\%:20\% into a training set and a test set. We further partition the training set into shards of 40,000 samples each and, likewise, the test set into shards of 4,000 samples each. Hence, the dataset is used in units of shards. We set up experiments running the 5 models on two data settings: (1) one shard of the training set with one shard of the test set; and (2) two shards of the training set with one shard of the test set. In addition to the 659,421 samples, we also test the best performing model using data collected in a different time period, containing 35,663 valid samples. \subsection{The Model Assessment} We assess the models with Precision and Recall. For a multi-class classification model, Precision and Recall are calculated for each class and averaged into a final score. There are two averaging methods, \textit{micro} and \textit{macro}: the macro method gives every class the same weight, while the micro method gives every sample the same weight. The calculation is shown below, where $P_{i}$ is the Precision of the $i$th class, $R_{i}$ is the Recall of the $i$th class, and $l$ is the total number of classes.
\begin{equation} \begin{aligned} P_{macro}&= \frac{1}{l}\sum_{i=1}^{l}P_i \quad P_i= \frac{TP_i}{TP_i+FP_i}\\ R_{macro}&= \frac{1}{l}\sum_{i=1}^{l}R_i \quad R_i= \frac{TP_i}{TP_i+FN_i}\\ P_{micro}&= \frac{\sum_{i=1}^{l}TP_i}{\sum_{i=1}^{l}TP_i+\sum_{i=1}^{l}FP_i}\\ R_{micro}&= \frac{\sum_{i=1}^{l}TP_i}{\sum_{i=1}^{l}TP_i+\sum_{i=1}^{l}FN_i} \end{aligned} \end{equation} where \textit{True Positive} (TP): the real label is positive and the predicted label is also positive; \textit{False Positive} (FP): the real label is negative and the predicted label is positive; \textit{True Negative} (TN): the real label is negative and the predicted label is also negative; \textit{False Negative} (FN): the real label is positive and the predicted label is negative.\\ The above metrics focus on whether the classification is correct. Log loss instead measures the distance between the predicted label and the real label. It takes the prediction probability for each class of a model as input and outputs a log loss value as calculated in Eq.~\ref{eq:logloss}. The lower the log loss value (i.e., the closer to zero), the better the model performs. $l$ is the total number of classes, $y_i$ is the real label, and $p_i$ is the predicted probability of class $i$. \begin{equation}\label{eq:logloss} Log Loss = -\sum_{i=1}^{l}\left[y_i\log(p_i) + (1-y_i)\log(1-p_i)\right] \end{equation} \subsection{The Experiment Results} \vspace{-0.1in} The first set of experiments evaluates the classification performance of the five models. The training data sets consist of one shard (40,000 samples) and two shards (80,000 samples), respectively. The test data set is one shard of 4,000 samples. Both the training set and the test set are drawn from data samples collected within the same span of time. Table~\ref{tab:157clses} lists the metrics measured for the 5 classification models.
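These definitions translate directly into code. The following NumPy sketch computes the four averaged scores and the log loss on a toy example; scikit-learn's \texttt{precision\_score}, \texttt{recall\_score}, and \texttt{log\_loss} provide library implementations of the same metrics.

```python
import numpy as np

def precision_recall(y_true, y_pred, n_classes):
    tp = np.zeros(n_classes); fp = np.zeros(n_classes); fn = np.zeros(n_classes)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    # macro: average per-class scores; micro: pool the counts first
    with np.errstate(divide="ignore", invalid="ignore"):
        p_i = np.nan_to_num(tp / (tp + fp))
        r_i = np.nan_to_num(tp / (tp + fn))
    return {"P_macro": p_i.mean(), "R_macro": r_i.mean(),
            "P_micro": tp.sum() / (tp.sum() + fp.sum()),
            "R_micro": tp.sum() / (tp.sum() + fn.sum())}

def log_loss(y_true_onehot, p, eps=1e-15):
    p = np.clip(p, eps, 1 - eps)  # avoid log(0)
    return -np.sum(y_true_onehot * np.log(p)
                   + (1 - y_true_onehot) * np.log(1 - p))

y_true = [0, 1, 2, 2]
y_pred = [0, 2, 2, 2]
scores = precision_recall(y_true, y_pred, 3)
```

Note that when every sample receives exactly one predicted class, the pooled false positives equal the pooled false negatives, so micro Precision always equals micro Recall; this is why the Micro columns coincide in our result tables.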
\vspace{-0.2in} \begin{table}[h] \centering \captionof{table}{Classification Performance on Five Models} \begin{tabular}{|l|c|c|c|c|c|} \hline \multirow{3}{*}{Models} & \multirow{3}{*}{Training Subset} & \multicolumn{4}{|c|}{Metrics} \\ \cline{3-6} && \multicolumn{2}{|c|}{Precision} & \multicolumn{2}{|c|}{Recall} \\ \cline{3-6} &&Micro&Macro&Micro&Macro\\ \hline \multirow{2}{*}{Hierarchical MLP} & 40,000 &0.633 &0.247 & 0.633 & 0.214 \\ &80,000 &0.686 & 0.332 & 0.686 & 0.288 \\ \hline \multirow{2}{*}{MLP} & 40,000 &0.662 &0.276 & 0.662 & 0.236\\ &80,000 &0.689 & 0.281 & 0.689 & 0.233\\ \hline \hline \multirow{2}{*}{Hierarchical Naive Bayes} & 40,000 &0.746 &0.439 & 0.746 & 0.367\\ &80,000 &0.719 &0.375 & 0.719 & 0.288\\ \hline \multirow{2}{*}{Naive Bayes} &40,000 & 0.734 & 0.358 & 0.734 & 0.254 \\ &80,000& 0.700 & 0.296 & 0.700 & 0.194\\ \hline \hline \multirow{2}{*}{Residual CNN} &40,000 & 0.754&0.420& 0.754& 0.389 \\ &80,000 & \textbf{0.787} & \textbf{0.510} & \textbf{0.787} & \textbf{0.444}\\ \hline \end{tabular} \label{tab:157clses} \end{table} \textbf{Training sample size.} Doubling the training sample size improves the classification performance of three models: MLP, Hierarchical MLP, and Residual CNN. The metrics of both Naive Bayes and Hierarchical Naive Bayes decrease. As presented in Section~\ref{sub:bayesian}, we obtain 19,607 dimensions of input features from the request descriptions of the 40,000-sample training set, and 32,698 dimensions of word token one-hot encoding from the 80,000-sample training set. This observation indicates that the growing feature size hurts the performance of our Naive Bayes implementation: the classifier finds one shard of the training set more separable than the doubled training set.
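The re-encoding limitation behind these numbers is easy to reproduce with a toy sketch: the one-hot feature dimension is simply the vocabulary size, so it changes whenever the training set grows (the token data below is illustrative, not the real corpus).

```python
# Toy illustration: the one-hot feature dimension equals the vocabulary size,
# so it grows with the training set (19,607 -> 32,698 dimensions in our case)
# and forces the whole corpus to be re-encoded.
def one_hot_dim(corpus):
    vocab = set()
    for doc in corpus:
        vocab.update(doc.split())
    return len(vocab)

shard_1 = ["water pipe leak", "street light broken"]
shard_2 = ["noise complaint downtown", "garbage pickup missed"]

dim_one_shard = one_hot_dim(shard_1)            # vocabulary of one shard
dim_two_shards = one_hot_dim(shard_1 + shard_2) # more data, more dimensions
```

The fixed-size Word2Vec embedding avoids exactly this dependency on training-set size.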
In comparison, the word embedding method allows the MLP and Residual CNN classifiers to keep the same feature size of $100 \times 100$ dimensions of word token vectors regardless of the size of the training data. \textbf{Hierarchical vs Non-hierarchical.} Hierarchical classification marginally improves the performance of the Naive Bayes classifier. The non-hierarchical MLP and the Residual CNN, in contrast, perform better than the hierarchical classifiers. In all experiments, the Residual CNN outperforms the other models. Overall, hierarchical classification yields lower performance than non-hierarchical classification. The benefit of introducing meta-classes through the hierarchical classification method is that it exposes the source of classification errors. Figure~\ref{fig:confusion} shows the classification results with 5 meta-classes. It indicates that the classification errors mainly come from classes 2 and 4 being misclassified as class 1, and classes 1, 3, and 4 being misclassified as class 2. We also observe from the experiments that all five classifiers produce over 80\% precision on the 5 meta-class classification. This precision is higher than the classification performance over the 157 classes. Due to space limitations, we omit the detailed values. \begin{figure}[h] \centering \includegraphics[scale=0.4]{./figure/5class-confusionmetrics.png} \caption{Distribution of predicted classes vs real classes} \label{fig:confusion} \end{figure} \textbf{Blind Test.} The second set of experiments is a blind test. We select the best performing model trained from each of the 5 classifiers and further test them using the whole set of 35,663 data samples collected from a different span of time than the first set of experiments. The result is shown in Table~\ref{tab:blindtest}. Two classifiers, Naive Bayes and Residual CNN, produce better classification performance than the other three. Again, the Residual CNN performs best on the blind test data.
\vspace{-0.4in} \begin{table} \centering \captionof{table}{Classification Performance of Blind Test} \begin{tabular}{|l|c|c|c|c|} \hline \multirow{3}{*}{Models} & \multicolumn{4}{|c|}{Metrics} \\ \cline{2-5} & \multicolumn{2}{|c|}{Precision} & \multicolumn{2}{|c|}{Recall} \\ \cline{2-5} &Micro&Macro&Micro&Macro\\ \hline Hierarchical MLP &0.650 & 0.244 & 0.650 & 0.192 \\ \hline {MLP} &0.689 & 0.259 & 0.689 & 0.214\\ \hline \hline {Hierarchical Naive Bayes} & 0.678 & 0.251 & 0.678 & 0.201\\ \hline {Naive Bayes} & 0.726 & 0.295 & 0.726 & 0.256\\ \hline \hline {Residual CNN} &\textbf{0.764} & \textbf{0.417} & \textbf{0.764} & \textbf{0.352} \\ \hline \end{tabular} \label{tab:blindtest} \end{table} \textbf{Log loss.} We further evaluate the Residual CNN on the blind test data using log loss to measure the distance between the predicted and real labels. The result is listed in Table~\ref{tab:logloss}. When applied to different test data, the Residual CNN shows only a marginal change in log loss. \begin{table} \centering \captionof{table}{Log Loss of Hybrid Classifiers} \begin{tabular}{|l|c|c|} \hline Models& Test Data Size & Log Loss\\ \hline \multirow{2}{*}{Best Performing Residual CNN} & 4,000 & 1.152 \\ & 35,663 & 1.192 \\ \hline \end{tabular} \label{tab:logloss} \end{table} \textbf{Inference time.} We further measure the inference time on the 4,000-sample test set. Note that the Naive Bayes and Hierarchical Naive Bayes classifiers run on a CPU node, while the other three neural network models run on a GPU node; therefore, the comparison between the Naive Bayes models and the neural network models should not be based on absolute values. Instead, we observe that hierarchical classification introduces approximately a 10-fold inference delay, and that the inference time of the Residual CNN is over 20 times that of the MLP.
\begin{figure}[h] \centering \includegraphics[scale=0.5]{./figure/testingtime.png} \caption{Inference time on the test data of 4,000 samples} \label{fig:testingtime} \end{figure} \textbf{Summary.} The experiments cover different sizes of training data and test data. The results show that the residual convolutional neural network model outperforms the other classifiers. We also observe that the Naive Bayes model with one-hot encoding of word tokens performs reasonably well, with the limitation of handling growing feature sizes. A simple two-layer fully connected neural network model has the advantage of fast inference. \subsection{Threats to Validity} This paper presents the first stage towards automated smart dispatching of residential service requests. Its focus is exploring a combination of word embedding techniques and machine learning models to improve classification performance. Our hybrid machine learning model follows a simple ensembling approach that selects the best performing classifier based on the log loss metric. For our model to be deployed as an online machine learning service handling requests, the model selection mechanism needs to be part of a feedback loop based on actual inference results and quality. A weighted score of multiple metrics that best reflects the online service requirements should be developed to replace the current selection based on a single metric. Our evaluation compares against two benchmarking models, the Naive Bayes and MLP classifiers. In the literature, NLP-based machine learning methods for news classification, customer-review sentiment analysis, and movie-review classification are related to our method. However, those datasets are domain-specific and do not directly address the problems in our dataset, which is not directly labeled for training.
Combining our hybrid word embedding and learning models with existing mining and learning methods opens a new stream of investigation that requires a dedicated project beyond the current funding budget. \section{Conclusion}\label{sec:conclusion} In this paper, we present a machine-learning-based method for a natural language classification task in a real-world application. We carry out a rigorous analysis of the dataset and design a feature engineering process that selects and extracts features with statistical evidence. We apply two word embedding techniques and develop five classification models. This hybrid machine learning method produces several benefits, namely (1) generating suitable labels for supervised learning; (2) clustering data samples into meta-classes for training and initializing models to improve classification performance over unbalanced data samples; (3) producing the best performing model through comprehensive experiments and evaluation; and (4) understanding the source of errors with the hierarchical classification method. It remains our future work to explore newly published word embedding models to study the effects of word embedding on classification performance. \bibliographystyle{spmpsci_unsrt} \section{Introduction} \label{intro} The context of smart cities relates to the capability of analyzing and responding to specific requests from residents. In addition to social media networks such as Twitter and Chinese Weibo, an important source of city events down to the county level is the service requests reported directly by metropolitan residents. These requests arrive through phone calls, emails, and chatbots at metropolitan residential service centers, which employ human agents to dispatch them manually to the corresponding service sectors. However, the processing capacity of call centers is bounded by the limitations of human operation.
Therefore, an artificial-intelligence-enabled system helps to automatically convert audio-based requests into text and then dispatch each request to the corresponding service sector. Such automation improves service responsiveness. The core of such an automated request analysis and dispatching system is a machine learning model that classifies a request written in natural language to the organization responsible for handling the case. To scope the research problem, this paper assumes that requests are all text-based; any audio request is converted to text. Classifying these text-based request data raises two issues. First, the labels for classification require processing of the raw datasets to produce suitable labels. One might assume that the \textit{responsible department} field in the original dataset provides the labels. However, the responsible departments are also given as natural language descriptions that contain abbreviations, location information, and imprecise renamings of departments. In addition, future requests may even involve new department names that are not among the historical labels. Second, the requests are by nature not evenly distributed over the corresponding responsible departments. Some departments receive few requests, leaving the overall dataset unbalanced. In this paper, we propose a hybrid method to address these two issues. Our method consists of (1) an NLP processing workflow for feature extraction; (2) a clustering algorithm that generates meta-classes for hierarchical training; and (3) multiple classifiers with a Naive Bayes model, a fully connected neural network model, and a residual neural network model to produce an optimal inference for a given request. By training the classifiers with the generated meta-classes, we gain insight into the classification errors.
A use case of our hybrid method has been developed on over 80,000 sample requests in Chinese that involve 157 responsible departments. Our method achieves a test precision of 76.42\% and a log loss of 1.192 on 35,663 test samples collected in a different time period from the training datasets. Our code and samples of the dataset are open-sourced on Github\footnote{https://github.com/OneClickDeepLearning/classificationOfResidentialRequests}. The \textbf{contribution} of this paper is three-fold: \begin{itemize} \item We build an NLP-based feature engineering workflow with a rigorous measure of feature prediction power using information values; \item We demonstrate a hybrid machine learning method combining unsupervised and supervised machine learning to classify residential requests over an unbalanced data sample distribution; \item We develop three classifiers and ensemble the best classification result, comparing the classification performance with two benchmarking models. \end{itemize} The structure of this paper is organized as follows: Section 2 presents the related work. In Section~\ref{sub:featureextraction}, we introduce our dataset and feature engineering techniques. Section~\ref{sec:Hierarchical} describes a hierarchical classification method that generates meta-classes for the classification of a large number of classes. The hybrid machine learning models are presented in Section~\ref{sec:hybrid}. Finally, we present our assessment metrics and the experimental results of the classification performance in Section~\ref{sec:evaluation}. We conclude the paper in Section~\ref{sec:conclusion}. \section{Related Work} \label{sec:1} \textbf{Learning from few labeled data.} Kamal et al.
proposed an Expectation-Maximization based method~\cite{nigam2000text} that trains a classifier with few labeled data, uses the classifier to label high-confidence unlabelled data, then uses the newly labeled data to train a new classifier, repeating this process until the classifier converges. Blum et al.~\cite{blum1998combining} proposed a co-training method that allows learning from two views of the features. Rajat et al. proposed a Self-taught method that uses an Auto-Encoder to learn higher-level representations from unlabelled data~\cite{raina2007self}. \textbf{Imbalanced datasets.} Nitesh et al. presented SMOTE (Synthetic Minority Over-sampling Technique)~\cite{chawla2002smote} to over-sample the minority class by creating synthetic minority samples that fall between the minority data and their nearest neighbors. If the minority data are far from their neighbors, the synthetic data will contain a large range of noise. Hui et al. presented a Borderline-SMOTE method~\cite{han2005borderline} that ensures sampling happens only when the majority of the selected neighbors are minority data, lowering the randomness of the noise. \textbf{Ensemble of Classifiers.} Breiman proposed a bootstrap aggregating method~\cite{breiman1996bagging} that uses bootstrapping to extract several sub training sets from the original training set and trains a classifier for each; each classifier then gives a weighted vote for the classification. Yoav and Robert proposed a boosting-based method called AdaBoost that uses weighted data to train weak classifiers. The data misclassified by a weak classifier gain more weight for training the next weak classifier, and the weak classifiers are weighted to form a confident classifier. Based on the AdaBoost method~\cite{freund1997decision}, Wei et al. proposed a cost-sensitive Boosting~\cite{fan1999adacost} that increases the weight of misclassified training data that have a higher cost.
\textbf{Hierarchical Classification.} Pei-Yi et al. proposed a hierarchical SVM text classification method that splits a problem into sub-problems in the classification tree, which can then be solved accurately and efficiently~\cite{hao2007hierarchically}. A hierarchical softmax architecture~\cite{Bengio2015} was proposed by Morin and Bengio for dealing with a huge number of output classes; it significantly speeds up training compared to feed-forward networks. \textbf{Convolutional Neural Networks on NLP Tasks.} Convolutional Neural Network (CNN) models have been demonstrated to perform well in NLP tasks such as sentence classification~\cite{Cybenko1989}~\cite{Zhang:2015:CCN:2969239.2969312}. A CNN model consists of layers of convolutions with a non-linear activation function such as \textit{ReLU} or \textit{tanh}. In a CNN model, convolutions over the input layer are used to compute the output. As illustrated by Zhang~\cite{DBLP:journals/corr/ZhangW15b}, each region of the input is connected to a neuron in the output, and each layer applies filters to detect higher-level features. These features further go through a pooling layer to form a univariate feature vector for the penultimate layer. The final softmax layer then receives this feature vector as input and uses it to classify the sentence. Kim applied a single convolutional layer on top of Word2Vec embeddings~\cite{kim2014convolutional} of the words in a sentence to perform classification tasks. His work shows that a convolutional neural network, given a good representation of the words, can provide promising results on NLP problems. Conneau et al. proposed a ResNet-like deep convolutional neural network~\cite{vdconvfortex} that can learn different hierarchical representations of the input text. They use small convolutions and let the network learn the best combination of the extracted features, allowing the network to be deeper and finer-grained.
Prabowo and Thelwall proposed a series of hybrid classifiers~\cite{sentimentanalysis} that combine different rule-based classifiers and SVMs in a pipeline to solve sentiment analysis problems, achieving good performance. \section{Feature Engineering}\label{sub:featureextraction} Feature engineering processes and transforms the textual dataset into word vectors that serve as inputs to the machine learning models. The original dataset contains eight features: \textit{id}, \textit{time stamp}, four \textit{categories} of responsible departments, \textit{request description}, and \textit{responsible department description}. The feature engineering workflow is depicted in Figure~\ref{fig:featureeng-workflow}. Feature engineering first removes invalid data samples in which the values of the responsible department description are missing or originally marked as non-available. The training dataset contains 849,861 records, including 145,542 records originally marked as invalid and 58,394 records without a responsible department description. The resulting valid dataset contains 645,924 records. \vspace{-0.3in} \begin{figure*}[h] \includegraphics[width=1.0\textwidth]{./figure/featureeng-workflow.png} \caption{The major steps of feature engineering on two features of request description and responsible department description} \label{fig:featureeng-workflow} \end{figure*} \vspace{-0.4in} \subsection{Data Preprocessing} Sentences are segmented into tokens before feature extraction. The two major methodologies for tokenization and segmentation are dictionary-based and statistics-based. Dictionary-based methods recognize words based on a maintained vocabulary~\cite{mikolov2013efficient}, while statistics-based approaches use a corpus to build a word-segmentation model. In this paper, we perform tokenization and segmentation with the latter method, using LTP, a Chinese language technology platform~\cite{LTP}.
LTP consists of six Chinese processing modules: 1) Word Segmentation (WordSeg); 2) Part-of-Speech Tagging (POSTag); 3) Named Entity Recognition (NER); 4) Word Sense Disambiguation (WSD); 5) Syntactic Parsing (Parser); and 6) Semantic Role Labeling (SRL). The segmented tokens are then filtered by the lexical analysis modules of LTP to eliminate tokens that include digits, words in other languages, punctuation, and stop words. A combined list of publicly available stop words is applied in the LTP tool. In addition, verbs, adjectives, and adverbs are excluded. Organization names and location-relevant nouns consist of more than one token; the NER module of LTP recognizes and merges these nouns into single words. \subsection{Data Distribution} \label{sub:datadistribution} The feature that contains the labeling information is the \textit{responsible department description}. A simplified illustrating sample is depicted in Figure~\ref{fig:datasample}. The description needs further processing to generate labels for training and inference, because it usually contains location phrases at various levels of metropolitan granularity: national, provincial, county, and local community. This means that records with the same responsible department but different location nouns become separate classes. As a result, the dataset distribution over the labels spreads across a large number of classes, and the density of each class is diluted. Such a circumstance degrades the training quality and inference accuracy. \vspace{-0.3in} \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{./figure/datasample.png} \caption{One data sample with features of request description and responsible department description} \label{fig:datasample} \end{figure} To solve this problem, we separate the location nouns from the department names and titles. Only the organization names and titles remain to generate the training labels.
For cases in which the department names and titles are abbreviated, we set up a dictionary and manually create entries mapping the standard full name to all forms of variation, including abbreviations. This leads to 157 unique department names and titles, which are taken as the labels for classification. We plot the data sample distribution over the 157 classes in Figure~\ref{fig:dataDistribution}. Among them, 101 classes have fewer than 1,000 samples each (roughly 0.15\% of the dataset per class). \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{./figure/imbalancedata.jpeg} \caption{Data distribution statistic for 157 classes, the x-axis is the index of the classes and the y-axis is the total number of data of a class} \label{fig:dataDistribution} \end{figure} We further explore the dataset characteristics based on the observed data distribution. We develop an inverted index of the words in each request description record. The Bag-of-Words (BoW) algorithm~\cite{harris1954distributional} is used to build word pairs with their word counts per record. Each word and its per-sample counts form a stored vector, which allows us to trace word occurrences across samples. We then compute the statistics of sample counts grouped by word occurrence frequency. Figure~\ref{fig:wordfrequencydistribution} plots the word frequency distribution. This plot is interpreted as follows: 9,382 data samples contain words that occur only once in the whole request description text; 6,275 data samples contain words that occur 2 to 5 times. High-frequency words (e.g., occurring over 50 times) appear only in a limited number of samples (e.g., 807 samples). Since stop words are already filtered, the plot in Figure~\ref{fig:wordfrequencydistribution} indicates that low-frequency words should be further measured by their relevance to other words in the feature space.
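The inverted index and the occurrence statistics described above can be sketched as follows; the toy English tokens stand in for the tokenized Chinese requests, and the structure (word $\rightarrow$ set of sample ids) is our simplification of the stored BoW vectors.

```python
from collections import defaultdict

# Toy corpus of tokenized requests (stop words already removed).
requests = [["water", "leak", "pipe"],
            ["noise", "night", "bar"],
            ["water", "outage", "pipe"]]

# Inverted index: word -> set of sample ids containing that word.
inverted = defaultdict(set)
for doc_id, tokens in enumerate(requests):
    for tok in tokens:
        inverted[tok].add(doc_id)

# Word occurrence frequency = number of samples a word appears in.
word_freq = {w: len(ids) for w, ids in inverted.items()}

# Samples that contain at least one word occurring only once in the corpus,
# i.e. the kind of count behind "9,382 samples contain singleton words".
samples_with_singletons = {d for w, ids in inverted.items()
                           if len(ids) == 1 for d in ids}
```

Grouping `word_freq` into bands (1, 2-5, over 50, ...) and counting the samples per band yields exactly the kind of histogram plotted in Figure~\ref{fig:wordfrequencydistribution}.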
Likewise, we produce the plot of word frequency and the distribution over the 157 labels, shown in Figure~\ref{fig:labelfrequencydistribution}. Our solution is to train a Word2Vec model to generate word embeddings that represent the relations of word tokens in a high-dimensional space. The details are presented in Section~\ref{subsec:featureextraction}. \begin{figure*} \centering \subfigure[Data sample distribution over word frequency in request description]{ \includegraphics[width=0.35\textwidth]{./figure/wordcountdistribution.png}% \label{fig:wordfrequencydistribution} }\hspace{0.2cm} \subfigure[Label distribution over word frequency]{% \includegraphics[width=0.4\textwidth]{./figure/labeldistribution.png} \label{fig:labelfrequencydistribution} } \caption{Data and label distribution} \end{figure*} \subsection{Information Values of Features} \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{./figure/categorysample.png} \caption{Data sample of categories} \label{fig:categorysample} \end{figure} Information values and Weight-of-Evidence (WoE) are feature selection techniques; information values measure the prediction power of a feature. The decision to select the four categories of responsible departments as features is evaluated through their information values. The values of each category are tags edited by customer service operators. A data sample is depicted in Figure~\ref{fig:categorysample}; the bottom textbox shows the corresponding responsible department. The four categories represent four levels of tags that together capture the context of the request. The tokenization and segmentation workflow is applied to each category, and we combine the tags of the four categories into a combo for the information value analysis. Statistically, there are 418 unique values of the category tag combo. \[ [tag_{1}, tag_{2}, ...
tag_{n}], \quad n = 418 \] The information values (IVs) are calculated to measure the predictive power of the category tag combos with respect to the class label, i.e., the responsible department. The 157 responsible departments and the 418 category tag combos form a $157 \times 418$ matrix. Each entry is denoted $TagCombo_{i,j}$, which represents the count of data samples with tag combo $j$ that belong to responsible department $i$. The quantity $\mathit{NonTagCombo}_{i,j}$ represents the total count of data samples of department $i$ with any tag combo $l \ne j$, that is \[ \mathit{NonTagCombo}_{i,j} = \sum_{l=1,\, l \ne j}^{418} TagCombo_{i,l} \] Now, we can calculate the WoE value as \[WOE_{i,j} = \ln\left(\frac{TagCombo_{i,j}}{\mathit{NonTagCombo}_{i,j}}\right)\] Hence, we calculate the information value for each category tag combo $j$ and responsible department $i$ as shown in Figure~\ref{fig:ivcalculation}. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{./figure/ivcalculation.png} \caption{Sample Calculation of IV} \label{fig:ivcalculation} \end{figure} Finally, the IV values of each category tag combo are summed over all 157 responsible departments to measure the predictive power of that tag combo: \[ IV_{i,j} = (TagCombo_{i,j}\% - \mathit{NonTagCombo}_{i,j}\%) \times WOE_{i,j}\] \[ IV_{TagCombo_{j}} = \sum_{i=1}^{157} IV_{i,j} \] According to the rule of thumb described above, we find that only 13 out of the 418 category combos are weak predictors, i.e., their IV values fall in the range $(0.02, 0.10]$. The remaining category combos have IV values below 0.02, indicating that they are not useful for prediction. In other words, the category combo is not an important feature for classifying responsible departments, so it is not selected as an input feature for the classifiers. \subsection{Feature Extraction} \label{subsec:featureextraction} Feature extraction converts word tokens into numerical word vectors.
In this paper, we apply two methods, namely Term Frequency--Inverse Document Frequency (TF-IDF) \cite{sparck1972statistical} and Word2Vec \cite{mikolov2013efficient}. \subsubsection{Generating word vectors using TF-IDF} TF-IDF measures the relevance of words rather than their raw frequency. That is, the word counts in the inverted index discussed in Section~\ref{sub:datadistribution} are replaced with TF-IDF relevance weights computed across the whole dataset. With TF-IDF, the more samples a word appears in, the less valuable that word is as a signal to differentiate a given request. The relevance weight of a word is calculated in Eq.~\ref{eq:weight}. \begin{equation}\label{eq:weight} w_{i,j} = tf_{i,j} \times \log (\frac{N}{df_{i}}) \end{equation} where $N$ is the number of samples; $tf_{i,j}$ is the number of occurrences of word token $i$ in request sample $j$; and $df_{i}$ is the number of request samples that contain token $i$. The vector of weights $w_{i,j}$ is normalized so that the weights of each sample sum to one. This TF-IDF vector is used as input to the Naive Bayes classifier discussed in Section~\ref{sub:bayesian}. \subsubsection{Word embedding using Word2Vec}\label{word2vec} Word embedding maps word tokens of varied length to fixed-length word vectors used as inputs to machine learning models. In essence, word embedding reconstructs the linguistic contexts of words and produces a vector space. Word2Vec uses a group of related models, shallow two-layer neural networks, that are trained over a large corpus of text and produce a vector space, typically of several hundred dimensions. Each unique word in the corpus is assigned a corresponding vector in the vector space, so that words sharing common contexts in the corpus are located in close proximity to one another \cite{mikolov2013efficient}. We have trained the Continuous Bag-of-Words (CBOW) structure of the Word2Vec model with a corpus collected from the whole dataset.
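The CBOW training setup can be sketched as a sliding-window pair generator: each window of five consecutive words yields one (context, center-word) training pair. The tokens below are illustrative; a production setup would typically rely on a library such as gensim rather than this hand-rolled sketch.

```python
def cbow_pairs(tokens, window=5):
    """Slide a fixed-size window over a token sequence; the centre word is the
    prediction target and the surrounding words form the CBOW context."""
    half = window // 2
    pairs = []
    for i in range(half, len(tokens) - half):
        context = tokens[i - half:i] + tokens[i + 1:i + half + 1]
        pairs.append((context, tokens[i]))
    return pairs

# Hypothetical tokenized request description.
toks = ["water", "leak", "reported", "in", "basement", "unit", "three"]
pairs = cbow_pairs(toks)
# First pair: context ["water", "leak", "in", "basement"], target "reported".
```

The model is then trained to predict each target word from its context, which is what places words with similar contexts close together in the embedding space.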
We segment the whole dataset of 659,421 samples into windows of five consecutive words as input, with the central word as the prediction target. Empirical experiments indicate that using the training dataset as the corpus produces better classification performance; the details of these experiments are not central to the scope of this paper and are thus omitted. Statistically, the number of word tokens in a request description ranges from 1 to 780, with a weighted average of 46. Over 90\% of the request descriptions have fewer than 100 word tokens. We therefore set the word vector dimension to 100 and pad with zeros when a description has fewer than 100 tokens. \section{Hierarchical Classification Method}\label{sec:Hierarchical} A hierarchical classification method handles classification with a large number of possible classes \cite{Silla2011}. The current training dataset contains 157 unique labels and thus 157 classes. We develop a hierarchical classification method to evaluate whether it works for our dataset. There are two kinds of hierarchical classification. One uses meta-classes in a two-level hierarchy where leaf classes are grouped by similarity into intermediate classes (the meta-classes)~\cite{hao2007hierarchically}. The other copes with a pre-defined class hierarchy, a type of supervised learning. Our method is of the former kind: we build the hierarchy during training with a clustering method, and then classify a sample from the meta-classes down to the leaf classes. \subsection{Meta-class Generation using K-means and GMM} To create meta-classes for the 157 labels according to similarity, we first apply the K-Means clustering algorithm~\cite{kanungo2002efficient}. The K-Means algorithm first assigns each sample to the cluster whose mean has the least squared Euclidean distance. Second, it recalculates the means as the centroids of the observations in the new clusters.
Finally, the algorithm converges when the assignments no longer change. K-Means performs hard clustering: each data sample is assigned to exactly one cluster, the closest one. K-Means is simple to train but does not guarantee convergence to the global optimum, which can degrade clustering accuracy. The Gaussian Mixture Model (GMM)~\cite{bilmes1998gentle} is a finite mixture probability distribution model whose parameters are estimated iteratively using the Expectation-Maximization (EM) algorithm~\cite{bilmes1998gentle}. GMM/EM determines a sample's probability of belonging to each cluster, which is a soft clustering process. Each cluster can constrain its covariance in different ways (spherical, diagonal, tied, or full) instead of only spherical as in K-Means, which makes cluster assignment more flexible in GMM than in K-Means. The EM algorithm has limitations. One issue is that the number of mixture components affects its overall performance, and this number is unknown a priori. Therefore, the optimal number of mixtures is important for an efficient and accurate estimation. Our solution is to apply K-Means to obtain the centroid values (geometric centers) and then initialize GMM with them~\cite{figueiredo2002unsupervised}. The input to K-Means is the set of 100-dimensional word vectors of the 157 class labels produced by the feature extraction process described in Section~\ref{subsec:featureextraction}. We apply silhouette coefficients to select the optimal value of $K$. The silhouette coefficient ranges over $[-1, 1]$: (1) a value near $+1$ indicates a sample is far away from the neighboring clusters; (2) a value of 0 indicates a sample is on or very close to the decision boundary between two neighboring clusters; and (3) a negative value indicates the sample might have been assigned to the wrong cluster.
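A minimal pure-Python sketch of the silhouette coefficient, $s(i) = (b - a)/\max(a, b)$, follows. The 1-D toy points are illustrative only; the real inputs are the 100-dimensional label vectors with Euclidean distance.

```python
def silhouette(points, labels):
    """Mean silhouette coefficient: a is the mean intra-cluster distance of a
    sample, b the mean distance to the nearest other cluster."""
    def dist(p, q):
        return abs(p - q)  # 1-D toy distance; Euclidean in the general case

    scores = []
    for i, (p, c) in enumerate(zip(points, labels)):
        same = [dist(p, q) for j, (q, d) in enumerate(zip(points, labels))
                if d == c and j != i]
        a = sum(same) / len(same)
        b = min(
            sum(dist(p, q) for q, d in zip(points, labels) if d == other)
            / sum(1 for d in labels if d == other)
            for other in set(labels) if other != c
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated toy clusters -> mean coefficient close to +1.
pts = [0.0, 0.2, 0.4, 10.0, 10.2, 10.4]
labs = [0, 0, 0, 1, 1, 1]
score = silhouette(pts, labs)
```

Sweeping $K$ and keeping the clustering with the highest mean coefficient is the selection rule applied in the next step.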
We derive the silhouette coefficients for $K$ in the range $[3,10]$; the analysis identifies $K=5$ as the optimal value for K-Means/GMM/EM clustering. Figure~\ref{fig:kmeans} plots the resulting 5 clusters of the 157 labels. \begin{figure}[h] \centering \includegraphics[scale=0.2]{./figure/Kmeans.jpeg} \caption{The plot of K-means and GMM clustering} \label{fig:kmeans} \end{figure} \subsection{Meta-class Generation using Topic-based Clustering} We develop another clustering method that treats the 157 labels as topics and clusters them into groups with similar themes. In this method, we apply OPTICS (Ordering Points To Identify the Clustering Structure) \cite{ankerst1999optics}, which finds core samples of high density and expands clusters from them. The output from OPTICS provides $K$ clusters. We then apply LDA (Latent Dirichlet Allocation) to output $K$ topics. Each topic consists of a fixed number of words with their associated weights. LDA places a Dirichlet prior on the topic distribution; in practice, this prior assumes that documents (labels, in our case) cover only a small set of topics, and that topics are dominated by a small set of frequent words. Finally, we assign labels to clusters with an entropy-based algorithm, listed below. The input to this topic-based algorithm is the word vector representation of the 157 labels in $157 \times 10$ dimensions; the output is $K$ clusters of the 157 labels. The clustering result is shown in Figure~\ref{fig:lda}.
\begin{algorithm} \DontPrintSemicolon \SetAlgoLined \KwInput{$\hat{k} \gets \{k_{j}\}, j \in [1,157]$ word vectors of the 157 labels} \KwOutput{Clusters of the 157 labels} \begin{algorithmic} \STATE $\varsigma \gets$ number of clusters output from OPTICS on Input\; \STATE $num\_topics \gets \varsigma$ to initialize LDA; \STATE $ \hat{v_{t}} \gets $ output from LDA; \textcolor{blue} {\COMMENT{a topic vector of $\varsigma \times 10$; each entry is a tuple $t$ of [word: weight]} } \ForEach{$\hat{v_{t}}[i], i \in [1,\varsigma]$}{ \STATE $l \gets |\hat{v_{t}}[i]|$; \textcolor{blue} {\COMMENT{number of tuples} } \STATE ${c_{i}}= \frac{1}{\left | l \right |} \sum_{t \in \hat{v_{t}}[i]} t.weight$ \textcolor{blue}{\COMMENT{calculate the centroid}} \ForEach{$k_{j}, j \in [1,157]$}{ \STATE $ p(k_{j}) = \frac{sim(k_{j},{c}_{i})}{\sum_{r=1}^{ \varsigma} sim(k_{j},{c}_{r})}$ \STATE $e^{c_i}_{k_{j}} = - p(k_{j}) \cdot \log(p(k_{j}))$ \textcolor{blue}{\COMMENT{calculate the entropy of $k_{j}$ with respect to topic $\hat{v_{t}}[i]$}} } } \STATE Assign $k_{j}$ to the topic cluster $m$ whose entropy $e^{c_m}_{k_{j}}$ is minimal; \end{algorithmic} \caption{Topics and Entropy Based Clustering} \end{algorithm} \begin{figure}[h] \centering \includegraphics[scale=0.4]{./figure/LDA.jpg} \caption{Topic based clustering result} \label{fig:lda} \end{figure} \vspace{-0.3in} \subsection{Hierarchical Classification} The clustering algorithms generate the meta-classes of the 157 labels for training a classifier. The classification has the structure of a two-level tree, as depicted in Figure~\ref{fig:hierarchytree}: the leaves are the 157 labels and the non-leaf nodes are the $K$ meta-classes. \begin{figure}[h] \centering \includegraphics[scale=0.5]{./figure/hierarchytree.png} \caption{The two-level hierarchy of classes} \label{fig:hierarchytree} \end{figure} A classifier is first trained on the meta-classes, denoted $Model_{meta}$. This classification decides the meta-class a sample belongs to.
As a result, the original data samples are grouped into $K$ clusters. Each meta-class $i$ contains $L_{i}$ labels, with $\sum_{i=1}^{K} L_{i} = 157$. The data samples classified into one specific meta-class are used again to train a sub-model that further classifies them into the leaf classes. The hierarchical training thus produces one meta-class model, denoted $Model_{meta}$, and $K$ leaf-class models, one per cluster; in total, we have $K+1$ models. At inference time, there are two methods. The first method feeds the test sample to $Model_{meta}$; based on the meta-class decision, the test sample is then fed to one of the leaf-class models. The second method feeds the test sample directly to each of the leaf-class models and selects the classification with the highest probability as the result. The first method has a shorter inference time since it runs two models, while the second method runs $K$ models. In terms of accuracy, the second method tends to be more accurate, as it reduces error propagation from the meta-class level. \subsubsection{Hierarchical Naive Bayes Classifier} \label{sub:bayesian} Both the meta-class model and the leaf-class models use the Naive Bayes classifier. The difference is that the meta-class model classifies over the five classes resulting from topic clustering of the labels. Regarding the inputs, $X$ contains the word tokens of the request description. We adopt the Bernoulli Naive Bayes classifier to encode the input word tokens. The Bernoulli Naive Bayes classifier assumes each feature takes only binary values; in this case, the word tokens are encoded as a binary bag-of-words vector, where $1$ indicates the presence of a term and $0$ its absence. This leads to 19,607 input feature dimensions from the request descriptions of the 40,000-sample training set.
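A minimal sketch of this binary bag-of-words encoding follows. The tokens are hypothetical (the real vocabulary has 19,607 entries); the key property is that the vocabulary, and hence the input dimension, is derived from the training set.

```python
def build_vocab(train_requests):
    """Vocabulary derived from the training set; its size fixes the input
    dimension of the Bernoulli Naive Bayes features."""
    words = sorted({w for req in train_requests for w in req})
    return {w: i for i, w in enumerate(words)}

def one_hot(tokens, vocab):
    """Binary presence vector: 1 if the term occurs in the request, else 0."""
    vec = [0] * len(vocab)
    for w in tokens:
        if w in vocab:          # out-of-vocabulary words are dropped
            vec[vocab[w]] = 1
    return vec

# Hypothetical training requests -> a 4-dimensional feature space.
train = [["broken", "street", "light"], ["street", "noise"]]
vocab = build_vocab(train)
x = one_hot(["street", "light", "flooding"], vocab)   # "flooding" is unseen
```

Because the vocabulary comes from the training data, re-training on a different set changes the feature dimension, which matches the 19,607 vs. 32,698 dimensions reported for the two training set sizes.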
The limitation of this approach is that when the training set changes, the input needs to be encoded again. For example, the 80,000-sample training set leads to 32,698 dimensions of one-hot word-token encoding. To prevent a zero value of $P(x_{i}|y_{j})$ from forcing the posterior probability to zero, a smoothing value of 0.2 is added to each conditional probability $P(x_{i}|y_{j})$. Here $y$ represents the set of 157 responsible departments \{$y_{1}, ...y_{157}$\}. The classification selects the class $y_{j}$ that produces the maximum probability given the feature set $X$: \begin{equation}\label{eq:bayesinappro} \begin{aligned} P(\widehat y|x_{0},...x_{n}) \propto \max_{j=1}^{157} P(y_{j}) \cdot \prod_{i=1}^{n}P(x_i|y_{j}) \\ \widehat{y} = \arg\max_{j=1}^{157} \prod_{i=1}^{n}P(x_i |y_{j}) \cdot P(y_{j}) \end{aligned} \end{equation} \subsubsection{Hierarchical MLP Neural Network} By applying a neural network model to the text classification task, we assume the complex function created by the network of neurons is close to the true relationship between the input features (the word tokens of the request description) and the output (in our case, the 157 classes of responsible departments). In other words, a neural network with a certain number of hidden layers should be able to approximate any function between the input and the output, according to the Universal Approximation Theorem~\cite{Cybenko1989}. We develop a Multilayer Perceptron (MLP) neural network classifier in the hierarchical model. Both the meta-class model and the leaf-class models have the network structure listed in Table~\ref{tab:fullyconnectednetwork}, except that the output layer of the meta-class model has size 5 instead of 157; accordingly, its output weight matrix is of size $128 \times 5$. Within this structure, each input neuron is connected to each output neuron in the next layer, which is referred to as a Dense layer.
Given that the input to the first Dense layer has size $10,000 = 100 \times 100$ and its output has size $512$, the weight matrix of this Dense layer has size $10,000 \times 512$. We stack two Dense layers with one output layer of 157 classes. The network structure is listed in Table~\ref{tab:fullyconnectednetwork}. \vspace{-0.3in} \begin{table}[h] \centering \caption{The Structure of Fully Connected Neural Network} \begin{tabular}{l|c|c|c} \hline Layers & Input Size& Output Size & Kernel (Weight) Size \\ \hline \multirow{2}{*}{Dense} &\multirow{2}{*}{10,000}&\multirow{2}{*}{512} &\multirow{2}{*}{$ 10,000\times512 $ }\\ &&&\\ \hline \multirow{2}{*}{Dense} &\multirow{2}{*}{512}& \multirow{2}{*}{128}&\multirow{2}{*}{$ 512\times128 $ }\\ &&&\\ \hline \multirow{2}{*}{Output} &\multirow{2}{*}{128}& \multirow{2}{*}{157}&\multirow{2}{*}{$ 128\times157 $ }\\ &&&\\ \hline \end{tabular} \label{tab:fullyconnectednetwork} \end{table} \vspace{-0.4in} \section{Hybrid Machine Learning}\label{sec:hybrid} \vspace{-0.1in} The hierarchical classification method is useful for dealing with a large number of classes: it trains a classifier over a much smaller number of classes (at the meta-class level) while keeping comparable levels of confidence. The main issue with hierarchical classification is error propagation, since errors at the meta-class level propagate directly to the leaf-class classification. \begin{figure}[h] \centering \includegraphics[scale=0.25]{./figure/hybridmodel.png} \caption{Overview of hybrid machine learning models with word embeddings} \label{fig:hybrid} \end{figure} To improve the classification performance, we further develop a residual neural network classifier inspired by ResNet~\cite{ResNet}. By ensembling the hierarchical models and the residual neural network model, we obtain the hybrid learning method depicted in Figure~\ref{fig:hybrid}.
We add two basic models as benchmarks: a Naive Bayes classifier and an MLP classifier. In total, we have five classifier outputs on the same datasets. A simple ensembling method selects the best-performing classification results according to the \textit{log loss} metric. The Naive Bayes classifiers take TF-IDF word embeddings as 19,607-dimensional word vectors; the other three neural network models use Word2Vec embeddings as $100 \times 100$-dimensional vectors. \subsection{Residual Convolutional Neural Network} In this paper, we apply the \textit{full pre-activation} structure proposed by He et al.~\cite{he2016identity} to build our convolutional layers. To mitigate the vanishing gradient problem as a CNN grows deeper, residual skip connections (identity mappings) are added to the convolutional layers. The input $X$ of a layer is added to the convolutional output to form the combined input to the next layer, $y=F(X, \{W_{i}\}) + W_{s}X$. This structure allows the gradient to be propagated without loss of representations. The skip connection and the convolutional layers together form a Residual Block. In our model, a Residual Block contains two convolutional layers, as shown in Figure~\ref{fig:residualblock}. \begin{figure}[h] \centering \includegraphics[scale=0.4, angle=90]{./figure/residualblock.png} \caption{Structure of the Residual Block} \label{fig:residualblock} \end{figure} A \textit{Batch Normalization--Activation} layer is placed between the two convolutional layers. As discussed in~\cite{he2016identity}, $1 \times 1$ convolution can be useful when there are fewer layers, so we choose a $1 \times 1$ convolution layer as the skip connection. Based on the Residual Block structure, we build our 20-layer residual convolutional neural network model shown in Table~\ref{tab:architecture}.
\begin{table}[h] \centering \caption{Structure of Residual CNN} \begin{tabular}{l|c|c} \hline Layers& Kernel & Output Size \\ \hline \multirow{2}{*}{Convolution} &\multirow{2}{*}{$ \left[ \begin{matrix} \ 3\times 3, 32 \ \end{matrix} \right] \times 1 $ }& \multirow{2}{*}{100$\times$100 }\\ & & \\ \hline \multirow{4}{*}{Residual Block} &\multirow{4}{*}{$ \left[ \begin{matrix} \ 3\times 3, 32 \ \\ \ 3\times 3, 32 \ \end{matrix} \right] \times 1 $ }& \multirow{4}{*}{100$\times$100 }\\ & & \\ & & \\ & & \\ \hline \multirow{4}{*}{Residual Block} &\multirow{4}{*}{$ \left[ \begin{matrix} \ 3\times 3, 64 \ \\ \ 3\times 3, 64 \ \end{matrix} \right] \times 2 $ }& \multirow{4}{*}{50$\times$50 }\\ & & \\ & & \\ & & \\ \hline \multirow{4}{*}{Residual Block} &\multirow{4}{*}{$ \left[ \begin{matrix} \ 3\times 3, 128 \ \\ \ 3\times 3, 128 \ \end{matrix} \right] \times 2 $ }& \multirow{4}{*}{25$\times$25 }\\ & & \\ & & \\ & & \\ \hline \multirow{4}{*}{Residual Block} &\multirow{4}{*}{$ \left[ \begin{matrix} \ 3\times 3, 256 \ \\ \ 3\times 3, 256 \ \end{matrix} \right] \times 2 $ }& \multirow{4}{*}{13$\times$13 }\\ & & \\ & & \\ & & \\ \hline \multirow{4}{*}{Residual Block}&\multirow{4}{*}{$ \left[ \begin{matrix} \ 3\times 3, 512 \ \\ \ 3\times 3, 512 \ \end{matrix} \right] \times 2 $ } & \multirow{4}{*}{7$\times$7 }\\ & & \\ & & \\ & & \\ \hline \multirow{2}{*}{Dense}&\multirow{2}{*}{ } & \multirow{2}{*}{157 }\\ & & \\ \hline \end{tabular} \label{tab:architecture} \end{table} \vspace{-0.1in} \section{The Evaluation}\label{sec:evaluation} The evaluation focuses on the assessment of the classification performance. We compare the metrics of training five models with different sizes of training data and testing the models with data collected at different time spans. \subsection{The Data Setup} The dataset contains 659,421 samples. We split the data with a ratio of 80\%:20\% for the training set and the test set. 
We further partition the training set into shards of 40,000 samples each. Likewise, we partition the test set into shards of 4,000 samples each. Hence, the dataset is used in units of shards. We set up experiments running the five models on two data settings: (1) one shard of the training set with one shard of the test set; and (2) two shards of the training set with one shard of the test set. In addition to the 659,421 samples, we also test the best-performing model using data collected in a different time period, which contains 35,663 valid samples. \subsection{The Model Assessment} The evaluation uses Precision and Recall as model assessment metrics. For a multi-class classification model, Precision and Recall are calculated per class and averaged to obtain the final score. There are two averaging methods, \textit{micro} and \textit{macro}. The macro method gives every class the same weight, while the micro method gives every sample the same weight. The calculation is shown below, where $P_{i}$ is the Precision of the $i$th class, $R_{i}$ is the Recall of the $i$th class, and $l$ is the total number of classes. \begin{equation} \begin{aligned} P_{macro}&= \frac{1}{l}\sum_{i=1}^{l}P_i \quad P_i= \frac{TP_i}{TP_i+FP_i}\\ R_{macro}&= \frac{1}{l}\sum_{i=1}^{l}R_i \quad R_i= \frac{TP_i}{TP_i+FN_i}\\ P_{micro}&= \frac{\sum_{i=1}^{l}TP_i}{\sum_{i=1}^{l}TP_i+\sum_{i=1}^{l}FP_i}\\ R_{micro}&= \frac{\sum_{i=1}^{l}TP_i}{\sum_{i=1}^{l}TP_i+\sum_{i=1}^{l}FN_i} \end{aligned} \end{equation} where True Positive ($TP$): the real label is positive and the predicted label is also positive; False Positive ($FP$): the real label is negative and the predicted label is positive; True Negative ($TN$): the real label is negative and the predicted label is also negative; False Negative ($FN$): the real label is positive and the predicted label is negative.\\ The above metrics focus on whether the classification is correct or not.
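The four averaged metrics can be computed directly from per-class counts; a self-contained sketch with toy labels:

```python
from collections import Counter

def micro_macro(y_true, y_pred):
    """Micro- and macro-averaged Precision/Recall from per-class TP/FP/FN counts."""
    classes = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1   # predicted class p, wrongly
            fn[t] += 1   # missed class t
    # Macro: average per-class scores, every class weighted equally.
    p_macro = sum(tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
                  for c in classes) / len(classes)
    r_macro = sum(tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
                  for c in classes) / len(classes)
    # Micro: pool the counts, every sample weighted equally.
    total_tp = sum(tp.values())
    p_micro = total_tp / (total_tp + sum(fp.values()))
    r_micro = total_tp / (total_tp + sum(fn.values()))
    return p_micro, r_micro, p_macro, r_macro

y_true = [0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 1]
pm, rm, pM, rM = micro_macro(y_true, y_pred)
```

Note that for single-label multi-class data, micro Precision equals micro Recall (both reduce to accuracy), which is why identical Micro columns appear in the result tables.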
Log Loss measures the distance between the predicted label and the real label. It takes a model's predicted probability for each class as input and outputs a log loss value as calculated in Eq.~\ref{eq:logloss}. The lower the log loss value (i.e., the closer to zero), the better the model performs. $l$ is the total number of classes, $y_i$ is the real label, and $p_i$ is the predicted probability of class $i$. \begin{equation}\label{eq:logloss} Log Loss = -\sum_{i=1}^{l}\left[ y_i\log(p_i) + (1-y_i)\log(1-p_i) \right] \end{equation} \subsection{The Experiment Results} \vspace{-0.1in} The first set of experiments evaluates the classification performance of the five models. The training data sets consist of one shard (40,000 samples) and two shards (80,000 samples), respectively. The test data set is one shard of 4,000 samples. Both the training set and the test set are drawn from data samples collected within the same span of time. Table~\ref{tab:157clses} lists the metrics measured for the five classification models. \vspace{-0.2in} \begin{table}[h] \centering \captionof{table}{Classification Performance on Five Models} \begin{tabular}{|l|c|c|c|c|c|} \hline \multirow{3}{*}{Models} & \multirow{3}{*}{Training Subset} & \multicolumn{4}{|c|}{Metrics} \\ \cline{3-6} && \multicolumn{2}{|c|}{Precision} & \multicolumn{2}{|c|}{Recall} \\ \cline{3-6} &&Micro&Macro&Micro&Macro\\ \hline \multirow{2}{*}{Hierarchical MLP} & 40,000 &0.633 &0.247 & 0.633 & 0.214 \\ &80,000 &0.686 & 0.332 & 0.686 & 0.288 \\ \hline \multirow{2}{*}{MLP} & 40,000 &0.662 &0.276 & 0.662 & 0.236\\ &80,000 &0.689 & 0.281 & 0.689 & 0.233\\ \hline \hline \multirow{2}{*}{Hierarchical Naive Bayes} & 40,000 &0.746 &0.439 & 0.746 & 0.367\\ &80,000 &0.719 &0.375 & 0.719 & 0.288\\ \hline \multirow{2}{*}{Naive Bayes} &40,000 & 0.734 & 0.358 & 0.734 & 0.254 \\ &80,000& 0.700 & 0.296 & 0.700 & 0.194\\ \hline \hline \multirow{2}{*}{Residual CNN} &40,000 & 0.754&0.420& 0.754& 0.389 \\ &80,000 & \textbf{0.787} & \textbf{0.510} & \textbf{0.787} &
\textbf{0.444}\\ \hline \end{tabular} \label{tab:157clses} \end{table} \textbf{Training sample size.} By doubling the training sample size, we observe that three models improve their classification performance: MLP, Hierarchical MLP, and Residual CNN. The metrics of both Naive Bayes and Hierarchical Naive Bayes decrease. As presented in Section~\ref{sub:bayesian}, we obtain 19,607 input feature dimensions from the request descriptions of the 40,000-sample training set and 32,698 dimensions of one-hot word-token encoding from the 80,000-sample training set. This observation indicates that increasing the feature size hurts the performance of our Naive Bayes implementation: the classifier finds one shard of the training set more separable than the doubled training set. In comparison, the word embedding method allows the MLP and Residual CNN classifiers to keep the same feature size of $100 \times 100$-dimensional word token vectors regardless of the training data size. \textbf{Hierarchical vs non-hierarchical.} Hierarchical classification marginally improves the performance of the Naive Bayes classifier. The flat MLP and the Residual CNN both perform better than the hierarchical classifiers. In all experiments, the Residual CNN outperforms the other models. Overall, hierarchical classification performs worse than non-hierarchical classification. The benefit of introducing meta-classes through hierarchical classification is the ability to observe the source of classification errors. Figure~\ref{fig:confusion} shows the classification results with 5 meta-classes. It indicates that the classification errors mainly come from classes 2 and 4 being misclassified as class 1, and classes 1, 3, and 4 being misclassified as class 2. We also observe that all five classifiers achieve over 80\% precision on the 5-meta-class classification, higher than the classification performance over the 157 classes.
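This error analysis rests on a confusion matrix over the meta-classes. A minimal sketch with hypothetical predictions (the real counts come from the 5-meta-class experiments):

```python
def confusion_matrix(y_true, y_pred, n_classes):
    """m[i][j] counts samples whose real class is i and predicted class is j;
    off-diagonal entries expose where misclassifications concentrate."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

# Hypothetical meta-class labels for six samples (5 meta-classes).
y_true = [0, 1, 1, 2, 3, 4]
y_pred = [0, 0, 1, 1, 3, 0]
cm = confusion_matrix(y_true, y_pred, 5)
```

Row sums give per-class support, the diagonal gives correct decisions, and large off-diagonal cells (e.g., real class 1 predicted as 0 here) identify the systematic confusions discussed above.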
Due to space limitations, we omit the detailed values. \begin{figure}[h] \centering \includegraphics[scale=0.4]{./figure/5class-confusionmetrics.png} \caption{Distribution of predicted classes vs real classes} \label{fig:confusion} \end{figure} \textbf{Blind Test.} The second set of experiments runs a blind test. We select the best-performing trained model of each of the five classifiers and test them on all 35,663 data samples collected in a different time span from the first set of experiments. The result is shown in Table~\ref{tab:blindtest}. Two classifiers, Naive Bayes and Residual CNN, produce better classification performance than the other three. Again, the Residual CNN performs best on the blind test data. \vspace{-0.4in} \begin{table} \centering \captionof{table}{Classification Performance of Blind Test} \begin{tabular}{|l|c|c|c|c|} \hline \multirow{3}{*}{Models} & \multicolumn{4}{|c|}{Metrics} \\ \cline{2-5} & \multicolumn{2}{|c|}{Precision} & \multicolumn{2}{|c|}{Recall} \\ \cline{2-5} &Micro&Macro&Micro&Macro\\ \hline Hierarchical MLP &0.650 & 0.244 & 0.650 & 0.192 \\ \hline {MLP} &0.689 & 0.259 & 0.689 & 0.214\\ \hline \hline {Hierarchical Naive Bayes} & 0.678 & 0.251 & 0.678 & 0.201\\ \hline {Naive Bayes} & 0.726 & 0.295 & 0.726 & 0.256\\ \hline \hline {Residual CNN} &\textbf{0.764} & \textbf{0.417} & \textbf{0.764} & \textbf{0.352} \\ \hline \end{tabular} \label{tab:blindtest} \end{table} \textbf{Log loss.} We further evaluate the Residual CNN on the blind test data using log loss to measure the distance between the predicted and real labels. The result is listed in Table~\ref{tab:logloss}. When applied to different test data, the Residual CNN shows only a marginal log loss change.
\begin{table} \centering \captionof{table}{Log Loss of Hybrid Classifiers} \begin{tabular}{|l|c|c|} \hline Models& Test Data Size & Log Loss\\ \hline \multirow{2}{*}{Best Performing Residual CNN} & 4,000 & 1.152 \\ & 35,663 & 1.192 \\ \hline \end{tabular} \label{tab:logloss} \end{table} \textbf{Inference time.} We further measure the inference time on the 4,000-sample test set. Note that the Naive Bayes and Hierarchical Naive Bayes classifiers run on a CPU node while the other three neural network models run on a GPU node; therefore, the comparison between the Naive Bayes models and the neural network models should not be based on absolute values. Instead, we observe that hierarchical classification introduces approximately a tenfold inference delay, and that the inference time of the Residual CNN is over 20 times that of the MLP. \begin{figure}[h] \centering \includegraphics[scale=0.5]{./figure/testingtime.png} \caption{Inference time on the test data of 4,000 samples} \label{fig:testingtime} \end{figure} \textbf{Summary.} The experiments use different sizes of training and test data. The observations show that the residual convolutional neural network model produces the best performance among the classifiers. We also observe that the Naive Bayes model with one-hot word-token encoding performs reasonably well, with the limitation of handling growing feature sizes. A simple two-layer fully connected neural network model has the advantage of fast inference. \subsection{Threats to Validity} This paper presents the first stage towards automated smart dispatching of residential service requests. Its focus is exploring a combination of word embedding techniques and machine learning models to improve classification performance. Our hybrid machine learning model follows a simple ensembling approach that selects the best-performing classifier based on the log loss metric.
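This selection step can be sketched as follows. The log loss uses the binary cross-entropy form with a leading minus (so that lower is better); the per-model validation scores below are hypothetical placeholders, not measured values.

```python
import math

def log_loss(y_true_onehot, y_prob, eps=1e-15):
    """Cross-entropy summed over classes; probabilities are clipped to
    avoid log(0)."""
    loss = 0.0
    for y, p in zip(y_true_onehot, y_prob):
        p = min(max(p, eps), 1 - eps)
        loss -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return loss

def select_best(model_scores):
    """The simple ensembling rule: pick the classifier whose validation
    log loss is lowest."""
    return min(model_scores, key=lambda name: model_scores[name])

# Hypothetical validation log-loss values for the five classifiers.
model_scores = {"NB": 1.9, "Hier-NB": 2.1, "MLP": 1.6,
                "Hier-MLP": 1.8, "ResCNN": 1.2}
best = select_best(model_scores)
```

In a deployed service this one-shot selection would sit inside a feedback loop, re-scoring the candidate models as fresh labeled traffic arrives.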
For our model to be deployed as an online machine learning service handling requests, the model selection mechanism needs to be part of a feedback loop based on actual inference results and their quality. A weighted score over multiple metrics that best reflects the online service requirements should be developed to replace the current simple selection based on a single metric. Our evaluation compares against two benchmark models, the Naive Bayes and MLP classifiers. In the literature, NLP-based machine learning methods for news classification, customer review sentiment analysis, and movie review classification are related to our method. However, those datasets are specific to their domains and offer no direct solution to the problems in our dataset, which is not directly labeled for training. Combining our hybrid word embedding and learning models with existing mining and learning methods becomes a new stream of investigation that requires a dedicated project, which is beyond the current funding budget. \section{Conclusion}\label{sec:conclusion} In this paper, we present a machine-learning-based method for a natural language classification task in a real-world application. We carry out a rigorous analysis of the dataset and design a feature engineering process that selects and extracts features with statistical evidence. We apply two word-embedding techniques and develop five classification models. This hybrid machine learning method produces several benefits, namely (1) generating suitable labels for supervised learning; (2) clustering data samples into meta-classes for training and initializing models to improve classification performance over unbalanced data samples; (3) producing the best-performing model through comprehensive experiments and evaluation; and (4) understanding the source of errors with the hierarchical classification method.
It remains our future work to explore newly published word embedding models to study the effects of word embedding on classification performance. \bibliographystyle{spmpsci_unsrt}
\section{Introduction} FU Orionis type objects (FUors) are a group of pre-main sequence objects showing a long-lived outburst in optical bands \citep{hk1996}. The prototype of this group, FU Orionis, flared up by 6 magnitudes in the $B$ band over a few months in 1936. Since then, FU Orionis has stayed bright for $\sim$80 years and dimmed only 0.015 magnitude per year \citep{kenyon2000}. A dozen FUors have been discovered with analogous photometric features. They are located in active star forming regions and show a rapid increase of optical brightness (3--5 mag) within a few months. A ring-shaped asymmetric reflection nebula appears after the outburst \citep{goodrich1987}. They are expected to have decay times of 10--100 years \citep{bl1994}, although there are small differences among sources \citep{hk1996}. Another dozen FUor-like objects have been found via spectral diagnosis. They have an F--G supergiant spectrum in the optical and a K--M giant-supergiant spectrum in the near-infrared wavelength region. In addition, they show double-peaked line profiles and P Cygni profiles in H$\alpha$, and infrared excess in the spectral energy distribution (SED) \citep{wein1991,lee2011}. FU Orionis type outburst phenomena have been interpreted as a sudden increase in the accretion rate by a factor of 100--1000 in comparison to that in the quiescent state. Throughout a whole outburst event, $\sim$0.01 M$_{\sun}$ of disk material is supplied to the central star \citep{bl1994,hk1996}. In this picture, material reaching the innermost part of the circumstellar disk is dumped onto the central star. This process in FUors might give rise to detectable signs of accretion and material dumping at the inner edge of the disk, often referred to as ``flickering''. It can also cause inhomogeneities such as cool or hot spots on the stellar surface or disk instabilities, which are believed to be the cause of periodic or sporadic variability on hour--day timescales. 
HBC 722 (also known as LkH$\alpha$ 188 G4, PTF 10qpf and V2493 Cyg) is located in the dark cloud region named ``Gulf of Mexico'', in the southern part of the North America/Pelican Nebula Complex at a distance of 520 pc \citep{laugalys2011}. HBC 722 is the second FUor with a well-characterized pre-outburst optical spectrum. Before the outburst, the source had the characteristics of a classical T Tauri star with a mass of $\sim$0.5 M$_{\sun}$, a bolometric luminosity of 0.85 L$_{\sun}$, a visual extinction of 3.4 magnitudes, and the small amount of variability generally seen in Class II young stellar objects (YSOs) \citep{semkov2010,miller2011,kospal2011}. Its prominent H$\alpha$ features with an equivalent width of 100 nm imply that the accretion activity was high even in the quiescent state \citep{ck1979}. In 2010 July, HBC 722 produced a large amplitude optical outburst over a few months ($\Delta$$V$= 4.7 mag), and was classified as a FUor \citep{semkov2010,miller2011}. After the peak of 2010 September, it dimmed by about 1.5 magnitudes in $V$ over 6 months, unlike typical FUors. \citet{kospal2011} suggested that, at this rate of decline, HBC 722 could return to the quiescent state within a year. They claimed that it was necessary to reconsider the classification of the source as a FUor, or as another category of flaring YSO with a shorter outburst duration, like EXors. They also calculated some properties of HBC 722 after the outburst. They derived an accretion rate of 10$^{-6}~$M$_{\sun} $yr$^{-1}$ and a bolometric luminosity of 8--12 L$_{\sun}$ assuming a mass of 0.5 M$_{\sun}$ and a radius of 3 R$_{\sun}$. During the outburst, the luminosity rose roughly by a factor of 10 compared to the quiescent luminosity (0.85 L$_{\sun}$), which is somewhat smaller than for classical FUors, whose luminosity rises by a factor of 10--100, although the source often shows typical spectra of FUors \citep{audard2014}. 
Since then, however, HBC 722 remained in a relatively constant state with small fluctuations, maintaining a level brighter than the quiescent state by 3.3 magnitudes ($V$) for a few months, and then started to re-brighten \citep{semkov2012a,green2013}. According to a recent study, the source has gradually regained its brightness by about 1.5 magnitudes in the $V$ band over two years. Thus, HBC 722 can be classified as a FUor \citep{audard2014}. Meanwhile, the color has also changed slightly during the re-brightening state. It became bluer by 0.2 magnitude in $R$-$I$ color compared to the constant state \citep{semkov2014}. In this paper, we present the results of high cadence photometric observations from 2011 April to 2013 May. These observations include the re-brightening state in optical/near-infrared wavelengths to track the re-increase of the accretion rate of HBC 722. We trace the variability of HBC 722 on multiple timescales (year, day and intra-day) and try to find evidence of flickering. In section 2, the details of the observation strategies and data reduction techniques are described. We analyze the behaviors of HBC 722 using light curves, color curves, a color-magnitude diagram and a color-color diagram in section 3. In section 4, we discuss the time evolution of the SED after outburst and the diagnosis of flickering with day variability and intra-day variability (IDV) checks. Finally, we summarize our conclusions in section 5. In this study, we use the AB magnitude system. We also refer to year or longer timescales as `long term' and day or intra-day timescales as `short term'. \section{Observations and Data reduction} We carried out photometric observations of HBC 722 using the Camera for QUasars in EArly uNiverse (CQUEAN) attached to the 2.1 m Otto Struve telescope at the McDonald Observatory \citep{kim2011,park2012}. 
Using a 1024$\times$1024 pixel deep-depletion CCD chip, CQUEAN has a 4.\arcmin7$\times$4.\arcmin7 field of view with a custom-made focal reducer \citep{lim2013}. We obtained minute scale cadence images in the $r$, $i$ and $z$ bands of the SDSS filter system \citep{fj1996} for 60 nights from 2011 April to 2013 May. To see the short timescale behavior of HBC 722, we performed continuous time series monitoring observations for 2--8 hours on 18 nights. The observation field is shown in \citet{green2013}. The log of observations is listed in Table \ref{log}. The images were reduced with the IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.}$\slash$CCDRED packages. Since most of the images were exposed for less than 30 seconds, dark subtraction was unnecessary except for the 2011 December $r$ band images. Aperture photometry was conducted using Source Extractor \citep{ba1996}. We set the aperture size to 3 times the seeing FWHM of each night. Errors include Poisson errors, sky background fluctuations and subtraction errors for differential photometry. For the nightly averaged points, errors are denoted by the standard deviation of the comparison star in a night. The typical averaged error value is $\leq$ 0.01 magnitude. Since HBC 722 is located in an active star forming region, most of the surrounding objects in our HBC 722 field are likely to be YSOs, which could show small amplitude variability \citep{semkov2010,green2013}. Thus, for the differential photometry, we had to carefully select comparison stars in the field. We carried out variability checks on the background objects by repetitive differential photometry, pairing two candidates at a time. Finally, C7 and C4 (see Table \ref{calib} for details) were chosen as the comparison star and check star among 12 candidates. 
\citet{semkov2010} presented a comparison sequence for the HBC 722 field, which consists of fifteen objects in the $BVRI$ bands selected with care for low amplitude variability. In their work, C4 and C7 were used as photometric standards and labeled as A and C, respectively, according to their convention. We present the spectral energy distributions of the two objects in Figure \ref{sedcom}. We took $u$ and $g$ band data from the Sloan Digital Sky Survey (SDSS) database and $r$, $i$ and $z$ band data from this work. We also took magnitudes from the Two Micron All Sky Survey (2MASS) point source catalog \citep{2mass} for the $J$, $H$ and $Ks$ bands, and the Wide-field Infrared Survey Explorer (WISE) source catalog for the 3.4 and 4.6 $\mu$m data \citep{wise}. We converted the magnitudes of 2MASS and WISE from the Vega to the AB magnitude system. In the optical/near-infrared, neither the comparison star nor the check star shows any special features; both have blackbody-like spectra. We fitted the SEDs assuming a single temperature blackbody and obtained approximate temperatures of 3400--3500 K (spectral type M2--M3) and 3500--3600 K (spectral type M3--M4) for C4 and C7, respectively. Flux calibration was conducted using SDSS standard stars from \citet{smith2002}, taken on a photometric night during the 2012 June observing run. We considered only the zero point and airmass terms of the standard calibration formula, since the secondary and higher order terms were small enough to be ignored in the flux calibration. \section{Results} \subsection{Monitoring brightness and color variabilities} We observed HBC 722 from 2011 April to 2013 May, during the constant and re-brightening states of the object. The upper part of Figure \ref{overall} shows light curves for our observations in the $r$, $i$ and $z$ bands and Johnson-Cousins $R$ and $I$ band literature data for comparison \citep{semkov2012a,semkov2012b,semkov2014}. We converted the magnitudes of the $R$ and $I$ bands from the Vega to the AB magnitude system using \citet{br2007}. 
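The single-temperature blackbody fit applied above to the comparison stars can be sketched as a simple grid search over temperature with an analytically fitted scale factor. This is an illustrative sketch, not the paper's fitting code; the sampled wavelengths and the synthetic fluxes are hypothetical.

```python
import numpy as np

# Planck constants in SI units.
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wav_m, temp):
    """Blackbody spectral radiance B_lambda(T)."""
    return (2 * H * C**2 / wav_m**5) / np.expm1(H * C / (wav_m * KB * temp))

def fit_blackbody(wav_m, flux, temps=np.arange(2000.0, 8001.0, 10.0)):
    """Grid search: for each trial T, fit the free scale factor by
    linear least squares and keep the T with the smallest residual."""
    best_t, best_res = None, np.inf
    for t in temps:
        model = planck(wav_m, t)
        scale = np.dot(flux, model) / np.dot(model, model)
        res = np.sum((flux - scale * model) ** 2)
        if res < best_res:
            best_t, best_res = t, res
    return best_t

# Synthetic check: fluxes drawn from a 3500 K blackbody sampled at
# optical/near-infrared effective wavelengths (hypothetical values, metres).
wav = np.array([0.62, 0.76, 0.91, 1.25, 1.65, 2.16]) * 1e-6
flux = 4.2e-12 * planck(wav, 3500.0)
print(fit_blackbody(wav, flux))  # recovers the input temperature
```

A finer grid, or a nonlinear optimizer, would narrow the $\sim$100 K temperature ranges quoted in the text.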
Our observations and the archival data are fairly consistent in terms of the monotonic behavior of the brightness, though there are small magnitude offsets due to differences in the filter systems. From 2011 April to 2013 May, HBC 722 brightened by 1.8 mag in the $r$ band and 1.6 magnitudes in the $i$ and $z$ bands, including smaller scale variations. For comparison, the $R$ and $I$ bands from \citet{semkov2012a,semkov2014} got brighter by 1.73 and 1.55 magnitudes in the same period. The bottom part of Figure \ref{overall} presents long term color curves. Over the same period, the color got bluer by about 0.18 mag in $r$-$i$ and 0.1 mag in $i$-$z$. The $R$-$I$ color also got bluer by 0.18 mag, and differed from the reddest point by 0.25 mag. We divide the outburst stages of HBC 722 until 2013 May into five phases according to its brightness and color changes (see Table \ref{phase} and Figure \ref{overall}). First, Phase 1 spans from the beginning of the outburst to the first brightness peak in 2010 September. The brightness rapidly increased and the color got bluer in Phase 1. The following Phase 2 runs from 2010 September to 2011 February, during which the brightness got fainter and the color became redder. These two phases are presented in \citet{semkov2010} and \citet{miller2011}. After dimming by 1.4 magnitudes ($R$) during 6 months, the source remained relatively constant in brightness for another few months with only small bounces, but the color continuously got redder. We label this as Phase 3, which describes the relatively constant state in the period of outburst. On the other hand, in Phase 4, HBC 722 started to re-brighten slowly from 2011 October. It steadily recovered its luminosity and the color curves showed a distinctive bluer tendency again until 2012 May. Lastly, in Phase 5, the brightness continuously increased and exceeded its first luminosity peak in the 2013 May observation. In contrast to Phase 4, the color curves of the last phase show a weak bluer tendency or are close to constant with small fluctuations. 
According to \citet{semkov2014}, HBC 722 has maintained a similar brightness since the secondary peak in 2013 May. We found hints of short term phenomena distinct from the long term behaviors. Figure \ref{201108} shows sample data collected over two weeks in 2011 August. The upper two panels are the light and color curves of HBC 722, and the lower two panels are those of the comparison star, C7. We averaged the data of each night to see day scale brightness and color variability. Note that this period belongs to Phase 3 in the long term, during which HBC 722 stayed relatively constant in brightness and became redder in color (see Figure \ref{overall}). In contrast, day scale fluctuations are seen, along with another short timescale decreasing trend of 0.1 mag amplitude. Besides, the color hardly changed at the same time. Figure \ref{201209} shows another sample of data collected in 2012 September, produced in the same manner as Figure \ref{201108}. This period falls in Phase 5 with respect to the long term, during which the luminosity got brighter and the color showed slight changes. As in the previous case, there are small fluctuations in the light curves in the short term. In summary, the distinct behaviors on different timescales imply that there could be shorter timescale mechanisms in the HBC 722 system. \subsection{Color-magnitude diagram} Based on our $r$, $i$ and $z$ band photometry in Phases 3, 4, and 5, we present $i$ vs. $r$-$i$ and $i$-$z$ color-magnitude diagrams in Figure \ref{cm_overall_2}. In the figure, the upper and lower sequences depict the $i$-$z$ and $r$-$i$ colors, respectively. Each point is obtained by averaging the whole data taken in a night. During the overall observed period, both the $r$-$i$ and $i$-$z$ colors have become bluer as the brightness has increased. The $r$-$i$ trend is slightly steeper than that of $i$-$z$. This is an expected behavior due to the temperature change of the source. 
When accretion-related variability occurs, the amount of energy from the source and the distribution of emission with wavelength change. This phenomenon results in a variation of the SED shape. At that time, the wavelength range shorter than the emission peak lies on the blue edge or ``Wien side'' of the SED, and thus this part is very sensitive to the temperature change \citep[e.g.,][]{h2008}. In the case of HBC 722, the characteristic wavelengths are located in the optical/near-infrared. Thus, over the long term, the bluer tendency of the optical/near-infrared colors as the brightness increases is reasonable. We also analyze the data in each phase. The color remained constant in Phase 3 and got bluer when the source entered Phase 4. In Phase 5, the $i$-$z$ color remains almost constant again but the $r$-$i$ color still gets bluer. Additionally, there are local fluctuations of a few days within the phases. In the long term, the grouped phases follow the bluer trend well, but in the short term, the few-day variations fluctuate within the grouped phases, not in a regular sequence following the long term trend. \subsection{Color-color diagram} Figure \ref{cc_overall} shows the $r$-$i$ vs. $i$-$z$ color-color diagram of our observation data. Points averaged over a single night are presented to show day scale behaviors. Again, the data are grouped according to the phases. We hardly see color variations during Phase 3, but the colors moved in the bluer direction with the increase in brightness in Phase 4. The source remained in a similar position in Phase 5, in which the color hardly varied in $i$-$z$ but got bluer in $r$-$i$. Both Figures \ref{cm_overall_2} and \ref{cc_overall} illustrate the varying color features of HBC 722 with regard to the phases. Therefore, we argue that these color changes might be caused by changing physical properties in the individual phases. 
\section{Discussion} \subsection{Periodic signals} \citet{green2013} reported two families of day scale periodic variability in the SDSS $r$ band (5.8 and 1.28 days, with 44 and 16 mmag amplitudes, respectively) in the re-brightening phase. The authors proposed two scenarios, attributing the 5.8 and 1.28 day periods to the stellar rotation period and disk instability, and vice versa. Additionally, they suggested flickering, rather than a fixed asymmetry at the inner disk edge, as an alternative source of the periodicities, even though the families of periodicity continue over a full year timescale. In this Section we look for evidence of short term periodic variability in the $r$, $i$, and $z$ bands and check the trend against \citet{green2013}. We first search for periodicities by computing a generalized Lomb-Scargle periodogram \citep{zechmeister2009} for the photometry obtained in each band. The normalized power at each frequency $\omega$ is given by \begin{equation}\label{chi2} P(\omega) = N_H \frac{\chi^2_0 - \chi^2(\omega)}{2\chi^2_0} = \frac{N_H}{2} p(\omega) \, \end{equation} where $\chi^2(\omega)$ is computed for the best-fitting sinusoidal signal at that period, $\chi^2_0$ is the $\chi^2$ of the fit to the weighted mean of the observations, and $N_H$ is the number of degrees of freedom. $p = 0$ indicates no improvement over the null hypothesis (no coherent signal), while $p = 1$ is a perfect fit to the data. In order to account for the long-term trend in the brightness of HBC 722 and any drifts in the photometric baseline, we separated each observing run into an independent dataset with an adjustable offset. Given that the floating offsets can affect the strength of the signal at each periodicity, we compute the power at each periodicity by self-consistently including the offsets in the minimization of the parameters. We show the computed periodogram in Figure \ref{periodograms}. 
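The power computation described above can be sketched as a weighted least-squares sinusoid fit at each trial period, returning the normalized improvement $p(\omega) = (\chi^2_0 - \chi^2(\omega))/\chi^2_0$. This is a minimal sketch on a synthetic light curve; the per-run floating offsets are omitted for brevity, and all numbers are hypothetical.

```python
import numpy as np

def ls_power(t, y, sigma, periods):
    """Normalized least-squares power p = (chi2_0 - chi2(w)) / chi2_0,
    where chi2_0 belongs to the weighted-mean (no signal) model."""
    w = 1.0 / sigma**2
    mean = np.sum(w * y) / np.sum(w)
    chi2_0 = np.sum(w * (y - mean) ** 2)
    powers = []
    for per in periods:
        omega = 2 * np.pi / per
        # Design matrix: constant offset plus a sinusoid at this frequency.
        a = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
        aw = a * w[:, None]
        coef = np.linalg.solve(a.T @ aw, aw.T @ y)
        chi2 = np.sum(w * (y - a @ coef) ** 2)
        powers.append((chi2_0 - chi2) / chi2_0)
    return np.array(powers)

# Synthetic light curve: a 5.8 day sinusoid (hypothetical 40 mmag amplitude)
# plus 10 mmag Gaussian noise, irregularly sampled over 60 days.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 60, 200))
y = 0.04 * np.sin(2 * np.pi * t / 5.8) + rng.normal(0, 0.01, t.size)
sig = np.full(t.size, 0.01)
periods = np.linspace(1.0, 16.0, 600)
p = ls_power(t, y, sig, periods)
print(periods[np.argmax(p)])  # peaks near the injected 5.8 day period
```

Multiplying $p$ by $N_H/2$ gives the $P(\omega)$ normalization of the equation above.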
We only scan for periodicities between 0.1 and 16 days; this range takes into account the periodicities that are effectively sampled by the data. The bottom panel of Figure \ref{periodograms} shows the periodogram of the window function, which indicates the presence of spurious periodicities related to the observational cadence, displaying the usual sharp peak at 1 day (with an alias at 0.5 days). The periodograms in Figure \ref{periodograms}, similarly to the data in \citet{green2013}, show an abundance of strong peaks (with very low false alarm probabilities) at a number of periods. The fact that HBC 722 is an active star makes the interpretation of the data in the time domain a difficult task, since it is expected to exhibit a degree of high-frequency noise and flickering. In order to tentatively identify periodicities over the two years, as opposed to the underlying noise, we also characterize signals according to the goodness of the model, as computed by a cross-validation algorithm. Cross-validation algorithms can help identify overfitting or underfitting by resampling the data. We use here the ``leave-one-out'' implementation, in which we divide the full dataset of $N$ observations into a \textit{training set} of $N-1$ observations and a \textit{testing set} of a single observation, rotated among all observations for each band; each training set is used to derive a new fit. The goodness of the fit at each testing data point is then used to compute the combined likelihood ($\log\mathcal{L}$). The combined likelihood is defined as \begin{equation} \log\mathcal{L} = \sum_i^N \log \left(\frac{y_i - f_i(\theta_i)}{\sigma_i}\right) \end{equation} where $y_i$ is the $i$-th observation comprising the testing set, $f_i(\theta_i)$ is the corresponding prediction from the model with the set of parameters $\theta_i$ derived by fitting the training set, and $\sigma_i$ is the corresponding uncertainty. 
A fit that has a lower $\log\mathcal{L}$ compared to an alternative fit has a higher predictive power. To facilitate comparison with the results of \citet{green2013}, we derive a set of potential two-signal solutions using a grid search, which starts with the periodogram computed for the single frequency search (Figure \ref{periodograms}), then computes a periodogram for the residuals and adds candidate second signals to the fit. Table \ref{solutions} shows the best-fitting solutions for each band. Since each band potentially probes different physical phenomena, we did not combine the fits and instead computed each solution separately for each band. For each solution, we list the period(s), the normalized $\chi^2$ and the likelihood computed by the cross-validation algorithm ($\log\mathcal{L}$). Firstly, we note that according to the $\log\mathcal{L}$ metric, two-signal solutions are strict, marked improvements over one-signal and no-signal solutions; this suggests that the models should not be overfitting the data, or be driven by small features in the photometry. Secondly, we note that there are three families of signals -- around $\sim$6, $\sim$10 and $\sim$1 days. This is broadly consistent with \citet{green2013}, which uncovered similar families of periodicities. Unfortunately, given the noisiness of the data in the frequency domain and the potential for aliasing, it is hard in practice to disentangle the different periodicities exhibited by the data. Figure \ref{bestfits} shows the signal harmonics and the phased data for each band. We note there is substantial scatter both within and among the different observation runs, even after the two signals with the largest amplitudes are removed. The residual scatter is likely due to high-frequency noise, in accordance with the nature of HBC 722. 
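The leave-one-out scoring used in this subsection can be sketched as follows. This is an illustrative sketch, not the analysis code: a linear model stands in for the actual sinusoidal fits, the absolute value of the normalized residual is taken so the logarithm is defined, and all data values are synthetic.

```python
import numpy as np

def loo_score(t, y, sigma, fit, predict):
    """Leave-one-out cross-validation: refit the model N times, each
    time holding out one observation, and sum the log of the held-out
    normalized residual (lower total means higher predictive power)."""
    score = 0.0
    for i in range(t.size):
        keep = np.arange(t.size) != i
        theta = fit(t[keep], y[keep])
        resid = abs(y[i] - predict(t[i], theta)) / sigma[i]
        score += np.log(resid)
    return score

# Hypothetical stand-in model family: a straight line fit by least squares.
fit = lambda tt, yy: np.polyfit(tt, yy, 1)
predict = lambda tt, theta: np.polyval(theta, tt)

# Synthetic data with a real linear trend plus small noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 40)
y_good = 0.5 * t + 1.0 + rng.normal(0, 0.05, t.size)
sig = np.full(t.size, 0.05)
print(loo_score(t, y_good, sig, fit, predict))
```

Comparing this score against that of a simpler model family (e.g. a constant) shows the metric preferring the model with genuine predictive power, which is how the one-signal vs. two-signal solutions are ranked in the text.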
\subsection{Spectral energy distribution (SED)} To examine the changes of the spectral emission in the post-outburst phase, we plot multiple SEDs for several different brightness states in Figure \ref{sed}. We take data from \citet{semkov2012b,semkov2014} for the $B$, $V$, $R$, and $I$ bands from the 2010 outburst to 2013 May. Data in the $r$, $i$ and $z$ bands are taken from this work. For the near-infrared, we take $J$, $H$ and $K_{S}$ band data from \citet{kospal2011}, \citet{anto2013} and \citet{sung2013}. All literature data are converted from the Vega to the AB magnitude system using the formula of \citet{br2007}. We match the archival data to our $r$, $i$ and $z$ data from the closest nights to construct the SEDs. The first SED is from the first peak in 2010, on the boundary between Phase 1 and Phase 2 in our criteria. We only use literature data taken on 2010 September 19 and 20. The second one represents Phase 3, the relatively calm state, constructed from the data of 2011 April 28, 30 and May 2. The third one is on the boundary between Phase 3 and Phase 4, when HBC 722 started to re-brighten. We use 2011 October 30 data for all wavelengths. The fourth one is on the boundary between Phase 4 and Phase 5, when the source reached a brightness similar to the first peak brightness in 2010. We use 2012 May 20 and 27 data for this state. The last one is from Phase 5, when the source brightened beyond the first peak. We use the data of 2013 April 14 and May 4, 5. To begin with, there is a main difference between the SED of the re-brightening period and that of the first peak. The SED of the re-brightening period is redder than that of the first peak, even at Phase 5, when the brightness exceeded the first peak. Therefore, compared to the original flaring, HBC 722 showed a larger increase in brightness but a smaller blueward color change in the re-brightening state. From Phase 3 to Phase 5, the shape of the SED also changed slightly. 
The gradient becomes less steep as HBC 722 re-brightened, which suggests that the emission at shorter wavelengths becomes stronger. Thus, in the long term, the source got bluer. This is again consistent with our color analysis in the previous section. \citet{johnstone2013} demonstrated the time evolution of the SED in outburst with their established model. When the accretion rate increases, the peak of the SED moves toward a shorter wavelength, which fits a higher temperature blackbody. They also predicted that the first indication of heating is a luminosity rise of the source at wavelengths near or shorter than the peak of the SED. Our time evolution of the SED in the re-brightening state shows good agreement with their prediction. Assuming that the emission entirely comes from disk accretion, we calculate the relative accretion rate as a function of time. The bolometric luminosities limited to the optical--near-infrared are obtained in each phase (at the same epochs as the SEDs). By taking the accretion luminosity formula and the properties of HBC 722 used in \citet{green2013}, the accretion rates are estimated. We normalize the values to the minimum brightness epoch of Phase 3 (2011 April) and present the relative accretion rate change (Figure \ref{acc}). The actual accretion rate obtained at an epoch similar to Phase 3, reported in \citet{green2013}, is $\dot{M}$ = 1.31$\times$10$^{-6}$ M$_{\sun}$ yr$^{-1}$. \subsection{Flickering} Although only a small number of outbursting YSOs have been detected, these uniquely enhanced systems reveal how the accretion process proceeds from the innermost part of the circumstellar disk to the central star, which plays an important role in understanding the evolution of YSOs. In accordance with \citet[][and references therein]{kenyon2000}, ``flickering'' refers to randomly fluctuating small amplitude (0.01--1.0 mag) variations on dynamical timescales. It is observed in cataclysmic variables and erupting YSOs. Flickering is often thought to be a signature of disk accretion. 
The most widely accepted origin of flickering is the transition region between the central star and the disk, i.e., the vicinity of the inner edge of the disk. Since a large amount of disk material falls to the stellar surface from the inner disk, flickering could be evidence for inhomogeneous accretion flow \citep[e.g.,][]{shu1994,bastien2011}. In order to check the short term behaviors on intra-day and day timescales, we quantify the potential variability of HBC 722 in the $r$, $i$ and $z$ bands. We expect to see footprints of flickering by using the micro-variability method developed by \citet{jm1997}. \begin{equation} C_{1}= \frac{\sigma \left ( Object - Comp1 \right )}{\sigma \left ( Comp1 - Comp2 \right )} \quad \mathrm{and} \quad C_{2}= \frac{\sigma \left ( Object - Comp2 \right )}{\sigma \left ( Comp1 - Comp2 \right )} \end{equation} C$_{1}$ is calculated as the standard deviation of the differential photometry between the object (Object) and a comparison star (Comp1), divided by that between the comparison star (Comp1) and a check star (Comp2). C$_{2}$ is calculated in the same manner. Finally, we derive the parameter C by taking the average of C$_{1}$ and C$_{2}$ \citep[e.g.,][]{romero1999,gupta2008}. Since the calculation of parameter C includes possible variations of the comparison star, a valid measure of the variability of HBC 722 can appear only if the variation of HBC 722 exceeds that of the comparison stars. According to \citet{jm1997}, parameter C follows a normal distribution, so we can suggest that potential variability exists at the 90\%, 95\% and 99\% confidence levels if the C values exceed 1.64, 1.96 and 2.57, respectively. Finally, we substitute HBC 722, C7 and C4 for Object, Comp1 and Comp2, respectively. The results for the day scale variability of HBC 722 are found in Table \ref{var_day} and Figure \ref{C}. We do not include the 2011 April, 2011 December, 2012 May and 2012 November runs because each covered only a few nights, including non-photometric nights. 
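The parameter C computation described above can be sketched as follows on a synthetic night of differential photometry. All magnitudes and noise levels here are hypothetical stand-ins for the HBC 722, C7 and C4 light curves.

```python
import numpy as np

def variability_c(obj, comp1, comp2):
    """Micro-variability parameter C: the scatter of the object's
    differential light curves against two comparison stars, normalized
    by the scatter between the comparison stars themselves."""
    denom = np.std(comp1 - comp2)
    c1 = np.std(obj - comp1) / denom
    c2 = np.std(obj - comp2) / denom
    return 0.5 * (c1 + c2)

# Synthetic night: a target with real 0.05 mag intrinsic variability
# against two stable comparison stars with 0.01 mag photometric noise.
rng = np.random.default_rng(2)
n = 120
target = 14.0 + rng.normal(0, 0.05, n) + rng.normal(0, 0.01, n)
comp1 = 13.5 + rng.normal(0, 0.01, n)
comp2 = 13.8 + rng.normal(0, 0.01, n)
c = variability_c(target, comp1, comp2)
print(c > 2.57)  # variability at the 99% confidence level
```

A genuinely variable target drives C well above the 2.57 threshold, while a target as stable as the comparison stars yields C near 1, matching the thresholds quoted in the text.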
For the majority of nights, the C values are over 2.57, which implies that HBC 722 shows variability at the 99\% confidence level on the day scale. On the other hand, C$_{r}$ in 2011 July, C$_{r}$ and C$_{i}$ in 2011 November and C$_{i}$ in 2012 September have values between 1.96 and 2.57. According to the aforementioned rule, these indicate variability at the 95\% confidence level. Lastly, C$_{z}$ in 2011 July is located between 1.64 and 1.96, which is a little less confident than the others, but we still have 90\% confidence of variability. To sum up, HBC 722 strongly shows day scale variability in 2011 August, 2012 June and 2013 May. It displays meaningful variability in the other months as well. Therefore, we conclude that HBC 722 is flickering. Since there is no clear relation between the length of observation and the amplitude of brightness change, it is unlikely that the duration of continuous observation affects the C values. In the historical efforts to find clues of flickering in the outburst stage of cataclysmic YSOs, many studies focused on finding day scale periodic and aperiodic variations. However, using the properties of low mass YSOs and their Keplerian disks, we can explore IDV, which could originate from the region between the innermost part of the disk and the central star. Fortunately, our observing time at the re-brightening state of HBC 722 is related to the re-stimulated disk accretion activity. Because of our short cadence monitoring observation strategy, we can also look at the behaviors of HBC 722 on the intra-day scale. We quantify the IDV in the same manner as the day scale variability. The results of the derived C values are tabulated in Table \ref{var_intraday}. In the analysis of the micro-variability of each night, we find that C$_{r}$ and C$_{i}$ on 2011 July 5, and C$_{i}$ on 2012 September 2 and 2013 May 1, have values between 1.96 and 2.57, which means potential variability at a 95\% confidence level. 
Additionally, C$_{r}$ on 2011 August 24, 2011 November 4, 2012 September 6 and 2013 May 5 are all between 1.64 and 1.96, which implies a 90\% confidence of potential variability. Meanwhile, many of the C values in the $z$ band, and a few in the $r$ and $i$ bands, are lower than 1.0. Since the micro-variability method depends on the comparison and check stars, these values can be caused by the brightness differences among the object (HBC 722), comparison (C7) and check star (C4). In this analysis, the IDV is less convincing than the day scale variability. Note that we suggest statistical values for variability rather than specific values. There are several previous studies of the short term variability of FUors. \citet{herbig2003} observed FU Orionis and revealed a $\sim$14 day spectroscopic periodicity in P Cygni profiles, especially in H$\alpha$, lasting more than 1.5 years. The author also discovered another 3.54 day periodic variation arising from the inner structure of the photosphere. \citet{powell2012} confirmed the periodicities found by \citet{herbig2003} and suggested that the periodic phenomena continued over 10 years. Recently, \citet{siwak2013} discovered 2--9 day quasi-periodic features in FU Orionis using the Microvariability and Oscillations of STars ($MOST$) satellite. Furthermore, \citet{siwak2013} stated that they could be caused by a dump of plasma, or by magneto-rotationally unstable heterogeneities in the localized accretion disk rotating at different Keplerian radii. Meanwhile, \citet{clarke2005} reported day scale non-periodic fluctuations in two other classical FUors. They detected photometric variability with amplitudes of 0.1 and 0.3 mag ($V$) for V1057 Cyg and V1515 Cyg, respectively, which might be caused by flickering events. In HBC 722, day scale non-periodic variability is detected, similar to the V1057 Cyg and V1515 Cyg cases. Because the two classical FUors are brighter than HBC 722 in the optical, it is plausible that HBC 722 is flickering with a smaller amplitude. 
We found that the variability lasts more than two years with diverse amplitudes. Looking back at other studies of IDV for flaring YSOs, \citet{kenyon2000} collected photometric data of FU Orionis and detected random brightness fluctuations on a dynamical timescale of a day or less, with 0.035--0.1 mag amplitude ($V$). \citet{bastien2011} conducted rapid cadence time series photometry of V1647 Orionis in the outburst stage, which belongs to another class of eruptive YSOs, EXors. As a result, a 0.13 day (51 mmag amplitude) periodic variability was found. They believed that the periodic variability would be related to flickering, based on detecting `flickering noise' signs in the power spectrum of the light curves. Since HBC 722 still remains enhanced, further investigation with sufficient data will more clearly reveal the IDV and the properties of intra-day scale flickering. In Figure \ref{C}, we attempt to find a relation between the long term brightness change and the short term variability. Two of the C values, around 2011 August and 2012 September, are particularly large, and they lie at the boundaries between the phase lines (noted in Table \ref{phase}). One possible interpretation is that these larger points near the phase boundaries are caused by the transition to the next phases. This can be tested if additional short term variability is detected when the long term brightness tendency changes. Finally, we find a tentative relation in which the C values get lower from $r$ to $z$, implying a smaller amplitude of variation at longer wavelengths. A few explanations might be possible: first, this tendency can simply be due to a brightness effect. Under the assumption of no intrinsic variability for two objects, the brighter star shows a smaller brightness deviation than the dimmer one. In our case, HBC 722 clearly shows intrinsic variability, but this brightness effect would be added. 
In other words, in the $r$, $i$ and $z$ bands HBC 722 is brighter than the comparison stars, but the brightness difference in the $z$ band is greater than that in the $r$ band. Since the C values that we used are obtained from the brightness deviation relative to that of the comparison stars, the values could be affected by the brightness difference in each band. Therefore the C values could decrease with increasing wavelength. Second, intrinsic characteristics of HBC 722 could play a role. \citet{sung2013} reported that HBC 722 showed a strong correlation between the flux variation and the fading period after the outburst. They estimated the flux variations from the maximum brightness in 2010 to the minimum in 2011 and compared them with the fading periods. As a result, both the flux variation and the fading timescale are larger at shorter wavelengths. The flux variation tends to decrease linearly as the wavelength increases in the optical/near-infrared range. Therefore we could expect an inverse relation between wavelength ($r$, $i$ and $z$ bands) and amplitude. \section{Conclusion} We observed HBC 722 in the SDSS $r$, $i$ and $z$ bands from 2011 April to 2013 May with CQUEAN attached to the 2.1m Otto Struve telescope at McDonald Observatory. The photometric results show that HBC 722 re-brightened for two years and reached an unprecedentedly high brightness. The color also became bluer during the same period. However, the brightness and color occasionally maintained a similar status rather than changing steadily, which could be related to physical processes in the inner disk. Thus we divide the post-outburst period into five phases according to the brightness and color variations. We analyze the color-magnitude diagram and the color-color diagram to depict the tendencies along the phases and possible day scale variabilities. Different shapes of the optical/near-infrared emission at the first peak and thereafter are shown in the spectral energy distribution. 
Additionally, the multi-epoch SED shapes indicate that HBC 722 has become hotter as its brightness has increased. We conducted a periodicity check for HBC 722. As shown in Figure \ref{periodograms}, HBC 722 exhibits high-frequency noise and flickering, as expected because it is in an enhanced state. Thus we statistically present the best-fitting solutions for each band, which broadly converged into three families of signals around $\sim$6, $\sim$10 and $\sim$1 days. These results match well with the periods discovered by \citet{green2013}. We also investigate the short term variability, separated from the long term variations, to find indications of flickering. Intra-day and day scale variabilities are quantified using the micro-variability method, which takes the amount of variability of HBC 722 relative to that of a comparison star. We find clear evidence of day scale variabilities and weaker signs of IDV in the $r$, $i$ and $z$ bands. Comparing these short term variabilities with the long term brightness variations from Phase 3 to Phase 5, the derived C values of variability tend to be larger near the boundaries of the phases. We suggest that there could be transitions of physical processes at the innermost part of the disk which contribute to the change of the brightness and color behaviors. \acknowledgements This work was supported by the National Research Foundation of Korea (NRF) grant, No. 2008-0060544, funded by the Korea government (MSIP). The authors thank Prof. Sang-Gak Lee and Prof. Tae Seog Yoon for valuable discussions and suggestions. This paper includes data taken at The McDonald Observatory of The University of Texas at Austin. This research has made use of the USNOFS Image and Catalogue Archive operated by the United States Naval Observatory, Flagstaff Station (http://www.nofs.navy.mil/data/fchpix/).
\section{Introduction} The analytic conformal bootstrap has uncovered universal features in sparse corners of the spectrum of conformal field theories (CFTs), at large spin \cite{Fitzpatrick:2012yx,Komargodski:2012ek} or large charge \cite{Jafferis:2017zna}. The `middle' of the spectrum is instead exponentially dense, but reveals universal properties as well \cite{Pappadopulo:2012jk,Mukhametzhanov:2018zja}. Some of these advances were guided by the existence of semiclassical descriptions, such as weakly interacting probe particles in AdS \cite{Fitzpatrick:2012yx} for large spin states, or a superfluid effective field theory (EFT) for large charge states of certain CFTs \cite{Hellerman:2015nra,Alvarez-Gaume:2016vff,Monin:2016jmo,Cuomo:2017vzg}. The middle of the spectrum also enjoys a natural semiclassical description: thermodynamics \cite{Lashkari:2016vgj}, and more generally hydrodynamics. The subject of this paper is to study the consequences of this description. Hydrodynamics is expected to emerge as the late time dynamics of any non-integrable quantum field theory (QFT) at finite temperature. The first theoretically controlled demonstration of this phenomenon is possibly Landau's two-fluid model \cite{PhysRev.60.356}; for weakly coupled QFTs the emergence of hydrodynamics is now well understood within the framework of Boltzmann kinetic theory \cite{Jeon:1995zm, Arnold:1997gh}. The fluid-gravity correspondence is a more recent example \cite{Policastro:2002se,Policastro:2002tn,Bhattacharyya:2008jc}, for strongly coupled holographic theories. Although an analogous proof in generic CFTs may be too formidable a task for the conformal bootstrap, analytic methods may be able to place constraints on hydrodynamics, such as bounds on transport parameters \cite{Kovtun:2004de}. The approach followed here is instead to work from the bottom-up, with the hope of guiding future efforts from the analytic or numerical bootstrap. 
Hydrodynamics tightly constrains the thermal correlator of any light neutral operator (e.g.~any $\mathbb Z_2$-even light operator in the 3d Ising model) at late times. This regime is difficult to address with conventional CFT methods because large Lorentzian times $t\gg \beta$ are far outside of the radius of convergence of the operator product expansion (OPE) \cite{Iliesiu:2018fao}. In the microcanonical ensemble, hydrodynamics controls heavy-light four-point functions $\langle HLLH\rangle$ far from the $LL$ OPE limit. Assuming typicality of heavy operators, hydrodynamic predictions can be recast as expressions for off-diagonal heavy-heavy-light OPE coefficients $C_{HH'L}$. Our results, summarized below, should hold in any non-integrable unitary CFT in three or more spacetime dimensions. \subsection{Summary of results} We consider thermalizing (or chaotic) CFTs in $d+1$ spacetime dimensions. Operators that do not carry any internal quantum numbers acquire thermal expectation values: for example a neutral dimension $\Delta_{\mathcal{O}}$ scalar satisfies \begin{equation}\label{eq_thermal_ev} \langle \mathcal{O}\rangle_\beta = \frac{b_{\mathcal{O}}}{\beta^{\Delta_{\mathcal{O}} }}\, , \end{equation} where $\beta$ is the inverse temperature, and $b_{\mathcal{O}}$ a coefficient that is generically $O(1)$. As argued in Ref.~\cite{Lashkari:2016vgj}, consistency with the microcanonical ensemble implies that diagonal heavy-heavy-light OPE coefficients are on average controlled by the thermal expectation value \eqref{eq_thermal_ev}. Assuming typicality of heavy eigenstates allows one to drop the averages and leads to the prediction \cite{Lashkari:2016vgj} (dropping numerical factors) \begin{equation}\label{eq_liu_res} C_{HH \mathcal{O}} \simeq b_{\mathcal{O}} \left[\frac{\Delta}{b_T} \right]^{\Delta_{\mathcal{O}}/(d+1)} , \end{equation} for the OPE coefficient of two copies of a heavy operator $H$ of dimension $\Delta$ with the light operator $\mathcal{O}$. 
The dimensionless thermal entropy density $b_T \equiv s\beta^d$ controls the thermal expectation value of the stress-tensor. In contrast, off-diagonal heavy-heavy-light OPE coefficients $C_{HH'\mathcal{O}}$ should probe out of equilibrium dynamics. If $\mathcal{O}$ is light and the difference in the dimension of the heavy operators is not too large, this will probe the late time, near-equilibrium dynamics, which is controlled by hydrodynamics if $d\geq 2$. Eq.~\eqref{eq_thermal_ev} shows that $\mathcal{O}$ couples to fluctuations in temperature (or energy density). These propagate as sound, with velocity $c_s^2 = \frac{1}{d}$ and attenuation rate related to the shear viscosity to entropy ratio $\eta_o \equiv \eta/s$ of the CFT, leading to poles near $\omega=\pm k/\sqrt{d}$ in the low frequency $\omega$ and wavevector $k$ thermal two-point function of $\mathcal{O}$ \begin{equation} \langle\mathcal{O} \mathcal{O}\rangle_\beta(\omega,k) \simeq \left( \frac{b_{\mathcal{O}}\Delta_{\mathcal{O}}}{\beta^{\Delta_{\mathcal{O}}}}\right)^2 \frac{\beta^d}{b_T} \frac{ \eta_o \beta k^4 }{\left(\omega^2 - \frac{1}{d}k^2\right)^2 + \left(\frac{2d-1}{d}\eta_o \beta \omega k^2\right)^2}\, . \end{equation} We show under the same assumptions that lead to \eqref{eq_liu_res} that this hydrodynamic correlator implies \begin{equation}\label{eq_my_res1} |C_{H_J H'_{J'}\mathcal{O}}|^2 \simeq \frac{b_{\mathcal{O}}^2}{e^{S}} \frac{\eta_o(J-J')^4} {\left[ \left(\Delta-\Delta'\right)^2 - \frac{1}{d}(J-J')^2\right]^2 + a_{d\,} \eta_o^2\left(\frac{b_T}{\Delta}\right)^{\frac{2}{d+1}} \left(\Delta-\Delta'\right)^2 (J-J')^4}\, , \end{equation} for the OPE coefficient of the light operator $\mathcal{O}$ with heavy operators of dimension $\Delta$, $\Delta'$ and spin $J$, $J'$. Off-diagonal OPE coefficients are exponentially suppressed in the entropy $S\sim (b_T \Delta^d )^{{1}/({d+1})}$, as expected on general grounds \cite{Pappadopulo:2012jk}. 
We have dropped subexponential dependence on $\Delta$, but instead emphasize the singular dependence on $\Delta-\Delta'$ and $J-J'$ featuring the hydrodynamic sound pole. This result holds for heavy operators satisfying \begin{equation}\label{eq_regime} \left(\frac{\Delta}{b_T}\right)^{-\frac{1}{d+1}} \quad \lesssim \quad \Delta - \Delta' \quad \lesssim \quad \left(\frac{\Delta}{b_T}\right)^{\frac{1}{d+1}}\ . \end{equation} The difference in spin must satisfy the same upper bound $J-J'\lesssim \left({\Delta}/{b_T}\right)^{1/(d+1)}$. This upper bound comes from the UV cutoff of hydrodynamics, which only describes dynamics at times larger than the thermalization time $t\gtrsim \tau_{\rm th}$. The lower bound comes from IR effects which resolve the singularity in \eqref{eq_my_res1}. In \eqref{eq_regime} we have assumed $\tau_{\rm th} \sim \beta$; weakly coupled CFTs have $\tau_{\rm th} \gg \beta$ and the window \eqref{eq_regime} is parametrically smaller. Hydrodynamics pervades late time correlators, and not just those of scalar operators. In a thermal state, neutral operators of any integer spin can decay into composite hydrodynamics operators -- this is illustrated in Fig.~\ref{fig_decay}. Consider an operator of spin $\ell$. Its component with an even number $\bar \ell$ of spatial indices with $2\leq \bar\ell\leq \ell$ has the same quantum numbers as composite hydrodynamic fields involving the stress tensor $T_{\mu\nu}$ \begin{equation} \mathcal{O}_{i_1 \cdots i_{\bar \ell} 0\cdots 0} \ \sim \ \partial_{i_1} \cdots\partial_{i_{\bar\ell-1}} T_{0i_{\bar\ell}} \ + \ T_{0 i_1} \partial_{i_2} \cdots \partial_{i_{\bar\ell-1}} T_{ 0i_{\bar\ell}} \ + \ \cdots \ , \end{equation} This equation is not meant as a microscopic operator equation in the CFT, but rather as an operator equation in the low-energy (dissipative) effective theory around the thermal state. The first term shows that the operator overlaps linearly with hydrodynamic excitations. 
Its two-point function will therefore contain hydrodynamic poles, leading to OPE coefficients similar to \eqref{eq_my_res1}. If we consider this operator at vanishing wavevector $k=0$, then the leading term drops because it is a total derivative and the operator no longer overlaps linearly with hydrodynamic modes. However, it can still decay into the second composite operator, which leads to a hydrodynamic loop contribution to its correlator \begin{equation}\label{eq_my_res_ltt} \langle \mathcal{O}_{i_1 \cdots i_{\bar \ell} 0\cdots 0} \mathcal{O}_{j_1 \cdots j_{\bar \ell} 0\cdots 0}\rangle{\vphantom{\left(o_{i_{\bar o}}\right)}}_{\!\beta\,} (t,k=0) \ \sim \ \frac{1}{t^{\frac{d}{2}+\bar \ell - 2}}\, . \end{equation} Although this universal late-time behavior for thermal correlators of generic operators in QFTs can be straightforwardly derived using the time-honored framework of fluctuating hydrodynamics, it has to our knowledge not appeared previously in the literature. \begin{figure} \Large \begin{align*} \mathcal{O}_{(\bar\ell,\ell)} && &\sim & &\begin{gathered}\includegraphics[height=0.25\linewidth,angle=0,trim={0 0 5.5cm 0},clip]{fig/diag5}\end{gathered}& &+& &\begin{gathered}\includegraphics[height=0.25\linewidth,angle=0,trim={0 0 6.5cm 0},clip]{fig/diag1}\end{gathered}& &+& &\cdots& &+& &\begin{gathered}\includegraphics[height=0.25\linewidth,angle=0,trim={0 0 6.5cm 0},clip]{fig/diag4}\end{gathered}& &+& &\cdots \\[-5ex] && &\sim& &\qquad \partial^{\bar\ell-1}{\color{inkred}T} & &+& &\quad {\color{inkred} T} \partial^{\bar\ell-2} {\color{inkred} T}& &+& &\cdots& &+& &\quad {\color{inkred} T} \cdots {\color{inkred} T}& &+& &\cdots \end{align*} \caption{\label{fig_decay} Neutral operators in finite temperature QFT are long-lived as they can decay into hydrodynamic excitations carried by the stress-tensor $T_{\mu\nu}$. 
$\mathcal{O}_{(\bar\ell,\ell)}$ denotes components of a spin-$\ell$ operator with $\bar\ell$ spatial indices.} \end{figure} The hydrodynamic two-point function \eqref{eq_my_res_ltt} controls certain OPE coefficients of spinning light operators with two heavy ones: for example, when $J=J'$ one finds \begin{equation}\label{eq_my_res2} |C_{H_J H'_J \mathcal{O}_\ell}^{\bar \ell}|^2 \simeq e^{-S} \left(\Delta-\Delta'\right)^{\frac{d}{2}+\bar \ell-1} \, , \end{equation} for $\bar\ell$ even satisfying $2\leq\bar\ell\leq \ell$. Similar results hold for general $\ell,\,\bar\ell$, with different exponents in \eqref{eq_my_res_ltt} and \eqref{eq_my_res2}. The superscript $\bar \ell$ on the left-hand side (partially) labels the tensor structure of the spinning OPE. For general spins $J\neq J'$ and $\ell\geq 0$, leading OPE coefficients can be controlled by hydrodynamic correlators at tree-level as in Eq.~\eqref{eq_my_res1}, at one-loop as in \eqref{eq_my_res2}, or at higher loops; see Eq.~\eqref{eq_CHHL_final} for the general expression. Strictly speaking, the results \eqref{eq_liu_res}, \eqref{eq_my_res1} and \eqref{eq_my_res2} hold after averaging the heavy operators over a microcanonical window. However, the expected typicality of heavy operators in generic CFTs implies that a much sparser averaging may suffice. The eigenstate thermalization hypothesis \cite{PhysRevA.43.2046,PhysRevE.50.888,rigoleth} suggests that the diagonal OPE \eqref{eq_liu_res} holds at the level of individual operators \cite{Lashkari:2016vgj}, and that the off-diagonal OPEs in e.g.~\eqref{eq_my_res1} and \eqref{eq_my_res2} hold after averaging over $n$ operators, if one tolerates an error $\sim 1/\sqrt{n}$. 
\begin{figure} \centerline{ \subfigure{ \begin{overpic}[width=0.71\textwidth,tics=10]{fig/spectrum3_pp_bis} \put (25,90) {\Large $\Delta$} \put (90,20) {\Large $Q$} \put (10,5) {\Large $J$} \put(24,60){\small \rotatebox{90}{Tauberian}} \put(4,60){\small \rotatebox{-50}{LC bootstrap}} \put(72,56){\small \rotatebox{48}{Large charge}} \put(25.5,34.5){\small {Numerics}} \end{overpic} } } \vspace{20pt} \centerline{ \subfigure{ \begin{overpic}[width=0.71\textwidth,tics=10]{fig/spectrum_bkt_pp_3_bis} \put (25,90) {\Large $\Delta$} \put (90,20) {\Large $Q$} \put (10,5) {\Large $J$} % \put(30,74){\small{$H$}} \put(36.5,70){\small{$H'$}} \put(24.5,34){\small{$L$}} % \put(14,71.5){\color{white}\rotatebox{-70}{spinning}} \put(12,69){\color{white}\rotatebox{-70}{fluid}} \put(22,71){\color{white}\rotatebox{-80}{$\mu =0$}} \put(35,54){ \color{white}\rotatebox{60}{charged fluid }} \put(44,59){ \color{white}\rotatebox{60}{$\mu\neq 0$}} \put(53,48){ \color{white}\rotatebox{54}{dissipative superfluid}} \put(68,52){ \color{white}\rotatebox{48}{$T=0$ superfluid}} \end{overpic} } } \caption{\label{fig_spectrum} Top: The spectrum of a CFT can be organized using quantum numbers associated with dimension $\Delta$, spin $J$, and internal charge $Q$ if the CFT has additional global symmetries. Existing analytic methods to study various regions of the spectrum include the light-cone bootstrap \cite{Fitzpatrick:2012yx,Komargodski:2012ek}, Tauberian theorems \cite{Pappadopulo:2012jk,Mukhametzhanov:2018zja}, and the large charge limit \cite{Hellerman:2015nra}. Bottom: The regions that admit a hydrodynamic description are in red. The triangle shows an OPE coefficient $C_{HH'L}$ controlled by hydrodynamics.} \end{figure} We further derive generalizations of Eqs.~\eqref{eq_my_res1} and \eqref{eq_my_res2}; these results apply to any non-integrable CFT in spatial dimensions $d\geq 2$, without additional continuous global symmetries. 
Continuous global symmetries $G$ can be incorporated straightforwardly: they lead to additional hydrodynamic modes which can give further contributions to OPE coefficients. We illustrate this with the case $G=U(1)$. OPE coefficients involving charged heavy operators are similar to \eqref{eq_my_res1} and \eqref{eq_my_res2}, with some differences for odd-spin light operators which receive larger hydrodynamic contributions because of the new slow density. The $U(1)$ symmetry can be spontaneously broken in the state created by the heavy operator of large charge. In this case, the hydrodynamic description includes a Goldstone phase. This allows us to connect to the large charge program \cite{Hellerman:2015nra,Alvarez-Gaume:2016vff,Monin:2016jmo,Cuomo:2017vzg,Jafferis:2017zna,Cuomo:2019ejv}, which can be thought of as a special case where a hydrodynamic (or semiclassical) description survives the $T\to 0$ limit thanks to the spontaneous breaking of the $U(1)$ symmetry. The various possible phases created by heavy operators are shown in Fig.~\ref{fig_spectrum}. The rest of this paper is organized as follows: Fluctuating hydrodynamics is reviewed in Sec.~\ref{sec_hydro}, and applied to relativistic QFTs. A few novel results are also obtained there, including the hydrodynamic long-time tails in Eq.~\eqref{eq_my_res_ltt} and a curious aspect of correlation functions $G(t,k)$: these are expected to decay as $e^{-Dk^2t}$ after the thermalization time in diffusive systems with diffusion constant $D$. However we find that at later times $t \gtrsim \frac{1}{D k^2} \log \frac{1}{k}$, irrelevant interactions lead to a `diffuson cascade' with stretched exponential decay $e^{-\sqrt{Dk^2 t}}$. At even later times $t\gtrsim \frac{1}{k^{2d+2}} \log \frac{1}{k}$, perturbative control is lost. In Sec.~\ref{sec_largeD} we study how hydrodynamic correlators control the CFT data, and derive our main results \eqref{eq_my_res1} and \eqref{eq_my_res2} along with their generalizations. 
In Sec.~\ref{sec_u1}, we extend this framework to CFTs with a global $U(1)$ symmetry. We explain how the superfluid EFT can be heated up at small temperatures $1 \ll \beta\mu < \infty$ to connect it to the hydrodynamic description, and speculate on signatures of thermal phase transitions in the spectrum of heavy operators. \section{Hydrodynamics in QFT}\label{sec_hydro} Hydrodynamics governs the late time dynamics of non-integrable QFTs at finite temperature. The simplicity of the hydrodynamic description arises from the fact that most excitations are short-lived at finite temperature, with lifetimes of order of the thermalization time $\tau_{\rm th}$. This allows for an effective description of the system for times \begin{equation}\label{eq_tau_th} t \gg \tau_{\rm th}\, , \end{equation} in terms of long wavelength fluctuations of the variables characterizing thermal equilibrium, namely temperature and velocity $\beta(x),\, u_\mu(x)$, or their associated densities $T_{00}(x),\, T_{0i}(x)$. Additional continuous global symmetries would lead to more conserved quantities. These modes are parametrically long lived because their lifetime grows with their wavelength $1/k$. We define the thermalization length $\ell_{\rm th}$ as the length scale where hydrodynamic modes are no longer parametrically longer-lived than $\tau_{\rm th}$. We will then focus on modes satisfying \begin{equation}\label{eq_ell_th} k \ell_{\rm th} \ll 1\, . \end{equation} These time and length scales are parametrically long when the microscopics is weakly coupled, for example $\ell_{\rm th}\sim \tau_{\rm th} \sim \frac{\beta}{g^4}$ in (3+1)d gauge theories with coupling $g\ll 1$ \cite{Arnold:1997gh}. For strongly interacting QFTs (with speed of sound $\sim 1$) one expects $\ell_{\rm th} \sim \tau_{\rm th} \gtrsim \beta$; see e.g.~\cite{Delacretaz:2018cfk}. We briefly outline the construction of hydrodynamics for relativistic QFTs; see \cite{Kovtun:2012rj} for a self-contained introduction. 
Correlation functions for the conserved densities are obtained by solving continuity relations \begin{equation}\label{eq_ward} \partial_\mu T^{\mu\nu} = 0\, . \end{equation} These equations also involve the currents $T_{ij}$. They can be closed by writing constitutive relations for the currents in a gradient expansion -- in the Landau frame one has \begin{equation}\label{eq_T_consti} \langle T_{\mu\nu}\rangle =\epsilon u_\mu u_\nu + P \Delta_{\mu\nu} - \zeta \Delta_{\mu\nu}\partial_\lambda u^\lambda - \eta \Delta_\mu^\alpha \Delta_\nu^\beta \left(\partial_{ \alpha} u_{\beta} + \partial_{ \beta} u_{\alpha} - \frac{2}{d}\eta_{\alpha\beta} \partial_\lambda u^\lambda\right) + \cdots \, , \end{equation} where $P$ is the pressure, $\epsilon$ the energy density, $\zeta,\,\eta$ the bulk and shear viscosities, and the velocity satisfies $u_\mu u^\mu = -1$. We defined the projector $\Delta_{\mu\nu} \equiv \eta_{\mu\nu} + u_\mu u_\nu$. The ellipses denote terms that are higher order in derivatives. Hydrodynamic correlation functions can be found by expanding fields around equilibrium. These correlation functions are therefore obtained after two expansions: a derivative expansion, apparent in \eqref{eq_T_consti}, and an expansion in fields that we will perform below. The former is always controlled and gives corrections to correlators that are suppressed at late times \eqref{eq_tau_th}, whereas the perturbative expansion in fields is only controlled if interactions are irrelevant -- this is the case in $d\geq 2$ spatial dimensions. We first focus on $d>2$. When $d=2$, hydrodynamic interactions are only marginally irrelevant \cite{PhysRevA.16.732} -- this case will be treated separately in Sec.~\ref{ssec_dc}. 
In $d=1$, interactions are relevant and the theory flows to a new dissipative IR fixed point with dynamic exponent $z=3/2$ \cite{PhysRevA.16.732,PhysRevLett.89.200601,Ferrari2013} (to be contrasted with the unstable diffusive fixed point, where $z=2$)% \footnote{Neither fixed point describes CFTs in $d=1$, where the enhanced symmetries completely fix thermal physics in the thermodynamic limit $R/\beta \gg 1$.}. When interactions are irrelevant, it is possible to solve Eqs.~\eqref{eq_ward} and \eqref{eq_T_consti} perturbatively in the fields, by expanding around equilibrium \begin{subequations}\label{eq_linearize} \begin{align} u_\mu(x) &= \delta_\mu^0 + \delta_\mu^i \frac{\beta}{s} T_{0i} + \cdots\, , \\ \beta(x) &= \beta - \frac{\beta^2 c_s^2}{s} \delta T_{00} + \cdots\, , \end{align} \end{subequations} where the entropy density is given by $s=\beta(\epsilon + P)$ and the speed of sound $c_s^2 = \frac{\partial P}{\partial \epsilon}$. This leads to the retarded Green's function \begin{subequations}\label{eq_GT0i0j} \begin{align} G^R_{T_{00}T_{00}}(\omega,k) &= \frac{s}{\beta} \left[\frac{k^2}{c_s^2k^2 - \omega^2 - i\Gamma_s k^2 \omega}\right] + \cdots \\ G^R_{T_{0i}T_{0j}}(\omega,k) &= \frac{s}{\beta} \left[\frac{k_i k_j}{k^2} \frac{\omega^2}{c_s^2 k^2 - \omega^2 - i\Gamma_{s}k^2 \omega} + \left(\delta_{ij} - \frac{k_i k_j}{k^2}\right) \frac{D k^2}{-i\omega + Dk^2}\right] + \cdots\, , \end{align} \end{subequations} where $\cdots$ denotes terms that are analytic or subleading when $\omega \tau_{\rm th},\, k\ell_{\rm th} \ll 1$. The long lived densities $T_{00},\, T_{0i}$ carry a sound mode with attenuation rate $\Gamma_s = \beta \cdot \left(\zeta + \frac{2(d-1)}{d}\eta\right)/s$, and a diffusive mode with diffusion constant $D=\beta\cdot{\eta}/s$. 
Other two-point functions can be obtained from the fluctuation-dissipation theorem: the Wightman Green's function for example is $\langle \mathcal{O}\mathcal{O}\rangle(\omega) = \frac{2}{1-e^{-\beta\omega}} \Im G^R_{\mathcal{O}\mathcal{O}}(\omega)\simeq \frac{2}{\beta\omega} \Im G^R_{\mathcal{O}\mathcal{O}}(\omega)$ (here \eqref{eq_tau_th} implies that we are working at small frequencies $\beta\omega \ll 1$). Its Fourier transform will be used below: \begin{equation}\label{eq_G_sym} \langle{T_{0i}T_{0j}}\rangle(t,k) = -\frac{s}{\beta^2} \left[\frac{k_i k_j}{k^2} \cos(c_s k |t|)e^{-\frac12 \Gamma_s k^2 |t|} + \left(\delta_{ij} - \frac{k_i k_j}{k^2} \right)e^{-D k^2 |t|}\right] + \cdots\, . \end{equation} For the present purposes it will be useful to understand the constitutive relation \eqref{eq_T_consti} as an operator equation. Namely, using \eqref{eq_linearize} we can write the traceless spatial part as \begin{equation}\label{eq_T_consti2} T_{\langle ij\rangle} = -2D \partial_{(i} T_{j) 0} + \frac{\beta}{s} T_{0i} T_{0j} - \hbox{traces} + \cdots\, . \end{equation} Traceless symmetric combinations are denoted by $A_{\langle ij\rangle}\equiv A_{(ij)} - \frac{1}{d}\delta_{ij} A_k^k$. The operator on the left is studied in the IR by expanding it in terms of composites of IR operators $T_{00},\, T_{0i}$ with the same quantum numbers (here the quantum number being matched is spin under spatial rotations $SO(d)$). Correlation functions of both operators will match in the IR. This is routinely done in EFTs, e.g.~in chiral perturbation theory where UV operators are represented in the IR in terms of pion degrees of freedom. A similar strategy was followed in \cite{Monin:2016jmo} where operators with small global charge were represented in terms of operators in the superfluid effective field theory. 
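The real-time correlator \eqref{eq_G_sym} follows from Fourier transforming the hydrodynamic poles of the Wightman function. As a quick numerical sanity check of the diffusive piece (not part of the derivation; the values of $D$, $k$, $t$ are arbitrary illustrative choices), one can verify that $\int \frac{d\omega}{2\pi}\, e^{-i\omega t}\, \frac{2Dk^2}{\omega^2+(Dk^2)^2} = e^{-Dk^2|t|}$:

```python
import numpy as np
from scipy.integrate import quad

# Diffusive pole of the Wightman function: G(w, k) = 2 D k^2 / (w^2 + (D k^2)^2).
# Its Fourier transform in time should reproduce the exp(-D k^2 |t|) decay
# appearing in the real-time correlator. D, k, t are illustrative values.
D, k, t = 0.3, 1.2, 2.0

f = lambda w: 2 * D * k**2 / (w**2 + (D * k**2) ** 2) / (2 * np.pi)
half, _ = quad(f, 0, np.inf, weight="cos", wvar=t)  # oscillatory quadrature
result = 2 * half  # integrand is even in w

print(result, np.exp(-D * k**2 * abs(t)))  # the two agree
```

The sound piece works the same way, with the pair of poles at $\omega \approx \pm c_s k - \frac{i}{2}\Gamma_s k^2$ producing the damped cosine.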
Although this distinction of UV and IR operators may seem awkward for components of the stress-tensor, we will see that it becomes a useful concept when studying other operators. In the case at hand, the linear overlap of $T_{\langle ij\rangle}$ with IR degrees of freedom implies that the two-point function of $T_{\langle ij\rangle}$ will contain the hydrodynamic poles in \eqref{eq_GT0i0j} (as can be checked explicitly, see e.g.~appendix A in Ref.~\cite{Delacretaz:2018cfk}). At $k=0$, the linear term vanishes, but $T_{\langle ij\rangle}$ can still decay into a composite of hydrodynamic operators via the second term in \eqref{eq_T_consti2}. It was found \cite{PhysRevLett.25.1254} (see \cite{Kovtun:2003vj} for a more recent relativistic exposition) that this term leads to `long-time tails' in the two-point function \begin{equation}\label{eq_LTT} \begin{split} \langle T_{\langle ij\rangle} T_{\langle kl\rangle}\rangle (t,k=0) &\simeq \left(\frac{\beta}{s}\right)^2 \int \frac{d^d k}{(2\pi)^d} G_{T_{0i}T_{0k}}(t,k) G_{T_{0j}T_{0l}}(t,-k) + (i \leftrightarrow j) - \hbox{traces} \\ &= \frac{A_{ijkl}}{\beta^2d(d+2)} \left[ \frac{1}{(4\pi \Gamma_s |t|)^{d/2}} + \frac{d^2 -2}{(8\pi D |t|)^{d/2}}\right] + \cdots\, . \end{split} \end{equation} where $A_{ijkl}=\delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk} - \frac{2}{d}\delta_{ij}\delta_{kl}$ and the integral was computed using \eqref{eq_G_sym}, dropping terms that decay exponentially fast in time. In the first step, we assumed the theory was Gaussian in the hydrodynamic variables, in which case the symmetric Green's functions factorize \cite{Kapusta:2006pm}. This is of course not the case; the same term in \eqref{eq_T_consti2} that leads to long-time tails is responsible for hydrodynamic interactions (classically, these non-linearities are responsible for turbulence in the Navier-Stokes equations). The framework of fluctuating hydrodynamics addresses these interactions. 
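After the oscillatory sound factors average out, the factorized loop integral in \eqref{eq_LTT} reduces to a Gaussian integral, $\int \frac{d^dk}{(2\pi)^d}\, e^{-a k^2} = (4\pi a)^{-d/2}$ with $a$ proportional to $\Gamma_s |t|$ or $D|t|$, which is the origin of the $t^{-d/2}$ tails. A quick numerical check for $d=3$ (the value of $a$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# The long-time tails come from int d^dk/(2pi)^d exp(-a k^2) = (4 pi a)^(-d/2),
# with a ~ Gamma_s*|t| or D*|t|; hence the t^(-d/2) decay. Check for d = 3
# in spherical coordinates: angular volume 4*pi times a radial integral.
a, d = 0.7, 3  # a stands in for (attenuation rate) * time; value illustrative

radial, _ = quad(lambda kk: kk**2 * np.exp(-a * kk**2), 0, np.inf)
lhs = 4 * np.pi * radial / (2 * np.pi) ** 3
rhs = (4 * np.pi * a) ** (-d / 2)
print(lhs, rhs)  # agree
```

Replacing $a \to \Gamma_s |t|$ reproduces the $(4\pi\Gamma_s|t|)^{-d/2}$ factor in \eqref{eq_LTT}.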
Although hydrodynamics has been understood as a field theory since the work of Euler, the formulation of dissipative hydrodynamics as an EFT is somewhat more recent \cite{PhysRevA.8.423,POMEAU197563,PhysRevA.16.732} and was motivated by the observation of long-time tails in numerics \cite{PhysRevA.1.18}, which are now understood as hydrodynamic loops as in Eq.~\eqref{eq_LTT}. Recent developments in dissipative EFTs for hydrodynamics include \cite{Grozdanov:2013dba,Haehl:2015pja,Crossley:2015evo,Jensen:2018hse} (see \cite{Glorioso:2018wxw} for a review, and e.g.~\cite{Hayata:2015lga,Akamatsu:2016llw,An:2019osr} for alternative approaches). These constructions allow for a systematic treatment of interactions to arbitrary order in perturbations. Here, we will be working in dimensions where interactions are irrelevant, and will only be interested in the leading hydrodynamic contribution to correlation functions at late times. In this sense we are justified in approximating the action as Gaussian in evaluating \eqref{eq_LTT} and in the following. Systematically accounting for corrections to our results would require knowing the structure of interactions in the effective field theory -- this was done for simple diffusion in \cite{Chen-Lin:2018kfl}. \subsection{Late time correlators from hydrodynamics}\label{ssec_hyd_cor} How do the thermal correlators of other simple operators behave at late times? The central assumption of thermalization and hydrodynamics is that after short time transients, the only long-lived dynamical degrees of freedom are the densities \eqref{eq_linearize}. Hence any simple operator will be carried by these densities at late times. For example, any neutral spin-2 operator $\mathcal{O}_{\mu\nu}$ will have a constitutive relation similar to \eqref{eq_T_consti} -- the stress tensor is only distinguished by the coefficients in its constitutive relation which are fixed in terms of thermodynamic and transport parameters. 
More generally, consider a traceless symmetric tensor $\mathcal{O}_{\mu_1 \cdots \mu_\ell}$ with even spin $\ell$ (odd spin is mostly similar and is treated in appendix \ref{sapp_all_spin}). Its constitutive relation has the schematic form \begin{align}\label{eq_q_match_consti} \mathcal{O}_{\mu_1 \cdots \mu_\ell} \notag &= \lambda_0 \, u_{\mu_1} \cdots u_{\mu_\ell} + \lambda_1 \beta\, \partial_{\mu_1}u_{\mu_2}\cdots u_{\mu_\ell} + \cdots + \lambda_{\ell-1} \beta^{\ell-1} \partial_{\mu_1} \cdots \partial_{\mu_{\ell-1}} u_{\mu_\ell} \\ &\quad + \lambda_\ell \beta^{\ell-1} \partial_{\mu_1} \cdots \partial_{\mu_\ell} \beta \ + \ \hbox{higher derivative}\, , \end{align} where all terms should be understood to be symmetrized, with traces removed. For some terms there are several possible choices for how the derivatives are distributed -- we will be more precise below after determining which terms are most important. The strategy is simply to write all possible composite hydrodynamic operators with the right quantum numbers, in a derivative expansion -- we therefore do not explicitly include terms like $u_{\mu_1} \left(\partial_{\mu_2}\cdots \partial_{\mu_{\ell-1}}\right) \partial^2 u_{\mu_\ell} $ which are manifestly higher order in derivatives. The powers of $\beta$ are chosen such that all coefficients $\lambda$ (which are still functions of $\beta$) have the same dimension, namely that of $\mathcal{O}$ -- for CFTs it will be useful to use scale invariance to define instead the dimensionless numbers \begin{equation}\label{eq_lambda_to_b} \lambda_i \equiv b_i / \beta^{2\Delta_{\mathcal{O}}}\, . \end{equation} The $\lambda_0$ term in \eqref{eq_q_match_consti} was considered in a CFT context in \cite{Iliesiu:2018fao} -- it is special in that it leads to a non-vanishing equilibrium expectation value $\langle \mathcal{O}\rangle_\beta\neq 0$. 
However, this term will not always give the leading hydrodynamic contribution to the late time correlators of $\mathcal{O}$, as we show below. In particular this term is forbidden by CPT for odd spin $\ell$, but odd spin operators still have hydrodynamic tails. Let us first consider the components of $\mathcal{O}$ with zero or one spatial index. Linearizing the constitutive relation \eqref{eq_q_match_consti} using Eq.~\eqref{eq_linearize} shows that these components overlap linearly with hydrodynamic modes: the leading terms are \begin{subequations} \begin{align} \delta \mathcal{O}_{0\cdots 0} &= -\partial_\beta \lambda_0 \frac{\beta^2 c_s^2}{s} \delta T_{00} + \cdots\, , \\ \delta \mathcal{O}_{i0\cdots 0} \label{eq_consti_1spatial} & = \lambda_0 \frac{\beta}{s} T_{0i} - \lambda_1 \frac{\beta^2 c_s^2}{s} \partial_i T_{00} + \cdots\, . \end{align} \end{subequations} Using \eqref{eq_GT0i0j}, one finds correlation functions that involve the hydrodynamic poles \begin{subequations}\label{eq_OO_01spatial} \begin{align} \langle \mathcal{O}_{0\cdots 0} \mathcal{O}_{0\cdots 0}\rangle (\omega,k) \label{eq_OO_0spatial} &= \frac{2 \beta^d}{s_o} \frac{\left(\beta\partial_\beta \lambda_0\right)^2\Gamma_s c_s^4 k^4}{(\omega^2 - c_s^2 k^2)^2 + (\Gamma_s \omega k^2)^2} + \cdots \, , \\ \langle \mathcal{O}_{i0\cdots 0} \mathcal{O}_{j0\cdots 0} \rangle (\omega,k) \label{eq_OO_1spatial} &= \frac{2\beta^d}{s_o} \frac{k_i k_j}{k^2} \frac{\left(\omega\lambda_0 + {\lambda_1\beta c_s^2 k^2}\right)^2\Gamma_s k^2}{(\omega^2 - c_s^2 k^2)^2 + (\Gamma_s\omega k^2)^2} \\ & \quad + \frac{2\beta^d}{s_o}\left(\delta_{ij} - \frac{k_ik_j}{k^2}\right) \frac{(\lambda_0)^2D k^2}{\omega^2 + (Dk^2)^2} + \cdots\, , \notag \end{align} \end{subequations} where we defined the dimensionless entropy density $s_o\equiv s\beta^d$, and $\cdots$ are corrections that are subleading when $\omega \tau_{\rm th} ,\,k \ell_{\rm th}\ll 1$. 
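The pole structure of these correlators is easy to evaluate numerically. The sketch below (Python; the function names and all parameter values are ours, chosen purely for illustration) implements the longitudinal sound Lorentzian and the transverse diffusive Lorentzian of \eqref{eq_OO_1spatial}:

```python
import math

def G_long(omega, k, lam0, lam1, cs, Gs, beta, s_o, d):
    # Longitudinal part of <O_{i0..0} O_{j0..0}>(omega, k): sound pole near omega = c_s k
    num = (omega * lam0 + lam1 * beta * cs**2 * k**2) ** 2 * Gs * k**2
    den = (omega**2 - cs**2 * k**2) ** 2 + (Gs * omega * k**2) ** 2
    return 2 * beta**d / s_o * num / den

def G_trans(omega, k, lam0, D, beta, s_o, d):
    # Transverse part: diffusive Lorentzian with pole at omega = -i D k^2
    return 2 * beta**d / s_o * lam0**2 * D * k**2 / (omega**2 + (D * k**2) ** 2)
```

For small $\Gamma_s$ the longitudinal piece is sharply peaked at $\omega = c_s k$, and the diffusive Lorentzian drops to half its $\omega=0$ value exactly at $\omega = Dk^2$.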
Now consider correlators involving $\bar \ell$ spatial components of the operator $\mathcal{O}$, with $1<\bar\ell \leq \ell$. The constitutive relation \eqref{eq_q_match_consti} can again be turned into an operator equation using \eqref{eq_linearize} -- the part that is traceless symmetric in spatial indices is \begin{equation}\label{eq_q_match} \begin{split} \mathcal{O}_{\langle i_1 \cdots i_{\bar \ell}\rangle 0 \cdots 0} &\sim \frac{\lambda_0 \beta^{\bar\ell}}{s^{\bar\ell}} T_{0i_1} \cdots T_{0i_{\bar\ell}} + \cdots + \frac{\lambda_{{\bar\ell}-2}\beta^{\bar\ell}}{s^2} T_{0i_1} \left(\partial_{i_2}\cdots \partial_{i_{{\bar\ell}-1}}\right) T_{0 i_{\bar\ell}} \\ &\ + \frac{\lambda_{\bar \ell-1}\beta^{\bar \ell}}{s} \partial_{i_1}\cdots \partial_{i_{\bar\ell-1}} T_{0i_{\bar \ell}} + \frac{\lambda_{\bar \ell} \beta^{{\bar\ell}+1}c_s^2}{s} \left( \partial_{i_1} \cdots \partial_{i_{{\bar\ell}}}\right) T_{0 0} + \cdots\,, \end{split} \end{equation} where again all terms should be understood to be symmetrized, with traces removed. There are still several possibilities for how the derivatives act, e.g.~in the $\lambda_{\bar \ell-2}$ term -- this will be specified shortly. We focus on the traceless symmetric part of $\mathcal{O}_{i_1 \cdots i_{\bar \ell} 0 \cdots 0}$, because its traces are related to time components of the operator (e.g.~$\delta^{i_1i_2}\mathcal{O}_{i_1\cdots i_{\bar \ell} 0\cdots } = \mathcal{O}_{00i_{3}\cdots i_{\bar \ell} 0\cdots}$), which in turn satisfy similar constitutive relations with fewer indices. This operator matching equation is illustrated in Fig.~\ref{fig_decay}. We could now proceed by studying the contribution of every operator in \eqref{eq_q_match} to the correlator $\langle \mathcal{O}\mathcal{O} \rangle$. However a simple scaling argument can be used to determine which term in \eqref{eq_q_match} is the most relevant: note that \eqref{eq_G_sym} implies that the densities scale as $T_{00}\sim T_{0i}\sim k^{d/2}$. 
For dimensions $d> 2$, it is therefore more advantageous to use gradients to build spin. The most relevant operator is the total derivative term $\lambda_{\bar\ell-1}$. We must also keep the term $\lambda_{\bar \ell}\,$; although it is suppressed when $\omega\sim k$ it can give an enhanced contribution when $\omega\lesssim \beta k^2$, as was shown for $\bar\ell=1$ in \eqref{eq_consti_1spatial} and \eqref{eq_OO_1spatial}. Finally, since both of these terms vanish at $k=0$, it is also important to keep the most relevant operator that is not a total derivative -- when $\bar\ell$ is even this is $\lambda_{\bar \ell-2}$ in \eqref{eq_q_match} (when $\bar\ell$ is odd, the $\lambda_{\bar \ell-2}$ term is a total derivative -- this case is treated below). The terms in the constitutive relation \eqref{eq_q_match} that give the leading contribution in the hydrodynamic regime $\omega \tau_{\rm th},\, k\ell_{\rm th} \lesssim 1$ are therefore $\lambda_{\bar\ell-2},\, \lambda_{\bar \ell-1}$ and $\lambda_{\bar \ell}$. Which term dominates depends on how $\omega$ compares to the scales $c_s k$ and $D k^2\sim \Gamma_s k^2$; their contributions to the correlator take the form% \footnote{These hydrodynamic contributions also imply that correlators $\langle \mathcal{O}_{(\bar\ell,\ell)} \mathcal{O}_{(\bar\ell,\ell)}\rangle (x,t)$ of neutral operators always decay polynomially in real time. Exponential decay of correlators is therefore not a good criterion for thermalization. 
I thank Erez Berg for discussions on this point.} \begin{align}\label{eq_OO_lspatial} \langle \mathcal{O}_{(\bar\ell,\ell)} \mathcal{O}_{(\bar\ell,\ell)}\rangle (\omega,k)\notag &= \frac{\beta^d}{s_o} \frac{ (\lambda_{\bar \ell - 1})^2 D k^2 (\beta k)^{2\bar\ell - 2}}{\omega^2 + (D k^2)^2} + \frac{\beta^d}{s_o} \frac{\left(\omega\lambda_{\bar \ell - 1} + {\lambda_{\bar \ell}\beta c_s^2 k^2}\right)^2\Gamma_sk^2 (\beta k)^{2\bar\ell-2}}{(\omega^2 - c_s^2 k^2)^2 + (\Gamma_s \omega k^2)^2} \\ &\quad + \frac{\beta^d}{s_o^2} \frac{ (\lambda_{\bar\ell-2})^2}{\omega} \left[\left(\frac{\omega \beta^2}{\Gamma_s}\right)^{\frac{d}{2} + \bar\ell - 2 } \!\!\!\! + \left(\frac{\omega \beta^2}{D}\right)^{\frac{d}{2} + \bar\ell - 2 }\right]\left(1 + O(\tfrac{D^2 k^4}{\omega^2})\right)\\ &\quad + \cdots \, . \notag \end{align} Here we let $\mathcal{O}_{(\bar\ell,\ell)}\equiv \mathcal{O}_{\langle i_1 \cdots i_{\bar \ell}\rangle 0\cdots 0}$ denote components of a spin $\ell$ operator with $\bar \ell$ spatial indices, omitting the corresponding tensor structures; these are treated more carefully in appendix \ref{app_hydro}, see Eq.~\eqref{eq_OO_lspatial_full}. The first line follows from the linear overlaps with the hydrodynamic modes as in \eqref{eq_OO_01spatial}. The second line dominates for $k\to 0$ and comes from a long-time tail contribution to the two-point function from a hydrodynamic loop, as we now explain. The hydrodynamic loop computation is similar to \eqref{eq_LTT}, with extra gradients acting on the internal legs. Since the $\lambda_{\bar \ell-2}$ term in \eqref{eq_q_match} scales as $k^{d+ \bar \ell -2}$, one expects a contribution to the two-point function $G_{\mathcal{O}\mathcal{O}}(t) \sim 1/t^{\frac{d}{2}+ \bar \ell - 2}$ (note that one must scale $\omega\sim k^2$).
The numerical prefactor can be found by performing the loop integral (see appendix \ref{sapp_loop_spin} for more details and the tensor structure): \begin{equation}\label{eq_Gt_even} \langle \mathcal{O}_{(\bar\ell,\ell)} \mathcal{O}_{(\bar\ell,\ell)} \rangle (t,k=0) =\Bigl(\frac{\lambda_{\bar\ell-2}}{s_o}\Bigr)^2 \beta^{d} \left[\frac{{a_1}}{\left(2 \Gamma_s |t|/\beta^2\right)^{\frac{d}{2}+{\bar\ell} - 2}} + \frac{{a_2}}{\left(4 D |t|/\beta^2\right)^{\frac{d}{2}+{\bar\ell} - 2}}\right] + \cdots\, , \end{equation} where the numerical coefficients $a_1$ and $a_2$ are given in \eqref{eq_as}, and were dropped in \eqref{eq_OO_lspatial}. We see that operators with $\bar\ell \geq 2$ spatial indices universally decay as $1/t^{\frac{d}{2}+\bar \ell - 2}$ in thermalizing QFTs -- although this is a straightforward extension of the well-known stress tensor long-time tails \eqref{eq_LTT} to operators with higher spin, this result has to our knowledge not appeared previously in the literature. Fourier transforming this result gives the last line in \eqref{eq_OO_lspatial}, where we have also indicated the subleading corrections $O(\frac{D^2 k^4}{\omega^2})$ for small $k\neq 0$ (they are computed explicitly in a special case in appendix \ref{sapp_hydro_subleading}, where the analytic structure is also discussed). When the number of spatial derivatives $\bar\ell$ is odd, the $\lambda_{\bar\ell-2}$ term in the constitutive relation \eqref{eq_q_match} is a total derivative, and there is competition between less relevant terms. Their contribution to the late time correlator can be computed as in the even $\bar\ell$ case -- for $\bar \ell\geq 3$, one finds \begin{equation}\label{eq_LTT_odd} \langle \mathcal{O}_{(\bar \ell,\ell)}\mathcal{O}_{(\bar \ell,\ell)}\rangle (t,k=0) \sim \frac{1}{|t|^{\alpha_{\bar \ell}}} \qquad \hbox{with} \quad \alpha_{\bar \ell} =\left\{\begin{split} d + {\bar \ell} - 3 & \quad \hbox{if $d\leq 4$} \, , \\ \frac{d}{2}+\bar\ell -1 & \quad \hbox{if $d>4$}\, . 
\end{split}\right. \end{equation} See appendix \ref{sapp_all_spin} for more details. The first line in \eqref{eq_OO_lspatial} is then unchanged for $\bar \ell$ odd, but the second line will be given by the Fourier transform of \eqref{eq_LTT_odd} instead of \eqref{eq_Gt_even}. In theories with a large number of degrees of freedom such as holographic theories, the suppression in \eqref{eq_Gt_even} by the dimensionless entropy density $s_o\equiv s\beta^d \sim N^2\gg 1$ implies that these hydrodynamic tails will only overcome short-time transients $\sim e^{-t/\tau_{\rm th}}$ at times $t\gtrsim \tau_{\rm th} \log s_o$ (the late time limit of correlation functions therefore does not commute with the $N\to \infty$ limit). The stress tensor tails \eqref{eq_LTT} were captured in a holographic model in Ref.~\cite{CaronHuot:2009iq} by computing a graviton loop in the bulk. However certain tails in the holographic correlators of higher-spin operators \eqref{eq_Gt_even} are reproduced more simply, and are direct consequences of large $N$ factorization: consider a holographic model with a single trace scalar $\phi$. In the absence of a $\phi\to -\phi$ symmetry, the scalar will have a thermal expectation value \begin{equation} \langle \phi\rangle_\beta = \frac{b_\phi}{\beta^{\Delta_\phi}}\, . \end{equation} This is achieved in bottom-up holographic models by including a coupling in the bulk between the scalar and the Weyl tensor \cite{Myers:2016wsu} (see also \cite{Katz:2014rla, Wu:2018xjy}). A computation of the scalar two-point function should reveal the sound mode as in \eqref{eq_OO_0spatial}. Double-trace spin-$\ell$ operators $\mathcal{O}_{\ell} \sim \phi \partial^\ell \phi$ will then have long-time tail contributions to their thermal correlators, similar to \eqref{eq_Gt_even}. 
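The pattern of late time decay exponents in \eqref{eq_Gt_even} and \eqref{eq_LTT_odd} can be collected in a small helper (a sketch; the function name is ours, and the odd-$\bar\ell$ branch applies for $\bar\ell\geq 3$):

```python
def tail_exponent(d, lbar):
    """Exponent alpha in <O O>(t, k=0) ~ 1/|t|^alpha for an operator with
    lbar spatial indices: even lbar gives d/2 + lbar - 2; odd lbar (>= 3)
    gives d + lbar - 3 for d <= 4 and d/2 + lbar - 1 for d > 4."""
    if lbar % 2 == 0:
        return d / 2 + lbar - 2
    return d + lbar - 3 if d <= 4 else d / 2 + lbar - 1
```

For the stress tensor components with $\bar\ell=2$ this reduces to the familiar $d/2$ long-time tail of \eqref{eq_LTT}.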
Although $s_o$ acts as a loop counting parameter in fluctuating hydrodynamics, we emphasize that the perturbative expansion is controlled even when $s_o\sim 1$ because hydrodynamic interactions are irrelevant% \footnote{For example, the quark-gluon plasma has $s_o\sim 10$ \cite{Bazavov:2009zn,Borsanyi:2010cj,Kovtun:2011np}.}. In this paper, we do not assume that $s_o$ is large. \begin{figure} \centerline{ \subfigure{\label{sfig_label2} \includegraphics[height=0.3\linewidth, angle=0]{fig/diag2}} \hspace{30pt} \subfigure{\label{sfig_label3} \includegraphics[height=0.3\linewidth, angle=0]{fig/diag3}} } \caption{\label{fig_loop}Hydrodynamic loops control the correlators of $k=0$ neutral operators at large time separation.} \end{figure} We focused above on diagonal two-point functions; extending these results to off-diagonal correlators $\langle {\mathcal{O}_{(\bar\ell,\ell)}\mathcal{O}'_{(\bar\ell',\ell')}}\rangle$ is straightforward, see appendix \ref{sapp_loop_spin}. These methods can also be easily extended to compute thermal higher-point correlators, which at large time separations are also controlled by a hydrodynamic loop, see Fig.~\ref{fig_loop}. For example, operators $\mathcal{O}_{(\bar\ell,\ell)}(t,k=0)$ with an even number of spatial indices $\bar\ell$ have a symmetric connected $n$-point function with $n\geq 3$ odd given by \begin{equation}\label{eq_npoint} \langle \mathcal{O}_{(\bar\ell,\ell)}(t_1) \cdots \mathcal{O}_{(\bar\ell,\ell)}(t_n)\rangle_c \sim \left(\frac{\lambda_{\bar\ell-2}}{s_o}\right)^n \frac{\beta^{d(n-1)}}{\left[D(t_{12} + t_{23} + \cdots +t_{n1})/\beta^2\right]^{\frac{d}{2}+\frac{n}{2}(\bar\ell-2)}} + \hbox{sym.}\, , \end{equation} with $t_{ij} \equiv |t_i-t_j|$, and where `sym.' means symmetrizing% \footnote{In the approach presented here, the correlators are necessarily symmetrized.
Correlators with arbitrary time orderings can however still be computed from hydrodynamics using the effective action, see \cite{Blake:2017ris}.} the times $t_1,\cdots,t_n$. When $n$ is odd, the contribution from the sound pole vanishes because the integrand $\cos^n(c_s k |t|)$ oscillates around zero; when $n$ is even the correlator receives an extra contribution from the sound attenuation rate as in \eqref{eq_Gt_even}. \subsection{The critical dimension $d=2$}\label{ssec_dc} The results in the previous section apply to any QFT in spatial dimensions $d>2$. For $d=2$, hydrodynamic interactions are only marginally irrelevant. One manifestation of this is that all terms in the first line of \eqref{eq_q_match} have the same scaling. This implies that many terms contribute to the correlator, which however still scales like \eqref{eq_Gt_even}. An additional subtlety is that the transport parameters $D,\, \Gamma_s$ now run. For simplicity, let us assume the bulk viscosity $\zeta =0$ (as is the case for CFTs), so that $\Gamma_s = D = \beta \eta/s$. The $\beta$-function for $D$ is negative \cite{PhysRevA.16.732,Kovtun:2012rj}, so that it flows to infinity in the IR% \footnote{This implies that canonically normalized interactions $\sim 1/D$ are marginally irrelevant and the theory is `free' in the IR, in the sense that it is described by regular tree level hydrodynamics like in higher dimensions.}. Indeed, the tree-level and one-loop contribution to the Green's function can be found from \eqref{eq_GT0i0j} and \eqref{eq_LTT} to be \begin{equation} G_{T_{xy}T_{xy}}(\omega,k=0) \, = \, \frac{2 s}{\beta^2} \left[ D + \frac{1}{16\pi s D} \log \frac{1}{\omega} + \cdots \right]\, . 
\end{equation} Interpreting the quantity in brackets $D + \frac{\partial D}{\partial \log \omega} \log \omega$ as a running of the diffusion constant one finds \cite{PhysRevA.16.732} \begin{equation}\label{eq_D_RG} D(\omega) \simeq D_\Lambda \sqrt{1+\frac{\log \Lambda/\omega}{8\pi s D_\Lambda^2}} \, , \end{equation} where $D_\Lambda$ is the diffusion constant at the scale $\Lambda$. In the deep IR \begin{equation}\label{eq_D_asymp} D(\omega) \simeq \sqrt{\frac{\log 1/\omega}{8\pi s}} \, . \end{equation} It is a striking feature of (2+1)d hydrodynamics that dissipation does not introduce new parameters at the latest times -- transport parameters are fixed in terms of the thermodynamics \cite{Kovtun:2012rj}% \footnote{Dissipation is also tied to thermodynamics in (1+1)d when hydrodynamic fluctuations are relevant \cite{PhysRevA.16.732,PhysRevLett.89.200601,Ferrari2013}, as was recently emphasized in Ref.~\cite{Delacretaz:2020jis}.}. In practice, the asymptotic value may only be reached at very late times, or small frequencies. Taking $\Lambda=1/\beta$ and assuming $D_\Lambda\approx \beta$ one needs frequencies $\beta\omega \lesssim e^{-8\pi s_o}$ for the asymptotic diffusion \eqref{eq_D_asymp} to be reached, where the dimensionless entropy density% \footnote{For CFTs, $s_o = b_T$ in the notation of \cite{Iliesiu:2018fao}. A free massless scalar has $s_o = \frac{3\zeta(3)}{2\pi}$, so that $e^{-8\pi s_o}\approx 5\times 10^{-7}$. For the $(2+1)$d Ising model $s_o \approx0.459$ \cite{PhysRevE.56.1642,Iliesiu:2018fao} so $e^{-8\pi s_o} \approx 10^{-5}$.} $s_o \equiv s \beta^2$. These logarithmic corrections to transport propagate to correlation functions of generic operators, so that transport parameters in e.g.~Eq.~\eqref{eq_OO_lspatial} will be replaced with \eqref{eq_D_RG} -- however since many other terms in \eqref{eq_q_match} contribute to the same order in $\omega$ and $k$ when $d=2$, we will not attempt to obtain the exact correlator. 
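The frequency threshold $\beta\omega \lesssim e^{-8\pi s_o}$ quoted in the footnote is easy to verify; a quick sketch (Ap\'ery's constant is summed numerically, and the Ising value of $s_o$ is taken from the cited references):

```python
import math

def omega_asymptotic(s_o):
    # Scale beta*omega below which the asymptotic diffusion (eq_D_asymp) is reached
    return math.exp(-8 * math.pi * s_o)

def D_running(omega, D_L, s, Lam):
    # Running diffusion constant (eq_D_RG) at frequency omega, with cutoff Lam
    return D_L * math.sqrt(1 + math.log(Lam / omega) / (8 * math.pi * s * D_L**2))

zeta3 = sum(1.0 / n**3 for n in range(1, 200000))  # Apery's constant ~ 1.2020569
s_o_free_scalar = 3 * zeta3 / (2 * math.pi)        # free massless scalar, d=2
s_o_ising = 0.459                                  # (2+1)d Ising model
```

This reproduces the quoted values $e^{-8\pi s_o}\approx 5\times 10^{-7}$ (free scalar) and $\approx 10^{-5}$ (Ising), and the running $D(\omega)$ indeed grows toward the IR.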
These logarithmic corrections are negligible for many practical purposes, but have been observed in classical simulations, see e.g.~\cite{PhysRevE.77.021201,Donev_2011}. We will mostly ignore logarithmic corrections in applications to CFT data in Sec.~\ref{sec_largeD}. \subsection{Real time correlators and diffuson cascade}\label{ssec_diffuson_cascade} The hydrodynamic correlators in frequency space $G(\omega,k)$ obtained above are the ingredients needed for the CFT applications in Sec.~\ref{sec_largeD}; the reader interested in these results may therefore directly skip ahead to that section. In this section we take a slight digression to discuss finite temperature QFT correlators in real time. At finite wavevector $k$, the linearized hydrodynamic correlators \eqref{eq_G_sym} decay exponentially in time $\sim e^{-Dk^2|t|}$. We will see in this section that even this standard result is drastically affected by hydrodynamic fluctuations, which in a sense are dangerously irrelevant: although they only give small corrections to $G(\omega,k)$, they entirely control the leading behavior of $G(t,k)$ at late times. Let us therefore study the real time thermal correlation function of an operator with $\bar\ell$ spatial indices \begin{equation}\label{eq_Gs_FT} G(t,k)\equiv \langle \mathcal{O}_{(\bar\ell,\ell)}\mathcal{O}_{(\bar\ell,\ell)}\rangle (t,k)\, . \end{equation} We will take $\bar \ell\geq2$ even for simplicity, but similar results hold for any $\bar \ell$. Based on the previous section, one expects the polynomial decay of correlation functions \eqref{eq_Gt_even} to still hold for times smaller than the diffusion time ${1}/{Dk^2}$ of the mode \begin{equation} 1 \quad \ll\quad \frac{t}{\tau_{\rm th}} \quad \ll\quad \frac{1}{Dk^2\tau_{\rm th}} \ \sim \ \frac{1}{(k{\ell}_{\rm th})^2}\, , \end{equation} (in this section, we assume for simplicity that $D\sim \Gamma_s \sim \ell_{\rm th}^2/\tau_{\rm th}$). 
The small wavevector of the operator in units of the UV cutoff of hydrodynamics $k{\ell}_{\rm th}\ll 1$ will allow for a parametric separation between various regimes of the correlator at late times. To find the cross-over time more precisely, we can compare the contributions from the first two terms in Fig.~\ref{fig_decay} to the two-point function -- one finds that the polynomial decay \eqref{eq_Gt_even} holds in the window \begin{equation}\label{eq_t_reg} \hbox{regime I:} \qquad\qquad G(t, k) \sim \frac{1}{t^{\frac{d}{2}+{\bar\ell} - 2}}\, , \qquad\qquad 1 \quad \ll\quad \frac{t}{\tau_{\rm th}} \quad \ll\quad \frac{1}{(k{\ell}_{\rm th})^{2-\gamma} }\, , \end{equation} with $\gamma = 2\frac{d-2}{d+2\bar \ell -4}\in (0,2)$. At slightly later times, the correlator is controlled by the linear overlap with the hydrodynamic mode and has the form \begin{equation}\label{eq_t_interm} \hspace{-5pt} \hbox{regime II:} \quad G(t, k) \sim {k^{2\bar\ell -2}}e^{-D k^2 |t|}\, , \quad \frac{1}{(k\ell_{\rm th})^{2-\gamma} } \ \ll \ \frac{t}{\tau_{\rm th}} \ \ll \ \frac{1}{(k\ell_{\rm th})^{2} } \log \frac{1}{(k\ell_{\rm th})^d} \, . \end{equation} The power-law decay therefore plateaus to a constant before starting to decay exponentially around the diffusion time $1/Dk^2$. So far the discussion here mirrors the one in frequency space, see Eq.~\eqref{eq_OO_lspatial}. However the result above eventually breaks down at late times. Indeed, consider again the second term in Fig.~\ref{fig_decay}, where the operator decays into two hydrodynamic excitations. Since it is less relevant than the first term, its contribution to the correlator will be more suppressed by $1/t$ (or $k$); however its exponential factor is larger: \begin{equation} G_{T\partial^{\bar\ell -2} T,\, T \partial^{\bar\ell -2} T}(t,k) \sim k^{2\bar\ell - 2 }k^{d-2}e^{-\frac{1}{2}D k^2 |t|}\,.
\end{equation} The exponent corresponds to the energy threshold for production of two diffusive fluctuations, which is {\em half} that of a single diffusive mode \cite{Chen-Lin:2018kfl}. More generally, the operator $\mathcal{O}(k,t)$ can decay into $n$ diffusive modes, distributing its momentum such that each mode carries $k'=k/n$ so that the exponential factor becomes $\left(e^{-D k'^2 t}\right)^n = e^{-\frac{1}{n}D k^2 t}$. This is the manifestation in real time $t$ of the $n$-diffuson branch cut, with branch point $\omega_{n\hbox{\scriptsize -diff}} = -\frac{i}{n} D k^2$. There are similar branch points at the threshold for production of $n$ sound modes $\omega_{n\hbox{\scriptsize -sound}} = \pm c_s k - \frac{i}{2n} \Gamma_s k^2 $ (see appendix \ref{sapp_hydro_subleading}). The analytic structure of $G(\omega,k)$ is shown in Fig.~\ref{fig_ana}. \begin{figure}[h] \centerline{ \begin{overpic}[width=0.4\textwidth,tics=10]{fig/ana} \put (5,66) {{\Large $\omega$}} \put (55,52) {{\rm \ $\vdots$}} \put (55,45) {{\rm $-\frac{i}{2}D k^2$}} \put (55,30) {{\rm $-iD k^2$}} \put (96,52) {{\rm \ $\vdots$}} \put (96,47) {{\rm $c_s k -\frac{i}{4}\Gamma_s k^2$}} \put (96,33) {{\rm $c_s k -\frac{i}{2}\Gamma_s k^2$}} \end{overpic}} \caption{\label{fig_ana} Analytic structure of hydrodynamic correlation functions $G^R(\omega,k)$. The circles denote hydrodynamics poles $\omega_{\rm diff} = -i D k^2$ and $\omega_{\rm sound} = \pm c_s k - \frac{i}{2}\Gamma_s k^2$, and the crosses denote branch points $\omega_{n\hbox{\scriptsize -diff}} = -\frac{i}{n} D k^2$ and $\omega_{n\hbox{\scriptsize -sound}} = \pm c_s k - \frac{i}{2n}\Gamma_s k^2$ located at the threshold for production of $n$ hydrodynamic excitations.} \end{figure} For $n$ sufficiently large, the $n$-diffuson contribution to the correlator has the form% \footnote{This expression applies when $n$ is larger than the spin $\ell$. 
When $n\lesssim\ell$, decaying into one more $T_{0i}$ costs $k^{d/2}$ but saves a derivative $k$, so that the suppression is only ${(k\ell_{\rm th})^{n (d-2)}}$ instead of ${(k\ell_{\rm th})^{n d}}$ in \eqref{eq_G_ndiff}.} \begin{equation}\label{eq_G_ndiff} G(t,k)|_{\hbox{\scriptsize $n$-diff}} \sim {n!}{(k\ell_{\rm th})^{n d}} e^{-\frac{1}{n} D k^2 t}\, . \end{equation} The perturbative expansion in $k\ell_{\rm th}$ is presumably asymptotic, and this result should therefore only be trusted for $n\lesssim 1/(k\ell_{\rm th})^{d}$. The largest contribution can be determined by extremizing \eqref{eq_G_ndiff} over $n$. Approximating $\log n!\sim n \log n$ and using $n\ll 1/(k\ell_{\rm th})^{d}$ one finds that the largest contribution at time $t$ comes from decay into \begin{equation} n(t) \simeq \sqrt{\frac{D k^2 t}{d\log \frac{1}{k\ell_{\rm th}}}} \end{equation} diffusons. Plugging back into \eqref{eq_G_ndiff} produces the correlator \begin{equation}\label{eq_t_cascade} \hbox{regime III:} \quad \ G(t, k) \sim e^{-\alpha \sqrt{D k^2 |t|}}\, , \quad \ \frac{1}{(k\ell_{\rm th})^{2} } \log \frac{1}{k\ell_{\rm th}} \, \ll \, \frac{t}{\tau_{\rm th}} \, \ll \, \frac{1}{(k\ell_{\rm th})^{2d+2} } \log \frac{1}{k\ell_{\rm th}} \, , \end{equation} with $\alpha\sim \sqrt{d \log \frac{1}{k\ell_{\rm th}}}\sim 1$. In this regime, operators decay into more and more diffusive excitations, which leads to a stretched exponential decay of correlators. Although we have focused on the decay of operators with $\bar\ell$ spatial indices in QFTs at finite temperature, similar results apply for two-point functions of generic neutral operators in any diffusive system, including non-integrable spin chains and random unitary circuits with conservation laws. In particular Eqs.~\eqref{eq_t_interm} and \eqref{eq_t_cascade} would apply there, after removing the spatial spin dependence $k^{2\bar \ell - 2} \to 1$.
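The extremization over $n$ above is straightforward to check numerically; the sketch below maximizes $\log G_n \approx \log n! + n d \log(k\ell_{\rm th}) - Dk^2t/n$ from \eqref{eq_G_ndiff} over integers and compares with the saddle point (all parameter values are illustrative):

```python
import math

def n_star_numeric(X, d, kl, n_max=10**4):
    # X = D k^2 t; maximize the log of Eq. (eq_G_ndiff) over the diffuson number n
    def logG(n):
        return math.lgamma(n + 1) + n * d * math.log(kl) - X / n
    return max(range(1, n_max), key=logG)

def n_star_saddle(X, d, kl):
    # Saddle point estimate n(t) ~ sqrt(D k^2 t / (d log(1/(k l_th))))
    return math.sqrt(X / (d * math.log(1 / kl)))
```

For $Dk^2t=10^3$, $d=3$, $k\ell_{\rm th}=10^{-2}$ the integer maximum sits close to the saddle value $\approx 8.5$, well below the validity bound $1/(k\ell_{\rm th})^d$.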
Certain signatures of diffusive tails have been observed numerically in these systems \cite{PhysRevB.73.035113,PhysRevLett.122.250602}, and finite $k$ correlators have been studied e.g.~in \cite{PhysRevX.9.041017}, but to our knowledge this diffuson cascade \eqref{eq_t_cascade} and cross-over from $e^{-Dk^2 t}$ to $e^{-\sqrt{Dk^2 t}}$ have yet to be observed. One issue is that of finite system size, which we discuss below. In the thermodynamic limit, correlators will decay as \eqref{eq_t_cascade} as long as the perturbative expansion of fluctuating hydrodynamics holds. Given that Eq.~\eqref{eq_G_ndiff} explodes for decay into $n\gg 1/(k\ell_{\rm th})^d$ diffusons, we expect the hydrodynamic expansion for $G(t,k)$ to break down at times $t\gtrsim t_{\rm breakdown}$, with \begin{equation}\label{eq_breakdown} n(t_{\rm breakdown}) \sim \frac{1}{(k\ell_{\rm th})^d} \qquad \Rightarrow \qquad \frac{t_{\rm breakdown}}{\tau_{\rm th}} \sim \frac{1}{(k\ell_{\rm th})^{2d+2} } \log \frac{1}{k\ell_{\rm th}}\, , \end{equation} which is therefore the upper limit of regime III in \eqref{eq_t_cascade}. We do not know of a controlled way to compute hydrodynamic correlation functions $G(t,k)$ at times $t\gtrsim t_{\rm breakdown}$. In a finite volume $L^d$ there is a minimal wavevector that the diffusive fluctuations in the loop can carry: $k_{\rm min} = \frac{2\pi}{L}$.
The correlator will then be controlled by decay of the operator into $n_{\rm max}\sim k/k_{\rm min}$ \begin{figure} \vspace{30pt} \centerline{ \begin{overpic}[width=0.95\textwidth,tics=10]{fig/Gtk_decay_2} \put (-3,62) {$\dfrac{G_{\bar \ell}(t,k)}{G_{\bar \ell}(\tau_{\rm th},k)}$} \put (-3,46.5) {$1$} \put (-11,36) {${(k\ell_{\rm th})^{2{\bar \ell}-2}}$} \put (-9,25) {${(k\ell_{\rm th})^{2{\bar \ell}}}$} \put (-5.5,7) {$e^{-S}$} % \put (5.5,50) {{\color{inkblue} UV}} \put (12,-4) {$1$} \put (25,-4) {$\frac{1}{(k\ell_{\rm th})^{2-\gamma}}$} \put (39,-4) {$\frac{1}{(k\ell_{\rm th})^2}\log \frac{1}{k\ell_{\rm th}}$} \put (63.5,-4) {$\frac{L^2}{\ell_{\rm th}^2}$} \put (78,-4) {$\frac{s L^{d+1}}{k\ell_{\rm th}^2}$} \put (95,-4) {$t/\tau_{\rm th}$} \put (92,11) {{\color{inkred} RMT}} % \put (15,35) {\small $\sim 1/t^{\frac{d}{2} + \bar \ell - 2}$} \put (33,29) {\small $\sim e^{-Dk^2 t}$} \put (50,20) {\small $\sim e^{-\sqrt{Dk^2 t}}$} \put (68,23) {\small $\sim e^{-Dk k_{\rm min} t}$} \put (52,40) {\small $1\to n(t)$} \put (70,40) {\small $1\to n_{\rm max}$} % \put (21,6) {{\rm I}} \put (38,6) {{\rm II}} \put (55,6) {{\rm III}} \put (73,6) {{\rm IV}} \end{overpic}} \vspace{20pt} \caption{\label{fig_Gtk}\small Schematic log-log plot of late time two-point functions \eqref{eq_Gs_FT} in interacting QFTs at finite temperature. The polynomial decay in regime I depends on the spatial spin of the operator, however regimes II-IV should occur in any diffusive system without the notion of spatial spin, such as non-integrable spin chains. The small wavevector of the operator $k\ell_{\rm th}\ll 1$ allows for a parametric separation of the four hydrodynamic regimes. 
We have assumed that the Thouless time occurs before the breakdown time \eqref{eq_breakdown}.} \end{figure} diffusive modes with momentum $k_{\min}$, so that at times later than the Thouless time $L^2/D$ the correlation function has the form \begin{equation}\label{eq_t_lastdecay} \hbox{regime IV:} \qquad\quad G(t, k) \sim e^{-D |k| k_{\rm min} |t|}\, , \qquad\quad \frac{L^2}{\ell_{\rm th}^2} \ \ll \ \frac{t}{\tau_{\rm th}} \ \ll \ \frac{s L^{d+1}}{k\ell_{\rm th}^2} \, , \end{equation} where $s$ is the entropy density. Here we are assuming that the Thouless time occurs before the breakdown of hydrodynamics \eqref{eq_breakdown}. When $t$ reaches the upper limit, the Green's function is exponentially small $\sim e^{-S}$. At this point we expect the correlator to be described by random matrix theory (RMT) \cite{Cotler:2016fpe,Gharibyan:2018jrp}, exhibiting a ramp that levels off to a plateau% \footnote{Note that the onset time of a RMT description $t_{\rm RMT}\sim \frac{s L^{d+1}}{k\ell_{\rm th}^2} $ depends on the observable, here through its wavevector $k$. Onset of RMT in the spectral form factor is expected to happen at earlier times: in $d=1$, $t_{\rm RMT}^{\rm SFF} \sim \frac{L^2}{D}$ \cite{Gharibyan:2018jrp} is smaller than the time scale above by a factor $k/s\lesssim k\ell_{\rm th} \ll 1$. }, upon averaging (over a few operators with the same quantum numbers, for example). The exponentially small value of the Green's function \begin{equation} G\sim e^{-S} \sim \exp \left[- \frac{s_o}{(k_{\rm min}\beta )^d}\right]\, , \end{equation} shows that RMT effects are non-perturbative in the hydrodynamics description, which is an expansion in $k\ell_{\rm th}\sim k\beta$ \eqref{eq_ell_th}. Fig.~\ref{fig_Gtk} summarizes the various regimes of the correlator. To our knowledge, regimes I, III and IV have not appeared previously in the literature. 
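The parametric separation between regimes I--III in the thermodynamic limit can be made concrete with a short sketch (the regime IV boundary involves the system size $L$ and is omitted; all values are illustrative):

```python
import math

def regime_boundaries(d, lbar, kl):
    # Crossover times t/tau_th from Eqs. (eq_t_reg), (eq_t_interm), (eq_breakdown)
    gamma = 2 * (d - 2) / (d + 2 * lbar - 4)
    t_I_to_II = kl ** -(2 - gamma)
    t_II_to_III = kl ** -2 * math.log(1 / kl**d)
    t_breakdown = kl ** -(2 * d + 2) * math.log(1 / kl)
    return t_I_to_II, t_II_to_III, t_breakdown
```

For $d=3$, $\bar\ell=2$ and $k\ell_{\rm th}=10^{-2}$ this gives roughly $5\times 10^2$, $10^5$ and $5\times 10^{16}$, widely separated as promised by $k\ell_{\rm th}\ll 1$.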
We emphasize that these results hold for any non-integrable QFT, with the regimes II, III and IV holding more generally for any diffusive system. The microscopic couplings only enter in the determination of the thermalization time $\tau_{\rm th}$, and transport parameters such as $D$. For weakly coupled theories, the early time behavior $t\ll \tau_{\rm th}$ can be studied using direct finite temperature perturbation theory or kinetic theory \cite{Arnold:1997gh} (which can also capture chaos \cite{Grozdanov:2018atb}). However it is difficult to observe the regimes I, III and IV directly in a weakly coupled approach, as hydrodynamic fluctuations are not captured by the linearized approximation to the Boltzmann kinetic equation. We close with a comment on the convergence of the perturbative expansion. The convergence of the hydrodynamic gradient expansion in large $N$ systems (where hydrodynamic interactions can be ignored if one takes the $N\to \infty$ limit first) has been discussed e.g.~in \cite{Heller:2015dha,Grozdanov:2019uhi}. Away from the large $N$ limit, one expects that loop effects cause the gradient expansion to be asymptotic, as usual in effective field theories. This is apparent in the $n$-diffuson contribution to the correlator \eqref{eq_G_ndiff}, which blows up when $n\gg 1/(k\ell_{\rm th})^d$. It would be interesting to understand if this explosion can be tamed, or Borel resummed, to produce a prediction for correlators $G(t,k)$ after the breakdown time \eqref{eq_breakdown}. In perturbative QFT, processes involving many particles also lead to a breakdown of the perturbative expansion, which can however be saved by expanding around a different saddle \cite{Libanov:1994ug,Son:1995wz} (see \cite{Badel:2019oxl} for recent developments); diffusive systems are a natural venue to study multiparticle processes, and perhaps apply some of these techniques. 
\section{Semiclassical theory of heavy operators in CFTs}\label{sec_largeD} We found in the previous section that the late time thermal two-point functions of light neutral operators of any spin are governed by hydrodynamics in generic (thermalizing) CFTs. Working in the microcanonical ensemble, this implies that off-diagonal heavy-heavy-light OPE coefficients $C_{HH'L}$ are universal, at least on average. A priori, the averaging must be done over a microcanonical window of states. However, heavy operators in thermalizing CFTs are expected to look typical, so that much less averaging may be needed in practice. This expectation is formalized by the ETH Ansatz \cite{PhysRevA.43.2046,PhysRevE.50.888,rigoleth,Lashkari:2016vgj} for the matrix elements of a light local operator $\mathcal{O}$ in energy-momentum eigenstates% \footnote{More precisely, the energy of the heavy state on the cylinder $\mathbb R \times S^d$ is $p_0$ and $p_i$ labels the spherical harmonic on the spatial sphere. We will mostly focus on regimes where the sphere can be approximated as $S^d\to \mathbb R^d$ (see Eq.~\eqref{eq_window} and comment below), so that $p_i=k_i$ will denote regular spatial momentum.} $\hat P_\mu|H\rangle = |H\rangle p_\mu $ : \begin{equation}\label{eq_ETH} \langle H' | \mathcal{O} | H\rangle = \langle \mathcal{O}\rangle_\beta\delta_{HH'} + \Omega(p)^{-1/2}R_{HH'}^{\mathcal{O}} \sqrt{\langle{\mathcal{O}\mathcal{O}}\rangle(p-p')}\, , \end{equation} where $\Omega(p)$ is the density of states at momentum $p$, and the $R_{HH'}^{\mathcal{O}}$ behave like independent random variables with unit variance.
Averaging Eq.~\eqref{eq_ETH} over a microcanonical window of heavy operators $H,\,H'$ simply states the equivalence between microcanonical and canonical ensembles; the non-trivial content of Eq.~\eqref{eq_ETH} is instead that microcanonical averaging is unnecessary: diagonal matrix elements directly produce thermal expectation values, and off-diagonal matrix elements probe out-of-equilibrium response, for example through the symmetric two-point function $\langle{\mathcal{O}\mathcal{O}}\rangle$. The appearance of $\langle{\mathcal{O}\mathcal{O}}\rangle$ in the variance above is required for the Ansatz to reproduce the two-point function \cite{rigoleth,Delacretaz:2018cfk} (note that the Wightman and symmetric two-point functions are approximately equal in the hydrodynamic regime \eqref{eq_tau_th}). In a CFT, the state-operator correspondence relates these matrix elements to OPE coefficients. For scalar operators (see e.g.~\cite{Pappadopulo:2012jk}) \begin{equation}\label{eq_OPE} C_{HH' \mathcal{O}} = R^{\Delta_{\mathcal{O}}}\langle H|\mathcal{O} |H'\rangle\, . \end{equation} The diagonal part of ETH \eqref{eq_ETH} implies that diagonal heavy-heavy-light OPEs are controlled by equilibrium thermodynamics, as found in Ref.~\cite{Lashkari:2016vgj} (see also \cite{Pappadopulo:2012jk,Gobeil:2018fzy}). In section \ref{ssec_OPE_thermo} their results are reviewed and extended to operators with spin. In section \ref{ssec_OPE_hydro} we turn to the off-diagonal part of \eqref{eq_ETH}, and show how hydrodynamics controls the corresponding OPE coefficients. \subsection{Thermodynamics in OPE data}\label{ssec_OPE_thermo} Consider a heavy operator $H$, with dimension $\Delta \equiv\Delta_H\gg 1$ larger than any other intrinsic number of the CFT (such as measures of the number of degrees of freedom). 
It will be useful to define the energy density $\epsilon$ of the state that $H$ creates on the cylinder $\mathbb R\times S^d$ of radius $R$ \begin{equation}\label{eq_stateop} \Delta = \epsilon R^{d+1} S_d\, , \end{equation} where $S_d \equiv {\rm Vol} S^d = \frac{2\pi^{(d+1)/2}}{\Gamma \left(\frac{d+1}{2}\right)}$. We can then reach the macroscopic limit by taking $R\to \infty$ while keeping $\epsilon$ fixed. The diagonal OPE coefficient is fixed by thermodynamics \cite{Lashkari:2016vgj} \begin{equation}\label{eq_Liu_result} C_{HH \mathcal{O}} = R^{\Delta_{\mathcal{O}}}\langle H|\mathcal{O} |H\rangle \simeq b_{\mathcal{O}}(R/\beta)^{\Delta_{\mathcal{O}}} = b_{\mathcal{O}} \left[\frac{d+1}{dS_d } \frac{\Delta}{b_T} \right]^{\Delta_{\mathcal{O}}/(d+1)}\, , \end{equation} where in the second step we used \eqref{eq_ETH}, and \eqref{eq_stateop} in the last to eliminate the radius. We used \eqref{eq_q_match_consti} and \eqref{eq_lambda_to_b} to express thermal expectation values as \begin{equation}\label{eq_T_thermalvev} \langle \mathcal{O}\rangle_\beta = \frac{b_{\mathcal{O}}}{\beta^{\Delta_{\mathcal{O}}}}\, , \qquad\qquad \langle T_{\mu\nu}\rangle_{\beta} = \frac{b_T}{\beta^{d+1}} \left( \delta_\mu^0 \delta_\nu^0 + \frac{\eta_{\mu\nu}}{d+1}\right)\, , \end{equation} where $b_T=s_o=s\beta^d$ is the dimensionless entropy density, and is related to the energy density as $\epsilon = \frac{d}{d+1} b_T/\beta^{d+1}$. Eq.~\eqref{eq_Liu_result} can be straightforwardly extended to operators with spin. From Eq.~\eqref{eq_q_match} we find that the thermal expectation value of a light operator of even spin $\ell$ takes the form \begin{equation}\label{eq_cft_spin_vev} \langle\mathcal{O}_{\mu_1 \cdots \mu_\ell}\rangle_\beta = \frac{b_{\mathcal{O}}}{\beta^{\Delta_{\mathcal{O}}}} \left(\delta_{\mu_1}^0 \cdots\delta_{\mu_{\ell}}^0 - {\rm traces}\right)\, , \end{equation} where we again used scale invariance to write $\lambda_0 = {b_{\mathcal{O}}}/{\beta^{\Delta_{\mathcal{O}}}}$. 
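As a sanity check of the scaling in \eqref{eq_Liu_result}, one can evaluate the diagonal OPE coefficient numerically: rescaling the cylinder radius $R\to \lambda R$ at fixed energy density, i.e.~$\Delta \to \lambda^{d+1}\Delta$, must rescale $C_{HH\mathcal{O}}$ by $\lambda^{\Delta_{\mathcal{O}}}$. A minimal sketch (the input values are purely illustrative, not CFT data):

```python
import math

def sphere_vol(d):
    # Vol(S^d) = 2 pi^{(d+1)/2} / Gamma((d+1)/2)
    return 2 * math.pi ** ((d + 1) / 2) / math.gamma((d + 1) / 2)

def diagonal_ope(Delta, Delta_O, b_O, b_T, d):
    # C_{HHO} = b_O [ (d+1)/(d S_d) * Delta/b_T ]^{Delta_O/(d+1)},
    # the thermodynamic result for the diagonal heavy-heavy-light OPE.
    S_d = sphere_vol(d)
    return b_O * ((d + 1) / (d * S_d) * Delta / b_T) ** (Delta_O / (d + 1))

# Rescaling R -> 2R at fixed energy density sends Delta -> 2^{d+1} Delta,
# and the OPE coefficient picks up the factor 2^{Delta_O}:
d, Delta_O, b_O, b_T = 3, 2.0, 1.0, 5.0
C1 = diagonal_ope(1.0e6, Delta_O, b_O, b_T, d)
C2 = diagonal_ope(2 ** (d + 1) * 1.0e6, Delta_O, b_O, b_T, d)
assert abs(C2 / C1 - 2 ** Delta_O) < 1e-9
```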
If the heavy operators are still scalars, the OPE coefficient $C_{HH\mathcal{O_{\ell}}}$ is parametrized by a single tensor structure \cite{Costa:2011mg}, which agrees with \eqref{eq_cft_spin_vev} (see Ref.~\cite{Jafferis:2017zna} for similar checks in the large charge limit). Now if the heavy operators carry a spin $J$ that is not macroscopic -- i.e. $\frac{J}{\Delta}\to 0$ in the macroscopic limit -- the states they create on the cylinder are homogeneous so that \eqref{eq_cft_spin_vev} still applies. However, many tensor structures can now appear \cite{Costa:2011mg}, each with their own OPE coefficients. For example, the OPE coefficient involving heavy states $| H,Jm\rangle $ in an irreducible representation $J$ of the Lorentz group with weight $|m|\leq J$ \begin{equation} \langle H, Jm| \mathcal{O}_{\ell}| H,Jm\rangle \sim C^m_{H_{J}H_{J} \mathcal{O}_{\ell}} \end{equation} could depend on $J$ and $m$ (we are using $SO(3)$ notation for simplicity, which can be easily generalized to $SO(d+1)$ for $d>2$). Note that we are focusing on diagonal matrix elements here, so that both states have to have the same weight. Comparison with \eqref{eq_cft_spin_vev} shows that the leading answer is in fact independent of $J$ and $m$ as long as $J\ll \Delta$, so that \eqref{eq_Liu_result} still holds \begin{equation}\label{eq_Liu_spin} C^m_{H_{J}H_{J} \mathcal{O}_{\ell}} \simeq b_{\mathcal{O}} \left[\frac{d+1}{dS_d } \frac{\Delta}{b_T} \right]^{\Delta_{\mathcal{O}}/(d+1)}\, . \end{equation} Diagonal OPE coefficients involving heavy operators with macroscopic spin $J\sim \Delta$ are discussed in section \ref{ssec_CFT_j}. 
\subsection{Hydrodynamics in OPE data}\label{ssec_OPE_hydro} We will use \eqref{eq_ETH} and the correlators obtained in Sec.~\ref{sec_hydro} to determine OPE coefficients \eqref{eq_OPE} in the `macroscopic' limit $R\to \infty$, with a `mesoscopic' difference in the dimensions of the heavy operators $\Delta\equiv \Delta_H$, $\Delta'\equiv \Delta_{H'}$, namely \begin{equation}\label{eq_stateop_meso} \Delta\simeq \Delta' = \epsilon R^{d+1} S_d\, , \qquad\qquad \Delta-\Delta' = \omega R\, , \end{equation} in spacetime dimensions $d+1\geq 3$, keeping the energy density $\epsilon$ and frequency $\omega$ finite. When the mesoscopic difference in their dimensions is not large \begin{equation} \omega = \frac{\Delta - \Delta'}{R} \ll \frac{1}{\tau_{\rm th}}\sim \frac{1}{\beta}\, , \end{equation} the off-diagonal OPE coefficient $C_{HH' \mathcal{O}}$ is controlled by hydrodynamics. In the last step we have assumed the CFT is strongly coupled, so that the thermalization time is set by the temperature -- in a weakly coupled CFT the frequency window where hydrodynamics applies is parametrically suppressed,% \footnote{A CFT is expected to be weakly coupled when its twist gap $\gamma \equiv \min \left(\Delta- J\right)-d+1\geq 0$ is small $\gamma\ll 1$. In this case the thermalization time is parametrically enhanced $\tau_{\rm th} \sim \beta / \gamma$, as can be observed e.g.~in the $O(N)$ model by comparing its thermalization time \cite{sachdev2007quantum} to its twist gap \cite{Giombi:2016hkj}. Generic CFTs are expected to satisfy $\gamma\gtrsim 1$ and $\tau_{\rm th} \sim \beta$.} see discussion below \eqref{eq_ell_th}. 
Eliminating the radius, this hydrodynamic window is \begin{equation}\label{eq_window} \left(\frac{\Delta}{b_T}\right)^{-\frac{1}{(d+1)}} \lesssim\ \Delta - \Delta' \ \lesssim \ \left(\frac{\Delta}{b_T}\right)^{\frac{1}{(d+1)}}\, . \end{equation} The lower bound comes from the fact that the hydrodynamic results will receive corrections from the finite size of the sphere of radius $R$ at the Thouless energy $\omega \sim D/R^2$ -- these could be obtained by generalizing the hydrodynamic correlators of Sec.~\ref{sec_hydro} to the sphere, but we will not attempt to do so here; we expect the singular features that we find in OPE coefficients to be softened in that regime. The upper bound however is a fundamental UV cutoff of hydrodynamics \eqref{eq_tau_th}, assuming $\tau_{\rm th}\sim \beta$. The OPE coefficients $C_{H_JH'_{J'}\mathcal{O}_\ell}$ will depend on the quantum numbers of the heavy operators $\Delta,\, \Delta',\, J,\, J'$, those of the light operator $\Delta_{\mathcal{O}},\, \ell$, the thermal properties of the CFT through $b_T$ and $\eta_o\equiv \eta / s$, and finally on the thermal properties of the light operator through the coefficients $b_i$ in \eqref{eq_lambda_to_b}. In CFTs, the hydrodynamic correlators of Sec.~\ref{sec_hydro} simplify somewhat, because tracelessness of the stress tensor forces the bulk viscosity to vanish $\zeta=0$ and fixes the speed of sound $c_s^2 \equiv \frac{\partial P}{\partial\epsilon} = \frac{1}{d}$. The diffusion constant and sound attenuation rate in \eqref{eq_GT0i0j} or \eqref{eq_OO_lspatial} are therefore given by \begin{equation} D = \eta_o \beta\, , \qquad\qquad \Gamma_s = \frac{2(d-1)}{d} \eta_o \beta\, . \end{equation} The simplest case of three scalar operators $J=J'=\ell=0$ is somewhat subtle and will be discussed further below. Instead we start with a light spinning operator with $\ell\geq 2$, and take $J,J'$ `microscopic', i.e.~they are kept fixed $\sim 1$ in the macroscopic limit $R\to \infty$. 
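Before specializing, the CFT transport relations above and the window \eqref{eq_window} can be summarized in a short numerical sketch (parameter values are purely illustrative; numerical prefactors in the window are dropped):

```python
def cft_transport(eta_o, beta, d):
    # In a CFT, zeta = 0 and c_s^2 = 1/d fix the diffusion constant and
    # sound attenuation rate in terms of eta/s alone.
    D = eta_o * beta
    Gamma_s = 2 * (d - 1) / d * eta_o * beta
    return D, Gamma_s

def hydro_window(Delta, b_T, d):
    # Bounds of the hydrodynamic window on Delta - Delta', up to O(1)
    # factors: (Delta/b_T)^{-1/(d+1)} <~ Delta - Delta' <~ (Delta/b_T)^{1/(d+1)}.
    lower = (Delta / b_T) ** (-1 / (d + 1))
    upper = (Delta / b_T) ** (+1 / (d + 1))
    return lower, upper

D, Gamma_s = cft_transport(eta_o=0.1, beta=1.0, d=3)
assert abs(Gamma_s - 4 / 3 * 0.1) < 1e-12   # Gamma_s = (4/3) eta_o beta in d=3

# The window widens as the heavy operators get heavier:
lo1, hi1 = hydro_window(1e4, b_T=1.0, d=3)
lo2, hi2 = hydro_window(1e8, b_T=1.0, d=3)
assert lo2 < lo1 < hi1 < hi2
```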
\subsubsection{Microscopic spin $J,J'$} Keeping the spin $J,\,J'\sim 1$ fixed in the $R\to \infty$ limit implies that the Green's function in \eqref{eq_ETH} must be evaluated at spatial wave-vector $k = \frac{J-J'}{R} = 0$. In this case we found that the correlator is controlled by hydrodynamic loops: for components with $\bar\ell\geq 2$ spatial indices, the Green's function is (using \eqref{eq_OO_lspatial} and \eqref{eq_lambda_to_b}) \begin{equation}\label{eq_cft_gll} \langle{\mathcal{O}_{(\bar \ell,\ell)}\mathcal{O}_{(\bar \ell,\ell)}}\rangle(\omega,k=0) \simeq \frac{\beta^{d-2\Delta_{\mathcal{O}}}}{b_T^2} \frac{b_{\bar\ell-2}^2 }{\omega} \left(\frac{\beta \omega}{\eta_o}\right)^{\alpha_{\bar \ell}} \end{equation} with $\alpha_{\bar\ell}= \frac{d}{2} + \bar \ell -2$ for $\bar\ell$ even, and $\alpha_{\bar\ell}= d + \bar\ell - 3 $ for $\bar\ell$ odd (see \eqref{eq_LTT_odd}). Converting this into an expression for the OPE $C_{H_JH'_{J'}\mathcal{O}_\ell}$ using \eqref{eq_ETH} and \eqref{eq_OPE}, we see that the hydrodynamic answer predicts a tensor structure (and fixes the corresponding OPE coefficient) for each $\bar \ell = 0,\,1,\,\ldots,\,\ell$. However if the heavy operators are both scalars, conformal invariance constrains the three-point function up to a single OPE coefficient (the illegal step was to apply ETH \eqref{eq_ETH} before accounting for all symmetries). 
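The exponents $\alpha_{\bar\ell}$ entering \eqref{eq_cft_gll} can be tabulated in a few lines (a trivial helper; the checks below use $d=3$ for illustration):

```python
def alpha(ell_bar, d):
    # Exponent of the hydrodynamic-loop tail at k = 0:
    # alpha = d/2 + ell_bar - 2 for even ell_bar,
    # alpha = d   + ell_bar - 3 for odd  ell_bar.
    if ell_bar % 2 == 0:
        return d / 2 + ell_bar - 2
    return d + ell_bar - 3

# In d = 3 spatial dimensions the two lowest cases with ell_bar >= 2 give:
assert alpha(2, d=3) == 1.5   # even ell_bar = 2
assert alpha(3, d=3) == 3     # odd  ell_bar = 3
# Odd components are always more suppressed than the neighboring even ones:
assert all(alpha(l + 1, 3) > alpha(l, 3) for l in range(2, 8, 2))
```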
To accommodate the tensor structures obtained from hydrodynamics, it is sufficient to let one of the heavy operators have spin $J\geq \ell$ -- this leads to precisely $\ell+1$ tensor structures in agreement with the CFT prediction% \footnote{In the notation of Ref.~\cite{Costa:2011mg}, the tensor structures in the sum are $H_{\mathcal{O}H}^{\bar\ell} V_{\mathcal{O}}^{\ell-\bar\ell}$.} \begin{equation} \langle H, Jm | \mathcal{O}_{\mu_1 \cdots \mu_\ell} | H'\rangle = \sum_{{\bar\ell}=0}^\ell C^{\bar\ell}_{H_J H' \mathcal{O}_\ell} \delta_{|m|}^{\bar\ell} \delta_{\mu_1}^0 \cdots \delta_{\mu_{\ell -{\bar\ell}} }\delta_{\mu_{\ell - {\bar\ell} + 1}}^{\sigma} \cdots \delta_{\mu_{\ell}}^{\sigma} + \hbox{perm} - \hbox{traces}\, , \end{equation} where $\sigma= {\rm sgn}(m) = \pm$ denotes the spatial directions $\pm = x_1 \pm ix_2$ ($x_1,\,x_2$ are the directions used to define the weight $m$, i.e. $J_{12}|H,Jm\rangle = |H,Jm\rangle m$). Combining Eqs.~\eqref{eq_ETH}, \eqref{eq_OPE} and \eqref{eq_cft_gll} therefore gives \begin{equation}\label{eq_CHHL_1} |C_{H_{J}H_{J'}\mathcal{O}_\ell}^{\bar \ell}|^2 \simeq e^{-S} \frac{b_{\bar\ell -2}^2}{b_T^2} \frac{(R/\beta)^{2\Delta_{\mathcal{O}}}}{\beta\omega} \left(\frac{\beta\omega}{\eta_o}\right)^{\alpha_{\bar\ell}}\, . \end{equation} Here we have replaced the random number with unit variance $|R_{HH'}|^2 \to 1$. Strictly speaking this expression for $|C_{HH'L}|^2$ and those below should be thought of as average statements, averaged over a few heavy operators $H$ or $H'$. We have taken the density of states at energy $E = \Delta/R$ to be \begin{equation}\label{eq_Cardy} \Omega(E) \simeq \beta^{d+1} e^{S}\,, \quad\qquad \hbox{with} \quad S = b_T S_d (R/\beta)^d = b_TS_d \left(\frac{d+1}{d S_d} \frac{\Delta}{b_T}\right)^{\frac{d}{d+1}}\, . 
\end{equation} Eliminating the radius $R$ in \eqref{eq_CHHL_1} gives (dropping numerical factors) \begin{equation}\label{eq_CHHL_2} |C_{H_{J}H'_{J'}\mathcal{O}_\ell}^{\bar \ell}|^2 \simeq e^{-S} \frac{b_{\bar\ell-2}^2}{b_T^2} \left( \frac{\Delta}{b_T}\right)^{\frac{2(\Delta_{\mathcal{O}}-\alpha_{\bar\ell}+1)}{d+1}} \frac{\left(\Delta-\Delta'\right)^{\alpha_{\bar\ell}-1}}{\eta_o^{\alpha_{\bar\ell}}} \times\left[1+ O(\omega\tau_{\rm th}) + O\Bigl(\frac{1}{\omega R}\Bigr)\right]. \end{equation} We will not attempt to control subleading extensions to the Cardy formula \eqref{eq_Cardy}, and therefore will not comment on the subexponential dependence on $\Delta$. However, we draw the reader's attention to the non-analytic dependence on $\Delta-\Delta'$, coming from hydrodynamic fluctuations. Corrections to the leading result are shown in the square brackets, and come from less relevant terms in hydrodynamics and finite volume corrections: \begin{equation} \omega\tau_{\rm th} \sim \frac{\Delta-\Delta'}{(\Delta/b_T)^{\frac{1}{d+1}}}\, , \qquad\qquad \frac{1}{\omega R} \sim \frac{1}{(\Delta-\Delta')(\Delta/b_T)^{\frac{1}{d+1}}}\, , \end{equation} both of which are parametrically small in the regime \eqref{eq_window}. If both $J,\,J'\geq 1$, there may be more tensor structures allowed by conformal invariance than needed -- the claim of thermality is that as in \eqref{eq_Liu_spin} the leading OPE coefficients will not depend on these extra indices. \subsubsection{Mesoscopic spin $J,J'$} Let us now extend to heavy operators with `mesoscopic' spin. More precisely, we want the spins to be non-macroscopic (so that the state on the sphere remains homogeneous), and the difference in spins to be mesoscopic, or \begin{equation} {J},\, {J'} = o(R^{d+1})\, , \qquad\quad J-J' = kR\, , \end{equation} in the limit $R\to \infty$. Let us first consider a light operator $\mathcal{O}$ with spin $\ell = 0$. 
In Sec.~\ref{sec_hydro} we found that its thermal expectation value leads to (see \eqref{eq_OO_0spatial}) \begin{equation}\label{eq_cft_OOscalar} \langle \mathcal{O}\rangle_\beta = \frac{b_0}{\beta^{\Delta_{\mathcal{O}}}} \quad \Rightarrow \quad \langle \mathcal{O} \mathcal{O}\rangle(\omega,k) \simeq \left( \frac{b_{0}\Delta_{\mathcal{O}}}{\beta^{\Delta_{\mathcal{O}}}}\right)^2 \frac{2\beta^d}{b_T} \frac{\frac{2d-1}{d^3} \eta_o \beta k^4 }{\left(\omega^2 - \frac{1}{d}k^2\right)^2 + \left(\frac{2d-1}{d}\eta_o \beta \omega k^2\right)^2}\, . \end{equation} Conformal invariance allows for many tensor structures for the three-point function \begin{equation} \langle H,Jm | \mathcal{O} | H',J'm'\rangle = \delta_{mm'} C^{|m|}_{H_JH_{J'}\mathcal{O}}\, , \end{equation} i.e. there is an OPE coefficient for every $|m|=0,\,1,\,\ldots , \, {\rm min}(J,J')$ (the OPE is diagonal in the weights $m,\, m'$ because a scalar operator $\mathcal{O}$ inserted at the north pole preserves rotations about the pole). However we see from \eqref{eq_cft_OOscalar} that these coefficients do not depend on $m$ and are given by \begin{equation}\label{eq_HJHJscalar} |C_{H_J H'_{J'}\mathcal{O}}^{|m|}|^2 \simeq \frac{\alpha}{e^S} \frac{\eta_o(J-J')^4} {\left[ \left(\Delta-\Delta'\right)^2 - \frac{1}{d}(J-J')^2\right]^2 + a_{d\,} \eta_o^2\left(\frac{b_T}{\Delta}\right)^{\frac{2}{d+1}} \left(\Delta-\Delta'\right)^2 (J-J')^4}\, , \end{equation} with $a_d = \left(\frac{2(d-1)}{d}\right)^2 \left(\frac{d S_d}{d+1}\right)^{\frac{2}{d+1}}$ and where the subexponential dependence on $\Delta$ (which is degenerate with logarithmic corrections to $S(\Delta)$) was packaged in $\alpha \propto \frac{(b_0\Delta_{\mathcal{O}})^2}{b_T^2} \left(\frac{\Delta}{b_T}\right)^{{2\Delta_{\mathcal{O}}}/({d+1})} $. These OPE coefficients feature a `resonance' at the sound mode $\Delta - \Delta' = \pm\frac{1}{\sqrt{d}}(J-J')$. 
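This resonance structure can be checked with a minimal numerical sketch of \eqref{eq_HJHJscalar}, dropping the overall normalization (the parameter values below are purely illustrative):

```python
import numpy as np

def ope_sq(dw, dJ, eta_o, eps, d):
    # Shape of |C_{H_J H'_{J'} O}|^2 as a function of dw = Delta - Delta',
    # up to overall normalization; eps stands in for the combination
    # a_d (b_T/Delta)^{2/(d+1)}, which is small for heavy operators.
    denom = (dw**2 - dJ**2 / d) ** 2 + eps * eta_o**2 * dw**2 * dJ**4
    return eta_o * dJ**4 / denom

d, dJ, eta_o, eps = 3, 10.0, 0.1, 1e-4
dw = np.linspace(0.1, 12.0, 100_000)
peak = dw[np.argmax(ope_sq(dw, dJ, eta_o, eps, d))]

# The OPE coefficients peak at the sound mode Delta - Delta' = dJ/sqrt(d),
# up to a small viscous shift:
assert abs(peak - dJ / np.sqrt(d)) < 0.05
```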
The resonance is sharp for heavy operators, with a width $\frac{\eta_o}{(\Delta/b_T)^{1/(d+1)}}\ll 1$ controlled by the shear viscosity to entropy ratio $\eta_o \equiv \eta/s$. The case $J=J'$ is somewhat special: only in this case does the contribution \eqref{eq_HJHJscalar} vanish, and the OPE is given by a subleading hydrodynamic tail $|C_{H H'\mathcal{O}}^{|m|}|^2\sim \alpha e^{-S} \left(\Delta-\Delta'\right)^{\frac{d}{2}-1}$ similar to \eqref{eq_CHHL_2}, see appendix \ref{sapp_hydro_subleading}. We are now ready to turn to the general case of the heavy-heavy-light OPE coefficient of three spinning operators. The hydrodynamic prediction was given in Eq.~\eqref{eq_OO_lspatial}. To match with OPE coefficients we will need the precise index structure, which can be conveniently packaged by using the index-free notation \begin{equation} \langle \mathcal{O}_{(\bar\ell,\ell)}\mathcal{O}_{(\bar\ell,\ell)}\rangle \equiv z^{i_1} \cdots z^{i_{\bar \ell}} \langle \mathcal{O}_{i_1\cdots i_{\bar\ell}0\cdots 0} \mathcal{O}_{j_1\cdots j_{\bar\ell}0\cdots 0}\rangle z'^{j_1} \cdots z'^{j_{\bar \ell}}\, , \end{equation} with $z^2 = z'^2 = 0$. In this notation, the full index structure of \eqref{eq_OO_lspatial} is given in appendix \ref{app_hydro} (see Eq.~\eqref{eq_OO_lspatial_full}). 
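As a minimal illustration of this null-vector notation, one can build complex null polarization vectors explicitly (the wave-vector below is a placeholder):

```python
import numpy as np

# Complex null polarization vectors in d = 3 spatial dimensions:
# z^2 = z'^2 = 0, contracted without complex conjugation.
z = np.array([1.0, 1.0j, 0.0]) / np.sqrt(2)
zp = np.array([1.0, -1.0j, 0.0]) / np.sqrt(2)
k = np.array([0.0, 0.0, 2.5])   # illustrative wave-vector

assert abs(z @ z) < 1e-12 and abs(zp @ zp) < 1e-12   # null
assert abs(z @ zp - 1.0) < 1e-12                     # z.z' = 1 for this pair

# With k orthogonal to both polarizations, the contractions
# (k.z)^s (k.z')^s (z.z')^{ell_bar - s} vanish unless s = 0:
ell_bar = 2
vals = [(k @ z) ** s * (k @ zp) ** s * (z @ zp) ** (ell_bar - s)
        for s in range(ell_bar + 1)]
assert abs(vals[0] - 1.0) < 1e-12
assert all(abs(v) < 1e-12 for v in vals[1:])
```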
For a CFT \eqref{eq_OO_lspatial_full} becomes \begin{equation}\label{eq_OO_lspatial_cft} \begin{split} \frac{\langle \mathcal{O}_{(\bar\ell,\ell)}\mathcal{O}_{(\bar\ell,\ell)}\rangle(\omega,k)}{\beta^{d-2\Delta_{\mathcal{O}} +1}} &= \frac{(b_{\bar \ell-1} + b_{\bar\ell} \frac{k^2}{\omega \sqrt{d}})^2}{b_T} \frac{\eta_o \omega^2 (k\cdot z)^{\bar\ell}(k\cdot z')^{\bar\ell}}{\left(\omega^2- \frac{1}{d}k^2\right)^2 + \left(\frac{2(d-1)}{d}\right)^2\eta_o^2\omega^2 k^4}\\ & + \frac{(b_{\bar \ell-1})^2}{b_T} \left(z\cdot z' - \frac{(k\cdot z) (k\cdot z')}{k^2}\right) \frac{\eta_o k^2 (k\cdot z)^{\bar\ell -1}(k\cdot z')^{\bar\ell -1}}{\omega^2 + \eta_o^2 k^4}\\ &+ \frac{(b_{\bar\ell-2})^2}{b_T^2} \frac{(z\cdot z')^{\bar \ell}}{\omega} \left(\frac{\omega}{\eta_o}\right)^{\alpha_{\bar\ell}} + \cdots\, , \end{split} \end{equation} where we absorbed numerical factors in the coefficients $b_i$, and $\omega$ and $k$ are measured in units of temperature to simplify the expression. The hydrodynamic result \eqref{eq_OO_lspatial_cft} contains a structure for each $\bar \ell = 0,\,1,\,\dots,\,\ell$. Moreover, the index contractions in \eqref{eq_OO_lspatial_cft} take the form \begin{equation}\label{eq_my_basis} (k\cdot z)^{\symb} (k\cdot z')^{\symb} (z\cdot z')^{\bar\ell-\symb}\, , \end{equation} with $\symb = 0,\,1,\,\dots,\,\bar \ell$. The leading hydrodynamic result \eqref{eq_OO_lspatial_cft} only contains these structures for $\symb = \bar\ell,\, \bar\ell-1$ and $0$; coefficients of the structures for other values of $\symb$ will be controlled by subleading hydrodynamic tails (in $d=2$ these additional tails are only log suppressed compared to the leading ones, see Sec.~\ref{ssec_dc}). One therefore obtains $\sum_{\bar\ell=0}^\ell (\bar\ell+1) = \frac{1}{2} (\ell+1)(\ell+2)$ structures. 
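The counting above is elementary but worth making explicit (a one-line check):

```python
def n_structures(ell):
    # Number of structures produced by hydrodynamics for a light operator
    # of spin ell: one structure for each (ell_bar, s) with
    # ell_bar = 0,...,ell and s = 0,...,ell_bar.
    return sum(ell_bar + 1 for ell_bar in range(ell + 1))

# Agrees with the closed form (ell+1)(ell+2)/2 for the first few spins:
for ell in range(8):
    assert n_structures(ell) == (ell + 1) * (ell + 2) // 2
```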
When $J,J'\geq \ell$, Ref.~\cite{Costa:2011mg} showed that the CFT three-point function $\langle H| \mathcal{O} | H'\rangle$ contains more structures: there are $\frac{1}{2}\left(\ell +1\right)\left(\ell +2\right) \left({\rm min}(J,J') + 1 -\frac{\ell}{3}\right)$ structures, which in their notation take the form \begin{figure} \centerline{ \subfigure{ \begin{overpic}[width=0.71\textwidth,tics=10]{fig/tensor_structures} \put (2.5,57.5) {$\ell$} \put (13,67.5) {$J$} \put (13,45.5) {$J'$} % \end{overpic} } } \caption{\label{fig_structures} CFT tensor structures for the three-point function $\langle H,J| \mathcal{O}_\ell|H',J'\rangle$ with $\ell=2$, adapted from \cite{Costa:2011mg}, Fig.~2. The OPEs obtained from hydrodynamics only depend on the $\frac{1}{2}(\ell+1)(\ell+2)$ `contractions' involving the light operator (solid lines), and not on the contractions between the heavy operators that create the thermal state (dashed lines). } \end{figure} \begin{equation}\label{eq_cft_basis} H_{\mathcal{O}H}^a H_{\mathcal{O}H'}^{a'} H_{HH'}^b V_\mathcal{O}^{\ell - a - a'} V_H^{J-a-b} V_{H'}^{J'-a'-b} \end{equation} where $a,\, a',\,b$ run over all integers such that the powers above are non-negative. The hydrodynamic OPE coefficients \eqref{eq_OO_lspatial_cft} only depend on the contractions between the light operator and the heavy ones, hence on $a,\, a'$ but not on $b$ (see Fig.~\ref{fig_structures}). Since $a,\, a'= 0,\,1,\,\dots,\,\ell$ satisfy $a+a'\leq \ell$ this indeed produces $\frac{1}{2} (\ell+1)(\ell+2)$ structures. We will not explicitly write the map $(a,a') \leftrightarrow (\bar\ell,\symb)$ between the bases \eqref{eq_my_basis} and \eqref{eq_cft_basis}, and instead label OPE coefficients with $\bar\ell,\,\symb$ and $b$ as $C_{H_J H'_{J'} \mathcal{O}_\ell}^{(\bar\ell,\, \symb,\,b)}$. From \eqref{eq_OO_lspatial_cft} one then finds \begin{align}\label{eq_CHHL_final} C_{H_J H'_{J'} \mathcal{O}_\ell}^{(\bar\ell,\, 0,\,b)} \notag &= \hbox{Eq. 
\eqref{eq_CHHL_2}} \, , \\ C_{H_J H'_{J'} \mathcal{O}_\ell}^{(\bar\ell,\, \bar \ell-1,\,b)} &\simeq \frac{\alpha}{e^S} \frac{ \frac{(b_{\bar\ell-1})^2}{b_T}\eta_o(J-J')^{2\bar\ell}} {\left(\Delta-\Delta'\right)^2 + \tilde a_{d\,} \eta_o^2\left(\frac{b_T}{\Delta}\right)^{\frac{2}{d+1}} (J-J')^4} \, , \\ C_{H_J H'_{J'} \mathcal{O}_\ell}^{(\bar\ell,\, \bar \ell,\,b)} &\simeq \frac{\alpha}{e^S} \frac{\frac{1}{b_T} \left(b_{\bar\ell-1} + b_{\bar \ell} \sqrt{\frac{\tilde a_d}{d}}\left(\frac{b_T}{\Delta}\right)^{\frac{1}{d+1}}\frac{(J-J')^2}{(\Delta-\Delta')}\right)^2\eta_o(\Delta - \Delta')^2(J-J')^{2\bar\ell}} {\left[ \left(\Delta-\Delta'\right)^2 - \frac{1}{d}(J-J')^2\right]^2 + a_{d\,} \eta_o^2\left(\frac{b_T}{\Delta}\right)^{\frac{2}{d+1}} \left(\Delta-\Delta'\right)^2 (J-J')^4} - C_{H_J H'_{J'} \mathcal{O}_\ell}^{(\bar\ell,\, \bar \ell-1,\,b)} \, . \notag \end{align} with $\alpha \propto \left(\frac{\Delta}{b_T}\right)^{\frac{2(\Delta_{\mathcal{O}} - \ell + 1)}{d+1}}$ and with the numerical factors $a_d = \left(\frac{2(d-1)}{d}\right)^2 \left(\frac{d S_d}{d+1}\right)^{\frac{2}{d+1}}$, $\tilde a_d = \left(\frac{d S_d}{d+1}\right)^{\frac{2}{d+1}}$. Subleading corrections to these results are similar to those in Eq.~\eqref{eq_CHHL_2}. \subsection{Macroscopic spin}\label{ssec_CFT_j} Let us now briefly comment on heavy operators with macroscopic spin \begin{equation}\label{eq_J_macro} J \sim \Delta \sim R^{d+1}\, . \end{equation} Macroscopic spin has been treated in an EFT approach for large charge in CFTs with a $U(1)$ symmetry \cite{Cuomo:2017vzg,Cuomo:2019ejv}. It was found there that in the regime \eqref{eq_J_macro}, the superfluid state forms a vortex lattice, such that the coarse-grained superfluid velocity is equal to that of a rotating body with angular momentum $J$. For a normal fluid, one expects a similar stationary solution to the Navier-Stokes equations% \footnote{We thank Jo\~ao Penedones for suggesting this.}. 
Let us work in $d=2$ spatial dimensions for simplicity, and search for a velocity profile $u_\mu \equiv (u_0,u_\theta,u_\phi) = \left((1+v_\phi^2)^{1/2},0,v_\phi\right)$ with an azimuthal velocity that only depends on the polar angle $v_\phi=v_\phi(\theta)$. Now a typical state with angular momentum on the sphere will equilibrate (preserving its angular momentum) to an equilibrium velocity profile $v_{\phi}(\theta)$ that does not dissipate; in particular it must be annihilated by the shear viscosity term in \eqref{eq_T_consti} -- this leads to a differential equation which can be solved for $v_{\phi}(\theta)$. Energy eigenstates created by heavy operators are expected to look thermal and should have this velocity profile. The ideal stress tensor can then be obtained by imposing conservation \begin{equation} T_{\mu\nu} \simeq s_o(\theta) \left(u_\mu(\theta)u_\nu (\theta) + \frac{g_{\mu\nu}}{d+1}\right)\, , \qquad\quad \nabla_\mu T^{\mu\nu} = 0\, , \end{equation} where $g_{\mu\nu}$ is the metric on the sphere. This equation can be solved for $s_o(\theta)$. Finally, computing the total angular momentum of this flow one finds that it is related to the velocity at the equator by \begin{equation} v_{\rm max} = v_\phi(\pi/2) \sim \frac{J}{\Delta}\, . \end{equation} OPE coefficients between heavy operators of macroscopic spin $J$ and light operators can be obtained as in the previous sections by now expanding the constitutive relations around the velocity profile $u_{\mu}(\theta)$. This hydrodynamic picture is expected to break down near the unitarity bound $J\leq \Delta - d + 1$ -- in particular at low twist $\Delta-J \sim 1$ the spectrum is sparse and populated by double- and higher-twist primaries \cite{Fitzpatrick:2012yx,Komargodski:2012ek}, see Fig.~\ref{fig_spectrum}. Increasing twist to go away from the edges of the spectrum will increase the density of states, eventually leading to a finite entropy density and temperature. 
It is tempting to view the thermal state with macroscopic spin \eqref{eq_J_macro} as a `gas of multi-twist states', analogously to how heating up a superfluid leads to a normal fluid component carried by a gas of phonons (this two-fluid picture, and the emergence of dissipative hydrodynamics from a conformal superfluid is discussed in Sec.~\ref{sss_sflu}). The operator phase diagram, including spin, is discussed in more depth in Sec.~\ref{ssec_transition} for theories with an additional $U(1)$ symmetry. We leave the study of OPE coefficients for heavy operators with macroscopic spin using hydrodynamics in a rotating background for future work. \section{Global symmetries}\label{sec_u1} It is straightforward to extend the results above to QFTs and CFTs with an internal symmetry group $G$; this section deals with the simplest example $G=U(1)$. The additional Ward identity $\partial_\mu J^\mu = 0$ protects a new slow excitation -- charge density $J_0$ -- whose fluctuations will give additional contributions to late time correlators. A background chemical potential $\mu$ can be introduced for the internal symmetry. In sections \ref{ssec_u1hydro} and \ref{ssec_u1hydro_mu} we briefly review the hydrodynamic treatment with $\mu=0$ and $\mu\neq 0$ (a more complete exposition can be found in Ref.~\cite{Kovtun:2012rj}) and derive the universal late time behavior of thermal correlators for QFTs with a global $U(1)$ symmetry. The internal symmetry can be spontaneously broken, in which case the theory is described by dissipative superfluid hydrodynamics% \footnote{Dissipative superfluid hydrodynamics also describes 2+1d theories at finite temperatures $0<T<T_{\rm BKT}$, where strictly there is no spontaneous symmetry breaking; the protection of the long-lived superfluid phase can however be understood without reference to symmetry breaking \cite{Delacretaz:2019brr}.}. 
A new feature in this phase is that the late time correlators of operators charged under the $U(1)$ symmetry are also controlled by hydrodynamics, because of the additional hydrodynamic field $\phi$ which non-linearly realizes the $U(1)$ symmetry. In section \ref{ssec_sflu}, the hydrodynamic treatment of Refs.~\cite{Herzog:2011ec,Bhattacharya:2011tra} is reviewed, and late time correlators of light operators derived. In the simplest situations we expect the dissipative superfluids to be smoothly connected to $T=0$ superfluids on the edge of the spectrum in Fig.~\ref{fig_spectrum}. This will allow us to connect to recent work on the large charge limit of CFTs \cite{Hellerman:2015nra,Alvarez-Gaume:2016vff,Monin:2016jmo,Cuomo:2017vzg,Jafferis:2017zna,Cuomo:2019ejv}. The large charge limit can be thought of as a situation where a semiclassical description survives as $T\to 0$ (with fixed $\mu\neq 0$), thanks to spontaneous breaking of the $U(1)$ symmetry. The implications of long-time tails on the CFT data of CFTs with a $U(1)$ symmetry are studied in section \ref{ssec_CFT_Q}. Various regions in the $(\Delta,Q)$ plane will be described by the hydrodynamic theories of sections \ref{ssec_u1hydro}, \ref{ssec_u1hydro_mu} and \ref{ssec_sflu}, following Fig.~\ref{fig_spectrum}. Finally, the presence of distinct phases in the large $\Delta$ spectrum naturally brings us to phase transitions. In section \ref{ssec_transition}, we study signatures of thermal phase transitions on the CFT data. \subsection{Hydrodynamics of a charged fluid}\label{ssec_u1hydro} The conservation laws $\partial_\mu T^{\mu\nu}=0,\,\partial_\mu J^\mu = 0$ must be supplemented with constitutive relations for the currents. 
In the Landau frame and up to first order in derivatives, the constitutive relation for the stress-tensor is still given by Eq.~\eqref{eq_T_consti} and that of the $U(1)$ current is \cite{Kovtun:2012rj} \begin{equation}\label{eq_j_consti} J^\mu = \rho u^\mu -\kappa \Delta^{\mu\nu} \partial_\nu(\beta \mu) + \chi_{\rm T} \Delta^{\mu\nu} \partial_\nu\beta + O(\partial^2)\,. \end{equation} Three new parameters were introduced: $\rho,\, \kappa$ and $\chi_{\rm T}$.% \footnote{Another commonly used notation for the conductivity is $\sigma_Q \equiv \kappa \beta$.} These, along with those appearing in \eqref{eq_T_consti}, are functions of both $\mu$ and $\beta$. Consistency with thermodynamics fixes $\rho$ in terms of the equation of state $\rho = \partial P / \partial\mu$, and imposes $\kappa \geq 0$ and $\chi_{\rm T}=0$.% \footnote{Note that $\chi_{\rm T}$ is only forbidden because $J_\mu$ is conserved. Generic non-conserved spin-1 operators will have terms like $\chi_{\rm T}$ in their constitutive relation. In holographic models, along the lines of Ref.~\cite{Myers:2016wsu}, these could come from coupling a massive gauge field in the bulk to the Weyl tensor, e.g.~through $A^\mu \partial_\mu C_{\rm Weyl}^2$. } Hydrodynamic correlators can again be obtained by expanding around equilibrium \eqref{eq_linearize} with $\mu(x) = \mu + \delta \mu(x)$. If we first take the background chemical potential to vanish $\mu = 0$, then the background charge density $\rho$ vanishes by CPT and we see directly from \eqref{eq_j_consti} that there is no mixing at the linear level between the new hydrodynamic degree of freedom $\delta\mu$ and the ones considered previously $\delta u_\mu,\, \delta \beta$, at least to this order in derivatives. 
The stress-tensor correlator \eqref{eq_GT0i0j} is therefore unchanged, and the current correlator is given by \begin{equation}\label{eq_Gj0j0} G^R_{J_0 J_0}(\omega,k) = \frac{\chi D_{\rm c} k^2}{-i\omega + D_{\rm c}k^2 } + \cdots\, , \end{equation} where $\chi \equiv \partial \rho/\partial \mu$ is the charge susceptibility, and $D_c \equiv \kappa\beta / \chi$ the charge diffusion constant. The late time thermal correlation functions of light operators $\mathcal{O}_{\ell}$ of spin $\ell$ can be found by matching them to composite hydrodynamic operators as in Section \ref{sec_hydro}. The new hydrodynamic degree of freedom $\mu$ can now also be used. Comparing \eqref{eq_Gj0j0} with \eqref{eq_GT0i0j} shows that it scales like the other hydrodynamic fluctuations \begin{equation}\label{eq_scaling_var} \delta \mu \sim \delta \beta \sim \delta u_\mu \sim k^{d/2}\, . \end{equation} It is easy to see that the new hydrodynamic field $\delta \mu$ does not allow the construction of more relevant operators -- the results from Section \ref{sec_hydro} are therefore largely unchanged -- except for odd spin $\ell$ operators with $\bar \ell = 0$ or $1$ spatial indices. The reason is that for these cases we found in Sec.~\ref{sec_hydro} that the dominant hydrodynamic contributions to the correlators \eqref{eq_OO_01spatial} involve the term $\lambda_{0}$ in \eqref{eq_q_match_consti}, which was forbidden by CPT for odd-spin operators (see appendix \ref{ssapp_ell_odd}). However, thanks to the conserved $U(1)$ charge this term is now allowed for odd spin $\ell$ as well \begin{equation}\label{eq_spin1_ltt} \mathcal{O}_{\mu_1 \cdots \mu_\ell} = \lambda_0(\mu,\beta) u_{\mu_1}\cdots u_{\mu_\ell} + O(\partial)\, , \end{equation} where $\lambda_0(\mu,\beta)$ is an odd function of $\mu$ by CPT. 
Expanding $\lambda_0$ in $\delta \mu$, one finds that components with $\bar\ell=0$ spatial indices can overlap linearly with the density, so that \eqref{eq_Gj0j0} implies \begin{equation} \langle \mathcal{O}_{0\cdots 0}\mathcal{O}_{0\cdots 0}\rangle (\omega,k) = \frac{2 (\partial\lambda_0/\partial\mu)^2}{\chi \beta} \frac{D_{\rm c}k^2}{\omega^2 + (D_{\rm c}k^2)^2} + \cdots\, , \end{equation} and components with $\bar\ell=1$ spatial indices are controlled by a hydrodynamic loop at $k=0$ \begin{equation}\label{eq_ltt_og} \langle\mathcal{O}_{i0\cdots 0} \mathcal{O}_{j0\cdots 0}\rangle(t,k=0) = \delta_{ij} \frac{(\partial\lambda_0/\partial\mu)^2}{\chi s \beta} \frac{d-1}{d} \frac{1}{[4\pi(D+D_{\rm c})t]^{d/2}} + \cdots\, . \end{equation} A special case is the correlator of the current operator itself, $\mathcal{O}_\mu = J_\mu$. Then $\lambda_0=\rho$ so $\partial\lambda_0/\partial\mu = \chi$ and \eqref{eq_ltt_og} reproduces known results \cite{PhysRevLett.25.1254,Kovtun:2003vj}. This correlator with $\mathcal{O}_\mu = J_\mu$ is the one that led to the original discovery of long-time tails \cite{PhysRevA.1.18}. \subsubsection{Turning on a background $\mu\neq 0$}\label{ssec_u1hydro_mu} A background chemical potential will allow the longitudinal hydrodynamic modes $J_0,\, T_{00}$ and $\partial_i T_{0i}$ to mix (the transverse sector is unaffected and still given by the second term in \eqref{eq_GT0i0j}). The longitudinal sector will still contain a diffusive mode and a sound mode, but these will be carried by linear combinations of $J_0$ and $T_{00}$, see e.g.~\cite{Kovtun:2012rj}. The correlators \eqref{eq_OO_lspatial} of neutral operators therefore do not change qualitatively: the functional dependence on $\omega,\, k$ is unchanged, but the thermodynamic and transport factors are more complicated. One exception is again for operators of odd spin $\ell$. For example, components with $\bar \ell=1$ spatial indices can now overlap linearly with hydrodynamic modes, even at $k=0$.
Indeed, $\lambda_0(\mu,\beta)$ in \eqref{eq_spin1_ltt} is now expanded around $\mu \neq 0$ so that the constitutive relation $\mathcal{O}_{i0\cdots 0} = \lambda_0 u_i + \cdots$ has the same form as \eqref{eq_consti_1spatial}, and the two-point function $\langle \mathcal{O}_{i0\cdots0} \mathcal{O}_{j0\cdots0}\rangle(\omega,k)$ is given by \eqref{eq_OO_1spatial}. More generally, in charged hydrodynamics at finite density, the results of Sec.~\ref{sec_hydro} hold for both even and odd spin $\ell$, because of the absence of any CPT constraint. \subsection{Dissipative superfluids}\label{ssec_sflu} The hydrodynamic theory of relativistic, dissipative superfluids was thoroughly studied in Refs.~\cite{Herzog:2011ec,Bhattacharya:2011tra}. Compared to normal charged fluids, superfluids contain an additional slow hydrodynamic degree of freedom carried by the Goldstone field $\phi$ that non-linearly realizes the internal $U(1)$ symmetry. Here we will focus on conformal superfluids, and will not give an expectation value to the superfluid velocity, i.e.~$\langle\partial_i\phi\rangle = 0$. This velocity can be thought of as the charge density associated with an emergent higher-form symmetry \cite{Delacretaz:2019brr} -- since the symmetry is emergent, heavy CFT operators creating superfluid states are not labeled by their representations under it.
Working to linear order in the superfluid velocity, there is only one new thermodynamic parameter compared to \eqref{eq_j_consti} -- the superfluid stiffness $\rho_{\rm s}$ -- and one new dissipative parameter $\zeta_3$ \cite{Herzog:2011ec} (see also \cite{Bhattacharya:2011tra}): \begin{subequations}\label{eq_sflu_constirel} \begin{align} T_{\mu\nu} &= \epsilon u_\mu u_\nu + P\Delta_{\mu\nu} + 2 \rho_{\rm s} \mu_{\rm s} n_{(\mu} u_{\nu)} + \rho_{\rm s} \mu_{\rm s} n_\mu n_\nu - \eta \sigma_{\mu\nu} + \cdots\, , \\ J_\mu &= \rho_{\rm s} \partial_\mu\phi + \rho_{\rm n} u_\mu - \kappa \Delta_{\mu\nu}\partial^\nu \frac{\mu}{T} + \cdots\, , \\ u^\mu \partial_\mu \phi &= - \mu + \zeta_3 \partial_\mu(\rho_s n^\mu) + \cdots \, , \end{align} \end{subequations} where $n_\mu \equiv \Delta_{\mu\nu}\partial^\nu\phi / \mu_{\rm s}$ and $\mu_{\rm s} \equiv -u^\mu\partial_\mu \phi$. The projector is $\Delta_{\mu\nu} \equiv \eta_{\mu\nu} + u_\mu u_\nu$, and $\sigma_{\mu\nu}$ is the shear tensor appearing in \eqref{eq_T_consti}. The thermodynamic parameters $\epsilon,\, P,\, \rho_{\rm s},\, \rho_{\rm n}$ and dissipative parameters $\eta,\,\kappa,\, \zeta_3$ are inputs in the hydrodynamic treatment -- however, when the dissipative superfluid is obtained by heating up a $T=0$ conformal superfluid (as in Fig.~\ref{fig_spectrum}), these can all be expressed in terms of a single EFT parameter at low temperature, see section \ref{sss_sflu}. The fluctuations $\partial_\mu\delta \phi$, with $\delta \phi = \phi + \mu t$, have the same scaling as the other hydrodynamic variables \eqref{eq_scaling_var}. This new degree of freedom lifts the diffusive mode \eqref{eq_Gj0j0} into a second sound mode (with sound attenuation controlled by $\kappa$ and $\zeta_3$).
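To see where the propagating mode comes from, one can linearize the ideal parts of \eqref{eq_sflu_constirel} in the simplest truncation -- a sketch added here, which neglects mixing with temperature fluctuations and all dissipative terms. Charge conservation with $J_i = \rho_{\rm s}\partial_i\delta\phi$, together with the linearized Josephson relation, gives
\begin{equation*}
\partial_t\delta\rho + \rho_{\rm s}\nabla^2\delta\phi = 0\, ,\qquad \partial_t\delta\phi = -\delta\mu = -\frac{\delta\rho}{\chi}\, , \qquad\Longrightarrow\qquad \omega^2 = \frac{\rho_{\rm s}}{\chi}\,k^2\, .
\end{equation*}
At low temperatures the zero-temperature EFT of section \ref{sss_sflu} gives $c_1\mu^{d-1}/d$ for the stiffness multiplying $\partial_\mu\phi$ (it differs by a factor of $\mu$ from the charge density) and $\chi = \partial\rho/\partial\mu = c_1\mu^{d-1}$, so that $\omega^2 = k^2/d$, consistent with the conformal value $c_s^2 = 1/d$.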
The correlation function for $\partial_i\delta \phi$ is qualitatively similar to the longitudinal part of \eqref{eq_GT0i0j}, and correlators of neutral operators will be controlled by similar hydrodynamic tails as in the previous sections. One new feature is that operators with finite charge $q\in \mathbb Z$ under the $U(1)$ symmetry can now be matched in the IR using the Goldstone phase \begin{equation}\label{eq_sflu_matching} \mathcal{O}_{\mu_1 \cdots \mu_{\ell}}^q \ \sim \ \partial_{\mu_1}\cdots \partial_{\mu_{\ell} } e^{i q \phi} \ + \ e^{i q \phi} u_{\mu_1} \partial_{\mu_2}\cdots \partial_{\mu_{\ell-1}} u_{\mu_{\ell}} \ + \ \cdots\ \, , \end{equation} where, as in Fig.~\ref{fig_decay}, the first term is the most relevant operator, and the second is the most relevant one when $k=0$. There are several operators that compete with the ones above, but all lead to similar results. The correlators of charged operators $\mathcal{O}^q$ can now be obtained as those of neutral operators $\mathcal{O}$ in Sec.~\ref{ssec_hyd_cor}, by expanding the hydrodynamic fields. Expanding $\phi = \delta\phi - \mu t$ shows that real-time correlators contain an extra factor of $e^{-iq\mu t}$, so that in frequency space one finds \begin{equation}\label{eq_sflu_OqOq} \langle \mathcal{O}^{q\dagger}_{(\bar \ell,\ell)}\mathcal{O}^{q}_{(\bar \ell,\ell)}\rangle(\omega,k) \sim \langle \mathcal{O}_{(\bar \ell,\ell)}\mathcal{O}_{(\bar \ell,\ell)}\rangle(\omega - q\mu,k)\, , \end{equation} where the right-hand side simply refers to the general result \eqref{eq_OO_lspatial} for neutral operators, evaluated at frequency $\omega - q \mu$. One may worry that the hydrodynamic features in this correlator appear at frequencies above the hydrodynamic cutoff $\omega\sim q\mu \gg 1/\tau_{\rm th}$ -- however, this is simply because the operator carries a phase $e^{-iq\mu t}$ which translates hydrodynamic features usually at $\omega\sim 0$ to $\omega\sim q\mu$ in the correlator of operators of charge $q$.
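The frequency shift in \eqref{eq_sflu_OqOq} follows from spelling out the Fourier transform (a step we make explicit here): writing $\mathcal{O}^q(t,x) = e^{-iq\mu t}\,\widetilde{\mathcal{O}}^q(t,x)$, where $\widetilde{\mathcal{O}}^q$ is built from the slow fields $\delta\phi,\,\delta u_\mu,\,\delta\beta$ only,
\begin{equation*}
\langle \mathcal{O}^{q\dagger}\mathcal{O}^{q}\rangle(\omega,k) = \int dt\, d^dx\; e^{i\omega t - ik\cdot x}\, e^{-iq\mu t}\, \langle \widetilde{\mathcal{O}}^{q\dagger}\widetilde{\mathcal{O}}^{q}\rangle(t,x) = \langle \widetilde{\mathcal{O}}^{q\dagger}\widetilde{\mathcal{O}}^{q}\rangle(\omega - q\mu, k)\, ,
\end{equation*}
making manifest that no hydrodynamic input above the cutoff is actually used.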
We will see in the CFT application in Sec.~\ref{ssec_CFT_Q} that $\omega-q \mu$ measures the difference in dimensions $\Delta-\Delta'$ of the heavy operators (as did $\omega$ for neutral operators, see \eqref{eq_stateop_meso}). Of course, superfluids at finite temperature also have well-known static properties which control equal-time correlators. For example, terms like the first one in \eqref{eq_sflu_matching} lead to \begin{equation}\label{eq_Gspatial_sflu} \langle \mathcal{O}^{q\dagger}_{(\bar \ell,\ell)}\mathcal{O}^{q}_{(\bar \ell,\ell)}\rangle(x,t=0) \sim e^{-\frac{q^2}{2} \langle \phi(x)\phi\rangle} \partial_i^{2\bar \ell} \langle \phi(x)\phi\rangle \sim \frac{1}{x^{2\bar \ell + d - 2}} \qquad\hbox{for }\ d>2\, . \end{equation} For $d=2$, where the equal-time phase correlator is $\langle \phi(x) \phi\rangle = \frac{1}{2\pi \rho_s} \log |x|$ (with $\rho_s$ the stiffness in \eqref{eq_sflu_constirel}), one has instead \begin{equation} \langle \mathcal{O}^{q\dagger}_{(\bar \ell,\ell)}\mathcal{O}^{q}_{(\bar \ell,\ell)}\rangle(x,t=0) \sim \frac{1}{x^{2\bar \ell + \frac{q^2}{4\pi \rho_s}}}\, . \end{equation} These also provide EFT constraints on the CFT data involving heavy charged operators that create a superfluid state. \subsubsection{Dissipative superfluids from the EFT}\label{sss_sflu} It is natural to expect that if a CFT exhibits a superfluid phase, this phase will be connected to a $T=0$ superfluid, as in Fig.~\ref{fig_spectrum}. At $T=0$, a superfluid EFT describes the physics up to a cutoff, which in the case of a CFT must be proportional to the chemical potential $\mu$ \cite{weinberg1995quantum,Son:2002zn,Hellerman:2015nra,Monin:2016jmo}. Dissipative hydrodynamics can be seen to emerge from the EFT at finite temperatures $0<T\ll \mu$; in other words, all the thermodynamic parameters and dissipative parameters in the previous section can be computed in terms of the EFT parameters.
Let us illustrate this to leading order in gradients, where the EFT is simply \cite{Son:2002zn} \begin{equation}\label{eq_superfluid_EFT} S = \frac{c_1}{d(d+1)} \int d^{d+1}x\, |\partial\phi|^{d+1} + \cdots\, , \end{equation} with $|\partial\phi|\equiv \sqrt{-\partial_\mu\phi \partial^\mu \phi}$. The dimensionless constant $c_1$ is non-universal and depends on the underlying CFT. The $U(1)$ current is \begin{equation}\label{eq_J_sflu} J_\mu = \frac{c_1}{d} |\partial\phi|^{d-1} \partial_\mu \phi + \cdots\, . \end{equation} Expanding around the saddle $\phi = \mu t + \pi$, one finds a zero temperature superfluid density \begin{equation}\label{eq_sflu_rhos} \rho_{\rm s} = \langle J_0\rangle_{\beta\to \infty} = \frac{c_1 }{d}\mu^d\,. \end{equation} Thermodynamics $\delta\epsilon = \mu \delta \rho$ then fixes the zero temperature energy density \begin{equation}\label{eq_sflu_eps} \epsilon \equiv \langle T_{00}\rangle_{\beta\to \infty} = \frac{c_1}{d+1} \mu^{d+1}\, , \end{equation} as can be checked by computing the stress tensor directly from \eqref{eq_superfluid_EFT}. The pressure for a CFT is given by $P = \epsilon / d$. From the CFT perspective, $c_1$ can be defined by Eq.~\eqref{eq_sflu_rhos} or \eqref{eq_sflu_eps} and can be viewed as CFT data on a similar footing to the thermal expectation value of the stress tensor, $b_T$ in \eqref{eq_T_thermalvev}. The action \eqref{eq_superfluid_EFT} can be expanded around the saddle $\phi = \mu t + \pi$: \begin{subequations}\label{eq_superfluid_EFT_pert} \begin{align} S &= S_2 + S_{\rm int}\, , \\ \label{seq_superfluid_EFT_pert_free} S_2 &= \int d^{d+1}x \, \frac{1}{2}\left(\dot \pi^2_c - \frac{1}{d} (\nabla \pi_c)^2\right)\, , \\ S_{\rm int} &\sim \int d^{d+1}x \, \frac{(\partial\pi_c)^{3}}{\sqrt{\epsilon}} + \frac{(\partial\pi_c)^{4}}{\epsilon} + \cdots \, , \end{align} \end{subequations} with $\pi_c\equiv \sqrt{c_1\mu^{d-1}}\,\pi$.
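Before turning to fluctuations, a consistency check on the background thermodynamics above (spelled out here for completeness): the pressure is the on-shell Lagrangian density of \eqref{eq_superfluid_EFT}, and the zero-temperature Legendre transform $\epsilon = \mu\rho - P$, with $\rho$ given by \eqref{eq_sflu_rhos}, reproduces \eqref{eq_sflu_eps},
\begin{equation*}
P = \frac{c_1}{d(d+1)}\,\mu^{d+1}\, ,\qquad \epsilon = \mu\,\frac{c_1}{d}\,\mu^{d} - \frac{c_1}{d(d+1)}\,\mu^{d+1} = \frac{c_1}{d+1}\,\mu^{d+1}\, ,
\end{equation*}
which indeed satisfies the conformal relation $P = \epsilon/d$ quoted above.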
Only the schematic form of the interactions $S_{\rm int}$ will be needed -- $\partial\pi_c$ symbolizes either time or space derivatives and numerical factors have been dropped. The strong coupling scale of the EFT is given by the energy density $\Lambda_{\rm sc}\sim \epsilon^{1/(d+1)} \sim c_1^{1/(d+1)}\mu$. Let us start with the thermodynamics, which to leading order can be studied from the Euclidean version of the free action \eqref{seq_superfluid_EFT_pert_free}. The simplest finite temperature quantity to compute is the entropy density, which can be obtained from the free energy \begin{equation} \begin{split} f &= - \frac{1}{\beta V} \log Z \simeq - \frac{1}{\beta V} \log \int D \phi \, e^{-S_{2,E}}\\ &= \frac{1}{\beta}\int \frac{d^dk}{(2\pi)^d} \log \left[1-e^{-c_s\beta k}\right] = -\frac{1}{c_s^d\beta^{d+1}} \frac{\Gamma(\tfrac{d+1}{2})\zeta(d+1)}{\pi^{(d+1)/2}}\, , \end{split} \end{equation} with $c_s = 1/\sqrt{d}$, from which we can obtain the dimensionless entropy density \begin{equation}\label{eq_so_sflu} s_o^{\rm sflu} \equiv \beta^d s = \beta^{d+2}\partial_\beta f \simeq \frac{d+1}{c_s^d}\frac{\Gamma(\tfrac{d+1}{2})\zeta(d+1)}{\pi^{(d+1)/2}}\, . \end{equation} Terms that are higher order in gradients or fields in the action \eqref{eq_superfluid_EFT} and \eqref{eq_superfluid_EFT_pert} lead to corrections to the expressions above that are suppressed by powers of $T/\mu$. The normal density is slightly more subtle: it comes from taking the thermal expectation value of nonlinear terms in the current \eqref{eq_J_sflu}, and showing the misalignment between the current and the expectation value of $\partial_\mu \phi$ \cite{Delacretaz:2019brr}. One finds \begin{equation} \rho_{\rm n} = \frac{s}{\beta\mu}\frac{1-c_s^2}{c_s^2} + \cdots\, , \end{equation} with $s$ given by \eqref{eq_so_sflu}. The non-relativistic limit $c_s\ll 1$ of this expression is well known \cite{khalatnikov2018introduction}.
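The closed form of the phonon free energy above is easy to get wrong by a sign or a factor; the following short script is a numerical cross-check we add (function names are ours, not from the text; note that $f<0$ for a phonon gas):

```python
import numpy as np
from math import gamma, pi
from scipy.integrate import quad
from scipy.special import zeta


def f_numeric(d, beta=1.0, cs=1.0):
    """Evaluate f = (1/beta) * int d^dk/(2pi)^d log(1 - e^{-cs*beta*k}) numerically."""
    sphere = 2 * pi**(d / 2) / gamma(d / 2)  # Vol(S^{d-1}), area of the unit sphere
    integrand = lambda k: k**(d - 1) * np.log1p(-np.exp(-cs * beta * k))
    val, _ = quad(integrand, 0.0, np.inf)
    return sphere / (2 * pi)**d * val / beta


def f_closed(d, beta=1.0, cs=1.0):
    """Closed form -Gamma((d+1)/2) zeta(d+1) / (pi^{(d+1)/2} cs^d beta^{d+1})."""
    return -gamma((d + 1) / 2) * zeta(d + 1) / (pi**((d + 1) / 2) * cs**d * beta**(d + 1))


def s_o_sflu(d, cs=1.0):
    """Dimensionless entropy density s_o = beta^{d+2} df/dbeta = (d+1)|f| at beta = 1."""
    return (d + 1) * abs(f_closed(d, cs=cs))
```

For $d=3$ and $c_s=1$ this reproduces the free massless scalar result $f = -\pi^2 T^4/90$.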
A similar expression has appeared in a holographic context recently \cite{Gouteraux:2019kuy} -- we see here that it is a universal prediction of the EFT. For a CFT, $c_s^2=1/d$. Furthermore, the emergence of hydrodynamics at finite temperature leads to an additional sound mode (second sound) -- since the EFT is to leading order a free scalar and hence scale invariant at low energies, the speed of second sound is itself related to that of first sound as $c_{s,2}^2 = c_s^2/d$ at low temperatures $T\ll \mu$ (see e.g.~\cite{khalatnikov2018introduction,Herzog:2009md}). Finally, the dissipative parameters $\eta,\, \kappa,\, \zeta_3$ that appeared in the previous section can also be computed from the EFT \eqref{eq_superfluid_EFT_pert} by treating the weakly coupled phonons with kinetic (Boltzmann) theory. This was done for non-conformal superfluids (which have two additional viscosities $\zeta_1,\,\zeta_2$) in the non-relativistic limit in Ref.~\cite{khalatnikov2018introduction}. The calculation is quite lengthy, so we only sketch it here, focusing on the shear viscosity $\eta$ for illustration. The phonon differential cross section can be computed at tree level from the cubic and quartic terms in \eqref{eq_superfluid_EFT_pert} (see e.g.~\cite{Nicolis:2017eqo}); the diagrams in Fig.~\ref{fig_sflu_cross} lead to \begin{equation}\label{eq_dsigma} \frac{d\sigma}{d\Omega} \sim \frac{p^{d+3}}{\epsilon^2}\, , \end{equation} where $\epsilon$ is the energy density \eqref{eq_sflu_eps} and $p$ symbolizes dependence on the individual phonon momenta $p_i$, $i=1,2,3,4$. The dependence on the individual momenta can be important; in particular, the total cross section $\sigma$ diverges because of small-angle scattering \cite{khalatnikov2018introduction,Nicolis:2017eqo}.
This divergence is regulated by more irrelevant terms in the action \eqref{eq_superfluid_EFT}, so that the total cross-section is less suppressed by the cutoff $\epsilon$ than Eq.~\eqref{eq_dsigma} suggests \cite{khalatnikov2018introduction}. However, it is large-angle scattering that controls the shear viscosity \cite{Arnold:1997gh}, so that the naive expression \eqref{eq_dsigma} is sufficient for our parametric estimate. One can now estimate the thermalization time from the thermally averaged cross-section \begin{equation}\label{eq_sflu_tauth} \tau_{\rm th} \sim \frac{1}{\langle s \sigma v\rangle} \sim \beta (\epsilon \beta^{d+1})^2\, . \end{equation} The thermalization time is large, $\tau_{\rm th} \gg \beta$, because the phonons are weakly coupled. The shear viscosity can then be estimated as \begin{equation}\label{eq_sflu_eta} \eta \sim \frac{s \tau_{\rm th}}{\beta} \sim \epsilon^2 \beta^{d+2} \sim c_1^2 \mu^{2d+2} \beta^{d+2}\, . \end{equation} The viscosity diverges rapidly as $T\to 0$ because of the long thermalization time \eqref{eq_sflu_tauth} of the superfluid. \begin{figure} \vspace{5pt} \centerline{ \begin{overpic}[width=0.6\textwidth,tics=10]{fig/phonon_diag3} \put (15,21) {$\dfrac{1}{\sqrt{\epsilon}}$} \put (30.5,21) {$\dfrac{1}{\sqrt{\epsilon}}$} \put (83.5,21) {$\dfrac{1}{\epsilon}$} \end{overpic} } \caption{\label{fig_sflu_cross} Diagrams in the superfluid EFT contributing to the shear viscosity $\eta$ and other transport parameters $\kappa,\, \zeta_3$ at leading order in $T/\mu$.} \end{figure} It is interesting to contrast these results to holographic superfluids \cite{Hartnoll:2008vx,Herzog:2008he}. Because these theories have a large $O(N^2)$ number of degrees of freedom, the superfluid sector only gives small $O(1)$ corrections to thermodynamic quantities such as the entropy density $s$. However, transport is more sensitive to the presence of the weakly coupled superfluid sector.
The holographic value of the low temperature shear viscosity \begin{equation} \eta = \frac{s}{4\pi} \sim N^2 T^d \end{equation} should receive a phonon contribution \eqref{eq_sflu_eta} that is subleading in $N^2$ but dominates for temperatures \begin{equation}\label{eq_Tholo} T\lesssim \mu \left(\frac{c_1}{N}\right)^{1/(d+1)}\, . \end{equation} A similar conclusion holds for more general hyperscaling-violating Lifshitz geometries where $\eta = \frac{s}{4\pi} \sim N^2 \mu^d (T/\mu)^{\frac{d-\theta}{z}}$ \cite{Gubser:2009cg,Gouteraux:2019kuy}, with a different exponent in \eqref{eq_Tholo}. It would be interesting to understand if this non-commutativity of the $T\to 0$ and $N\to \infty$ limits signals a more important breakdown of low temperature finite density holographic solutions (such as extremal black holes) due to quantum effects \cite{ArkaniHamed:2006dz}. \subsection{Implications for heavy CFT operators with macroscopic charge}\label{ssec_CFT_Q} The ETH Ansatz \eqref{eq_ETH} is slightly modified for systems with additional symmetries. For the case of an internal $U(1)$ symmetry with generator $\hat Q$, the extension can be simply obtained by using the Hamiltonian $\hat H \to \hat H - \mu \hat Q$, without explicitly using the grand canonical ensemble. One then obtains, for a few-body operator $\mathcal{O}_q$ of $U(1)$ charge $q$, \begin{equation}\label{eq_ETH_mu} \langle H' , Q+q | \mathcal{O}_q| H, Q\rangle = \langle \mathcal{O}_q\rangle\delta_q^0 \delta_{HH'} + \Omega(p)^{-1/2} R^{\mathcal{O}}_{HH'} \sqrt{\langle \mathcal{O}_q^\dagger \mathcal{O}_q\rangle(E - E' -\mu q)}\, . \end{equation} Both the one-point and two-point functions are evaluated at finite inverse temperature $\beta$ and chemical potential $\mu$, related to the charge and energy density of $|H,Q\rangle$ by the equation of state. For neutral light operators ($q=0$), the results of section \ref{sec_largeD} are largely unchanged.
One exception is for light operators of spin $\ell=1$, see Eq.~\eqref{eq_ltt_og} and discussion in Sec.~\ref{ssec_u1hydro_mu}; the resulting OPE predictions can be straightforwardly obtained following the method in Sec.~\ref{sec_largeD}. In a superfluid phase, we found in Eq.~\eqref{eq_sflu_OqOq} that the correlators of light charged operators $\mathcal{O}_q$ are also controlled by hydrodynamics. Therefore, when the state created by the heavy operator $H_{Q,J}$ is a finite temperature superfluid, we can use \eqref{eq_ETH_mu} to obtain hydrodynamic predictions for OPE coefficients of light charged operators. We find that the results in Sec.~\ref{sec_largeD} for neutral operators ($q=0$) are essentially unchanged, but now also hold for charged operators (with the obvious constraint of charge conservation). For example, \eqref{eq_CHHL_2} becomes \begin{equation} |C^{\bar\ell}_{H_{Q,J}H'^\dagger_{Q+q,J'}\mathcal{O}_{q,\ell}}|^2 \simeq e^{-S} \frac{b_{\bar\ell-2}^2}{b_T^2} \left( \frac{\Delta}{b_T}\right)^{\frac{2(\Delta_{\mathcal{O}}-\alpha_{\bar\ell}+1)}{d+1}} \frac{\left(\Delta-\Delta'\right)^{\alpha_{\bar\ell}-1}}{\tilde \eta_o^{\alpha_{\bar\ell}}} \, , \end{equation} the only difference with \eqref{eq_CHHL_2} being that this also holds for $q\neq 0$ and the relevant transport parameter $\tilde\eta_o$ is not simply the shear viscosity but a combination of the superfluid dissipative parameters $\eta,\,\kappa$ and $\zeta_3$ from Sec.~\ref{ssec_sflu}. The other results in Sec.~\ref{sec_largeD} are similarly generalized. For example, for a light charged scalar $\mathcal{O}_{q}$ a result similar to \eqref{eq_HJHJscalar} holds: the OPE coefficient features hydrodynamic poles, but there are now two sound modes (first and second superfluid sound), with speeds of sound that are no longer fixed to $1/\sqrt{d}$ by conformal invariance. Further increasing the charge $Q$ of the heavy operator, one eventually reaches the edge of the spectrum.
If the operator at the edge of the spectrum still creates a state of finite charge and energy density, its dimension must satisfy \begin{equation} \Delta_{\rm min}(Q) \propto Q^{\frac{d+1}{d}}\, . \end{equation} A natural possibility is that this state is a superfluid \cite{Hellerman:2015nra}. The superfluid EFT then predicts both the spectrum of low-lying operators and OPE coefficients between these operators and light CFT operators \cite{Monin:2016jmo,Jafferis:2017zna}. As one moves away from the edge $\Delta\geq \Delta_{\rm min}(Q)$, the spectrum becomes dense and the many-phonon states eventually start to look thermal. Notice that here the thermalization time is large \eqref{eq_sflu_tauth}, because the original EFT is weakly coupled. The dimension of operators near the edge can be written as \begin{equation} \Delta = (1+\delta) \Delta_{\rm min}(Q)\, , \end{equation} where $\delta \ll 1$ is related to the temperature by (this relation follows from Eq.~\eqref{eq_beta_asym2} derived in the following section) \begin{equation} \delta \sim \frac{s_o^{\rm sflu}}{\epsilon \beta^{d+1}}\, , \end{equation} with $s_o^{\rm sflu}$ given by \eqref{eq_so_sflu}. This implies that the hydrodynamic window \eqref{eq_window} is parametrically smaller close to the edge of the spectrum \begin{equation} \left(\frac{\delta}{s_{o}^{\rm sflu}}\right)^{-2} \left(\frac{\Delta \delta}{s_{o}^{\rm sflu}}\right)^{-\frac{1}{d+1}} \lesssim\ \Delta - \Delta' \ \lesssim \ \left(\frac{\delta}{s_{o}^{\rm sflu}}\right)^2 \left(\frac{\Delta \delta}{s_{o}^{\rm sflu}}\right)^{\frac{1}{d+1}}\, . \end{equation} \subsection{Phase transitions in the spectrum}\label{ssec_transition} The equation of state of a CFT at finite $\mu$ and $\beta$ is no longer fixed by scale invariance, but can depend on the dimensionless reduced chemical potential \begin{equation} \alpha\equiv \beta \mu\,
\end{equation} In the previous sections, we explored the hydrodynamic descriptions pertaining to two natural phases of CFTs at finite density -- the superfluid phase that is expected for $\alpha \gtrsim 1$ and the normal phase for $\alpha\lesssim 1$ -- and determined how hydrodynamics controls some of the CFT data. These phases should be separated by a phase transition. In this section, we explore how the non-trivial {\em thermodynamic} properties of the transition control the data of the underlying CFT, and leave for future work a hydrodynamic treatment of the system near the phase transition (this would require incorporating long-lived critical fluctuations, see e.g.~Refs.~\cite{RevModPhys.49.435,Stephanov:2017ghc}). In this sense, this section extends the work of Ref.~\cite{Lashkari:2016vgj}, where thermodynamics was seen to control some of the CFT data, to situations where the thermodynamic equation of state and the corresponding phase structure are non-trivial. Expectation values of the currents now take the form \begin{equation} \langle J_\mu\rangle_{\beta,\mu} = \frac{\rho_o(\alpha)}{\beta^{d}} \delta_\mu^0\, , \qquad\qquad \langle T_{\mu\nu}\rangle_{\beta,\mu} = \frac{s_o(\alpha)}{\beta^{d+1}} \left(\delta_\mu^0\delta_\nu^0 - \rm trace\right)\, , \end{equation} where $\rho_o$ and $s_o$ are odd and even functions of $\alpha$ respectively (by CPT), and $s_o(0) = b_T$. In a CFT, the thermodynamic relations \begin{equation}\label{eq_thermo} \delta\epsilon = T\delta s + \mu \delta \rho \, , \qquad\qquad \frac{d+1}{d} \epsilon = Ts + \mu \rho \, , \end{equation} reduce the equation of state to a single function of one variable, which we could take for example to be $s_o(\alpha)$. However, when studying the operator spectrum in a CFT, it is most convenient to work in the microcanonical ensemble and to think instead of $\alpha$ (or $\mu$) and $\beta$ as functions of the densities, say $\epsilon$ and $\rho$.
In particular, it will be convenient to study a slice of Fig.~\ref{fig_spectrum} at fixed $\Delta\gg 1$, i.e.~fixed energy density $\epsilon$, and vary charge. Since $\epsilon$ is fixed, we can use it to define a dimensionless charge density and temperature \begin{equation} n \equiv \frac{\rho}{\epsilon^{\frac{d}{d+1}}} = \frac{Q}{(S_d \Delta^d)^{\frac{1}{d+1}}}\, , \qquad\qquad \bar\beta \equiv \beta \epsilon^{1/(d+1)} \, , \end{equation} where again $S_d \equiv {\rm Vol} S^d = \frac{2\pi^{(d+1)/2}}{\Gamma \left(\frac{d+1}{2}\right)}$. The potentials $\alpha(n)$ and $\bar\beta(n)$ are dimensionless functions of the dimensionless charge density $n$. The thermodynamic relations \eqref{eq_thermo} imply that these functions satisfy \begin{equation}\label{eq_therm_rel} n\partial_n \alpha(n) = \frac{d+1}{d}\partial_n \bar \beta(n)\, , \end{equation} so that only one function is independent, say $\bar \beta(n)$, which can be thought of as the equation of state characterizing the thermodynamic properties of the CFT. Thermodynamic stability further implies \begin{equation}\label{eq_beta_monotonic} \partial_n \bar\beta(n) \geq 0\, , \end{equation} so that both $\alpha$ and $\bar \beta$ are positive, monotonically increasing functions of $n$. The asymptotic properties of the equation of state can be related to familiar parameters of the CFT. For example, as $n\to 0$ one has \begin{equation}\label{eq_beta_asym1} \qquad\qquad \bar \beta(n) = \left( \frac{b_T d}{d+1}\right)^{\frac{1}{d+1}} \left[1+ \frac12 \frac{d}{d+1}\frac{n^2}{\chi_o} + O(n^4)\right] \qquad\quad (\hbox{as } n\to 0)\, . \end{equation} The first term simply comes from \eqref{eq_T_thermalvev}, and the subleading term follows from \eqref{eq_thermo} and \eqref{eq_therm_rel} and features the dimensionless charge susceptibility \begin{equation} \chi\equiv \lim_{\mu\to 0}\frac{\langle J_0\rangle_{\beta,\mu}}{\mu}\, , \qquad\qquad \chi_o \equiv \chi / \epsilon^{\frac{d-1}{d+1}}\,
\end{equation} ($\chi$ can also be expressed as a thermal 2-point function of the current at zero chemical potential). The monotonicity of $\bar\beta$ \eqref{eq_beta_monotonic} for $n\ll 1$ is equivalent to $\chi_o\geq 0$. The equation of state is also fixed in the opposite limit if we assume, following Ref.~\cite{Hellerman:2015nra}, that the state at \begin{equation} n \to n_{\rm max} = \frac{Q_{\rm max}}{(\Delta^d S_d)^{\frac{1}{d+1}}} = \frac{(d+1)^{\frac{d}{d+1}}}{d} c_1^{\frac{1}{d+1}} \end{equation} is a zero-temperature superfluid% \footnote{This equation can be viewed as a microcanonical CFT definition of the EFT parameter $c_1$. Alternatively Eqs.~\eqref{eq_sflu_rhos} or \eqref{eq_sflu_eps} are canonical definitions of $c_1$.}. Using again the thermodynamic identities \eqref{eq_therm_rel}, one finds that the equation of state near the zero-temperature superfluid takes the form \begin{equation}\label{eq_beta_asym2} \qquad\qquad \bar\beta(n) = \left[ \frac{d}{d+1} \,\frac{s_o^{\rm sflu}}{1-\frac{n}{n_{\rm max}}}\right]^{\frac{1}{d+1}} + \cdots\, , \qquad\quad (\hbox{as } n\to n_{\rm max})\, , \end{equation} where $s_o^{\rm sflu}$ is given by \eqref{eq_so_sflu}. Note that the two asymptotic behaviors \eqref{eq_beta_asym1} and \eqref{eq_beta_asym2} of $\bar\beta(n)$ are consistent with its monotonicity property \eqref{eq_beta_monotonic}. A sketch of the equation of state is shown in Fig.~\ref{fig_eqstate}. 
\begin{figure}[h] \vspace{10pt} \centerline{ \begin{overpic}[width=0.71\textwidth,tics=10]{fig/eq_of_state} % \put (-6,67) {\Large $\tilde \beta$} \put (-7,35) {\large $\tilde \beta_c$} \put (-9,9) {\large $b_T^{\frac{1}{d+1}}$} \put (-5,0) {\large $0$} % \put (1,-5) {\large $0$} \put (47,-5) {\large $n_c$} \put (85,-5) {\large $n_{\rm max}$} \put (105,0) {\Large $n$} % \put(20,7){\small {Normal}} \put(63,7){\small {Superfluid}} % \put(8,50) {\small \color{inkblue} $b_T^{\frac{1}{d+1}}\left[1 + \frac12 \frac{d}{d+1}\frac{n^2}{\chi_o}\right]$} \put (40,45) {\small \color{inkblue} $\tilde \beta_c \pm |n_c-n|^{\frac{1}{d\nu - 1}}$} \put (69,30) {\small \color{inkblue} $\left[\frac{s_o^{\rm sflu}}{1-\frac{n}{n_{\rm max}}}\right]^{\frac{1}{d+1}}$} \end{overpic} } \vspace{10pt} \caption{\label{fig_eqstate} Equation of state for a CFT with a global $U(1)$ symmetry, assuming it reaches a superfluid phase at zero temperature and finite chemical potential. The equation of state (red) can be parametrized by the dependence of the dimensionless inverse temperature $\bar\beta = \beta \epsilon^{\frac{1}{d+1}}$ on the dimensionless density $n = \rho/\epsilon^{\frac{d}{d+1}}$ at fixed energy density $\epsilon$, the $y$ axis is normalized as $\tilde \beta \equiv \left(\frac{d+1}{d}\right)^{\frac{1}{d+1}} \bar \beta$ for convenience. The dashed blue curves show the behavior near $n=0$ (Eq.~\eqref{eq_beta_asym1}), $n=n_c$ and $n=n_{\rm max}$ (Eq.~\eqref{eq_beta_asym2}).} \end{figure} Now the superfluid phase is certainly not expected to persist at large temperatures $\beta \mu \ll 1$ (or small charge at fixed energy $n\ll 1$)% \footnote{See however \cite{Chai:2020zgq} for constructions in fractional dimensions of ordered finite temperature phases at zero density.}; we therefore expect the symmetry to be restored at a critical value $n = n_c$, with $n_c = O(1)$ for a generic CFT. 
If this thermal phase transition is continuous, we see that the spectra of $(d+1)$-dimensional CFTs contain information about criticality in $d$ dimensions. Using scaling relations the critical point can be characterized by a correlation length critical exponent $\nu$ and anomalous dimension $\eta$ of the order parameter% \footnote{When a $d$-dimensional Euclidean CFT describes the critical point, these are related to the dimensions of the lightest neutral scalar $\Delta_s = d- \frac{1}{\nu}$ and charged order parameter $\Delta_{\vec \phi} = \frac{1+\eta}{2}$. Even then we purposely use `old-fashioned' notation for critical exponents $\nu,\, \eta$ to avoid confusion with the underlying $(d+1)$-dimensional Lorentzian CFT. }. Holographic superfluids are an example of CFTs that can be tuned across a $U(1)$-restoring thermal phase transition% \footnote{See Fig.~3 in \cite{Denef:2009tp} for a distribution of $n_c$ in a class of holographic superfluids.}. That the transition is in the mean-field universality class in this case \cite{Hartnoll:2008vx}, with $\eta=0$ and $\nu=1/2$, is likely an artefact of large $N$; mean-field critical exponents are not expected for generic CFTs. Consider for example a (3+1)$d$ CFT with a global $U(1)$ symmetry, and assume following Ref.~\cite{Hellerman:2015nra} that the lightest operator of charge $Q$ creates a superfluid state when $n = n_{\rm max}$. When $n$ is decreased past $n_c$, the symmetry is restored and we expect the transition to be in the 3d Wilson-Fisher universality class, with $\nu \simeq 0.672$ and $\eta\simeq 0.038$% \footnote{See Ref.~\cite{Chester:2019ifh} for a recent discussion on the 8$\sigma$ tension between the numerical and experimental values of these exponents.}. These exponents control correlators near or at the critical $n_c$, which like the hydrodynamic long-time tails will lead to predictions for some of the CFT data. 
For example, the anomalous dimension $\eta$ will control the equal-time correlator of light, charged operators \begin{equation}\label{eq_etacr_cft} \langle \mathcal{O}_q(0,x) \mathcal{O}_q^\dagger\rangle_{\beta_c} \sim \frac{1}{x^{d-2+\eta}} \, . \end{equation} The correlation length critical exponent can be obtained from the vanishing of the thermal mass at the critical point. The thermal mass $m_{\rm th} = m_{\rm th}(\beta,\mu)$ is defined in the normal (non-superfluid) phase as the decay of spatial correlators of light operators at finite temperature (see e.g.~\cite{Iliesiu:2018fao}) \begin{equation} \lim_{x\to \infty} \langle \mathcal{O}_q(0,x) \mathcal{O}_q^\dagger\rangle_{\beta} \sim e^{-m_{\rm th} |x|} \end{equation} (on the superfluid side $n>n_c$, these correlators decay polynomially, see \eqref{eq_Gspatial_sflu}). As we approach the critical point from the normal phase $n\to n_c$, the thermal mass should vanish as \begin{equation}\label{eq_nucr_cft} m_{\rm th}(n) \sim |\beta(n) - \beta_c|^\nu\, . \end{equation} Because scaling relations connect several observables, Eqs.~\eqref{eq_etacr_cft} and \eqref{eq_nucr_cft} are but one of several ways to observe the 3$d$ critical exponents $\nu$ and $\eta$ in the (3+1)$d$ CFT data. Transitions are only sharp in the strict thermodynamic limit $\Delta = \infty$ -- thermodynamic singularities are as usual resolved at finite volume, or here finite $\Delta \gg 1$. The case of (2+1)$d$ CFTs with a $U(1)$ symmetry is particularly interesting. To be concrete, let us consider the (2+1)$d$ $U(1)$ Wilson-Fisher CFT. Monte-Carlo simulations have shown a $\Delta_{\rm min}(Q)\sim Q^{3/2}$ scaling of the lightest operator at fixed $Q$ \cite{Banerjee:2017fcx}, implying that this operator creates a state with both finite energy and charge density. Since the theory is fully bosonic, this state is expected to be in a superfluid phase.
As $n$ is decreased past $n_c$, the $U(1)$ symmetry is restored -- since now $d=2$, we expect the transition to be in the Berezinskii-Kosterlitz-Thouless (BKT) universality class. In particular, the thermal mass in the normal phase near the transition behaves as \begin{equation}\label{eq_mBKT} m_{\rm th}(n) \sim \exp \left[-\frac{1}{\sqrt{\beta_c - \beta(n)}}\right]\, , \end{equation} and the equation of state $\beta(n)$ is very smooth, with an essential singularity at $n=n_c$. The phase diagram can be considerably enriched by considering operators with spin $J = j\Delta$, with $0\leq j \leq 1$ (still in the $\Delta\to \infty$ limit)% \footnote{For $d>2$, the Lorentz group has more than one Cartan generator, but we will only consider one large spin quantum number for simplicity, see \cite{Cuomo:2019ejv} for a more general study.}. The corresponding states at zero temperature, i.e.~keeping the charge density as large as possible $n=n_{\rm max}(\epsilon,j)$, were studied in $d=2$ and $d=3$ in Refs.~\cite{Cuomo:2017vzg,Cuomo:2019ejv}, where it was found that the angular momentum of the state is carried by different objects (on top of the superfluid background) depending on $j$: \begin{subequations} \begin{align} 0&\leq j \lesssim \Delta^{-\frac{d}{d+1}}& &\hbox{single phonon} \\ \Delta^{-\frac{d}{d+1}}&\lesssim j \lesssim \Delta^{-\frac{1}{d+1}}& &\hbox{vortex-antivortex pair} \\ \Delta^{-\frac{1}{d+1}}&\lesssim j \lesssim 1& &\hbox{vortex crystal} \end{align} \end{subequations} As $j = \frac{J}{\Delta}\to 1$, the superfluid EFT breaks down and the spectrum is instead governed by the light-cone bootstrap. Departing from the manifold of maximal charge at fixed dimension and spin, these `phases' will be embedded in a larger phase diagram with finite temperature phases. At large enough temperatures, the $U(1)$ symmetry will be restored, and the vortex lattice will melt. 
In Fig.~\ref{fig_phase_diag}, we show a tentative operator spectrum `phase diagram' for heavy operators $\Delta\gg 1$ of a CFT with a global $U(1)$ symmetry. \begin{figure}[h] \vspace{10pt} \centerline{ \begin{overpic}[width=0.75\textwidth,tics=10]{fig/op_phasediag} % \put (-7,70) {\Large $j$} \put (-4,62) {\large $1$} \put (-13,18) {\large $1/\Delta^{\frac{1}{d+1}}$} \put (-13,6) {\large $1/\Delta^{\frac{d}{d+1}}$} \put (-4,0) {\large $0$} % \put (1,-4) {\large $0$} \put (43,-4) {\large $n_c$} \put (87,-4) {\large $n_{\rm max}$} \put (105,0) {\Large $n$} % % \put(6,59){\small \rotatebox{0}{LC bootstrap}} % \put(8,40){{Normal fluid flowing}} \put(8,35){{with velocity $v\sim j$}} \put(13,30){{(Sec.~\ref{ssec_CFT_j})}} % \put(12,12){{Normal fluid}} \put(11,7){{(Sec. \ref{sec_hydro} and \ref{ssec_u1hydro})}} % \put(52,9){{Superfluid}} \put(52,4){{(Sec. \ref{ssec_sflu})}} % \put(50,36){{Vortex lattice}} % \put(75,3){\scriptsize{single phonon}} % \put(72,14){\scriptsize{single vortex}} % \end{overpic} } \vspace{10pt} \caption{\label{fig_phase_diag} Cut in the spectrum (Fig.~\ref{fig_spectrum}) at fixed $\Delta\gg1$, showing a possible `heavy operator phase diagram' for CFTs with a global $U(1)$ symmetry, as a function of their charge $n\sim Q/\Delta^{\frac{d}{d+1}}$ and spin $j\equiv J/\Delta$. Although certain limiting regions are fairly well understood, most regions, cross-overs and transitions are conjectural. For example, a continuous superfluid to normal transition at $j\ll 1$ could also turn into a first order transition at larger $j$, as is observed in holographic superfluids \cite{Herzog:2008he}. 
} \end{figure} Similar phase diagrams have been observed in liquid helium \cite{Lounasmaa7760}, Bose-Einstein condensates \cite{cooper_bec}, thin film superconductors \cite{RevModPhys.91.011002}, and quantum Hall systems; in the last case, the spin per charge is mapped to the filling fraction $\nu=J/Q\sim \Delta^{\frac{1}{d+1}}j/n$ (see e.g.~\cite{cooper_bec,PhysRevB.93.205116,PhysRevLett.122.214505}). Comparison with these systems suggests a number of possible exotic features in the phase diagram in Fig.~\ref{fig_phase_diag}. For example, in (2+1)d, the spinning operators studied in \cite{Cuomo:2017vzg} can lead to states with opposite vorticity at each pole, and vanishing vorticity along the equator. Gapless edge states are then expected to live along the equator. Since these are supported in (1+1)d, their hydrodynamic interactions are relevant and dissipation is anomalous \cite{PhysRevA.16.732,PhysRevLett.89.200601,Ferrari2013,Delacretaz:2020jis}. The CFT spectrum may also probe the melting of the vortex lattice in Fig.~\ref{fig_phase_diag}. In (2+1)d this transition is of infinite order like BKT \eqref{eq_mBKT}, but with different exponents \cite{PhysRevB.19.2457}. Finally, dynamical response and transport near the equilibrium critical point are also singular \cite{RevModPhys.49.435}. We leave a more thorough exploration of this phase diagram for future work. \section{Conclusion} We showed that hydrodynamics controls a large portion of the CFT data, namely OPE coefficients of any two heavy operators close enough in dimensions (see \eqref{eq_regime}) with light neutral operators of any spin. Only light operators with internal quantum numbers can escape this fate: for example fermions, or $\mathbb Z_2$-odd operators in the Ising model. In superfluid states we found that even light operators that are charged under the $U(1)$ have hydrodynamical OPE coefficients. 
More generally, when the thermal state created by the heavy operator contains long-lived excitations that nonlinearly realize a global symmetry, hydrodynamics will control the evolution of light operators charged under that symmetry. Our results apply to thermalizing CFTs in $d+1$ dimensions with $d\geq2$. The infinite tower of Virasoro symmetries makes 1+1d CFTs special. In the thermodynamic limit, thermal correlation functions are trivial and there is no room for a hydrodynamic description. However, they still exhibit thermalization after a quench \cite{Calabrese:2006rx} (towards a generalized Gibbs ensemble for the KdV charges \cite{PhysRevLett.98.050405, Calabrese:2016xau, deBoer:2016bov, Dymarsky:2018lhf, Maloney:2018yrz}), non-trivial non-equilibrium behavior \cite{Bernard:2016nci}, and chaos \cite{Roberts:2014ifa,Jensen:2019cmr,Kudler-Flam:2019kxq}. It would be interesting to see if out-of-equilibrium methods can be used to determine heavy-heavy-light OPE coefficients, comparing against results obtained from other methods \cite{Fitzpatrick:2015zha, Kraus:2016nwo, Basu:2017kzo, Faulkner:2017hll,Collier:2019weq}, see in particular \cite{Brehm:2018ipf,Romero-Bermudez:2018dim,Hikida:2018khg} for discussions on the off-diagonal part of ETH in this context. Far-from-equilibrium techniques and turbulence may also be useful in higher dimensions to determine OPE coefficients $C_{HH'L}$ away from the hydrodynamic linear response regime \eqref{eq_regime}, e.g.~to study $\Delta-\Delta' \gtrsim (\Delta/b_T)^{\frac{1}{d+1}}$. There are a number of possible interesting extensions, which we leave for future work. We list a few below: \begin{itemize} \item It should be possible to extend our results to CFTs with anomalies or non-trivial current algebras by studying hydrodynamics with anomalous Ward identities \cite{Fukushima:2008xe,Son:2009tf, Landsteiner:2011cp, 2group}. \item We have mostly focused on local operators. 
Certain nonlocal operators, for example in gauge theory, have signatures in the corresponding hydrodynamic theories as higher-form charges \cite{Grozdanov:2016tdf,Armas:2018ibg}. \item Operators that are odd under parity (or inversion) can be considered as well, with hydrodynamic tails that depend non-trivially on dimensionality. One can also study heavy operators in CFTs without inversion symmetry, using parity-violating hydrodynamics e.g.~in 2+1d \cite{Jensen:2011xb}. \item Boost symmetry plays only a minor role in Sec.~\ref{sec_hydro} -- hydrodynamic tails control late-time correlators in non-Lorentz-invariant QFTs as well. The CFT implications in Sec.~\ref{sec_largeD} rely on a state-operator map. We expect similar results to exist in non-relativistic CFTs (with Schr\"odinger symmetry), since these also enjoy an operator-state correspondence \cite{Nishida:2007pj}. The large charge bootstrap has already been extended in this direction \cite{Favrod:2018xov,Kravec:2018qnu}. \end{itemize} The present work revealed hydrodynamic constraints on CFTs. It is our hope that the favor may one day be returned, with techniques such as crossing and unitarity leading to constraints on dynamics in thermalizing CFTs, e.g. in the form of bounds on transport and thermalization \cite{Kovtun:2004de,sachdev2007quantum, Hartnoll:2014lpa, Hartman:2017hhp, Delacretaz:2018cfk}. It would also be interesting to explore if the novel features in late-time thermal correlators discussed here have implications for cosmology, where thermal physics enters both in the thermal description of de Sitter space and through the actual temperature of the universe. 
We end with an amusing observation: since reflection-positive Euclidean CFTs can be continued to unitary Lorentzian CFTs \cite{lecturesSaclay,Kravchuk:2020scc}, the equilibrium properties of certain statistical mechanical systems at their critical point know about hydrodynamics in one lower dimension!% \footnote{This should not be confused with dynamical properties of the fixed point, which are controlled by hydrodynamics in the same number of spatial dimensions \cite{RevModPhys.49.435}.} \subsection*{Acknowledgements} I am thankful for inspiring discussions with Nima Afkhami-Jeddi, Alex Belin, Clay C\'ordova, Gabriel Cuomo, Angelo Esposito, Hrant Gharibyan, Paolo Glorioso, Blaise Gout\'eraux, Sean Hartnoll, Nabil Iqbal, Kristan Jensen, Steve Kivelson, Umang Mehta, Sasha Monin, Baur Mukhametzhanov, Jo\~ao Penedones, Riccardo Rattazzi, Dam T.~Son, and Paul Wiegmann. I also thank Gabriel Cuomo, Blaise Gout\'eraux and Diego Hofman for useful comments on a draft of this paper. This work was supported by the Swiss National Science Foundation and the Robert R.~McCormick Postdoctoral Fellowship of the Enrico Fermi Institute.
\section{Introduction} \label{sec:intro} In the currently favored $\Lambda$CDM paradigm, the fossil record of galaxy formation is imprinted in the stellar halos that surround massive galaxies like the Milky Way. The number of substructures, or satellites, embedded in the halo is an important prediction of cosmological models --- and one that depends sensitively on a number of parameters: the small-scale power spectrum, the assumed properties of the dark matter particles and a variety of baryonic processes that can modify the shape of the satellite luminosity function. In principle, a comparison between the predicted and observed number of satellites can provide a straightforward test of cosmological models \citep{moore99a,klypin99a}. In practice, though, there are significant technical challenges and serious selection effects involved in finding and characterizing Galactic satellites. Such satellites have historically been separated into two types of objects thought to have had dissimilar origins: globular clusters and dwarf galaxies. In recent years, it has become fashionable to further subdivide the latter category on the basis of luminosity or surface brightness: i.e., ``classical" versus ``ultra-faint" dwarf galaxies, although these are not physically distinct classes. Beginning with the SDSS \citep{york00a} --- which provided uniform $ugriz$ photometry covering a significant fraction of the sky --- a large number of faint dwarf galaxies and globular clusters have been discovered. The number of known satellites has increased steadily for more than two centuries. Historically, sharp increases in the number of satellites have followed soon after the deployment of powerful new survey facilities (e.g., the telescopes of Herschel, the Oschin Schmidt, the SDSS, Pan-STARRS, and, most recently, the Dark Energy Camera on the Blanco 4m telescope). As of early 2017, the Milky Way is known to contain at least 77 satellites located beyond a Galactocentric radius of 25~kpc. 
It is virtually certain that this number will continue to rise as observing facilities improve. In the next decade, the highly anticipated Large Synoptic Survey Telescope (LSST) should be especially important in improving our census of satellites, at least in the southern hemisphere. In the meantime, it is desirable to characterize the properties of the known satellites in a careful and systematic way. Such efforts are presently hampered by the lack of uniform, high-quality imaging for these objects, which are scattered over the entire sky and span wide ranges in apparent size, luminosity and surface brightness. Existing catalogs containing photometric and structural parameters for the Milky Way satellites have tended to focus on either globular clusters or dwarf galaxies, despite the fact that the distinction between these stellar systems has become increasingly blurred over the years (i.e., at the lowest surface brightnesses, separating these populations is often impossible without spectroscopic information). Moreover, previous compilations continue to rely on shallow and heterogeneous data (with some of the early ones even photographic), some of it dating back to the 1960s (see, e.g., \citealt{djorgovski93,pryor93,trager95a,irwin95a,harris96a,mateo98a,mclaughlin05,mcconnachie12} and references therein). Deep, homogeneous, digital photometry for a nearly complete sample of halo satellites would allow a fresh look into the nature of these satellites. Issues of particular interest include possible connections between the various sub-populations (globular clusters vs. classical dwarfs vs. ultra-faint dwarfs), as well as their density distributions \citep{hubble30,sersic68,elson87,king62a,king66,plummer11a,wilson75}, stellar populations, dark matter content, and evidence for tidal interactions with the Milky Way. Here we introduce a deep, wide-field imaging survey of satellites belonging to the outer Galactic halo. 
The survey, which makes no {\it a priori} selection on the basis of object morphology or classification, is based on homogeneous $g$- and $r$-band imaging for 44 objects obtained with mosaic CCD cameras on the 3.6m Canada-Hawaii-France Telescope (CFHT) and the 6.5m Magellan/Clay telescope. Our imaging is supplemented with photometry assembled from the literature --- or derived from our reduction and analysis of archival imaging --- for an additional 14 objects. In 2015, when the analysis of our secondary targets was completed, the survey provided uniform photometry for a nearly complete ($\gtrsim$95\%) sample of the 60 satellites located more than 25~kpc from the Galactic center. Since that time, a number of additional satellites have been detected, primarily by the Dark Energy Survey (DES), so our sample now represents roughly three quarters of the 77 cataloged members of the outer halo. Our survey is the first to combine depth, wide areal coverage, multi-band imaging, and a high level of completeness for outer halo satellites. In this paper, we describe the survey strategy, sample selection, data reduction and calibration. Previous papers in this series have examined the internal dynamics of the unusual stellar system {\tt Palomar~13} \citep{bradford11}, reported the discovery of {\tt Munoz~1}, an extremely low-luminosity star cluster in the field of the {\tt Ursa Minor} dwarf galaxy \citep{munoz12b}, studied the properties of blue straggler stars in remote satellites \citep{santana13}, examined possible foreground populations in the direction of {\tt NGC2419} and {\tt Koposov~2} \citep{carballobello15}, and characterized the Sagittarius tidal stream in the vicinity of the globular cluster {\tt Whiting~1} \citep{carballobello17}. 
A companion paper \citep{munoz18a} presents homogeneously derived structural parameters, and future papers will explore the density distributions of our sample objects and compare their photometric and structural parameters to those of other stellar systems. This paper is organized as follows. In \S\ref{sec:samp}, we discuss the sample selection for our survey. In \S\ref{sec:obs_primary}, we describe the observing strategy and data reduction procedures for our primary sample of 44 objects observed with the CFHT or Clay telescopes. In \S\ref{sec:obs_secondary}, we discuss the assembly of published or archival data for our secondary sample of 14 recently discovered objects that were not included in the primary sample. \S\ref{sec:checks} discusses some consistency checks on our photometry, while \S\ref{sec:results} presents some illustrative results from our program. We summarize and conclude in \S\ref{sec:summary}. \section{Selection of Targets} \label{sec:samp} The goal of our survey is a homogeneous study of the photometric and structural parameters for a large, unbiased sample of satellites residing in the outer halo of the Milky Way. This is a significant undertaking as it requires deep, uniform, wide-field, and multi-filter imaging for dozens of objects that span a wide range in luminosity, surface brightness and distance, and are scattered across the northern and southern skies. Initially, our survey was designed to rely entirely on CFHT and Magellan imaging for 41 Galactic satellites --- a complete list of Galactic satellites as of 2009 (excluding Sagittarius and the Magellanic Clouds). Henceforth, we shall refer to this sample --- and three more objects that were discovered after the start of the survey --- as our ``primary sample" (see \S\ref{sec:primary}). In \S\ref{sec:secondary}, we discuss a ``secondary sample" of 14 satellites discovered in 2013, 2014 or 2015 that relies on published or archival data. 
\begin{figure}[t] \includegraphics[width=0.5\textwidth]{f1.png} \caption{Distribution of Milky Way satellites in a Cartesian coordinate system centered on the Galactic center. The 44 objects in our primary sample (i.e., having CFHT or Clay imaging) are indicated by the red crosses. Blue crosses indicate objects in our expanded sample: i.e., recently discovered satellites whose structural parameters are examined using archival or literature data. Gray crosses show Galactic globular clusters while green crosses indicate known satellites as of 2017 that are absent from our study. The inner circle at 25~kpc indicates the Galactocentric radius that defines the inner boundary of our study of ``outer halo" substructures. The dashed circle at 200~kpc shows the virial radius of the Milky Way (e.g., Dehnen et~al. 2006).} \label{fig:xyz} \end{figure} \subsection{Primary Sample} \label{sec:primary} In an important departure from most previous studies, we make no {\it a priori} selection on satellite morphology or classification. This seems prudent since, as discussed in \S\ref{sec:intro}, the once-clear distinction between globular clusters, classical dwarf galaxies and ultra-faint dwarf galaxies has become increasingly blurred in recent years. Rather, our sample is defined purely on the basis of location in the Galactic halo, with all overdensities located beyond a Galactocentric distance of $R_{\rm GC} = 25$~kpc considered appropriate targets. Although somewhat arbitrary, this choice for the inner boundary of the ``outer halo" seems reasonable given that inside this radius, halo stars tend to have higher metallicities and different kinematics than their more distant counterparts (see, e.g.,~\citealt{carollo07a,carollo10a} and references therein). While we impose no firm cutoff on {\it outer} radius, we do confine ourselves to satellites that are believed to be members of the Milky Way satellite system (e.g., \citealt{mateo98a,mcconnachie12}). 
Our most distant satellite, {\tt Leo~T}, lies at $D_{\odot} \approx 417$~kpc --- a point at which the distinction between membership in the Galactic and Local Group satellite subsystems becomes unclear. Our next most distant objects --- {\tt Leo~I} and {\tt Leo~II} --- lie at distances of 254 and 233~kpc, respectively. Thus, our survey focuses on the population of satellites between $R_{\rm GC} = 25$~kpc and the virial radius of the Milky Way, which various estimates place between 200 and 300~kpc (e.g., \citealt{klypin02,dehnen06,xue08}). \begin{figure}[t] \includegraphics[width=0.49\textwidth]{f2.pdf} \caption{{\it upper panel:} Aitoff projection in equatorial coordinates of the distribution of our program objects on the sky. Red squares show objects with CFHT or Clay imaging. Blue squares indicate objects in our expanded (secondary) sample, having structural parameters derived from archival or literature data. Galactic globular clusters with $R_{\rm GC} < 25$~kpc are shown as gray squares. Galaxies that were not included in our study are shown as green squares. {\it lower panel:} Same as the upper panel except in Galactic coordinates.} \label{fig:allsky_equatorial} \end{figure} Based on the above criteria, we identified in 2009 a total of 41 satellites belonging to the outer halo of the Milky Way. This sample includes a mixture of globular clusters \citep{harris96a,harris10a} and classical or ultra-faint dwarf galaxies (see, e.g., the compendia of \citealt{mateo98a} and \citealt{mcconnachie12}). Three massive satellites --- the LMC, SMC and Sagittarius dwarf spheroidal galaxy --- were deemed too large and luminous to observe efficiently within the context of this survey and were excluded. At the same time, two satellites that were discovered after our survey began --- {\tt Segue~3} and {\tt Pisces~II} \citep{belokurov10a} --- were added to our sample during the 2010A semester. 
A third object residing in the outer halo --- the ultra-faint globular cluster {\tt Mu{\~n}oz~1}, which is offset by $\sim 1\fdg8$ from the center of the {\tt Ursa~Minor} dwarf galaxy but located well in the foreground --- was, in fact, discovered serendipitously in this survey \citep{munoz12a}. Thus, our primary sample consists of 44 objects that, aside from the aforementioned cases of the LMC, SMC and Sagittarius, represent a complete sample of outer halo satellites as of 2010. Table~\ref{tab:samp1} gives some basic information for these 44 objects. From left to right, the columns record an ID number, adopted name, other names appearing in the literature, position in equatorial ($\alpha$,~$\delta$) and Galactic ($l$,~$b$) coordinates, reddening $E(B-V)$ from \citet{schlafly11a}, Galactocentric distance and Cartesian coordinates in a Galactocentric frame, \begin{equation} \begin{array}{lcl} X & = & R_{\odot}\cos{b}\cos{l} - X_{\odot}\\ Y & = & R_{\odot}\cos{b}\sin{l}\\ Z & = & R_{\odot}\sin{b}, \end{array} \label{eq1} \end{equation} where $X_{\odot} = 8.5$~kpc is our adopted distance to the Galactic center and $R_{\odot}$ is the heliocentric distance to each object, so that $R_{\rm GC} = \sqrt{X^2 + Y^2 + Z^2}$ is the distance to the Galactic center. For each satellite, references to the original discovery paper or papers are given in the final column. As described in \S\ref{sec:secondary}, this sample has been supplemented by an additional 14 satellites discovered between mid-2010, when data acquisition for our primary sample was completed, and 2015. This brings our total sample to 58 objects which, in 2015, represented $\simeq$ 95\% of all known Milky Way satellites beyond 25~kpc (i.e., excluding the three massive systems). Figure~\ref{fig:xyz} shows three projections ($YZ$, $XZ$ and $XY$) of our sample in Cartesian coordinates centered on the Milky Way center (equation~\ref{eq1}). 
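For illustration, the transformation in equation~\ref{eq1} can be sketched as a short routine; the function names are ours, and $X_{\odot} = 8.5$~kpc is the value adopted above.

```python
import math

# Sketch of equation (1): convert Galactic coordinates (l, b) in degrees
# and a heliocentric distance R_sun (kpc) into Galactocentric Cartesian
# coordinates, with the Sun placed 8.5 kpc from the Galactic center.
X_SUN = 8.5  # kpc, adopted distance to the Galactic center

def galactocentric(l_deg, b_deg, r_helio):
    l, b = math.radians(l_deg), math.radians(b_deg)
    x = r_helio * math.cos(b) * math.cos(l) - X_SUN
    y = r_helio * math.cos(b) * math.sin(l)
    z = r_helio * math.sin(b)
    return x, y, z

def r_gc(l_deg, b_deg, r_helio):
    # R_GC = sqrt(X^2 + Y^2 + Z^2)
    x, y, z = galactocentric(l_deg, b_deg, r_helio)
    return math.sqrt(x * x + y * y + z * z)
```

As a sanity check, an object toward the Galactic anticenter ($l=180$, $b=0$) at a heliocentric distance of 1~kpc sits at $R_{\rm GC} = 9.5$~kpc.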
Objects belonging to our primary and secondary samples are shown as red and blue crosses, respectively. Globular clusters at $R_{\rm GC}<25$\,kpc are denoted by gray crosses, and satellites absent from our study are shown as green crosses. The inner and outer circles plotted in each panel show our adopted boundary for the outer halo and the virial radius of the Milky Way according to \citet{dehnen06}, respectively. The arrow indicates the direction to M31 in each projection. Figure~\ref{fig:allsky_equatorial} shows the distribution of these same populations on the sky, in equatorial and Galactic coordinates (upper and lower panel, respectively). \subsection{Secondary Sample} \label{sec:secondary} Data acquisition for the 44 objects belonging to our primary sample was completed in mid-2011. Between that time and 2015, a number of new Galactic satellites were identified, many located beyond the Galactocentric distance of $R_{\rm GC} = 25$\,kpc that defines the boundary of the outer halo sample. We therefore define a ``secondary" sample for our survey that is listed in Table~\ref{tab:samp2}. This table presents the same basic information for the 14 new Milky Way satellites as Table~\ref{tab:samp1} did for our primary sample. From left to right, the columns of this table give an ID number, adopted name, other names appearing in the literature, position in equatorial and Galactic coordinates, reddening from \citet{schlafly11a}, Galactocentric distance, Cartesian coordinates in a Galactocentric frame (equation~\ref{eq1}) and references to the original discovery paper or papers. The 14 objects in our secondary sample include a mixture of probable or confirmed dwarf galaxies, probable or confirmed globular clusters, and objects whose classification remains ambiguous at the present time (i.e., dynamical masses and metallicity distribution functions are needed to surmise their true natures). 
In every case, the initial discovery was based on survey data acquired with wide-field imaging telescopes. For instance, imaging from Pan-STARRS led to the detection of {\tt Triangulum~II} \citep{laevens15a} while {\tt Crater} was discovered independently by \citet{laevens14} using Pan-STARRS and by \citet{belokurov14} using VST/ATLAS\footnote{A French amateur astronomer, Pascal Le D\^u, also discovered Crater using a small-aperture telescope. He published his results in the French magazine L'Astronomie in January of 2014. The article can be found at http://www.cielocean.fr/uploads/images/FichiersPDF/L-Astronomie-\_Janvier2014.pdf}. Three additional satellites were discovered in SDSS imaging: {\tt Balbinot~1} \citep{balbinot13}, {\tt Kim~1} \citep{kimjerjen15a} and {\tt Pegasus~3} \citep{kim15b}. However, it has been the deployment of the Dark Energy Camera (DECam; \citealt{flaugher15}) on the Blanco telescope that has produced the largest harvest of satellites. A total of seven faint stellar systems --- {\tt Eridanus~3}, {\tt Horologium~I}, {\tt Reticulum~II}, {\tt Eridanus~II}, {\tt Pictoris~I}, {\tt Tucana~2} and {\tt Phoenix~2} --- were identified by two independent groups \citep{bechtol15a,koposov15a} using imaging from the {\it Dark Energy Survey} (DES; \citealt{diehl14}). An eighth DES satellite, {\tt Grus~I}, was discovered by \citet{koposov15a}, and a ninth, {\tt Indus~1}, was actually discovered by \citet{kim15a} using DECam imaging obtained as part of the {\it Stromlo Milky Way Satellite Survey} and later identified in the DES \citep{bechtol15a,koposov15a}. A tenth object, {\tt Horologium~II}, was identified by \citet{kimjerjen15b} using DES Y1A1 public data while {\tt Hydra~II} was discovered by \citet{martin15} in their DECam {\it Survey of the Magellanic Stellar History} (SMASH). 
\begin{figure} \includegraphics[width=0.48\textwidth]{f3.png} \caption{({\it Left panels}) Difference between the CFHT instrumental and SDSS calibrated magnitudes as a function of $(g-r)$ color for stellar sources in CVn~I. This comparison includes data from all $36$ chips, for objects having $g$- and $r$-band SDSS magnitudes in the range 18--21.5. ({\it Right panels}) Similar to the previous panels, except for stars in Leo~IV as observed by the Clay telescope.} \label{fig:fig_calib} \end{figure} At this stage, the nature of many of these satellites is an area of active investigation. Spectroscopy for member stars in a handful of systems has provided velocity dispersions, mass-to-light ratios and elemental abundances for a few objects, allowing them to be classified as either dwarf galaxies (e.g., {\tt Horologium~I}, {\tt Reticulum~II}, {\tt Hydra~II}; \citealt{koposov15b, simon15,kirby15}) or globular clusters ({\tt Crater}; \citealt{kirby15}). But for most of the new satellites, only preliminary classifications are available --- usually based on their structural properties. In this survey, we are most concerned with the measurement of structural properties from homogeneous, high-quality CCD imaging. Fortuitously, for nearly all of these objects, either the discovery or follow-up observations include imaging in the SDSS $g$ and $r$ filters --- the same filter combination used for our primary survey. It is therefore possible to maintain the uniformity and homogeneity of the primary survey by adding published photometry, or photometry derived from data in the archive, for these newly discovered satellites. Details on the photometric catalogs for our secondary objects --- including both photometry assembled from the literature and photometry obtained using data retrieved from public archives --- will be presented in \S\ref{sec:obs_secondary}. 
It is worth noting that the number of known Galactic satellites continues to rise, with many objects identified since 2015 (e.g., \citealt{kimjerjen15b}, \citealt{kim15b}, \citealt{bechtol15a}, \citealt{koposov15a}, \citealt{luque16a}, \citealt{luque17a}, \citealt{laevens15a, laevens15b}, \citealt{torrealba16a, torrealba16b}, \citealt{homma16a}). At the time of writing, our sample represents roughly three quarters of the 77 known satellites having $R_{\rm GC} \ge 25$~kpc. \citet{munoz18a} gives more details on those satellites that are absent from our samples. \begin{figure} \includegraphics[width=0.485\textwidth]{f4.png} \caption{ Photometric zeropoint plotted as a function of airmass for CFHT observations of CVn~I. The upper and lower symbols show the trends observed in the $g$ and $r$ bands, respectively.} \label{fig:zpoint_am_cfht} \end{figure} \section{Imaging and Data Reductions for Primary Targets} \label{sec:obs_primary} For our primary sample, observations for northern hemisphere objects were carried out using the MegaCam instrument on the 3.6m CFHT. In the south, observations were made using the mosaic camera --- also named Megacam --- on the 6.5m Magellan II-Clay telescope. To avoid confusion we henceforth refer to these instruments as CFHT-MegaCam and Clay-Megacam. Table~\ref{tab:runs} summarizes the details of the six CFHT and Clay observing programs, amounting to $\sim$ 100~hrs of telescope time, that comprise our survey. \subsection{CFHT-MegaCam Imaging} \label{subsec:cfht} CFHT-MegaCam is a wide-field imager consisting of $36$ CCDs --- each measuring $2048\times4612$ pixels --- that together cover a $0\fdg96\times0\fdg94$ field of view at a scale of $0\farcs187$~pixel$^{-1}$ \citep{boulade03}. All observations were carried out in queue mode during the 2009-A, 2009-B and 2010-A observing semesters. Table~\ref{tab:obslog} provides details on the observations for our primary objects. 
From left to right, the columns of this table record the target name, telescope, mosaic geometry, total areal coverage, source of astrometric and photometric calibration (see \S\ref{subsec:astrometry} and \S\ref{subsec:photometry}, respectively), mean airmass, and total exposure time in the $g$ and $r$ bandpasses. Although the color baseline offered by these two filters is limited, their choice allows us to minimize exposure times for color-magnitude diagram (CMD) analyses (see \S\ref{subsec:comparison2}). Exposure times were chosen so that the $5\sigma$ point-source limiting magnitude for all objects, and in both bands, lies $\sim$ 2--3 magnitudes below the main sequence turnoff (MSTO). For 22 of our 30 CFHT targets, a single pointing was adequate to provide complete coverage. For the remaining eight objects, a grid of either $2\times1$ or $2\times2$ pointings was used depending on the spatial extent of the satellite. In all cases, a series of dithered exposures was collected, usually in dark conditions. The dithering pattern used was selected from the standard CFHT-MegaCam operation options to provide coverage of both the small and large gaps between chips (i.e., the largest vertical gaps in MegaCam are six times wider than the small gaps). Typical image quality for the CFHT imaging is $\approx 0\farcs7-0\farcs9$. Altogether, the CFHT component of our MegaCam survey covers a total area of 43.25 deg$^2$. For two program objects --- {\tt Pal~3} and {\tt NGC7492} --- imaging was collected using both facilities as a cross check on our photometry and astrometry. The results of this comparison will be presented in \S\ref{subsec:comparison1} below. \begin{figure} \includegraphics[width=0.48\textwidth]{f5.png} \caption{ Photometric zeropoint plotted as a function of airmass for imaging carried out with the Clay telescope. In this case, the data points show different objects observed using the same exposure time. 
As in the previous figure, the upper and lower symbols show the trends in $g$ and $r$ bands, respectively.} \label{fig:zpoint_am_clay} \end{figure} \subsection{Clay-Megacam Imaging} \label{subsec:clay} For 16 additional targets, including {\tt Pal~3} and {\tt NGC7492}, in the southern hemisphere, Clay-Megacam imaging was acquired during eight nights on the 6.5m Clay telescope in November 2010 and April 2011. Clay-Megacam is a large mosaic CCD camera that also consists of $36$ CCDs ($2048\times4608$ pixels) but at a scale of $0\farcs08$~pixel$^{-1}$ \citep{mcleod15}. This array provides instantaneous coverage of a $0\fdg4\times0\fdg4$ field. Because this field of view is about five times smaller than that of its northern counterpart, we used multiple pointings for the most extended objects; in the case of Carina, we cover an area of $2.6$\,deg$^2$ in 16 different fields. For the more compact objects --- usually globular clusters --- only a single pointing was needed, with the target positioned at the center of the mosaic. To maintain survey homogeneity, Clay-Megacam images were also taken in the Sloan $g$ and $r$ filters. In all cases, we collected five dithered exposures per pointing in each filter to cover chip gaps. Images were usually acquired in dark time and during seeing conditions comparable to those at CFHT ($0\farcs7-1\farcs1$). Excluding the two targets that appear in both our CFHT and Clay programs (see below), the Clay imaging covers a total area of 8.75 deg$^2$. In all, our MegaCam imaging covers a combined area of 52 deg$^2$. \subsection{Image Processing and Astrometric Calibration} \label{subsec:astrometry} Data from both instruments used in our primary survey were pre-processed prior to delivery. For CFHT-MegaCam, preprocessing was done by the CFHT staff using the standard {\it Elixir} package while the Clay-Megacam data were pre-reduced at the Harvard-Smithsonian Center for Astrophysics (CfA). 
In both cases, the goal of preprocessing is to provide the user with frames that are corrected for the instrumental signature across the mosaic. This involves bad pixel correction, bias subtraction, flat fielding and the calculation of preliminary astrometric and photometric solutions that are included in the headers of the pre-processed images. However, the World Coordinate System (WCS) information provided with the processed data is only approximate. For both the Clay and CFHT data, we therefore refine the astrometric solution using the latest freely available SCAMP\footnote{{\tt http://astromatic.net/software/scamp/}} package \citep{bertin06a}. First, Terapix SExtractor\footnote{{\tt http://astromatic.net/software/sextractor/}} \citep{bertin96a} was run on all chips (SCAMP reads in the output files generated by SExtractor) and output files were written in the FITS-LDAC format (where LDAC = Leiden Data Analysis Center). SCAMP was then run on all chips separately. SCAMP uses the approximate WCS information in the frames' headers as a starting point, and then computes astrometric solutions using external reference catalogs. In our case, we used the GAIA (DR1, \citealt{gaia16a}) catalog for 39 objects for which the combination of spatial density and magnitude overlap yielded enough stars in common to determine a reliable solution. For the other five objects, the solutions based on GAIA were not precise enough due to the low number of stars per chip in common, and thus we used the SDSS-Data Release 7 (DR7, \citealt{abazajian09a}) for three of them present in SDSS, and USNO-B1 catalog for the remaining two targets that fall outside the SDSS footprint. CFHT-MegaCam chips are five times larger in terms of sky coverage than those of Clay-Megacam so we found systematically more stars in common between each of our chips and the reference catalog for the CFHT data. 
Generally speaking, a couple of hundred stars were used to compute the astrometric solution for CFHT chips, while several tens of stars were typically used for the case of Clay data. Despite the difference in sample size, these are sufficient in both cases to avoid significant shot-noise and thus, our astrometric uncertainties do not depend on the instrument but on the reference catalog used for each object. For objects where GAIA was used, we typically obtained global astrometric uncertainties of $rms\sim0\farcs04-0\farcs06$ with internal accuracy typically better than$\sim0\farcs02$. For those in SDSS, we obtained $rms\sim0\farcs10-0\farcs20$ and for those objects in which we used USNO-B1 as the reference catalog, typical $rms$ uncertainties in the astrometry were $\sim0\farcs3$. The output from SCAMP is a single FITS header file per processed frame. For the CFHT-MegaCam images, this SCAMP output was used to update the WCS information for each chip. Point source photometry was then performed on the images with the updated headers (see \S \ref{subsec:photometry}). The final photometry was then used to translate $x$ and $y$ stellar positions into equatorial coordinates using the astrometric solution coefficients in the image headers. In the case of the Clay-Megacam images, the celestial projection used by the CfA team to determine the preliminary astrometric solution is zenithal polynomial. Unfortunately, this projection is incompatible with the current version of SCAMP, so the images were reprojected using REMAP into a tangential projection (which is SCAMP compatible). There is, however, an uncertainty involved in translating a given pixel value from one projection to another --- a process that introduces small but noticeable differences in the magnitudes obtained in the subsequent photometry. Therefore, we carried out the photometry in the images with the original zenithal polynomial projection. 
The $x$ and $y$ positions of stars in the catalogs were then translated into $x'$ and $y'$ positions corresponding to the same star in the image reprojected into a tangential projection. Finally, these tangential $x'$ and $y'$ positions were transformed into equatorial coordinates using the WCS information obtained using SCAMP in the tangential reprojected image. \subsection{PSF Photometry} \label{subsec:photometry} The photometric processing was similar for images from both telescopes. Prior to carrying out point-source photometry on our data, we split each mosaic frame into its 36 individual chips. We then performed point spread function (PSF) photometry by first running DAOPHOT/ALLSTAR on the individual (non-coadded) frames and then running ALLFRAME on the resulting files, as detailed in \citet{stetson94a}. ALLFRAME performs photometry simultaneously on all $g$ and $r$ frames for a given field. DAOPHOT/ALLSTAR must be run prior to ALLFRAME in order to determine PSF solutions for each chip, and to generate starlists for individual frames. The optimum starlists that are needed as inputs to ALLFRAME were generated by cross matching the DAOPHOT/ALLSTAR results for the individual frames using the DAOMATCH and DAOMASTER packages (\citealt{stetson93a}). These packages also provide reasonably good estimates of the spatial offsets between dithered individual exposures necessary to run ALLFRAME. Final output files from ALLFRAME were then combined into a single master catalog for each program object. \subsection{Photometric Calibration: Objects in Common with SDSS} \label{subsec:calibration1} For objects that fall inside the SDSS footprint, our instrumental magnitudes have been calibrated through a direct comparison to the SDSS-DR7. First, we matched our photometric catalog for each object with the SDSS stellar catalog, typically finding several hundred stars per chip in common with the SDSS. 
To determine zeropoints and color terms, we used only SDSS stars with $18<r_{\rm SDSS} <21.5$ and $18<g_{\rm SDSS} <22$. The faint limit was chosen to eliminate stars from SDSS with large photometric uncertainties and the bright limit was chosen to avoid saturated stars in our MegaCam data. We then used the matched catalog to fit equations of the form: \begin{equation} \begin{array}{rrcll} g_{\rm SDSS} & = & g + g_{\rm 0} + g_{\rm 1}(g-r) \\ r_{\rm SDSS} & = & r + r_{\rm 0} + r_{\rm 1}(g-r). \end{array} \label{eq2} \end{equation} \noindent Here $g$ and $r$ are our instrumental magnitudes, $g_{\rm 0}$ and $r_{\rm 0}$ are the zeropoints and $g_{\rm 1}$ and $r_{\rm 1}$ are the color terms, Because we are calibrating directly to SDSS photometry, we do not need to determine the airmass terms. In their CFHT-MegaCam study of {\tt Coma Berenices} and {\tt Ursa Major~II}, \citet{munoz10a} derived zeropoints and color terms for each chip individually, in order to examine possible chip-to-chip variations for this instrument. In both cases, they found that the chip-to-chip differences, for both the zeropoints and color terms, were smaller than the uncertainties in the derived parameters. For this study, we repeated this test using {\tt CVn~I} and {\tt Segue~1} and found similar results. Unfortunately, for the Clay-Megacam data, we could not carry out the same test given the low number of stars per chip in common with SDSS (i.e., all the Clay targets that fall within the SDSS footprint are ultra-faint dwarf galaxies with low stellar counts). As an alternative, we used stars in the overlapping regions between chips to assess whether there were systematic chip-to-chip variations. For these stars, we found that the average magnitude difference between the chips was always smaller than the magnitude uncertainties of our stars. 
For each satellite, we therefore combined stars from all 36 chips to derive global zeropoint and color term values via a linear least-squares fit (weighting by the respective uncertainties in the ALLFRAME magnitudes and rejecting $3\sigma$ outliers). We calculated zeropoints and color terms for each mosaic field independently. For the CFHT-MegaCam calibration, uncertainties in the zeropoints were typically $0.003-0.004$~mag. While the $g'_{\rm 0}$ and $r'_{\rm 0}$ terms are a function of exposure time and airmass, the color terms remain fairly constant for all objects, with variations of less than $2$\%. For the Clay-Megacam calibration, typical uncertainties in the zero points are $0.002-0.009$~mag. The color terms showed object-to-object variations of less than $5$\%. A comparison between the CFHT and SDSS magnitudes used to calibrate our photometry is shown in Figure~\ref{fig:fig_calib} for two representative stellar systems: {\tt CVn~I} (CFHT) and {\tt Leo~IV} (Clay). \subsection{Photometric Calibration: Objects Not in Common with SDSS} \label{subsec:calibration2} Some objects in our primary sample fall outside the SDSS footprint and therefore require a different method of calibration. In such cases, the instrumental magnitudes were calibrated by applying the following equation: \begin{equation} \begin{array}{rrcll} g_{\rm SDSS} & = & g + g_{\rm 0} + g_{\rm 1}(g-r) + g_{\rm 2}X \\ r_{\rm SDSS} & = & r + r_{\rm 0} + r_{\rm 1}(g-r)+ r_{\rm 2}X. \end{array} \label{eq3} \end{equation} \noindent where $g_{\rm 2}$ and $r_{\rm 2}$ are the airmass terms, and $X$ is the airmass. The full set of coefficients $g_{0}$, $g_{1}$, $g_{2}$, $r_{0}$, $r_{1}$ and $r_{2}$ where determined using the photometry of objects in SDSS as secondary standards. 
The color terms derived for CFHT were $\langle{g_{\rm 1,CFHT}}\rangle=0.203\pm0.003$ and $\langle{r_{\rm 1,CFHT}}\rangle=0.021\pm0.002$, while for Clay, we obtained $\langle{g_{\rm 1,Clay}}\rangle=-0.098\pm0.010$ and $\langle{r_{\rm 1,Clay}}\rangle=0.052\pm0.005$. To determine the airmass terms, we calculated the variation of the zeropoints as a function of airmass for objects in the SDSS. Figure~\ref{fig:zpoint_am_cfht} shows the $g$- and $r$-band zeropoints obtained for a variety of airmasses and the linear trends fitted to them: \begin{equation} \begin{array}{rrcll} g_{\rm 2,CFHT} & = & -0.176\pm0.015 \\ r_{\rm 2,CFHT} & = & -0.080\pm0.007 \end{array} \label{eq5} \end{equation} To derive the Clay airmass terms, we used different objects at different airmasses, taking advantage of the fact that most of the objects observed at Clay had the same exposure time. In particular, we used those objects having the lowest photometric errors in the Clay sample: {\tt Leo~IV}, {\tt Leo~V}, {\tt Palomar~3}, {\tt Segue~1}, and two different fields for {\tt Leo~II}. Figure~\ref{fig:zpoint_am_clay} shows zeropoints obtained for these systems at different airmasses. The best-fit linear trends yield airmass terms of: \begin{equation} \begin{array}{rrcll} g_{\rm 2,Clay} & = & -0.222\pm0.011 \\ r_{\rm 2,Clay} & = & -0.094\pm0.009. \end{array} \label{eq6} \end{equation} Lastly, we determined zeropoints $g_{\rm 0}$ and $r_{\rm 0}$. For our Clay targets, this was carried out simultaneously with the measurement of the extinction terms since the exposure times were the same. The resulting zeropoints were found to be: \begin{equation} \begin{array}{rrcll} g_{\rm 0,Clay} & = & 7.016\pm0.014 \\ r_{\rm 0,Clay} & = & 7.463\pm0.011. 
\end{array} \label{eq7} \end{equation} Meanwhile, for the CFHT observations, the zeropoint variations with exposure time (after correcting for airmass) had to be computed explicitly since, in this case, the images were taken over a fairly wide range in exposure time. For objects in common with the SDSS we found: \begin{equation} \begin{array}{rrcll} g_{\rm 0,CFHT} & = & 1.476 + 2.5\log{T_{exp}} \\ r_{\rm 0,CFHT} & = & 1.014 + 2.5\log{T_{exp}} \end{array} \label{eq8} \end{equation} Using these relations, zeropoints were then calculated for the remaining objects using their respective exposure times. Table~\ref{t:photometry_all} presents the full catalogs for all 44 primary targets. The table includes the 2000.0 equatorial coordinates, calibrated, unreddened $g$ and $r$ magnitudes as well as their uncertainties. We also include the DAOPHOT $chi$ and $sharp$ parameters. We removed the majority of spurious and non-stellar detections by applying the following cut: $-0.5<sharp<0.5$ and $chi<3$. Stars from different objects can be distinguished by their ID name. \begin{figure} \includegraphics[width=0.47\textwidth]{f6.png} \caption{ Comparison of the color-magnitude diagrams for two of our program objects, NGC7492 and Pal~3, observed from both hemispheres. The left and right panels show data within the half-light radii obtained with Clay and CFHT, respectively. The Clay-Megacam data appear deeper showing narrower sequences at fainter magnitudes.} \label{fig:cmd_comp} \end{figure} \section{Imaging and Data Reductions for Secondary Targets} \label{sec:obs_secondary} As detailed in \S\ref{sec:secondary}, in addition to the 44 objects observed with the Megacam imagers, we include 14 objects discovered after the completion of our observing campaign. 
For eight satellites discovered using DECam data from the DES survey \citep{bechtol15a,koposov15a}, {\tt Eridanus~II} and {\tt 3}, {\tt Horologium~I}, {\tt Reticulum~II}, {\tt Pictoris~I}, {\tt Indus~1}, {\tt Grus~I} and {\tt Phoenix~2} we retrieved archival DECam data from the NOAO Science archive\footnote{http://www.portal-nvo.noao.edu/}. In the case of Indus~1, discovered independently, we also obtained Kim's et al. photometry. We obtained the photometry for two other satellite candidates, {\tt Peg~3} \citep{kim15b} and {\tt Tucana~2} (from the DES sample), but unfortunately our method to retrieve structural parameters was not able to converge due to the low number of stars in the filed and therefore we do not include them in the final list. Table~\ref{tab:obslog2} presents a summary of observations for the secondary targets. The archival data used in this catalog consisted, in most cases, of one DECam pointing observed in both the $g-$ and $r-$bands. The DECam imager consists of $62$ $2048\times4096$ pixel chips with a pixel scale of $0.2626$\,arcsec/pixel covering a total area of $3$\,deg$^{2}$. In some cases ({\tt Horo~I}, {\tt Ret~II}, {\tt Eri~II}), a second DECam pointing overlapping the position of the satellite was present in the archives and thus both pointings were reduced together and combined to cover the chip gaps. In all cases, the exposure times were $90$ seconds. The subsequent photometry procedure was similar to that carried out for the Megacam imagers, i.e., DAOPHOT, ALLSTAR was performed in all the individual images and ALLFRAME was performed in the cases where more than one observation per field was used. Equatorial coordinates for all objects detected by the DAOPHOT/ALLSTAR routines were obtained using the WCS information provided in the image headers. Comparison between stellar detections present in multiple observations of the same field showed that the internal precision was better than $0.1$\,arcsec. 
\begin{figure} \includegraphics[width=0.48\textwidth]{f7.png} \caption{ Histograms showing the difference in $g$- and $r$-band magnitude for stellar sources in Pal~3 (upper panels) and NGC7492 (lower panels), both of which were observed from Clay and CFHT The smooth curve in each panel shows the best-fit gaussian distribution.} \label{fig:hist_comp1} \end{figure} To calibrate the instrumental photometry we used DECam data taken for a different program in the same bands. From our own DECam data we estimated the zero points and color terms to be: \begin{equation} \begin{array}{rrcll} g_{\rm 0,DECam} & = & 4.960\pm0.031\\ r_{\rm 0,DECam} & = & 5.010\pm0.024\\ \end{array} \label{eq9} \end{equation} and \begin{equation} \begin{array}{rrcll} g_{\rm 1,DECam} & = & 0.102\pm0.026\\ r_{\rm 1,DECam} & = & 0.113\pm0.021\\ \end{array} \label{eq10} \end{equation} These values are consistent within the uncertainties with the zero points derived by the DECam SMASH survey of the Magellanic Clouds \citep{nidever17a}. \begin{figure*} \plotone{f8.png} \caption{ Color-magnitude diagrams for four of our program objects: NGC2419, CVn~I, UMa~II and ComBer. The panels on the left show CMDs based on data from SDSS (DR12) while those on the right show our new CFHT and Clay photometry.} \label{fig:comp_sdss} \end{figure*} Unfortunately, we were not able to derive an airmass term from our dataset, and therefore we used the same zero-points for all the DECam data we processed. If we assume that the missing airmass term is similar to those derived for Clay and CFHT the uncertainty introduced in the zero-points by not correcting for this effect is of the order of $0.05$ magnitudes in the $g-$band and $0.02$ magnitudes in the $r-$band. We include these values when estimating the global photometric uncertainties and the subsequent luminosity values derived from them. 
For an additional six satellites, {\tt Laevens~1} \citep{laevens14}, also known as {\tt Crater} \citep{belokurov14}, {\tt Triangulum~II} \citep{laevens15a}, {\tt Horologium~II} \citep{kimjerjen15b}, {\tt Hydra~II} \citep{martin15}, {\tt Kim~1} \citep{kimjerjen15a} and {\tt Kim~2} \citep{kim15a} and {\tt Balbinot~1} \citep{balbinot13}, the respective authors were kind enough to send us their photometric catalogs for the purpose of measuring their structural parameters. \begin{figure} \includegraphics[width=0.48\textwidth]{f9.png} \caption{ ({\it Upper left panel}) Histogram of the difference between the $g-$ band magnitude from the CFHT and the SDSS catalogs for {\tt CVn~1}. ({\it Lower left panel}) Same as above but for {\tt NGC2419}. ({\it Upper right panel}) Histogram of the difference between the $g-$ band magnitude from the Clay and the SDSS catalogs for {\tt Palomar~3}. ({\it Lower right panel}) Same as above but for {\tt Leo~II}.} \label{fig:phot_comp_g} \end{figure} \section{Consistency Checks} \label{sec:checks} \subsection{Comparison of CFHT and Clay Photometry} \label{subsec:comparison1} Three objects in our primary sample were observed using {\it both} CFHT and Clay: {\tt Palomar~3}, {\tt Segue~1} and {\tt NGC7492}. This provides us with an opportunity to assess the overall homogeneity of the photometry obtained with the northern and southern facilities. Although the exposure times were slightly shorter for our Clay observations (see Table~\ref{tab:obslog} for details) this is roughly offset by the larger telescope aperture, resulting in comparable depths in both objects. Because a single pointing was used for both targets, though, the CFHT data have the advantage of covering a $\sim$ five times larger field. The upper and lower panels of Figure~\ref{fig:cmd_comp} show CMDs for the inner regions of {\tt Palomar~3} and {\tt NGC7492}, respectively. Results from Clay are shown in the left panels, while those from CFHT are shown on the right. 
A visual inspection of this figure shows that the data are of comparable quality at the bright end, but the Clay-Megacam data are deeper, which is evident by the narrower main sequence at the faint end. To compare the photometry from the two instruments, Figure~\ref{fig:hist_comp1} shows histograms for $(g_{\rm Clay}-g_{\rm CFHT})$ and $(r_{\rm Clay}-r_{\rm CFHT})$ for both objects. This comparison uses sources down to $r_{\rm Clay}=24$ and applies a cut of $-0.5 < {\tt sharp} < 0.5$ to isolate only star-like detections. For {\tt Palomar~3}, the distributions are centered on $(g_{\rm Clay}-g_{\rm CFHT})=0.011$ and $(r_{\rm Clay}-r_{\rm CFHT})=0.022$ with dispersions of $\sim0.031$ and $\sim0.030$, respectively. In the case of {\tt NGC7492}, the distributions are centered on $(g_{\rm Clay}-g_{\rm CFHT})=0.038$ and $(r_{\rm Clay}-r_{\rm CFHT})=0.049$. The respective dispersions are $0.070$ and $0.070$. \begin{figure} \includegraphics[width=0.48\textwidth]{f10.png} \caption{ Same as Figure~\ref{fig:phot_comp_g} but for $r-$band magnitudes.} \label{fig:phot_comp_r} \end{figure} \subsection{Comparison of MegaCam and SDSS Photometry} \label{subsec:comparison2} Deep, homogenous photometry for our program objects is essential if we are to achieve the scientific goals laid out in \S\ref{sec:intro}: namely, the analysis of CMDs, star formation histories, density distributions and structural parameters for a nearly complete sample of outer halo satellites. An obvious point of comparison is the SDSS, which has had an enormous impact on our census and understanding of the halo and its substructures. Recall from \S\ref{subsec:astrometry} that SDSS photometry is available for 27 of our 44 primary survey targets. In Figure~\ref{fig:comp_sdss}, we compare our new CMDs to those from SDSS (DR12) for four of our program objects. From top to bottom, the panels in this figure show CMDs for {\tt NGC2419}, {\tt CVn~I}, {\tt UMa~II} and {\tt ComBer}. 
Results from SDSS are shown in the left panels, while those from our program (all based on CFHT data) are shown on the right. The comparison has been restricted to the inner regions of the targets (roughly within their respective effective radii, the actual radii are shown in the Figure). In all cases, the SDSS data were limited to unresolved sources with photometric uncertainties lower than $0.25$~mag in both the $g$ and $r$ bands. Similarly, the CFHT data were restricted to detections having $-0.5 < {\tt sharp} < 0.5$ and $ {\tt chi} < 3$ in order to eliminate as many extended sources as possible. Similar restrictions on the $g$ and $r$ photometric errors were also applied. For all four objects, Figure~\ref{fig:comp_sdss} shows there is a dramatic improvement in depth and precision compared to SDSS. The SDSS CMDs typically reach only to a depth of $g \sim 22-23$ which is adequate only to identify the red giant branch at the distances of {\tt NGC2419} and {\tt CVn~I}. For {\tt UMa~I} and {\tt ComBer}, it is just possible to identify their main sequence turnoffs (MSTOs). By contrast, all the major evolutionary sequences are easily identified and well defined in the panels on the right, as the CFHT-MegaCam photometry reaches several magnitudes below the MSTO. Figures~\ref{fig:phot_comp_g} and \ref{fig:phot_comp_r} show histograms of the difference between our $g-$ and $r-$band magnitudes for a region of $20$\,arcmin around four objects. The left panels show the difference between CFHT and SDSS data for {\tt CVn~1} and {\tt NGC2419} while the right panels show the difference between Clay and SDSS data for {\tt Palomar~3} and {\tt Leo~II}. Figure~\ref{fig:phot_comp_diff} show the difference in $g-$ magnitudes as a function of depth. All histograms are centered around zero, as expected, and their dispersions are consistent with the photometric uncertainties, typically larger for the shallower SDSS data. 
\begin{figure} \includegraphics[width=0.48\textwidth]{f11.png} \caption{ ({\it Upper left panel}) Difference in $g-$band magnitudes between CFHT and SDSS data versus CFHT $g-$band magnitude for {\tt CVn~1}. ({\it Lower left panel}) Same as above but for {\tt NGC2419}. ({\it Upper right panel}) Difference in $g-$band magnitudes between CFHT and SDSS data versus CFHT $g-$band magnitude for {\tt Palomar~3}. ({\it Lower right panel}) Same as above but for {\tt Leo~II}.} \label{fig:phot_comp_diff} \end{figure} As noted in \S\ref{sec:obs_primary}, exposure times were chosen to ensure that our photometry would reach several magnitudes below the MSTO point in each of our program objects, irrespective of distance. This was achieved for most objects, except {\tt Leo~T}, {\tt CVn~I} and {\tt NGC2419}. Figure~\ref{fig:depth} illustrates the depths reached in our primary survey. Results for the $g$ and $r-$bands are shown in the upper and lower panels, respectively. The panels on the left show the approximate $5\sigma$ limits for point sources in our MegaCam targets; there is a distribution in limiting magnitude that broadly peaks between $\sim$ 25 and 26 AB mag. In the right panels, we combine these limiting magnitudes with distances for our targets to show the approximate depths we reach below the MSTO. As expected, we see broad distributions in both bands that peak $\sim$ 2--3 magnitudes below the MSTO. Finally, in Figure~\ref{fig:comp_astro} we compare the equatorial coordinates of the stellar-like detections ($-0.5<sharp<0.5$) for two objects observed by both imagers, {\tt Segue~1} and {\tt Palomar~3}. The upper panels show histograms of the difference, in arcsec, between the positions determined from the Clay and the GAIA data for the same stars. The lower panels show the same differences but this time between Clay and CFHT data to check for internal consistency. We note that stellar-like objects down to our magnitude limit are included. 
\begin{figure} \includegraphics[width=0.48\textwidth]{f12.png} \caption{({\it Upper left panel}) Distribution of limiting magnitude ($5\sigma$, point source limits) for our 44 CFHT or Clay program objects. For comparison, the dashed vertical lines show the corresponding limits from a number of other notable surveys: i.e., Sloan Digital Sky Survey (SDSS), the Pan-STARRS 3PI survey, a single visit from LSST, the coadded Dark Energy Survey (DES) and the five year, coadded depth from LSST. ({\it Lower left panel}) Same as above, except for the $r$ band. ({\it Upper right panel}) Difference between our limiting magnitude the main-sequence turnoff magnitude, $m_{\rm MSTO}$, for the same sample of 44 program objects. Our CFHT and Clay images reach a median depth $\simeq$~2.2~mag below the main sequence turnoff in both bands. ({\it Lower right panel}) Same as above, except for the $r$ band.} \label{fig:depth} \end{figure} \section{Results} \label{sec:results} As an illustration of serendipitous findings from our catalog, we highlight two results. \subsection{Sagittarius debris in the fields of {\tt Whiting~1} and {\tt NGC7492}} \label{sec:ngc7492} One of the advantages of having relatively extended spatial coverage of our Galactic satellites is the possibility of studying their outer structure. This is particularly true in the case of globular clusters observed with CFHT, for which we typically cover several times their half-light (or equivalently their effective) radii. For a number of these clusters, secondary main sequences are observed beyond the cluster's extent. In some cases, such as {\tt NGC2419} and {\tt Kop~2}, the extra main sequences lie in the background or foreground of the clusters \citep{carballobello15}. 
In other cases, like the ones presented here, the sequences seem to be at the same distance of the clusters, indicating either the presence of an extended structure related to the cluster (i.e., tidal tails or extended halos) or revealing the existence of a stream within which the clusters lie. \begin{figure} \includegraphics[width=0.48\textwidth]{f13.png} \caption{ ({\it Upper left panel}) Histogram of the difference between the equatorial positions of the same objects in the {\tt Segue~1} field taken from the Clay and GAIA catalogs. ({\it Lower left panel}) Same as above but this time for objects from the Clay and CFHT datasets. ({\it Upper right panel}) Same as {\it upper left panel} but for {\tt Palomar~3}. ({\it Lower right panel}) Same as above but for {\tt Palomar~3}.} \label{fig:comp_astro} \end{figure} {\tt Whiting~1} lies at $\sim30$\,kpc from the Sun and is one of the youngest Galactic globular clusters known to date with an estimated age of $\sim6.5$\,Gyr, according to \citet{carraro07a}. These authors also showed that {\tt Whiting~1}'s distance, position in the sky and mean heliocentric radial velocity ($v_h=131$\,km s$^{-1}$) coincide almost exactly with the position and radial velocity of tidal debris from the {\tt Sgr} dSph \citep{majewski03a,law05a}. They thus argue that the cluster's origin is most likely to be associated with this dwarf galaxy. Through the analysis of $N-$body simulations of the {\tt Sgr} dSph+tail system, \citet{law10a} also associate {\tt Whiting~1} with {\tt Sgr}. Figure~\ref{fig:whiting1} shows the Hess diagram for the region within two effective radii of the cluster (left panel) using our photometry. The cluster's main and scarcely populated evolved sequences are evident. Also shown in this panel is the best-fit Dartmouth isochrone \citep{dotter08a}: a $6.5$\,Gyr, [Fe/H]$=-0.7$ track located at $d_{\rm helio}=30.5$\,kpc. 
The middle panel of this figure shows the same diagram but for a region beyond five effective radii, where the cluster stellar density is extremely low and thus the contribution of cluster stars should be negligible. A secondary main sequence is clearly visible in this region. Based on the results from \citet{carraro07a} and \citet{law10a} we associate this ``extra" population with debris from the {\tt Sgr} galaxy. According to the Law et al.'s model, the {\tt Sgr} population at the position of {\tt Whiting~1} should be a combination of both old trailing and younger leading-arm debris, where the age refers to the time when the stars became unbound to the galaxy. For reference, in the right panel of Figure~\ref{fig:whiting1} we overplot both a [Fe/H]$=-1.4$, $12$\,Gyr old and a [Fe/H]$=-0.5$, $6.5$\, Gyr old isochrone located at distances of $d_{hel}=26$ and $30.5$\,kpc respectively. These isochrones are meant to represent old and intermediate-age {\tt Sgr} populations. The choice of age and metallicity is motivated by the metallicity and age distribution of {\tt Sgr} stars along its tails reported by \citet{chou07a} and the star formation history derived by \citet{siegel07a}. Given the lack of detectable stars in the sub- and red-giant branch region of the CMD, with our current data we cannot discriminate between a young population located at a similar distance as the cluster and an old but slightly closer one. However, we consider it likely that this {\tt Sgr} population is a combination of ages and metallicities, and that it spreads in distance over a range of a few kpc. \begin{figure} \includegraphics[width=0.49\textwidth]{f14.png} \caption{ (a) Hess diagram for the inner regions of {\tt Whiting~1}. (b) Hess diagram for the region outside five effective radii. 
(c) Same as (b) but with the best-fit isochrone overlaid, indicating that the secondary main sequence is at a similar distance as the cluster.} \label{fig:whiting1} \end{figure} Another globular cluster located close to tidal debris from the {\tt Sgr} dSph is {\tt NGC7492}. In this case, \citet{law10a} found a coincidence between the spatial position of the cluster and its distance with those of {\tt Sgr} debris but considered a connection between the cluster and the galaxy unlikely based on the difference in mean radial velocity between the two systems. Figure~\ref{fig:ngc7492} is the equivalent of \ref{fig:whiting1} but this time showing Hess diagrams for our data in the field of the {\tt NGC7492}. The left panel again shows the central region of the cluster with the best-fit Dartmouth isochrones, a $13$\,Gyr, [Fe/H]$=-1.7$ sequence located at $d_{\rm helio}=24$\,kpc. The middle panel shows a diagram for the outer regions of the cluster. Here, again, a secondary main sequence is clearly visible. Finally, the right panel is the same as the middle one but with the same set of isochrones shown in the right panel of Figure \ref{fig:whiting1} (c) overplotted. The old isochrone is located at the same distance as the cluster, while the younger isochrone is placed at $d_{\rm helio}=28$\,kpc. \begin{figure} \includegraphics[width=0.49\textwidth]{f15.png} \caption{ (a) Hess diagram for the inner regions of {\tt NGC7492}. (b) Hess diagram for the region outside five effective radii. (c) Same as (b) but with the best-fit isochrone overlaid, indicating that the secondary main sequence is at a similar distance as the cluster.} \label{fig:ngc7492} \end{figure} In addition to the potential connection to {\tt Sgr} debris, it has been reported that {\tt NGC7492} shows signs that it has been affected by Galactic tides. 
Using deep photometry, \citet{lee04a} reported an elongated stellar distribution along with the presence of extended material asymmetrically distributed around the cluster (see their Figure 9). Our dataset is slightly deeper and covers a larger area, and thus we use it to investigate the potential extended structure of the cluster. Unfortunately, our results do not corroborate those of \citet{lee04a}. Figure~\ref{fig:ngc7492_contours} shows a density contour map for {\tt NGC7492} where no clear elongation or tidal extensions are visible. We find an almost circular stellar distribution past the effective radius of the cluster, and cannot confirm the existence of an extended, asymmetrical structure surrounding the system. We suggest that the presence of {\tt Sgr} debris at a similar distance as {\tt NGC7492} affects the selection of cluster member stars based on color-magnitude diagram filtering, making it difficult to reach firm conclusions on the extended structure of this object. \section{Summary} \label{sec:summary} We have described a new, systematic imaging survey of satellites belonging to the outer halo of the Milky Way (i.e., $R_{\rm GC} \ge 25$~kpc). In a departure from most previous studies, our sample selection has been made with no constraint on morphology or classification. Our primary sample is composed of 44 objects for which we have acquired deep, wide-field (0.16--4~deg$^2$) $gr$ imaging with the MegaCam instruments on the 3.6m CFHT and the 6.5m Magellan/Clay telescopes. The point-source limiting magnitude for our MegaCam imaging is typically 2--3 magnitudes below the MSTO in these objects in both bands. Collectively, the survey covers an area of 52 deg$^2$. \begin{figure} \includegraphics[width=0.48\textwidth]{f16.png} \caption{ Density contour map for {\tt NGC7492}. The isodensity contours shown correspond to $2, 3, 10, 50, 200, 450, 850$ and $1450$\,$\sigma$ over the background level.
} \label{fig:ngc7492_contours} \end{figure} This sample has been supplemented with published photometry, or photometry derived from archival imaging, for 14 objects discovered between 2010 and 2015. Our final sample of 58 objects represents roughly three quarters of outer halo satellites known as of 2015 (and roughly three quarters of the currently known satellites). Our photometric catalog has already been used in a series of papers on outer halo satellites, their constituent stars, and possible foreground substructures \citep{bradford11,munoz12b, santana13, carballobello15, carballobello17}. In a companion paper, we present homogeneous photometric and structural parameters for these satellites \citep{munoz18a} and in future papers we will examine their scaling relations in the context of other stellar systems. \acknowledgements The authors thank Dongwon Kim, Helmut Jerjen and Eduardo Balbinot for kindly sharing their photometric catalogs for several of our secondary targets. We also thank an anonymous referee for helping us improve this article. This work was supported in part by the facilities and staff of the Yale University Faculty of Arts and Sciences High Performance Computing Center. R.R.M. acknowledges partial support from project BASAL PFB-$06$ as well as FONDECYT project N$^{\circ}1170364$. M.G. acknowledges support from the National Science Foundation under award number AST-0908752 and the Alfred P.~Sloan Foundation. S.G.D. was supported in part by the NSF grants AST-1313422, AST-1413600, AST-1518308, and by the Ajax Foundation. \clearpage
\section{Introduction} The first step of the transcription of deoxyribonucleic acid (DNA) is a local opening of the double helix which extends over about 20 base pairs. Such local unwindings of the helix can be obtained by heating DNA to about $70^{\circ}~C$, but in the life of an organism they must occur at physiological temperature. This is achieved by the action of an enzyme\cite{CD}. One may wonder, however, how this is possible since, whatever its origin, the local opening requires the breaking of the same number of hydrogen bonds, hence the same amount of energy, and the enzyme does not bring in energy. However, under normal physiological conditions there are thermal fluctuations along the DNA chain. They can be weakly localized by nonlinear effects to generate what biologists call the ``breathing of DNA'', but their intensity is not high enough to open the double helix over many base pairs. A possible pathway to the opening would be to collect the thermal energy that is present along the molecule. This could be the role of the enzyme. From a physicist's point of view, the effect of an enzyme can be considered as a perturbation to the DNA lattice. Recently Forinash {\it et al.} considered the interaction between a mass impurity on a DNA chain and thermal nonlinear waves described as breathers traveling along the chain\cite{FPM}. They found that the impurity is selective toward the breathers it can trap. Although this is a first indication that a defect can contribute to localizing energy in a nonlinear chain, it does not appear to be a good model for the action of an enzyme because, with such a localized defect, only some predefined frequencies of the thermal fluctuations would contribute to bringing in the energy. Therefore one may ask whether there exists any other mechanism that is more efficient at trapping energy. One learns from biological studies that some proteins make contact with DNA at multiple sites\cite{A,EYSB}.
Moreover, the transcription enzyme actually bends DNA toward itself. This has the effect of modifying not only the mass at some sites but also the coupling constants along the strands. The bases which are inside the bend are brought closer to each other while the ones which are outside are moved farther apart. Although the variation of the distances between neighboring bases may be rather small, it can have a large effect because the interaction between bases is due to the overlap of $\pi$ electrons over the whole surface of the planar bases. We examine in this paper whether the interaction of the enzyme with more than one site might be more efficient at trapping breathers than isolated impurities, by studying the effect of an extended modification of the coupling along the DNA chain. The effect of bending and twisting on the elasticity of DNA has been considered previously by Barkley and Zimm\cite{BZ}, and by Marko and Siggia\cite{MS}, but they did not study the consequences for base-pair opening. Salerno\cite{S} considered the dynamical properties of a DNA promoter, which has some similarities with our problem because we treat the enzyme as an inhomogeneity due to an external effect while he considered inhomogeneities arising from the DNA composition itself. However, he was interested in kinks while we study breathing modes. At a more abstract level, we are investigating here a nonlinear model with an ``extended defect'', and we try to understand the interplay between nonlinearity and disorder. In the harmonic case, a one-dimensional chain with isolated defects has been considered before by Montroll and Potts\cite{MP}. However, besides the introduction of nonlinearity, one should also notice that for the type of extended defect that we consider, there is no evanescent local mode which would couple to a breather as in the case considered by Forinash {\it et al.}, so that the mechanism for energy localization must be different.
\section{DNA lattice model} If one neglects the small longitudinal motion and concentrates on the stretching of the base pairs, DNA can be described by a simple one-dimensional model\cite{PB} which consists of an array of harmonically coupled particles subjected to a Morse potential. Such a model is sufficient to provide a good qualitative description of the thermal denaturation of the molecule\cite{Dauxois}. If one treats a chain with inhomogeneous coupling, the equations of motion read \begin{eqnarray} \label{discrete} m {{\partial^2 Y_n} \over {\partial T^2}} - K_{n+1} ( Y_{n+1} - Y_n ) + K_n ( Y_n - Y_{n-1} ) \nonumber \\ - 2 D \alpha e^{- \alpha Y_n} ( e^{- \alpha Y_n} - 1) = 0 \; , \end{eqnarray} in which $\alpha$ and $D$ are parameters for the Morse potential\cite{M}, which have dimensions of inverse length and energy, respectively, and $n$ is the site index. Fig.~\ref{line1} shows the geometry and the coordinate used. It is convenient for the analytical calculations to transform these equations into a dimensionless form by defining the following dimensionless variables: \begin{eqnarray} y_n &=& \alpha Y_n, \\ t_{~} &=& \sqrt {{D \alpha^2 } \over m} T , \\ k_n &=& {K_n \over {D \alpha^2 }}. \end{eqnarray} The equations become \begin{eqnarray} \label{ND} {{\partial^2 y_n} \over {\partial t^2}} - k_{n+1} ( y_{n+1} - y_n ) + k_n ( y_n - y_{n-1} ) \nonumber \\ - 2 e^{- y_n} ( e^{- y_n} - 1) = 0 \; . \end{eqnarray} One notices that this last set of equations contains only one parameter, the coupling constant. In order to represent the perturbation due to the enzyme, one could imagine locally modifying any of the parameters of Eq.~(\ref{discrete}), but it is likely that the presence of an enzyme will affect the coupling constant through the bending of the molecule.
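The rescaling above can be checked with a short numerical sketch of ours (not part of the original derivation): the dimensionless coupling $k_n = K_n/(D\alpha^2)$ is recovered from the physical parameter values quoted later in the discussion (from Ref.~\cite{Dauxois}) for AT and GC base pairs.

```python
# Dimensionless coupling k_n = K_n / (D * alpha^2) of the Morse-lattice model.
# Parameter values are those quoted later in the paper for AT and GC base pairs.

def dimensionless_coupling(K, D, alpha):
    """K in eV/A^2, D in eV, alpha in 1/A -> dimensionless coupling k_n."""
    return K / (D * alpha**2)

alpha = 4.45                                     # Morse inverse width, 1/Angstrom
k_AT = dimensionless_coupling(0.080, 0.030, alpha)
k_GC = dimensionless_coupling(0.104, 0.035, alpha)
print(f"k_AT = {k_AT:.3f}, k_GC = {k_GC:.3f}")   # ~0.13 and ~0.15
```

Both values fall in the $k_n \approx 0.13$--$0.15$ range discussed in the concluding section.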
Moreover, previous studies of the role of disorder on the dynamics of the DNA model\cite{Tashkent} have shown that the formation of open regions in the model is much more sensitive to modulations of the coupling constant than to changes in other parameters. Therefore we only consider here an extended perturbation of the coupling constant. An additional possibility to model the enzyme specificity is however examined in the discussion. Since we do not know how to solve the discrete case, we transform the set of equations, Eqs.~(\ref{ND}), into the corresponding continuous PDE. In the continuum limit, with a Taylor expansion of the potential term which assumes small-amplitude oscillations, Eq.~(\ref{ND}) becomes, \begin{equation} \label{continue} {{\partial^2 y} \over {\partial t^2}} - {\partial \over {\partial x} } (k_1 {{\partial y} \over {\partial x}} ) d^2 + 2 ( y - {3 \over 2} y^2 + {7 \over 6} y^3 ) = 0 \; , \end{equation} in which $d$ is the lattice spacing and $k_1$ is a space-dependent coupling constant. We set $d$ equal to unity in the following calculations. \section{The Perturbed Nonlinear Schr\"odinger Equation} Equation (\ref{continue}) can be transformed into a perturbed Nonlinear Schr\"odinger equation by a multiple-scale expansion\cite{H,R}. Assuming that the amplitude of the thermal oscillation is small, $y \approx \epsilon \phi $, we perform the expansion \begin{eqnarray} \label{eq:approx} \phi & = & F_0 + \epsilon F_1 + \epsilon^2 F_2 + O (\epsilon^3) \; , \\ {\partial \over {\partial t}} & = & {\partial \over {\partial t_0}} + {\partial \over {\partial t_1}} \epsilon + {\partial \over {\partial t_2}} \epsilon^2 + O ( \epsilon ^3 ) \; , \\ {\partial \over {\partial x}} & = & {\partial \over {\partial x_0}} + {\partial \over {\partial x_1}} \epsilon + {\partial \over {\partial x_2}} \epsilon^2 + O ( \epsilon ^3 ) \; .
\end{eqnarray} Moreover we assume a modulation of the coupling constant of the order of $\epsilon$, {\it i.e.} \begin{equation} \label{assume} {{\partial k_1} \over {\partial x}} \approx {{\partial k_1} \over {\partial x_1}} \; \epsilon \; . \end{equation} Equating like powers of $\epsilon$ yields a sequence of equations: \begin{eqnarray} \label{eq:order} & & {\partial^2 F_0 \over \partial t_0^2} - k_0 {\partial^2 F_0 \over \partial x_0^2} + 2 F_0 = 0 \; ,\\ & & ({\partial^2 F_1 \over \partial t_0^2} + 2 {{\partial^2 F_0} \over {\partial t_0 \partial t_1}}) - {{\partial k_1} \over \partial x_1}{\partial F_0 \over \partial x_0} - k_0 ({\partial^2 F_1 \over \partial x_0^2} + 2 {{\partial^2 F_0} \over {\partial x_0 \partial x_1}}) \nonumber \\ & &+ 2 (F_1 - {3 \over 2} F_0^2 ) = 0 \; ,\\ & &{\text{ and higher order equations}} \; , \nonumber \end{eqnarray} in which $k_0$ is the unperturbed coupling constant. Solving the equations at each order of $\epsilon$ sequentially, one obtains \begin{eqnarray} F_0 &=& u ( x_1, x_2, t_1, t_2) e^{ i (q x_0 - \omega t_0 )} + c.c. \; ,\\ F_1 &=& {3 \over 2} | u |^2 + {{ 3 u^2 }\over { -4 \omega^2 + 4 k_0 q^2 + 2 }} e^{ 2 i (q x_0 - \omega t_0 )} \nonumber \\ & &+ c.c. \; , \end{eqnarray} and the dispersion relation $\omega^2 = \omega_0^2 + {k_0 } q^2 \;$, with $\omega_0^2 = 2 $. From the elimination of the secular terms at $ q = 0 $ one obtains the perturbed Nonlinear Schr\"odinger equation (NLS) at order $\epsilon^2$: \begin{equation} 2 i \omega { \partial u \over \partial t_2} + {{\partial k_1} \over {\partial x_1}} {{\partial u} \over {\partial x_1}} + k_1 {{\partial^2 u} \over {\partial x_1^2}} + 8 u | u |^2 = 0 \;.
\end{equation} We can further rescale the equation into a standard form: defining the following new dimensionless variables, \begin{eqnarray} \hat k ( \hat x ) &=& {k_1 \over k_0} - 1 \, ,\\ \hat u &=& \sqrt{ { 8 } \over { \omega^2}} u \, , \label{u}\\ \hat x &=& \sqrt{ { \omega^2} \over { 2 k_0 } } x_1\, , \label{x}\\ \hat t &=& {\omega \over 2} t_2 \, , \end{eqnarray} with $\hat k$ being the normalized deviation of the coupling in the vicinity of the enzyme, one obtains the following perturbed dimensionless NLS, \begin{equation} \label{PNSE} i {\hat u}_{\hat t} + {1 \over 2} \hat u_{\hat x \hat x} + \hat u | \hat u |^2 + {1 \over 2} {{\partial } \over {\partial {\hat x}}} ( {\hat k} {\hat u}_{\hat x} ) = 0 \;, \end{equation} and the corresponding Lagrangian density, \begin{equation} \label{Lagrangian} \Lambda = {i \over 2} ( {\hat u}^* {\hat u}_{\hat t} - {\hat u} {\hat u}_{\hat t}^*) - {1 \over 2} ( { 1 } + {\hat k} ) | {\hat u}_{\hat x} |^2 + {1 \over 2} | {\hat u} | ^ 4 \; . \end{equation} In what follows we drop the $\;\hat{}\;$ for notational simplicity. \section{One-soliton collective coordinate analysis} The collective coordinate method, which is a particle description of the soliton in contrast to the field description given by the Lagrangian, provides a good way to study the influence of a perturbation on a soliton. The spirit is the same as using the center of mass to analyze the behavior of a system of particles. Without the perturbing term in Eq.~(\ref{PNSE}), one has a breather solution \begin{equation} \label{breather} u ( x , t ) = \eta {\rm sech} [ \eta ( x - u_e t ) ] e^{i {u_e} ( x - u_c t )} \;, \end{equation} in which $\eta = \sqrt {(u_e^2 -2 u_e u_c)/(2PQ)}$, where $u_e$ is the envelope velocity, $u_c$ the carrier velocity, and $P = 1 / 2$, $Q = 1$ are the coefficients of the second space derivative and the nonlinear term in Eq.~(\ref{PNSE}), respectively.
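For concreteness, the breather envelope can be sampled numerically. The minimal sketch below (our own, with arbitrarily chosen velocities $u_e$ and $u_c$) checks that the squared norm of the sech profile integrates to $2\eta$:

```python
import numpy as np

# Sample the NLS breather envelope |u| = eta * sech(eta * (x - u_e t)) at t = 0
# and check that the integral of |u|^2 over x equals 2*eta (sech^2 integral).
u_e, u_c = 0.5, 0.1                      # illustrative velocities (our choice)
eta = np.sqrt(u_e**2 - 2*u_e*u_c)        # breather amplitude, with 2PQ = 1

x = np.linspace(-60.0, 60.0, 4001)
envelope = eta / np.cosh(eta * x)        # |u(x, 0)|
norm = np.sum(envelope**2) * (x[1] - x[0])
print(norm, 2*eta)                       # the two values should agree
```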
In view of this solution, we use an {\em ansatz} for the collective coordinate analysis \begin{equation} \label{ansatz} u ( x , t ) = \eta\, {\rm sech} ( {{\eta x} } - \zeta )\, e^{i (\phi + \xi x )} \;, \end{equation} where the parameters $\eta$, $\zeta$, $\phi$, $\xi$ are functions of $t$. For an unperturbed system this implies the following relations between the parameters: \begin{eqnarray} \eta &=& \sqrt {{u_e^2 - 2 u_e u_c} } \, , \label{eta}\\ \zeta &=& {u_e \eta t }\, , \label{zeta}\\ \xi &=& {u_e }\, , \label{xi}\\ \phi &=& - {{u_e u_c t} } \,. \end{eqnarray} At $t = 0$, $\zeta = 0$ and $\phi = 0$, so only two parameters are left, which is consistent with Eq.~(\ref{breather}) because the NLS breather is a two-parameter solution. Because the ansatz extends to infinity, it always feels the defect, even when the breather is far away from it; therefore we do not expect these relations to hold for the perturbed system. Hence in what follows we examine the whole four-parameter space for the equations of motion. Introducing this ansatz into the Lagrangian density, Eq.~(\ref{Lagrangian}), and integrating over space, one obtains an effective Lagrangian, \begin{eqnarray} \label{eqlagrange} L &=& - 2 \eta \phi_t - 2 \zeta \xi_t + {{ \eta^3 } \over 3} - \xi^2 \eta - {1 \over 2} \int_{- \infty}^{+ \infty} k | u_x |^2 d x \; , \end{eqnarray} and the corresponding Hamiltonian \begin{eqnarray} \label{Hamiltonian} H &=& - {\eta^3 \over 3} + \xi^2 \eta + {1 \over 2} \int_{- \infty}^{+ \infty} k | u_x |^2 d x \; , \end{eqnarray} which contains no momentum term. At this point, we must specify an expression for $k(x)$ to proceed. For algebraic convenience let us choose \begin{equation} \label{step} k = \kappa [ \Theta ( x + l ) - \Theta ( x - l ) ] \; , \end{equation} in which $\Theta$ is the Heaviside step function and $l$ is the half-length of the defect.
This form of $k$ violates Eq.~(\ref{assume}); however, previous works have shown that collective coordinate results are generally robust for the treatment of dynamics in the presence of a perturbation\cite{SB3}, so we can expect to get results which are at least qualitatively correct in spite of this rather crude approximation. Moreover we shall check them against full numerical simulations in the next section. Introducing the following abbreviated notation: \begin{eqnarray} T_+ &=& \tanh (\eta l + \zeta ) \, ,\\ T_- &=& \tanh (\eta l - \zeta ) \, ,\\ S_+ &=& {\rm sech} (\eta l + \zeta ) \, ,\\ S_- &=& {\rm sech} (\eta l - \zeta ) \, , \end{eqnarray} one obtains \begin{equation} \int_{- \infty}^{+ \infty} k | u_x |^2 d x = {\kappa \over 3} ( T_+^3 + T_-^3 ) \eta^3 + \kappa ( T_+ + T_- ) \xi^2 \eta \; , \end{equation} which characterizes the effect of the defect and decays rapidly to zero outside of the impurity region, and the equations of motion: \begin{eqnarray} \phi_t &=& {{ {\eta^2} \over 2} } - { {\xi^2 \over 2} } - { \kappa \over 4} ( T_+ + T_- ) \xi^2 - {{\kappa l } \over 4} ( S_+^2 + S_-^2 ) \xi^2 \eta \nonumber \\ & &- {{\kappa l } \over 4} ( S_+^2T_+^2 + S_-^2T_-^2 ) \eta^3 - {{ \kappa} \over 4} ( T_+^3 + T_-^3 ) \eta^2 \, , \\ \xi_t &=& - {{\kappa } \over 4} ( S_+^2T_+^2 - S_-^2T_-^2 ) \eta^3 - {{\kappa } \over 4} ( S_+^2 - S_-^2 ) \xi^2 \eta \, , \\ \zeta_t &=& \xi \eta + {{\kappa} \over 2} ( T_+ + T_- ) \xi \eta \, , \label{CZ}\\ \eta_t &=& 0 \label{CA} \, . \end{eqnarray} As expected, far away from the defect, {\it i.e.} when the $S_\pm$ vanish and $T_+ + T_-$ tends to zero, one recovers the usual relations for the NLS equation, because in this case $\xi_t = 0$ so that $\xi$ is a constant that we can denote by $u_e$. Then $\zeta_t = \xi \eta$ gives $\zeta = u_e \eta t$ as expected, and $\phi_t = ( \eta^2 - \xi^2) / 2$ gives $\phi = - u_e u_c t $ if $u_c$ is defined through Eq.~(\ref{eta}) for $\eta$.
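These equations are straightforward to integrate numerically. The following minimal sketch (our own illustrative implementation, with arbitrarily chosen parameter values) advances $(\zeta,\xi)$ with a fourth-order Runge-Kutta step; $\eta$ is constant according to Eq.~(\ref{CA}), and $\phi$ does not feed back into the other equations:

```python
import math

def rhs(state, eta, kappa, l):
    """Right-hand side of the collective coordinate equations for (zeta, xi)."""
    zeta, xi = state
    Tp, Tm = math.tanh(eta*l + zeta), math.tanh(eta*l - zeta)
    Sp, Sm = 1/math.cosh(eta*l + zeta), 1/math.cosh(eta*l - zeta)
    xi_t = (-kappa/4*(Sp**2*Tp**2 - Sm**2*Tm**2)*eta**3
            - kappa/4*(Sp**2 - Sm**2)*xi**2*eta)
    zeta_t = xi*eta + kappa/2*(Tp + Tm)*xi*eta
    return (zeta_t, xi_t)

def rk4(state, dt, eta, kappa, l):
    """One fourth-order Runge-Kutta step."""
    k1 = rhs(state, eta, kappa, l)
    k2 = rhs(tuple(s + dt/2*k for s, k in zip(state, k1)), eta, kappa, l)
    k3 = rhs(tuple(s + dt/2*k for s, k in zip(state, k2)), eta, kappa, l)
    k4 = rhs(tuple(s + dt*k for s, k in zip(state, k3)), eta, kappa, l)
    return tuple(s + dt/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Illustrative run: a breather launched to the left of a defect of half-length l.
eta, kappa, l, u_e = 0.2, -0.5, 5.0, 0.2
state = (-5.0, u_e)                 # (zeta, xi) at t = 0
for _ in range(20000):
    state = rk4(state, 0.01, eta, kappa, l)
print(state)                        # final (zeta, xi)
```

Far from the defect the hyperbolic terms vanish, and the scheme reproduces the free-breather relations $\xi = u_e$, $\zeta = u_e \eta t$ exactly.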
In the presence of the defect, the set of nonlinear differential equations for the collective variables cannot be integrated analytically. It is however much simpler than the full set of discrete equations since it contains only three equations. It can be integrated by a fourth-order Runge-Kutta method. One can however make general remarks on the properties of the solution before resorting to numerical calculations. The soliton described by the ansatz is an unbreakable entity and moreover the energy given by Eq.~(\ref{Hamiltonian}) is conserved even when a potential well is encountered. Therefore when the soliton reaches a defect, it may speed up to compensate for the extra energy requirement due to a decrease in coupling energy, as shown in Fig.~\ref{CC}(a), in which a large carrier velocity was chosen to exaggerate the effect. However, the behavior is richer than the one generally found for topological solitons because, in addition to the time-dependent position $\zeta(t)$, the ansatz contains an internal degree of freedom, $\xi$, so that the energy can also be transferred between different collective coordinates. With $\kappa < 0$ the last term of Eq.~(\ref{Hamiltonian}) decreases in the region of the defect; therefore $\xi^2$ has to increase accordingly. If $\kappa > 0$ and the breather is initially inside the defect, it simply slips away as in Fig.~\ref{CC}(b). If it is initially outside, for some suitable range of amplitude it is first slowed down and eventually reflected, as shown in Fig.~\ref{CC}(c); reflection occurs beyond $\eta \approx 0.25$. For smaller $\eta$, breathers pass through the defect, indicating that the broader breathers are less influenced by the presence of defects, just as a large-wheel bike will not be stopped by a pebble or a ditch. In Fig.~\ref{CC}(c) the breather has actually penetrated into the defect before being reflected.
When the breather is trapped it oscillates between two positions, which may not be the defect boundaries, as shown in Fig.~\ref{CC}(d). For values of $\eta$ close to the threshold between trapping and non-trapping, the breather slowly turns around at the boundaries as in Fig.~\ref{CC}(e). A {\it necessary} condition for a moving breather to be trapped in the above defect is $\kappa < 0$. This statement can be proved through the following argument: a necessary condition for trapping is that $\zeta_t = 0$ more than twice, which, according to Eq.~(\ref{CZ}), is equivalent to \begin{equation} \cosh^2 \zeta = 1 - \cosh^2 ( \eta l ) - {\kappa} \sinh ( \eta l ) \cosh ( \eta l ). \label{trap} \end{equation} Since $\cosh^2 \zeta \geq 1$, $\cosh^2 (\eta l) \geq 1$ and $\sinh (\eta l) \cosh (\eta l) > 0$ for $\eta > 0$, $\kappa$ has to be less than zero. We have therefore shown that trapping occurs only if the perturbed coupling constant is less than the unperturbed one, which is consistent with our simulations although it has been proven only in the collective coordinate approach. Since Eq.~(\ref{trap}) contains only $\zeta$, $\eta$, $\kappa$, and $l$, if the characteristics of the defect, {\it i.e.} the length, $l$, and the strength, $\kappa$, have been fixed for a given system, and the initial position of the breather is chosen, the only factor which determines trapping is the breather amplitude. The initial value of $\phi $ seems to have no effect on the results. In general, if one finds that a breather passes through a defect for $\kappa < 0$ as in Fig.~\ref{CC}(a), one can obtain trapping by increasing its amplitude. Because of the helicoidal structure of DNA, a given strand is alternately inside and outside the bend, so that it experiences a periodic modulation of its elasticity when an enzyme is attached.
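The necessary condition $\kappa < 0$ can be made concrete numerically. In the sketch below (our own, with arbitrary illustrative values), the right-hand side of Eq.~(\ref{trap}) exceeds one, so that turning points $\pm\zeta$ exist, only for a sufficiently strong negative $\kappa$:

```python
import math

def turning_points(eta, l, kappa):
    """Return the turning point zeta from Eq. (trap), or None if no trapping."""
    c, s = math.cosh(eta * l), math.sinh(eta * l)
    rhs = 1 - c**2 - kappa * s * c      # candidate value of cosh^2(zeta)
    if rhs < 1.0:                       # cosh^2(zeta) >= 1: no real solution
        return None
    return math.acosh(math.sqrt(rhs))

eta, l = 0.3, 10.0
print(turning_points(eta, l, -1.2))     # kappa < 0: finite turning point
print(turning_points(eta, l, +1.2))     # kappa > 0: None, no trapping
```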
We examined the consequence of such a modification by considering the following coupling constant modulation: \begin{equation} k = \kappa [ \Theta ( x + l ) - 2 \Theta ( x ) + \Theta ( x - l ) ] \label{SS} \end{equation} for which the coupling is first increased by $\kappa$ and then decreased by the same amount if $\kappa > 0$. It can be viewed as a step approximation of one period of a sinusoidal modulation. In general one finds nothing essentially new with this perturbation because it is only a superposition of two step defects. However, this perturbation is asymmetric in space. By changing the sign of $\kappa$ we can reverse the orientation of the perturbation with respect to an incoming breather. If $\kappa < 0$, a breather starting from the left side of the defect encounters first the region where the coupling constant is decreased. Table I summarizes the behavior of breathers with various amplitudes and the two possible signs of $\kappa$ and $\xi$. For negative $\kappa$ and positive $\xi$, the breather is reflected for intermediate $\eta$ while for large enough $\eta$ it is trapped. If one switches to positive $\kappa$ there is still a range of $\eta$ values that produce reflection, but for large $\eta$ the breather passes through. If the breather starts from the side where the coupling constant is decreased, trapping can still occur even if the initial position of the breather is far away from the defect, but the pass-through region disappears as expected. These results show that it is the first encounter which determines the trapping. However, for this case of a composite defect the collective coordinate calculation can, in some cases, lead to qualitatively wrong results. The full numerical calculation shown in Fig.~\ref{all}(a) indicates that the breather can be trapped even if it comes from the higher side of the defect. This points out the limits of the collective coordinate method for successive perturbations of the breather.
The first interaction of the breather with a perturbation appears to be qualitatively well described. But then the perturbed breather is not accurately described by the ansatz. Thus when it encounters a second perturbation (here the second step in coupling constant), the collective coordinate description fails to describe the interaction. \section{Direct numerical simulations} Since the last example has shown that the collective coordinates cannot provide a full description of the breather dynamics, it is necessary to check them against full numerical simulations of Eqs.~(\ref{ND}). Using the breather solution given by Eq.~(\ref{breather}) as an initial condition, and periodic boundary conditions, we integrate Eqs.~(\ref{ND}) with a fourth-order Runge-Kutta scheme and a time step chosen to provide a conservation of energy to an accuracy better than $10^{-6}$ over a full simulation. The calculations have been tested on different system sizes to make sure that the results are not modified by boundary effects. The ansatz (\ref{breather}) is not an exact solution of the full set of equations because the transformation to the NLS form involved several approximations; however, except for strongly discrete cases or large-amplitude breathers, it provides a rather good solution far away from the defect. As long as the breather is far away from the defect, one generally notices only a small decay of the initial energy peak due to radiation. Full numerical calculations have the advantage of allowing radiation and the breaking of a breather. Furthermore, although the collective coordinate method starts from the perturbed NLS, which requires small $u_e$ and $u_c$ and hence small amplitude, the full numerical calculations do not have this restriction. In what follows we show both the energy distribution and the breather amplitude.
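A minimal version of such a simulation can be sketched as follows (our own illustrative code, not the authors' implementation: it uses a short chain, a simple localized small-amplitude pulse instead of the exact initial condition of Eq.~(\ref{breather}), and checks the conservation of energy):

```python
import numpy as np

def acceleration(y, k):
    """RHS of the dimensionless equations of motion, Eqs. (ND), with periodic
    boundaries; k[n] is the coupling between sites n-1 and n."""
    left  = k * (y - np.roll(y, 1))                  # k_n (y_n - y_{n-1})
    right = np.roll(k, -1) * (np.roll(y, -1) - y)    # k_{n+1} (y_{n+1} - y_n)
    morse = 2 * np.exp(-y) * (np.exp(-y) - 1)        # on-site Morse force
    return right - left + morse

def energy(y, v, k):
    """Total dimensionless energy: kinetic + coupling + on-site Morse terms."""
    return (0.5 * v**2 + 0.5 * k * (y - np.roll(y, 1))**2
            + (np.exp(-y) - 1)**2).sum()

N, k0, kappa = 256, 1.0, -0.3
k = np.full(N, k0)
k[N//2 - 10 : N//2 + 10] += kappa        # extended defect over 20 sites

n = np.arange(N)
y = 0.1 / np.cosh(0.2 * (n - N//4))      # small localized pulse
v = np.zeros(N)

dt, E0 = 0.01, energy(y, v, k)
for _ in range(5000):                    # fourth-order Runge-Kutta steps
    a1 = acceleration(y, k)
    y2, v2 = y + dt/2*v,  v + dt/2*a1
    a2 = acceleration(y2, k)
    y3, v3 = y + dt/2*v2, v + dt/2*a2
    a3 = acceleration(y3, k)
    y4, v4 = y + dt*v3,   v + dt*a3
    a4 = acceleration(y4, k)
    y += dt/6*(v + 2*v2 + 2*v3 + v4)
    v += dt/6*(a1 + 2*a2 + 2*a3 + a4)

print(abs(energy(y, v, k) - E0) / E0)    # relative energy drift
```

With this step size the relative energy drift stays far below the $10^{-6}$ target quoted above for the full simulations.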
The energy distribution is more relevant to the opening of the DNA chain while the breather amplitude allows a comparison with the results of the collective coordinate calculations. Fig.~\ref{DS}(a) is a typical case of trapping at an equivalent amplitude $\eta = 0.19$. The correspondence is made from Eq.~(\ref{eta}) and Eq.~(\ref{xi}). The threshold for trapping predicted by the collective coordinate approach is higher ($\eta = 1.01$ for a breather initially at $x = 12$ when both systems have the same dimensionless coupling constant). As noticed earlier, it is not surprising to find such a discrepancy because we have used a sharp perturbation that violates the condition (\ref{assume}). However, the full simulations confirm the qualitative predictions of the collective coordinate calculations: small-amplitude breathers are transmitted, while larger ones are trapped. Fig.~\ref{DS}(b) shows that the energy distribution around the breather gets sharper in the region of the defect. Therefore a negative perturbation, which tends to trap breathers, is also favorable for base-pair opening since it concentrates the energy of the incoming breathers in a narrow domain. This sharpening of the breather shape occurs when the breather is inside the perturbation domain, whether it will stay trapped or not. One finds, on the contrary, that if a breather meets a positive perturbation, its energy distribution broadens. This behavior is similar to that of a vortex in shallow water: the vortex becomes wider when it is in shallower water and thinner in deeper water\cite{T}. In the amplitude plot of Fig.~\ref{DS}(b), when the breather reaches the boundary of the defect, one can see two small reflected waves. They were not included in the collective coordinate analysis, and their presence explains part of the quantitative discrepancy between the analytical approach and the full simulations.
Sometimes one can also notice that the breather changes its oscillation frequency after the collision with the defect. The results of the full numerical simulations show that, although the collective coordinate analysis is able to predict qualitatively the main features, in particular the existence of a threshold for trapping when the breather amplitude increases, it is quantitatively wrong. The same conclusion had been found for an isolated impurity\cite{FPM}. There are several reasons for that. First, we do not know an appropriate ansatz for the original equations of motion (\ref{discrete}) and we start from a perturbed NLS Lagrangian which is already approximate. Second, we use an ansatz which is localized in space and does not allow for the breaking of the breather or the emission of reflected waves. Finally, the calculation assumes a smooth evolution of the coupling constant while we later use a sharp variation to make the analytical calculation possible. In spite of these weaknesses, the collective coordinate calculations are useful to gain insight into the behavior of the breather in the presence of the defect, or even to draw general conclusions on the kind of defects that can trap energy, as explained above. Another point of interest is the trapping of {\it several} breathers in the defect region, which could really enhance the energy density locally and cause local openings in DNA. We show in Fig.~\ref{all} examples of trapping for two kinds of coupling constant shapes: Fig.~\ref{all}(a) is an example where the breather comes from the higher side of a two-step defect but is trapped. However, except under favorable conditions, we seldom find that two breathers can be trapped inside the same perturbation. When the second breather gets trapped it often kicks out the first breather that was trapped before, as shown in Fig.~\ref{all}(b).
In other cases we noticed that, when a first breather is trapped in the defect, a second breather that would have been trapped if it were alone is, on the contrary, reflected. Therefore, if one studies only the positions of the breathers during their first interactions with the extended defect, it seems that the defect will never collect more than the energy of one breather. This is in fact not true, but the complete phenomenon requires a more detailed analysis. It is interesting to study the evolution of the energy in the region of the defect versus time. An example is shown in Fig.~\ref{figu5}. In this case the first breather that interacts with the defect has an amplitude $\eta = 0.2$, which is above the trapping threshold, and the second one has an amplitude $\eta = 0.1$, below the threshold. As expected the first breather is trapped and oscillates around the defect. The second one passes through the defect region that contains the first breather. However, if one looks at the energy density in the three-dimensional plot of Fig.~\ref{figu5}(a), one can notice a significant increase of energy density after the interaction of the second breather with the defect. The reason is that the second breather is only {\it partly} transmitted. A large part of its energy is given to the trapped breather, {\it i.e.} it stays in the defect region. The same phenomenon occurs again when the second breather collides a second time with the trapped breather. Due to this complex process, the time evolution of the energy inside the defect region (Fig.~\ref{figu5}(c)) is a complicated curve, but it is important to notice that it tends to grow, and never falls again to a small value, indicating that the multiple collision process does cause a concentration of energy in the defect region.
The origin of this localization of energy does not lie in breather trapping but in breather interactions in the presence of a perturbation, and therefore it is not included in the collective coordinate description of Sect.~IV. The result is very reminiscent of a mechanism described recently for energy localization due to discreteness effects in nonlinear lattices\cite{Nlenloc}. In both cases the collisions of breathers, perturbed either by a defect or by discreteness, cause energy transfers that, on average, favor the big excitation at the expense of the small one. We have checked that the mechanism is not restricted to a particular case. Fig.~\ref{figu6} shows another example in which three breathers with the same initial amplitude $\eta=0.2$ were sent to the defect. Although the details of the process are different, they lead to the same final result: breather interactions in the presence of the defect tend to favor the formation of a large-amplitude breather that concentrates a large part of the energy of the three incoming breathers and is finally trapped at the defect site. Hence the energy in the defect region settles to a high value. Tests have been performed with various breather amplitudes, leading to the same general result. \section{Conclusion} Using a simple DNA model we have modeled the effect of a transcription enzyme by an extended modification of the coupling constant along the strands. The results show that such a perturbation is more efficient than an isolated impurity to trap breathers, in particular because trapping can occur provided that the amplitude of the incoming breather exceeds a threshold, instead of requiring breathers with a well-defined frequency. This conclusion can be derived from collective coordinate calculations as well as from numerical integration of the full set of equations of motion, although the collective coordinate method overestimates the trapping threshold.
One cannot expect quantitative results from the collective coordinate analysis because we have violated at least one basic assumption, Eq.~(\ref{assume}), to allow the analytical calculations, but it gives insight into the physics, and in particular a necessary condition for breather trapping which is confirmed by the full simulations. We have also shown that energy exchanges between a first breather, already trapped, and other incoming breathers can lead to a concentration of energy in the region of the defect. One may wonder whether the results obtained above for specific perturbations can be extended to more realistic cases. Although it is difficult to give general answers to this question, one can get insights through numerical simulations of the full system. In real DNA one has $D = 0.03$~eV, $\alpha = 4.45~\AA^{-1}$, $k_1 = 0.08$~eV/$\AA^2$, $m=300$~a.m.u. for AT base pairs, while for GC pairs we have $D=0.035$~eV, $k_1 =0.104$~eV/$\AA^2$ \cite{Dauxois}. This is equivalent to $k_n \approx 0.13$ to $0.15$. In this range of coupling, large-amplitude breathers are trapped by discreteness\cite{BangP}. The collective coordinate calculations suggest that the low-amplitude breathers, which can move, will not be trapped by the 20 base-pair defect. Simulations show that it is not necessarily so. For instance a 20 base-pair defect with $k_n=0.12$ in a chain with $k_n=0.15$ can trap breathers of various amplitudes. The energy exchange mechanism in the presence of the defect, discussed above, interferes with discreteness effects that can have similar effects to localize energy\cite{Nlenloc}. Therefore, although we have exhibited a mechanism which is active in a wider frequency range than an isolated defect, the calculations performed on a simple model are not sufficient to draw a conclusion about its validity to describe the effect of an enzyme on DNA transcription.
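The quoted parameter values fix the dimensionless coupling directly. A minimal check, assuming the usual nondimensionalization $k_n = k_1/(D\alpha^2)$ of this class of DNA models (the conversion formula is our assumption, not stated explicitly above):

```python
# Dimensionless stacking coupling, assuming k_n = k_1 / (D * alpha^2).
def coupling(k1, D, alpha=4.45):
    """k1 in eV/A^2, D in eV, alpha in 1/A."""
    return k1 / (D * alpha**2)

k_AT = coupling(0.08, 0.03)    # A-T base pairs
k_GC = coupling(0.104, 0.035)  # G-C base pairs
print(round(k_AT, 2), round(k_GC, 2))  # -> 0.13 0.15
```

Both values fall in the range where, as stated above, large-amplitude breathers are trapped by discreteness.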
It may however deserve attention because of its greater efficiency compared to the case of a point defect that was considered previously. In this work we have modeled the role of the enzyme by modulating only the coupling constant along the strands. As mentioned in the introduction, other possibilities could be considered, particularly if one attempts to take into account the enzyme specificity, which suggests that the enzyme could have another role than merely bending the molecule locally. As a first step in this direction, we have considered a local change of the Morse potential in addition to the effect of the bending. Figure \ref{figu7} shows the result of a numerical simulation where all the conditions are the same as for Fig.~\ref{figu6}, except that, in addition to changing the coupling constant inside the defect to model the bending, we have also multiplied the denaturation energy of the base pairs (parameter $D$ of Eq.~(\ref{discrete})) by a factor 0.8. This means that we also assume that the enzyme can have some chemical effect that reduces the base-pairing interaction. The comparison of Fig.~\ref{figu7} and Fig.~\ref{figu6}(c) shows that this modification has a rather drastic effect on the results. This is easy to understand qualitatively because the vibration frequency of the base pairs has been locally reduced. As we consider low-energy breathers, which are the most likely to be excited at physiological temperatures, their frequency, situated below the base-pair linear frequency of the unperturbed region because of the soft nonlinearity of the Morse potential, is nevertheless very close to the bottom of the phonon band of the unperturbed part of the molecule. As the enzyme lowers the frequencies of the phonon band in the defect region, the breather frequency is now {\it in resonance} with some modes of the phonon band of the defect. Therefore when the breather is trapped at the defect site by the bending, it is trapped in a region where it resonates with phonons.
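The frequency shift behind this resonance argument can be quantified: for a Morse on-site potential $V(y) = D\,(1-e^{-\alpha y})^2$ the small-oscillation angular frequency is $\omega_0 = \alpha\sqrt{2D/m}$ (from $V''(0) = 2D\alpha^2$), so scaling $D$ by $0.8$ lowers the local band edge by a factor $\sqrt{0.8}\approx 0.89$. A sketch with the AT parameters quoted above (the frequency formula is standard; only the ratio matters, so units are not converted):

```python
import math

def morse_frequency(D, alpha, m):
    """Small-oscillation angular frequency of V(y) = D*(1 - exp(-alpha*y))^2,
    obtained from V''(0) = 2*D*alpha**2 (consistent units assumed)."""
    return alpha * math.sqrt(2.0 * D / m)

w_out = morse_frequency(D=0.03, alpha=4.45, m=300.0)        # unperturbed chain
w_def = morse_frequency(D=0.8 * 0.03, alpha=4.45, m=300.0)  # inside the defect
print(w_def / w_out)  # sqrt(0.8) ~ 0.894: band edge lowered by about 11%
```

An 11% downward shift of the local phonon band is enough to bring the band into resonance with breathers sitting just below the unperturbed band edge.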
As a result it loses energy by radiation, but, as the emitted modes have a frequency below the lowest frequency of the unperturbed lattice, the vibrations are trapped in the defect region. One observes that the trapped breather spreads out its energy inside the defect region. When a second breather comes to this excited defect it is no longer repelled by a highly localized breather as in Fig.~\ref{figu6}. Thus it is more likely to penetrate into the defect region as well. This makes the energy localization effect more efficient and, instead of the large oscillations that were observed in Fig.~\ref{figu6}(c), Fig.~\ref{figu7} shows that the energy in the defect region now grows steadily, each new breather having a high probability to add its contribution. Although it is still preliminary, this example shows that, if one combines the bending effect of the enzyme with some model for its specific action on the promoter site, one can perhaps provide a mechanism to achieve the local opening of the double helix which is required by DNA transcription. \section{Acknowledgments} We would like to thank T. Dauxois for help with computer programming. J.J.-L.T. acknowledges the hospitality of the Laboratoire de Physique de l'Ecole Normale Sup\'erieure de Lyon where part of this work was done. J.J.-L.T. also acknowledges partial support of the National Science Council, Taiwan, grant No. 84-2911-I-007-030-B21. Part of this work has been supported by the EU Science Program through grant SC1*CT91-0705.
\section{Introduction} \label{s_intro} Effective analysis and design of timber and composite structures unavoidably require beam and plate models capable of handling material anisotropy. Nowadays the need for accurate analysis tools is even more urgent due to the fast development of novel technologies. As an example, additive manufacturing allows the creation of structural elements with variable fiber orientation \citep{gw_14}. Likewise, laser scanners detect grain orientation on the surfaces of timber boards, allowing for accurate analyses of glued laminated timber beams \citep{kfse_15} and advanced optimization of structural elements \citep{pklf_18}. Engineering research has devoted great effort to beam and plate modeling within the last decades. Nevertheless, most models were derived under the hypothesis of isotropic or, at most, orthotropic material \citep{vgp_12, cc_05}. As a consequence, several features of anisotropic structural elements are not yet well addressed, in particular when the principal directions of the material are not aligned with the beam axis or the plate reference surface. Limiting the discussion to planar problems, the generalized 2D Hooke's law of an isotropic material is represented by a block-diagonal matrix. Consistently, beam constitutive relations are represented by a diagonal matrix, i.e., axial deformation, curvature, and shear deformation uniquely depend on axial internal force, bending moment, and transversal internal force, respectively. Conversely, anisotropy leads the generalized 2D Hooke's law to be represented by a full matrix \citep{l_68}. As a consequence, the constitutive relations of anisotropic structural elements may also be represented by a full matrix, i.e., all generalized deformations may depend on all internal forces. Despite its importance, this problem has been only partially addressed in the literature.
\citet{mry_96} proposed a \ac{FSDT} for a planar homogeneous anisotropic beam where a coupling term (also referred to as the coefficient of mutual influence \citep{mvps_10, mv_12}) relates the axial deformation to the shear force (and vice-versa). In subsequent years, several researchers \citep{my_96, jnc_02, ql_02, vmp_03, mvps_10} used different approaches for the estimation of the coupling term, reaching slightly different solutions. As will be discussed in the following, the introduction of a single coupling term leads to models that are effective only for an extremely limited set of cross-section geometries, while simple and effective models capable of handling more general cases have not been proposed yet. More recently, \citet{kh_16} have proposed an accurate analysis of the structural behavior of a homogeneous anisotropic planar beam. The analytical expression for the stress distribution is calculated using the Airy stress function, the analytical expression for the deformations is computed using the 2D constitutive relations, and the 2D displacement field is recovered using the compatibility \acp{PDE}. Simple calculations allow the obtained analytical expressions for the 2D displacements and stresses to be reformulated in terms of 1D functions coinciding with the internal forces and the \ac{FSDT} kinematic parameters. Taking advantage of this simplification, the authors provide the analytical expression of the \ac{FE} stiffness matrix of the beam. Analogous stress distributions were also obtained by \citet{h_67}; nevertheless, the simplification proposed by \citet{kh_16} highlights that the axial stress explicitly depends on the transversal internal force due to the non-trivial constitutive relations of the material. The analytical results discussed above represent a milestone for the development of effective anisotropic beams. Nevertheless, the derivation procedure cannot be easily generalized to multilayer structures and is therefore of limited interest for practitioners.
Another significant aspect that has to be carefully handled is the length of the zones where boundary effects extinguish according to the Saint-Venant principle. While for an isotropic beam boundary effects are negligible at a distance greater than the maximal size of the cross-section, for an anisotropic beam such a distance depends on the ratio between the axial and shear moduli and may be greater than six or seven times the maximal cross-section dimension \citep{ch_77, b_85, hc_18}. On the one hand, this reduces the effectiveness of beam models and, on the other hand, it introduces further phenomena to be considered in the analysis of structural elements, impeding a straightforward interpretation of both numerical and experimental results. Nowadays, effective and accurate cross-section analysis tools (e.g., \citep{gw_16a}, Variational Asymptotic Beam Sectional Analysis \citep{yhvc_02, yvhh_02, phd_g_14, gssh_18}, Semi-Analytical Finite Element \citep{dkl_01a,dkl_01b,dkl_01c,dat_10}, Generalized Beam Theory \citep{sc_02}) that can accurately handle the problems introduced so far are available. Nevertheless, all these cross-section analysis tools are based on auxiliary \acp{PDE} and functionals, impeding an immediate physical understanding of the analysis results. As a consequence, engineers use the above-mentioned analysis tools as black boxes. Furthermore, the scarce awareness about the effects of anisotropy on the structural behavior leads engineers to erroneously believe that coarse adaptations of isotropic beam models are effective \citep{bkf_18}. This paper proposes a simple planar beam model that effectively describes the linear elastic behavior of anisotropic multi-layer structural elements, accounting for the previously introduced issues. Specifically, the beam model assumes that the beam cross-section behaves rigidly, in analogy with the Timoshenko beam. On the one hand, this choice limits the accuracy and the applicability of the proposed beam model.
On the other hand, it leads to \acp{ODE} for which an analytical solution can be computed and easily interpreted through simple physical considerations, allowing for a deep understanding of the structural behavior of anisotropic beams. The main novelty of the developed model is an enhanced and effective stress recovery based on a two-step iterative procedure. The former step uses the first 2D constitutive relation for the recovery of the axial stress and handles the effects of the anisotropy on the stress distribution. The latter step uses the horizontal equilibrium \ac{PDE} for the recovery of the shear stress, in analogy with the standard Jourawsky approach \citep{j_56,b_03}. Such a procedure allows us to identify the non-trivial and explicit dependence of the axial stress on the transversal internal force and to handle also multi-layer anisotropic beams. Beam constitutive relations are derived from the stress potential and the outcomes of the stress recovery procedure. Such an approach properly embeds the anisotropy effects within the beam model, and the effectiveness of the proposed path for deriving the constitutive relations has already been demonstrated for non-prismatic and functionally graded material beams \citep{basfea_16, baaf_17, baf_17, bsaf_17}. Numerical results will demonstrate that the proposed beam model describes the behavior of anisotropic structural elements with good accuracy, leading to an extremely convenient cost-benefit ratio. In particular, the proposed beam model effectively predicts: (i) the highly non-linear distribution of axial stresses, obtained despite the fact that deformations have a linear distribution and the material is linear-elastic, (ii) the explicit dependency of the axial stress on the transversal internal force and load, and (iii) the fact that the anisotropy influences the beam displacements more than shear deformation.
The outline of the paper is as follows: Section \ref{s_beam_model} defines the problem and illustrates the beam model \acp{ODE}, Section \ref{s_anal_sol} derives the \acp{ODE} analytical solution, Sections \ref{s_anal_res} and \ref{s_num_res} discuss some meaningful examples, and Section \ref{s_conclusion} summarizes the main properties, advantages, and limitations of the proposed method and delineates future research. \section{Beam model} \label{s_beam_model} This section introduces the 2D problem as well as notations (Section \ref{s_mech_prop}), it discusses the beam compatibility and equilibrium \acp{ODE} (Section \ref{s_equil+comp}), the recovery of the cross-section stress distribution (Section \ref{s_stress_rec}), and the beam constitutive relations (Section \ref{s_beam_const_rel}). \subsection{ 2D problem definition} \label{s_mech_prop} The beam \name{longitudinal axis} $L$ and the beam \name{cross-section} $H$ are closed and bounded subsets of the $x$- and $y$-axes defined as \begin{equation} \label{axis_def} L := \left\{x \in \left[ 0 , l \right]\right\}; \quad H := \left\{y \in \left[ - \beta h , \left(1-\beta\right) h \right]\right\} \end{equation} where $l$, $h$, and $0<\beta<1$ are the \name{beam length}, the \name{beam thickness}, and a dimensionless parameter defining the distance between the $x$-axis and the lower boundary of the cross-section, respectively. The \name{beam depth} $b$ denotes the cross-section size along the $z$ coordinate and, in the following, we assume that $b = 1$. As illustrated in Figure~\ref{f_trave}, the 2D \name{beam body} $\Omega$ is defined as \begin{equation} \label{beam_body_def} \Omega := L \times H \end{equation} Finally, we assume that the body is slender (i.e., $l \gg h$) and behaves under the hypothesis of plane stress and small displacements.
\begin{figure}[htbp] \centering \psfrag{O}{\footnotesize $O$} \psfrag{x}{\footnotesize $x$} \psfrag{y}{\footnotesize $y$} \psfrag{p}{\footnotesize $p \left(x\right)$} \psfrag{q}{\footnotesize $q$} \psfrag{m}{\footnotesize $m \left(x\right)$} \psfrag{h}{\footnotesize $h$} \psfrag{Bh}{\footnotesize $\beta h$} \psfrag{l}{\footnotesize $l$} \psfrag{L}{\footnotesize $L$} \psfrag{omega}{\footnotesize $\Omega$} \psfrag{theta}{\footnotesize $\theta$} \includegraphics[width=0.5\columnwidth]{trave_aniso.eps} \caption{\footnotesize{ Anisotropic multilayer beam with arbitrary orientation of principal directions. Geometry, coordinate system, dimensions, and adopted notations.}} \label{f_trave} \end{figure} We introduce the \name{displacement} vector field $\pmb{s} \left(x,y\right) = \left[ s_x \left(x,y\right) , s_y \left(x,y\right) \right]$, the \name{stress} tensor field $\pmb{\sigma} \left(x,y\right) = \left[\sigma_{x} \left(x,y\right) , \sigma_{y} \left(x,y\right) , \tau \left(x,y\right)\right]^T$, and the \name{strain} tensor field $\pmb{\varepsilon} \left(x,y\right) = \left[\epsilon_{x} \left(x,y\right) , \epsilon_{y} \left(x,y\right) , \gamma_{xy} \left(x,y\right)\right]^T$ using the engineering notation. Furthermore, a \name{distributed load} $\pmb{f} = \left[ f_x , f_y \right]$ is applied within the domain and suitable \acp{BC} are assigned on the boundary of the domain $\Omega$. The 2D compatibility \acp{PDE} read \begin{subequations}\label{compat_eq} \begin{align} \epsilon_{x} \left(x,y\right) = & s_{x,x} \left(x,y\right) \label{comp_epsx}\\ \epsilon_{y} \left(x,y\right) = & s_{y,y} \left(x,y\right) \label{comp_epsy}\\ \gamma_{xy} \left(x,y\right) = & s_{x,y} \left(x,y\right) + s_{y,x} \left(x,y\right) \label{comp_gamm} \end{align} \end{subequations}% where the notation $\left(\cdot\right)_{,i}$ for $i=x,y$ represents partial derivatives (consistently with the engineering notation, $\gamma_{xy}$ is the total shear strain, without the $1/2$ factor of the tensor component).
The 2D equilibrium \acp{PDE} read \begin{subequations}\label{equil_eq} \begin{align} \sigma_{x,x} \left(x,y\right) + \tau_{,y} \left(x,y\right) = & -f_x \left(x,y\right) \label{equil_x}\\ \tau_{,x} \left(x,y\right) + \sigma_{y,y} \left(x,y\right) = & -f_y \left(x,y\right) \label{equil_y} \end{align} \end{subequations} The beam is made of a linear-elastic and anisotropic material. As represented in Figure \ref{f_trave}, the material properties do not depend on the beam axis $x$ coordinate and are piecewise constant within the beam thickness. Following the notation introduced by \citet{mry_96}, the 2D anisotropic constitutive relations can be represented as \begin{subequations}\label{mat_const_rel} \begin{align} \epsilon_x \left(x,y\right) = & \frac{\sigma_{x}\left(x,y\right)}{E_{xx}\left(y\right)} + \frac{\sigma_{y}\left(x,y\right)}{E_{xy}\left(y\right)} + \frac{\tau\left(x,y\right)}{G_x\left(y\right)} \label{mat_const_rel_x} \\ \epsilon_y \left(x,y\right) = & \frac{\sigma_{x}\left(x,y\right)}{E_{xy}\left(y\right)} + \frac{\sigma_{y}\left(x,y\right)}{E_{yy}\left(y\right)} + \frac{\tau\left(x,y\right)}{G_y\left(y\right)} \label{mat_const_rel_y} \\ \gamma_{xy} \left(x,y\right) = & \frac{\sigma_{x}\left(x,y\right)}{G_{x}\left(y\right)} + \frac{\sigma_{y}\left(x,y\right)}{G_{y}\left(y\right)} + \frac{\tau\left(x,y\right)}{G\left(y\right)} \label{mat_const_rel_xy} \end{align} \end{subequations} The coefficients of the material constitutive relations can be collected in a matrix $\pmb{D}$ that is defined as \begin{equation} \label{const_rel_rotation} \pmb{D}\left(y\right) = \left[ \begin{array}{ccc} \frac{1}{E_{xx}\left(y\right)} & \frac{1}{E_{xy}\left(y\right)} & \frac{1}{G_x\left(y\right)} \\ \frac{1}{E_{xy}\left(y\right)} & \frac{1}{E_{yy}\left(y\right)} & \frac{1}{G_y\left(y\right)} \\ \frac{1}{G_{x}\left(y\right)} & \frac{1}{G_{y}\left(y\right)} & \frac{1}{G\left(y\right)} \end{array} \right] = \pmb{R}^T\left(y\right) \left[ \begin{array}{ccc} 
\frac{1}{{E}_{11}\left(y\right)} & -\frac{\nu\left(y\right)}{{E}_{11}\left(y\right)} & 0 \\ -\frac{\nu\left(y\right)}{{E}_{11}\left(y\right)} & \frac{1}{E_{22}\left(y\right)} & 0 \\ 0 & 0 & \frac{1}{G_{12}\left(y\right)} \end{array} \right] \pmb{R}\left(y\right) \end{equation} where $\pmb{R}\left(y\right)$ reads \begin{equation} \pmb{R}\left(y\right) = \left[ \begin{array}{ccc} \cos ^{2} \left( \theta\left(y\right) \right) & \sin ^{2} \left( \theta\left(y\right) \right) & 2 \sin \left( \theta\left(y\right) \right) \cos \left( \theta\left(y\right) \right) \\ \sin ^{2} \left( \theta\left(y\right) \right) & \cos ^{2} \left( \theta\left(y\right) \right) & -2 \sin \left( \theta\left(y\right) \right) \cos \left( \theta\left(y\right) \right) \\ -\sin \left( \theta\left(y\right) \right) \cos \left( \theta\left(y\right) \right) & \sin \left( \theta\left(y\right) \right) \cos \left( \theta\left(y\right) \right) & \cos ^{2} \left( \theta\left(y\right) \right) - \sin ^{2} \left( \theta\left(y\right) \right) \end{array} \right] \end{equation} ${E}_{11}\left(y\right) , E_{22}\left(y\right) , G_{12}\left(y\right)$, and $\nu\left(y\right)$ are the parameters defining the mechanical properties of the material with respect to the principal directions. The quantity $\theta\left(y\right)$, with $-\pi/2 < \theta\left(y\right) < \pi/2$, is the rotation of the principal direction of the material with respect to the $x$-axis. {\rmk \label{r_D_even_odd} Due to the definition \eqref{const_rel_rotation}, $E_{xx} \left(y\right)$, $E_{xy} \left(y\right)$, $E_{yy} \left(y\right)$, and $G \left(y\right)$ are even functions of the material principal direction rotation $\theta\left(y\right)$ whereas the material coupling terms $G_{x} \left(y\right)$ and $G_{y} \left(y\right)$ are odd. 
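This parity can be checked numerically by building $\pmb{R}\left(\theta\right)$ and rotating a sample orthotropic compliance; the material constants below are hypothetical and units are irrelevant for the check:

```python
import numpy as np

def R(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c * c, s * s, 2 * s * c],
                     [s * s, c * c, -2 * s * c],
                     [-s * c, s * c, c * c - s * s]])

# hypothetical orthotropic compliances in the principal frame
E11, E22, G12, nu = 10.0, 1.0, 0.5, 0.3
D0 = np.array([[1 / E11, -nu / E11, 0.0],
               [-nu / E11, 1 / E22, 0.0],
               [0.0, 0.0, 1 / G12]])

def D(t):                      # rotated compliance matrix R^T D0 R
    return R(t).T @ D0 @ R(t)

t = 0.3
Dp, Dm = D(t), D(-t)
# 1/E_xx, 1/E_xy, 1/E_yy, 1/G are even in theta ...
assert np.allclose([Dp[0, 0], Dp[0, 1], Dp[1, 1], Dp[2, 2]],
                   [Dm[0, 0], Dm[0, 1], Dm[1, 1], Dm[2, 2]])
# ... while the coupling terms 1/G_x, 1/G_y are odd
assert np.allclose([Dp[0, 2], Dp[1, 2]], [-Dm[0, 2], -Dm[1, 2]])
print("parity check passed")
```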
} \subsection{Compatibility and equilibrium \acp{ODE}} \label{s_equil+comp} For convenience, we define the \name{axial stiffness} $A^*$, the dimensionless parameter $\beta$ introduced in Equation \eqref{axis_def}, and the \name{bending stiffness} $I^*$ \begin{equation} \label{area_centlin_inertia} A^* = \int_{H} E_{xx} \left(y\right) dy; \quad \beta = \cfrac{1}{h A^* }\int_{H} E_{xx}\left(y \right) y dy; \quad I^* = \int_{H} E_{xx}\left(y\right) y^2 dy \end{equation} {\rmk \label{r_centerline} Due to Definition \eqref{area_centlin_inertia}, the origin of the adopted Cartesian coordinate system $O$ coincides with the so-called stiffness centroid that is equal to the cross-sectional geometric centroid only if the cross-section is symmetric and, in particular, when the beam is homogeneous, as discussed also by \citet{dkl_01b}.} As usual for standard Timoshenko beam models, the 2D displacement field $\pmb{s}\left(x,y\right) = \left[s_x\left(x,y\right), s_y\left(x,y\right)\right]^T$ is represented in terms of three 1D functions, indicated as \name{axial displacement} $u\left(x\right)$, \name{cross-section rotation} $\phi\left(x\right)$, and \name{transversal displacement} $v\left(x\right)$. 
Therefore, the displacement field components are approximated as follows \begin{subequations} \label{kin_ass} \begin{align} s_x\left(x,y\right) \approx & u \left(x\right) - y \phi \left(x\right)\\ s_y\left(x,y\right) \approx & v\left(x\right) \end{align} \end{subequations} Introducing the \name{generalized strains}, defined as the \name{axial strain} $\epsilon \left(x\right)$, the \name{curvature} $\chi \left(x\right)$, and the \name{shear strain} $\gamma \left(x\right)$, the beam compatibility is expressed through the following \acp{ODE} \begin{subequations} \label{disp_rec} \begin{align} \epsilon \left(x\right) & = u '\left(x\right) \label{u_rec}\\ \chi \left(x\right) &= \phi'\left(x\right) \label{fi_rec} \\ \gamma \left(x\right) & = v'\left(x\right) - \phi\left(x\right) \label{v_rec} \end{align} \end{subequations} where the notation $\left(\cdot\right)'$ denotes derivatives with respect to $x$. {\rmk \label{r_approx} In light of Remark \ref{r_centerline}, the kinematic approximation \eqref{kin_ass} differs from the standard Timoshenko one. In fact, $u \left(x\right)$ represents the axial displacement of the stiffness centroids and, in general, it does not coincide with the mean value of the cross-section axial displacements (i.e., $u \left(x\right) = s_x \left(x,0\right) \neq 1/h \int_{H} s_x \left(x,y\right) dy$).
Similarly, $\epsilon \left(x\right)$ is not the mean value of the axial strain evaluated within the cross-section, but just represents the axial elongation evaluated at $y=0$ (i.e., $\epsilon \left(x\right) \neq 1/h \int_{H} \partial s_{x} \left(x,y\right)/\partial x \, dy$).} We introduce the \name{axial internal force} $N \left(x\right)$, the \name{bending moment} $M \left(x\right)$, and the \name{transversal internal force} $V \left(x\right)$ defined as \begin{gather} \label{stress_resultant} N \left( x \right) = \int_H \sigma_{x} \left(x,y\right) dy; \quad M \left( x \right) = \int_H \sigma_{x} \left(x,y\right) \left(-y\right) dy; \quad V \left( x \right) = \int_H \tau \left(x,y\right) dy \end{gather} Furthermore, we assume that $f_x =0$ and we introduce the transversal load $q$ \begin{equation} q = \int_H f_y dy = hf_y \end{equation} Considering the axial, rotational, and transversal equilibrium of an infinitesimally long beam segment, the equilibrium \acp{ODE} read \begin{subequations} \label{beam_equil} \begin{align} N'\left(x\right) & = 0 \label{beam_equil_h} \\ M'\left(x\right) & = - V\left(x\right) \label{beam_equil_m} \\ V'\left(x\right) & = - q \label{beam_equil_v} \end{align} \end{subequations} \subsection{Stress recovery} \label{s_stress_rec} The stress recovery is based on a recursive procedure that leads to a distribution of stresses satisfying the first constitutive relation \eqref{mat_const_rel_x} and the first equilibrium \ac{PDE} \eqref{equil_x}. Conversely, aiming at maximal simplicity of the model, we assume that the transversal stress vanishes, i.e., $\sigma_{y} \left(x,y\right) = 0$.
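The recovery relies on the section constants of Equation \eqref{area_centlin_inertia}; for a cross-section made of layers with piecewise-constant properties these integrals reduce to closed sums. A minimal sketch locating the stiffness centroid $\beta h$ (the layer moduli are hypothetical sample values):

```python
# Section constants of a two-layer rectangular cross-section (unit depth).
# The coordinate s runs from the bottom face; the origin is then placed at
# the stiffness centroid s = beta*h, as required by the model.
h = 0.1
layers = [(0.5 * h, 10e9),   # (thickness, E_xx): stiff bottom layer
          (0.5 * h, 2e9)]    # compliant top layer

A_star, S_star, s = 0.0, 0.0, 0.0
for t, E in layers:
    A_star += E * t                      # axial stiffness A*
    S_star += E * t * (s + t / 2.0)      # first moment of E_xx about the base
    s += t

beta = S_star / (h * A_star)             # stiffness-centroid position

I_star, s = 0.0, 0.0
for t, E in layers:
    y0, y1 = s - beta * h, s + t - beta * h
    I_star += E * (y1**3 - y0**3) / 3.0  # exact for constant E in each layer
    s += t

print(beta)  # -> about 0.333: centroid pulled toward the stiffer layer
```

For a homogeneous section the same sums give $\beta = 1/2$, recovering the geometric centroid mentioned in Remark \ref{r_centerline}.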
To set up the recursive procedure, it is convenient to solve Equations \eqref{mat_const_rel_x} and \eqref{equil_x} for the variables $\sigma_{x} \left(x,y\right)$ and $\tau \left(x,y\right)$, respectively \begin{equation} \label{const_rel_sx} \sigma_{x} \left(x,y\right) = E_{xx} \left(y\right) \epsilon_{x} \left(x,y\right) - \frac{E_{xx} \left(y\right)}{G_{x} \left(y\right)}\tau \left(x,y\right) \end{equation} \begin{equation} \label{shear_p_rec} \tau\left(x,y\right) = -\int_{-\beta h}^{y} \sigma_{x,x} \left(x,\hat{y}\right) d\hat{y} \end{equation} \begin{figure}[htbp] \centering \includegraphics[width=0.5\columnwidth]{stress_recovery_scheme.eps} \caption{\footnotesize{ Flow chart of the iterative procedure adopted for the stress recovery.}} \label{f_stress_recovery} \end{figure} The iterative procedure is summarized in Figure \ref{f_stress_recovery} and leads to the following distribution of stresses \begin{subequations} \label{stress_def} \begin{align} \label{sigx_def} \sigma_{x} \left(x,y\right) = d_{\sigma_x}^N \left(y\right) N \left(x\right) + d_{\sigma_x}^M \left(y\right) M \left(x\right) + d_{\sigma_x}^V \left(y\right) V \left(x\right) + d_{\sigma_x}^q \left(y\right) q \\ \label{tau_def} \tau \left(x,y\right) = d_{\tau}^V \left(y\right) V \left(x\right) + d_{\tau}^q \left(y\right) q \end{align} \end{subequations} where \begin{subequations} \begin{align} d_{\sigma_x}^N\left(y\right) & = \frac{E_{xx}\left(y\right)}{A^*} \label{b_sx_N} \\ d_{\sigma_x}^M\left(y\right) & = -\frac{E_{xx}\left(y\right)}{I^*} y \label{b_sx_M} \\ d_{\sigma_x}^V\left(y\right) & = - \frac{E_{xx} \left(y\right)}{G_{x} \left(y\right)} d_{\tau}^V\left(y\right) + \int_H \frac{E_{xx} \left(y\right)}{G_{x} \left(y\right)} d_{\tau}^V\left(y\right) dy \, d_{\sigma_x}^N\left(y\right) - \int_H \frac{E_{xx} \left(y\right)}{G_{x} \left(y\right)} d_{\tau}^V\left(y\right) y dy \, d_{\sigma_x}^M\left(y\right) \label{b_sx_V} \\ d_{\sigma_x}^{q}\left(y\right) & = - \frac{E_{xx}
\left(y\right)}{G_{x} \left(y\right)} d_{\tau}^{q}\left(y\right) + \int_H \frac{E_{xx} \left(y\right)}{G_{x} \left(y\right)} d_{\tau}^{q}\left(y\right) dy \, d_{\sigma_x}^N\left(y\right) - \int_H \frac{E_{xx} \left(y\right)}{G_{x} \left(y\right)} d_{\tau}^{q}\left(y\right) y dy \, d_{\sigma_x}^M\left(y\right) \label{b_sx_q} \\ d_{\tau}^V\left(y\right) & = \int_{-\beta h}^{y} d_{\sigma_x}^M \left( \hat{y} \right) d\hat{y} \label{b_tau_V} \\ d_{\tau}^q\left(y\right) & = \int_{-\beta h}^{y} d_{\sigma_x}^V \left( \hat{y} \right) d\hat{y} \label{b_tau_q} \end{align} \end{subequations} {\rmk \label{r_stress_rec} In Definitions \eqref{b_sx_V} and \eqref{b_sx_q}, the second and third addends enforce Equation \eqref{stress_resultant}, preserving the standard physical meaning of the beam model variables. } Equation \eqref{sigx_def} highlights that the axial stress $\sigma_{x}$ explicitly depends on the transversal internal force $V \left(x\right)$ and on the load $q$. Similarly, the shear stress $\tau$ explicitly depends on the transversal load $q$. To the authors' knowledge, only \citet{kh_16} and \citet{h_67} obtained similar dependencies, but their analyses were limited to homogeneous beams. Furthermore, $d_{\sigma_x}^V$ and $d_{\sigma_x}^q$ depend on $E_{xx} / G_{x}$ and $E_{xx}^2 / G_{x}^2$, respectively (see Equations \eqref{b_sx_V} and \eqref{b_sx_q}), and similar coefficients were also reported by \citet{kh_16}. On the one hand, the similarity of Equations \eqref{sigx_def} and \eqref{tau_def} with the analytical solutions reported in \citep{kh_16} and \citep{h_67} indicates that the procedure summarized in Figure \ref{f_stress_recovery} may be effective.
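For a homogeneous anisotropic rectangular section ($\beta = 1/2$) the recovery coefficients can be evaluated numerically. The checks below verify that $\int_H d_{\tau}^V\,dy = 1$ and that the $V$-contribution to $\sigma_x$ carries no axial force and no bending moment while being nonzero pointwise (the material values are hypothetical samples):

```python
import numpy as np

# Stress-recovery coefficients for a homogeneous anisotropic rectangular
# section (beta = 1/2); E_xx and G_x are hypothetical sample values.
h, E, Gx = 0.1, 10e9, 40e9
y = np.linspace(-h / 2, h / 2, 2001)
dy = y[1] - y[0]

def integ(f):                            # trapezoidal rule on the grid
    return float(np.sum((f[1:] + f[:-1]) / 2.0) * dy)

d_N = np.full_like(y, 1.0 / h)           # E_xx/A* for a homogeneous section
d_M = -12.0 * y / h**3                   # -E_xx*y/I*

d_tV = np.zeros_like(y)                  # d_tau^V: running integral of d_M
d_tV[1:] = np.cumsum((d_M[1:] + d_M[:-1]) / 2.0 * dy)

c = E / Gx                               # E_xx/G_x, constant in this case
d_sV = -c * d_tV + integ(c * d_tV) * d_N - integ(c * d_tV * y) * d_M

assert abs(integ(d_tV) - 1.0) < 1e-6     # tau carries the whole V
assert abs(integ(d_sV)) < 1e-6           # no spurious axial force ...
assert abs(integ(d_sV * y)) < 1e-6       # ... and no spurious bending moment
print(d_sV[0], d_sV[len(y) // 2])        # nonzero: sigma_x depends on V
```

The same loop structure extends directly to layered sections by replacing the constant $E_{xx}$ and $1/G_x$ with piecewise-constant arrays.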
On the other hand, the non-trivial dependency of $\sigma_{x}$ on the transversal internal force $V \left(x\right)$ indicates that the stress recovery procedures developed for isotropic or orthotropic structural elements available in the literature \citep{dasar_17, tfbr_17} and implemented in most commercial structural analysis software can lead to coarse results. \subsection{Beam constitutive relations} \label{s_beam_const_rel} To complete the Timoshenko-like beam model, simplified constitutive relations have to be defined. To this aim, we introduce the stress potential \begin{equation} \label{stress_potential} \Psi^*\left(x,y\right) = \frac{1}{2} \, \pmb{\sigma}^T\left(x,y\right) \cdot \pmb{D}\left(y\right) \cdot \pmb{\sigma}\left(x,y\right) = \frac{1}{2} \left( \frac{\sigma_{x}^2\left(x,y\right) }{E_{xx}\left(y\right) } + \frac{\tau^2\left(x,y\right)}{G\left(y\right)} + 2 \frac{\sigma_{x}\left(x,y\right)\tau\left(x,y\right) }{G_{x}\left(y\right) }\right) \end{equation} Substituting the stress recovery relations \eqref{stress_def} into Equation \eqref{stress_potential}, the generalized strains are obtained as the cross-section integrals of the derivatives of the stress potential with respect to the corresponding internal forces, reading \begin{subequations} \label{const_rel} \begin{align} \epsilon \left(x\right) = & \int_{H} \frac{\partial \Psi^*\left(x,y\right)}{\partial N\left(x\right)} dy = \epsilon_N N \left(x\right) + \epsilon_M M \left(x\right) + \epsilon_V V \left(x\right) + \epsilon^q q \\ \chi \left(x\right) = & \int_{H} \frac{\partial \Psi^*\left(x,y\right)}{\partial M\left(x\right)} dy = \chi_N N \left(x\right) + \chi_M M \left(x\right) + \chi_V V \left(x\right) + \chi^q q \\ \gamma \left(x\right) = & \int_{H} \frac{\partial \Psi^*\left(x,y\right)}{\partial V\left(x\right)} dy = \gamma_N N \left(x\right) + \gamma_M M \left(x\right) + \gamma_V V \left(x\right) + \gamma^q q \end{align} \end{subequations} with \begin{subequations} \begin{align} \epsilon_N & = \int_{H}
\frac{\left( d_{\sigma_x}^N\left(y\right) \right)^2}{E_{xx}\left(y\right)} dy \label{eps_N} \\ \epsilon_M = \chi_N & = \int_{H} \frac{d_{\sigma_x}^N\left(y\right) d_{\sigma_x}^M\left(y\right) }{E_{xx}\left(y\right)} dy = 0 \label{eps_M} \\ \epsilon_V = \gamma_N & = \int_{H} \frac{d_{\sigma_x}^N\left(y\right) d_{\sigma_x}^V\left(y\right)}{E_{xx}\left(y\right)} dy + \int_{H} \frac{d_{\sigma_x}^N\left(y\right) d_{\tau}^V\left(y\right)}{G_{x}\left(y\right)} dy \label{eps_V} \\ \chi_M & = \int_{H} \frac{\left( d_{\sigma_x}^M\left(y\right) \right)^2}{E_{xx}\left(y\right)} dy \label{chi_M} \\ \chi_V = \gamma_M & = \int_{H} \frac{d_{\sigma_x}^M\left(y\right) d_{\sigma_x}^V\left(y\right)}{E_{xx}\left(y\right)} dy + \int_{H} \frac{d_{\sigma_x}^M\left(y\right) d_{\tau}^V\left(y\right)}{G_{x}\left(y\right)} dy \label{chi_V} \\ \gamma_V & = \int_{H} \frac{\left( d_{\sigma_x}^V\left(y\right) \right)^2}{E_{xx}\left(y\right)} dy + 2 \int_{H} \frac{d_{\sigma_x}^V\left(y\right) d_{\tau}^V\left(y\right)}{G_{x}\left(y\right)} dy + \int_{H} \frac{\left( d_{\tau}^V\left(y\right) \right)^2}{G\left(y\right)} dy \label{gam_V} \\ \epsilon^q & = \int_{H} \frac{d_{\sigma_x}^N\left(y\right) d_{\sigma_x}^q \left(y\right) }{E_{xx}\left(y\right)} dy + \int_{H} \frac{d_{\sigma_x}^N\left(y\right) d_{\tau}^q\left(y\right)}{G_{x}\left(y\right)} dy \label{eps_p} \\ \chi^q & = \int_{H} \frac{d_{\sigma_x}^M\left(y\right) d_{\sigma_x}^q \left(y\right) }{E_{xx}\left(y\right)} dy + \int_{H} \frac{d_{\sigma_x}^M\left(y\right) d_{\tau}^q\left(y\right)}{G_{x}\left(y\right)} dy \label{chi_p} \\ \gamma^q & = \int_{H} \frac{d_{\sigma_x}^V\left(y\right) d_{\sigma_x}^q \left(y\right) }{E_{xx}\left(y\right)} dy + \int_{H} \frac{d_{\sigma_x}^V\left(y\right) d_{\tau}^q\left(y\right)}{G_{x}\left(y\right)} dy + \int_{H} \frac{d_{\sigma_x}^q\left(y\right) d_{\tau}^V\left(y\right)}{G_{x}\left(y\right)} dy + \int_{H} \frac{d_{\tau}^V \left(y\right) d_{\tau}^q\left(y\right)}{G\left(y\right)} dy \label{gam_p} \end{align} \end{subequations}
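The flexibility coefficients can be evaluated numerically for a two-layer section by assembling the recovery coefficients and integrating the quadratic form of the stress potential \eqref{stress_potential} directly (a sketch under the model's assumptions; all material values are hypothetical samples):

```python
import numpy as np

# Flexibility coefficients of a two-layer section, assembled from the
# recovery coefficients and the stress potential. Bottom layer orthotropic
# (1/G_x = 0), top layer anisotropic; all values are hypothetical samples.
h, n = 0.1, 4001
s = np.linspace(0.0, h, n)               # coordinate from the bottom face
ds = s[1] - s[0]
top = s > h / 2.0

E   = np.where(top, 4e9, 10e9)           # E_xx(y)
G   = np.where(top, 0.5e9, 0.8e9)        # G(y)
iGx = np.where(top, 1.0 / 8e9, 0.0)      # 1/G_x(y)

def integ(f):                            # trapezoidal rule
    return float(np.sum((f[1:] + f[:-1]) / 2.0) * ds)

ybar = integ(E * s) / integ(E)           # stiffness centroid
y = s - ybar
A_star, I_star = integ(E), integ(E * y**2)

d_N = E / A_star
d_M = -E * y / I_star
d_tV = np.zeros(n)                       # d_tau^V: running integral of d_M
d_tV[1:] = np.cumsum((d_M[1:] + d_M[:-1]) / 2.0 * ds)
g = E * iGx * d_tV                       # (E_xx/G_x) d_tau^V
d_sV = -g + integ(g) * d_N - integ(g * y) * d_M

def F(si, ti, sj, tj):                   # integral of the potential's form
    return integ(si * sj / E + (si * tj + sj * ti) * iGx + ti * tj / G)

z = np.zeros(n)
eps_N = F(d_N, z, d_N, z)
eps_M = F(d_N, z, d_M, z)                # = chi_N
eps_V = F(d_N, z, d_sV, d_tV)            # = gamma_N (axial/shear coupling)
chi_M = F(d_M, z, d_M, z)
chi_V = F(d_M, z, d_sV, d_tV)            # = gamma_M (bending/shear coupling)
gam_V = F(d_sV, d_tV, d_sV, d_tV)

assert abs(eps_M) < 1e-9 * eps_N         # epsilon_M = chi_N = 0 (centroid)
assert eps_V > 0.0                       # anisotropy activates the coupling
print(eps_N, chi_M, eps_V, chi_V, gam_V)
```

Setting `iGx` to zero everywhere recovers the uncoupled diagonal structure of the orthotropic case.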
Introducing the definitions of stress distributions \eqref{sigx_def} into Definition \eqref{eps_M}, we obtain that $\epsilon_M = \chi_N = 0$ due to the choice of the origin of the Cartesian coordinate system introduced in Equation \eqref{area_centlin_inertia}. Conversely, the transversal internal force $V \left(x\right)$ produces not only shear deformation $\gamma \left(x\right)$, but also axial strain $\epsilon \left(x\right)$ and curvature $\chi \left(x\right)$, since $\epsilon_V \neq 0$ and $\chi_V \neq 0$. Equations \eqref{eps_N} and \eqref{chi_M} lead to definitions of axial and bending stiffness analogous to the ones obtained for isotropic beams. Conversely, Equation \eqref{gam_V} highlights that the shear stiffness of anisotropic beams $\gamma_V$ depends not only on the shear modulus $G \left(y\right)$, but also on both the axial modulus of elasticity $E_{xx} \left(y\right)$ and the coupling term $G_{x} \left(y\right)$. Finally, all the deformations explicitly depend on the transversal load $q$ (see Equation \eqref{const_rel}). Such a deep influence of the material anisotropy on the beam constitutive relations is ignored by most of the literature. To the authors' knowledge, the coefficient $\epsilon_V = \gamma_N$ was analyzed only in \citep{mry_96, my_96, jnc_02, ql_02, vmp_03}. Conversely, the existence of the coefficient $\chi_V = \gamma_M$ was mentioned by \citet{dkl_01b} in the framework of the derivation of an enhanced 3D beam model, but its influence on the beam structural response was never analyzed. {\rmk \label{r_bondary_effects} The extremely simple assumptions on kinematics \eqref{kin_ass} do not make it possible to capture higher-order and boundary effects, as is usual for all \acp{FSDT}.
Therefore, the proposed beam model does not have the capability to describe the deformation of the cross-section and the phenomena that occur in the neighborhood of constraints and concentrated loads.} \section{Analytical solution of the \acp{ODE}} \label{s_anal_sol} This section discusses the analytical solution of the beam model \acp{ODE} \eqref{equil_eq}, \eqref{disp_rec}, and \eqref{const_rel} for the two-layer cantilever depicted in Figure \ref{f_cantilever}. \begin{figure}[htbp] \centering \psfrag{l}{\footnotesize $l$} \psfrag{h1}{\footnotesize $h_1$} \psfrag{h2}{\footnotesize $h_2$} \psfrag{hh}{\footnotesize $h$} \psfrag{p}{\footnotesize $q$} \psfrag{can}{\footnotesize cantilever} \psfrag{dub}{\footnotesize doubly-clamped} \includegraphics[width=0.5\columnwidth]{cantilever.eps} \caption{\footnotesize{ Bi-layer anisotropic cantilever. Geometry, loads, and \acp{BC}.}} \label{f_cantilever} \end{figure} The two layers are made of the same anisotropic material and their thicknesses are $h_1 = \alpha h$ and $h_2 = \left(1-\alpha\right) h$ with $0 \leq \alpha \leq 1$. In the bottom layer, the material principal direction is aligned with the beam axis; therefore $E_{xx} = E_{11}$, $G = G_{12}$, and $1/G_{x} = 0$. In the top layer, the material principal direction is rotated with respect to the beam axis by an angle $\theta$; therefore $E_{xx} = E_{11} / \mu$ and $G = G_{12} / \kappa$. The dimensionless parameters $\mu$ and $\kappa$ account for the reduction of the axial and shear moduli due to the rotation $\theta$ of the material principal direction \eqref{const_rel_rotation}. They are defined in Appendix \ref{a_mat_coeff}, together with the material coupling term $G_{x}$. For $\alpha = 1$, the beam reduces to a homogeneous orthotropic beam with the material orientation aligned with the beam axis. Conversely, for $\alpha = 0$, the beam reduces to a homogeneous anisotropic beam, similar to the ones analyzed in \citep{mry_96, my_96, jnc_02, ql_02, vmp_03, h_67, kh_16}.
Finally, aiming at the maximal simplicity of the analytical solution, we are going to neglect the influence of transversal load on the beam deformation (i.e., we assume $\epsilon^q = \chi^q = \gamma^q = 0$). The coefficients of beam constitutive relations introduced in Definition \eqref{const_rel} read \begin{equation} \label{const_rel_coeff} \epsilon_N = \frac{P_1}{E_{11} h}; \quad \epsilon_V = \gamma_N = - \frac{P_3}{G_{x} h}; \quad \chi_M = \frac{12 P_2}{E_{11} h^3}; \quad \chi_V = \gamma_M = - \frac{18 P_4}{G_{x} h^2}; \quad \gamma_V = \frac{6 P_5}{5 G_{12} h} + \frac{P_6 E_{11}}{5 G^2_{x} h} \end{equation} where the dimensionless coefficients $P_i$ for $i= 1 \dots 6$ (reported in Appendix \ref{a_p}) account for the influence of geometry and material on the structural element stiffness. The solution of \acp{ODE} \eqref{disp_rec}, \eqref{equil_eq}, and \eqref{const_rel} leads to the following analytical expressions for beam model variables \begin{equation} \label{ode_sol} \begin{split} N \left( x \right) = & C_6 \\ V \left( x \right) = & -q x + C_5 \\ M \left( x \right) = & \frac{q x^2}{2} - C_5 x + C_3 \\ \phi \left( x \right) = & \overbrace{\frac{12 P_2}{E_{11} h^3} \left(\frac{q x^3}{6} -\frac{C_5 {x}^{2}}{2} + C_3 x \right)}^{\phi_{EB} \left( x \right)} + \overbrace{\frac{18 P_4}{G_{x} h^2} \left(-\frac{q x^2}{2} + C_5 x \right)}^{\phi_{c} \left( x \right)} + C_2 \\ v \left( x \right) = & \overbrace{\frac{12 P_2}{E_{11} h^3} \left( \frac{q x^4}{24} - \frac{C_5 {x}^{3}}{6} + \frac{C_3 {x}^{2}}{2} \right)}^{v_{EB} \left( x \right)} + \overbrace{\frac{6 P_5}{5 G_{12} h} \left( - \frac{q x^2}{2} + C_5 x \right)}^{v_{T} \left( x \right)} \\ + &\overbrace{\frac{P_3}{G_{x} h} C_6 x -\frac{18 P_4}{G_{x} h^2} C_3 x }^{v_{c} \left( x \right)} + \overbrace{\frac{P_6 E_{11}}{ G_{x}^2 h} \left( - \frac{q x^2}{2} + C_5 x \right)}^{v_{r} \left( x \right)} +C_2 x+C_1 \\ u \left( x \right) = & \overbrace{\frac{P_1}{E_{11} h} C_6 x}^{u_{EB} \left( x \right)} - 
\overbrace{\frac{P_3}{G_{x} h}\left(-\frac{q x^2}{2} + C_5 x \right)}^{u_{c} \left( x \right)} +C_4 \end{split} \end{equation} where $C_i$ for $i=1 \dots 6$ depend on the \acp{BC}. The notations $\left(\cdot \right)_{EB}$, $\left(\cdot \right)_{T}$, $\left(\cdot \right)_{c}$, and $\left(\cdot \right)_{r}$ highlight the dependency of the addends on the axial stiffness $E_{11}$, the shear stiffness $G_{12}$, the coupling term $G_{x}$, and the ratio $G_{x}^2/E_{11}$, respectively. Furthermore, a few calculations allow us to conclude that the addends denoted as $\left(\cdot \right)_{EB}$ coincide with the solution of the \ac{EB} beam theory, whereas the addend denoted as $\left(\cdot \right)_{T}$ coincides with the shear deformation considered by the Timoshenko beam theory. Considering a cantilever (see Figure \ref{f_cantilever}), the following \acp{BC} have to be enforced \begin{equation} \label{bc_cantilever} u \left( 0 \right) = 0; \quad \phi \left( 0 \right) = 0; \quad v \left( 0 \right) = 0; \quad N \left( l \right) = 0; \quad M \left( l \right) = 0; \quad V \left( l \right) = 0 \end{equation} Requiring the \acp{ODE} solution \eqref{ode_sol} to satisfy the \acp{BC} \eqref{bc_cantilever} leads to the following values of $C_i$ for $i=1 \dots 6$ \begin{equation} C_1 = C_2 = C_4 = C_6 = 0; \quad C_3 = \frac{q l^2}{2}; \quad C_5 = q l \end{equation} Finally, introducing the dimensionless parameter $\lambda = l/h$, the maximal transversal displacement of the beam reads \begin{equation} \label{v_max} v \left( l \right) = v_{EB} \left( l \right) + v_{T} \left( l \right) + v_{c} \left( l \right) + v_{r} \left( l \right) = \frac{3 q l \lambda^3}{2 E_{11}} Q_1 + \frac{3 q l \lambda}{G_{12}} Q_2 - \frac{9 q l \lambda^2}{G_{x}} Q_3 + \frac{ q l \lambda }{G_{x}}\frac{ E_{11}}{G_{x}} Q_4 \end{equation} where the dimensionless coefficients $Q_i$ for $i= 1 \dots 4$ are reported in Appendix \ref{a_q_c}.
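The integration constants can also be obtained symbolically. The sketch below (hypothetical variable names; the symbols `a`, `b`, `t`, `p3`, `r`, `n` stand for the stiffness combinations $12P_2/(E_{11}h^3)$, $18P_4/(G_x h^2)$, $6P_5/(5G_{12}h)$, $P_3/(G_x h)$, $P_6E_{11}/(G_x^2 h)$, and $P_1/(E_{11}h)$ appearing in Equation \eqref{ode_sol}) recovers the values of $C_i$ reported above:

```python
# Symbolic derivation of the cantilever integration constants C_1..C_6
# (illustrative sketch; a, b, t, p3, r, n are placeholders for the
# stiffness combinations of the general ODE solution).
import sympy as sp

x, q, l = sp.symbols('x q l', positive=True)
a, b, t, p3, r, n = sp.symbols('a b t p3 r n')
C1, C2, C3, C4, C5, C6 = sp.symbols('C1:7')

N = C6
V = -q * x + C5
M = q * x**2 / 2 - C5 * x + C3
phi = a * (q * x**3 / 6 - C5 * x**2 / 2 + C3 * x) + b * (-q * x**2 / 2 + C5 * x) + C2
v = (a * (q * x**4 / 24 - C5 * x**3 / 6 + C3 * x**2 / 2)
     + t * (-q * x**2 / 2 + C5 * x)
     + p3 * C6 * x - b * C3 * x
     + r * (-q * x**2 / 2 + C5 * x)
     + C2 * x + C1)
u = n * C6 * x - p3 * (-q * x**2 / 2 + C5 * x) + C4

# Cantilever boundary conditions: clamped at x = 0, free end at x = l.
bcs = [u.subs(x, 0), phi.subs(x, 0), v.subs(x, 0),
       N.subs(x, l), M.subs(x, l), V.subs(x, l)]
sol = sp.solve(bcs, [C1, C2, C3, C4, C5, C6], dict=True)[0]
print(sol)  # C3 = q*l**2/2, C5 = q*l, all other constants zero
```

Note that the material coefficients drop out of the constants for the cantilever: the force \acp{BC} fix $C_3$, $C_5$, $C_6$ alone, and the kinematic \acp{BC} at $x=0$ annihilate the remaining ones.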
Equation \eqref{v_max} shows that the maximal transversal displacement $v \left( l \right)$ is the sum of four terms. The first addend $v_{EB} \left( l \right)$ depends on the Young's modulus along the principal direction $E_{11}$ and, for $\alpha = 1$, it corresponds to the classical \ac{EB} solution. The second addend $v_{T} \left( l \right)$ depends on the shear modulus $G_{12}$ and, for $\alpha = 1$, it coincides with the contribution due to the shear deformation handled by the Timoshenko beam model. The third term $v_{c} \left( l \right)$ depends on the material coupling term $G_{x}$, and its existence is a direct consequence of the fact that the material principal directions are not aligned with the beam axis. The fourth term $v_{r} \left( l \right)$ depends on the material coupling term $G_{x}$ and on the ratio $E_{11}/G_{x}$ that appears in the definition of the axial stress (see Equation \eqref{b_sx_V}). Looking at Equation \eqref{v_max} from a different perspective, the first addend $v_{EB} \left( l \right)$ depends on $\lambda ^3$, the second $v_{T} \left( l \right)$ and the fourth $v_{r} \left( l \right)$ depend on $\lambda$, and the third $v_{c} \left( l \right)$ depends on $\lambda ^2$. On the one hand, this result conforms to what is stated in the standard literature: the shear deformation $v_{T} \left( l \right)$ has a negligible influence on the total displacement for slender beams (i.e., for $\lambda \gg 1$). On the other hand, the third term $v_{c} \left( l \right)$ can contribute to the total displacement more than the shear deformation, whereas the fourth term $v_{r} \left( l \right)$ can have an influence similar to the shear deformation. To the authors' knowledge, the existence of the terms $v_{c} \left( l \right)$ and $v_{r} \left( l \right)$ has never been mentioned in the literature; their role will be analyzed in the following sections.
Finally, it is worth mentioning that \begin{itemize} \item stress distributions \eqref{stress_def} reduce to linear and quadratic functions for $\alpha = 1$ and $1 / G_{x} =0$, as usual in homogeneous prismatic beams, \item $\epsilon_N = 1/ \left( E_{11} h \right)$, $\epsilon_V = \gamma_N = \chi_V = \gamma_M = 0$, $\chi_M =12/ \left( h^3 E_{11} \right)$, and $\gamma_V = 6/ \left(5G_{12}h \right)$ for $\alpha = 1$, analogously to homogeneous prismatic isotropic beams, \item $\epsilon_N = \mu/ \left( E_{11} h \right)$, $\epsilon_V = \gamma_N = 1/\left(G_{x}h \right)$, $\chi_M =12 \mu/\left( h^3 E_{11}\right)$, $\chi_V = \gamma_M = 0$, and $\gamma_V = 6\kappa/\left(5G_{12}h\right)$ for $\alpha = 0$, similarly to the anisotropic beam model proposed by \citet{mry_96}. \end{itemize} These limit cases confirm that the presented beam model recovers analytical solutions already available in the literature. \section{Comparison with the analytical solution of a simply-supported homogeneous beam} \label{s_anal_res} This section compares the solution of the beam model discussed in Section \ref{s_beam_model} with the analytical solution derived in \citep{kh_16} for a simply-supported homogeneous beam. Numerical results are obtained assuming the following parameters \begin{equation} \label{mech_prop_homog_beam} \begin{split} & h = 0.2 \, \milli\meter; \quad l = 2 \, \milli\meter; \quad q = 1 \, \newton/\milli\meter; \quad \alpha = 0 \\ E_{11} = 10^4 \, \mega & \pascal; \quad E_{22} = 5 \cdot10^2 \, \mega\pascal; \quad G = 10^3 \, \mega\pascal; \quad \nu = 0.25 \end{split} \end{equation} It is worth mentioning that the material is highly anisotropic: $E_{11}/E_{22} = 20$ and $E_{11}/G = 10$. Such a choice aims at magnifying the effects of both shear deformation and coupling on the behavior of the structural element, allowing for a more accurate discussion of the beam model effectiveness.
Let $\psi \left(x\right)$ be a beam model variable; the solution computed by means of \acp{ODE} \eqref{disp_rec}, \eqref{equil_eq}, and \eqref{const_rel} is denoted in the following as $\psi^{mod}$. Conversely, the reference solution $\psi^{ref}$ is computed using the analytical expressions reported in \citep{kh_16}. Assuming $\theta = 45 \, \deg$, the rotation of the constitutive relation \eqref{const_rel_rotation} leads to \begin{equation} \begin{aligned} E_{xx} & = E_{yy} = 1.904 \, \giga\pascal ; \quad & E_{xy} & = 40.000 \, \giga\pascal ; \quad & G_{x} & = G_{y} = -1.052 \, \giga\pascal ; \quad & G & = 0.322 \, \giga\pascal \end{aligned} \end{equation} Figure \ref{f_homog_stress} compares the cross-section distributions of stresses evaluated according to the reference \citep{kh_16} $\psi^{ref}$, the proposed beam model $\psi^{mod}$, and the standard Timoshenko beam $\psi^{T}$. \begin{figure}[htbp] \centering \subfigure[axial stress, $x=1$]{ \label{f_homog_sx1} \includegraphics[width=0.48\textwidth]{simply_supp_sx1.eps}} \subfigure[shear stress, $x=1$]{ \label{f_homog_tau1} \includegraphics[width=0.48\textwidth]{simply_supp_tau1.eps}} \subfigure[axial stress, $x=1.65$]{ \label{f_homog_sx2} \includegraphics[width=0.48\textwidth]{simply_supp_sx2.eps}} \subfigure[shear stress, $x=1.65$]{ \label{f_homog_tau2} \includegraphics[width=0.48\textwidth]{simply_supp_tau2.eps}} \subfigure[axial stress, $x=2$]{ \label{f_homog_sx3} \includegraphics[width=0.48\textwidth]{simply_supp_sx3.eps}} \subfigure[shear stress, $x=2$]{ \label{f_homog_tau3} \includegraphics[width=0.48\textwidth]{simply_supp_tau3.eps}} \caption{\footnotesize{ Homogeneous, simply-supported, anisotropic beam ($\theta = 45 \, \deg$). Analysis of cross-section stress distributions.
Comparisons of reference $\psi^{ref}$, beam model $\psi^{mod}$, and Timoshenko $\psi^{T}$ solutions.}} \label{f_homog_stress} \end{figure} Numerical results demonstrate that the proposed beam model provides results substantially identical to the reference solution. In particular, Figure \ref{f_homog_tau1} highlights that the shear stress does not vanish at the beam mid-span, $\tau \left(l/2,y\right) \neq 0 $, even though the vertical internal force vanishes, $V \left(l/2\right) = 0$. Further comments on this peculiarity of simply-supported beams can be found in \citep{kh_16}. More interestingly, Figure \ref{f_homog_sx3} highlights that the axial stress does not vanish at the bearing, $\sigma_x \left(l,y\right) \neq 0 $, even though both the bending moment and the axial internal force vanish, $M \left(l\right) = N \left(l\right) = 0$. Considering also Figures \ref{f_homog_sx2} and \ref{f_homog_tau2}, it is possible to conclude that anisotropy influences the distribution of both axial and shear stresses. In particular, stress-recovery procedures developed for isotropic structural elements can underestimate the maximal magnitude of the axial stress with errors greater than $10 \, \%$. \section{Comparison with 2D \ac{FE}, bi-layer beam} \label{s_num_res} This section reports numerical results for two examples: Subsection \ref{s_cantilever} considers the cantilever already introduced in Section \ref{s_anal_sol} and Subsection \ref{s_clamp_clamp} analyzes a doubly-clamped beam (see Figure \ref{f_cantilever}).
In both cases, numerical results are obtained assuming the following parameters \begin{equation} \label{mech_prop} \begin{split} & h = 100 \, \milli\meter; \quad q = 1 \, \newton/\milli\meter; \quad \alpha = 0.5 \\ E_{11} = 10^4 \, \mega & \pascal; \quad E_{22} = 5 \cdot10^2 \, \mega\pascal; \quad G = 10^3 \, \mega\pascal; \quad \nu = 0 \end{split} \end{equation} In this section, the reference solution $\psi^{ref}$ is computed using the commercial software Abaqus \citep{abaqus}, in which the 2D problem domain $\Omega$ was discretized with a structured mesh of square bilinear elements CPS4. As discussed in Section \ref{s_intro}, boundary effects could significantly affect the structural element behavior. Aiming at limiting their influence on the reference solution, the \acp{BC} are imposed by requiring only a vanishing mean value of the cross-section displacements and rotation. In this manner, constrained cross-sections can warp and deform, but stress concentrations are limited in magnitude. Aiming at guaranteeing negligible numerical errors in the reference results, a sequence of analyses has been performed on the bi-layer cantilever, defining the element size $\delta$ according to the series $1/2^n$ for $n=0,1,2, \dots$. The procedure has been interrupted when the relative increase of the maximal displacement magnitude was smaller than $10^{-4}$, leading to $\delta = 0.25$. The transversal displacement $v^{ref} \left( x \right)$ and the shear strain $\gamma^{ref} \left( x \right)$ have been obtained by computing the mean value over the cross-section of the 2D transversal displacements $s_y^{ref} \left(x,y\right)$ and shear strains $\gamma_{xy}^{ref} \left(x,y\right)$, respectively. Conversely, the axial $N^{ref} \left( x \right)$ and shear $V^{ref} \left( x \right)$ internal forces have been obtained as the integrals over the cross-section of the stress components $\sigma_{x}^{ref} \left(x,y\right)$ and $\tau^{ref} \left(x,y\right)$, respectively.
The bending moment $M^{ref} \left( x \right)$ has been obtained as the integral over the cross-section of the axial stress $\sigma_{x}^{ref} \left(x,y\right)$ times the $y$ coordinate. Finally, according to Remark \ref{r_approx}, the axial displacement $u ^{ref} \left( x \right)$ and the rotation $\phi^{ref} \left( x \right)$ have been computed as the coefficients of the linear least squares with respect to $y$ of the axial displacements $s_x^{ref} \left(x,y\right)$. Similarly, the axial strain $\epsilon ^{ref} \left( x \right)$ and the curvature $\chi^{ref} \left( x \right)$ have been computed as the coefficients of the linear least squares with respect to $y$ of the strains $\epsilon_x^{ref} \left(x,y\right)$. \subsection{Cantilever} \label{s_cantilever} In the following, we set $l = 500 \, \milli\meter$, i.e., we choose $\lambda = 5$. This assumption leads to a beam geometry that is close to the well-known limit of validity of the \acp{FSDT} and, therefore, allows us to identify every potential critical issue of the proposed model. Assuming $\theta = 15 \, \deg$, the rotation of the constitutive relation \eqref{const_rel_rotation} leads to \begin{equation} \label{mech_prop1} \mu = 1.5853; \quad \kappa = 1.2750; \quad G_x = -4.2222 \cdot 10^3 \, \mega \pascal \end{equation} Figure \ref{f_layer+_disp} reports numerical results concerning the displacements.
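The rotated moduli in Equation \eqref{mech_prop1} can be reproduced with the standard plane-stress transformed-compliance relations. The sketch below is only an illustration (hypothetical variable names; it assumes $S_{11} = 1/E_{11}$, $S_{22} = 1/E_{22}$, $S_{66} = 1/G_{12}$, and $S_{12} = -\nu/E_{22}$, the sign convention consistent with the reported values; the definitions actually used in the paper are those of Appendix \ref{a_mat_coeff}):

```python
# Transformed compliances for a rotation theta of the material axes
# (illustrative sketch; assumes the standard plane-stress rotation
# formulas with S12 = -nu/E22, consistent with the reported values).
import math

E11, E22, G12, nu = 1.0e4, 5.0e2, 1.0e3, 0.0   # MPa, as in Equation (mech_prop)
theta = math.radians(15.0)

S11, S22, S66 = 1 / E11, 1 / E22, 1 / G12
S12 = -nu / E22
s, c = math.sin(theta), math.cos(theta)

# Rotated compliances: Sb11 -> 1/E_xx, Sb66 -> 1/G, Sb16 -> 1/G_x.
Sb11 = S11 * c**4 + (2 * S12 + S66) * s**2 * c**2 + S22 * s**4
Sb66 = 2 * (2 * S11 + 2 * S22 - 4 * S12 - S66) * s**2 * c**2 + S66 * (s**4 + c**4)
Sb16 = (2 * S11 - 2 * S12 - S66) * s * c**3 - (2 * S22 - 2 * S12 - S66) * s**3 * c

mu = E11 * Sb11          # ~1.5853
kappa = G12 * Sb66       # ~1.2750
Gx = 1 / Sb16            # ~-4222 MPa

print(mu, kappa, Gx)
```

With $\nu = 0$ the $S_{12}$ convention is immaterial here; the same formulas, with the material data of Section \ref{s_anal_res}, also reproduce the rotated moduli quoted there for $\theta = 45 \, \deg$.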
\begin{figure}[htbp] \centering \subfigure[axial displacement]{ \label{f_layer+_u} \includegraphics[width=0.48\textwidth]{theta+_pu.eps}}\\ \subfigure[rotation components]{ \label{f_layer+_fp} \includegraphics[width=0.48\textwidth]{theta+_pfp.eps}} \subfigure[rotation]{ \label{f_layer+_f} \includegraphics[width=0.48\textwidth]{theta+_pf.eps}} \subfigure[transversal displacement components]{ \label{f_layer+_vp} \includegraphics[width=0.48\textwidth]{theta+_pvp_def.eps}} \subfigure[transversal displacement]{ \label{f_layer+_v} \includegraphics[width=0.48\textwidth]{theta+_pvt_def.eps}} \caption{\footnotesize{ Bi-layer anisotropic cantilever ($\theta = 15 \, \deg$). Analysis of the generalized displacement components according to the proposed beam model (Figures \ref{f_layer+_fp} and \ref{f_layer+_vp}). Comparisons of the beam model $\psi^{mod}$ and the reference $\psi^{ref}$ solutions (Figures \ref{f_layer+_u}, \ref{f_layer+_f}, and \ref{f_layer+_v}).}} \label{f_layer+_disp} \end{figure} Figures \ref{f_layer+_fp} and \ref{f_layer+_vp} analyze the model solution, highlighting the deep influence of the material coupling term $G_{x}$ on the global structural response. In particular, Figure \ref{f_layer+_vp} highlights that the transversal displacement component $v_{c} \left(x\right)$ (see Equation \eqref{ode_sol}) has a magnitude similar to the shear deformation component $v_{T} \left(x\right)$ and contributes more than $10 \, \%$ to the total transversal displacement. Conversely, $v_{r} \left(x\right)$ influences the total displacement by less than $1 \, \%$. Figure \ref{f_layer+_fp} highlights that $\phi_{c} \left(x\right)$ contributes up to $10 \, \%$ of the total cross-section rotation. Figures \ref{f_layer+_f} and \ref{f_layer+_v} reveal the good accuracy of the proposed model: relative errors are smaller than $2 \, \%$ for both rotation and transversal displacement.
Due to the considered loads, $N^{mod} \left(x\right) = 0$ and, due to the \acp{BC}, also $u_{EB} \left(x\right) = 0$ (see Equation \eqref{ode_sol}). As a consequence, $u^{mod} \left(x\right) = u_{c} \left(x\right)$, i.e., the axial displacement is uniquely controlled by the material coupling term $G_{x}$ and the transversal internal force $V \left(x\right)$. Figure \ref{f_layer+_u} shows that the beam model correctly predicts a non-vanishing distribution of axial displacement, although the error is close to $8 \, \%$. Nevertheless, the axial displacement is two orders of magnitude smaller than the transversal one, so it does not affect the errors evaluated on the total displacement $\pmb{s} \left(x,y\right)$. Figure \ref{f_theta+_defo} reports numerical results concerning the generalized strains. \begin{figure}[htbp] \centering \subfigure[axial strain components]{ \label{f_theta+_ep} \includegraphics[width=0.48\textwidth]{theta+_pep.eps}} \subfigure[axial strain]{ \label{f_theta+_e} \includegraphics[width=0.48\textwidth]{theta+_pe.eps}} \subfigure[curvature components]{ \label{f_theta+_cp} \includegraphics[width=0.48\textwidth]{theta+_pcp.eps}} \subfigure[curvature]{ \label{f_theta+_c} \includegraphics[width=0.48\textwidth]{theta+_pc.eps}} \subfigure[shear strain components]{ \label{f_theta+_gp} \includegraphics[width=0.48\textwidth]{theta+_pgp_def.eps}} \subfigure[shear strain]{ \label{f_theta+_g} \includegraphics[width=0.48\textwidth]{theta+_pgt_def.eps}} \caption{\footnotesize{ Bi-layer anisotropic cantilever ($\theta = 15 \, \deg$). Analysis of the generalized strain components according to the proposed beam model (Figures \ref{f_theta+_ep}, \ref{f_theta+_cp}, and \ref{f_theta+_gp}).
Comparisons of the beam model $\psi^{mod}$ and the reference $\psi^{ref}$ solutions (Figures \ref{f_theta+_e}, \ref{f_theta+_c}, and \ref{f_theta+_g}).}} \label{f_theta+_defo} \end{figure} Figure \ref{f_theta+_ep} shows that the axial strain is uniquely attributed to the transversal internal force $V \left(x\right)$ through the coefficient $\epsilon_V$, as already discussed above (see also Figure \ref{f_layer+_u}). The comparison with the reference solution (Figure \ref{f_theta+_e}) confirms the accuracy of the estimate provided by the beam model. Nevertheless, the reference solution reveals the presence of some higher-order effects close to the clamp that cannot be captured by the proposed beam model (see Remark \ref{r_bondary_effects}) and may also be responsible for the errors on the axial displacements. Figure \ref{f_theta+_cp} shows that the transversal internal force $V \left(x\right)$ produces non-negligible curvature, up to $10 \, \%$ of the total. Similarly, Figure \ref{f_theta+_gp} shows that the bending moment $M \left(x\right)$ deeply influences the shear strain, which has a non-linear distribution even though the transversal internal force $V\left(x\right)$ is linear. In particular, the bending moment $M \left(x\right)$ produces non-negligible shear strain, up to $30 \, \%$ of the total. For both shear deformation and curvature, Figures \ref{f_theta+_c} and \ref{f_theta+_g} demonstrate that the generalized strains predicted by the beam model are in extremely good agreement with the reference solution. Only near the clamp does the reference solution reveal higher-order effects that are not handled by the beam model. Figures \ref{f_layer+_sx} and \ref{f_layer+_tau} report cross-section stress distributions at $x = l/2 = 250 \, \milli\meter$ and $x = 3l/4 = 375 \, \milli\meter$.
\begin{figure}[htbp] \centering \subfigure[axial stress components, $x= 250$]{ \label{f_layer+_sx2_par} \includegraphics[width=0.48\textwidth]{theta+_psx2_def_par.eps}} \subfigure[axial stress, $x= 250$]{ \label{f_layer+_sx2} \includegraphics[width=0.48\textwidth]{theta+_psx2_def.eps}} \subfigure[axial stress components, $x= 375$]{ \label{f_layer+_sx3_par} \includegraphics[width=0.48\textwidth]{theta+_psx3_def_par.eps}} \subfigure[axial stress, $x= 375$]{ \label{f_layer+_sx3} \includegraphics[width=0.48\textwidth]{theta+_psx3_def.eps}} \caption{\footnotesize{ Bi-layer anisotropic cantilever ($\theta = 15 \, \deg$). Axial stress distributions evaluated at $x=250 \, \milli\meter$ (Figures \ref{f_layer+_sx2_par} and \ref{f_layer+_sx2}), and $x=375 \, \milli\meter$ (Figures \ref{f_layer+_sx3_par} and \ref{f_layer+_sx3}). Analysis of the axial stress components according to the proposed beam model (Figures \ref{f_layer+_sx2_par} and \ref{f_layer+_sx3_par}) and comparisons of the beam model $\psi^{mod}$ and the reference $\psi^{ref}$ solutions (Figures \ref{f_layer+_sx2} and \ref{f_layer+_sx3}).}} \label{f_layer+_sx} \end{figure} \begin{figure}[htbp] \centering \subfigure[shear stress components, $x= 250$]{ \label{f_layer+_t2_par} \includegraphics[width=0.48\textwidth]{theta+_pt2_def_par.eps}} \subfigure[shear stress, $x= 250$]{ \label{f_layer+_t2} \includegraphics[width=0.48\textwidth]{theta+_pt2_def.eps}} \subfigure[shear stress components, $x= 375$]{ \label{f_layer+_t3_par} \includegraphics[width=0.48\textwidth]{theta+_pt3_def_par.eps}} \subfigure[shear stress, $x= 375$]{ \label{f_layer+_t3} \includegraphics[width=0.48\textwidth]{theta+_pt3_def.eps}} \caption{\footnotesize{ Bi-layer anisotropic cantilever ($\theta = 15 \, \deg$). Shear stress distributions evaluated at $x=250 \, \milli\meter$ (Figures \ref{f_layer+_t2_par} and \ref{f_layer+_t2}), and $x=375 \, \milli\meter$ (Figures \ref{f_layer+_t3_par} and \ref{f_layer+_t3}).
Analysis of the shear stress components according to the proposed beam model (Figures \ref{f_layer+_t2_par} and \ref{f_layer+_t3_par}) and comparisons of the beam model $\psi^{mod}$ and the reference $\psi^{ref}$ solutions (Figures \ref{f_layer+_t2} and \ref{f_layer+_t3}).}} \label{f_layer+_tau} \end{figure} Figures \ref{f_layer+_sx2_par} and \ref{f_layer+_sx3_par} highlight that the axial stress depending on the transversal internal force $d_{\sigma_x}^V \left(y\right) V \left(x\right)$ is far from negligible and can increase the magnitude of the maximal stress by up to $30 \, \%$. Conversely, the effect of the axial stress depending on the transversal load $d_{\sigma_x}^{q} \left(y\right) q$ is less significant. The comparison with the reference solution (Figures \ref{f_layer+_sx2} and \ref{f_layer+_sx3}) reveals that the stress recovery developed in Section \ref{s_stress_rec} provides accurate estimates of the stress magnitude, with relative errors rarely larger than $10 \, \%$. In particular, the stress recovery correctly predicts the jump of the axial stress at the interlayer surface. Conversely, the reference solution reveals higher-order effects near the free end of the cantilever that the beam model is not able to capture (see Remark \ref{r_bondary_effects}); these may locally increase the relative errors up to $40 \, \%$. Figures \ref{f_layer+_t2_par} and \ref{f_layer+_t3_par} highlight that the shear stress depending on the transversal load $d_{\tau}^q \left(y\right) q$ is not negligible and can even produce a local minimum at the interlayer surface. The comparison with the reference solution (Figures \ref{f_layer+_t2} and \ref{f_layer+_t3}) reveals that the proposed stress recovery is in good agreement with the reference solution, with relative errors rarely larger than $5 \, \%$.
In order to complete the discussion of the proposed model capabilities, Tables \ref{t_u_errors}, \ref{t_phi_errors}, and \ref{t_v_errors} compare the solutions obtained using different models. The maximal displacements $\psi \left(l\right)$, with $\psi = u, \phi, v$, are evaluated using the 2D \ac{FE} $\psi^{ref} \left(l\right)$, the standard \ac{EB} beam model $\psi^{EB}\left(l\right)$, and the proposed beam model $\psi^{mod}\left(l\right)$. For the transversal displacement only, the Timoshenko beam model is also considered, since its solution differs from the \ac{EB} one ($v^{EB}\left(l\right) + v^{T}\left(l\right)$). The relative errors are computed as \begin{equation} \label{rel_error} e^{i} = \frac{\left| \psi^{i} \left(l\right) - \psi^{ref} \left(l\right) \right|}{ \left| \psi^{ref} \left(l\right) \right|} \mbox{ with } i = EB, T, mod \end{equation} Finally, numerical results and relative errors are provided for $\lambda = 5, \, 10, \mbox{ and } 20$ and $\theta = \pm 15 \, \deg$. \begin{table}[htbp] \centering \begin{tabular}{l|r|r|rr|rr|} $\lambda$ & $\theta \left[\deg\right]$ & $u^{ref} \left[\milli\meter\right]$ & $u^{EB} \left[\milli\meter\right]$ & $u^{mod} \left[\milli\meter\right]$ & $e^{EB}_u \left[\%\right]$ & $e^{mod}_u \left[\%\right]$ \\ \hline 5 & +15 & 1.173e--1 & 0.000e+0 & 1.078e--1 & 100 & 8.10 \\ 5 & --15 & --1.125e--1 & 0.000e+0 & --1.078e--1 & 100 & 4.18 \\ 10 & +15 & 4.612e--1 & 0.000e+0 & 4.311e--1 & 100 & 6.53 \\ 10 & --15 & --4.504e--1 & 0.000e+0 & --4.311e--1 & 100 & 4.29 \\ 20 & +15 & 1.828e+0 & 0.000e+0 & 1.724e+0 & 100 & 5.69 \\ 20 & --15 & --1.803e+0 & 0.000e+0 & --1.724e+0 & 100 & 4.38 \end{tabular} \caption{\footnotesize{ Bi-layer anisotropic cantilever.
Maximal axial displacement $u \left(l\right)$ evaluated according to \ac{EB} $u^{EB}$ and proposed $u^{mod}$ beam models and relative errors.}} \label{t_u_errors} \end{table} \begin{table}[htbp] \centering \begin{tabular}{l|r|r|rr|rr|} $\lambda$ & $\theta \left[\deg\right]$ & $\phi^{ref} \left[\rad\right]$ & $\phi^{EB} \left[\rad\right]$ & $\phi^{mod} \left[\rad\right]$ & $e^{EB}_{\phi} \left[\%\right]$ & $e^{mod}_{\phi} \left[\%\right]$ \\ \hline 5 & +15 & --3.569e--2 & --3.189e--2 & --3.513e--2 & 10.6 & 1.57 \\ 5 & --15 & --2.841e--2 & --3.189e--2 & --2.864e--2 & 12.2 & 0.81 \\ 10 & +15 & --2.706e--1 & --2.551e--1 & --2.681e--1 & 5.73 & 0.92 \\ 10 & --15 & --2.410e--1 & --2.551e--1 & --2.421e--1 & 5.85 & 0.46 \\ 20 & +15 & --2.103e+0 & --2.041e+0 & --2.093e+0 & 2.95 & 0.48 \\ 20 & --15 & --1.984e+0 & --2.041e+0 & --1.989e+0 & 2.87 & 0.25 \end{tabular} \caption{\footnotesize{ Bi-layer anisotropic cantilever. Maximal rotations $\phi\left(l\right)$ evaluated according to \ac{EB} $\phi^{EB}$ and proposed $\phi^{mod}$ beam models and relative errors.}} \label{t_phi_errors} \end{table} \begin{table}[htbp] \centering \begin{tabular}{l|r|r|rrr|rrr|} $\lambda$ & $\theta \left[\deg\right]$ & $v^{ref} \left[\milli\meter\right]$ & $v^{EB} \left[\milli\meter\right]$ & $v^{EB} + v^{T} \left[\milli\meter\right]$ & $v^{mod} \left[\milli\meter\right]$ & $e^{EB}_v \left[\%\right]$ & $e^{EB+T}_v \left[\%\right]$ & $e^{mod}_v \left[\%\right]$ \\ \hline 5 & +15 & --1.545e+1 & --1.196e+1 & --1.364e+1 & --1.516e+1 & 22.6 & 11.7 & 1.88 \\ 5 & --15 & --1.177e+1 & --1.196e+1 & --1.364e+1 & --1.191e+1 & 1.61 & 15.9 & 1.19 \\ 10 & +15 & --2.130e+2 & --1.913e+2 & --1.981e+2 & --2.106e+2 & 10.2 & 7.00 & 1.13 \\ 10 & --15 & --1.834e+2 & --1.913e+2 & --1.981e+2 & --1.847e+2 & 4.31 & 8.02 & 0.71 \\ 20 & +15 & --3.210e+3 & --3.061e+3 & --3.088e+3 & --3.190e+3 & 4.64 & 3.80 & 0.62 \\ 20 & --15 & --2.972e+3 & --3.061e+3 & --3.088e+3 & --2.982e+3 & 2.99 & 3.90 & 0.34 \end{tabular} 
\caption{\footnotesize{ Bi-layer anisotropic cantilever. Maximal transversal displacements $v\left(l\right)$ evaluated according to \ac{EB} $v^{EB}$, Timoshenko $v^{T}$, and proposed $v^{mod}$ beam models and relative errors.}} \label{t_v_errors} \end{table} On the one hand, the relative errors decrease with increasing slenderness for all the considered beam models, consistently with standard beam model assumptions. On the other hand, it is worth highlighting that the \ac{EB} and Timoshenko beam models lead to errors that are often greater than $10 \, \%$ and are therefore not acceptable for most engineering applications. Conversely, the proposed beam model leads to errors that are usually smaller than $5 \, \%$, reaching an accuracy adequate for most engineering applications. Furthermore, Table \ref{t_v_errors} highlights that the Timoshenko beam model does not always perform better than \ac{EB}, even for thick beams. In particular, for $\lambda = 5$ and $\theta = - 15 \, \deg$, the maximal transversal displacement predicted by the proposed beam model qualitatively coincides with the \ac{EB} solution, i.e., $v^{mod} \left(l\right) \approx v^{EB} \left(l\right)$. On the one hand, such a result highlights (i) the deep influence of the fiber direction on the structural element stiffness (see Remark \ref{r_D_even_odd}) and (ii) the inappropriateness of beam models developed for isotropic structural elements. On the other hand, the extremely low errors obtained for both positive and negative $\theta$ confirm the effectiveness of the proposed model in handling all the peculiar aspects of anisotropic beams. Figure \ref{f_percentage} reports the weight of the four components $v_{EB} \left(l\right)$, $v_{T} \left(l\right)$, $v_{c} \left(l\right)$, and $v_{r} \left(l\right)$ on the total transversal displacement as a function of $\lambda$.
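As a numerical cross-check of Equation \eqref{rel_error}, the first row of Table \ref{t_v_errors} ($\lambda = 5$, $\theta = +15 \, \deg$) can be reproduced directly (a sketch with hypothetical variable names):

```python
# Relative errors e^i = |psi_i(l) - psi_ref(l)| / |psi_ref(l)| for the
# first row of the transversal-displacement table (lambda = 5, theta = +15 deg).
v_ref = -1.545e1   # 2D FE reference solution
v_EB  = -1.196e1   # Euler-Bernoulli
v_EBT = -1.364e1   # Euler-Bernoulli plus the Timoshenko shear term
v_mod = -1.516e1   # proposed beam model

def rel_err(v, v_ref=v_ref):
    return abs(v - v_ref) / abs(v_ref)

print([round(100 * rel_err(v), 2) for v in (v_EB, v_EBT, v_mod)])
# -> [22.59, 11.72, 1.88], i.e. the tabulated 22.6, 11.7, and 1.88 %
```

The same one-liner applied to the other rows reproduces the remaining tabulated errors to the quoted precision.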
\begin{figure}[htbp] \centering \includegraphics[width=0.5\columnwidth]{theta+_perc.eps} \caption{\footnotesize{ Bi-layer anisotropic cantilever ($\theta = 15 \, \deg$). Incidence of the maximal transversal displacement components $v_{EB} \left(l\right)$, $v_{T} \left(l\right)$, $v_{c} \left(l\right)$, and $v_{r} \left(l\right)$ evaluated for varying $\lambda$.}} \label{f_percentage} \end{figure} The analysis is limited to the geometry and material mechanical properties introduced at the beginning of Section \ref{s_num_res}, but it highlights several effects of the anisotropy on the structural response of beams. The component $v_{c} \left(l\right)$, depending on the material coupling term $G_x$, is always larger than the shear-dependent component $v_{T} \left(l\right)$. As an example, for $\lambda = 20$, $v_{c} \left(l\right) / v \left(l\right) > 3 \, \%$ whereas $v_{T} \left(l\right) / v \left(l\right) < 1 \, \%$. Furthermore, for $\lambda = 80$, $v_{c} \left(l\right) / v\left(l\right) \approx 1 \, \%$ whereas $v_{T} \left(l\right) / v \left(l\right) \approx 0.05 \, \%$. As a consequence, it is possible to conclude that the material coupling contribution $v_{c} \left(x\right)$ can be significant even for slender structural elements, for which the shear contribution is instead negligible. Conversely, the transversal displacement $v_{r} \left(l\right)$ always contributes to the total displacement by less than $1 \, \%$. \subsection{Doubly-clamped beam} \label{s_clamp_clamp} This section considers the statically indeterminate multilayer beam depicted in Figure~\ref{f_cantilever}, aiming at confirming the capabilities of the proposed beam model in effectively estimating anisotropic beam stiffness. The analytical solution reported in Equation \eqref{ode_sol} is still valid.
Conversely, the following \acp{BC} have to be considered: \begin{equation} u \left( 0 \right) = u \left( l \right) = 0; \quad \phi \left( 0 \right) = \phi \left( l \right) = 0; \quad v \left( 0 \right) = v \left( l \right) = 0 \end{equation} The analytical expressions for the coefficients $C_i$ for $i = 1 \dots 6$ turn out to be extremely complex and, for brevity, they are not reported. Nevertheless, it is worth noticing that all the coefficients depend on all the mechanical properties $E_{11}$, $G_{12}$, and $G_x$. As a consequence, the subdivision of displacements into components $\left(\cdot\right)_{EB}$, $\left(\cdot\right)_{T}$, $\left(\cdot\right)_{c}$, and $\left(\cdot\right)_{r}$ introduced in Equation \eqref{ode_sol} is no longer meaningful and it is not considered in the following. We set $l = 1000 \, \milli\meter$ and we use the geometrical and mechanical properties reported in Equations \eqref{mech_prop} and \eqref{mech_prop1}. Figure \ref{f_clamp+_int_for} reports numerical results concerning the distribution of internal forces $N \left(x\right)$, $M \left(x\right)$, and $V \left(x\right)$. \begin{figure}[htbp] \centering \subfigure[axial internal force]{ \label{f_clamp+_N} \includegraphics[width=0.48\textwidth]{clamp+_pn.eps}} \subfigure[bending moment]{ \label{f_clamp+_M} \includegraphics[width=0.48\textwidth]{clamp+_pm.eps}} \subfigure[transversal internal force]{ \label{f_clamp+_V} \includegraphics[width=0.48\textwidth]{clamp+_pvv.eps}} \caption{\footnotesize{ Doubly clamped bi-layer anisotropic beam. Analysis of internal forces. Comparisons of the beam model $\psi^{mod}$ and the reference $\psi^{ref}$ solutions.}} \label{f_clamp+_int_for} \end{figure} Numerical results highlight a non-trivial effect of the material anisotropy. The distribution of internal forces is non-symmetric and the reactions at the right-hand side clamp are greater than the ones at the left-hand side clamp, although \acp{BC} and load are symmetric with respect to the beam mid-span.
Once more, the comparison with the reference solution reveals the high accuracy of the proposed beam model, which predicts both the bending moment and the transversal internal force with negligible errors. Finally, the proposed beam model correctly predicts a non-vanishing, constant distribution of the axial internal force, whose magnitude is nevertheless negligible compared with the transversal internal force and the bending moment. Tables \ref{t_N_errors}, \ref{t_M_errors}, and \ref{t_V_errors} report constraint reactions (i.e., $N\left(x\right)$, $M\left(x\right)$, and $V\left(x\right)$ for $x = 0, l$) evaluated using 2D \ac{FE} $\psi^{ref} \left(l\right)$, standard \ac{EB} beam model $\psi^{EB}\left(l\right)$, and the proposed beam model $\psi^{mod}\left(l\right)$ for $\lambda = 5, \, 10, \mbox{ and } 20$. Relative errors are computed according to Equation \eqref{rel_error}. \begin{table}[htbp] \centering \begin{tabular}{l|r|r|rr|rr|} $\lambda$ & $x$ & $N^{ref} \left[\newton\right]$ & $N^{EB} \left[\newton\right]$ & $N^{mod} \left[\newton\right]$ & $e^{EB}_N \left[\%\right]$ & $e^{mod}_N \left[\%\right]$ \\ \hline 5 & 0 & 7.356e+0 & 0.000e+0 & 8.742e+0 & 100 & 18.8 \\ 5 & $l$ & 7.356e+0 & 0.000e+0 & 8.742e+0 & 100 & 18.8 \\ 10 & 0 & 8.959e+0 & 0.000e+0 & 1.092e+1 & 100 & 21.9 \\ 10 & $l$ & 8.959e+0 & 0.000e+0 & 1.092e+1 & 100 & 21.9 \\ 20 & 0 & 9.472e+0 & 0.000e+0 & 1.165e+1 & 100 & 23.0 \\ 20 & $l$ & 9.472e+0 & 0.000e+0 & 1.165e+1 & 100 & 23.0 \end{tabular} \caption{\footnotesize{ Doubly clamped bi-layer anisotropic beam.
Axial constraint reactions $N \left(0\right)$ and $N \left(l\right)$ evaluated according to \ac{EB} $N^{EB}$ and proposed $N^{mod}$ beam models and relative errors.}} \label{t_N_errors} \end{table} \begin{table}[htbp] \centering \begin{tabular}{l|r|r|rr|rr|} $\lambda$ & $x$ & $M^{ref} \left[\newton \meter\right]$ & $M^{EB} \left[\newton \meter\right]$ & $M^{mod} \left[\newton \meter\right]$ & $e^{EB}_{M} \left[\%\right]$ & $e^{mod}_{M} \left[\%\right]$ \\ \hline 5 & 0 & --1.732e+4 & --2.083e+4 & --1.794e+4 & 20.3 & 3.58 \\ 5 & $l$ & --2.482e+4 & --2.083e+4 & --2.415e+4 & 16.1 & 2.70 \\ 10 & 0 & --7.458e+4 & --8.333e+4 & --7.583e+4 & 11.7 & 1.68 \\ 10 & $l$ & --9.244e+4 & --8.333e+4 & --9.137e+4 & 9.86 & 1.16 \\ 20 & 0 & --3.138e+5 & --3.333e+5 & --3.170e+5 & 6.21 & 1.02 \\ 20 & $l$ & --3.526e+5 & --3.333e+5 & --3.502e+5 & 5.47 & 0.68 \end{tabular} \caption{\footnotesize{ Doubly clamped bi-layer anisotropic beam. Bending moment constraint reactions $M \left(0\right)$ and $M \left(l\right)$ evaluated according to \ac{EB} $M^{EB}$ and proposed $M^{mod}$ beam models and relative errors.}} \label{t_M_errors} \end{table} \begin{table}[htbp] \centering \begin{tabular}{l|r|r|rr|rr|} $\lambda$ & $x$ & $V^{ref} \left[\newton\right]$ & $V^{EB} \left[\newton\right]$ & $V^{mod} \left[\newton\right]$ & $e^{EB}_V \left[\%\right]$ & $e^{mod}_V \left[\%\right]$ \\ \hline 5 & 0 & --2.333e+2 & --2.500e+2 & --2.376e+2 & 7.16 & 1.84 \\ 5 & $l$ & --2.645e+2 & --2.500e+2 & --2.624e+2 & 5.48 & 0.79 \\ 10 & 0 & --4.817e+2 & --5.000e+2 & --4.845e+2 & 3.80 & 0.58 \\ 10 & $l$ & --5.184e+2 & --5.000e+2 & --5.155e+2 & 3.55 & 0.56 \\ 20 & 0 & --9.921e+2 & --1.000e+3 & --9.834e+2 & 0.80 & 0.88 \\ 20 & $l$ & --1.026e+3 & --1.000e+3 & --1.017e+3 & 2.53 & 0.88 \end{tabular} \caption{\footnotesize{ Doubly clamped bi-layer anisotropic beam.
Shear constraint reactions $V \left(0\right)$ and $V \left(l\right)$ evaluated according to \ac{EB} $V^{EB}$ and proposed $V^{mod}$ beam models and relative errors.}} \label{t_V_errors} \end{table} As already remarked in Section \ref{s_cantilever}, the \ac{EB} beam model leads to errors that are often greater than $10 \, \%$, resulting in estimates that are too coarse for most engineering applications. Conversely, the proposed beam model leads to errors that are generally three to six times smaller and always below $5 \, \%$. Only Table \ref{t_N_errors} highlights that the proposed model estimates the axial internal force $N \left(x\right)$ with errors over $20 \, \%$. Nevertheless, since the magnitude of the axial internal force is approximately 50 times smaller than the transversal one, the relative error on the axial internal force might not have a deep influence on the global response of the structural element. \section{Conclusions} \label{s_conclusion} This paper has proposed a simple beam model that effectively handles the influence of anisotropy on the beam constitutive relations and the stress distribution. The independent variables of the model are the internal forces and the standard Timoshenko kinematic parameters. Despite its simplicity, the beam model has allowed us to highlight the following peculiarities of anisotropic beams. \begin{enumerate} \item Material anisotropy leads the transversal internal force to contribute up to $30 \, \%$ of the magnitude of the axial stress, deeply affecting also the beam strength, which is not explicitly considered in this paper. \item In beam constitutive relations, non-vanishing out-of-diagonal terms that relate transversal internal force with curvature (and bending moment with shear strain) exist and deeply influence the response of the structural element.
\item In addition to the standard bending contribution (proportional to the cube of the beam slenderness) and the shear one (proportional to the beam slenderness), a third term, depending on the material coupling term and proportional to the square of the beam slenderness, contributes to the transversal displacement. \item The contribution depending on the material coupling terms can be larger than the contribution given by shear deformation and it may be non-negligible for length-to-thickness ratios greater than fifty. \end{enumerate} A systematic comparison with analytical results and 2D \ac{FE} solutions, obtained using highly refined meshes, demonstrates the effectiveness of the proposed modeling approach. In general, the proposed beam model has a computational cost similar to that of the simplest beam models used in engineering practice and it estimates significant displacements and internal forces with relative errors usually smaller than $5 \, \%$. Conversely, coarse adaptations of beam models developed for isotropic structural elements may lead to errors greater than $20 \, \%$ in the prediction of both internal forces and displacements. Furthermore, the analysis of stress distributions demonstrates that stress recovery tools developed for isotropic structural elements are no longer effective for anisotropic ones, and ad-hoc routines have to be developed. The main limitations of the proposed model are the assumptions on kinematics, which do not allow the description of higher-order effects such as cross-section warping and distortion, as well as of phenomena that occur in the neighborhood of constraints and concentrated loads. Future research will include the application of the proposed modeling strategy to higher-order planar beams and its generalization to 3D beams and plates. \section{Acknowledgments} This work was funded by the Austrian Science Fund (FWF) [M 2009-N32]. F. Auricchio and S.
Morganti would like to acknowledge the strategic theme of the University of Pavia ``Virtual Modeling and Additive Manufacturing for Advanced Materials (3D@UniPV)''.
\subsection*{Results \& Discussion} \subsubsection*{Hodgkin-Huxley model, multiple solutions and their sensitivity to parametric variations} The dispersion of parameters measured by Hodgkin and Huxley is summarized in Figure~\ref{fig:1} -- a collage of data and images from the original report, adapted to modern notation. The table (top panel) indicates ranges of cellular-level structural parameters. The term `structural' is used as these parameters are fully determined by physical measures that are characteristic of the cell. These include membrane surface area, the number of voltage-dependent conductances, and relevant equilibrium potentials. The graphs of Figure~\ref{fig:1} depict protein-level kinetic parameters, expressed as six transition rate functions superposed with data points. The mathematical expressions of these six rate functions -- each of which describes the change of transition rate with membrane voltage ($V$) -- involve more than ten different `hidden' parameters. Note the dispersion of points around the fitted rate functions, depicting repeated measurements in different axons. To simplify matters, the following analysis ignores variations in equilibrium potentials and focuses on ten parameters: membrane capacitance ($C_{\text{m}}$), maximal sodium, potassium and leak conductances ($\bar{g}_{\text{Na}}$, $\bar{g}_{\text{K}}$ and $\bar{g}_{\text{leak}}$, respectively), and the six transition rates underlying the opening and closing of `gates' -- $\alpha_{m}(v)$, $\beta_{m}(v)$, $\alpha_{h}(v)$, $\beta_{h}(v)$, $\alpha_{n}(v)$ and $\beta_{n}(v)$ -- as explained below. We begin by creating a straw man, considering marginal (i.e. independent) uniformly distributed variations for all ten parameters. Values of parameters are expressed in terms of their scaling relative to the values chosen by Hodgkin and Huxley (1952).
Thus, for instance, $<\bar{g}_{\text{Na}}>$ = 1.2 stands for $\bar{g}_{\text{Na}}$ = 144 mS/cm$^{2}$ (i.e., x1.2 the value chosen by Hodgkin and Huxley; see Figure~\ref{fig:1}). Transition rates are similarly scaled by multiplication. Thus, for instance, the expression $<\beta_{n}(v)>$ = 0.75 stands for $0.75 \beta_{n}(v)$. The shaded areas added to the graphs of Figure~\ref{fig:1} suggest that such a linear scaling of transition rate functions is justified, as it captures most of the underlying variance. \begin{figure}[] \centering \includegraphics[width=12cm,height=9cm]{figure2.pdf} \caption{\protect\rule{0ex}{20ex}Realizations (10,000) of a full Hodgkin-Huxley model; each realization is uniquely defined by a vector of ten parameters, expressed in terms of their scaling relative to the values chosen by Hodgkin and Huxley. Responses are classified to three excitability statuses (different colors): excitable (2225), nonexcitable (4884) and oscillatory (2891). Subsets of the results (200 for each excitability status) are presented in polar plots: Given a list of ten scaling parameters, the value of each parameter is depicted along its own (angular) axis, and the entire vector is depicted as a line that connects the ten scaling parameters. The standard Hodgkin-Huxley model would be a line passing through 1 for all scaling parameters. Mean vectors are depicted by dashed lines. The histograms in the bottom-right panel depict Euclidean distance between vectors of scaled parameters \textit{within} each of the three excitability classes.} \label{fig:2} \end{figure} Hence, a realization of a Hodgkin-Huxley model is defined by a list of ten scaling parameters: {$<\alpha_{n}(v)>$, $<\beta_{n}(v)>$, $<\alpha_{m}(v)>$, $<\beta_{m}(v)>$, $<\alpha_{h}(v)>$, $<\beta_{h}(v)>$, $<C_{\text{m}}>$, $<\bar{g}_{\text{leak}}>$, $<\bar{g}_{\text{K}}>$, $<\bar{g}_{\text{Na}}>$}. 
Assuming independence of the ten parameters, we randomly generated 10,000 such lists of scaling parameters with values in the range [0.75, 1.25], and numerically instantiated each of them in a full Hodgkin-Huxley model. The resulting behaviors may be classified as nonexcitable (i.e. passive), excitable (i.e. a membrane that generates one spike in response to a short above-threshold stimulus), and oscillatory (i.e. pace-making). Resting membrane potentials of the excitable and non-excitable outcomes did not differ much over the $\pm$25\% deviation from the original Hodgkin and Huxley parameters, being -64.5$\pm$1.4 mV and -66.2$\pm$1.5 mV (respectively). To promote effective visualization of the ten-dimensional space, results are presented in the form of polar plots (Figure~\ref{fig:2}): Given a vector of ten scaling parameters, the value of each parameter is depicted along its own (angular) axis, and the entire vector is depicted as one line that connects the ten scaling parameters. The standard Hodgkin-Huxley model would be a line passing through 1 for all scaling parameters. Three separate polar plots (panels A, B and C) show that practically all three classes of excitability status (depicted by three different colors) are distributed throughout the 10-dimensional parameter space. Mean vectors in each of these cases are depicted by black dashed lines. For comparison, these three mean vectors and their corresponding standard deviations are plotted together in the polar plot of panel D. The Euclidean distance between the mean vector of excitable and the mean vectors of the other two solutions (non-excitable or oscillatory) is ca. 0.15, similar to the standard deviation of distances within each of them (inset to panel D).
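The sampling step can be sketched as follows; this is an illustrative reconstruction rather than the original simulation code (the parameter ordering and random seed are our own choices, and the full Hodgkin-Huxley integration that classifies each vector is not reproduced here):

```python
import numpy as np

# Sketch of the sampling step: 10,000 vectors of ten scaling parameters, each
# drawn independently and uniformly from [0.75, 1.25]. The all-ones vector
# corresponds to the standard Hodgkin-Huxley model. The seed is arbitrary.
rng = np.random.default_rng(seed=0)
PARAM_NAMES = ["alpha_n", "beta_n", "alpha_m", "beta_m", "alpha_h", "beta_h",
               "C_m", "g_leak", "g_K", "g_Na"]

samples = rng.uniform(0.75, 1.25, size=(10_000, len(PARAM_NAMES)))

# Euclidean distances between scaled-parameter vectors and the standard model
# (cf. the within-class distance histograms of panel D).
hh_standard = np.ones(len(PARAM_NAMES))
distances = np.linalg.norm(samples - hh_standard, axis=1)
print(samples.shape)  # (10000, 10)
```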
In other words, assuming complete independence of the parameters within the $\pm$25\% range of parametric variation, almost any randomly chosen vector of Hodgkin-Huxley parameters, regardless of its outcome (non-excitable, excitable or oscillatory), may be `pushed' to display any other excitability status by a minor manipulation of parameters. \subsubsection*{Lower dimension Hodgkin-Huxley parameter space} We focus on the conditions for transition between excitable and non-excitable statuses. Several schematic momentary current-voltage relations of excitable membranes, during an action potential, are plotted in Figure~\ref{fig:3ab}A. Grossly speaking, the lower curve depicts current-voltage relations sampled by voltage-clamp steps from deeply hyperpolarized holding potential. The upper curve depicts current-voltage relations sampled by voltage-clamp steps from a relatively depolarized holding potential. During an action potential, where membrane voltage is a dynamical free variable, current-voltage relations slowly shift between these two extremes due to an evolving voltage-dependent restoring force, mediated by the opening of potassium channels and inactivation of sodium channels. The slow change in restoring force gives rise to a current-voltage closed trajectory depicted in Figure~\ref{fig:3ab}A (black continuous line). \begin{figure}[] \centering \includegraphics[width=12cm,height=9cm]{figure3ab.pdf} \caption{\protect\rule{0ex}{20ex}(A) Idealized momentary current-voltage relations at different ratios of available sodium and potassium conductances (modified versions of Figures 11.9 \& 11.10 in Jack, Noble and Tsien, 1975). In different phases of the action potential, different momentary current-voltage relations determine the dynamics. The black continuous line depicts the resulting current-voltage trajectory during an action potential. 
Inset: a magnified version of the area at threshold (indicated in the main figure by a circle), about which the system is linearized. (B) histograms of the three excitability statuses, constructed from the data of Figure \ref{fig:2} (10000 Hodgkin-Huxley realizations), for each of the scaling parameters. Note that all parameters are freely fluctuating, simultaneously, over $\pm$25\%. } \label{fig:3ab} \end{figure} The general differential equation of the system is $C_{\text{m}} d^{2}V/dt^{2} + dI_{i}/dt = 0$. As pointed out by Jack, Noble and Tsien \cite{Jack1975} (Ch. 11), linearization about the threshold potential (inset to Figure~\ref{fig:3ab}A) leads to an expression of $dI_{i}/dt$ in terms of the momentary conductance ($-g_{\text{fast}}$) at threshold and the time constant ($\tau_{\text{slow}}$) for evolving restoring force. The condition for instability near threshold, one of the solutions to this equation (expressed in conductance units), is $-g_{\text{fast}}>C_{\text{m}}/\tau_{\text{slow}}$. Note that these lumped entities ($-g_{\text{fast}}$, $C_{\text{m}}$ and $\tau_{\text{slow}}$) may naturally be classified into the above groups of structural and kinetic parameters: $C_{\text{m}}$ is obviously structural; likewise, the fast conductance ($-g_{\text{fast}}$) that depends on the relative contribution of maximal sodium conductance. In contrast, the time scale for introduction of restoring force ($\tau_{\text{slow}}$) is a kinetic parameter because it depends on the actual transition rate functions governing the gating of sodium and potassium channels. Inspired by the above and related mathematical reductions of excitability \cite{Abbott1990,FitzHugh1961,Izhikevich2003}, we turned to the data of Figure~\ref{fig:2} in search for these two dimensions, expressed in terms of Hodgkin-Huxley parameters. To this aim, we constructed count histograms of the three excitability statuses for each of the scaling parameters (Figure~\ref{fig:3ab}B). 
These histograms show that the most critical determinants of excitability status are the rates of opening and closure of sodium and potassium channels ($\alpha_{m}(v)$, $\beta_{m}(v)$, $\alpha_{n}(v)$ and $\beta_{n}(v)$) and the maximal conductance of the membrane to the two ions ($\bar{g}_{\text{Na}}$, $\bar{g}_{\text{K}}$).\footnote{Similar results are obtained using the two ``far apart'' excitability statuses -- nonexcitable and oscillatory -- to extract principal components; not shown.} Distribution of excitability status is significantly less sensitive to transition rates involved in sodium conductance inactivation, as well as leak conductance. Membrane capacitance seems to have some effect. A possible interpretation of the histograms of Figure~\ref{fig:3ab}B is that, at least as a first approximation, $-g_{\text{fast}}$ may be assumed to be proportional to a ratio of structural parameters \textit{S} = $<\bar{g}_{\text{Na}}>/(<\bar{g}_{\text{Na}}> + <\bar{g}_{\text{K}}>$), whereas $\tau_{\text{slow}}$ may be assumed to be proportional to the ratio of kinetic parameters \textit{K} = $(<\alpha_{n}(v)>+<\beta_{m}(v)>)/(<\alpha_{n}(v)>+<\beta_{m}(v)>+<\alpha_{m}(v)>+<\beta_{n}(v)>)$. \begin{figure}[] \centering \includegraphics[width=12cm,height=9cm]{figure4ab.pdf} \caption{\protect\rule{0ex}{20ex}(A) Subsets of 100 realizations from each of the three excitability statuses of Figure \ref{fig:2} are plotted together. (B) Realizations (30,000) of a full Hodgkin-Huxley model, covering parametric variations over the entire range indicated by Hodgkin and Huxley (see Figure \ref{fig:1}), classified (different colors) to three excitability statuses: excitable (4660), not excitable (12271) and oscillatory (13069). Linear regression through the excitable status cloud (blue) is depicted by a line, the equation of which is $S = 4.4K-1.6$. Inset: same plot with excitable points omitted. 
} \label{fig:4ab} \end{figure} The murky enmeshment of solutions shown in Figure~\ref{fig:2} (a sample of which is re-plotted in Figure~\ref{fig:4ab}A, superposed) is significantly clarified when the data are arranged according to the values of \textit{S} and \textit{K} (Figure~\ref{fig:4ab}B): The three different excitability regimes are nicely clustered in three clouds. The oscillatory and nonexcitable phases are well-separated in the \textit{S--K} plane (inset to Figure~\ref{fig:4ab}B), whereas the borders separating the excitable phase from these other two are `soft' rather than sharp. The nonexcitable cloud in the upper left corner is due to excessive sodium conductance that stabilizes the membrane at a depolarized potential. Note that in Figure~\ref{fig:4ab}B, four parameters (two sodium inactivation rates, capacitance and leak conductance) are not taken into account even though they are allowed to freely fluctuate. And yet, when examined in the two-dimensional \textit{S--K} space, the Hodgkin-Huxley model reveals order that is literally impossible to detect in the more explicit, higher dimensional representation of Figure~\ref{fig:4ab}A. The effectiveness of the dimensionality reduction is further supported in Figure~\ref{fig:5ab}A, where the outcomes of multiple realizations of three different \textit{S;K} pairs are shown. Each of these three pre-defined \textit{S;K} pairs (0.60;0.45, 0.50;0.50, and 0.40;0.55) was realized 30 times by adjusting $<\alpha_{m}(v)>$ and $<\bar{g}_{\text{K}}>$ to the other four, randomly generated, Hodgkin-Huxley parameters ($<\alpha_{n}(v)>$, $<\beta_{m}(v)>$, $<\beta_{n}(v)>$ and $<\bar{g}_{\text{Na}}>$). Five of 30 are shown for each value; the rest are comparable. Clearly, the values of the lumped \textit{S} and \textit{K} dimensions are better predictors of the outcome than the individual Hodgkin-Huxley parameters.
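The two lumped dimensions translate directly into code; the sketch below simply restates the definitions of \textit{S} and \textit{K} given above (function names and the illustrative values are our own):

```python
# S and K as defined in the text; for the standard Hodgkin-Huxley model
# (all scaling parameters equal to 1) both dimensions evaluate to 0.5.

def structural_S(g_Na: float, g_K: float) -> float:
    """S = <g_Na> / (<g_Na> + <g_K>): a proxy for the fast conductance -g_fast."""
    return g_Na / (g_Na + g_K)

def kinetic_K(alpha_n: float, beta_m: float, alpha_m: float, beta_n: float) -> float:
    """K = (<alpha_n> + <beta_m>) / (<alpha_n> + <beta_m> + <alpha_m> + <beta_n>):
    a proxy for the restoring-force time scale tau_slow."""
    restoring = alpha_n + beta_m
    return restoring / (restoring + alpha_m + beta_n)

print(structural_S(1.0, 1.0))         # 0.5 (standard model)
print(kinetic_K(1.0, 1.0, 1.0, 1.0))  # 0.5 (standard model)
# One way (our own illustrative values) to obtain S = 0.6:
print(structural_S(1.2, 0.8))         # 0.6
```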
Several points deserve attention in relation to the numerically calculated \textit{S--K} plane of Figure~\ref{fig:4ab}B. First, the borders between the three phases are fairly steep (note different ranges of \textit{S} and \textit{K} axes). The immediate implication of this steepness is that in its two extreme statuses (nonexcitable and oscillatory), the system is relatively immune to variations in maximal conductances. Stated differently, it is sufficient to use ionic channels that set the \textit{K} dimension below (ca.) 0.45 to obtain a pace-maker that is insensitive to fluctuations in density of channel proteins; the system maintains its pace-making nature over a factor of 3 in the value of the structural ($S$) dimension. The same can be said about nonexcitable membranes: setting the \textit{K} dimension above (ca.) 0.55 yields a nonexcitable system that is insensitive to fluctuations in density of channel proteins. Second, the \textit{K} dimension is a rational function of first degree, a simple combination of concrete Hodgkin-Huxley kinetic parameters; as such it buffers the effect of changes in individual rates. This, one might expect, would also be the case for the \textit{S} dimension where more than two voltage-dependent conductances are involved. Third -- moving within the \textit{S--K} plane has an interpretable effect on the response shape (Figure~\ref{fig:5ab}B): the integral of voltage response emitted during a simulated trace is sensitive to the position within the \textit{S--K} plane. Naturally, the difference between points in the nonexcitable phase is very small, if present at all. Fourth -- a seemingly technical point but of potential interest: given scaled Hodgkin-Huxley parameters, one can calculate the resulting excitability status without resorting to simulation. This means that the non-linearity of the model does not change the behavior actually predicted from a low-dimensional representation of the system \cite{sobie2009parameter}.
And, fifth -- admittedly, our theoretically inspired choice of the rational functions that express \textit{S} and \textit{K} is one of many possible interpretations of the relations between scaling parameters. To further justify this choice, we submitted the whole data set to a linear support vector machine (SVM) algorithm. The results are presented in Figure~\ref{fig:6}, where the test set and the probability distribution of each class are plotted as a function of \textit{S} and \textit{K}. The accuracy of classifying the outcome of a full Hodgkin-Huxley model based on the values of \textit{S} and \textit{K} is 0.89, suggesting that our theoretically-inspired reduction of the Hodgkin-Huxley model to an \textit{S--K} plane is judicious. It remains to be seen whether the dimensionality reduction approach we used above can be applied to models with many more channel types. \subsubsection*{Closed-loop control of excitability in the \textit{S--K} plane: the case of sodium conductance slow inactivation} Being embedded in \textit{S--K}, the seemingly complicated and parameter-sensitive system becomes tractable, enabling regulation by an activity-dependent rule acting on one physiological entity. A most straightforward regulation rule would involve inverse relations between electrical activity (say, integral of membrane potential depolarizations) and the effective or actual value of the structural dimension (\textit{S}). Many physiological processes that modulate membrane ion channels may realize such adaptation, covering a wide range of spatial and temporal scales \cite{Marom2010}.
For instance: (1) Slow inactivation of sodium conductance, which is a local modulatory mechanism that operates over time scales of seconds to many minutes \cite{Catterall2015,Marom2016,Ruben1992,Silva2014,Toib1998,Ulbricht2005,Vilin2001}; or, (2) calcium-dependent activation of potassium conductance, a local and relatively fast time scale mechanism \cite{Brenner2000,Ghatta2006,Sah1996}, or (3) regulation of sodium and/or potassium channel protein expression, an arguably global but definitely slow time scale mechanism \cite{Bucher2011,Schulz2007}. Each of these in itself naturally constrains the system to hover about the excitable phase by pushing the value of \textit{S} downward when above the diagonal, or upwards when below. Other regulatory mechanisms may be envisioned, implementing (for instance) temporal integration of subthreshold activity by slow inactivation of potassium channels \cite{Marom1994,Storm1988}, or maintenance of pace-making activity by regulation of IK$_{\text{\textit{f}}}$ conductance \cite{Baruscotti2005}. Of the above-mentioned spectrum of physiological modulatory mechanisms, slow inactivation of sodium channels is especially interesting. While acknowledged from the early days of membrane electrophysiology [reviewed in \cite{Ulbricht2005}], the concept of slow inactivation of sodium channels as a means to maintain excitability status amidst parametric variations has remained relatively neglected. What makes slow inactivation a powerful regulatory mechanism is its impact on the \textit{effective} value of $\bar{g}_{\text{Na}}$, covering a range of time scales \cite{Ellerkmann:2001bs,fleidervish1996slow,Silva2014,Toib1998,Ulbricht2005,Vilin2001}. Furthermore, inactivation is `local' in the sense that it does not require central control; it occurs automatically as a consequence of activity. Thus, one might picture it acting as a distributed normalizing force in extended excitable tissues (e.g., long axons or electrically coupled excitable cells).
\begin{figure} \centering \includegraphics[width=1\linewidth]{figure5ab.pdf} \caption{(A) Five different instantiations for each of three S;K pairs (depicted within panel \ref{fig:4ab}B); stimulation amplitude 14 $\mu$A. (B) The integral of voltage response emitted during a simulated trace is sensitive to the position within the \textit{S--K} plane. The integral, calculated by summing the voltage values of all data points along the trace, relative to -65 mV, is presented in arbitrary units: \textit{left} -- gradual change of $K$ at $S = 0.5$, \textit{right} -- gradual change of $S$ at $K = 0.5$. Point color depicts excitability class. } \label{fig:5ab} \end{figure} \begin{figure} \centering \includegraphics[width=1\linewidth]{figure6.pdf} \caption{Classification of excitability statuses (data of Figure 2) using a linear kernel support vector machine (SVM); 80\% training set. Each surface (oscillatory, excitable, nonexcitable; color coded) represents the probability (z-axis) of a given combination of $K$ and $S$ to give rise to its corresponding excitability status. Thus, for instance, the probability of a point $K=0.42$ and $S=0.7$ to yield an oscillatory (depicted green) excitability status is practically 1. The colored points at the top of the image are the actual data points, similar to the form of presentation used in Figure 4B. } \label{fig:6} \end{figure} To demonstrate the potential impacts of slow inactivation on dynamics within the \textit{S--K} plane, we focus on channel gating beyond the time scale of a single action potential. Slow inactivation is represented as a macroscopic system, where channels move -- in an activity-dependent manner -- between two states: available and not-available (Figure~\ref{fig:7}, left panel), depicted $A\leftrightarrow(1-A)$.
The first (\textit{A}) is the set of states that includes, besides the open state itself, all the states from which the channel may arrive to the open state within the time scale of a single action potential, i.e. -- the closed states and the very first inactive states that are treated in standard Hodgkin and Huxley formalism for the action potential generation. In other words, a channel in \textit{A} is available for conducting ions within the time scale of a single action potential. In contrast, $(1-A)$ is a large, interconnected pool of slow inactive states (depicted \textit{I}j's in the scheme of Figure \ref{fig:7}) from which transition to the open state within the time scale of an action potential is impossible. Recent structural analyses suggest that the large space of slow inactive states $(1-A)$ might reflect the many distorted versions of the functional protein under conditions where the organization, otherwise enforced by hyperpolarized membrane potential, is compromised upon extensive depolarizing activity \cite{Catterall2015}. Theory and experiments \cite{Ellerkmann:2001bs,Goychuk:2004kx,Marom2016,Marom2010,millhauser1988diffusion,millhauser1988rate,Toib1998} show that in such a scheme, the multiplicity of slow inactive states entails a power-law scaling of recovery from $(1-A)$ to $A$ as a function of time spent in $(1-A)$. This implies a potential to become dormant in an activity-dependent manner for a duration ranging from tens of milliseconds to many minutes and possibly hours. Thus, unlike standard Hodgkin-Huxley gates, the rate of recovery from slow inactivation does not have a uniquely defined characteristic time scale. Rather, the time scale is determined by the distribution of channels in the space of inactive states, which, in turn, is dictated by the history of activation. 
The kinetics of $A\leftrightarrow(1-A)$ may be qualitatively described by an adaptive rate model \cite{Gal2013,excitability2014single,Marom2010,marom2009adaptive,xu2017dynamical}, a logistic-like function of the form: $\dot{A} = -f(\gamma)A + g(A)(1-A)$, where $f$ is a function of some general activity measure $\gamma$, and $g(A)$ is a monotonically increasing function of the system state $A$. The model gives rise to a wide range of time scales of recovery from inactivation and assures a non-zero stable point at $A=1-\gamma$, on the edge between excitable and nonexcitable \cite{Gal2013}. Mapping the above picture to the terms used in the present work, it is instructive to think of $A$ as $\langle\bar{g}_{\text{Na}}\rangle$, i.e. a scaling parameter of maximal sodium conductance. In the original Hodgkin and Huxley formalism, $\bar{g}_{\text{Na}}$ is a structural constant that sets limits on the instantaneous (at the scale of milliseconds) input--output relations of the membrane. But when long-term effects are sought, $\langle\bar{g}_{\text{Na}}\rangle$ might be treated as a dynamic variable that modulates residence in a reservoir of slow-inactivation configurations, pulling channels away from the system as a function of activity. Note that where $\langle\bar{g}_{\text{K}}\rangle$ is constant and where $\langle\bar{g}_{\text{Na}}\rangle = A$, the adaptive rate model qualitatively captures the dynamics of $\langle\bar{g}_{\text{Na}}\rangle/(\langle\bar{g}_{\text{Na}}\rangle+\langle\bar{g}_{\text{K}}\rangle)$. Indeed, the right panel of Figure~\ref{fig:7} demonstrates that application of a simple adaptive rate model $\dot{S} = -f(K)S + g(S)(1-S)$, where $f(\gamma)$ is substituted by $f(K)$ and $g(S)=S$, reveals the potential of sodium conductance slow inactivation to maintain excitability amid parametric variations.
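As an illustration only (not the authors' \textit{Mathematica} implementation), the adaptive rate equation can be integrated numerically in a few lines of Python. Here $f(K)=2.6-4.4K$ borrows the fitted slope quoted in the caption of Figure \ref{fig:7}, and $g(A)=A$ is an assumed minimal choice of a monotonically increasing function; both are assumptions made for the sketch.

```python
import numpy as np

def simulate_adaptive_rate(K, steps=20000, dt=0.01, A0=0.5):
    """Euler integration of dA/dt = -f(gamma) A + g(A) (1 - A).

    f is taken as the linear function of K quoted in the Figure 7 caption
    (an assumption for illustration), and g(A) = A is a minimal choice of
    a monotonically increasing function of the state A.
    """
    f = 2.6 - 4.4 * K          # activity-dependent removal rate
    A = A0
    trace = np.empty(steps)
    for i in range(steps):
        A += dt * (-f * A + A * (1.0 - A))
        A = min(max(A, 0.0), 1.0)   # A is a fraction of available channels
        trace[i] = A
    return trace
```

In this toy version, for $f(K)<1$ the availability settles at the non-zero stable point $A=1-f(K)$, whereas for $f(K)>1$ it decays toward zero, i.e. the channels accumulate in the slow inactive pool.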
Slow inactivation restrains the system to a diagonal in the \textit{S--K} plane: the blue line depicts a case where the kinetic dimension ($K$) walks randomly while $\dot{S}$ follows an adaptive rate formalism; a gray line connecting the points (the squiggle-like trace) depicts the path of excitability status in a control condition, where both the kinetic dimension ($K$) and the structural dimension ($S$) walk randomly. \begin{figure}[] \centering \includegraphics[width=12cm,height=9cm]{figure7.pdf} \caption{\protect\rule{0ex}{20ex}Left: a schematic representation of sodium channel states, with many slow inactivation states. Right: demonstration of maintenance of excitability in the \textit{S--K} plane given parametric variation, controlled by activity-dependent transitions of sodium channels between available and not-available sets of states. The simulation describes a 200,000-step random walk process, beginning at $S=0.5$, $K=0.5$. The gray trajectory (squiggle-like) depicts a random walk where $K_{n+1} = K_{n} \pm0.01$ and $S_{n+1} = S_{n}\pm0.01$. The blue line depicts a walk where $K_{n+1} = K_{n} \pm0.01$ and $S_{n+1} = S_{n} \pm0.01 - (2.6-4.4 K) S + S (1-S)$. The slope corresponds to the fitted function of Figure \ref{fig:4ab}B. } \label{fig:7} \end{figure} \subsubsection*{Concluding remarks} In an era marked by the capacity to collect data at ever-increasing resolution, it is important to identify proper scales in analyses of cellular phenomena, scales that matter to the system \cite{Transtrum2015}, scales where the phenomenon of interest is low-dimensional, regulatable by simple physiological processes and explainable in simple physiological terms.
What makes membrane excitability -- a fundamental physiological phenomenon -- particularly attractive to study in this context, is the existence of a relatively sound theory in general, its application in the Hodgkin-Huxley formalism in particular, and its amenability to experimental manipulations at both microscopic (channel protein) and macroscopic (membrane potential) levels. Practically all of the homeostasis-based models of excitability regulation used in the past have kept kinetics constant and varied only channel densities \cite{lemasson1993activity,OLeary2014,O_Leary_2018}. The present study suggests that when the problem is examined in a lower dimension, a simple control rule that relies on slow inactivation -- a ubiquitous protein-intrinsic process -- can deal with fluctuations in both structural and kinetic parameters. This homeostatic mechanism is local, independent of protein synthesis and operates over a wide range of time scales (milliseconds to many minutes). We speculate that activity dependence of protein kinetics at relatively slow time scales, entailed by the multiplicity of protein states, is a general ``automatic'' and local means for stabilization of cellular function. \subsubsection*{Methods} All the simulations and analyses were performed within the \textit{Mathematica} (Wolfram Research, Inc.) environment. Data of Figures 2, 3, 4 and 5 were generated using the Hodgkin-Huxley equations as they appear in the original 1952 manuscript. The duration of each simulation epoch was 90 msec, including an initial 50 msec relaxation phase. Stimulus (7 $\mu$A, 1 msec) was delivered 70 msec into the epoch. A sorting algorithm for excitability status (excitable, nonexcitable or oscillatory) was constructed, based on the timing and number of spikes following the relaxation phase. The sorting algorithm was validated by visual inspection of multiple sets of 300 sorted epochs.
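The sorting rule is described only in outline. A minimal Python sketch of one plausible implementation is given below; spikes are detected as upward threshold crossings, and both the 0 mV threshold and the exact labeling rules are assumptions made for illustration, not the authors' algorithm.

```python
import numpy as np

def classify_excitability(t, v, relax_end=50.0, stim_time=70.0, thresh=0.0):
    """Toy version of the sorting rule described in Methods (assumed logic):
    count upward threshold crossings after the relaxation phase and label
    the trace from their number and timing relative to the stimulus.

    t: time axis in msec; v: membrane potential in mV.
    """
    after = t >= relax_end
    rising = (v[after][:-1] < thresh) & (v[after][1:] >= thresh)
    spike_times = t[after][np.flatnonzero(rising)]
    if len(spike_times) == 0:
        return "nonexcitable"
    if len(spike_times) > 1 or spike_times[0] < stim_time:
        return "oscillatory"   # spikes without (or before) the stimulus
    return "excitable"         # a single spike evoked by the stimulus
```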
To generate Figure 6, a support vector machine (SVM) algorithm with linear kernel was implemented within \textit{Mathematica}, using an 80\% training set. \subsubsection*{Acknowledgments} This work was partially supported by research grants from the Leir Foundation (EM, SM), the National Institutes of Health (EM), and the Israel Science Foundation (SM). The authors thank Omri Barak, Asaf Gal and Daniel Soudry for helpful comments. \bibliographystyle{unsrt}
\section{Introduction} Theoretical studies and numerical simulations suggest that Active Galactic Nuclei (AGN) play an important role in the evolution of their host galaxies \citep[e.g.][]{hopkins05}. Currently, it is widely accepted that galaxies with a spheroidal component (the bulges of spiral galaxies and elliptical galaxies) host a central supermassive black hole \citep[SMBH,][]{ferrarese00,gebhardt00,tremaine02,scannapieco05}, and cosmological simulations that do not include feedback effects from the SMBH result in galaxy stellar masses much higher than observed \citep{diMatteo05,springel05,bower06}. Massive outflows originating in the accretion flow are claimed to regulate and couple the growth of the galactic bulge and SMBH \citep{hopkins05} and to explain the relation between the mass of the SMBH and the stellar velocity dispersion of the bulge -- the $M-\sigma$ relation \citep[e.g.][]{ferrarese00,gebhardt00}. According to the Unified Model for AGN \citep[e.g.][]{antonucci93,urry95}, the Narrow Line Region (NLR) is expected to present a bi-conical shape, within which gas outflows due to winds from the accretion disk are expected to be observed. However, Hubble Space Telescope (HST) narrow-band [O\,{\sc iii}]$\lambda$5007 images of a sample of 60 nearby Seyfert galaxies show that the bi-conical shape of the NLR is not as common as expected \citep{schmitt03}, and gas outflows are seen in only 33\% of Seyfert galaxies, as revealed by long-slit spectroscopy of 48 nearby AGN \citep{fischer13}. Nevertheless, long-slit observations are restricted to only one position angle. A better mapping of the outflows and their geometries can be obtained via integral field spectroscopy (IFS), as shown in recent studies both in the optical and near-infrared \citep[e.g.][]{lena15,allan14,zakamska16,n5929let,barbosa14}. The comparison between the gas and stellar kinematics on kiloparsec scales allows the study of the possible impact of AGN outflows on the host galaxy.
So far, most studies aimed at investigating gas outflows from AGN have been performed for small samples or individual galaxies. In this work we use the observations from the Mapping Nearby Galaxies at the Apache Point Observatory (MaNGA) survey \citep{bundy15} to compare the gas and stellar kinematics of a sample composed of 62 AGN observed in the MPL-5 (MaNGA Product Launch V) \citep[Data Release 14,][]{dr14} with those of a control sample of inactive galaxies, matched to the AGN sample by properties of the host galaxies. If an AGN sample presents strong outflows, the large-scale gas velocity fields are expected to be disturbed when compared to the stellar velocity fields, while for inactive galaxies, the stellar and gas velocity fields are expected to be similar. Another way that AGN can affect the gas dynamics is by increasing the gas velocity dispersion due to the shocks of the nuclear outflow with the ambient gas. The AGN and control samples used in this paper are described in \citet{rembold17} (hereafter Paper I), which also presents the study of the nuclear stellar populations. This is the third paper of a series aimed at comparing properties of AGN hosts and their control galaxies. Besides Paper I, the spatially resolved stellar populations are investigated in \citet{nicolas18} (Paper II). In addition, the gas excitation and distribution will be presented by Nascimento et al. (in preparation -- Paper IV). This paper is organized as follows: Section~\ref{data} presents the samples of active and inactive galaxies and the data analysis methods, while Section~\ref{results} presents the results, which are discussed in Section~\ref{discussion}. Finally, the conclusions of this work are presented in Section~\ref{conclusions}. \begin{figure*} \begin{center} 1-339163 \end{center} \centering \includegraphics[width=0.99\textwidth]{1-339163_flux.eps} \caption{Emission-line fluxes for the galaxy with {\it mangaid} 1-339163.
Our measurements are shown in the top row and the MaNGA-DAP measurements in the bottom row. In all panels, North is up and East is to the left, and the $x$ and $y$ labels show the distance relative to the peak of the continuum emission. The first column shows a map of the continuum emission obtained by collapsing the whole spectral range, while the following columns exhibit the spatial distribution of the emission-line fluxes for [O\,{\sc iii}]5007\,\AA, H$\alpha$ and [N\,{\sc ii}]6583\,\AA, respectively. The color bars show the fluxes in units of 10$^{-17}$ erg s$^{-1}$ cm$^{-2}$ spx$^{-1}$.} \label{Fig.1} \end{figure*} \begin{figure*} \begin{center} 1-339163 \end{center} \centering \includegraphics[width=0.99\textwidth]{1-339163_vel.eps} \caption{Velocity fields for the galaxy with {\it mangaid} 1-339163. Our measurements are shown in the top row and the DAP measurements in the bottom row. In all panels, North is up and East is to the left, and the $x$ and $y$ labels show the distance relative to the peak of the continuum emission. The systemic velocity has been subtracted from each panel. The first column shows the stellar velocity field and the following columns exhibit the velocity fields for [O\,{\sc iii}], H$\alpha$ and [N\,{\sc ii}], respectively. The velocity maps are in units of km s$^{-1}$ relative to the systemic velocity of the galaxy.} \label{Fig.2} \end{figure*} \begin{figure*} \begin{center} 1-339163 \end{center} \centering \includegraphics[width=0.99\textwidth]{1-339163_sig.eps} \caption{Velocity dispersion maps for the galaxy with {\it mangaid} 1-339163. Our measurements are shown in the top row and the DAP measurements in the bottom row. In all panels, North is up and East is to the left, and the $x$ and $y$ labels show the distance relative to the peak of the continuum emission.
The first column shows the stellar velocity dispersion distribution and the following columns exhibit the gas velocity dispersion distributions for [O\,{\sc iii}], H$\alpha$ and [N\,{\sc ii}], respectively. The color bars show the velocity dispersion, corrected for instrumental broadening, in units of km s$^{-1}$.} \label{Fig.3} \end{figure*} \begin{figure*} \centering \begin{center} 1-95092 \end{center} \includegraphics[width=\textwidth]{1-95092.eps} \begin{center} 1-351790 \end{center} \includegraphics[width=\textwidth]{1-351790.eps} \caption{ In the first two rows we show the derived velocity fields for the AGN {\it mangaid} 1-95092. The first row shows, from left to right, the stellar velocity field (V:Stars), the symmetrized stellar velocity field (V:Stars Sym), the gas velocity field for [O\,{\sc iii}] (V:[{O}{\sc iii}]), and the corresponding symmetrized velocity field (V:[{O}{\sc iii}] Sym). The second row shows, from left to right, the H$\alpha$ velocity field (V:H$\alpha$), its symmetrized velocity field (V:H$\alpha$ Sym), and the velocity and symmetrized velocity fields for [N\,{\sc ii}], (V:[{N}{\sc ii}]) and (V:[{N}{\sc ii}] Sym), respectively. In the bottom two rows we show the same velocity maps but for the AGN {\it mangaid} 1-351790. In all velocity maps the solid black line shows the position angle of the kinematic major axis, and the value of $\Psi_0$ is shown in the top-right corner of the symmetrized velocity maps. The continuous (dotted) line shows the orientation of the kinematic major (minor) axis of the galaxy. The color bars show the velocities in units of km s$^{-1}$. } \label{Fig.4} \end{figure*} \begin{figure*} \centering \begin{center} 12-129446 \end{center} \includegraphics[width=\textwidth]{12-129446.eps} \begin{center} 1-178838 \end{center} \includegraphics[width=\textwidth]{1-178838.eps} \caption{Same as Fig.\,\ref{Fig.4} for the control galaxies {\it mangaid} 12-129446 (first and second rows) and {\it mangaid} 1-178838 (third and fourth rows).
The velocity fields are in units of km s$^{-1}$.} \label{Fig.5} \end{figure*} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{his_oiii_u.eps} \includegraphics[width=0.49\textwidth]{his_halpha_u.eps} \includegraphics[width=0.49\textwidth]{his_nii_u.eps} \caption{Histograms comparing the $\Delta$PA distributions of AGN and control galaxies for [O\,{\sc iii}]$\lambda$5007 (top panel), H$\alpha$ (middle panel) and [N\,{\sc ii}]$\lambda$6583 (bottom panel). AGN are shown in blue and controls in red. The vertical dashed lines show $\Delta$PA=30$^\circ$. } \label{Fig.6} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{his_oiii_sig_teste.eps} \caption{Histograms comparing the $\sigma_{frac}$ distributions of AGN and control galaxies for [O\,{\sc iii}]$\lambda$5007. AGN are shown in blue and controls in red. The Anderson-Darling statistical test returns a p-value of $10^{-5}$, confirming that AGN and inactive galaxies follow distinct distributions in $\sigma_{\rm frac}$.} \label{Fig.7} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{lum_sig_frac_oiii_teste.eps} \caption{Logarithm of the [O\,{\sc iii}]5007\,\AA\ luminosity versus $\sigma_{\rm frac}$ for AGN (blue open circles) and inactive galaxies (red closed circles). The Spearman test confirms that these properties are correlated, resulting in a correlation coefficient of 0.53 and a p-value of 10$^{-14}$. The black line is a linear fit to the data, with intercept $LC=39.4\pm0.05$ and slope $AC=1.16\pm0.18$.} \label{Fig.8} \end{figure} \section{The Data and Analysis} \label{data} \subsection{Sample and MaNGA data} We use the datacubes obtained within the MaNGA survey for the sample of AGN and matched control sample defined in Paper I.
The MaNGA survey is part of the fourth-generation Sloan Digital Sky Survey (SDSS-IV) and aims to observe $\sim$10,000 nearby galaxies using optical Integral Field Spectroscopy (IFS), covering the spectral range 3600--10000~\AA\ with spectral resolving power $R\sim2000$ at a spatial resolution of 1--2 kpc. The MaNGA sample of galaxies was designed to cover at least $1.5\,R_e$ ($R_e$ -- effective radius). Here the effective radius is defined as the radius containing half of the galaxy luminosity, measured in the $i$ band as described in \cite{bundy15}. The MaNGA survey science goals are presented in \citet{bundy15}, the design and performance of the Integral Field Units are discussed in \citet{drory15} and the MaNGA sample is presented in \citet{wake17}. \citet{yan16b} present the survey design, execution, and data quality, the observing strategy is presented in \citet{law15} and the data reduction and calibrations are discussed in \citet{law16} and \citet{yan16}. Our sample is composed of the first 62 AGN observed with MaNGA -- selected from MaNGA MPL-5 \citep[Data Release 14,][]{dr14}. For each AGN, two control inactive galaxies, matched to the AGN hosts in absolute magnitude, galaxy mass, redshift, morphological type and inclination, were selected. The AGN selection performed by \citet{rembold17} is based on single-fiber SDSS-III observations. A detailed description and characterization of the AGN and control samples is presented in Paper I, and the properties of the AGN and control galaxies are shown in Tables \ref{tableagns} and \ref{tablecontrol}, respectively. \citet{wylezalek18} found 173 galaxies that would not have been selected as AGN candidates based on single-fiber spectral measurements, but that were identified using the fully spatially resolved optical diagnostics enabled by MaNGA; in future papers, similar analyses will be performed for ``nuclear'' and ``off-nuclear'' AGN. Thus, in this work we focus on the ``nuclear'' AGN.
As mentioned in \citet{rembold17}, our AGN sample includes 34 (55 per cent) spiral and 18 (29 per cent) elliptical galaxies. The remaining 10 objects (16 per cent) comprise 6 E/S galaxies, 1 merger and 3 unclassified objects. \subsection{Spatial filtering and noise removal} In order to remove noise from the observed datacubes without loss of angular resolution, we performed a spatial filtering of the datacubes using a Butterworth bandpass filter \citep{gonzalez02}. This filter operates in the frequency domain. We used a low-pass filter to remove high spatial frequency components from the cubes, which are usually due to spurious features (e.g. bad pixels or cosmic rays). This procedure allows us to improve the fit of the emission and absorption lines, as compared with the original datacubes. To perform the spatial filtering, we used the Interactive Data Language (IDL) routine {\it bandpass\_filter.pro}, which allows the choice of the cut-off frequency ($\nu$) and of the filter order $n$. A low value of $n$ (e.g. 1) is close to a Gaussian filter, while a high value (e.g. 10) corresponds to an ideal filter. We used $n=5$ and $\nu=0.25$~Ny (in units of the Nyquist frequency), chosen by comparing the filtered cubes with the original ones. For lower values of $\nu$, besides the removal of spatial noise, the filter also excludes emission from the nucleus of the galaxy. \subsection{Spectral fitting} In order to measure the emission-line fluxes and the stellar and gas kinematics from the MaNGA datacubes, we used the Gas AND Absorption Line Fitting ({\sc gandalf}) code \citep{sarzi06,oh11}. In brief, the {\sc gandalf} code fits the emission and absorption lines simultaneously, allowing the separation of the relative contributions of the stellar continuum and of the nebular emission in the spectra of the galaxies.
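The spatial low-pass Butterworth filtering described above can be sketched as follows. This is an illustrative Python/NumPy analogue of the procedure, not the IDL routine actually used; the datacube shape convention and the plane-by-plane application are assumptions.

```python
import numpy as np

def butterworth_lowpass(cube, cutoff=0.25, order=5):
    """Apply a Butterworth low-pass filter in the spatial frequency domain
    to each wavelength plane of a datacube with shape (n_lambda, ny, nx).

    cutoff is the cut-off frequency in units of the Nyquist frequency;
    order controls the filter steepness (order ~1 behaves like a Gaussian
    filter, large orders approach an ideal filter). Sketch of the procedure
    described in the text, not the IDL bandpass_filter.pro routine.
    """
    nl, ny, nx = cube.shape
    fy = np.fft.fftfreq(ny)[:, None]   # spatial frequency, cycles/pixel
    fx = np.fft.fftfreq(nx)[None, :]
    r = np.hypot(fy, fx) / 0.5         # radial frequency in Nyquist units
    H = 1.0 / (1.0 + (r / cutoff) ** (2 * order))   # Butterworth response
    out = np.empty_like(cube, dtype=float)
    for k in range(nl):
        out[k] = np.real(np.fft.ifft2(np.fft.fft2(cube[k]) * H))
    return out
```

The response is unity at zero frequency, so smooth large-scale emission passes unchanged, while pixel-to-pixel noise near the Nyquist frequency is strongly suppressed.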
To subtract the underlying stellar contribution from the spectra of the galaxy and measure the stellar kinematics, {\sc gandalf} uses the Penalized Pixel-Fitting ({\sc ppxf}) routine \citep{cappellari04,cappellari17}. The continuum spectrum of the galaxy is fitted using a library of template spectra under the assumption that the line-of-sight velocity distribution (LOSVD) of the stars is well reproduced by a Gauss-Hermite series. As template spectra, we used 30 selected Evolutionary Population Synthesis models from \citet{bc03}, covering ages ranging from 5~Myr to 12~Gyr and three metallicities ($0.004\,Z_{\odot},0.02\,Z_{\odot},0.05\,Z_{\odot}$). During the fit of the spectra, we allowed the use of a third-order multiplicative Legendre polynomial to correct the shape of the continuum, and only the first two Gauss-Hermite moments (velocity and velocity dispersion) were included to represent the LOSVD. We have tested the inclusion of higher order moments, but achieved the best results in the fitting process by considering only the first and second moments. The emission-line profiles were fitted by Gaussian curves, keeping the centroid velocities and widths of the [N\,{\sc ii}]$\lambda\lambda6548,6583$ and [S\,{\sc ii}]$\lambda\lambda6716,6731$ emission lines tied and fitting each doublet separately. In addition, the following line flux ratio was kept fixed to its theoretical value: [N\,{\sc ii}]$\lambda6583$/[N\,{\sc ii}]$\lambda6548=2.94$ \citep{osterbrock06}. {\sc gandalf} gives as output measurements of the centroid velocity and velocity dispersion ($\sigma$) of the stars, and of the flux, centroid velocity and $\sigma$ of the emission lines for each spaxel, which are used to construct two-dimensional maps. \subsection{Measurements of the Kinematic Position Angles} In order to measure the global kinematic PA (i.e. the orientation of the line of nodes -- $\Psi_0$) from the stellar and gas velocity fields, we used the kinemetry method \citep{krajnovic05}.
This method extracts general kinematic properties of the galaxies through the symmetrization of the observed velocity fields, without the need for any assumption about the geometry of the stellar distribution. To obtain the global kinematic PA, the kinemetry method performs the symmetrization of the observed velocity fields. In this process, for each possible PA a symmetric velocity field $V'(x,y)$ is created, with the PA oriented along the $x$ axis. The symmetric velocity field is obtained by replacing the mean velocity of each bin with the weighted average of the corresponding velocities in the four quadrants of the velocity field. The global kinematic PA is the one that minimizes $\chi^{2}=\sum_{n=1}^{N}\left[\frac{V'(x_n,y_n)-V(x_n,y_n)}{\Delta V_n}\right]^{2}$, where $V(x_n,y_n)$ is the value of the observed velocity field at the position ($x_n$, $y_n$) and $\Delta V_n$ its uncertainty. We used the IDL routine {\it fit\_kinematic\_pa.pro}\footnote{This routine was developed by M. Cappellari and is available at \\ http://www-astro.physics.ox.ac.uk/$\sim$mxc/software}, which is an implementation of the kinemetry method and allows the measurement of the global kinematic PA and systemic velocity of the galaxy from the observed velocity fields. The routine is an implementation of the method presented in Appendix C of \citet{krajnovic06} and has been used to study the stellar kinematics of large samples of galaxies, as for example in the SAURON \citep{cappellari07} and ATLAS$^{\rm 3D}$ \citep{krajnovic11} surveys. \section{Results}\label{results} We have performed measurements of the stellar and gas kinematics and of the emission-line fluxes for H$\beta$, [O\,{\sc iii}]\,$\lambda$5007, H$\alpha$, [N\,{\sc ii}]\,$\lambda\lambda$6548,83 and [S\,{\sc ii}]\,$\lambda\lambda$6716,31. With the aim of testing our measurements, we have compared the emission-line fluxes, centroid velocities and velocity dispersions with the measurements provided by the MaNGA Data Analysis Pipeline (DAP -- Westfall K. B., in prep.), as part of the MPL-7.
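The symmetrization at the heart of the kinematic PA measurement can be sketched by brute force: rotate the field so each trial axis lies along $x$, build the bi-(anti)symmetric average of the four quadrants, and keep the angle minimizing $\chi^2$. The following Python snippet is an illustrative toy for a regular, centered, odd-sized grid with uniform errors, not the original {\it fit\_kinematic\_pa.pro} code:

```python
import numpy as np
from scipy.ndimage import rotate

def fit_kinematic_pa(vel, angles=np.arange(0.0, 180.0, 1.0)):
    """Brute-force version of the symmetrization described in the text.

    For each trial PA the velocity map (odd-sized, centered on the galaxy)
    is rotated so the trial axis lies along x; the symmetric field keeps
    the component even in y and odd in x (a pure rotation pattern), and
    the PA minimizing chi^2 = sum((V_sym - V)^2) is returned. Uniform
    velocity uncertainties are assumed for simplicity.
    """
    best_pa, best_chi2 = None, np.inf
    for pa in angles:
        v = rotate(vel, pa, reshape=False, order=1, mode="nearest")
        # quadrant average: +V(x,y) +V(x,-y) -V(-x,y) -V(-x,-y)
        v_sym = (v + v[::-1, :] - v[:, ::-1] - v[::-1, ::-1]) / 4.0
        chi2 = np.nansum((v_sym - v) ** 2)
        if chi2 < best_chi2:
            best_pa, best_chi2 = pa, chi2
    return best_pa
```

For a noiseless disk-like rotation field the residual vanishes when the trial axis coincides with the line of nodes, which is what the $\chi^2$ minimization exploits.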
Figure\,\ref{Fig.1} shows an example of our measurements (top row) compared with those from the DAP (bottom row) for the AGN {\it{mangaid}} 1-339163. The first column shows a map of the continuum emission, while the following columns exhibit maps of the emission-line fluxes for [O\,{\sc iii}]5007\,\AA, H$\alpha$ and [N\,{\sc ii}]6583\,\AA, respectively. The comparison between the top and bottom rows shows that our flux measurements are similar to those provided by the DAP. In Figure\,\ref{Fig.2} we show the velocity fields for the same galaxy, {\it{mangaid}} 1-339163. We present the stellar velocity field together with the gas velocity fields derived for the same emission lines presented in Fig.\,\ref{Fig.1}. For comparison, we show our results in the top row while the results from the DAP are shown in the bottom row. The comparison shows that the two sets of velocity fields are similar, although the DAP maps are noisier. The comparison of the velocity dispersion maps obtained by us and from the DAP is shown in Figure\,\ref{Fig.3}, following the same pattern of organization as the previous figures. As for the centroid velocity and emission-line flux maps, the $\sigma$ maps from the DAP are noisier than ours. The gas and stellar $\sigma$ values will be used to search for outflows in the central region of the galaxies of our sample. As noticed in Figs~\ref{Fig.1}--\ref{Fig.3}, our measurements are in general consistent with those provided by the DAP, but the spatial filtering of the data allows the exclusion of spurious data, as clearly seen in the maps of the [N\,{\sc ii}]6583\,\AA\ and H$\alpha$ velocity dispersion, for which the maps constructed using the DAP show a spurious feature at 4$^{\prime\prime}$ east of the nucleus, which is not present in our measurements. On the other hand, the DAP has the advantage of providing measurements for all emission lines present in the galaxy spectra, while we fit only the strongest lines.
However, a detailed comparison of our measurements with those provided by the DAP is beyond the scope of this paper. In order to verify whether outflows of gas from the central AGN significantly affect the kinematics of AGN hosts, we can compare the kinematic position angle ($\Psi_0$) of the gas and stellar velocity fields. The motion of the stars is dictated by the gravitational potential of the galaxy, while for the gas, an additional component due to outflows is expected for the AGN. By comparing the difference between the $\Psi_0$ values derived from the gas and stellar velocity fields for the AGN and control samples, one should expect larger differences for the AGN if strong outflows are present. We derived $\Psi_0$ for the stellar and gas velocity fields using the [O\,{\sc iii}]5007\,\AA, H$\alpha$ and [N\,{\sc ii}]6583\,\AA\ emission lines. In Figure\,\ref{Fig.4} we show two examples of the observed and symmetrized velocity fields for two AGN ({\it mangaid} 1-95092 and {\it mangaid} 1-351790). This figure illustrates three distinct results: (i) the $\Psi_0$ values from distinct emission-line velocity fields are very similar to each other for both galaxies; (ii) for the galaxy {\it mangaid} 1-95092 the $\Psi_0$ derived from the stellar velocity field is very similar to that derived from the gas velocity fields; (iii) in the case of the AGN host {\it mangaid} 1-351790 the orientations of the kinematic major axes of the stellar and gas velocity fields show a significant offset. In Figure~\ref{Fig.5} we show a similar figure for two control galaxies, {\it mangaid} 12-129446 and {\it mangaid} 1-178838, showing similar results to those observed for the AGN: similar $\Psi_0$ for all emission lines and, in one case, a distinct $\Psi_0$ for the gas and stars. From Figs \ref{Fig.4} and \ref{Fig.5} we can conclude that for these galaxies both AGN and controls present a rotation pattern in the stellar as well as in the gas velocity fields.
In Table \ref{Tab.1} we present the kinematic position angles derived for all galaxies of our sample. \section{Discussion} \label{discussion} In order to investigate whether the AGN feedback in our sample is powerful enough to disturb the gas kinematics on galactic scales and change the orientation of the kinematic major axis of the galaxy, we calculated the frequency of occurrence of a given PA offset in the AGN and control samples. We computed the difference of the $\Psi_{0 \star}$ of the stellar velocity field with respect to the $\Psi_{\rm 0 gas}$ derived for the [O\,{\sc iii}]5007\,\AA, H$\alpha$ and [N\,{\sc ii}]6583\,\AA\ emission-line velocity fields. The resulting histograms of the PA offsets ($\Delta$PA$=|\Psi_{\rm 0 gas} -\Psi_{0 \star}|$) are presented in Figure\,\ref{Fig.6}. The top panel shows the results using the [O\,{\sc iii}] velocity fields, while the middle panel shows those for H$\alpha$ and the bottom panel those for [N\,{\sc ii}]. AGN are shown in blue and control galaxies in red. We find no clear difference in the distribution of $\Delta$PA between the AGN and control samples. Similar values of $\Delta$PA are observed for the distinct emission lines. Although a few galaxies display large $\Delta$PA values, for most of them $\Delta$PA is smaller than 30$^{\circ}$. For 79\,\% of the AGN and 81\,\% of the control galaxies the PA offsets are smaller than 30$^{\circ}$, as measured using the [O\,{\sc iii}]5007\,\AA\ velocity field as representative of the gas velocity field. This result indicates that the AGN feedback is not strong enough to disturb, more than in the control sample, the gas kinematics on the galactic scales probed by MaNGA.
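The PA offsets above involve a small subtlety worth making explicit: kinematic PAs carry the sense of rotation, so differences must be wrapped before comparing samples. A short Python helper (the modulo-360 wrapping convention, allowing offsets up to 180$^\circ$ for counter-rotation, is an assumption made for this sketch):

```python
import numpy as np

def delta_pa(psi_gas, psi_star):
    """PA offset between the gas and stellar kinematic major axes.

    Kinematic PAs are taken modulo 360 deg; the offset is wrapped to the
    range 0-180 deg, so counter-rotating gas yields values near 180 deg.
    """
    d = np.abs(np.asarray(psi_gas, dtype=float)
               - np.asarray(psi_star, dtype=float)) % 360.0
    return np.minimum(d, 360.0 - d)

def fraction_below(offsets, limit=30.0):
    """Fraction of galaxies with Delta PA below the chosen threshold."""
    offsets = np.asarray(offsets)
    return np.count_nonzero(offsets < limit) / offsets.size
```

With such a helper, the quoted 79\% / 81\% statistics correspond to `fraction_below` evaluated on the AGN and control offset arrays with the 30$^\circ$ threshold.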
Indeed, the sample of active galaxies used here is composed mainly of low-luminosity AGN \citep{rembold17}, for which outflows from the accretion disk are expected to be weak, and thus the gas velocity fields of these AGN hosts on galactic scales are expected to be driven by the gravitational potential of the galaxy. Besides that, \citet{wylezalek17} only find evidence for an AGN-driven outflow in a MaNGA-selected AGN candidate when zooming into the center with higher spatial resolution. The resolution of MaNGA is only 1\farcs5--2\farcs5, so many small-scale outflows may remain hidden. We do not find any clear difference in the $\Delta$PA of high- and low-luminosity AGN. \citet{penny18} analyzed low-mass galaxies (M$_{\star}$ $\lesssim$ 5$\times$10$^{9}$M$_{\odot}$) of the SDSS-IV MaNGA survey and found that five galaxies of their sample of 13 possible dwarf AGN hosts exhibit ionized gas components in H$\alpha$ that are kinematically offset from their stellar velocity fields, and these objects have AGN-like emission-line ratios at their centers. This fact has been interpreted as due to a recent accretion episode or outflow. Furthermore, \citet{penny18} suggest that AGN feedback may play an important role in these low-mass galaxies. Their sample can be considered analogous to the ``Red Geyser'' galaxies reported by \citet{cheung16} using MaNGA data. These galaxies do not show recent star formation activity; most of them harbor very low luminosity AGN, showing large-scale bi-polar outflows in ionized gas, interpreted as originating in centrally driven winds due to a radiatively inefficient accretion flow onto the supermassive black hole. These galaxies show misaligned stellar and gas kinematic major axes and account for 10\% of the population of galaxies with masses of the order of $2\times10^{10}$\,M$_\odot$ that do not show recent star formation episodes.
Although some galaxies of our sample show $\Delta$PA$>30^\circ$, as seen in Fig.\,\ref{Fig.6}, the fractions of AGN and control galaxies with significant PA offsets are similar (21\,\% and 19\,\% for the AGN and control samples, respectively), suggesting that these offsets are not associated with the presence of an AGN and are probably just statistical fluctuations. Thus, we show that standard AGN do not follow the same behavior as the ``Red Geyser'' galaxies analyzed by \citet{cheung16} and the low-mass galaxies presented in \citet{penny18}, as we do not detect significant PA offsets. The fact that there are no significant PA offsets in our sample does not necessarily mean that the AGN do not show outflows, although it implies that the outflows do not play an important role in the galaxy-scale gas kinematics. However, AGN-driven outflows could be seen on smaller scales. In order to search for signatures of outflows closer to the nuclei of the galaxies, we have compared the stellar and gas velocity dispersion values within the inner 2\farcs5 diameter of the galaxies of our sample, as this aperture corresponds to the angular resolution of the MaNGA datacubes. In Table~\ref{Tab.1} we show these velocity dispersion values. On average, the 2\farcs5 aperture corresponds to a physical scale of $\sim$2\,kpc at the typical redshift of the sample galaxies. In order to quantify the differences between the stellar and gas velocity dispersions measured in the central regions, we calculated the parameter $\sigma_{\text{frac}}$, defined as: \begin{equation} \sigma_{\text{frac}} = \frac{\sigma_{\text{gas}} - \sigma_{\star}} {\sigma_{\star}}, \end{equation} which measures the fractional difference between the gas and stellar velocity dispersions; thus, higher values of $\sigma_{\text{frac}}$ are indicative of disturbed kinematics (not due only to the gravitational potential of the galaxy), most probably produced by outflows.
We see a trend of AGN having generally higher $\sigma_{\rm frac}$ values than inactive galaxies, as can be seen in the distributions shown in Figure \ref{Fig.7}. The median values of $\sigma_{\rm frac}$ for the AGN and control sample are $<\sigma_{\rm frac}>_{\rm AGN}=0.04$ and $<\sigma_{\rm frac}>_{\rm CTR}=-0.23$, respectively. Besides that, we note that 90\% of AGN have $\sigma_{\rm frac}$ larger than $-0.22$ and 75\% of them have values larger than $-0.13$. For the control sample, 90\% of the galaxies show $\sigma_{\rm frac}<0.12$ and 75\% of the sample show $\sigma_{\rm frac}<-0.04$. The Anderson-Darling statistical test returns a p-value of $10^{-5}$, which confirms that the AGN and inactive galaxies follow distinct distributions in $\sigma_{\rm frac}$. We thus conclude that the parameter $\sigma_{\rm frac}$\ can be used as an indicator of AGN activity. We derived the luminosity of the [O\,{\sc iii}]$\lambda5007$\,\AA\ emission line ($L_{\rm [O\,III]}$) of each galaxy (Table \ref{Tab.1}) using the flux measurements obtained with the {\sc gandalf} code within the same aperture used to measure $\sigma_{\rm frac}$, and then investigated a possible correlation between $\sigma_{\text{frac}}$ and $L_{\rm [O\,III]}$. Figure \ref{Fig.8} shows the plot of $L_{\rm [O\,III]}$ vs. $\sigma_{\rm frac}$ for the AGN and control samples. There is a clear positive correlation between $\sigma_{\text{frac}}$ and $L_{\rm [O\,III]}$, with a Spearman correlation coefficient of 0.53 and a p-value of 10$^{-14}$. However, it should be noted that the observed correlation could be artificially produced, as the AGN and inactive galaxies clearly show distinct distributions in $\sigma_{\rm frac}$ (Fig.~\ref{Fig.7}). 
The Spearman test returns a p-value of 0.06 for the AGN sample and 10$^{-5}$ for the control sample, meaning that no strong correlation is found between $L_{\rm [O\,III]}$ and $\sigma_{\text{frac}}$ for the AGN sample alone, while these parameters are correlated for the control sample. The absence of a correlation for the AGN sample may be due to the fact that our sample covers only a small range of luminosities, as most objects are low-luminosity AGN \citep{rembold17}. Fig.~\ref{Fig.7} shows a trend of AGN having higher $\sigma_{\rm frac}$ values than inactive galaxies. The same trend can also be observed in Fig.~\ref{Fig.8}. This result can be interpreted as the higher values seen for AGN as compared to control galaxies being due to winds originating in the AGN. Thus, although the AGN of the sample do not show powerful outflows that can affect the gas kinematics on galactic scales, they do show small-scale outflows (within the inner 1--2 kpc). Our results can be compared with those obtained from single-aperture spectra. For example, \citet{woo17} find a trend of the [O\,{\sc iii}]5007\,\AA\ velocity dispersion increasing with AGN luminosity in a sample of $\sim$110,000 AGN and star-forming (SF) galaxies at $z<$0.3. This trend is also present in composite objects and is not clear for star-forming galaxies. They interpreted this result as due to strong gas outflows in high-luminosity AGN, indicating that AGN energetics are driving these outflows. They also find lower average [O\,{\sc iii}] velocity dispersion values for star-forming galaxies. Our result is in good agreement with theirs. In addition, optical observations \citep{wylezalek16}, radio observations \citep{zakamska14} and molecular gas \citep{veilleux13}, as well as theoretical models \citep{zubovas12b}, have suggested that the AGN needs to have enough luminosity for the gas to be pushed out of the galactic potential. 
This is in agreement with our results, where we see a positive correlation between $\sigma_{\text{frac}}$ and luminosity. \section{Conclusions} \label{conclusions} We have mapped the gas and stellar kinematics of a sample of 62 AGN and 109 control galaxies (inactive galaxies) in order to investigate the effect of the AGN on the large- and small-scale gas kinematics of the AGN host galaxies. We detect evidence of nuclear gas outflows in the 62 AGN, but conclude they are not powerful enough to play an important role in the gas kinematics on galactic scales. The main conclusions of our work are: \begin{itemize} \item There is no significant difference in the $\Delta$PA between active and inactive galaxies, indicating that the galaxy-scale gas kinematics is dominated by orbital motion in the gravitational potential of the galaxies, instead of outflows driven by the central AGN. \item We found that the difference between the orientations of the kinematic major axes of the gas and stars ($\Delta$PA) is larger than 30$^\circ$ for 13 AGN (21\,\%) and 21 control galaxies (19\,\%) using the [O\,{\sc iii}]5007\,\AA\ kinematics. \item The AGN show larger fractional differences between the velocity dispersions of the gas and stars, $\sigma_{\rm frac}=\frac{\sigma_{\rm OIII}-\sigma_{\star}}{\sigma_\star}$, than inactive galaxies within the inner 2\farcs5 diameter, which corresponds to 1--2\,kpc at the distances of the galaxies. The mean values are $\sigma_{\rm frac}$=0.05 for the AGN and $\sigma_{\rm frac}$=$-$0.23 for the control sample. This difference is interpreted as being due to outflows from the active nuclei. This indicates that, although the AGN of our sample do not affect the gas kinematics on large scales, they do affect it at least within the inner kpc. 
\item A correlation between the [O\,{\sc iii}]5007\,\AA\ luminosity and $\sigma_{\rm frac}$ is observed when the AGN and control samples are put together. \end{itemize} \section*{Acknowledgements} Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), National Astronomical Observatory of China, New Mexico State University, New York University, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Portsmouth, University of Utah, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. We acknowledge the support of the Instituto Nacional de Ci\^encia e Tecnologia (INCT) e-Universe (CNPq grant 465376/2014-2). 
This study was financed in part by the Coordena\c c\~ao de Aperfei\c coamento de Pessoal de N\'ivel Superior - Brasil (CAPES) - Finance Code 001, Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (CNPq) and Funda\c c\~ao de Amparo \`a Pesquisa do Estado do RS (FAPERGS).
\section{Introduction} The pseudogap state of high-temperature superconductors has been studied with numerous experimental tools, yet its origin is not resolved.~\cite{Norman:2005} One view proposes that the pseudogap state is a particle-hole condensate, a density wave. Of all such states that break translational symmetry and have strong momentum dependence of the type $d_{x^{2}-y^{2}}$, two candidate density wave orders that can couple to inelastic neutron scattering have been proposed: the singlet $d_{x^{2}-y^{2}}$-density wave (sDDW),~\cite{Chakravarty:2001} corresponding to angular momentum $\ell=2$ but a spin singlet, and the spin density wave order (SDW); in the general classification of density wave orders,~\cite{Nayak:2000} the latter corresponds to $\ell =0$ but a spin triplet. In addition to the sDDW order, its triplet counterpart~\cite{Nersesyan:1991} (tDDW), $i\sigma{d_{x^{2}-y^{2}}}$, where $\sigma=\pm 1$ corresponds to up and down spins with the $\hat{z}$ axis as the spin quantization axis, also has interesting properties and deserves more attention.~\cite{Nersesyan:1991} Recently, Fujimoto proposed that a triplet $d$-wave particle-hole condensate may be realized in the hidden order state of the URu$_2$Si$_2$ system.~\cite{Fujimoto:2011} Since high-$T_c$ superconductors have a rich phase diagram, which hosts many possible competing orders, it is both important and interesting to examine the properties of various density wave order parameters of higher angular momentum. In this paper we discuss the three order parameters mentioned above. In addition, we note that a singlet chiral $i{d_{x^{2}-y^{2}}}+d_{xy}$-density wave~\cite{Tewari:2008} as well as $i\sigma{d_{x^{2}-y^{2}}}+d_{xy}$ density wave states with interesting topological properties have been explored.~\cite{Hsu:2011} Owing to limitations of space, we do not discuss these order parameters here. Inelastic neutron scattering can directly probe magnetic excitations. 
The scattering cross-section is proportional to the magnetic structure factor, which is proportional to the imaginary part of the dynamic spin susceptibility via the fluctuation-dissipation theorem.~\cite{Aeppli:1997} Thus, a calculation of the spin susceptibility provides a link between theoretical models and neutron scattering experiments. In particular, we want to address a recent experiment in underdoped YBa$_{2}$Cu$_{3}$O$_{6.6}$. The most striking aspect of this experiment is a vertical dispersion relation of the spin excitations with a large in-plane anisotropy in the pseudogap state, in contrast to the `hour glass' dispersion observed in the superconducting state.~\cite{Hinkov:2007} The qualitatively different behavior between the superconducting and the pseudogap states suggests different mechanisms. Motivated by the experimental observations, we study the spin susceptibility of the three density wave orders mentioned above with hopping anisotropy, which breaks $C_{4}$ rotational symmetry and mimics an `electron nematic' state.~\cite{Yao:2011} In a phenomenological model, we set the hopping terms to be anisotropic along the $a$- and $b$-axes, and study the energy-momentum dispersion relations of the dynamical spin susceptibility. The structure of this paper is as follows: in Sec.~\ref{Sec:sDDW}, we sketch the calculation of the spin susceptibility and discuss the numerical results for the sDDW order. In Sec.~\ref{Sec:tDDW}, we discuss the numerical results for the tDDW order. In Sec.~\ref{Sec:SDW}, we discuss the numerical results for the SDW order. To make the paper succinct and more accessible, the explicit forms of the spin susceptibility are given in Appendix~\ref{Sec:chi}. \section{\label{Sec:sDDW}Spin Susceptibility: singlet DDW} In this section we set up the calculation of the spin susceptibility using the sDDW as an example. In the following sections we will give the results for the other order parameters. 
To capture the in-plane anisotropic feature of the pseudogap state in the neutron scattering experiment, we consider the sDDW order with anisotropic hopping terms. In momentum space, the order parameter can be written in terms of the fermion operators as \begin{equation} \langle c_{k+Q,\alpha}^{\dagger} c_{k,\beta}\rangle \propto i \delta_{\alpha\beta} W_{k} \end{equation} with $W_k \equiv \frac{W_0}{2} \left[ \cos (k_x a) - \cos (k_y b) \right]$, where $a$ and $b$ are lattice constants. For orthorhombic YBa$_{2}$Cu$_{3}$O$_{6.6}$, $a$ and $b$ are unequal, but the difference is very small ($a=3.82$~\AA, $b=3.87$~\AA). The two-dimensional mean-field Hamiltonian is \begin{eqnarray} \mathcal{H}_{sDDW}&=&\sum_{\sigma} \sum_{k} \left( \epsilon_k c_{k,\sigma}^{\dagger} c_{k,\sigma} + \epsilon_{k+Q} c_{k+Q,\sigma}^{\dagger} c_{k+Q,\sigma} \right. \nonumber\\ && \left. + iW_k c_{k,\sigma}^{\dagger} c_{k+Q,\sigma} + h.c. \right), \end{eqnarray} where the summation is over the reduced Brillouin zone (RBZ) bounded by $(k_y b)\pm (k_x a)= \pm \pi$, $Q=(\pi/a,\pi/b)$ is the nesting vector, and $\epsilon_{k} \equiv \epsilon_{1k}+\epsilon_{2k}$ with~\cite{Pavarini:2001} \begin{eqnarray} \epsilon_{1k}&\equiv& -2t \left[ (1+r)\cos (k_x a)+ (1-r) \cos (k_y b) \right],\\ \epsilon_{2k}&\equiv& 4t^{\prime} \cos (k_x a) \cos (k_y b) -\mu \nonumber\\ && -2t^{\prime\prime}\left[ (1+r)\cos (2k_x a) + (1-r)\cos (2k_y b) \right]. \end{eqnarray} For $r\neq 0$, we have anisotropic hopping terms which break the four-fold rotational symmetry. Note that although the anisotropy also modifies the next-nearest-neighbor hopping, it is simply treated as the single parameter $t^{\prime}$ in our model. The eigenvalues of the Hamiltonian are $\lambda_{k,\pm} = \epsilon_{2k} \pm E_{k}$ with $E_{k} \equiv \sqrt{\epsilon_{1k}^2 + W_k^2}$. 
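As a brief check of the quoted eigenvalues (our sketch; the matrix form is implicit in the Hamiltonian above), note that under $k \rightarrow k+Q$ one has $\epsilon_{1,k+Q}=-\epsilon_{1k}$, $\epsilon_{2,k+Q}=\epsilon_{2k}$, and $W_{k+Q}=-W_{k}$, so for each spin the Hamiltonian is a $2\times 2$ matrix in the basis $(c_{k,\sigma},c_{k+Q,\sigma})$:

```latex
\begin{equation*}
H_{k} =
\begin{pmatrix}
\epsilon_{2k}+\epsilon_{1k} & iW_{k} \\
-iW_{k} & \epsilon_{2k}-\epsilon_{1k}
\end{pmatrix}
= \epsilon_{2k}\,\tau_{0} + \epsilon_{1k}\,\tau_{z} - W_{k}\,\tau_{y},
\end{equation*}
```

where $\tau_{0}$ is the identity and $\tau_{y,z}$ are Pauli matrices acting on the $(k,k+Q)$ space; diagonalization immediately gives $\lambda_{k,\pm}=\epsilon_{2k}\pm\sqrt{\epsilon_{1k}^{2}+W_{k}^{2}}=\epsilon_{2k}\pm E_{k}$.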
The one-loop spin susceptibility in the momentum and Matsubara frequency space is defined as, $N$ being the number of lattice sites, \begin{equation} \chi_0^{ij}(q,q',i \omega_n) = -{\frac {1}{N}} \int^{\beta}_{0} d \tau e^{i \omega_n \tau} \langle T S^{i}_{q}(\tau) S^{j}_{-q'} \rangle, \end{equation} where $i,j=x,y,z$, $\tau$ is the imaginary time, $T$ is the time-ordering symbol, and the spin operators are \begin{equation} S^{i}_{q} \equiv \sum_{k,\alpha,\beta} c_{k+q,\alpha}^{\dagger} \hat{\mathbf{\sigma}}^{i}_{\alpha\beta} c_{k,\beta}. \end{equation} Here $\hat{\mathbf{\sigma}}_{\alpha\beta}$ are the Pauli matrices with spin indices $\alpha$ and $\beta$. We define the longitudinal and transverse susceptibilities as $\chi_0^{zz}(q,q',\omega)$ and $\chi_0^{+-}(q,q',\omega)$, respectively, with $S_{q}^{\pm}\equiv S_{q}^{x}\pm iS_{q}^{y}$ and $i \omega_n \rightarrow \omega + i \delta$. For unpolarized measurements, the scattering intensity, $I$, contains both the spin-flip and the non-spin-flip channels, $ I \propto (\chi^{zz} + 2\chi^{+-})/3$. However, in this paper we present the longitudinal and transverse susceptibilities separately, so as to provide more information for polarized neutron scattering experiments, which may be achieved in the future. For the sDDW order, $\chi_0^{zz}(q,q',\omega)=2\chi_0^{+-}(q,q',\omega)$ because the up-spin and down-spin parts of the Hamiltonian are identical. The explicit form of the one-loop susceptibility is given in Eqs.~(\ref{Eq-sDDWlong})--(\ref{Eq-sDDW-off}), and we apply the random phase approximation (RPA) to obtain the RPA susceptibility, given in Eqs.~(\ref{Eq-sDDWRPA1})--(\ref{Eq-sDDWRPA2}) in Appendix \ref{Sec:chi}. For illustrative purposes, we set $t=0.15$~eV, $t^{\prime}=0.32t$, $t^{\prime\prime}=0.1t^{\prime}$, $W_0=0.65t$, $r=-0.1$, and $k_{B}T=0.05t$. 
The chemical potential is set to $\mu=-0.805t$ in order to obtain a hole doping level of $n_{h}\approx 10.07\%$, approximately the doping level in the experiment. Other similar choices of the parameters do not change the conclusions. In Fig.~\ref{Fig-sDDWx}, the constant energy cuts of the imaginary part of the transverse spin susceptibility along the $a^{*}$-axis for $\omega \le 0.6t$ are plotted. The results along the $b^{*}$-axis are similar and are not shown here. Away from $Q=(\pi/a,\pi/b)$, the magnetic excitations are peaked at the incommensurate positions $(q_{x} a, q_{y} b) = (\pi \pm \delta_{a},\pi)$ and $(\pi,\pi \pm \delta_{b})$, where we define the incommensurabilities $\delta_{a}$ and $\delta_{b}$ along the $a^{*}$- and $b^{*}$-axes, respectively. From the numerical results, one finds that $\delta_{a}$ and $\delta_{b}$ are weakly energy dependent, similar to the inelastic neutron scattering experiment~\cite{Hinkov:2007}. Furthermore, a prominent anisotropy in the incommensurability, $\delta_{b}<\delta_{a}$, can be seen. With the hopping anisotropy $r=-0.1$, we obtain $\delta_{a}\approx (0.30\pm 0.01)\pi$ and $\delta_{b}\approx (0.235\pm 0.015)\pi$, which gives $\delta_{b}/\delta_{a} \approx 0.78$, again similar to the ratio $\delta_{b}/\delta_{a} \approx 0.6$ reported in the neutron scattering experiments.~\cite{Hinkov:2007} One may further adjust the parameters of this model to fit the experimental data, but that is not the goal of this paper. We have varied the chemical potential, $\mu$, to check how the dispersion relations vary with hole doping; the results for different doping levels are qualitatively similar. In the doping range $8\% \le n_{h} \le 20\%$, there are always weakly energy-dependent incommensurate excitations, and the incommensurabilities $\delta_{a}$ and $\delta_{b}$ increase with increasing doping level $n_{h}$, as shown in Fig.~\ref{Fig-doping}. 
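As a rough numerical sketch of how the hole doping follows from the band structure and chemical potential above (our own illustration: the grid size, conventions, and function names are ours, and reproducing the quoted $n_{h}\approx 10.07\%$ exactly would require the authors' precise conventions), one can fill the two sDDW bands $\lambda_{k,\pm}=\epsilon_{2k}\pm E_{k}$ with Fermi factors at $k_{B}T=0.05t$, working in units of $t$:

```python
import math

def fermi(e, kT):
    # Fermi-Dirac occupation; e is already measured from the chemical
    # potential (mu is absorbed into epsilon_2k in the text's conventions)
    x = e / kT
    if x > 40.0:
        return 0.0
    if x < -40.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(x))

def hole_doping(mu, t=1.0, tp=0.32, tpp=0.032, W0=0.65, r=-0.1,
                kT=0.05, n_grid=48):
    # Electron filling n of the two sDDW bands on an n_grid x n_grid
    # Brillouin-zone mesh; hole doping is n_h = 1 - n (half filling: n = 1).
    n = 0.0
    for i in range(n_grid):
        for j in range(n_grid):
            kx = -math.pi + 2.0 * math.pi * i / n_grid
            ky = -math.pi + 2.0 * math.pi * j / n_grid
            e1 = -2.0 * t * ((1 + r) * math.cos(kx) + (1 - r) * math.cos(ky))
            e2 = (4.0 * tp * math.cos(kx) * math.cos(ky) - mu
                  - 2.0 * tpp * ((1 + r) * math.cos(2 * kx)
                                 + (1 - r) * math.cos(2 * ky)))
            W = 0.5 * W0 * (math.cos(kx) - math.cos(ky))
            E = math.hypot(e1, W)  # E_k = sqrt(e1^2 + W^2)
            n += fermi(e2 - E, kT) + fermi(e2 + E, kT)
    return 1.0 - n / (n_grid * n_grid)
```

Raising $\mu$ fills the bands and lowers $n_{h}$; scanning $\mu$ in this way is one simple route to the doping levels quoted in the text.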
\begin{figure}[htbp] \begin{center} \includegraphics[width=\linewidth]{sDDW07x_T_RPA0.65t.eps} \caption{(Color online) Constant energy cuts of Im$\chi^{+-}_{RPA}(q,\omega)$ along the $a^{*}$-axis when $q_{y}=\pi/b$ and $0.1t \le \omega \le 0.6t$ for the sDDW order. The weakly energy-dependent incommensurate peak positions are marked with red dashed lines. The results for Im$\chi^{zz}_{RPA}(q,\omega)$ are similar and omitted.} \label{Fig-sDDWx} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=\linewidth]{doping_dep.eps} \caption{Doping dependence of the incommensurabilities $\delta_{a}$ and $\delta_{b}$. Here $\mu$ is adjusted to obtain different doping levels, and all the other parameters are the same as in Fig.~\ref{Fig-sDDWx}.} \label{Fig-doping} \end{center} \end{figure} Note that hopping anisotropy is not necessary for the existence of the nearly vertical dispersions. To demonstrate this, the numerical results with isotropic hopping are plotted in Fig.~\ref{Fig-sDDWiso}. Here $r$ is set to $0$, $\mu=-0.806t$, and the hole doping level is $n_{h}=10.03\%$. All the other parameters are the same as in Fig.~\ref{Fig-sDDWx}. One can still find nearly vertical dispersions with incommensurability $\delta_{a}\approx (0.255\pm 0.015)\pi$ even without the hopping anisotropy.~\cite{Jiang:2011} \begin{figure}[htbp] \begin{center} \includegraphics[width=\linewidth]{sDDW07isox_T_RPA0.65t.eps} \caption{(Color online) Constant energy cuts of Im$\chi^{+-}_{RPA}(q,\omega)$ along the $a^{*}$-axis when $q_{y}=\pi/b$ for the sDDW order. 
Here $r=0$, $\mu=-0.806t$, and all the other parameters are the same as in Fig.~\ref{Fig-sDDWx}.} \label{Fig-sDDWiso} \end{center} \end{figure} The neutron scattering experiments show vertical dispersions in the energy range $30~{\rm meV} \le \omega \le 60~{\rm meV}$,~\cite{Hinkov:2007} and the numerical results exhibit nearly vertical dispersions up to $\omega \le 0.6t = 90~{\rm meV}$ with the chosen parameters, similar to the experiments. It is interesting to see how the excitation peaks evolve at higher energies, so in Fig.~\ref{Fig-sDDWx-hi} we present the numerical results along the $a^{*}$-axis for $0.7t \le \omega \le 1.4t$, where all the parameters are the same as in Fig.~\ref{Fig-sDDWx} except for the energy $\omega$. The results along the $b^{*}$-axis are again so similar that they are not shown here. In Fig.~\ref{Fig-sDDWx-hi}, one finds that the high-energy spin excitations are strongly energy dependent. The incommensurate peaks move toward $q=Q$ in the range $0.7t \le \omega \le 0.9t$, and eventually disappear at $\omega \approx 1.0t$, where the intensity around $q=Q$ is enhanced. When $\omega \approx 1.1t$, a central peak emerges at the commensurate position $q=Q$. As the energy is further increased, the central peak splits into two peaks deviating from $Q$ with incommensurabilities $\delta_{a}^{\prime}$ and $\delta_{b}^{\prime}$, which are marked by dashed lines. Unlike the low-energy incommensurabilities $\delta_{a}$ and $\delta_{b}$, $\delta_{a}^{\prime}$ and $\delta_{b}^{\prime}$ are energy dependent and increase with increasing energy. Note that to observe $\delta_{a}^{\prime}$ and $\delta_{b}^{\prime}$, the neutron scattering experiment needs to be performed at very high energy ($\omega \ge 1.1t = 165~{\rm meV}$). 
\begin{figure}[htbp] \begin{center} \includegraphics[width=\linewidth]{sDDW07x_hi_T_RPA0.65t.eps} \caption{(Color online) Constant energy cuts of Im$\chi^{+-}_{RPA}(q,\omega)$ along the $a^{*}$-axis when $q_{y}=\pi/b$ and $0.7t \le \omega \le 1.4t$ for the sDDW order. The energy-dependent incommensurate peak positions are marked with blue dashed lines.} \label{Fig-sDDWx-hi} \end{center} \end{figure} The reason for the unusual vertical dispersions at low energies and the different behavior at high energies can be understood by examining the imaginary part of Eq.~(\ref{Eq-sDDW-diag}). In this equation, the first two terms are interband contributions arising from the scattering from the upper band ($\epsilon_{2k}+E_{k}$) to the lower band ($\epsilon_{2k+q}-E_{k+q}$), and the scattering from the lower band ($\epsilon_{2k}-E_{k}$) to the upper band ($\epsilon_{2k+q}+E_{k+q}$). The last two terms, on the other hand, are intraband scattering. For the purpose of illustration, an example of the band structure and the scattering processes is plotted in Fig.~\ref{Fig-Scatt}, where the interband and intraband scattering are shown with arrows. \begin{figure}[htbp] \begin{center} \includegraphics[width=\linewidth]{Scattering.eps} \caption{(Color online) Energy spectrum $(\lambda_{k,\pm}+\mu)$ of the sDDW system as $(k_{x}a,k_{y}b)$ goes along the route $(0,0) \rightarrow (\pi,0) \rightarrow (\pi,\pi) \rightarrow (0,0)$. The blue (red) arrows indicate the interband (intraband) scattering, and the brown line is the chemical potential $\mu$. The parameters are the same as in Fig.~\ref{Fig-sDDWx}.} \label{Fig-Scatt} \end{center} \end{figure} The interband and intraband terms of Eq.~(\ref{Eq-sDDW-diag}) for $0.1t\le\omega\le 0.6t$ are plotted in Fig.~\ref{Fig-interband} and Fig.~\ref{Fig-intraband}, respectively. The results for higher energies, $0.7t\le\omega\le 1.4t$, are not shown because they are very similar. 
From Fig.~\ref{Fig-interband} and Fig.~\ref{Fig-intraband}, one finds that the intensity near $q=Q$ is mainly from the contribution of the interband terms, whereas the contribution of the intraband terms arises when $q$ is away from $Q$. From Eq.~(\ref{Eq-sDDW-diag}), we can see that at $q=Q$, the intraband terms vanish and only the interband terms contribute, leading to magnetic excitations peaked around $\omega \approx 1.1t$. In the vicinity of $q=Q$, the interband terms still dominate, and we may expand them to first order in $\delta q\equiv |q-Q|$ and obtain \begin{eqnarray*} &\frac{-\pi}{N} \sum_{k} \left[n_F(\epsilon_{2k} \pm E_k) - n_F(\epsilon_{2k+q} \mp E_{k+q})\right] \times \\ &\delta(\omega - \epsilon_{2k} \mp E_k + \epsilon_{2k+q} \mp E_{k+q})\\ \simeq& \frac{\pi}{N} \sum_{k} \left[ n_F(\epsilon_{2k} \mp E_k) - n_F(\epsilon_{2k} \pm E_{k})\right. \\ & \left. + \frac{\partial n_F(E)}{\partial E}|_{E=\epsilon_{2k} \mp E_k} \vec{\triangledown}_k (\epsilon_{2k} \mp E_k)\cdot \delta q \right] \times \\ &\delta(\omega \mp 2E_k + \vec{\triangledown}_k (\epsilon_{2k} \mp E_k)\cdot \delta q ), \end{eqnarray*} which is peaked at $\delta q = (\pm 2E_{k}-\omega)/\left[ \vec{\triangledown}_{k} (\epsilon_{2k} \mp E_k) \right]$. However, for low energies, the energy conservation condition cannot be satisfied unless $E_k$ is very small, which diminishes the difference between the Fermi functions and thus suppresses the intensity. Therefore, there is no enhanced peak in the vicinity of $q=Q$ at low energies. At higher energies, the energy conservation condition can be satisfied, the intensity at the incommensurate positions ($\delta_{a}^{\prime}$ and $\delta_{b}^{\prime}$) is enhanced, and the excitation peaks can be seen for $\omega \apprge 1.1t$ in Fig.~\ref{Fig-sDDWx-hi}. In contrast, away from $q=Q$, the intraband terms dominate. 
The peak positions of the energy conservation factor, $\delta(\omega - \epsilon_{2k} \mp E_k + \epsilon_{2k+q} \pm E_{k+q})$, move away from $Q$ with increasing $\omega$. On the other hand, the coherence factor $\left[ 1+ (\epsilon_{1k} \epsilon_{1k+q} + W_k W_{k+q}) / (E_k E_{k+q}) \right]$ vanishes at $q=Q$ and grows with increasing $|q-Q|$. For the chosen parameters, the energy dependence of these two opposite effects almost cancels out in the energy range $0 \le \omega \le 0.6t$, leading to the weakly energy-dependent positions of the local maxima ($\delta_{a}$ and $\delta_{b}$) in Fig.~\ref{Fig-intraband}. Such a dispersionless feature is sensitive to the parameters because it depends on whether the contribution of the intraband terms overcomes that of the interband terms away from $Q$. The nature of the excitation peaks due to the interband terms is distinct from that due to the intraband terms. The dominant contribution of the interband terms is determined by the energy conservation factor and the Fermi functions, leading to sharper excitation peaks at $(\pi \pm \delta_{a}^{\prime},\pi)$ and $(\pi,\pi \pm \delta_{b}^{\prime})$, whereas the intraband terms also depend on the coherence factor, resulting in relatively broadened local maxima instead of sharp peaks at $(\pi \pm \delta_{a},\pi)$ and $(\pi,\pi \pm \delta_{b})$. 
\begin{figure}[htbp] \begin{center} \includegraphics[width=\linewidth]{sDDW07x_inter_T.eps} \caption{(Color online) Constant energy cuts of the interband terms of Im$\chi_{diag}(q,\omega)$ in Eq.~(\ref{Eq-sDDW-diag}) along the $a^{*}$-axis when $q_{y}=\pi/b$ for $0.1t \le \omega \le 0.6t$.} \label{Fig-interband} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=\linewidth]{sDDW07x_intra_T.eps} \caption{(Color online) Constant energy cuts of the intraband terms of Im$\chi_{diag}(q,\omega)$ in Eq.~(\ref{Eq-sDDW-diag}) along the $a^{*}$-axis when $q_{y}=\pi/b$ for $0.1t \le \omega \le 0.6t$.} \label{Fig-intraband} \end{center} \end{figure} \section{\label{Sec:tDDW}The triplet $d$-density wave order} We now consider the tDDW order, and choose the spin quantization axis to be the $z$-axis without loss of generality, that is, \begin{equation} \langle c_{k+Q,\alpha}^{\dagger} c_{k,\beta}\rangle \propto i (\hat{d}\cdot \vec{\sigma}_{\alpha\beta})W_{k}=i (\hat{z}\cdot \vec{\sigma}_{\alpha\beta})W_{k}. \end{equation} The tDDW mean-field Hamiltonian is therefore \begin{eqnarray} \mathcal{H}_{tDDW}&=&\sum_{\sigma} \sum_{k} \left( \epsilon_k c_{k,\sigma}^{\dagger} c_{k,\sigma} + \epsilon_{k+Q} c_{k+Q,\sigma}^{\dagger} c_{k+Q,\sigma} \right. \nonumber\\ && \left. + i\sigma W_k c_{k,\sigma}^{\dagger} c_{k+Q,\sigma} + h.c. \right), \end{eqnarray} which has the same eigenvalues as the sDDW Hamiltonian. The explicit forms of the one-loop and RPA susceptibilities are given in Eqs.~(\ref{Eq-tDDWlong})--(\ref{Eq-tDDWRPA2}) in Appendix \ref{Sec:chi}. The constant energy cuts of the imaginary part of the spin susceptibility of the tDDW order along the $a^{*}$-axis are shown in Fig.~\ref{Fig-tDDW}. The hopping anisotropy $r$ is set to $0$ for simplicity, and the parameters are the same as in Fig.~\ref{Fig-sDDWiso}. 
The longitudinal susceptibility behaves similarly to that of the sDDW order, whereas the transverse susceptibility is significantly different in the vicinity of $q=Q$. In comparison with the sDDW order, the intensity of Im$\chi^{+-}_{RPA}(q,\omega)$ of the tDDW order is suppressed in the vicinity of $q=Q$. The intensity exhibits a V-shaped curve around $q=Q$ at $\omega=0.1t$, which evolves gradually to a U-shaped curve at $\omega=0.6t$. Here we can also see the nearly vertical dispersion of the incommensurate spin excitations, $\delta_a \approx (0.255\pm 0.015)\pi$. Notice that for unpolarized measurements, with $I \propto (\chi^{zz} + 2\chi^{+-})/3$, there will still be a vertical dispersion away from $q=Q$. The difference between the sDDW and tDDW orders is that in $\chi_{\text{diag}}^{+-}(q,\omega)$ of the tDDW order, Eq.~(\ref{Eq-tDDW-T-diag}), the $W_{k}W_{k+q}$ term of the coherence factor changes sign and reduces the interband contribution. As a result, the intensity in the vicinity of $q=Q$ is suppressed. The significant difference between the transverse and the longitudinal susceptibilities should permit one to distinguish the singlet and the triplet orders in spin-polarized measurements. On the other hand, the sign change of $W_{k}W_{k+q}$ does not affect the intraband terms as much as the interband terms, so the nearly vertical dispersions due to the intraband contribution can still be seen away from $q=Q$. \begin{figure}[htbp] \begin{center} \includegraphics[width=\linewidth]{tDDW07_L_RPA0.65t.eps} \includegraphics[width=\linewidth]{tDDW07_T_RPA0.65t.eps} \caption{(Color online) Constant energy cuts of Im$\chi_{RPA}^{zz}(q,\omega)$ (upper) and Im$\chi_{RPA}^{+-}(q,\omega)$ (lower) for the tDDW order along the $a^{*}$-axis when $q_y=\pi/b$. 
The parameters are the same as in Fig.~\ref{Fig-sDDWiso}.} \label{Fig-tDDW} \end{center} \end{figure} \section{\label{Sec:SDW}The spin density wave order} Finally, we also consider the SDW order, which has the order parameter \begin{equation} \langle c_{k+Q,\alpha}^{\dagger} c_{k,\beta}\rangle \propto (\hat{z}\cdot \vec{\sigma}_{\alpha\beta}) \Delta_{s}. \end{equation} The SDW mean-field Hamiltonian is \begin{eqnarray} \mathcal{H}_{SDW}&=&\sum_{\sigma} \sum_{k} \left( \epsilon_k c_{k,\sigma}^{\dagger} c_{k,\sigma} + \epsilon_{k+Q} c_{k+Q,\sigma}^{\dagger} c_{k+Q,\sigma} \right. \nonumber\\ && \left. + \sigma \Delta_{s} c_{k,\sigma}^{\dagger} c_{k+Q,\sigma} + h.c. \right), \end{eqnarray} where the eigenvalues now become $\lambda^{S}_{k,\pm}=\epsilon_{2k}\pm E^{S}_{k}$ with $E^{S}_{k} \equiv \sqrt{\epsilon_{1k}^2 + \Delta_{s}^2}$. The explicit forms of the spin susceptibilities are given in Eqs.~(\ref{Eq-SDWlong})--(\ref{Eq-SDW-T-off}) in Appendix \ref{Sec:chi}. The constant energy cuts of Im$\chi_{RPA}^{zz}(q,\omega)$ and Im$\chi_{RPA}^{+-}(q,\omega)$ for the SDW order along the $a^{*}$-axis are plotted in Fig.~\ref{Fig-SDW}. Here we set the SDW gap to $\Delta_{s}=0.65t$ and $\mu=-1.026t$. The hole doping level is $n_{h}=10.02\%$. The results are interesting: Im$\chi_{RPA}^{zz}(q,\omega)$ and Im$\chi_{RPA}^{+-}(q,\omega)$ for the SDW order appear `interchanged' in comparison with those for the tDDW order in Fig.~\ref{Fig-tDDW}. In addition to this interchange, there is also a difference in the intensity around $q=Q$ between the tDDW and SDW orders, which could be observed if spin-polarized experiments with high resolution could be achieved, although one cannot be sure because of the non-universal nature of this difference. Away from $q=Q$, we can also see the vertical dispersions of the incommensurate spin excitations with $\delta_a \approx 0.28\pi$. Again, for unpolarized measurements, there will still be a vertical dispersion away from $q=Q$. 
To understand the swap of the susceptibilities between the tDDW and SDW orders, we should compare Eq.~(\ref{Eq-sDDW-diag}) and Eq.~(\ref{Eq-tDDW-T-diag}) for the tDDW with Eq.~(\ref{Eq-SDW-L-diag}) and Eq.~(\ref{Eq-SDW-T-diag}) for the SDW; we can see that at $q=Q$, $W_{k}W_{k+q}=-W^2_{k}$ in the tDDW, which leads to a minus sign, while $\Delta^2_{s}$ in the SDW does not. Therefore, the form of the coherence factors of the SDW is opposite to that of the tDDW in the vicinity of $q=Q$. As a result, the intensity of Im$\chi_{RPA}^{+-}(q,\omega)$ for the SDW in the vicinity of $q=Q$ is enhanced due to the dominant interband contribution, whereas the intensity of Im$\chi_{RPA}^{zz}(q,\omega)$ is suppressed in the vicinity of $q=Q$. Thus, the difference in the coherence factors leads to the ``interchanging'' behavior between the tDDW and SDW; the different momentum dependence of the order parameters also leads to distinct momentum dependence around $q=Q$. Away from $q=Q$, on the other hand, both Im$\chi_{RPA}^{+-}(q,\omega)$ and Im$\chi_{RPA}^{zz}(q,\omega)$ show vertical dispersion relations due to the intraband contributions. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{SDW07_L_RPA0.65t.eps} \includegraphics[width=\linewidth]{SDW07_T_RPA0.65t.eps} \caption{Constant energy cuts of Im$\chi_{RPA}^{zz}(q,\omega)$ (upper) and Im$\chi_{RPA}^{+-}(q,\omega)$ (lower) for the SDW order along the $a^{*}$-axis when $q_y=\pi/b$. Here $\Delta_{s}=0.65t$, $\mu=-1.026t$, and the other parameters are the same as in Fig.~\ref{Fig-tDDW}.} \label{Fig-SDW} \end{figure} \section{Conclusion} In conclusion, we have attempted to provide an explanation of a recent neutron scattering measurement in an underdoped high-temperature superconductor, which points to the fact that the pseudogap state is not a continuation of the superconducting state below $T_{c}$. The salient feature is a vertical dispersion in the spin excitations seen above $T_{c}$, as opposed to an hourglass-shaped dispersion seen below $T_{c}$. 
Although couched in the language of Hartree-Fock theory augmented by the RPA, a thorough analysis of the properties of various alternative order parameters should be a useful guide. We also checked a band structure containing electron pockets as well, and the robust aspects of the conclusions were unchanged. The vertical dispersion feature appears to persist in the doping range $8\% \le n_{h} \le 20\%$. At higher energies, we find energy dependent incommensurability due to the interband contributions. We also contrast the spin dynamics of the tDDW and SDW orders, which exhibit different features around $q=Q$; these could in principle allow one to identify the spin nature of the underlying phase in a spin-polarized neutron scattering experiment with high resolution. The transverse and the longitudinal spin dynamics are interchanged between SDW and tDDW. In principle, a whole class of higher angular momentum particle-hole condensates are possible. Experimental evidence of these order parameters would be a major step forward. The tDDW is such an unconventional hidden order that its discovery would be of great importance; note that the tDDW is even invariant under time reversal. \begin{acknowledgments} This work is supported by NSF under Grant No. DMR-1004520. \end{acknowledgments}
\section{Introduction} In this paper, we introduce an unknotting-type number of knot projections (Definition~\ref{def1}) as follows. Every double point in a knot projection can be spliced in two different ways (Figure~\ref{f2}), one of which gives another knot projection (Definition~\ref{dfn1}). A special case of such operations is a first Reidemeister move $\operatorname{RI}^-$, as shown in Figure~\ref{f2}. If such an operation is not of type $\operatorname{RI}^-$, it is denoted by $S^-$. Beginning with an $n$-crossing knot projection $P$, there are many sequences of $n$ splices of type $\operatorname{RI}^-$ and type $S^-$, all of which end with the simple closed curve $O$. We then define the number $u^-(P)$ as the minimum number of splices of type $S^-$ among such sequences (Definition~\ref{dfn3}). For this number, we determine the set of knot projections with $u^-(P)=1$ or $u^-(P)=2$ (Theorem~\ref{thm1}, Section~\ref{sec3}). We also provide an efficient method to obtain a knot projection $P$ with $u^- (P)$ $=$ $n$ for a given $n$ (Move~\ref{lemma2}). Further, for a connected sum (Definition~\ref{dfn_connected}) of knot projections, we show the additivity of $u^-$ under the connected sum (Section~\ref{sec7}). Thus, to calculate $u^-(P)$ for every knot projection $P$, it is sufficient to compute $u^-(P)$ for every prime knot projection $P$. Here, a knot projection is called \emph{prime} (Definition~\ref{dfn_connected}) if it is not the simple closed curve and is not a connected sum of two knot projections, each of which is not the simple closed curve. We apply this unknotting-type number to the theory of crosscap numbers (Sections~\ref{sec4}--\ref{sec6}). Let $P$ be a knot projection and $D_P$ a knot diagram obtained by adding over/under information to each double point of $P$. Let $K(D_P)$ be the knot type (Definition~\ref{def1}) of which $D_P$ is a representative.
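The number $u^-(P)$ can be explored by brute force on small examples. The sketch below is not part of the paper: it encodes a knot projection as a double-occurrence (Gauss) sequence and assumes that the splice yielding another knot projection replaces the cyclic word $A\,d\,B\,d\,C$ by $A\,\overline{B}\,C$ (the subword between the two occurrences of $d$ reversed and $d$ deleted), the splice being of type $\operatorname{RI}^-$ exactly when the two occurrences of $d$ are cyclically adjacent.

```python
from functools import lru_cache

# Brute-force sketch (not an algorithm from the paper): compute u^-(P) for a knot
# projection P encoded as a double-occurrence (Gauss) sequence on the circle.
# Assumption: the splice producing another knot projection acts on the word
# A d B d C by giving A reverse(B) C, and is of type RI^- exactly when the two
# occurrences of d are cyclically adjacent (an isolated kink); otherwise S^-.

def canonical(word):
    """Canonical form of a cyclic word up to rotation, reflection, relabelling."""
    best = None
    for w in (word, word[::-1]):
        for r in range(max(1, len(w))):
            rot = w[r:] + w[:r]
            relabel, out = {}, []
            for c in rot:
                relabel.setdefault(c, chr(ord('a') + len(relabel)))
                out.append(relabel[c])
            cand = ''.join(out)
            if best is None or cand < best:
                best = cand
    return best

@lru_cache(maxsize=None)
def u_minus(word):
    """Minimum number of S^- splices reducing `word` to the empty word."""
    if not word:
        return 0
    best = len(word) // 2  # crude upper bound: every splice of type S^-
    for d in set(word):
        i, j = word.index(d), word.rindex(d)
        inner, outer = word[i + 1:j], word[j + 1:] + word[:i]
        cost = 0 if (not inner or not outer) else 1  # RI^- iff occurrences adjacent
        rest = canonical(word[:i] + inner[::-1] + word[j + 1:])
        best = min(best, cost + u_minus(rest))
    return best

print(u_minus(canonical('aa')))      # one-kink projection: 0
print(u_minus(canonical('abcabc')))  # trefoil projection: 1
```

The search is exponential and only meant for tiny examples; realizability of the intermediate words on $S^2$ is inherited from the input, since every step corresponds to an actual splice of a curve.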
In particular, if $D_P$ is an alternating knot diagram (Definition~\ref{def1}), we simply denote $K(D_P)$ by $K^{alt}(P)$. In this paper, in general, we show that the unknotting-type number $u^-(P)$ gives an upper bound on the crosscap number of knots (Definition~\ref{dfn_cross}), i.e., $C(K(D_P)) \le u^-(P)$ (Theorem~\ref{thm2}, Section~\ref{sec4}). In the special case $K(D_P)=K^{alt}(P)$, as a corollary of the inequality, we can easily determine the set of alternating knots with $C(K)=1$ (corresponding to a classical result, Section~\ref{sec5}) or $C(K)=2$ (corresponding to a new result, Section~\ref{sec6}). Similarly, by using type $S^+$ ($\operatorname{RI}^+$,~resp.), which is the inverse operation of type $S^-$ ($\operatorname{RI}^-$,~resp.), we also introduce $u(P)$, the minimum number of operations of types $S^{\pm}$ in a sequence from $P$ to $O$ consisting of operations of types $S^{\pm}$ and $\operatorname{RI}^{\pm}$. These studies are motivated by Observation~\ref{ob1}, where we use crosscap numbers in the table of KnotInfo \cite{CL} and a table of knot projections up to eight double points \cite{IT8c}. \begin{observe}\label{ob1} For every prime knot projection $P$ with less than nine double points, $C(K^{alt} (P))$ $=$ $u(P)$ $=$ $u^-(P)$. \end{observe} In Section~\ref{sec7}, for a connected sum $P \sharp P'$, we give examples of $u(P \sharp P')$ and $u^- (P \sharp P')$ satisfying $u(P \sharp P')$ $<$ $u^- (P \sharp P')$. We also pose the question of whether every knot projection $P$ satisfies $C(K(D_P))$ $\le u(P)$. Finally, we would like to mention that crosscap numbers of knots have been discussed in the literature. Clark showed that for a knot $K$, $C(K)=1$ if and only if $K$ is a $2$-cable knot (in particular, for an alternating knot $K$, $K$ is a $(2, p)$-torus knot) \cite{clark}. Clark also obtained the upper bound $C(K) \le 2 g(K)+1$, where $g(K)$ is the orientable genus of $K$ (this inequality holds for every knot $K$) \cite{clark}.
Murakami and Yasuhara \cite{MY} gave the example $K = 7_4$ with $C(K)=2g(K)+1$ and the sharp bound $C(K) \le \lfloor n(K)/2 \rfloor$ for the minimum crossing number $n(K)$ of a knot $K$ (note that, as we mention in the following, Hatcher-Thurston \cite{HaT} includes the particular case $7_4$, which is discussed \emph{explicitly} in Hirasawa-Teragaito \cite{HT}). Murakami and Yasuhara \cite{MY} also gave a necessary and sufficient condition for the crosscap number to be additive under the connected sum. Historically, the orientable knot genus has been well studied, and a general algorithm for its computation is known. For low crossing number knots, effective calculations are made from genus bounds using invariants such as the Alexander polynomial and Heegaard Floer homology. However, crosscap numbers are harder to compute. In this situation, crosscap numbers are known for several families: Teragaito (torus knots) \cite{Tra}, Hatcher-Thurston ($2$-bridge knots, in theory) \cite{HaT}, Hirasawa-Teragaito ($2$-bridge knots, explicitly) \cite{HT}, and Ichihara-Mizushima (\emph{many} pretzel knots) \cite{IM}. Adams and Kindred \cite{AK} determine the crosscap number of any alternating knot in theory. For a given $n$-crossing alternating knot diagram, consider the $2^n - 1$ non-orientable state surfaces (Definition~\ref{state}); some of these surfaces achieve the crosscap number of the knot. By using coefficients of the colored Jones polynomials to establish two-sided bounds on crosscap numbers, Kalfagianni and Lee \cite{KL} improve the efficiency of these computations. They apply this improved efficiency to calculate hundreds of crosscap numbers explicitly and rapidly. In this paper, we relate our unknotting-type number $u^-(P)$ of a knot projection $P$ to the methods of Adams-Kindred \cite{AK}, which at first glance seem unrelated to our number $u^-(P)$.
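The Euler-characteristic bookkeeping behind such state surfaces is elementary and can be sketched directly. The helper below is illustrative only (the function names are ours): a surface built from $|S_\sigma|$ disks and $n$ half-twisted bands has $\chi = |S_\sigma| - n$, so any non-orientable state surface gives the bound $C(K) \le 1-\chi$.

```python
# Illustrative bookkeeping (names are ours, not from the paper or [AK]):
# a state surface made of |S_sigma| disks joined by n half-twisted bands has
# chi = |S_sigma| - n; a non-orientable spanning surface bounds the crosscap
# number by C(K) <= 1 - chi = 1 - |S_sigma| + n.

def state_surface_chi(num_state_circles, num_crossings):
    # each disk contributes +1 to chi; each band, glued along two arcs, contributes -1
    return num_state_circles - num_crossings

def crosscap_upper_bound(num_state_circles, num_crossings):
    # bound from a non-orientable spanning surface: C(K) <= 1 - chi
    return 1 - state_surface_chi(num_state_circles, num_crossings)

# Example: the standard 3-crossing trefoil diagram has a non-Seifert state with
# three state circles, giving chi = 0 (a Moebius band) and the sharp bound C <= 1.
print(state_surface_chi(3, 3), crosscap_upper_bound(3, 3))  # 0 1
```

This is exactly the computation that reappears in the proof of Theorem~\ref{thm2}, where the number of bands contributing $-1$ beyond the $\operatorname{RI}^-$ splices is $u^-(P)$.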
We also study crosscap numbers from a different viewpoint, obtaining a state surface for an alternating knot by using our unknotting-type number $u^-(P)$ of a knot projection $P$, and determine the set of alternating knots with crosscap number two. \section{Preliminaries} \begin{definition}[knot, knot diagram, knot projection, alternating knot]\label{def1} A \emph{knot} is an embedding of a circle into $\mathbb{R}^3$. We say that knots $K$ and $K'$ are \emph{equivalent} if there is a homeomorphism of $\mathbb{R}^3$ onto itself which maps $K$ onto $K'$. Each equivalence class of knots is called a \emph{knot type}. A \emph{knot projection} is the image of a generic immersion from a circle into $S^2$ whose every singularity is a transverse double point. In this paper, a transverse double point of a knot projection is simply called a \emph{double point}. The \emph{simple closed curve} is the knot projection with no double points. A \emph{knot diagram} is a knot projection with over/under information at every double point. Throughout this paper, in general, a knot projection (knot diagram, resp.) is not distinguished from its mirror image. A double point with over/under information of a knot diagram is called a \emph{crossing}. As a special case, for a knot projection, one can arrange crossings in such a way that an under-path and an over-path alternate when traveling along the knot projection; the resulting knot diagram is called an \emph{alternating knot diagram}. For a knot $K$, if some knot diagram of $K$ is an alternating knot diagram, then $K$ is called an \emph{alternating knot}. \end{definition} \begin{definition}[splices, operations of type $S^-$ or type $\operatorname{RI}^-$, Seifert splice]\label{dfn1} For each double point, there are two ways to smooth the knot projection near the double point (Figure~\ref{f1}~(a) $N(d)$).
Namely, erase the transversal intersection of the knot projection within a small neighborhood of the double point and connect the four resulting ends by a pair of simple, nonintersecting arcs (Figure~\ref{f1}~(b) or (c) $N'(d)$). A replacement of $N(d)$ with $N'(d)$ is called a \emph{splice}. \begin{figure}[h!] \includegraphics[width=6cm]{f1.pdf} \caption{(a) : $N(d)$, (b) : $N'(d)$, and (c) : another $N'(d)$}\label{f1} \end{figure} If a connection of the four points in $S^2 \setminus N(d)$ is fixed, the connection is presented by dotted arcs as in Figure~\ref{f2}~(a-1) or Figure~\ref{f2}~(c-1) (ignoring the orientation). First, we consider a splice from (a-1) to (a-2) in Figure~\ref{f2}. A special case of such splices, as in (b-1) to (b-2), is called a \emph{splice of type} $\operatorname{RI}^-$ and is denoted by $\operatorname{RI}^-$. If the splice is not of type $\operatorname{RI}^-$, the operation is called a \emph{splice of type} $S^-$ and is denoted by $S^-$. Second, if we choose a connection presented by dotted arcs as in Figure~\ref{f2} (c-1), the splice from (c-1) to (c-2) in Figure~\ref{f2} is called a \emph{Seifert splice} or a splice of type \emph{Seifert}. A Seifert splice preserves the orientation of the knot projection, as in Figure~\ref{f2}. \end{definition} By definition, we have Fact~\ref{fact0}. \begin{fact}\label{fact0} Every splice is one of three types: $S^-$, $\operatorname{RI}^-$, or Seifert. \end{fact} \begin{figure}[h!] \includegraphics[width=12cm]{f2.pdf} \caption{Three types of splices are represented by the pairs ((a-1), (a-2)), ((b-1), (b-2)), and ((c-1), (c-2)).
In the operation from (c-1) to (c-2), the orientation is ignored in Definition~\ref{dfn1} and is not ignored in Definition~\ref{state2}.}\label{f2} \end{figure} \begin{remark} The operation considered here was introduced in \cite{Ca} (for the full-twisted version) and \cite{ItoShimizu} (for the half-twisted version); in \cite{ItoShimizu}, it is called the inverse of a \emph{half-twisted splice operation}, denoted by $A^{-1}$. \end{remark} By definition, it is easy to see Fact~\ref{fact1} (a known fact). \begin{fact}\label{fact1} Let $P$ be a knot projection with $n$ double points. There exist at most $2^n$ distinct sequences of splices of type $S^-$ and $\operatorname{RI}^-$ from $P$ to the simple closed curve $O$. Each sequence consists of $n$ splices in total. \end{fact} \begin{definition}[unknotting-type number $u^-(P)$]\label{dfn3} Let $P$ be a knot projection and $O$ the simple closed curve. The nonnegative integer $u^- (P)$ is defined as the minimum number of splices of type $S^-$ over all finite sequences of splices of type $S^-$ and of type $\operatorname{RI}^-$ that transform $P$ into $O$. \end{definition} \begin{example} Figure~\ref{ex0} gives examples of knot projections with $u^-(\widehat{1_1})$ $=$ $0$, $u^-(\widehat{3_1})$ $=$ $1$, and $u^-(\widehat{6_2})$ $=$ $2$. Here, letting $i$ be a positive integer, for a knot diagram $D = n_i$ in the famous table of \cite{Ro}, the corresponding knot projection is denoted by $\widehat{D}$ (for details, see \cite{IT8c}). \end{example} \begin{figure}[h!] \includegraphics[width=12cm]{ex0.pdf} \caption{$u^-(\widehat{1_1})$ $=$ $0$, $u^-(\widehat{3_1})$ $=$ $1$, and $u^-(\widehat{6_2})$ $=$ $2$.}\label{ex0} \end{figure} The definition of a connected sum of two knots is slightly different from that of two knot projections; the latter is not unique, as in Definition~\ref{dfn_connected}. \begin{definition}[a connected sum of two knot projections, a prime knot projection]\label{dfn_connected} Let $P_i$ be a knot projection ($i=1, 2$).
Suppose that the ambient $2$-spheres corresponding to $P_1, P_2$ are oriented. Let $p_i$ be a point on $P_i$ that is not a double point ($i=1, 2$). Let $d_i$ be a sufficiently small disk with center $p_i$ ($i=1, 2$) such that $d_i \cap P_i$ is an arc properly embedded in $d_i$. Let $\widetilde{d}_i$ $=$ $cl(S^2 \setminus d_i)$, $\widetilde{P}_i$ $=$ $P_i \cap \widetilde{d}_i$, and let $h :$ $\partial \widetilde{d}_1$ $\to$ $\partial \widetilde{d}_2$ be an orientation reversing homeomorphism with $h(\partial \widetilde{P}_1)$ $=$ $\partial \widetilde{P}_2$. Then $\widetilde{P}_1 \cup_h \widetilde{P}_2$ gives a knot projection in the oriented $2$-sphere $\widetilde{d}_1 \cup_h \widetilde{d}_2$. The knot projection $\widetilde{P}_1 \cup_h \widetilde{P}_2$ in the oriented $2$-sphere is denoted by $P_1 \sharp_{(p_1,~ p_2,~h)} P_2$ and is called a \emph{connected sum} of the knot projections $P_1$ and $P_2$ at the pair of points $p_1$ and $p_2$ (Figure~\ref{f3}). A connected sum of knot projections is often simply denoted by $P_1 \sharp P_2$ when no confusion is likely to arise. If a knot projection is not the simple closed curve and is not a connected sum of two knot projections, each of which is not the simple closed curve, it is called a \emph{prime knot projection}. \end{definition} \begin{figure}[h!] \includegraphics[width=12cm]{f3.pdf} \caption{A connected sum $P_1 \sharp_{(p_1,~p_2,~h)} P_2$ of two knot projections $P_1$ and $P_2$}\label{f3} \end{figure} \begin{definition}[the connected sum of two knots] Let $K_i$ be a knot ($i=1, 2$) and $D_i$ a knot diagram of $K_i$. Let $P_i$ be the knot projection corresponding to $D_i$. A connected sum $D_1 \sharp_{(p_1,~p_2,~h)} D_2$ is defined as a connected sum $P_1 \sharp_{(p_1,~p_2,~h)} P_2$ in Definition~\ref{dfn_connected}. Then, a knot having a knot diagram $D_1 \sharp_{(p_1,~p_2,~h)} D_2$ is called a connected sum of $K_1$ and $K_2$.
Since it is well known that a connected sum of $K_1$ and $K_2$ does not depend on $(p_1,~p_2,~h)$, the connected sum is denoted by $K_1 \sharp K_2$. \end{definition} \begin{definition}[crosscap number]\label{dfn_cross} The \emph{crosscap number} $C(K)$ of a knot $K$ is defined by $C(K)$ $=$ $\min \{ 1-\chi(\Sigma)~|~\textrm{$\Sigma$ is a non-orientable surface with $\partial \Sigma = K$} \}$, where $\chi(\Sigma)$ is the Euler characteristic of $\Sigma$. By convention, $C(K)=0$ if and only if $K$ is the unknot. \end{definition} \begin{definition}[set $\langle {\mathcal{S}} \rangle$]\label{ri_eq_notation} Let $\operatorname{RI}^+$ be the inverse of a splice of type $\operatorname{RI}^-$ (Figure~\ref{f5}). Let $P$ and $P'$ be knot projections. We say that $P \sim P'$ if $P$ and $P'$ are related by a finite sequence of operations of types ${\operatorname{RI}^{\pm}}$. It is easy to see that $\sim$ defines an equivalence relation. Let $\mathcal{S}$ be a set of knot projections. Let $\langle {\mathcal{S}} \rangle$ $=$ $\{ P~|$ $P \sim Q$ $(\exists Q \in {\mathcal{S}})\}$. \end{definition} \begin{figure}[h!] \includegraphics[width=5cm]{f5.pdf} \caption{$\operatorname{RI}^+$}\label{f5} \end{figure} \begin{notation}[Sets $\mathcal{T}$, $\mathcal{R}$, $\mathcal{P}$]\label{not1} Let $l$, $m$, $n$, $p$, $q$, and $r$ be positive integers. Let $\mathcal{T}$ be the set of $(2, 2l-1)$-torus knot projections $(l \ge 2)$, $\mathcal{R}$ the set of $(2m, 2n-1)$-rational knot projections $(m \ge 1, n \ge 2)$, and $\mathcal{P}$ the set of $(2p, 2q-1, 2r-1)$-pretzel knot projections $(p, q, r \ge 1)$ as in Figure~\ref{f6}. Let $\langle {\mathcal{T}} \rangle \sharp \langle {\mathcal{T}} \rangle$ $=$ $\{P_1 \sharp P_2 ~|~P_1, P_2 \in \langle {\mathcal{T}} \rangle \}$. \end{notation} \begin{figure}[h!]
\includegraphics[width=12cm]{f6.pdf} \caption{A $(2, 2l-1)$-torus knot projection $(l \ge 2)$, a $(2m, 2n-1)$-rational knot projection $(m \ge 1, n \ge 2)$, and a $(2p, 2q-1, 2r-1)$-pretzel knot projection $(p, q, r \ge 1)$}\label{f6} \end{figure} \begin{notation}\label{not3} Let $l$, $m$, $n$, $p$, $q$, and $r$ be positive integers. Let $\mathcal{T}_{\operatorname{knot}}$ ($\mathcal{R}_{\operatorname{knot}}$, $\mathcal{P}_{\operatorname{knot}}$, resp.) be the set of $(2, 2l-1)$-torus knots $(l \ge 2)$ ($(2m, 2n-1)$-rational knots $(m \ge 1, n \ge 2)$, $(2p, 2q-1, 2r-1)$-pretzel knots $(p, q, r \ge 1)$,~resp.) as in Figure~\ref{f8}. Let $\mathcal{T}_{\operatorname{knot}} \sharp \mathcal{T}_{\operatorname{knot}}$ $=$ $\{ L \sharp L' ~|~L, L' \in {\mathcal{T}_{\operatorname{knot}}} \}$. \begin{figure}[h!] \includegraphics[width=12cm]{f8.pdf} \caption{Knot diagrams of knots in $\mathcal{T}_{\operatorname{knot}}$, $\mathcal{R}_{\operatorname{knot}}$, and $\mathcal{P}_{\operatorname{knot}}$ $(l \ge 2, m \ge 1, n \ge 2, p, q, r \ge 1)$}\label{f8} \end{figure} \end{notation} \begin{definition}\label{def_uk} Let $K$ be an alternating knot and $C(K)$ the crosscap number of $K$. Let $Z(K)$ be the set of knot projections obtained from alternating knot diagrams of $K$. Then, $\min_{P \in Z(K)} u^-(P)$ is an alternating knot invariant. Let $u^- (K)$ $=$ $\min_{P \in Z(K)} u^-(P)$. \end{definition} \section{Knot projections with $u^- (P)$ $\le$ $2$}\label{sec3} \begin{theorem}\label{thm1} Let $P$ be a knot projection. Let $\mathcal{T}$, $\mathcal{R}$, and $\mathcal{P}$ be the sets as in Notation~\ref{not1}. Then, \begin{enumerate} \item $u^-(P)$ $=$ $1$ if and only if $P$ $\in \langle \mathcal{T} \rangle$. \item $u^-(P)$ $=$ $2$ if and only if $P$ $\in \langle \mathcal{R} \rangle \cup \langle {\mathcal{P}} \rangle \cup \langle {\mathcal{T}} \rangle \sharp \langle {\mathcal{T}} \rangle$.
\end{enumerate} \end{theorem} For the proof of Theorem~\ref{thm1}, we prepare Notation~\ref{notation_p} and a move (Move~\ref{lemma2}). \begin{notation}\label{notation_p} Let $S^+$ be the inverse operation of a splice of type $S^-$. \end{notation} \begin{move}\label{lemma2} For any pair of simple arcs lying on the boundary of a common region, each of the two local replacements as in Figure~\ref{p1} is obtained by applying operations of type {$\operatorname{RI}^+$} $i-1$ times followed by a single operation of type $S^+$. \end{move} \begin{figure}[h!] \includegraphics[width=12cm]{f7.pdf} \caption{Two local replacements}\label{p1} \end{figure} For the proof of Theorem~\ref{thm1}, it is also worth mentioning Proposition~\ref{prop_move}. \begin{proposition}\label{prop_move} Let $P$ be a knot projection and $O$ the simple closed curve. Let $\langle \{O\} \rangle$ be as in Definition~\ref{ri_eq_notation}. The following conditions are equivalent. \begin{enumerate} \item[(A)] $P$ satisfies $u^- (P)=n$. \item[(B)] There exists $Q \in \langle \{O\} \rangle$ such that $P$ is obtained from $Q$ by applying Move~\ref{lemma2} successively $n$ times. \end{enumerate} \end{proposition} We now prove Theorem~\ref{thm1}. \begin{proof} \noindent (1). Starting from the simple closed curve $O$, if we apply a finite sequence consisting of a single $S^+$ and some {$\operatorname{RI}^+$}'s, which corresponds to Move~\ref{lemma2}, then we obtain $P \in {\mathcal{T}}$. If some ${\operatorname{RI}^+}$'s are further applied to $P$, we have $P' \in \langle {\mathcal{T}} \rangle$. Conversely, suppose that $P' \in \langle {\mathcal{T}} \rangle$. Then some $P (\in {\mathcal{T}})$ is obtained from $P'$ by applications of {$\operatorname{RI}^-$}'s. For $P (\in {\mathcal{T}})$, it is easy to find a single $S^-$ that yields an element of $\langle \{ O \} \rangle$. \noindent (2). By (1), the argument starts from $P \in {\mathcal{T}}$.
For each of the three marked places $\alpha$, $\beta$, and $\gamma$ in Figure~\ref{p2a}, we find a pair of simple arcs lying on the boundary of a common region. By applying Move~\ref{lemma2} to $\alpha$ ($\beta$, $\gamma$, resp.), we have $P \in {\mathcal{R}}$ ($P \in {\mathcal{P}}$, $P \in \langle {\mathcal{T}} \rangle \sharp \langle {\mathcal{T}} \rangle$,~resp.). Note that for $\gamma$, there is some ambiguity in how Move~\ref{lemma2} is applied; however, essentially the same argument works. See Figure~\ref{p2b}. Conversely, for $P$ $\in \langle \mathcal{R} \rangle \cup \langle {\mathcal{P}} \rangle \cup \langle {\mathcal{T}} \rangle \sharp \langle {\mathcal{T}} \rangle$, it is easy to find a single $S^-$ that yields an element of $\langle {\mathcal{T}} \rangle$. \end{proof} \begin{figure}[h!] \includegraphics[width=4cm]{p2a.pdf} \caption{Three places for an application of one of the local replacements}\label{p2a} \end{figure} \begin{figure}[h!] \includegraphics[width=10cm]{p2b.pdf} \caption{Move~\ref{lemma2} and $\gamma$}\label{p2b} \end{figure} \section{$u^- (P)$ and crosscap numbers}\label{sec4} \begin{definition}[state surface, cf.~\cite{AK}]\label{state} Let $P$ be a knot projection and $D_P$ a knot diagram obtained by adding over/under information to each double point of $P$. Let $K(D_P)$ be the knot type of which $D_P$ is a representative. By using the identification $S^2$ $=$ $\mathbb{R}^2 \cup \{ \infty \}$, a knot projection $P$ (knot diagram $D_P$, resp.) is considered on $\mathbb{R}^2$ in the following. For a knot projection, by applying a splice to each double point, we have an arrangement of disjoint circles on $\mathbb{R}^2$. The resulting arrangement of circles on $\mathbb{R}^2$ is called a \emph{state}, and the circles in a state are called \emph{state circles} (cf.~\cite{K}). For the state, every circle is filled with a disk, and nested disks are stacked in some order.
Then the surface is given by attaching half-twisted bands across the crossings of $D_P$ to obtain a surface spanning the knot $K(D_P)$. The twisting is fixed by the type of the crossing. The surface generated by this algorithm is called a \emph{state surface}. Suppose that a state $\sigma$ of a knot projection $P$ with exactly $n(P)$ double points is given by $n(P)$ ordered splices. Then, we denote $\sigma$ by $(\sigma_1, \sigma_2, \dots, \sigma_{n(P)})$, and the resulting arrangement of simple closed curves on a sphere is denoted by $S_\sigma$. Let $|S_{\sigma}|$ be the number of circles in $S_{\sigma}$. For a state $\sigma$ of a knot diagram $D_P$, let $\Sigma_{\sigma} (D_P)$ be the state surface obtained from the circles and half-twisted bands corresponding to $S_{\sigma}$ and $\sigma$. If every $\sigma_i$ is a Seifert splice, the state surface is orientable; it is called a \emph{Seifert state surface}, and the state is called a \emph{Seifert state} (see Fact~\ref{w_fact}). \end{definition} \begin{fact}[a well-known fact]\label{w_fact} For a positive integer $n$, among the $2^n$ states of an $n$-crossing knot diagram, all except the Seifert state give non-orientable state surfaces. For every alternating knot $K$, there exists a Seifert state surface whose genus is $g(K)$ via the algorithm in Definition~\ref{state2}. \end{fact} \begin{definition}[Seifert's algorithm]\label{state2} For a given knot, we orient it. Then, for every crossing of a knot diagram of the knot, if we choose the splice from (c-1) to (c-2) as in Figure~\ref{f2}, the state surface given by Definition~\ref{state} is orientable. The resulting surface does not depend on the orientation of the knot. Traditionally, this process is called \emph{Seifert's algorithm}. State circles appearing in the process of Seifert's algorithm are called \emph{Seifert circles}.
\end{definition} \begin{theorem}\label{thm2} Let $P$ be a knot projection and $D_P$ a knot diagram obtained by adding over/under information to each double point of $P$. Let $K(D_P)$ be the knot type having the knot diagram $D_P$. Let $C(K(D_P))$ be the crosscap number of $K(D_P)$. Then, \[ C(K(D_P)) \le u^- (P). \] \end{theorem} \begin{proof} Let $n(P)$ be the number of double points of $P$. In the following, we pick an appropriate state among the $2^{n(P)}$ candidates to find a state surface by using a sequence realizing $u^-(P)$. Consider a sequence of splices that realizes $u^-(P)$. Denote it by \[ P=P_1 \stackrel{Op_1}{\to} P_2 \stackrel{Op_2}{\to} \dots \stackrel{Op_{n(P)}}{\to} O. \] Then, define $\sigma$ $=$ $(\sigma_1, \sigma_2, \dots, \sigma_{n(P)})$ by assigning a splice $\sigma_i$ to each double point of $P$ as follows. \begin{itemize} \item If $Op_i = S^-$ at a double point $d$, the splice $\sigma_i$ is defined as $S^-$ (Figure~\ref{f1_proof}, the left half). \item If $Op_i = \operatorname{RI}^-$ at a double point $d$, the splice $\sigma_i$ is defined as the splice different from $\operatorname{RI}^-$ (Figure~\ref{f1_proof}, the right half). \end{itemize} \begin{figure}[h!] \includegraphics[width=10cm]{f4.pdf} \caption{$\sigma_i$ for $S^-$ (the left half) and $\sigma_i$ for $\operatorname{RI}^-$ (the right half)}\label{f1_proof} \end{figure} If every $Op_i$ is of type $\operatorname{RI}^-$, then $K(D_P)$ is the unknot, so $C(K(D_P))$ $=$ $0$ and $u^- (P)$ $=$ $0$, and the statement holds in this case. Thus, we may suppose that at least one $Op_i$ is of type $S^-$. Then, since $\sigma$ is not a Seifert state (Definition~\ref{state}), $\Sigma_{\sigma} (D_P)$ is non-orientable (cf.~Fact~\ref{w_fact}). For $K(D_P)$, let $\Sigma_0$ be a non-orientable surface that spans $K(D_P)$ and satisfies $\chi(\Sigma_0)$ $=$ $1-C(K(D_P))$. By the maximality of $\chi(\Sigma_0)$, \begin{align}\label{eq0} \chi(\Sigma_0) \ge \chi(\Sigma_{\sigma} (D_P)).
\end{align} Therefore, \begin{align}\label{eq1} 1 - C(K(D_P)) &= \chi(\Sigma_0) \ge \chi(\Sigma_{\sigma} (D_P)) = |S_{\sigma}| - n(P). \end{align} Note that a splice $\sigma_i$ corresponding to $S^-$ from $P_i$ to $P_{i+1}$ does not change the number of components, while a splice $\sigma_i$ corresponding to $\operatorname{RI}^-$ from $P_i$ to $P_{i+1}$ increases the number of components by exactly one (Figure~\ref{f1_proof}). Observing the process in the finite sequence from $P$ to the simple closed curve $O$, it is easy to see that $|S_{\sigma}|$ $=$ $1+$ $\sharp \{Op_i~|~ Op_i = \operatorname{RI}^- \}$. Note also that $n(P)$ $=$ $\sharp \{Op_i~|~ Op_i = \operatorname{RI}^- \}$ $+$ $\sharp \{Op_j~|~ Op_j = S^- \}$. Therefore, \[ |S_{\sigma}| - n(P) = 1 + \sharp \{Op_i~|~ Op_i = \operatorname{RI}^- \} - (\sharp \{Op_i~|~ Op_i = \operatorname{RI}^- \} + \sharp \{Op_j~|~ Op_j = S^- \}). \] Thus, \begin{equation}\label{eq2} \begin{split} 1 - C(K(D_P)) &\ge |S_{\sigma}| - n(P) \\ &= 1 - \sharp \{Op_j~|~ Op_j = S^- \} \\ & = 1 - u^-(P). \end{split} \end{equation} \end{proof} \begin{notation}\label{not4} For a knot diagram $D_P$ of a knot projection $P$, the particular state surface introduced in the proof of Theorem~\ref{thm2} is denoted by $\Sigma_u$ (it is a state surface corresponding to a sequence of splices that realizes $u^- (P)$). \end{notation} From this proof, considering when equality holds in (\ref{eq0}), (\ref{eq1}), and (\ref{eq2}), we have Lemma~\ref{lemma0}. \begin{lemma}\label{lemma0} Let $P$, $D_P$, $K(D_P)$, $C(K(D_P))$, and $\Sigma_0$ be as in Theorem~\ref{thm2}, i.e., let $P$ be a knot projection, $D_P$ a knot diagram obtained by adding over/under information to each double point of $P$, $K(D_P)$ the knot type having the knot diagram $D_P$, $C(K(D_P))$ the crosscap number of $K(D_P)$, and $\Sigma_0$ a non-orientable surface that spans $K(D_P)$ and satisfies $\chi(\Sigma_0)$ $=$ $1-C(K(D_P))$. Let $\Sigma_u$ be as in Notation~\ref{not4}.
Then, $\chi(\Sigma_0) = \chi(\Sigma_u)$ if and only if $C(K(D_P))=u^-(P)$. \end{lemma} \begin{proof} By Notation~\ref{not4}, we choose $\Sigma_{\sigma} (D_P)$ to be $\Sigma_u$ as in the proof of Theorem~\ref{thm2}. Then, \begin{align*} &\chi(\Sigma_0) = \chi(\Sigma_{\sigma} (D_P))~{\textrm{on}}~(\ref{eq0})\\ \Longleftrightarrow& 1 - C(K(D_P)) = |S_{\sigma}| - n(P)~{\textrm{on}}~(\ref{eq1}) \\ \Longleftrightarrow& C(K(D_P))=u^-(P)~{\textrm{on}}~(\ref{eq2}). \end{align*} \end{proof} To discuss equality in (\ref{eq0}), we review Fact~\ref{AKthm}. Here, we give Definition~\ref{gon} only, and then state their fact. \begin{definition}[$n$-gon]\label{gon} Let $P$ be a knot projection and let $\partial F$ be the boundary of the closure of a connected component $F$ of $S^2 \setminus P$. Let $n$ be a positive integer. Then, $\partial F$ is called an \emph{$n$-gon} if, when the double points of $P$ that lie on $\partial F$ are removed, the remainder consists of $n$ connected components, each of which is homeomorphic to an open interval. For a knot diagram, the definition of an $n$-gon is straightforward. \end{definition} Following \cite{AK}, a \emph{genus} is defined to be the orientable genus of a knot or $\frac{1}{2}$ of the crosscap number. \begin{fact}[Adams-Kindred, Theorem~3.3 of \cite{AK}]\label{AKthm} For every alternating knot diagram, the following algorithm $(1)$--$(3)$ always generates a minimal genus state surface. \end{fact} \noindent{\bf{Minimal genus algorithm.}} Let $D_P$ be an alternating knot diagram. \begin{itemize} \item[(1)] Find the smallest $m$ for which $D_P$ contains an $m$-gon. \item[(2)] If $m \le 2$, then we apply the splice(s) to the crossing(s) so that the $m$-gon becomes a state circle. If $m > 2$, then $m=3$ by a simple Euler characteristic argument on the knot projection (see, e.g., \cite[Lemma~3.1]{KL} or \cite[Lemma~2]{IT_triple1}). Then, choose a triangle of $D_P$.
From here, the process has two branches: for one branch, we apply splices to the crossings on this triangle's boundary so that the triangle becomes a state circle; for the other branch, we apply splices to these crossings the opposite way. \item[(3)] Repeat Steps (1) and (2) until each branch reaches a state. Of all resulting state surfaces, choose the one with the smallest genus. \end{itemize} Here, recall the notations $\langle {\mathcal{T}} \rangle$, $\langle {\mathcal{P}} \rangle$, and $\langle {\mathcal{R}} \rangle$ in Theorem~\ref{thm1} (i.e., Definition~\ref{ri_eq_notation} and Notation~\ref{not1}) and the notations $\Sigma_0$ and $\Sigma_u$ in the proof of Theorem~\ref{thm2} (i.e., see the statement of Lemma~\ref{lemma0} and Notation~\ref{not4}). We also prepare Notation~\ref{not2}. \begin{notation}\label{not2} If a knot type $K(D_P)$ has an alternating knot diagram $D_P$ obtained by adding over/under information to $P$, the knot type is denoted by $K^{alt}(P)$. \end{notation} By using Fact~\ref{AKthm}, we have Lemma~\ref{lemma3}. \begin{lemma}\label{lemma3} Let $P$ be a knot projection. Let $K^{alt}(P)$ be as in Notation~\ref{not2}. \noindent$(1)$ If $P \in \langle {\mathcal{T}} \rangle$, then $C(K^{alt}(P))$ $=$ $u^-(P)$ $= 1$. \noindent$(2)$ If $P$ $\in \langle \mathcal{R} \rangle \cup \langle {\mathcal{P}} \rangle \cup \langle {\mathcal{T}} \rangle \sharp \langle {\mathcal{T}} \rangle$, then $C(K^{alt}(P))$ $=$ $u^-(P)$ $= 2$. \end{lemma} \begin{proof} Note that the minimal genus algorithm of Fact~\ref{AKthm} gives a surface $\Sigma_0$ that spans $K^{alt}(P)$ and has the maximal Euler characteristic $\chi(\Sigma_0)$. Suppose that $P \in \langle {\mathcal{T}} \rangle$. Then, the set of alternating knot diagrams obtained from $P$ is fixed. Note that a state surface $\Sigma_u$ obtained from the computation of $u^-(P)$ is one of the surfaces generated by the minimal genus algorithm of Fact~\ref{AKthm} giving $\Sigma_0$. Then, $\chi(\Sigma_0) = \chi(\Sigma_u)$.
By Lemma~\ref{lemma0}, $C(K^{alt}(P))=u^-(P)$. Further, by Theorem~\ref{thm1}, $u^-(P)=1$. Then, we have (1). By replacing the assumption $P \in \langle {\mathcal{T}} \rangle$ with \begin{equation*}\label{condition_2} P \in \langle \mathcal{R} \rangle \cup \langle {\mathcal{P}} \rangle \cup \langle {\mathcal{T}} \rangle \sharp \langle {\mathcal{T}} \rangle, \end{equation*} the same argument gives (2). \end{proof} \section{Alternating knots with crosscap number one revisited}\label{sec5} As an application of Theorems~\ref{thm1} and \ref{thm2}, we give an elementary proof of the known result that for an alternating knot $K$, $C(K)=1$ if and only if $K$ is a $(2, 2l-1)$-torus knot ($l \ge 2$); see Proposition~\ref{fact2}. Before proving Proposition~\ref{fact2}, we need preliminary results. Note that Adams and Kindred obtain \cite[Corollary~6.1]{AK}. Here, we use the expression \cite[Theorem~3.3]{KL} of \cite[Corollary~6.1]{AK}. Note also Fact~\ref{w_fact}. \begin{fact}[an expression of Corollary~6.1 of \cite{AK}]\label{factKL} Let $K$ be an alternating knot, $C(K)$ the crosscap number of $K$, and $g(K)$ the orientable genus of $K$. Let $Y$ be the set of state surfaces with maximal Euler characteristics obtained from the minimal genus algorithm as in Fact~\ref{AKthm}. \\ Then, \begin{enumerate} \item If there exists $\Sigma$ $(\in Y)$ that is non-orientable, then $C(K)$ $=$ $1-\chi(\Sigma)$. \label{case1} \item If every $\Sigma$ $(\in Y)$ is orientable, then $C(K)$ $=$ $2-\chi(\Sigma)$ and $C(K)$ $=$ $2g(K)+1$. \label{case2} \end{enumerate} \end{fact} We also prepare the following technical lemma. \begin{lemma}\label{lemma4} Let $P$ be a knot projection that is the image of a generic immersion $f: S^1$ $\to$ $S^2$. For every pair of double points $d, d'$ of $P$, the configuration of $f^{-1}(\{ d, d' \})$ on $S^1$ is one of the two types $(a)$ and $(b)$.
In other words, any pair of two double points is represented by Figure~\ref{connection3}~$(a)$ or $(b)$, where dotted curves indicate the connections of double points. \begin{figure}[h!] \includegraphics[width=8cm]{connection3.pdf} \caption{Upper line: the configurations of the preimages of the double points $d$ and $d'$. Lower line: (a) the leftmost knot projection and (b) the two knot projections in the right half. Dotted curves indicate the connections of the double points.} \label{connection3} \end{figure} \end{lemma} \begin{proof} Every knot projection is a $1$-component curve, and thus, the possibilities of connections are shown in Figure~\ref{connection3}. \end{proof} \begin{lemma}\label{connection} If there exist two double points as in Figure~\ref{connection3}~$(a)$, then, after a Seifert splice at one of the two double points, any splice at the other double point yields another knot projection. \end{lemma} \begin{proof} By Lemma~\ref{lemma4} and Figure~\ref{connection4}, it is easy to see the claim. \end{proof} \begin{figure}[h!] \includegraphics[width=8cm]{connection4.pdf} \caption{Two Seifert splices on the two double points (upper arrow) and one Seifert splice and the other splice on the two double points (lower arrow)}\label{connection4} \end{figure} \begin{lemma}\label{eq4} \begin{align*} \langle {\mathcal{T}} \rangle = \{P~|~C(K^{alt}(P)) = u^-(P)=1~(\forall P)\}. \end{align*} \end{lemma} \begin{proof} For a knot projection $P$, let $K^{alt}(P)$ be a knot type as in Notation~\ref{not2}. Then, \begin{align*} \langle {\mathcal{T}} \rangle &\stackrel{\textrm{Lemma~\ref{lemma3}~(1)}}{\subset} \{P~|~C(K^{alt}(P)) = u^-(P)=1~(\forall P)\}\\ & \stackrel{\textrm{Theorem~\ref{thm1}~(1)}}{\subset} \langle {\mathcal{T}} \rangle. \end{align*} \end{proof} \begin{lemma}\label{eq5} \begin{align*} {\mathcal{T}_{\operatorname{knot}}} = \{ K~{\textrm{: an alternating knot}}~|~C(K)=1, u^-(K)=1 \}.
\end{align*} \end{lemma} \begin{proof} Note that $P$ uniquely determines an alternating knot diagram (up to reflection). In Lemma~\ref{eq4}, the left-hand side $\langle {\mathcal{T}} \rangle$ determines $\{ K^{alt}(P)~|~P \in \langle {\mathcal{T}} \rangle \}$, which equals $ \mathcal{T}_{\operatorname{knot}}$. On the other hand, the right-hand side $\{P~|~C(K^{alt}(P))$ $=$ $u^-(P)=1~(\forall P)\}$ determines $\{K^{alt}(P)$ $|$ $C(K^{alt}(P))$ $=$ $u^-(P)=1$ $(\forall P)\}$, which equals $\{ K :$ an alternating knot $|~C(K)=1,$ $u^-(K)=1 \}$ (cf.~Definition~\ref{def_uk}). \end{proof} \begin{proposition}\label{fact2} Let $\mathcal{T}_{\operatorname{knot}}$ be the set as in Notation~\ref{not3}. Let $K$ be an alternating knot and $C(K)$ the crosscap number of $K$. Let $u^- (K)$ be the integer as in Definition~\ref{def_uk}. Then, the following conditions are mutually equivalent. \noindent$(A)$ $K$ $\in \mathcal{T}_{\operatorname{knot}}$. \noindent $(B)$ $C(K)=1$. \noindent $(C)$ $u^-(K) =1$. \end{proposition} \begin{proof} \noindent (Proof of (A) $\Leftrightarrow$ (B).) Lemma~\ref{eq5} immediately implies that (A) $\Rightarrow$ (B). \noindent((B) $\Rightarrow$ (A).) Suppose that $C(K)=1$ and $K$ is an alternating knot. By definition, there exists an alternating knot diagram $D^{alt}(K)$ of $K$. Let $P$ be a knot projection obtained from $D^{alt}(K)$ by ignoring over/under information of the double points. By Fact~\ref{factKL}, we have the following Case~1 and Case~2 corresponding to (\ref{case1}) and (\ref{case2}) of Fact~\ref{factKL}, respectively. \noindent $\bullet$ Case~1: there exist an alternating knot diagram $D^{alt}(K)$ of $K$ and a state $s$ of $D^{alt}(K)$ such that a non-orientable state surface $\Sigma_0$ obtained from $D^{alt}(K)$ satisfies $C(K)=1-\chi(\Sigma_0)$, i.e., $\chi(\Sigma_0)$ $=$ $0$. Here, note that the state $s$ is given by the algorithm of Fact~\ref{AKthm}. Let $n(P)$ be the number of double points of $P$.
The state $s$ is obtained from $P$ by $n(P)$ splices. In the following, we find the state $s$ among the $2^{n(P)}$ candidates. Then, note that the splices consist of $n(P)-1$ Seifert splices producing $n(P)$-component curves and a single $S^-$ since $\chi(\Sigma_0)=0$. Here, note that if there exist two splices of type $S^-$ in the $n(P)$ splices, then the $n(P)$ splices do not realize $\Sigma_0$ because $\chi(\Sigma_0)$ $=$ $1-C(K)$. Then, we interpret the $n(P)$ splices as a sequence of the $n(P)$ splices, and we may suppose that there exists a sequence such that \[ P=P_0 \stackrel{Op_1}{\to} P_1 \stackrel{Op_2}{\to} P_2 \stackrel{Op_3}{\to} \dots \stackrel{Op_{n(P)}}{\to} P_{n(P)} = s \] and $Op_i = S^-$ for some $i$ ($1 \le i \le n(P)$). By Lemma~\ref{connection}, for the double points corresponding to $Op_k$ $(1 \le k \le i-1)$, any two double points are represented as in Figure~\ref{connection3}~(b). Here, if there exists a pair of type (a), then a pair consisting of two splices containing a Seifert splice on the two double points sends a $1$-component curve to another $1$-component curve, which contradicts the condition that $n(P)-1$ Seifert splices produce $n(P)$-component curves. Similarly, by Lemma~\ref{connection}, for two double points corresponding to $Op_k$ $(1 \le k \le i-1)$ and $Op_j$ $(i+1 \le j \le n(P))$, any pair is also represented as in Figure~\ref{connection3}~(b). Thus, noting that the state $s$ is in one-to-one correspondence with the $n(P)$ splices (Definition~\ref{state}), it is easy to choose $\sigma_1,$ $\sigma_2, \dots, \sigma_{i-1}$ ($\sigma_{i+1},$ $\sigma_{i+2}, \dots, \sigma_{n(P)}$,~resp.) like $\Sigma_u$ (Notation~\ref{not4}) corresponding to $\operatorname{RI}^-$'s applied successively to $P$ ($P_{i}$,~resp.) to obtain $P_{i-1}$ ($s$,~resp.). Here, by Lemma~\ref{connection}, note that the two double points corresponding to $\sigma_{k}$ $(1 \le k \le i-1)$ and $S^-$ are in the configuration of type~(b) of Figure~\ref{connection3}.
Then, $u^-(P) \le 1$. Here, since $K$ is not the unknot, $1 = C(K) \le u^-(P)$ ($\because$ Theorem~\ref{thm2}). Thus, $C(K)=1$ and $u^- (P)=1$, where $P$ is a knot projection obtained from $D^{alt}(K)$. Then, by Lemma~\ref{eq5}, we have $K \in {\mathcal{T}}_{\operatorname{knot}}$, which implies (A). \noindent $\bullet$ Case~2: For an orientable genus $g(K)$, $C(K)$ $=$ $2g(K)+1$. If $C(K)=1$, then $g(K)=0$. Then, $K$ is the unknot, which implies a contradiction. \noindent((A) $\Leftrightarrow$ (C).) Since Lemma~\ref{eq5} immediately implies that (A) $\Rightarrow$ (C), it is sufficient to show that (C) $\Rightarrow$ (A). Recall that $u^- (K)$ $=$ $\min_{P \in Z(K)} u^-(P)$ (Definition~\ref{def_uk}). Then, \begin{align*} & u^-(K) =1 \\ \Rightarrow& \exists P \in Z(K)~{\textrm{such that}}~P \in \langle {\mathcal{T}} \rangle \quad(\because~{\rm{Theorem~\ref{thm1}~(1)}})\\ \Rightarrow& K \in {\mathcal{T}_{\operatorname{knot}}}. \end{align*} \end{proof} \section{Alternating knots with crosscap number two}\label{sec6} By Theorems~\ref{thm1} and \ref{thm2}, we determine alternating knots with crosscap number two (Theorem~\ref{corollary1}). \begin{lemma}\label{eq6} \begin{align*} \langle {\mathcal{R}} \rangle \cup \langle {\mathcal{P}} \rangle \cup \langle {\mathcal{T}} \rangle \sharp \langle {\mathcal{T}} \rangle = \{P~|~C(K^{alt}(P)) = u^-(P)=2~(\forall P)\}. \end{align*} \end{lemma} \begin{proof} For a knot projection $P$, let $K^{alt}(P)$ be as in Notation~\ref{not2}. Then, \begin{align*} &\langle {\mathcal{R}} \rangle \cup \langle {\mathcal{P}} \rangle \cup \langle {\mathcal{T}} \rangle \sharp \langle {\mathcal{T}} \rangle \\ \stackrel{\textrm{Lemma~\ref{lemma3}~(2)}}{\subset}& \{P~|~C(K^{alt}(P)) = u^-(P)=2~(\forall P)\}\\ \stackrel{\textrm{Theorem~\ref{thm1}~(2)}}{\subset}& \{ P~|~P \in \langle {\mathcal{R}} \rangle \cup \langle {\mathcal{P}} \rangle \cup \langle {\mathcal{T}} \rangle \sharp \langle {\mathcal{T}} \rangle \}.
\end{align*} \end{proof} \begin{lemma}\label{eq7} \begin{align*} \mathcal{R}_{\operatorname{knot}} \cup {\mathcal{P}}_{\operatorname{knot}} \cup \mathcal{T}_{\operatorname{knot}} \sharp \mathcal{T}_{\operatorname{knot}} = \{ K~{\textrm{: an alternating knot}}~|~C(K)=2, u^-(K)=2 \}. \end{align*} \end{lemma} \begin{proof} Note that $P$ uniquely determines an alternating knot diagram (up to reflection). In Lemma~\ref{eq6}, $\{ P~|~P \in \langle {\mathcal{R}} \rangle \cup \langle {\mathcal{P}} \rangle \cup \langle {\mathcal{T}} \rangle \sharp \langle {\mathcal{T}} \rangle \}$ determines $\{ K^{alt}(P)$ $|$ $P \in \langle {\mathcal{R}} \rangle \cup \langle {\mathcal{P}} \rangle \cup \langle {\mathcal{T}} \rangle \sharp \langle {\mathcal{T}} \rangle \}$, which equals $\mathcal{R}_{\operatorname{knot}}$ $\cup$ ${\mathcal{P}}_{\operatorname{knot}}$ $\cup~ \mathcal{T}_{\operatorname{knot}} \sharp \mathcal{T}_{\operatorname{knot}}$. Similarly, $\{P~|~C(K^{alt}(P))$ $=$ $u^-(P)=2~(\forall P)\}$ determines $\{K^{alt}(P)~|~C(K^{alt}(P))$ $=$ $u^-(P)=2~(\forall P)\}$, which equals $\{ K :$ an alternating knot $|~C(K)=2,$ $u^-(K)=2 \}$ (cf.~Definition~\ref{def_uk}). \end{proof} \begin{theorem}\label{corollary1} Let $\mathcal{R}_{\operatorname{knot}}$, $\mathcal{P}_{\operatorname{knot}}$, and $\mathcal{T}_{\operatorname{knot}} \sharp \mathcal{T}_{\operatorname{knot}}$ be as in Notation~\ref{not3}. Let $K$ be an alternating knot and $C(K)$ the crosscap number of $K$. Let $u^- (K)$ be the integer as in Definition~\ref{def_uk}. Then, the following conditions are mutually equivalent. \noindent $(A)$ $K$ $\in \mathcal{R}_{\operatorname{knot}} \cup {\mathcal{P}}_{\operatorname{knot}}$ $\cup~\mathcal{T}_{\operatorname{knot}} \sharp \mathcal{T}_{\operatorname{knot}}$. \noindent $(B)$ $C(K)$ $=$ $2$. \noindent $(C)$ $u^- (K) =2$. \end{theorem} \begin{proof} \noindent (Proof of (A) $\Leftrightarrow$ (B).) Lemma~\ref{eq7} immediately implies that (A) $\Rightarrow$ (B).
\noindent((B) $\Rightarrow$ (A).) Suppose that $C(K)=2$ and $K$ is an alternating knot. By definition, there exists an alternating knot diagram $D^{alt}(K)$ of $K$. Let $P$ be a knot projection obtained from $D^{alt}(K)$ by ignoring over/under information of the double points. By Fact~\ref{factKL}, we have the following Case~1 and Case~2 corresponding to (\ref{case1}) and (\ref{case2}) of Fact~\ref{factKL}, respectively. \noindent Case~1: there exist an alternating knot diagram $D^{alt}(K)$ of $K$ and a state $s$ of $D^{alt}(K)$ such that a non-orientable state surface $\Sigma_0$ obtained from $D^{alt}(K)$ satisfies $C(K)=1-\chi(\Sigma_0)$, i.e., $\chi(\Sigma_0)$ $=$ $-1$. Here, note that the state $s$ is given by the algorithm of Fact~\ref{AKthm}. Let $n(P)$ be the number of double points of $P$. The state $s$ is obtained from $P$ by $n(P)$ splices. In the following, we find the state $s$ among the $2^{n(P)}$ candidates. If the $n(P)$ splices are $n(P)$ Seifert splices, they give an orientable surface, which implies a contradiction. Thus, there exists at least one $S^-$ in the $n(P)$ splices. Further, since $\chi(\Sigma_0)= -1$, the splices consist of $n(P) -2$ Seifert splices producing an $(n(P) -1)$-component curve and exactly two $S^-$'s (there are no other possibilities). Then, we interpret the $n(P)$ splices as a sequence of the $n(P)$ splices, and suppose that there exists a sequence such that \[ P=P_0 \stackrel{Op_1}{\to} P_1 \stackrel{Op_2}{\to} P_2 \stackrel{Op_3}{\to} \dots \stackrel{Op_{n(P)}}{\to} P_{n(P)} = s, \] $Op_i = S^-$ and $Op_j = S^-$ ($1 \le i < j \le n(P)$). By Lemma~\ref{connection}, for the two distinct double points corresponding to $Op_k$ $(1 \le k \le i-1)$ and $Op_t$ $(1 \le t \le n(P), k \neq t)$, any two double points are represented as in Figure~\ref{connection3}~(b).
Here, if there exists a pair of type (a), then a pair consisting of two splices on the two double points sends a $1$-component curve to another $1$-component curve, which contradicts the condition that $n(P)-2$ Seifert splices produce $(n(P)-1)$-component curves. Thus, noting that the state $s$ is in one-to-one correspondence with the $n(P)$ splices (Definition~\ref{state}), we can choose $\sigma_1, \sigma_2, \ldots, \sigma_{i-1}$ like $\Sigma_u$ (Notation~\ref{not4}) corresponding to $\operatorname{RI}^-$'s applied successively to $P$ to obtain $P_{i-1}$. Here, by Lemma~\ref{connection}, note that the two double points corresponding to $\sigma_k$ $(1 \le k \le i-1)$ and $Op_i (= S^-)$ are in the configuration of type~(b) of Figure~\ref{connection3}. After applying $S^-$, consider the sequence consisting of a single $S^-$ and $j-i-1$ Seifert splices from $P_i$ to $P_j$, where the $j-i-1$ Seifert splices should produce $j-i-1$ new components. By Definition~\ref{state}, the state $s$ is in one-to-one correspondence with the $n(P)$ splices. Then, by focusing on $1$-gons, it is easy to choose $\sigma_{i+1}, \sigma_{i+2}, \ldots, \sigma_{j-1}$ like $\Sigma_u$ (Notation~\ref{not4}) corresponding to $\operatorname{RI}^-$'s applied successively to $P_i$ to obtain $P_{j-1}$. Here, by Lemma~\ref{connection}, note that the two double points corresponding to $\sigma_{k'}$ $(i+1 \le k' \le j-1)$ and $Op_j (= S^-)$ are in the configuration of type~(b) of Figure~\ref{connection3}. Similarly, it is elementary to choose $\sigma_{j+1}, \sigma_{j+2}, \ldots, \sigma_{n(P)}$ like $\Sigma_u$ (Notation~\ref{not4}) corresponding to $\operatorname{RI}^-$'s applied successively to $P_j$ to obtain the state $s$. Then, $u^-(P) \le 2$. Here, since $K$ is neither the unknot nor a knot in $\mathcal{T}_{\operatorname{knot}}$, $2 \le C(K) \le u^-(P)$ ($\because$ Theorem~\ref{thm2}). Thus, $C(K)=2$ and $u^-(P) =2$, where $P$ is a knot projection obtained from $D^{alt}(K)$.
Then, by Lemma~\ref{eq7}, we have $K \in \mathcal{R}_{\operatorname{knot}}$ $\cup$ ${\mathcal{P}}_{\operatorname{knot}}$ $\cup$ $\mathcal{T}_{\operatorname{knot}} \sharp \mathcal{T}_{\operatorname{knot}}$, which implies (A). \noindent Case~2: For an orientable genus $g(K)$, $C(K)$ $=$ $2g(K)+1$ is odd, which contradicts the assumption that $C(K)=2$, an even number. \noindent((A) $\Leftrightarrow$ (C).) Since Lemma~\ref{eq7} immediately implies that (A) $\Rightarrow$ (C), it is sufficient to show that (C) $\Rightarrow$ (A). Recall that $u^- (K)$ $=$ $\min_{P \in Z(K)} u^-(P)$ (Definition~\ref{def_uk}). Then, \begin{align*} &u^- (K) =2\\ \Rightarrow& \exists P \in Z(K)~{\textrm{such that}}~P \in \langle {\mathcal{R}} \rangle \cup \langle {\mathcal{P}} \rangle \cup \langle {\mathcal{T}} \rangle \sharp \langle {\mathcal{T}} \rangle \quad(\because~{\rm{Theorem~\ref{thm1}~(2)}}) \\ \Rightarrow& K \in \mathcal{R}_{\operatorname{knot}} \cup {\mathcal{P}}_{\operatorname{knot}} \cup \mathcal{T}_{\operatorname{knot}} \sharp \mathcal{T}_{\operatorname{knot}}. \end{align*} \end{proof} \section{Additivity of $u^- (P)$}\label{sec7} In this section, we freely use notations in Definition~\ref{dfn_connected}. \begin{proposition}\label{prop1} Let $P_1$ and $P_2$ be knot projections. \[u^- ( P_1 \sharp P_2 ) = u^-(P_1) + u^-(P_2). \] \end{proposition} \begin{proof} Let $P$ $=$ $P_1 \sharp P_2$. Note that by definition, $u^-(P_1 \sharp P_2)$ $\le$ $u^-(P_1)$ $+$ $u^-(P_2)$. For any orientation, every $S^-$ is characterized by local oriented arcs, as shown in Figure~\ref{orientedS}. On the other hand, when we choose appropriate orientations of $P_1$ and $P_2$, the connected sum $P_1 \sharp P_2$ does not change the orientations of the factors $P_1$ and $P_2$, as shown in Figure~\ref{orientedConn}. Therefore, each operation of type $S^-$ ($\operatorname{RI}^-$,~resp.) on $P_i$ ($i=1, 2$) corresponds one-to-one to one on $P_1 \sharp P_2$, which implies that $u^-(P_1 \sharp P_2)$ $\ge$ $u^-(P_1)$ $+$ $u^-(P_2)$.
\end{proof} \begin{figure}[h!] \includegraphics[width=8cm]{splice.pdf} \caption{Every $S^-$ is characterized by local oriented arcs. }\label{orientedS} \end{figure} \begin{figure}[h!] \includegraphics[width=10cm]{connect.pdf} \caption{An operation $\sharp$ preserves orientations of $P_1$ and $P_2$.}\label{orientedConn} \end{figure} As a corollary of Proposition~\ref{prop1}, we have Corollary~\ref{cor3} (cf.~Theorem~\ref{thm2}). \begin{corollary}\label{cor3} For a knot projection $P$, let $D_P$ and $K(D_P)$ be as in Definition~\ref{state}, and let $C(K(D_P))$ be the crosscap number of $K(D_P)$. Let $P_1$ and $P_2$ be knot projections. Suppose that $C(K(D_{P_1 \sharp P_2}))$ $\neq$ $C(K(D_{P_1}))$ $+$ $C(K(D_{P_2}))$. Then, \[ C(K(D_{P_1 \sharp P_2})) < u^- (K(D_{P_1 \sharp P_2})). \] \end{corollary} Recall (cf.~Notation~\ref{notation_p}, Definition~\ref{ri_eq_notation}) that $S^+$ and $\operatorname{RI}^+$ are the respective inverses of $S^-$ and $\operatorname{RI}^-$. \begin{definition}[unknotting-type number $u(P)$]\label{dfn3_u} Let $P$ be a knot projection and $O$ the simple closed curve. The nonnegative integer $u(P)$ is defined as the minimum number of operations of types $S^{\pm}$ to obtain $O$ from $P$ by a finite sequence of operations of types $S^{\pm}$ and of types $\operatorname{RI}^{\pm}$. \end{definition} \begin{example}\label{74two} In general, the crosscap number is not additive under the connected sum \cite{MY}. For example, for the knot $7_4$, $C(7_4)$ $=$ $3$ and $C(7_4 \sharp 7_4)$ $=$ $5$. For the knot projection $\widehat{7_4}$, by Theorem~\ref{thm1} and Proposition~\ref{prop1}, $u^-(\widehat{7_4})$ $=$ $3$ and $u^-(\widehat{7_4} \sharp \widehat{7_4})$ $=$ $u^-(\widehat{7_4})$ $+$ $u^-(\widehat{7_4})$ $=$ $3$ $+$ $3$ $=$ $6$. However, the behavior of the number $u(P)$, introduced in Definition~\ref{dfn3_u}, is different from that of $u^-(P)$. By definition, $u(P) \le u^-(P)$ for every knot projection $P$.
We have $u(\widehat{7_4})$ $\le$ $u^-(\widehat{7_4})$ $=$ $3$ and $u(\widehat{7_4} \sharp \widehat{7_4})$ $\le$ $5$, as shown in Figure~\ref{f15}. \end{example} \begin{figure}[h!] \includegraphics[width=12cm]{74.pdf} \caption{$u(\widehat{7_4})$ $\le$ $u^-(\widehat{7_4})$ $=$ $3$, $u(\widehat{7_4} \sharp \widehat{7_4})$ $\le$ $5$.}\label{f15} \end{figure} For $u(P)$, hoping for the best, we ask the following question: \begin{question} Let $P$ be a knot projection and $D_P$ a knot diagram obtained by adding any over/under information to each double point of $P$. Let $K(D_P)$ be a knot type having a knot diagram $D_P$. Let $C(K(D_P))$ be the crosscap number of $K(D_P)$. Then, does every knot projection $P$ satisfy \[ C(K(D_P)) \le u (P)? \] \end{question} \begin{remark} Let $Z(K)$ be the set of knot projections obtained from alternating knot diagrams of $K$. Then, $\min_{P \in Z(K)} u(P)$ is an alternating knot invariant. Let $u (K)$ $=$ $\min_{P \in Z(K)} u(P)$ (cf.~Definition~\ref{def_uk}). By Theorem~\ref{thm2}, for every knot $K$, $C(K) \le$ $u^- (K)$. However, it is unknown whether $C(K) \le$ $u (K)$ or not. \end{remark} \begin{example}\label{74general} If we generalize Example~\ref{74two}, we obtain the examples Case~1--Case~3, as shown in Figs.~\ref{74a}--\ref{74c}, by using connected sums of knot projections whose factors are knot projections as shown in Figure~\ref{f15a}. Namely, there exist infinitely many knot projections, each of which is represented as $P \sharp P'$ such that $g(K^{alt}(P))$ $=$ $1$, $g(K^{alt}(P'))$ $=$ $1$, $C(K^{alt}(P))$ $=3$, $C(K^{alt}(P'))$ $=3$, $C(K^{alt}(P) \sharp K^{alt}(P'))$ $=5$, $u^-(P \sharp P')$ $=$ $6$, and $u(P \sharp P') \le 5$. Here, for $C(K^{alt}(P) \sharp K^{alt}(P'))$ $=5$, we use Fact~\ref{AKthm}. For the initial knot projection $P \sharp P'$ in each figure of Figs.~\ref{74a}--\ref{74c}, each symbol, ``odd" or ``even", indicates the number of given double points. \end{example} \begin{figure}[h!]
\includegraphics[width=8cm]{741.pdf} \caption{$p, q, r \ge 1$ and $m, n \ge 2$. }\label{f15a} \end{figure} \begin{figure}[h!] \includegraphics[width=12cm]{74a.pdf} \caption{Case~1}\label{74a} \end{figure} \begin{figure}[h!] \includegraphics[width=12cm]{74b.pdf} \caption{Case~2}\label{74b} \end{figure} \begin{figure}[h!] \includegraphics[width=12cm]{74c.pdf} \caption{Case~3}\label{74c} \end{figure} Finally, we note Propositions~\ref{proposition1} and \ref{proposition2} and Questions~\ref{q2} and \ref{q3}. \begin{proposition}\label{proposition1} The following conditions are mutually equivalent. \noindent$(1)$ $P \in \langle {\mathcal{T}} \rangle$. \noindent$(2)$ $u^-(P)=1$. \noindent$(3)$ $u(P)=1$. \end{proposition} \begin{proof} By Theorem~\ref{thm1}, $(1)$ $\Leftrightarrow$ $(2)$. By the same argument as in Theorem~\ref{thm1}, it is easy to show that (1) $\Leftrightarrow$ (3). \end{proof} \begin{proposition}\label{proposition2} The following conditions are mutually equivalent. \noindent$(1)$ $P$ $\in \langle \mathcal{R} \rangle \cup \langle {\mathcal{P}} \rangle \cup \langle {\mathcal{T}} \rangle \sharp \langle {\mathcal{T}} \rangle$. \noindent$(2)$ $u^-(P)=2$. \noindent$(3)$ $u(P)=2$. \end{proposition} \begin{proof} By Theorem~\ref{thm1}, $(1)$ $\Leftrightarrow$ $(2)$. By the same argument as in Theorem~\ref{thm1}, it is easy to show that (1) $\Leftrightarrow$ (3). \end{proof} \begin{question}\label{q2} Is there a knot projection $P$ of a prime knot such that $u(P) < u^- (P)$? \end{question} \begin{question}\label{q3} Is there a knot projection $P$ of a prime knot such that $C(K^{alt}(P)) < u^- (P)$? \end{question} \section{Acknowledgement} The authors would like to thank the referee for the comments. The authors would like to thank some participants of the Topology Seminar at Tokyo Woman's Christian University for useful comments.
The authors would like to thank Professor Tsuyoshi Kobayashi, Professor~Makoto Ozawa, Professor Masakazu Teragaito, and Professor~Akira Yasuhara for their comments. N.~I.~was partially supported by Sumitomo Foundation (Grant for Basic Science Research Projects, Project number: 160556).
\section{Introduction} Colloidal semiconductor nanoclusters (NCs) or quantum dots (QDs) have attracted a great deal of attention due to their applications in the fields of optoelectronics, spintronics, photovoltaics, or bio-labeling\cite{peng05,clark07,beard07, klimov07,ithurria07,cirillo14,gaponik10,talapin10,cao00,blokland11,poeselt12} and to their potential to emerge as the key components of next-generation displays \cite{bourzac13}. One important milestone was the synthesis of core-shell quantum dots, which contain at least two semiconductors arranged in an onion-like geometry \cite{yang99,cao00,baranov03,Chilla08,silva13,dzhagan13,todescato13,cirillo14,jing15}. In this geometry, the core material is surrounded by a different shell material in order to reduce the influence of the possibly imperfect surface onto the core. Indeed, semiconductor core-shell NCs with a high photoluminescence quantum efficiency were reported \cite{balet04,peng05,schops06,ithurria07,reiss09,smith09,chen13}. The theoretical modeling of these types of structures, performed at the level of the effective mass approximation\cite{jing15}, the $k.p$ method\cite{pistol11}, the tight-binding approach\cite{niquet11,neupane11} or the empirical pseudopotential method\cite{schrier06,luo10}, was done assuming perfect, unstrained, and unrelaxed atomic positions. Indeed, colloidal quantum dots in their fluid environment were often considered as unstrained, in direct contrast to their self-assembled (Stranski-Krastanov) homologues that are known to only exist because of the presence of strain. The estimate of the atomic relaxation and ensuing strain is not straightforward in colloidal quantum dots as the use of continuum models\cite{weng07,trallero-Giner10,crut11} or empirical force fields \cite{lin14} would require insights about the surface effects. These are difficult to quantify as the surface relaxes inwards, shortening the bond length, but these bonds are also weakened.
Only very recently, {\it ab-initio} large-scale calculations showed that the effect of structural relaxation in colloidal QDs is important \cite{khoo11,han11,han12b,voros13} and a lack of such relaxation leads, for instance, to the appearance of unphysical imaginary vibrational frequencies\cite{han11} and red shifts of vibrational modes\cite{han12b}. In this work, we perform large-scale \emph{ab initio} density functional theory (DFT) calculations to study the structural and vibrational properties of core-shell Si(core)-Ge(shell), the inverted Ge-Si, InAs-InP, and CdSe-CdS NCs with radii ranging from 13.5 to 15.6~\AA. We find that (i) the shell dictates the atom positions of the entire core-shell NC. This is especially true for the group IV Si-Ge and Ge-Si NCs, where we see almost no difference between the bond length distribution of the core-shell NCs and the pure NC made of only shell material. For instance, the Ge core in a core-shell Ge-Si NC is compressed to the bulk Si lattice constant (4\% compression of the bond length). (ii) Both the core and the shell are compressed in the InAs-InP and in the CdSe-CdS NCs. So the lattice constants of core and shell materials do not undergo the compromise of one having compressive and the other tensile strain, as one may expect. (iii) The bond-length distribution in the NCs goes from homogeneous (small scattering) throughout the NCs for our group IV NCs, to inhomogeneous only at the interface for our group III-V NCs, to inhomogeneous in the entire shell region in our II-VI NCs. We speculate that the long-range Coulombic interaction in the more ionic II-VIs is responsible for the large bond distortions. (iv) The frequency shifts we obtain for our NCs, compared to the bulk frequencies, can all be traced back to two fundamental effects: one being the shift of the modes according to strain (given by the Gr{\"u}neisen parameters), the second being the red-shift created by the undercoordination of the near-surface atoms.
Both effects tend to work against each other since the NCs are typically compressively strained and the strain effect leads to a blue-shift (with a positive Gr{\"u}neisen parameter). \section{Method} We first construct an un-relaxed NC by cutting a sphere, centered on an atom for group IVs or a cation for III-Vs and II-VIs with $T_d$ point group symmetry, out of bulk zinc-blende material. Then, the surface atoms with only one nearest neighbor bond are removed and the surface dangling bonds are terminated by hydrogen atoms for group IV atoms and pseudo-hydrogen atoms $H^*$ with a fractional charge of 1/2, 3/4, 5/4, and 3/2 for group VI, V, III, and II atoms, respectively. These atomic positions are relaxed using DFT in the local density approximation (LDA) and Troullier-Martins norm-conserving pseudopotentials with an energy cutoff of 30 Ry for group IV and III-V clusters and 40 Ry for II-VI clusters\cite{cpmd08}. In order to calculate the vibrational eigenmodes of Cd-VI clusters with up to one thousand atoms, we apply a non-linear core correction (NLCC) to the Cd atoms instead of including $d$-electrons in the valence. The calculated transverse optical (TO) and longitudinal optical (LO) frequencies of bulk CdS and CdSe at the $\Gamma$ point using NLCC-LDA and fully including the $d$-electrons in the valence are given in Table~\ref{table:nlcc}. As shown in this table, the phonon frequencies calculated using NLCC are in close agreement with the full calculations.
\begin{table} \caption{Comparison of TO and LO frequencies (in cm$^{-1}$) at the $\Gamma$ point for bulk CdS and CdSe between the NLCC calculations and the full calculation including the $d$-electrons in the valence.} \label{table:nlcc} \begin{tabular}{lcccc} \hline \hline & TO$^{NLCC}$ & TO$^{d-state}$ & LO$^{NLCC}$ & LO$^{d-state}$ \\ \hline CdS & 247.2 & 246.5 & 296.5 & 299.1 \\ CdSe & 179.4 & 174.8 & 204.1 & 205.2 \\ \hline \hline \end{tabular} \end{table} The structures are relaxed until the forces are less than 3$\times$10$^{-6}$ a.u. The dynamical matrix elements are then calculated via finite difference and the vibrational eigenfrequencies and eigenvectors are obtained by solving the eigenvalue problem of the dynamical matrix,\cite{yu10} \begin{equation} \label{eq:eigen} \sum_{J}\frac{1}{\sqrt{M_{I}M_{J}}}\frac{\partial^{2}V({\bm R})}{\partial{\bm R_{I}}\partial{\bm R_{J}}} {\bm U_{J}}=\omega^{2}{\bm U_{I}} \end{equation} where $I$ and $J$ label the atoms, $M$ are the atomic masses, $V({\bm R})$ the potential energy, ${\bm R}$ the atomic positions, ${\bm U}$ the eigenvectors, and $\omega$ the vibrational frequencies. We use \emph{ab initio} DFT implemented in the CPMD code\cite{cpmd08} to optimize the geometry and to calculate the vibrational eigenmodes of the NCs. In order to analyze the vibrational eigenmodes in terms of the core and the shell (surface) contributions, we calculate the projection coefficients \begin{equation} \label{eq:core} \alpha^{\nu}_{c(s)}=\frac{\sum_{I}^{N_c (N_s)}|\bm X^{\nu}_{I}|^2}{\sum_{I=1}^{N}|\bm X^{\nu}_{I}|^2}, \end{equation} where $N_c$, $N_s$ and $N$ are the core, shell (surface), and the total number of atoms; ${\bm X^{\nu}_{I}}$ represents the three components belonging to atom $I$ and vibrational mode $\nu$ from the 3$N$ component eigenvectors ${\bm U_{I}}$.
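Computationally, Eq.~(\ref{eq:eigen}) amounts to a mass-weighted diagonalization of the Hessian, and Eq.~(\ref{eq:core}) to a per-atom sum of squared eigenvector amplitudes. The following minimal sketch illustrates both steps on a toy four-atom spring chain (a hypothetical stand-in for the finite-difference DFT Hessian, not actual data from this work):

```python
import numpy as np

# Toy illustration of Eqs. (1)-(2): diagonalize a mass-weighted Hessian to get
# vibrational modes, then project each mode onto a "core" subset of atoms.

def vibrational_modes(hessian, masses):
    """Solve sum_J H_IJ / sqrt(M_I M_J) U_J = omega^2 U_I (Eq. 1).

    hessian: (3N, 3N) second derivatives of the potential energy
    masses : (N,) atomic masses
    Returns (omega^2 eigenvalues, eigenvectors as columns).
    """
    m3 = np.repeat(masses, 3)                    # one mass per Cartesian component
    D = hessian / np.sqrt(np.outer(m3, m3))      # mass-weighted dynamical matrix
    w2, U = np.linalg.eigh(D)                    # w2[i] = omega_i ** 2
    return w2, U

def core_projection(U, core_atoms, n_atoms):
    """alpha_c of Eq. (2): fraction of mode amplitude residing on core atoms."""
    X = U.reshape(n_atoms, 3, -1)                # (atom, xyz, mode)
    total = np.sum(np.abs(X) ** 2, axis=(0, 1))  # per-mode normalization
    core = np.sum(np.abs(X[core_atoms]) ** 2, axis=(0, 1))
    return core / total

# Toy Hessian: 4 "atoms" on a line, x-components coupled by unit springs.
n = 4
H = np.zeros((3 * n, 3 * n))
for i in range(n - 1):
    a, b = 3 * i, 3 * (i + 1)
    H[a, a] += 1.0; H[b, b] += 1.0
    H[a, b] -= 1.0; H[b, a] -= 1.0

w2, U = vibrational_modes(H, masses=np.ones(n))
alpha = core_projection(U, core_atoms=[0, 1], n_atoms=n)
# Core and "shell" fractions add up to one for every mode by construction.
assert np.allclose(alpha + core_projection(U, [2, 3], n), 1.0)
```

In practice the Hessian would come from the finite-difference displacements described above, and `core_atoms` would be the list of, e.g., As or Se indices of the relaxed geometry.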
To compare the vibrational properties of core-shell NCs with the phonon density of states (DOS) of their corresponding bulk materials, the phonon DOS of bulk InAs, InP, CdSe, and CdS are calculated via \emph{ab initio} density functional perturbation theory\cite{abinit}. The computation of the bulk materials is performed using the ABINIT code package\cite{abinit} with the same pseudopotentials and the same energy cutoff as their corresponding NCs, and a Monkhorst-Pack $k$-point mesh is taken as 8$\times$8$\times$8. To compare our results with Raman spectroscopy measurements, we calculate the Raman intensities using a phenomenological model proposed by Richter \emph{et al.}\cite{richter81}. Based on this model, the Raman intensity of nanostructures is proportional to the projection coefficient of the vibrational modes of the nanostructure onto bulk modes with a relaxation of the wave-vector selection rule\cite{richter81,han12a}, \begin{equation} \label{eq:raman} I(\omega)\propto \sum_{n,\nu,\bm{q}}\frac{|C_{n,\bm{q}}^{\nu}|^{2}}{(\omega - \omega^{\nu})^{2}+(\Gamma_0/2)^2}, \end{equation} where $C_{n,\bm{q}}^{\nu}$ is the projection coefficient of the NC mode $\nu$ on the bulk mode $n$ with wave vector $\bm{q}$ and $\Gamma_0$ is the natural Lorentzian linewidth (an empirical parameter in our work). The coefficients $C_{n,\bm{q}}^{\nu}$ are summed up to $\Delta q = 1/(2R)$. \section{Structural properties} In Fig.~\ref{fig:geom}, we show the relaxed atomic positions of the In$_{141}$As$_{140}$-In$_{228}$P$_{208}$H$^{*}_{300}$ core-shell NC with a radius of 15.6~\AA. In this figure, the core part (InAs) is shown as green and purple spheres while the shell part (InP) is plotted as green and tan spheres. The small white spheres represent the passivants. The core, shell, and interface areas are highlighted.
\begin{figure} \centerline{\includegraphics[width=.8\columnwidth]{geometry_coreshell.pdf}} \caption{ (Color online) Relaxed atom positions of the In$_{79}$As$_{68}$-In$_{242}$P$_{244}$H$^{*}_{300}$ core-shell nanocrystal. The In, As, P, and H$^*$ atoms are represented as green, purple, tan, and white spheres, respectively. The definition of core and shell follows the atomic type (As atoms are core atoms and P atoms are shell atoms in the present case). The interface region is defined graphically.}\label{fig:geom} \end{figure} To describe the structural properties of the NCs, we plot the nearest neighbor distances as a function of their distance from the cluster center. The results are shown for group IV, III-V, and II-VI in Figs.~\ref{fig:geom_SiGe},~\ref{fig:geom_InAsP}, and \ref{fig:geom_CdSSe}, respectively. \subsection{Strain: Group IV NCs} \begin{figure} \centerline{\includegraphics[width=\columnwidth]{Ge_Si_bond.pdf}} \caption{ (Color online) Bond length distribution as a function of the distance to the dot center. For (a) Si$_{465}$H$_{228}$ (red circles), Ge$_{147}$-Si$_{318}$H$_{228}$ (black squares); (b) Si$_{705}$H$_{300}$ (red circles), Ge$_{281}$-Si$_{424}$H$_{300}$ (black squares); (c) Ge$_{465}$H$_{228}$ (green triangles), Si$_{147}$-Ge$_{318}$H$_{228}$ (blue diamonds); (d) Ge$_{705}$H$_{300}$ (green triangles), Si$_{281}$-Ge$_{424}$H$_{300}$ (blue diamonds). The LDA bond lengths of bulk Si and Ge are given as dashed lines and dotted dashed lines, respectively. }\label{fig:geom_SiGe} \end{figure} For group IV NCs, we compare in Fig.~\ref{fig:geom_SiGe} the results for the core-shell structure with the results for a pure NC of similar size made of shell material. The most striking result is that the bond length distribution of both structures is nearly identical. In other words, the core-shell structure is geometrically very similar to a NC made of pure shell material, i.e., the shell dictates the structure and the core adapts.
For the {\it Ge-Si core-shell structure}, Fig.~\ref{fig:geom_SiGe}a,b), the softer Ge material (bulk modulus of 77.2 GPa\cite{fine53}) is fully compressed, by around 5\%, to the lattice constant of Si. The bond lengths show only a very small deviation from the bulk values; even close to the surface, no noticeable deviation is observed. This is a special feature of Si, compared to the other materials. For the {\it Si-Ge core-shell structure}, Fig.~\ref{fig:geom_SiGe}c,d), the rather stiff Si core (bulk modulus of 97.6 GPa\cite{hopcroft10}) is expanded by 2--3\%, in order to almost perfectly match the bond length distribution of a pure Ge NC. The Ge NC, in contrast to the Si NC, has a significant bond-length reduction at the surface (around 5\%) and even for our largest NC (Fig.~\ref{fig:geom_SiGe}d) the bulk bond-length is not recovered at the NC's center. We refer to the bond-length reduction at the surface as the {\it undercoordination effect} \cite{khoo10,han12a}, i.e., the missing atomic partners on the vacuum side lead to an inward surface relaxation, as is common in surface physics/chemistry. We further note that the bond-length variation is rather constant across the interface, marked as the light-green area in Fig.~\ref{fig:geom_SiGe}. \subsection{Strain: Group III-V (InAs-InP) NCs} \begin{figure} \centerline{\includegraphics[width=\columnwidth]{InAs_InP_bond.pdf}} \caption{ (Color online) Bond length distribution as a function of the distance to the dot center for (a) In$_{79}$As$_{68}$-In$_{146}$P$_{172}$H$^{*}_{228}$ (black square), In$_{225}$P$_{240}$H$^{*}_{228}$ (red circle); (b) In$_{79}$As$_{68}$-In$_{242}$P$_{244}$H$^{*}_{300}$ (black square), In$_{321}$P$_{312}$H$^{*}_{300}$ (red circle); and (c) In$_{141}$As$_{140}$-In$_{228}$P$_{208}$H$^{*}_{300}$ (black square), In$_{369}$P$_{348}$H$^{*}_{300}$ (red circle).
The LDA bond lengths of bulk InP and InAs are given as dashed lines and dotted dashed lines, respectively.}\label{fig:geom_InAsP} \end{figure} The corresponding comparison for InAs-InP is given in Fig.~\ref{fig:geom_InAsP}. We now notice a different bond-length distribution between the InAs-InP core-shell structure and the pure InP NCs. The difference is, however, still surprisingly small and only apparent for the smaller NCs (Figs.~\ref{fig:geom_InAsP}a,b)). In all cases, the core is heavily compressed, by 5.1\%--7.1\%. For a radius of 16.5 {\AA} (Fig.~\ref{fig:geom_InAsP}c)), the bond lengths of the core have already ``converged'' to the bond length of the shell. So again, the shell dictates the structure and the core adapts. The shell in the core-shell InAs-InP NC, or the surface area in the pure InP NC, has a reduced bond length due to the undercoordination effect: the surface layers relax inwards. The bond-length variation is significant in the interface region and hence much larger than in the group IV materials. \subsection{Strain: Group II-VI (CdSe-CdS) NCs} \begin{figure} \centerline{\includegraphics[width=\columnwidth]{CdSe_CdS_bond.pdf}} \caption{ (Color online) Bond length distribution as a function of the distance to the NC center for (a) Cd$_{79}$Se$_{68}$-Cd$_{146}$S$_{172}$H$^{*}_{228}$ (black square), Cd$_{225}$S$_{240}$H$^{*}_{228}$ (red circle), and (b) Cd$_{79}$Se$_{68}$-Cd$_{242}$S$_{244}$H$^{*}_{300}$ (black square), Cd$_{321}$S$_{312}$H$^{*}_{300}$ (red circle). The LDA bond lengths of bulk CdS and CdSe are given as dashed lines and dotted dashed lines, respectively. }\label{fig:geom_CdSSe} \end{figure} The results for CdSe-CdS NCs are given in Fig.~\ref{fig:geom_CdSSe}.
In comparison to the previous results for our group IV and our group III-V NCs, we can see a clear progression towards more scattered data: the core-shell structure (black squares) shows a bond-length distribution that significantly differs from the corresponding NC made of only shell material (CdS, red circles). The core CdSe material is still compressed by as much as 2.0\% but has a larger bond length than the core of the corresponding pure CdS NC. In the area of the interface, the bond-length variation is large, going from nearly CdSe bulk bond-length to 1.9\% below the CdS bulk bond length (again, the undercoordination effect). The comparison of Fig.~\ref{fig:geom_CdSSe}a) and b) shows a noticeable difference, especially in the surface/shell region, despite the fact that the NCs have only a small difference in radius. This highlights the fact that the atomistic description leads to a shell-by-shell construction of the structure with increasing radius. A small radius increase can lead to geometrically and chemically rather different structures. The pure CdS NCs have a ratio of Cd to S atoms of 225/240 in the smaller structure and 321/312 in the larger structure. So, Cd poor in the first case and Cd rich in the second. This should be kept in mind when comparing structures with different radii. \section{Vibrational properties} The vibrational DOS are shown for the different NCs in Figs.~\ref{fig:SiGe_vib}, \ref{fig:InAsP_vib}, and \ref{fig:CdSeS_vib}. Two dominant effects lead to the vibrational frequency shifts we observe: (i) {\it Strain induced shifts}. The magnitude of this shift is quantified by the Gr\"uneisen parameter. For the materials considered, these are all positive and between 0.89 and 1.89 (see Table~\ref{tab:gruen}). The Gr\"uneisen parameters for CdS and CdSe TO-modes are results of our DFT calculations, as we did not find experimental results in the literature.
This means, for a compression of the structure, the LO and TO frequencies shift to higher frequencies ({\bf blue shift}). (ii) {\it The undercoordination effect}. The atoms at the surface lacking bonding partners vibrate with a lower frequency. This {\bf red-shift} is strongest for atoms close to the surface but penetrates a few layers into the NCs. \begin{table}[htbp] \caption{Gr\"uneisen parameters for the optical modes\cite{Landolt01}. The results marked with an asterisk ${}^*$ are from our DFT calculations.} \begin{center} \begin{tabular}{l|llllll} \hline \hline & Si\cite{Landolt01} & Ge\cite{Landolt01} & InP\cite{aoki84} & InAs\cite{aoki84} & CdS\cite{wasilik74} & CdSe\cite{alivisatos88} \\ \hline $\gamma^{TO}$ & ~~1.02 & 0.89 & 1.24 & 1.06 & 1.89$^*$ & 1.86$^*$ \\ $\gamma^{LO}$ & ~~1.02 & 0.89 & 1.44 & 1.21 & 1.37 (1.34$^*$) & 1.1 (1.43$^*$) \\ \hline \hline \end{tabular} \end{center} \label{tab:gruen} \end{table}% \subsection{Vibrations: Group IV NCs} The vibrational DOS is given for ``smaller'' (693 atoms) and ``larger'' (1005 atoms) structures in Fig.~\ref{fig:SiGe_vib}. In case of the core-shell NCs (panels (a),(b),(e),(f)), the vibrational eigenmodes are projected onto a core and a shell area defined in Fig.~\ref{fig:geom} and shown as black and red lines, respectively, in Fig.~\ref{fig:SiGe_vib}. For the pure NCs (panels (c),(d),(g),(h)) the eigenmodes are projected onto a core and a surface area. \begin{figure} \centerline{\includegraphics[width=\columnwidth]{vibrons_IV.pdf}} \caption{(Color online) Vibrational DOS of (a) Ge-Si core-shell NC with $R$ = 13.5~\AA, (b) Si-Ge core-shell NC with $R$ = 13.8~\AA, (c) pure Si NC with $R$ = 13.5~\AA, (d) pure Ge NC with $R$ = 13.8~\AA, (e) Ge-Si core-shell NC with $R$ = 15.2~\AA, (f) Si-Ge core-shell NC with $R$ = 15.5~\AA, (g) pure Si NC with $R$ = 15.2~\AA, (h) pure Ge NC with $R$ = 15.5~\AA. All the vibrational modes were broadened by 0.8~cm$^{-1}$.
The lower frequency blue dashed lines in panels (c), (d), (g), and (h) label the van Hove singularities for the acoustic branches, which correspond to maxima in the bulk acoustic phonon DOS. The higher frequency blue dashed lines show the optical $\Gamma$-point frequencies in bulk (517.0~cm$^{-1}$ for Si and 300.9~cm$^{-1}$ for Ge). The ``surface'' of the pure Si (Ge) NCs is defined to have the same dimension as the ``Shell'' in the core-shell NCs. Note that the passivant vibrations are far removed, at much higher frequencies; see Ref.~\onlinecite{han12b} for a detailed description. }\label{fig:SiGe_vib} \end{figure} We make a few observations: 1) The vibrational DOS of the core-shell structures ((a),(b),(e),(f)) is qualitatively a superposition of the vibrational DOS of the pure Si and Ge NCs ((c),(d),(g),(h)). 2) Compared to the bulk optical frequencies (blue dashed lines at high frequencies in panels (c),(d),(g),(h)), the optical peak is red-shifted. In case of the Si NCs (panels (c),(g)) it is entirely due to the undercoordination effect, as the structure is basically unstrained (see Fig.~\ref{fig:geom_SiGe} for the bond-length distribution). For the case of the Ge NCs (panels (d),(h)), the undercoordination effect (red-shift) is counterbalanced by the compression (blue-shift) experienced throughout the crystal (see Fig.~\ref{fig:geom_SiGe}). The undercoordination effect is stronger and the optical peak is slightly red-shifted. 3) The high frequency optical peak (originating from Si) of the Ge-Si NCs (panels (a),(e)) is significantly blue shifted with respect to the optical peak of the Si-Ge NCs (panels (b),(f)). This shift originates mainly from the fact that the Si core (panels (b),(f)) experiences tensile strain of up to 3\% compared to its nearly unstrained value when it is used as shell material (panels (a),(e)). This is mainly a strain effect.
4) For Si and Ge, the Gr\"{u}neisen parameters of the longitudinal acoustic (LA), LO, and TO modes are positive while those of the transverse acoustic (TA) modes are negative. Due to the mixing of transverse and longitudinal characters in the confined NCs, the positive and negative Gr\"{u}neisen parameters of the acoustic modes tend to cancel each other out. This leads to the lack of shift in the acoustic modes that we highlighted in Fig.~\ref{fig:SiGe_vib} (panels (c),(d),(g),(h)) by comparing the NC's results with the van Hove singularity of the bulk acoustic modes\cite{tubino72}, marked with blue dashed lines at 155~cm$^{-1}$ for Si and 90~cm$^{-1}$ for Ge. \subsection{Vibrations: Group III-V (InAs-InP) NCs} In a similar fashion as for group IV, the vibrational DOS is given for InAsP NCs in Fig.~\ref{fig:InAsP_vib}. We note the following: 1) In contrast to the group IV NCs, the optical peaks are strongly blue-shifted compared to the bulk frequencies. This is true for the pure NCs (panels (d) and (h)) and for the core-shell structures (panels (b),(c),(f),(g)). This can be traced back to the very strong compression present in the core (in the case of the InAs core, more than 6\%) and in the shell (1--2\% in the InP shell), as well as in the pure NC, as shown in Fig.~\ref{fig:geom_InAsP}. Such a reduction of 6\% in the bond-length corresponds to extreme pressures of around 20 GPa for the InAs core. This result highlights the need to consider strain effects in colloidal QDs. 2) The surface modes that appear in the phonon gap (between acoustic- and optical-type vibrations) between 240 and 280 cm$^{-1}$ in the InP NCs (panels (d) and (h), see Ref.\onlinecite{han12a} for a detailed description of these new modes) are in the same frequency range as the core optical modes in the core-shell NCs (panels (b) and (f)). 3) Some vibrational modes are confined around the interface (panels (c) and (g)), but lie in the same frequency range as core and shell modes.
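The strain-induced shifts discussed above can be estimated directly from the Gr\"uneisen parameters of Table~\ref{tab:gruen}. A first-order sketch, under the simplifying assumption of an approximately hydrostatic deformation ($\Delta V/V \approx 3\,\Delta a/a$); the bulk frequency in the example is an illustrative round number, not a value from this work:

```python
def strained_frequency(omega0, gruneisen, bond_strain):
    """First-order estimate of a strained optical-mode frequency from
    the Grueneisen parameter, gamma = -d(ln omega)/d(ln V).  Assuming a
    roughly hydrostatic deformation, dV/V ~ 3 * bond_strain, so
    omega ~ omega0 * (1 - 3 * gamma * bond_strain).  A compressive
    (negative) strain therefore blue-shifts the mode."""
    return omega0 * (1.0 - 3.0 * gruneisen * bond_strain)

# e.g. an LO mode at a nominal 240 cm^-1 under the ~6% core compression
omega_blue = strained_frequency(240.0, 1.21, -0.06)   # blue-shifted
omega_red = strained_frequency(240.0, 1.21, +0.02)    # red-shifted
```

This linearization is only a cartoon of the trend; the calculations in the text treat the full relaxed structures.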
\begin{figure} \centerline{\includegraphics[width=\columnwidth]{vibrons_III_V.pdf}} \caption{ (Color online) (a) and (e) phonon DOS of bulk InAs and InP (both (a) and (e) panels show the same results). Vibrational DOS of (b) InAs-InP core-shell NC with $R$ = 13.6~\AA, (c) projection on interface region (enlarged 3.25 times), (d) InP NC with $R$ = 13.6~\AA, (f) InAs-InP core-shell NC with $R$ = 14.5~\AA, (g) projection on interface region (enlarged 4 times), (h) InP NC with $R$ = 14.5~\AA. All vibrational modes were broadened by 0.8~cm$^{-1}$. The ``Surface'' of InP NCs is defined to have the same dimension as the ``Shell'' in the corresponding core-shell NCs. }\label{fig:InAsP_vib} \end{figure} \subsection{Vibrations: Group II-VI (CdSe-CdS) NCs} In a similar fashion as for the group IV and III-V NCs, the vibrational DOS for CdSeS NCs is given in Fig.~\ref{fig:CdSeS_vib}. We note the following: 1) The CdSe core ``optical'' modes at around 200 cm$^{-1}$ (panels (b), (f)) are blue-shifted compared to the corresponding bulk frequencies. This is due to a compression of around 2\% of the core (see Fig.~\ref{fig:geom_CdSSe}). 2) The CdS shell ``optical'' modes are scattered over a large range of frequencies between 250 and 370 cm$^{-1}$. This is similar to the case of pure CdS NCs (panels (d) and (h)) and is directly related to the large variation in bond-length approaching the surface of the NCs, as depicted in Fig.~\ref{fig:geom_CdSSe}. 3) Interface modes do exist and are of the persistent-type\cite{Klingshirn12}, i.e., they are in the frequency ranges of the bulk CdSe and CdS modes (with the shifts according to the existing strain). In other words, as is the case for the III-V NCs discussed previously, no additional modes appear in the intermediate frequency range between the bulk bands.
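The region-projected DOS curves shown throughout this section amount to weighting each mode's Lorentzian (FWHM 0.8~cm$^{-1}$ in our figures) by the squared eigenvector norm inside the chosen region (core, shell, interface, or surface). A sketch, with the array layout and names being our own conventions:

```python
import numpy as np

def projected_dos(omega_grid, freqs, eigvecs, region_mask, width=0.8):
    """Vibrational DOS projected onto a spatial region.
    `eigvecs[nu]` is the (3N,) eigenvector of mode nu and `region_mask`
    is a boolean (N,) array selecting the atoms of the region.  Each mode
    contributes a normalized Lorentzian of FWHM `width`, weighted by the
    squared eigenvector norm on the selected atoms (assumes modes are
    normalized over the whole NC)."""
    mask3 = np.repeat(np.asarray(region_mask, dtype=bool), 3)
    dos = np.zeros_like(omega_grid)
    for w, x in zip(freqs, eigvecs):
        weight = np.sum(np.asarray(x)[mask3] ** 2)
        dos += weight * (width / 2) / np.pi / (
            (omega_grid - w) ** 2 + (width / 2) ** 2)
    return dos
```

Summing the core and shell projections recovers the total DOS, which is why the core-shell curves appear as superpositions in the figures.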
\begin{figure} \centerline{\includegraphics[width=\columnwidth]{vibrons_II_VI.pdf}} \caption{ (Color online) Same as Fig.~\ref{fig:InAsP_vib}, but for CdSe-CdS core-shell NCs with $R$ = 14.1~\AA~[(b)-(d)] and with $R$ = 14.9~\AA~[(f)-(h)]. }\label{fig:CdSeS_vib} \end{figure} \section{Comparison with experiment} To allow a comparison with experiment we have used the phenomenological model proposed by Richter \emph{et al.}\cite{richter81}, given in Eq.~(\ref{eq:raman}), to calculate the Raman intensities and show the results for the CdSe-CdS core-shell Cd$_{79}$Se$_{68}$-Cd$_{242}$S$_{244}$H$^*_{300}$ NC with $R$ = 14.9~\AA~ in Fig.~\ref{fig:raman}. While the green lines represent the raw data of the intensities, we have also used two different values for the broadening $\Gamma_0$: 1.5~cm$^{-1}$ is shown as a red line and a broadening of 8.0~cm$^{-1}$ is shown as a black line. The experimental results taken from Ref.~\onlinecite{tschirner12} are given in panel (b) of the figure for comparison. To quantify, as far as possible, the origin of the different vibrations, we have plotted the magnitude of the vibrational eigenvector $|\bm X_I^\nu(R) |$ as a function of its radial position, i.e., distance to the NC center. So for each atom $I$ and eigenmode $\nu$ we obtain one value for $|\bm X_I^\nu(R) |$. The results for all the atoms on one ``shell'' around the NC center (with same value of $R$) are averaged and plotted in Fig.~\ref{fig:vibloc}. The vibrations of peak A and A$^{\prime}$, as well as F and F$^{\prime}$, are qualitatively very similar and only the vibrations A and F are plotted. Two comments are due up front: (1) The size of our NC with a radius of 14.9~{\AA} is significantly smaller than the experimental size with $R$ around 30.0~{\AA}.
(2) {\bf Peak C} (221.4~cm$^{-1}$) is a surface mode of CdS, as can be seen in Fig.~\ref{fig:vibloc} and in Ref.~\onlinecite{supplementary}, which we do not expect to see in the experiment because of the much larger size and corresponding much smaller surface to volume ratio. Without considering peak C, we observe for the larger broadening (black line in Fig.~\ref{fig:raman}a)) a two-peak structure, similar to the experimental result. Each of these peaks is composed of several peaks, as was also deduced from the non-Lorentzian line-shapes in the experiment\cite{tschirner12}. We now analyze the peaks in turn. {\bf Peaks A} (282.5~cm$^{-1}$) and {\bf A$^{\prime}$} (274.6~cm$^{-1}$) are vibrational modes with optical character of CdS \cite{supplementary}. With a broadening of 8.0~cm$^{-1}$, these two peaks merge into one peak with a frequency of 279~cm$^{-1}$ (Fig.~\ref{fig:raman}), which corresponds to the peak labeled as CdS LO in the experimental work\cite{tschirner12}. Our combined peak is red-shifted compared to the experimental results. From the two effects mainly responsible for frequency shifts, strain and undercoordination, only the latter effect is relevant since the CdS shell is already mainly unstrained in our NC (see Fig~\ref{fig:geom_CdSSe}). The undercoordination effect becomes weaker with increasing shell size and we expect a blue shift of peaks A and A$^{\prime}$ if we go towards the experimental situation with a much thicker shell, until they reach the bulk value of around 300~cm$^{-1}$ (Fig.~\ref{fig:geom_CdSSe}a)); in good agreement with the experiment. A dependence of the blue shift on the shell thickness was also reported recently \cite{cirillo14}. The experimental peak ``LO 297'' in Ref.~\onlinecite{tschirner12} is therefore a bulk-like, unstrained, CdS peak (see Fig.~\ref{fig:vibloc} to see the localization in the CdS shell) without (large) confinement effect.
{\bf Peak B} (264.3~cm$^{-1}$) is a vibrational mode localized at the interface and in the CdS near-surface shell, with a small but non-vanishing surface component (within the purple area in Fig.~\ref{fig:vibloc}, see Ref.~\onlinecite{supplementary} for a movie of this mode). It corresponds well to the ``low-energy shoulder (LES)'' described in the experimental work\cite{tschirner12} and which was observed in NCs with different sizes, shapes, surface environments and materials using Raman and photoluminescence measurements\cite{lin14,roy96,tschirner12,dzhagan11,giugni12,nobile07,hwang99,cherevkov13,venugopal05,onoberov04}. This LES is often described as a ``surface optical'' phonon mode\cite{lin14,roy96,tschirner12,dzhagan11,giugni12,nobile07,hwang99,cherevkov13,venugopal05,onoberov04}, which is in rather good agreement with our identification. The fact that it has some surface character leads to the expectation that it will show some dependence on the type of surface passivation, in good agreement with some experiments \cite{xiong04}. {\bf Peak D} (203.5~cm$^{-1}$) represents a vibrational mode with optical character of the CdSe core \cite{supplementary} combined with CdS near-interface contributions (see Fig.~\ref{fig:vibloc}(c) and Ref.~\onlinecite{supplementary}). This mode has a blue shift (8.5~cm$^{-1}$) compared to the bulk LO mode in CdSe. The experimental peak (214~cm$^{-1}$) shows a somewhat larger blue shift, which can be traced back to the fact that a structure with a larger shell will experience an even larger compression of the core and hence a larger blue shift. {\bf Peak E} (197.7~cm$^{-1}$) comprises modes with contributions from all regions of the NC: core, interface and shell (see Fig.~\ref{fig:vibloc}(d) and Ref.~\onlinecite{supplementary}). They correspond well to the ``additional intermediate band or IP mode'' seen in the experimental work. {\bf Peak F} (190.0~cm$^{-1}$) and F$^{\prime}$ (184.0~cm$^{-1}$).
These modes are CdS shell modes combined with optical surface breathing-type modes (see Fig.~\ref{fig:vibloc}(e) and Ref.~\onlinecite{supplementary}). They may correspond to the ``SO mode'' labeled in the experimental paper. \begin{figure} \centerline{\includegraphics[width=\columnwidth]{comparison_with_experiment.pdf}} \caption{(Color online) (a) Calculated Raman spectrum (see Eq.~(\ref{eq:raman})) for CdSe-CdS core-shell NC with $R$ = 14.9~\AA~. Green lines represent the raw data of the intensities, red and black lines represent the Raman intensities with a broadening of 1.5~cm$^{-1}$ and 8.0~cm$^{-1}$, respectively. (b) Measured Raman spectrum of CdSe-CdS core-shell NC with $R$ around 30.0~{\AA} taken from Ref.~\onlinecite{tschirner12}. }\label{fig:raman} \end{figure} \begin{figure} \centerline{\includegraphics[width=\columnwidth]{vibration_loc_CdSeS.pdf}} \caption{ (Color online) Localization of the vibrational modes of the Cd$_{79}$Se$_{68}$-Cd$_{242}$S$_{244}$H$^*_{300}$ NC with R=14.9~{\AA}, where we have used the magnitude of the eigenmode as indicator (see text). The green highlighted area shows the interface region and the violet area the region where passivant atoms are present defining a surface region. }\label{fig:vibloc} \end{figure} As a final remark, we note that while the large majority of theoretical work on core-shell QDs neglects the effect of strain altogether, some used simple continuum models \cite{todescato13,tschirner12} which, due to the absence of the surface effect, led to the erroneous conclusion that the shell structure experiences tensile strain, or missed the existence of certain Raman-active modes with interface or surface character, such as the LES mode (our mode B). The lack of the latter has forced more intricate interpretations of the experiment in terms of excitonic effects \cite{lin14}, which, in view of the present {\it ab initio} results, appear unnecessary for interpreting the experimental results.
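The localization indicator of Fig.~\ref{fig:vibloc}, i.e., the shell-averaged eigenvector magnitude $|\bm X_I^\nu(R)|$, can be sketched as follows (the radial bin width is a choice of ours):

```python
import numpy as np

def radial_mode_profile(positions, center, eigvec, bin_width=1.0):
    """Shell-averaged eigenvector magnitude |X_I| vs. distance R from
    the NC center: atoms are binned into spherical shells of thickness
    `bin_width` (angstrom) and |X_I| is averaged within each bin,
    mirroring the localization indicator described in the text."""
    positions = np.asarray(positions, dtype=float)
    r = np.linalg.norm(positions - center, axis=1)
    mags = np.linalg.norm(np.asarray(eigvec, float).reshape(-1, 3), axis=1)
    nbins = int(np.ceil(r.max() / bin_width)) + 1
    idx = (r / bin_width).astype(int)
    profile = np.zeros(nbins)
    for b in range(nbins):
        sel = idx == b
        if np.any(sel):
            profile[b] = mags[sel].mean()
    return profile
```

A mode peaking at large $R$ in this profile is surface-like (as for peak C), while one peaking at small $R$ is core-like (as for peak D).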
A cheap empirical valence force field description would certainly be advantageous, but seems presently out of reach due to the complexity of surface and interface effects. A reasonable valence force field description of the vibrational properties of III-V NCs was proposed earlier \cite{han11} for pure NCs. An extension towards core-shell structures and group II-VI or group IV materials may be worthwhile, although certainly tedious. \section{Summary} In summary, we have investigated the structural and vibrational properties of colloidal semiconductor core-shell NCs with up to 1000 atoms via \emph{ab initio} DFT calculations. We find that the geometry of Si-Ge and inverted Ge-Si clusters is determined by the shell part and there is no bond length distortion at the core-shell interface of these group IV NCs. Also for our III-V and II-VI core-shell NCs, the geometry of the shell is similar to that of the corresponding pure NCs made of shell material. Accordingly, the vibrational DOS of the shell-type vibrations in core-shell NCs is very similar to the vibrational DOS of pure NCs made of shell material. Hence, the shell experiences no tensile strain but is rather compressed. This fact could be used to improve continuum model descriptions that suggested the opposite situation (tensile shell). For our III-V and II-VI core-shell NCs we find that the bond lengths of the core remain very heavily compressed. We also find an obvious bond length distortion at the core-shell interface of our III-V and II-VI core-shell NCs. This distortion (i.e., scattering in the bond-length distribution) extends beyond the interface region all the way to the surface in CdSe-CdS core-shell NCs. We link the bond-length distortion to the long-range ionic interaction, which is strong in the more ionic II-VI NCs \cite{han12a}. This large scattering of the bond-length distribution leads to a significant broadening of the vibrational bands, which is consequently especially prominent for II-VI NCs.
We also observe a lack of shift in the acoustic modes, which we trace back to the mixing of LA and TA modes, with their nearly compensating positive and negative Gr\"{u}neisen parameters \cite{han12a}. We obtain a blue- (red-) shift in the optical modes of the core parts in Ge-Si (Si-Ge) NCs, which can be traced back to the compressed (expanded) bond lengths. In addition, we find that the interface modes of III-V and II-VI core-shell NCs are just within the manifold of the core and the shell modes and overlap with the surface modes, which will make them difficult to identify experimentally. Finally, we compared our results with recent Raman experiments and give a new interpretation of the results based on the lack of tensile strain and the existence of the undercoordination effect. The qualitative picture of our frequency shifts can be understood based on two, often competing, effects of compressive strain (blue-shift of optical modes) and undercoordination (red-shift). We further show that the often-discussed low-energy shoulder in the Raman spectra originates from interface vibrations with small surface character, in agreement with most of the experimental interpretations and in disagreement with earlier theoretical models. \begin{acknowledgements} We are grateful to H. Lange and A. Biermann for fruitful discussions. P. H. was supported by the National Natural Science Foundation of China under Grant No. 11404224 and General program of science and technology development project of Beijing Municipal Education Commission under Grant No. KM201510028004. Most of the simulations were performed on the Cray XC40 Hornet Supercomputer Cluster at the High Performance Computing Center Stuttgart (HLRS). \end{acknowledgements}
\section{Introduction} The purpose of this contribution is two-fold: on one hand, it serves as a summary of my talk in the QF2 session on the spin-statistics connection (section~\ref{sec:spinstats}); on the other, at the request of the session organisers, it provides an expository review of locally covariant quantum field theory in curved spacetimes (QFT in CST) (section~\ref{sec:LCQFT}). In the context of a Marcel Grossmann meeting, there should be no need to justify the study of QFT in curved spacetimes. However, it is worth emphasising that locally covariant QFT has two main differences from usual practice of QFT in CST: it is an axiomatic approach, and it aims to discuss arbitrary spacetime backgrounds, rather than specific examples. The motivation for the latter has several aspects. First, one wishes to gain a perspective that is independent of special features of particular spacetimes, but democratically implements the same physics (in some sense) in all of them. This is motivated by the practical reason that the spacetime we inhabit does not exhibit any symmetries on small scales, but seems to be well-approximated on large scales by spacetimes that do, and a setting in which such approximations can be controlled is desirable. Second, allowing for arbitrary backgrounds gives one flexibility to model macroscopic material features (e.g., stars or apparatus etc) by the geometry of the background rather than as complicated configurations of a QFT. Third, to embed the principle of locality from the start, it is expedient to seek a framework in which the formulation in a given spacetime region is (in a suitable sense) independent of the geometry in its causal complement. Axiomatic approaches to QFT have been developed since the 1950's. 
They arose from concerns about the mathematical deficiencies of QFT at that time, with the aim to `kill it or cure it'.\cite{StreaterWightman} At a basic level, the goal is to write down precisely what a quantum field theory aspires to be, to draw out the general consequences (e.g., a spin-statistics connection, PCT theorem, or no-go results like Haag's theorem) that follow from them and thereby to provide guidance for attempts to rigorously construct models of QFT. It is striking that, despite the undoubted successes of QFT, the mathematical status of (nonperturbative) interacting theories in four dimensions is still unsettled. (On the other hand, while there has been no cure, QFT has not been killed by the discovery of an internal contradiction in its fundamental assumptions.) There are two basic flavours of axiomatic QFT: the Wightman framework,\cite{StreaterWightman} which retains the idea of a quantum field as a key building block of the theory, and the more radical Haag--Kastler--Araki framework of algebraic QFT (AQFT) or \emph{local quantum physics},\cite{Haag} in which the focus is on algebras of local observables, while fields enter as secondary and less intrinsic elements. The motivation of the algebraic approach is to remain close to operational ideas of what can be measured locally, simultaneously avoiding an over-reliance on classical field theory (which, after all, should emerge as a limit of QFT, rather than being taken as its foundation). Locally covariant QFT is a natural generalisation of AQFT, but retains a natural place for quantum fields. Aside from general structural results applying to wide classes of QFTs, axiomatic QFT also provides a deepened and better founded conceptual framework, often allied with powerful mathematical tools. 
In turn, this can lead to new developments, such as the formalism of perturbative algebraic QFT (pAQFT) that has put perturbative QFT on a rigorous basis, even in curved spacetime and even for gauge theories including gravity (see Refs.~\refcite{BrFr2000,Ho&Wa01,Ho&Wa02,FreRej_BVqft:2012} and Rejzner's contribution to these Proceedings). A survey of the present status of AQFT, in both flat and curved spacetimes, can be found in the edited collection Ref.~\refcite{AdvAQFT}. \section{Locally Covariant QFT in CST}\label{sec:LCQFT} Let us set out the general structure of locally covariant QFT. General references for this section are the original paper,\cite{BrFrVe03} and an extensive recent review.\cite{FewVerch_aqftincst:2015} \subsection{Locally covariant theories} Fix a spacetime dimension $n\ge 2$. The spacetime backgrounds that we will study, and which we will call \emph{globally hyperbolic spacetimes}, consist of tuples $\Mb =(\Mc,g,\ogth,\tgth)$, where $\Mc$ is a smooth manifold with a Lorentzian metric $g$ (signature $+-\cdots-$), an orientation $\ogth$, and time-orientation $\tgth$. Here, we allow $\Mc$ to have finitely many connected components, while $\ogth\subset\Omega^n(\Mc)$ is one of the components of the nowhere-vanishing smooth $n$-forms on $\Mc$ and $\tgth\subset\Omega^1(\Mc)$ is one of the components of the nowhere-vanishing smooth $1$-forms that are timelike with respect to $g$. We restrict to those spacetimes that are globally hyperbolic with respect to the given metric and time-orientation. The first element of the algebraic formulation is the assignment of a $*$-algebra $\Af(\Mb)$, with a unit $\II_{\Af(\Mb)}$, to each $\Mb$ of this type. The self-adjoint elements of $\Af(\Mb)$ are to represent observables of the given theory on spacetime $\Mb$. A simple example is given by the real scalar field, obeying the Klein--Gordon equation \begin{equation} P_\Mb\phi:=(\Box_\Mb +m^2)\phi = 0. 
\end{equation} As $\Mb$ is globally hyperbolic, there are advanced ($-$) and retarded ($+$) Green operators $E_\Mb^\pm$ for the operator $P_\Mb$ so that, for any smooth compactly supported function $f\in\CoinX{\Mb}$, $\phi^\pm = E^\pm_\Mb f$ solves the inhomogeneous equation $P_\Mb \phi^\pm=f$ with the support of $\phi^\pm$ lying in the causal future ($+$) or past ($-$) of the support of $f$. Then the $*$-algebra $\Af(\Mb)$ is defined to have a set of generators $\{\Phi_\Mb(f):f\in\CoinX{\Mb}\}$ labelled by smooth compactly supported functions, a unit $\II_{\Af(\Mb)}$, and relations given by \begin{itemize} \item linearity of $f\mapsto \Phi_\Mb(f)$ \item hermiticity: $\Phi_\Mb(f)^*=\Phi_\Mb(\overline{f})$ for all $f\in\CoinX{\Mb}$ \item field equation: $\Phi_\Mb(P_\Mb f)=0$ for all $f\in\CoinX{\Mb}$ \item commutation relations: $[\Phi_\Mb(f_1),\Phi_\Mb(f_2)]=iE_\Mb(f_1,f_2)\II_{\Af(\Mb)}$ for all $f_1,f_2\in\CoinX{\Mb}$. \end{itemize} Here we have written \begin{equation} E_\Mb(f_1,f_2)=\int_\Mb f_1(p) (E_\Mb f_2)(p)\dvol_\Mb(p), \end{equation} where $E_\Mb = E_\Mb^--E_\Mb^+$. The specification of the algebra on each spacetime is only one part of the structure. An important aspect is the ability to compare the algebras on different spacetimes. This can be done by considering smooth maps $\psi:\Mb_1\to\Mb_2$ that are isometric and respect orientation and time-orientation: if $\Mb_i=(\Mc_i,g_i,\ogth_i,\tgth_i)$ ($i=1,2$), we require $g_1=\psi^*g_2$, $\ogth_1=\psi^*\ogth_2$, $\tgth_1=\psi^*\tgth_2$. Furthermore, $\psi$ is required to have a causally convex image in $\Mb_2$, thus ensuring that no causal links exist in the image that are not already present in the original spacetime. A map $\psi$ obeying these conditions will be called a \emph{hyperbolic embedding}.
As a requirement of locality, each hyperbolic embedding of $\Mb_1$ in $\Mb_2$ should provide an embedding of the physical content of our theory on $\Mb_1$ within that on $\Mb_2$, represented mathematically by a unit-preserving $*$-homomorphism $\Af(\psi):\Af(\Mb_1)\to \Af(\Mb_2)$. We demand that $\Af(\psi)$ is an injection, so that no observables are lost in passing from a small spacetime to a larger one in which it is embedded.\footnote{This is too stringent in some contexts where `topological observables' appear, but we set these to the side for now in the interests of a clean axiomatic framework.} Moreover, we make the natural requirements that, if a trivial embedding is made, the algebraic embedding should be likewise trivial, and that the composition of maps arising from successive embeddings should agree with the map arising from the composition of the embeddings: \begin{equation} \Af(\id_\Mb)=\id_{\Af(\Mb)},\qquad \Af(\psi\circ\varphi)=\Af(\psi)\circ\Af(\varphi), \end{equation} where the second equation holds for all pairs of composable maps between spacetimes in our class. These various requirements can be summarised by a single mathematical assumption:\footnote{See Ref.~\refcite{MacLane} for a general reference on category theory. For the purpose of this review, the reader will not go too far wrong by thinking of a category as consisting of \emph{objects} that are `sets with structure' and \emph{morphisms} that are `structure preserving maps'. Examples include the category of topological spaces with continuous maps as morphisms, or groups with homomorphisms, as well as $\Loc$ and $\Alg$ described here.
A functor between two categories maps objects and morphisms in the first to objects and morphisms in the second in a coherent way; for example, a homology functor maps topological spaces to the appropriate homology group and continuous maps between the topological spaces to group homomorphisms between the homology groups.} \begin{axiom}[Local covariance]\label{ax:LC} A theory is a covariant functor $\Af:\Loc\to\Alg$, where $\Loc$ is the category whose objects are globally hyperbolic spacetimes, and whose morphisms are hyperbolic embeddings, while $\Alg$ is the category of unital $*$-algebras with injective, unit-preserving $*$-homomorphisms as morphisms.\footnote{It is of course possible to change the category $\Alg$ for, e.g., the category of $C^*$-algebras. Alternatively theories other than QFT can be set into a locally covariant context by an appropriate choice of target category, e.g., that of (pre)symplectic spaces for classical linear field theories.} \end{axiom} In the context of the Klein--Gordon theory, these morphisms are easily described: if $\psi:\Mb\to\Nb$ then $\Af(\psi)$ maps the generators of $\Af(\Mb)$ into those of $\Af(\Nb)$ by \begin{equation}\label{eq:field} \Af(\psi)\Phi_\Mb(f) = \Phi_\Nb(\psi_* f) \end{equation} for all $f\in\CoinX{\Mb}$. Here, $\psi_*$ denotes the push-forward, so that $\psi_*f$ agrees with $f\circ \psi^{-1}$ on $\psi(\Mb)$ and vanishes elsewhere. The action of $\Af(\psi)$ on all other elements of $\Af(\Mb)$ is fixed by the requirement that it be a $*$-homomorphism obeying $\Af(\psi)\II_{\Af(\Mb)}=\II_{\Af(\Nb)}$. That this can be done consistently is a consequence of the theory of the Klein--Gordon equation on globally hyperbolic spacetimes and the relations in $\Af(\Mb)$ and $\Af(\Nb)$. 
For example, $\Af(\psi)[\Phi_\Mb(f_1),\Phi_\Mb(f_2)]$ can be written as either of $\Af(\psi)(iE_\Mb(f_1,f_2)\II_{\Af(\Mb)})= iE_\Mb(f_1,f_2)\II_{\Af(\Nb)}$ or $[ \Phi_\Nb(\psi_* f_1),\Phi_\Nb(\psi_* f_2)] =iE_\Nb(\psi_*f_1,\psi_* f_2)\II_{\Af(\Nb)}$ and consistency is assured because $E_\Nb(\psi_*f_1,\psi_* f_2)=E_\Mb(f_1,f_2)$. It is less obvious that the resulting map is injective: this follows because $\Af(\Mb)$ is known to be simple, so the kernel of $\Af(\psi)$ is either trivial or equals $\Af(\Mb)$, and the latter is impossible because $\Af(\psi)\II_{\Af(\Mb)}=\II_{\Af(\Nb)}\neq 0$. Other free bosonic models have been formulated as functors from $\Loc$ to $\Alg$, including the Proca and (with some subtleties) Maxwell fields.\cite{DappLang:2012,SandDappHack:2014,FewLang:2014a} This includes examples (self-dual gauge fields) which are not formulated by reference to a classical Lagrangian.\cite{BecBenSchSza:2015} To incorporate theories with spin, one can either generalise $\Loc$ to a category of spin manifolds,\cite{Verch01,Sanders_dirac:2010,Zahn_Dirac:2014} or -- as in Section~\ref{sec:spinstats} -- use coframed manifolds (see Ref.~\refcite{FergusonPhD} for yet another approach). \subsection{Comparison of theories and locally covariant fields}\label{sec:nat} Axiom~\ref{ax:LC} has two main strengths: first, it has packaged many individual assumptions into one statement; second, it allows us to discuss a theory as a single mathematical object, rather than viewing it through its instantiations on each spacetime separately. Category theory provides a language for this discussion and a number of standard categorical ideas find uses in locally covariant quantum field theory. A particularly important example is the notion of a \emph{natural transformation} between functors (denoted by a dotted arrow $\nto$), which has two main uses in locally covariant QFT. 
The first of these concerns relations between theories: \begin{definition} \label{def:nat} Let $\Af$ and $\Bf$ be locally covariant theories (functors from $\Loc$ to $\Alg$). Any natural transformation $\eta:\Af\nto\Bf$ defines an embedding of $\Af$ as a subtheory of $\Bf$. If $\eta$ is a natural isomorphism, then it determines a physical equivalence of the theories $\Af$ and $\Bf$. \end{definition} Here, a natural transformation $\eta:\Af\nto\Bf$ is a collection $(\eta_\Mb)_{\Mb\in\Loc}$ of morphisms $\eta_\Mb:\Af(\Mb)\to\Bf(\Mb)$ such that, for every hyperbolic embedding $\psi:\Mb\to\Nb$, one has \begin{equation} \eta_{\Nb}\Af(\psi) = \Bf(\psi) \eta_{\Mb}. \end{equation} In other words, the square in the diagram \begin{equation*} \begin{tikzpicture}[baseline=0 em, description/.style={fill=white,inner sep=2pt}] \matrix (m) [ampersand replacement=\&,matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] {\Mb \& \Af(\Mb) \& \Bf(\Mb) \\ \Nb \& \Af(\Nb) \& \Bf(\Nb)\\ }; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$ \psi $} (m-2-1) (m-1-2) edge node[auto] {$ \eta_\Mb $} (m-1-3) edge node[auto] {$ \Af(\psi) $} (m-2-2) (m-2-2) edge node[auto] {$ \eta_\Nb $} (m-2-3) (m-1-3) edge node[auto] {$ \Bf(\psi) $} (m-2-3); \end{tikzpicture} \end{equation*} commutes; $\eta$ is a natural isomorphism if each of its components $\eta_\Mb$ is an isomorphism. 
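It is worth noting that if $\eta:\Af\nto\Bf$ is a natural isomorphism then the inverse components also assemble into a natural isomorphism $\eta^{-1}:\Bf\nto\Af$: composing the naturality relation $\eta_\Nb\Af(\psi)=\Bf(\psi)\eta_\Mb$ with $\eta_\Nb^{-1}$ on the left and $\eta_\Mb^{-1}$ on the right gives \begin{equation} \Af(\psi)\eta_\Mb^{-1} = \eta_\Nb^{-1}\Bf(\psi), \end{equation} so physical equivalence is a symmetric relation between theories. 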
The interpretation placed on natural transformations and isomorphisms in Definition~\ref{def:nat} can be justified in several ways and has found various applications.\cite{BrFrVe03,FewVer:dynloc_theory,Fewster:gauge} In particular, the automorphisms of $\Af$ (the natural isomorphisms of $\Af$ to itself) form a group $\Gc$ under composition, which can be interpreted as the \emph{global gauge group} of the theory $\Af$.\cite{Fewster:gauge} A simple example of a global gauge transformation for the Klein--Gordon theory is defined so that $\eta_\Mb \Phi_\Mb(f)=-\Phi_\Mb(f)$ (extended as a unit-preserving $*$-homomorphism); if one considers the theory of $n$ Klein--Gordon fields with identical mass one has an ${\rm O}(n)$ group of orthogonal transformations on the multiplet of fields (and further shift transformations if the mass is zero). An example of a subtheory embedding can be given if $\Bf=\Af\otimes\Af$ consists of two identical copies of $\Af$,\footnote{Thus $\Bf(\Mb)=\Af(\Mb)\otimes\Af(\Mb)$ while $\Bf(\psi)=\Af(\psi)\otimes\Af(\psi)$.} and we define $\eta:\Af\nto\Bf$ so that $\eta_\Mb A = A\otimes\II_{\Af(\Mb)}$, which can easily be verified as natural. Klein--Gordon theories with distinct mass, or Klein--Gordon multiplets with differing numbers of fields, can be shown to be inequivalent (modulo additional mild technical conditions).\cite{BrFrVe03,FewVerch_aqftincst:2015} A second use of natural transformations is to describe \emph{locally covariant fields}. For simplicity we restrict here to fields smeared with scalar, smooth compactly supported test functions. Let $\Set$ be the category of sets, with functions as morphisms. This category contains $\Alg$ as a subcategory: every unital $*$-algebra is, in particular, a set and every $*$-homomorphism between such algebras is, in particular, a function. 
The assignment of the space of scalar test functions to spacetime $\Mb$ can be described as a functor $\Df:\Loc\to\Set$ by setting $\Df(\Mb)=\CoinX{\Mb}$ for each $\Mb$ and $\Df(\psi)=\psi_*$ for each hyperbolic embedding $\psi$. Locally covariant fields can be then identified as follows:\cite{Ho&Wa01, BrFrVe03} \begin{definition} Let $\Af$ be a locally covariant theory. A locally covariant field of the theory $\Af$ is a natural transformation $\Phi:\Df\nto\Af$, where we regard $\Alg$ as a subcategory of $\Set$. \end{definition} This means precisely that the equation \eqref{eq:field} should hold for all $f\in\CoinX{\Mb}$ and all $\psi:\Mb\to\Nb$, with $\Phi_\Mb$ now reinterpreted as the maps that form the components of $\Phi:\Df\nto\Af$. This gives the Klein--Gordon field its own mathematical status. In general, the collection of all locally covariant fields forms a unital $*$-algebra $\Fld(\Df,\Af)$: given $\Phi,\Psi\in\Fld(\Df,\Af)$, and $\lambda\in\CC$, the fields $\Phi+\lambda\Psi$, $\Phi\Psi$, $\Phi^*$ are \begin{align} (\Phi+\lambda\Psi)_\Mb(f) &= \Phi_\Mb(f) + \lambda\Psi_\Mb(f) \label{eq:fld_lin} \\ (\Phi\Psi)_\Mb(f) &= \Phi_\Mb(f)\Psi_\Mb(f), \label{eq:fld_prod} \\ (\Phi^*)_\Mb(f) &= \Phi_\Mb(f)^* \label{eq:fld_star} \end{align} and the unit field is $\II_\Mb(f) = \II_{\Af(\Mb)}$, for all $f\in\CoinX{\Mb}$. This $*$-algebra then carries an action of the gauge group of the theory, so that if $\eta\in\Gc$ then $\eta\cdot\Phi$ is the field with components \begin{equation} (\eta\cdot\Phi)_\Mb(f)=\eta_\Mb\Phi_\Mb(f) \end{equation} for all $\Mb\in\Loc$, $f\in\CoinX{\Mb}$. Consequently, the fields appear in multiplets corresponding to subspaces of $\Fld(\Df,\Af)$ that are irreducible under the action of $\Gc$. One can easily adapt the same idea to other types of smearing test functions. As with the theories themselves, the locally covariant viewpoint on fields allows them to be manipulated as mathematical objects in a spacetime-independent way. 
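As a consistency check, the product \eqref{eq:fld_prod} is again a natural transformation, because each $\Af(\psi)$ is a homomorphism: for any hyperbolic embedding $\psi:\Mb\to\Nb$ and $f\in\CoinX{\Mb}$, \begin{equation} (\Phi\Psi)_\Nb(\psi_* f) = \Phi_\Nb(\psi_* f)\Psi_\Nb(\psi_* f) = \Af(\psi)\bigl(\Phi_\Mb(f)\bigr)\,\Af(\psi)\bigl(\Psi_\Mb(f)\bigr) = \Af(\psi)\bigl((\Phi\Psi)_\Mb(f)\bigr), \end{equation} and similar computations apply to \eqref{eq:fld_lin} and \eqref{eq:fld_star}. 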
\subsection{The kinematic net} Given a spacetime $\Mb$, one can ask what physical content can be associated with a specific subregion $O\subset\Mb$. This can be achieved in a simple fashion in our functorial setting, if $O$ is causally convex and open, for then we can equip $O$ with the metric and (time)-orientation inherited from $\Mb$ and regard it as a globally hyperbolic spacetime in its own right, to be denoted $\Mb|_O$. Moreover, the inclusion map of $O$ within $\Mb$ now induces a hyperbolic embedding $\iota_{\Mb;O}:\Mb|_O\to\Mb$. Applying the functor, we obtain an algebra $\Af(\Mb|_O)$ and a $*$-homomorphism $\Af(\iota_{\Mb;O})$ mapping $\Af(\Mb|_O)$ into $\Af(\Mb)$. \begin{definition} For any nonempty, open, causally convex subset $O$ of $\Mb$, the image of $\Af(\iota_{\Mb;O})$ will be denoted $\Af^\kin(\Mb;O)$ and called the \emph{kinematic subalgebra} of $\Af(\Mb)$ associated with region $O$, and the assignment $O\mapsto\Af^\kin(\Mb;O)$ forms the \emph{kinematic net} of $\Af$ on $\Mb$. \end{definition} The kinematic subalgebras have a number of nice properties. If $O_1\subset O_2$, then $\iota_{\Mb;O_1}=\iota_{\Mb;O_2}\circ \iota_{\Mb|_{O_2};O_1}$, which implies that $\Af(\iota_{\Mb;O_1})=\Af(\iota_{\Mb;O_2})\circ\Af (\iota_{\Mb|_{O_2};O_1})$ and hence one has the \emph{isotony} relation \begin{equation}\label{eq:isotony} \Af^\kin(\Mb;O_1)\subset \Af^\kin(\Mb;O_2). \end{equation} If $\psi:\Mb\to\Nb$ then, similarly, $\Af(\psi)(\Af^\kin(\Mb;O))=\Af^\kin(\Nb;\psi(O))$. This is particularly interesting in the case of a symmetry of $\Mb$, i.e., an isomorphism $\alpha:\Mb\to\Mb$, in which case \begin{equation}\label{eq:sym} \Af(\alpha)(\Af^\kin(\Mb;O))=\Af^\kin(\Mb;\alpha(O)). \end{equation} Furthermore, the kinematic subalgebras fit well with the other structures we have introduced: if $\Phi\in\Fld(\Df,\Af)$, and $f\in\CoinX{\Mb}$ is supported within $O$, then $\Phi_\Mb(f)\in\Af^\kin(\Mb;O)$. 
This holds because $f=\Df(\iota_{\Mb;O})\hat{f}$ for $\hat{f}=\iota_{\Mb;O}^*f\in\Df(\Mb|_{O})$, and hence \begin{equation} \Phi_\Mb(f)=\Phi_\Mb(\Df(\iota_{\Mb;O})\hat{f}) = \Af(\iota_{\Mb;O})\Phi_{\Mb|_O}(\hat{f}) \in\Af^\kin(\Mb;O). \end{equation} Moreover, the gauge transformations act locally: $\eta_\Mb(\Af^\kin(\Mb;O))=\Af^\kin(\Mb;O)$ for any $\eta\in\Gc$, $\Mb\in\Loc$ and open, causally convex $O\subset\Mb$. It is noteworthy that in Minkowski space algebraic QFT, equations~\eqref{eq:isotony} and~\eqref{eq:sym}, together with the other properties just described, are separate assumptions about the theory; here, they are consequences of Axiom~\ref{ax:LC} and the definition of the kinematic subalgebras. One normally makes a further assumption: \begin{axiom}[Einstein Causality] If $O_1$ and $O_2$ are open, causally convex regions of $\Mb$ that are spacelike separated ($O_1\cap J_\Mb(O_2)=\emptyset$) then $\Af^\kin(\Mb;O_1)$ and $\Af^\kin(\Mb;O_2)$ are commuting subalgebras of $\Af(\Mb)$. \end{axiom} (This could be changed to a graded commutator if required.) \subsection{The timeslice axiom and relative Cauchy evolution} The structures introduced so far are kinematic in nature and lack any notion of dynamics. Describing any hyperbolic embedding $\psi:\Mb\to\Nb$ whose image contains a Cauchy surface of $\Nb$ as \emph{Cauchy}, the existence of a dynamical law can be encapsulated in the following assumption. \begin{axiom}[Timeslice] If $\psi:\Mb\to\Nb$ is Cauchy, then $\Af(\psi):\Af(\Mb)\to\Af(\Nb)$ is an isomorphism. \end{axiom} This assumption means that any observable of the theory on $\Mb$ can be measured, equivalently, within a neighbourhood of any Cauchy surface. It therefore corresponds to an abstracted notion of a dynamical law. Note that no equation of motion has been assumed in our general framework. The timeslice axiom has some striking consequences. 
One of the most prominent is that the sensitivity of a theory to changes in the metric can be described in terms of a \emph{relative Cauchy evolution}.\cite{BrFrVe03,FewVer:dynloc_theory} Fix a spacetime $\Mb=(\Mc,g,\ogth,\tgth)\in\Loc$ and let $h$ be a smooth and compactly supported metric perturbation so that $\Mb[h]:=(\Mc,g+h,\ogth,\tgth[h])$ is also a globally hyperbolic spacetime in $\Loc$, where $\tgth[h]$ is the unique choice of time-orientation agreeing with $\tgth$ outside the support of $h$. Choose any open causally convex sets $\Mc^\pm\subset\Mc$ with $\Mc^\pm\subset \Mc\setminus J^\mp_\Mb(\supp h)$. Setting $\Mb^\pm=\Mb|_{\Mc^\pm}$, the inclusion maps of $\Mc^\pm$ into $\Mc$ induce Cauchy morphisms $\imath^\pm:\Mb^\pm\to \Mb$, and also Cauchy morphisms $j^\pm:\Mb^\pm\to\Mb[h]$; see Fig.~\ref{fig:rce}. Each of these Cauchy morphisms is turned into an isomorphism under the action of the functor $\Af$ and we may therefore define an automorphism of $\Af(\Mb)$ by \begin{equation} \rce_\Mb[h] = \Af(\imath^-)\circ \Af(j^-)^{-1} \circ \Af( j^+)\circ \Af(\imath^+)^{-1}, \end{equation} which is the relative Cauchy evolution induced by $h$. (One may show that $\rce_\Mb[h]$ is independent of the specific choices of $\Mc^\pm$.) 
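As a simple check on the definition, if $h=0$ then $\Mb[0]=\Mb$ and one may take $j^\pm=\imath^\pm$, so that \begin{equation} \rce_\Mb[0] = \Af(\imath^-)\circ \Af(\imath^-)^{-1} \circ \Af(\imath^+)\circ \Af(\imath^+)^{-1} = \id_{\Af(\Mb)}; \end{equation} the relative Cauchy evolution therefore measures precisely the response of the theory to the metric perturbation $h$. 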
\begin{figure}[t] \begin{center} \begin{tikzpicture}[scale=0.8] \definecolor{Gold}{rgb}{.93,.82,.24} \definecolor{Orange}{rgb}{1,0.5,0} \draw[fill=lightgray] (-4,0) -- ++(2,0) -- ++(0,4) -- ++(-2,0) -- cycle; \draw[fill=lightgray] (4,0) -- ++(2,0) -- ++(0,4) -- ++(-2,0) -- cycle; \draw[fill=gray] (4,3) -- ++(2,0) -- ++(0,0.5) -- ++(-2,0) -- cycle; \draw[fill=gray] (0,3) -- ++(2,0) -- ++(0,0.5) -- ++(-2,0) -- cycle; \draw[fill=gray] (-4,3) -- ++(2,0) -- ++(0,0.5) -- ++(-2,0) -- cycle; \draw[fill=gray] (4,0.5) -- ++(2,0) -- ++(0,0.5) -- ++(-2,0) -- cycle; \draw[fill=gray] (0,0.5) -- ++(2,0) -- ++(0,0.5) -- ++(-2,0) -- cycle; \draw[fill=gray] (-4,0.5) -- ++(2,0) -- ++(0,0.5) -- ++(-2,0) -- cycle; \draw[color=black,line width=2pt,->] (2.25,3.25) -- (3.75,3.25) node[pos=0.4,above]{$j^+$}; \draw[color=black,line width=2pt,->] (2.25,0.75) -- (3.75,0.75) node[pos=0.4,above]{$j^-$}; \draw[color=black,line width=2pt,->] (-0.25,3.25) -- (-1.75,3.25) node[pos=0.5,above]{$\imath^+$}; \draw[color=black,line width=2pt,->] (-0.25,0.75) -- (-1.75,0.75) node[pos=0.5,above]{$\imath^-$}; \draw[fill=white] (5,2) ellipse (0.7 and 0.4); \node at (5,2) {$h$}; \node[anchor=north] at (5,0) {$\Mb[h]$}; \node[anchor=north] at (-3,0) {$\Mb$}; \node[anchor=north] at (1,3) {$\Mb^+$}; \node[anchor=north] at (1,0.5) {$\Mb^-$}; \end{tikzpicture} \end{center} \caption{The geometrical construction of relative Cauchy evolution}\label{fig:rce} \end{figure} The relative Cauchy evolution has been computed for various theories.\cite{BrFrVe03,Sanders_dirac:2010,Benini_Masters,Ferguson:2013,FewSchenkel:2015,FewLang:2014a} Moreover, it is possible under some circumstances to take a functional derivative with respect to $h$, thus inducing a derivation of $\Af(\Mb)$ that can be interpreted as a commutator with a stress-energy tensor.\cite{BrFrVe03} To be specific, let $f^{ab}$ be a compactly supported rank-$2$ contravariant tensor field. 
Then the stress-energy tensor smeared against $f$ has the following action on $A\in\Af(\Mb)$: \begin{equation} [\Ts_\Mb(f), A] = \int_\Mb f_{\mu\nu} \frac{\delta\rce_{\Mb}}{\delta g_{\mu\nu}} (A) := \frac{2}{i}\left.\frac{d}{ds}\rce_\Mb[h(s)] A\right|_{s=0}, \end{equation} where $h(s)$ is a differentiable family of metric perturbations with $\dot{h}(0)^{ab}=f^{(ab)}$. Understood in this way, $\Ts_\Mb$ turns out to be symmetric and conserved, and in particular models it coincides with the standard stress-energy tensor. This is a remarkable result, because at no stage was it assumed that the theory can be derived from a classical action principle. If external fields are incorporated into the background, one may consider variations of them as well, leading to other conserved currents.\cite{FewSchenkel:2015} Another application of the relative Cauchy evolution is to provide an alternative notion of localisation to that encoded in the kinematic net. The idea is to regard an observable as localised in a region if it is invariant under metric changes in the region's causal complement. This leads to a \emph{dynamical net}: when this coincides with the kinematic net, the theory is said to be \emph{dynamically local} -- see Ref.~\refcite{FewVer:dynloc_theory}, where some general consequences are developed. Various models (including the massive Klein--Gordon theory) have been shown to be dynamically local.\cite{FewVer:dynloc2,Ferguson:2013,FergusonPhD,FewLang:2014a,FewSchenkel:2015} There are exceptions, which seem to stem from broken symmetries or topological charges; see Ref.~\refcite{FewVerch_aqftincst:2015} for discussion. \subsection{State spaces} In algebraic QFT, states correspond to experimental preparations, while observables correspond to physical quantities to be measured. The pairing of states and observables produces the expectation value of the measurements of the given physical quantity, subject to the given preparation. 
Technically, a state on a unital $*$-algebra $\Ac$ is a linear functional $\omega:\Ac\to\CC$, which is positive in the sense that $\omega(A^*A)\ge 0$ for all algebra elements $A$, and is normalised to the value $\omega(\II)=1$ on the algebra unit. On any algebra $\Ac$, we denote the corresponding set of states by $\Ac^*_{+,1}$. Using the well-known GNS construction, any state on a unital $*$-algebra induces a Hilbert space representation in which expectation values are given by the standard Born rule. Experience has shown that the full set of states includes many that do not have good physical properties and that it is better to focus attention on a smaller class, e.g., the Hadamard states of Klein--Gordon theory. In the locally covariant context this indicates that one should consider a set of states $\Sf(\Mb)\subset\Af(\Mb)^*_{+,1}$ on each spacetime $\Mb$. As measurements made in a small spacetime should also be possible within a larger one, there should be an appropriate relation between $\Sf(\Mb)$ and $\Sf(\Nb)$ whenever there is a hyperbolic embedding $\psi:\Mb\to\Nb$. 
\begin{definition} A \emph{state space} $\Sf$ for a locally covariant theory $\Af:\Loc\to\Alg$ is an assignment of a subset $\Sf(\Mb)\subset \Af(\Mb)^*_{+,1}$ that is closed under convex combinations and operations induced by $\Af(\Mb)$,\footnote{That is, given states $\omega,\omega'\in\Sf(\Mb)$, each state $\lambda\omega+(1-\lambda)\omega'$ ($\lambda\in[0,1]$) also belongs to $\Sf(\Mb)$, as also does the state $\omega_A$ given by $\omega_A(C)=\omega(A^*CA)/\omega(A^*A)$ for any $A$ such that $\omega(A^*A)>0$.} and obeys \begin{equation}\label{eq:sts_cont} \Af(\psi)^*\Sf(\Nb) \subset \Sf(\Mb) \end{equation} whenever $\psi:\Mb\to\Nb$ is a hyperbolic embedding.\footnote{One can give a more categorical definition, in which $\Sf$ is a subfunctor of the contravariant functor assigning to each algebra $\Af(\Mb)$ its full state space.} If \eqref{eq:sts_cont} holds with equality for all Cauchy hyperbolic embeddings, then $\Sf$ has the \emph{timeslice property}. \end{definition} As already mentioned, the Hadamard states provide an example of such a state space, for the Klein--Gordon theory. To understand the significance of the above definition, suppose that $A\in\Af(\Mb)$ is an observable on spacetime $\Mb$ that is hyperbolically embedded in $\Nb$ by $\psi$. Then there is an observable on spacetime $\Nb$, $\Af(\psi)A$, which corresponds to $A$. To any state $\omega\in\Sf(\Nb)$ there corresponds an expectation value $\omega(\Af(\psi)(A))$, which can be written as $(\Af(\psi)^*\omega)(A)$, i.e., the expectation of $A$ in the `pulled back' state $\Af(\psi)^*\omega$ on $\Af(\Mb)$. The content of \eqref{eq:sts_cont} is that this pulled-back state belongs to the state space $\Sf(\Mb)$ and so is a legitimate physical state on $\Mb$. Accordingly, the same measurement results can be obtained either in $\Mb$ or $\Nb$. 
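In the Klein--Gordon example, the quasifree Hadamard states give a concrete illustration: a quasifree state $\omega$ is determined by its two-point function \begin{equation} W_\Mb(f_1,f_2)=\omega\bigl(\Phi_\Mb(f_1)\Phi_\Mb(f_2)\bigr), \end{equation} whose antisymmetric part is fixed by the commutation relations, $W_\Mb(f_1,f_2)-W_\Mb(f_2,f_1)=iE_\Mb(f_1,f_2)$, while the Hadamard condition constrains the short-distance singularity structure of $W_\Mb$. As the latter condition is local and covariant, the pull-back of a Hadamard two-point function along a hyperbolic embedding is again Hadamard, which is the content of \eqref{eq:sts_cont} in this model. 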
A reasonable question is whether one can find a state space consisting of a single state in each spacetime (even dropping the requirement of closure under operations), which would amount to a choice of a preferred state. However, it can be shown that this is impossible for dynamically local theories that obey standard assumptions in Minkowski space.\cite{FewVer:dynloc_theory} This turns long-standing folk-wisdom into a rigorous theorem. \subsection{Applications of the locally covariant framework} The locally covariant framework was first introduced about 15 years ago and has already led to substantial progress in understanding both general structural features of QFT in CST and also specific physical problems. Above all, the ideas of local covariance were instrumental in completing the perturbative construction of interacting theories in curved spacetime\cite{BrFr2000,Ho&Wa01,Ho&Wa02,Zahn_Dirac:2014}, and extending it to include theories with local gauge symmetries\cite{Hollands:2008,FreRej_BVqft:2012} and gravity (see Ref.~\refcite{BruFreRej_gravity} and Rejzner's contribution to these Proceedings). In the context of gravity, key issues are the identification of suitable gauge-invariant observables (see also Ref.~\refcite{Khav:2015}) and the use of relative Cauchy evolution in the discussion of background independence (see Ref.~\refcite{Zahn_bgindep:2015} for similar considerations in another context). 
Anomalies have also been studied.\cite{Zahn:2015} In addition, there have been a number of results of a structural nature: these include Verch's proof of the spin-statistics connection,\cite{Verch01} Sanders' results on the Reeh--Schlieder property,\cite{Sanders_ReehSchlieder} and the analysis of the superselection structure of locally covariant theories,\cite{Br&Ru05} including the identification of topological sectors in suitable spacetimes.\cite{BrunettiRuzzi_topsect} The fundamental question of whether a locally covariant theory can be said to represent the same physics in all spacetimes has been discussed; the issue is subtle, but there are positive results at least for the class of dynamically local theories.\cite{FewVer:dynloc_theory} As already described, the global gauge group has been understood at the functorial level,\cite{Fewster:gauge} along with the intrinsic definition of the stress-energy tensor.\cite{BrFrVe03} Quite recently, the split property\cite{Few_split:2015} (see also Ref.~\refcite{Few_DMV:2016} for a review) and modular nuclearity\cite{LecSan:2015} have been proved in the locally covariant framework, given suitable additional assumptions. Extensions of the locally covariant framework towards `higher' categorical structures are also under way.\cite{BenSchSza:2015} Finally, locally covariant ideas have been applied to problems in the theory of Quantum Energy Inequalities \cite{Few&Pfen06, Marecki:2006,Fewster2007} (e.g.\ to obtain a priori bounds on Casimir energy densities) and to questions in cosmology.\cite{DapFrePin2008,DegVer2010,VerchRegensburg,HackPin_chap:2015} The remainder of this contribution will focus on the spin-statistics connection. \section{Spin and Statistics}\label{sec:spinstats} \subsection{Introductory remarks} Observed elementary particles are either bosons of integer spin, or fermions of half-integer spin. 
Explanations of this connection between spin and statistics have been sought since the early days of quantum field theory \cite{Fierz:1939,Pauli:1940} and the rigorous proof of a connection between spin and statistics was an early and major achievement of the axiomatic Wightman framework.\cite{Burgoyne:1958, LuedersZumino:1958,StreaterWightman} Similarly, general results have been proved in the Haag--Kastler framework.\cite{Epstein:1967,DHRiv,GuidoLongo:1995} In this section, we will discuss how the connection can be established in the locally covariant framework, after suitable adaptations. To set the scene, let us recall the spin-statistics theorem of Burgoyne\cite{Burgoyne:1958} (see also \S II.5 in Ref.~\refcite{Haag}) which concerns a Wightman theory in Minkowski space, with Hilbert space $\HH$ and vacuum state vector $\Omega$. The universal cover $\SL(2,\CC)$ of the proper orthochronous Lorentz group $\Lc^\uparrow_+$ is unitarily represented on $\HH$ by $S\mapsto U(S)$. Let $\Phi(x)$ be a component of a spin-$J$ field, i.e., $\Phi$ is one of a multiplet of fields $\Phi_\alpha$ transforming as \begin{equation} U(S) \Phi_\alpha(x) U(S)^{-1} = D(S^{-1})_\alpha^{\phantom{\alpha}\beta} \Phi_\beta (\pi(S)x) , \end{equation} where $D$ is a spin-$J$ $\SL(2,\CC)$ representation and \begin{equation} \pi:\SL(2,\CC)\to \Lc^\uparrow_+, \qquad \sigma_\mu \pi(S)^\mu_{\phantom{\mu}\nu} = S \sigma_\nu S^* \end{equation} is the covering map. Burgoyne argues that the two-point function $\ip{\Omega}{\Phi(x)\Phi^*(y)\Omega}$ can be extended analytically and displays invariance under the \emph{complex} Lorentz group, leading to the identity \begin{equation} \ip{\Omega}{\Phi(x)\Phi^*(y)\Omega} = (-1)^{P+2J} \ip{\Omega}{\Phi^*(-y)\Phi(-x)\Omega} \end{equation} for spacelike separated $x,y$, where $P$ is fixed by \begin{equation} \Phi(x)\Phi^*(y) = (-1)^P \Phi^*(y)\Phi(x). 
\end{equation} In consequence, for any test function $f$, \begin{equation} \|\Phi^*(f)\Omega\|^2 = (-1)^{P+2J} \|\Phi(Rf)\Omega\|^2 \qquad\text{where}~ (Rf)(x)=f(-x), \end{equation} so (except for trivial $\Phi$) the spin--statistics connection $P=2J\pmod{2}$ holds. A more algebraic expression of the connection is the statement that \begin{equation}\label{eq:spinstats} A_1 A_2 = (-1)^{P_1 P_2} A_2 A_1, \qquad\text{if}~U(-\II)A_i U(-\II)^{-1}=(-1)^{P_i}A_i, \end{equation} where the $A_i$ are combinations of fields smeared in regions $O_i$ at spacelike separation or, more generally, any local observables associated with these regions; the previous statement of the spin-statistics connection is a special case of \eqref{eq:spinstats} because \begin{equation} U(-\II) \Phi_\alpha(x) U(-\II)^{-1} = D(-\II)_\alpha^{\phantom{\alpha}\beta} \Phi_\beta (x) = (-1)^{2J}\Phi_\alpha (x) . \end{equation} A number of assumptions play crucial parts in this argument, as can be seen from various well-known evasions of the standard spin-statistics relation. Hilbert space positivity evidently plays a decisive role, and indeed ghosts provide examples of anticommuting integer spin fields. Nonrelativistic fields, or even relativistic fields in infinite-dimensional multiplets, can also violate the spin-statistics connection, so the properties of $U(S)$ are crucial. The analytic continuation argument depends on energy positivity (bosonic statistics may be imposed on a Dirac field at the cost of sacrificing positivity of the Hamiltonian\cite{Pauli:1940}) and the ability to obtain a spacetime reflection symmetry $x\mapsto -x$ in the identity connected component of the complex Lorentz group. General curved spacetimes have no geometrical symmetries, no global notion of energy positivity and the $n$-point functions of typical states of interest are not expected to have analytic extensions. 
Burgoyne's proof, like all the other general proofs mentioned above, therefore has no traction in curved spacetime and there is no obvious way to repair it. Indeed, for many years, work on the spin-statistics connection in curved spacetimes was restricted to demonstrations that free models become inconsistent on general spacetimes if equipped with the wrong statistics (e.g., imposing anticommutation relations on a scalar field)\cite{Wald_Smatrix:1979,ParkerWang:1989} unless some other property such as positivity is sacrificed.\cite{HiguchiParkerWang:1990} The breakthrough was made by Verch,\cite{Verch01} who established a general spin-statistics theorem for theories defined on each spacetime by a single field which, in particular, obeys Wightman axioms in Minkowski space. Together with Ref.~\refcite{Ho&Wa02}, this paper was responsible for laying down many of the foundations of the locally covariant framework for QFT in curved spacetimes described in Section~\ref{sec:LCQFT}. Verch's assumptions allow certain properties of the theory on one spacetime to be deduced from its properties on another, provided the spacetimes are suitably related by restrictions or deformations of the metric. In particular, the spin-statistics connection is proved by noting that, if it were violated in any one spacetime, it would then be violated in Minkowski space, contradicting the classic spin-statistics theorem. There are nonetheless some good reasons to revisit the spin-statistics connection. First, as a matter of principle, one hopes to gain a better understanding of why spin is the correct concept to investigate in curved spacetime, given the lack of the rotational symmetries that are so closely bound up with the description of spin in Minkowski space. Second, Ref.~\refcite{Verch01} described spinor fields as sections of various bundles associated to the spin bundle. 
While this is conventional wisdom in QFT in CST, it has the effect of basing the discussion on geometric structures that are, in part, unobservable. This is unproblematic as long as the goal is to understand particular models such as the Dirac field; however, in order to understand the spin-statistics connection for general theories one needs a more fundamental starting point that avoids the insertion of spin by hand. Third, Ref.~\refcite{Verch01} confined itself to theories in which the algebra in each spacetime is generated by a single field, and the argument is indirect in parts. This section outlines a new and operationally well-motivated perspective on the spin-statistics connection in which spin emerges as a natural concept in curved spacetimes, and which leads to a more general and direct proof of the connection. In particular, there is no longer any need to describe the theory in terms of one or more fields. Full details will appear shortly;\cite{Few_spinstats} see also Ref.~\refcite{Few_Regensburg:2015} for a summary. The main new ideas are contained in a generalization of locally covariant QFT based on a category of spacetimes with global coframes (i.e., a `rods and clocks' account of spacetime measurements). As in Ref.~\refcite{Verch01} the goal is to prove that a spin-statistics connection in curved spacetime is implied by the standard results holding in Minkowski space; however, the proof becomes quite streamlined in the new formulation and the overall result can be formulated at a functorial level. Putting frames at the centre of the approach introduces redundancies because two coframes related by a global proper Lorentz transformation ought not to be physically distinguishable. A key part of the formalism involves tracking these redundancies, which leads naturally from an operational starting-point to a description that allows for spin. 
The functorial setting of locally covariant quantum field theory provides various tools that make this possible, but in a way that connects with traditional understandings of spin in, for example, Wightman field theory. \subsection{Locally covariant theories on coframed spacetimes} Our new spacetime category consists of coframed globally hyperbolic spacetimes, denoted $\FLoc$. The objects are pairs $\Mbb=(\Mc,e)$, where $\Mc$ is a smooth manifold of dimension $n$ on which $e=(e^\nu)_{\nu=0}^{n-1}$ is a global coframe, such that \begin{equation} \Lf(\Mc,e):= (\Mc, \eta_{\mu\nu} e^\mu e^\nu, [e^0], [e^0\wedge\cdots\wedge e^{n-1}]) \end{equation} defines a spacetime in the category $\Loc$. A morphism $\psi:(\Mc,e)\to (\Mc',e')$ in $\FLoc$ is, by definition, a smooth $\psi:\Mc\to\Mc'$ that induces a $\Loc$-morphism from $\Lf(\Mc,e)$ to $\Lf(\Mc',e')$ and obeys $\psi^*e' = e$. Nothing is lost, relative to the framework of Section~\ref{sec:LCQFT}, because each theory $\Af:\Loc\to\Phys$ induces a theory $\Af\circ\Lf:\FLoc\to\Phys$, and in four dimensions, every theory described using spin bundles has a similar reformulation on $\FLoc$.\footnote{Note that all orientable four-dimensional globally hyperbolic manifolds admit global coframings.} On the other hand, as the geometry does not fix a choice of frame, the use of $\FLoc$ introduces redundancies that must be tracked. An important point is that these redundancies can be expressed functorially. To each $\Lambda\in \Lc^\uparrow_+$, there is a functor $\Tf(\Lambda):\FLoc\to\FLoc$, \begin{equation} \Tf(\Lambda)(\Mc,e) = (\Mc,\Lambda e) \qquad (\Lambda e)^\mu= \Lambda^\mu_{\phantom{\mu}\nu} e^\nu \end{equation} with action on morphisms uniquely fixed so that $\Lf\circ\Tf(\Lambda)(\psi)=\Lf(\psi)$. In other words, $\Tf(\Lambda)$ is a rigid rotation of the frames in all spacetimes. Every theory $\Af:\FLoc\to\Phys$ now induces a family of theories $\Af\circ\Tf(\Lambda)$ labelled by $\Lambda\in\Lc^\uparrow_+$. 
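Note that the functors $\Tf(\Lambda)$ compose in the same way as the underlying Lorentz transformations: since $(\Lambda'(\Lambda e))^\mu = \Lambda'^\mu_{\phantom{\mu}\nu}\Lambda^\nu_{\phantom{\nu}\rho} e^\rho = ((\Lambda'\Lambda)e)^\mu$, one has \begin{equation} \Tf(\Lambda')\circ\Tf(\Lambda)=\Tf(\Lambda'\Lambda), \qquad \Tf(\II)=\id_{\FLoc}, \end{equation} so $\Lambda\mapsto\Tf(\Lambda)$ defines an action of $\Lc^\uparrow_+$ on $\FLoc$. 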
Our fundamental assumption is that all these theories should be physically equivalent, with such equivalences encoded by natural isomorphisms as explained in Section~\ref{sec:nat}. Thus we assume that to each $\Lambda\in\Lc^\uparrow_+$, there exists an equivalence $\eta(\Lambda): \Af\nto\Af\circ\Tf(\Lambda)$; for convenience we also assume that the $\eta(\Lambda)$ `commute' with the action of the global gauge group $\Gc$. It is remarkable that this assumption (without further specification of the $\eta(\Lambda)$'s) already yields a number of consequences. First, consider successive transformations $\Lambda$ and $\Lambda'$. There are two ways of comparing $\Af$ with $\Af\circ\Tf(\Lambda'\Lambda)$: directly using $\eta(\Lambda'\Lambda)$, or in two steps, using $\eta(\Lambda)$ followed by $\Lambda^*\eta(\Lambda')$, the equivalence defined by $(\Lambda^*\eta(\Lambda'))_\Mbb=\eta(\Lambda')_{\Tf(\Lambda)(\Mbb)}$. The comparison between them, i.e., the extent to which the diagram \begin{equation*} \begin{tikzpicture}[baseline=0 em, description/.style={fill=white,inner sep=2pt}] \matrix (m) [ampersand replacement=\&,matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] { \Af \& \Af\circ\Tf(\Lambda) \\ \& \Af\circ\Tf(\Lambda'\Lambda)\\ }; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$ \eta(\Lambda) $} (m-1-2) edge node[below,sloped] {$ \eta(\Lambda'\Lambda) $} (m-2-2) (m-1-2) edge node[auto] {$ \Lambda^*\eta(\Lambda') $} (m-2-2); \end{tikzpicture} \end{equation*} fails to commute, is measured by a \emph{$2$-cocycle} $\xi(\Lambda,\Lambda')$ of $\Lc^\uparrow_+$ in $\Zc(\Gc)$, the centre of $\Gc$. 
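In equation form, the failure of the diagram to commute can be expressed (schematically, and up to ordering conventions for the arguments of $\xi$) as
\begin{equation*}
\bigl(\Lambda^*\eta(\Lambda')\bigr)\circ\eta(\Lambda) = \xi(\Lambda,\Lambda')\cdot\eta(\Lambda'\Lambda),
\qquad \xi(\Lambda,\Lambda')\in\Zc(\Gc),
\end{equation*}
where $\xi(\Lambda,\Lambda')$ acts via the global gauge action; comparing the two ways of composing three successive transformations then shows that $\xi$ obeys the standard $2$-cocycle identity.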
Importantly, while there is freedom in choosing the $\eta(\Lambda)$'s, $\xi$ is unique up to cohomological equivalence, so \emph{each theory $\Af$ on $\FLoc$ determines a canonical cohomology class $[\xi]\in H^2(\Lc^\uparrow_+;\Zc(\Gc))$.} Under some circumstances, $[\xi]$ is trivial: e.g., if $\Zc(\Gc)$ is trivial or $\Af$ is induced from a theory on $\Loc$ (as we may take $\eta(\Lambda)=\id$ for every $\Lambda$). As discussed in Section~\ref{sec:LCQFT}, the \emph{scalar} (one-component) fields of the theory form a $*$-algebra $\Fld(\Df,\Af)$. This algebra now carries actions of both the gauge group and the Lorentz group: \begin{align*} (\alpha\cdot\Phi)_{(\Mc,e)} (f) &= \alpha_{(\Mc,e)}\Phi_{(\Mc,e)}(f) & (\alpha\in\Gc)\\ (\Lambda\star \Phi)_{(\Mc,\Lambda e)}(f) &= \eta(\Lambda)_{(\Mc,e)}\Phi_{(\Mc,e)}(f) &(\Lambda\in\Lc^\uparrow_+). \end{align*} These actions commute, and one finds \begin{equation} (\Lambda'\Lambda)\star\Phi= \xi(\Lambda',\Lambda)\cdot (\Lambda'\star(\Lambda\star\Phi)), \end{equation} from which it follows that irreducible subspaces of $\Fld(\Df,\Af)$ under the action of $\Lc^\uparrow_+\times \Gc$ carry multiplier representations of $\Lc^\uparrow_+$, determined by $\xi$. This `rediscovers' the classification of fields according to representations of the universal cover of $\Lc^\uparrow_+$. So far we have avoided specifying the $\eta(\Lambda)$. However the dynamics of the theory suggests a way to construct them. Consider the spacetimes $(\Mc,e)$ and $(\Mc,\Lambda e)$. Let $\tilde{\Lambda}\in C^\infty(\Mc,\Lc^\uparrow_+)$ agree with $\Lambda$ (resp., the identity) everywhere to the future (resp., past) of suitably chosen Cauchy surfaces. Thus it is locally constant outside a time-compact set. Then the spacetime $(\Mc,\tilde{\Lambda}e)$ interpolates between $(\Mc,e)$ (with which it agrees sufficiently to the past) and $(\Mc,\Lambda e)$ (with which it agrees sufficiently to the future). 
Assuming the theory obeys the timeslice axiom, we may obtain in this way an isomorphism between $\Af(\Mc,e)$ and $\Af(\Mc,\Lambda e)$ (cf.\ the construction of relative Cauchy evolution in Section~\ref{sec:LCQFT}). Without further assumptions, this isomorphism could depend on the details of $\tilde{\Lambda}$. However, it is reasonable to assume that frame rotations that are trivial outside a time-compact set and are homotopically trivial within this class induce a trivial relative Cauchy evolution. In this case, the isomorphism from $\Af(\Mc,e)$ to $\Af(\Mc,\Lambda e)$ depends on $\tilde{\Lambda}$ only via its homotopy class among $C^\infty(\Mc,\Lc^\uparrow_+)$ maps that are locally constant outside time-compact sets. The upshot is that each $S$ in the universal cover of $\Lc^\uparrow_+$ induces an isomorphism \begin{equation} \zeta_{(\Mc,e)}(S): \Af(\Mc,e)\longrightarrow \Af(\Mc,\pi(S) e). \end{equation} Provided that these form the components of a natural isomorphism $\zeta(S):\Af\nto \Af\circ\Tf(\pi(S))$ (which holds subject to an additivity assumption), we may obtain our required family of equivalences by $\eta(\Lambda)=\zeta(S_\Lambda)$, where $S_\Lambda$ is any lift of $\Lambda$ to the universal cover. Applying these structures in $n=4$ dimensions, one finds that $\zeta(-\II)$ is a global gauge transformation $\zeta(-\II) \in\Gc$ obeying $\zeta(-\II)^2 = \zeta(\II)= \id$. Moreover, in Minkowski spacetime $\Mbb_0=(\RR^4,(dx^\mu)_{\mu=0}^{3})$, if $\SL(2,\CC)$ is unitarily implemented by $S\mapsto U(S)$ then $\zeta(S)_{\Tf(\pi(S)^{-1})(\Mbb_0)}\circ\Af(\psi_{\pi(S)})$ is implemented by $\ad_{U(S)}$. This shows how the various aspects of the standard Minkowski transformation under $U(S)$ are implemented in the locally covariant framework: the active transformation of points is achieved by the functor $\Af(\psi_{\pi(S)})$ while the passive relabelling of field components is done by $\zeta(S)$. 
In particular, in the special case $S=-\II$, $\zeta(-\II)_{\Mbb_0}$ is implemented by the adjoint action of $U(-\II)$, i.e., the $2\pi$ rotation, and can be termed the \emph{univalence automorphism}. \subsection{The spin-statistics connection in four dimensions} We first need a definition of `statistics' at the functorial level. An involutory global gauge transformation $\gamma\in\Gc$, $\gamma^2=\id$, will be said to \emph{grade statistics in $\Mbb$} if \begin{equation} A_1 A_2 = (-1)^{\sigma_1\sigma_2} A_2 A_1 \end{equation} holds for all local operators $A_i\in\Af^\kin(\Mbb;O_i)$, where $O_1$ and $O_2$ are spacelike separated, that obey $\gamma_{\Mbb} A_i=(-1)^{\sigma_i}A_i$. From this point of view, the standard spin-statistics connection precisely asserts that $\zeta(-\II)$ grades statistics in Minkowski space $\Mbb_0$. What can be proved is that, if such a $\gamma$ grades statistics in $\Mbb_0$, then it does so in every spacetime of $\FLoc$, an argument that depends critically on the timeslice property. Now, if the theory obeys the standard spin-statistics connection in Minkowski space -- for example, if $\Af(\Mbb_0)$ can be identified with a Wightman theory -- then $\zeta(-\II)$ must grade statistics in every $\Mbb\in\FLoc$.\cite{Few_spinstats,Few_Regensburg:2015} What this means is that the statistics are directly related to the $2\pi$-rotation of frames in every spacetime, and indeed, the statement can be made in a spacetime-independent fashion that $\zeta(-\II)$ grades statistics for $\Af$. While the proof is indirect, because one argues from the connection in Minkowski space rather than proving it afresh in each spacetime, this of course does not detract from the worth of the statement. As mentioned at the start of this section, the spin-statistics connection is rather subtle, and even Feynman was forced onto the defensive: \begin{quote} We apologize for the fact that we cannot give you an elementary explanation. 
An explanation has been worked out by Pauli from complicated arguments of quantum field theory and relativity. ...[W]e have not been able to find a way of reproducing his arguments on an elementary level. ... The explanation is deep down in relativistic quantum mechanics. This probably means that we do not have a complete understanding of the fundamental principle involved. [RP Feynman, Lectures on Physics III (\S 4.1)]\cite{FeynmanIII} \end{quote} For the moment, I have to add my own apologies for the lack of a direct proof. However, that one can prove structural results of quantum field theory in curved spacetime at all is a notable achievement, and indicates the power of the locally covariant framework. \section*{Acknowledgments} I am grateful to the organisers of the QF2 session for arranging partial financial support under the ERC Advanced Grant ``Operator Algebras and Conformal Field Theory'' (PI Roberto Longo). {\small
\subsection{Neutral DY} \input{plot_DY} For the neutral DY case, the fixed order results and the associated uncertainties have been discussed in greater detail in Ref.~\cite{Baglio:2022wzu}. Hence, we do not repeat them here; instead, we focus on the resummed results. In the left panel of \fig{fig:matched_kfac_DY}, we present the invariant mass distribution $Q^2 d\sigma / dQ^2$ up to N$^3$LO+N$^3$LL, varying $Q$ from $250$ GeV to $3000$ GeV. The corresponding $R_{i0}$-factors defined above are given in the right panel. It can be seen that $R_{20}$ is larger than $R_{30}$ up to about $Q=2000$ GeV, after which they slowly converge to each other, while $R_{10}$ remains smaller than both over the entire $Q$ region considered. We also present the $R_{ii}$-factors, which estimate the contribution of the higher order threshold logarithms over the respective FO results. The effect of threshold resummation is prominent at NLO; however, its contribution at the N$^3$LO level is very small. This is expected, as the FO results for the DY case already converge from the NNLO level onwards, unlike the case of Higgs production in the gluon fusion channel \cite{Harlander:2002wh,Ravindran:2003um,Anastasiou:2015vya,Mistlberger:2018etf}. The resummed results at N$^3$LO+N$^3$LL thus demonstrate excellent convergence of the perturbation theory. \subsection{Charged DY} \input{plot_WmDY} Similar to the neutral DY case, we present in \fig{fig:matched_kfac_WmDY} the invariant mass distribution (left panel) for the charged DY ($W^{-*}$) case up to N$^3$LO+N$^3$LL accuracy, together with the corresponding $R_{i0}$-factors (right panel). For charged DY, the underlying parton fluxes differ from those of the neutral DY case, which results in a slightly different behavior of the higher order corrections. This, together with the NLL enhancement of the cross-sections, explains the observed behavior of the $R$-factors. 
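For orientation, the ratios used throughout this section have the schematic form (this is a shorthand consistent with their usage here; the precise definition is the one given in \eq{eq:ratio})
\begin{equation*}
R_{ij} \sim \frac{\sigma_{\text{N}^i\text{LO}+\text{N}^i\text{LL}}}{\sigma_{\text{N}^j\text{LO}}}\,,
\end{equation*}
so that $R_{i0}$ measures the resummed result relative to the LO prediction, while $R_{ii}$ isolates the effect of the resummed threshold logarithms on top of the FO result at the same order.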
Quantitatively, the impact of the QCD corrections in the high invariant mass region ($Q \ge 2500$ GeV) is smaller than for the neutral DY case. \input{plot_WpDY} Similar results have been obtained for $W^{+*}$, and the corresponding $R_{i0}$-factors are shown in \fig{fig:matched_kfac_WpDY}. Again, owing to underlying parton fluxes that differ already at the Born level, the behavior of these $R_{i0}$-factors is expected to differ from that of both neutral DY and $W^{-*}$. Besides, the resummed results at N$^3$LO+N$^3$LL play a significant role in reducing the conventional $7$-point scale uncertainties. The scale uncertainties in the resummed predictions up to N$^3$LO+N$^3$LL accuracy are given in \fig{fig:match_uncertainty_7points_DY} for neutral DY (left panel), $W^{-*}$ (middle panel) and $W^{+*}$ (right panel). While the FO scale uncertainties for the neutral DY case at $Q=3000$ GeV are reduced from about $1.5\%$ at the NNLO level to about $0.4\%$ at the N$^3$LO level, those in the resummed results are significantly reduced, from about $0.2\%$ at NNLO+NNLL to about $0.1\%$ at N$^3$LO+N$^3$LL. The DY process is thus among the processes with the most precise theoretical predictions available to date; the residual uncertainties in these cross-sections are essentially due to the PDFs, which we discuss later. It should be noted that the scale uncertainties in the resummation context do not improve over the FO results in the low $Q$ region, say below $1000$ GeV, where the threshold logarithms are not the sole dominant contributions to the cross-sections. However, the scale uncertainties in the resummed results are reduced in the high $Q$ region. 
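The conventional $7$-point scale variation used throughout can be made concrete with a short sketch. This is an illustrative implementation of the standard convention only; the callable \texttt{sigma} is a hypothetical stand-in for the actual N$^3$LO(+N$^3$LL) prediction and is not part of any code used in this work.

```python
# Sketch of the conventional 7-point scale variation: (muR, muF) are
# varied by factors of 1/2 and 2 around the central scale Q, dropping
# the two extreme combinations with muR/muF = 4 or 1/4.
def seven_point_scales(Q):
    factors = (0.5, 1.0, 2.0)
    return [(fr * Q, ff * Q)
            for fr in factors for ff in factors
            if 0.5 <= fr / ff <= 2.0]

def scale_uncertainty(sigma, Q):
    """Envelope of a cross-section prediction over the 7-point set.

    `sigma(muR, muF)` is a stand-in callable for the actual
    prediction, which is not reproduced here."""
    values = [sigma(mur, muf) for mur, muf in seven_point_scales(Q)]
    central = sigma(Q, Q)
    return ((max(values) - central) / central,
            (min(values) - central) / central)
```

The quoted percentage uncertainties correspond to the (asymmetric) envelope returned by such a scan at each value of $Q$.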
To elaborate on this, we compare and contrast the $7$-point scale uncertainties in the FO and resummed results at third order in QCD (see \fig{fig:comparison_uncertainty_7points_DY}). The scale uncertainties in the low $Q$ region (where regular terms and other parton channel contributions are non-negligible) are smaller for the FO case, while they are smaller for the resummed case in the high $Q$ region (where the threshold logarithms are important). To see the effects of resummation over the FO results at a given order, it is useful to consider the factors $R_{ii}$ defined in \eq{eq:ratio}. In \fig{fig:match_fo_ratio_DY}, we plot the factors $R_{11}$, $R_{22}$ and $R_{33}$ for all three DY processes as a function of the invariant mass of the di-leptons. In all these plots, the $R_{11}$ contribution is dominant, particularly in the high $Q$ region, while $R_{33}$ is almost unity except for a small contribution in the high invariant mass region. \subsection{$VH$} In this section, we present the numerical results for Higgs production in association with a massive vector boson $V=Z, \, W^-, \, W^+$. We present results for both the invariant mass distribution and the total production cross-sections at hadron colliders for different center of mass energies. For the invariant mass distribution, however, our results are confined to the DY-type process, i.e., the production of an off-shell gauge boson followed by its decay to an on-shell $V$ and $H$. \input{plot_ZH} It is worth noting that the total production cross-sections for these processes up to NNLO are available through the public code {\tt vh@nnlo} \cite{Brein:2012ne}. The updated version {\tt vh@nnlo 2.1} \cite{Harlander:2018yio} can handle both the SM Higgs and other BSM scenarios in which a Higgs boson is produced in association with a gauge boson. In this version, the code is also interfaced with {\tt MCFM} \cite{Campbell:2016jau} to produce invariant mass distributions up to the NNLO level. 
In the present context, we provide the invariant mass distribution up to third order (N$^3$LO+N$^3$LL). To achieve this, we use the invariant mass distributions of the DY processes available at the N$^3$LO level through the {\tt n3loxs} code \cite{Baglio:2022wzu}, and numerically incorporate the decay of the off-shell vector boson to $V$ and $H$ instead of to leptons, using Eqs.~(2) and (3) of Ref.~\cite{Harlander:2018yio}. For consistency, we reproduce the results obtained from {\tt vh@nnlo 2.1} up to NNLO for the $VH$ invariant mass distribution. We then extend our analysis to the fixed order N$^3$LO level. For the resummation, we use our in-house numerical code, similar to the one used for the di-lepton production case discussed in the previous sections. In \fig{fig:fo_resum_ZH}, we plot the invariant mass distribution of $VH$ at FO (left panel) and at the resummed level (right panel) up to third order (N$^3$LO+N$^3$LL) for the $ZH$ production process, varying $Q$ from $250$ GeV to $3000$ GeV. Because the branching of the off-shell $V^*$ to $VH$ differs from that to di-leptons, the production cross-sections will certainly differ from those of the neutral DY production of di-leptons. However, the corresponding $K$-factors and $R$-factors, defined in \eq{eq:ratio}, are expected to be almost the same as those of the neutral DY case, owing to the cancellation of the branching factor in these ratios. \input{tableZHnew} \input{tableZHcompare} \input{plot_WmH} In \fig{fig:matched_kfac_ZH}, we present these $K$-factors (left panel) and $R$-factors (right panel) up to third order (N$^3$LO+N$^3$LL). The corresponding scale uncertainties for the $ZH$ production case are shown for the FO results in \fig{fig:fo_uncertainty_ZH} and for the resummed case in \fig{fig:match_uncertainty_ZH}. We notice that the behavior of the FO scale uncertainties is the same as that for the neutral DY case. 
For completeness, in \fig{fig:fo_uncertainty_ZH}, we show separately the uncertainties due to the $7$-point scale variations (left panel), those due to $\mu_R$ alone for fixed $\mu_F=Q$ (middle panel), and those due to $\mu_F$ alone for fixed $\mu_R=Q$ (right panel) for the FO results. We find that the factorization scale uncertainties at higher orders, namely at the NNLO and N$^3$LO levels, are smaller than those due to the renormalization scale. In \fig{fig:match_uncertainty_ZH}, we show plots similar to those in \fig{fig:fo_uncertainty_ZH}, but for the resummed results. Here, however, we find that the factorization scale uncertainties are larger than those due to the renormalization scale variations. This is expected because, in the resummation case, the factorization scale dependence is included only from the threshold region, and only in the $q\bar{q}$ channel. At fixed order, by contrast, the full $\mu_F$ scale dependence is included in the coefficient functions up to the N$^3$LO level. We also notice that, in general, the scale uncertainties in the FO results increase in the high $Q$ region, albeit very slowly. On the contrary, the scale uncertainties in the resummed predictions decrease with increasing $Q$. This is because, in the high $Q$ region, the bulk of the cross-section is dominated by threshold logarithms, which are resummed to all orders in perturbation theory. In \fig{fig:fo_resum_WmH} and \fig{fig:fo_resum_WpH}, we show the results for the invariant mass distributions of the $W^- H$ and $W^+ H$ production processes, respectively. The corresponding $K$-factors and $R_{i0}$-factors are shown in \fig{fig:matched_kfac_WmH} and \fig{fig:matched_kfac_WpH}. The results for the invariant mass distribution differ from those of the respective di-lepton production processes through off-shell $W^-$ and $W^+$ gauge bosons. However, the corresponding $K$- and $R$-factors are almost the same. 
Due to the underlying parton fluxes, these $K$-factors and $R$-factors for the $WH$ production case will, on the other hand, certainly differ from those of the $ZH$ case. In \fig{fig:match_uncertainty_7points}, we show the $7$-point scale uncertainties in the invariant mass distributions of the $W^-H$ and $W^+H$ processes. For these new results, i.e., the invariant mass distribution for the $VH$ process at N$^3$LO and N$^3$LO+N$^3$LL, we compare the $7$-point scale uncertainties in the FO and resummed results in \fig{fig:comparison_uncertainty_7points}. The behavior of the scale uncertainties is almost identical to that of the neutral DY case (see \fig{fig:comparison_uncertainty_7points_DY}): the scale uncertainties are smaller in the low $Q$ region for the FO case, while the same holds for the resummed case in the high $Q$ region. There is a small, negligible difference between the $ZH$ case and the neutral DY case, mostly due to the presence of the photon contribution in the latter. We also present in \fig{fig:matched_r0fac_comp} the ratios $R_{ii}$ for $i=1,2,3$ for the $ZH$, $W^-H$ and $W^+H$ cases. It is also worth noting that, with the automation of most NLO calculations \cite{Frederix:2018nkq,Alwall:2014hca}, NLO results are readily available for these processes, and hence it is particularly useful to estimate $R_{i1}$ for $i=1,2,3$. We plot these $R_{i1}$ factors for all the $VH$ processes in \fig{fig:matched_r1fac_comp} as a function of $Q$. We notice that, in general, approximately for $Q > 2000$ GeV, $R_{21}$ and $R_{31}$ merge with each other, although their values differ between processes. In the high $Q$ region, around $3000$ GeV, $R_{31}$ is largest for the $W^-H$ case, at about $1.105$; it is smallest for the $W^+H$ case, at about $1.03$; and for the $ZH$ case it is about $1.065$. 
\input{plot_WpH} For the $VH$ production process, we also give the total production cross-sections obtained by integrating the invariant mass distributions over the entire kinematically accessible $Q$ region. First, we give these total cross-sections for the DY-type $ZH$ production process from LO to N$^3$LO+N$^3$LL for center of mass energies $\sqrt{S} = 7, 8, 13, 13.6 \text{ and } 100$ TeV in \tab{tab:tableZHnew}. The corresponding $7$-point scale uncertainties are also provided in each case. For the total production cross-sections, the bulk of the contribution comes essentially from the low $Q$ region, and hence the corresponding scale uncertainties in the resummed results are predominantly inherited from the FO results that enter through the matching procedure. Hence, the scale uncertainties are smaller for the FO case than for the resummed case. However, one can clearly see that these scale uncertainties decrease, for any center of mass energy, as we go from LO to N$^3$LO in FO, or from LO+LL to N$^3$LO+N$^3$LL in the resummation series. For example, for the $13.6$ TeV case, the scale uncertainties at LO are $4.06\%$ and get reduced to about $0.33\%$, while those at LO+LL are $4.44\%$ and get reduced to about $0.58\%$ at N$^3$LO+N$^3$LL. In \tab{tab:tableWmH}, we present similar results for the $W^-H$ case, and in \tab{tab:tableWpH} for the $W^+H$ case. In all these results, the general observation is that the scale uncertainties increase with the center of mass energy $\sqrt{S}$ of the incoming hadrons. Finally, in the context of $VH$ production, we note that the DY-type contributions do not give the complete FO predictions for the $VH$ case. Starting from ${\cal O} ({a_S^2})$, the $ZH$ process in particular receives contributions from the gluon fusion channel \cite{Kniehl:1990iva}, the bottom annihilation channel, as well as the heavy top-loop contributions \cite{Brein:2011vx}. 
For the gluon fusion channel, the LO contribution formally enters at the same order as the NNLO of the DY-type $ZH$ production process, and the results for this channel are already available. The NLO corrections to this gluon fusion channel, which contribute at the ${\cal O} (a_S^3)$ level, appear at the same order as the N$^3$LO DY-type corrections; these NLO corrections have been computed in the effective theory \cite{Altenkamp:2012sx}. It is worth noting that, in the recent past, there has been significant progress towards the computation of the NLO corrections to this gluon fusion process including finite top quark mass effects \cite{Davies:2020drs,Chen:2020gae,Wang:2021rxu,Alasfar:2021ppe,Bellafronte:2022jmo,Chen:2022rua,Degrassi:2022mro}. The top-loop contributions are available at the $a_S^2$ level \cite{Brein:2011vx} and are implemented in the {\tt vh@nnlo 2.1} code. In the present context, we consider all three of these contributions to the $ZH$ production process and define the total production cross-section as \begin{eqnarray} \sigma^{tot,ZH}_{\text{N}^3\text{LO}} = \text{$\sigma^{\text{DY,ZH}}_{\text{N$^3$LO}}$} + \sigma^{gg} (a_S^3) + \sigma^{\text{top}} (a_S^2) + \sigma^{b\bar{b}} \, , \label{eq:ZHtotal_fo} \end{eqnarray} \begin{eqnarray} \sigma^{tot,ZH}_{\text{N}^3\text{LO+N}^3\text{LL}} = \text{$\sigma^{\text{DY,ZH}}_{\text{N$^3$LO+N$^3$LL}}$} + \sigma^{gg} (a_S^3) + \sigma^{\text{top}} (a_S^2) + \sigma^{b\bar{b}} \, , \label{eq:ZHtotal_resum} \end{eqnarray} where the power of $a_S$ in parentheses denotes the highest order up to which the respective contribution has been taken into account. We present these results for the $ZH$ case in \tab{tab:tableZHcompare}. For $WH$ production, there are no contributions from the gluon fusion and bottom annihilation processes; however, there is a contribution from top-loops from NNLO onwards. 
Hence, we define the total production cross-sections for the $WH$ case as \begin{eqnarray} \sigma^{tot,WH}_{\text{N}^3\text{LO}} = \text{$\sigma^{\text{DY,WH}}_{\text{N$^3$LO}}$} + \sigma^{\text{top}} (a_S^2) \, , \label{eq:WHtotal_fo} \end{eqnarray} \begin{eqnarray} \sigma^{tot,WH}_{\text{N}^3\text{LO+N}^3\text{LL}} = \text{$\sigma^{\text{DY,WH}}_{\text{N$^3$LO+N$^3$LL}}$} + \sigma^{\text{top}} (a_S^2) \, . \label{eq:WHtotal_resum} \end{eqnarray} These production cross-sections up to N$^3$LO+N$^3$LL are given in \tab{tab:tableWmHcompare} for the $W^-H$ case and in \tab{tab:tableWpHcompare} for the $W^+H$ case. We also note that, at this level of accuracy, the electroweak corrections \cite{Ciccolini:2003jy,Denner:2011id} are also competitive. For the current LHC energies, they amount to about $-5.28\%$ for the $ZH$ and $-6.88\%$ for the $WH$ total production cross-sections. They could easily be included in the analysis; however, for simplicity we do not consider them here and focus only on the QCD corrections. \input{tableWmH} \input{tableWmHcompare} \input{tableWpH} \input{tableWpHcompare} Finally, we estimate the uncertainties in our predictions for the $VH$ invariant mass distributions due to the choice of PDFs used in our analysis. For this, we compute the invariant mass distributions at N$^3$LO+N$^3$LL using different choices of PDFs: ABMP16 \cite{Alekhin:2017kpj}, CT18 \cite{Hou:2019qau}, NNPDF23 \cite{Ball:2012cx} and PDF4LHC15 \cite{Butterworth:2015oua}. All these sets are taken at the NNLO level, and the invariant mass distributions are obtained for the central set (iset=0). We normalize these results with respect to those obtained from our default choice of MMHT2014nnlo PDFs (iset=0) at the central scale choice, and present them in \fig{fig:pdf_uncertainty_comp} for $ZH$ (left panel), $W^-H$ (middle panel) and $W^+H$ (right panel). 
The bands for each PDF set in these figures represent the corresponding $7$-point scale uncertainties, which are found to decrease with $Q$ for all the PDF sets considered here except NNPDF23. For the latter, the scale uncertainties first decrease with $Q$ up to about $1500$ GeV and then slowly increase. The uncertainty in these predictions due to the choice of PDF set is smaller in the low $Q$ region, at most $4\%$, but tends to increase with $Q$. In the high $Q$ region, $Q > 2000$ GeV, the deviations from the MMHT2014 results are in general larger for the ABMP16 and CT18 PDF sets for the $ZH$ and $W^-H$ processes. The deviations at $Q=2000$ GeV for the $ZH$ case are about $4.2\%$, $6.6\%$, $5.3\%$ and $4.0\%$ for the ABMP16, CT18, NNPDF23 and PDF4LHC15 PDF sets, respectively. The corresponding numbers at $Q=3000$ GeV are about $9.0\%$, $8.3\%$, $2.9\%$ and $4.3\%$, respectively. The deviations are largest for the $W^-H$ case in the high $Q$ region, reaching about $20\%$ for the ABMP16 sets. \subsection{$b\bar{b}H$} \input{plot_bbH} As a final process, we consider Higgs boson production via bottom quark annihilation at the LHC. This process has already been studied up to NNLO+NNLL by some of the authors in Ref.~\cite{Ajjath:2019neu}, including a detailed uncertainty analysis; that work was also extended to N$^3$LO+N$^3$LL, with uncertainties estimated only from the unphysical renormalization scale. In this work, we complement those studies by considering the $b\bar{b} \to H$ production cross-sections together with detailed $7$-point scale uncertainties for the choice of parameters given before. 
We present in \fig{fig:match_cme_bbH} these production cross-sections from LO+LL to N$^3$LO+N$^3$LL, along with the respective scale uncertainties, at hadron colliders for center of mass energies $\sqrt{S}$ from $7$ TeV to $100$ TeV. For $\sqrt{S} = 13.6$ TeV, we find that the $7$-point scale uncertainties are $5.26\%$ at N$^3$LO and $5.46\%$ at N$^3$LO+N$^3$LL. The scale uncertainties are found to decrease with increasing logarithmic accuracy of the resummation, while at any given order in perturbation theory they are in general found to increase with $\sqrt{S}$. \section{Introduction} \label{sec:introduction} \input{intro} \section{Theoretical Framework} \label{sec:theory} \input{theory} \section{Numerical Results}\label{sec:numerics} \input{discussion} \section{Conclusions}\label{sec:conclusion} \input{conclusion} \section*{Acknowledgements} We thank V.\ Ravindran for useful discussions. G.D. also thanks J.\ Baglio for the discussion related to \texttt{n3loxs}. C.D. would like to thank the cluster computing facility \textit{Param Ishan} at IIT Guwahati, where most of the computational work has been carried out. The research work of M.C.K. is supported by SERB Core Research Grant (CRG) under the project CRG/2021/005270. The research of G.D. is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 - TRR 257 (\textit{Particle Physics Phenomenology after Higgs discovery.}). The research work of K.S. is supported by Shanghai Natural Science Foundation under Grant No. 21ZR1406100. \bibliographystyle{JHEP}
\section{\label{sec:intro}Introduction} Neural networks are widely used across various sectors to perform challenging data analysis tasks, but the high energy cost of training increasingly complex models is an escalating problem. For example, training a state-of-the-art model, a Transformer with 213M parameters (including neural architecture search), was estimated to emit 626,155 lbs of CO$_2$, whereas driving a car (average fuel consumption) for one lifetime emits only about 126,000 lbs \cite{strubell2019energy}. One solution to the energy issue is to create new hardware platforms for neuromorphic computation using functional materials that intrinsically perform the required computation, potentially achieving greater efficiency than conventional CMOS approaches that merely simulate it. Recurrent neural networks (RNNs) are inspired by the high interconnectivity of biological systems and are a potent tool for tasks involving complex temporal data sequences. However, their temporal interconnectivity requires complex training methods, which are computationally expensive and challenging to implement in hardware. The reservoir computing (RC) paradigm provides a solution using an RNN with fixed, random synaptic weights (the reservoir) that transforms inputs into higher dimensional representations before passing them to a single feed-forward output layer. The weights of this output layer can be calculated by minimizing an error function defined, for instance, as the squared difference between the desired and the predicted output. The output layer contains no temporal dependencies, and thus training becomes relatively trivial. Ultimately, the reservoir does not need to be a neural network; it can be any suitable nonlinear system that exhibits hysteresis. RC is particularly well suited to neuromorphic hardware-based implementations. 
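The readout training described above can be sketched in a few lines. This is a minimal, illustrative software reservoir (an echo-state-network-style toy, not any of the physical devices discussed here); all sizes and parameters are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed, random reservoir; the weights W_in and W are generated
# once and never trained.
n_in, n_res = 1, 50
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep the dynamics stable

def run_reservoir(u):
    """Drive the reservoir with a scalar input sequence; collect states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input delayed by one step (requires memory).
u = rng.uniform(-1.0, 1.0, 500)
y_target = np.roll(u, 1)
y_target[0] = 0.0

X = run_reservoir(u)

# Only the linear readout is trained, by regularized least squares --
# i.e., minimizing the squared difference between desired and
# predicted outputs, exactly as described in the text.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y_target)

y_pred = X @ W_out
mse = np.mean((y_pred[10:] - y_target[10:]) ** 2)  # skip initial transient
```

The key point of the sketch is that the learning step touches only `W_out`; the reservoir itself could equally be replaced by measurements of a physical system with suitable nonlinearity and memory.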
Since the learning process does not interfere with the reservoir dynamics, we may use any material device which provides appropriately complex dynamics and memory in the place of a neural network reservoir. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig1-Hysteresis.pdf} \caption{Magnetic hysteresis loop and complex magnetic textures. (a) Bistable hysteresis loop where the applied magnetic field (H) controls the orientation of the magnetization (M). (b-d) Examples of complex magnetic textures are domain walls, skyrmion, and artificial spin ice. } \label{fig:hysteresis} \end{figure} Reservoirs have been explored across a range of technologies, including photonic\cite{nakajima2021scalable}, mechanical\cite{dion2018reservoir}, and memristive\cite{gaurav2022reservoir} systems. Nanomagnetic systems have properties that make them particularly well-suited to act as reservoirs. For example, the magnetic hysteresis loop depicted in Figure \ref{fig:hysteresis} shows a non-linear response (the net magnetization) to a stimulus (the applied field). Bistable remnant magnetization states, shown schematically in Figure \ref{fig:hysteresis} (a), can form the basis of the system's memory. Furthermore, in extended systems, interactions between moments give rise to a wealth of magnetization textures with complex dynamics that provide a rich playground for exploring novel devices. Some example textures are shown in Figure \ref{fig:hysteresis} (b-d): magnetic domain wall, skyrmion, and artificial spin ice systems. Short-range exchange and longer-range magnetostatic interactions offer {\it in materia} pathways to creating reservoirs with multiple physical nodes without the need for complex material synapses between nodes. 
The historical use of magnetic materials in hard-disk drives, sensors, and random access memories means integration with CMOS and techniques for reading (e.g., magnetoresistance effects) and writing data (e.g., magnetic fields, spin torque effects) are also well-established. In this perspective, we will first review the approaches for creating {\it in materia} reservoirs using nanomagnetic materials and their various strengths and weaknesses. We will discuss the most common training methods that map their physical behaviors into meaningful data outputs. Next, we will discuss simulation tools that can assist in exploring the feasibility of reservoir computing with different magnetic systems and the characterization methods and benchmark problems commonly used to establish computational capability. Finally, we present some key challenges in the field and potential approaches to address these. \section{Materials and devices} Due to their attractive properties, several nanomagnetic systems have been deemed suitable as reservoirs. These systems include spin torque oscillators (STOs)\cite{Torrejon++2017,riou__2019,Markovic++2019,furuta_2018,jiang_2019}; spin ice arrays\cite{Jensen++2018:spinice,Jensen++2020:spinice,zhou2020reservoir,hon_2021,Nomura_2019, https://doi.org/10.48550/arxiv.2107.08941}; skyrmion textures\cite{Pinna++2020,prychynenko__2018,jiang_2019}; superparamagnetic arrays\cite{Welbourne++2021}; magnonic systems\cite{watt2020reservoir}; and domain wall devices\cite{Dawidek++2021,ababei__2021}. Most studies to date are simulation-based, although some demonstrations of RC with real devices have been performed, providing important evidence of real-world feasibility\cite{Torrejon++2017,https://doi.org/10.48550/arxiv.2107.08941,watt2020reservoir}. In general, nanomagnetic reservoirs can be classified based on several characteristics (e.g., energy consumption, operating speed, and device size). 
Here we introduce a taxonomy that classifies proposed devices by (a) Input/Output Dimensionality (IOD) and (b) Dynamical Response (DR) (Figure~\ref{fig:taxonomy}). For IOD, RC requires multiple outputs from the reservoir (i.e., simultaneous measures of reservoir state) and benefits from multiple, simultaneous data inputs. Many devices proposed for use in RC are simple dynamical nodes with only a single input and output (IOD-1D). To use IOD-1D devices as reservoirs, we must expand the dimensionality of input and output data by using time-multiplexing techniques \cite{appeltant_2011}, an approach often referred to as "delay line" RC. However, other proposed devices consist of many spatially distributed, interacting elements/regions. These naturally possess \emph{N}-dimensional state vectors and thus offer an \emph{in materia} pathway to defining multiple input and output dimensions (IOD-N). Reservoirs containing multiple \emph{non-interacting} devices can also be powerful, provided that each device offers a different non-linear mapping of input signals\cite{furuta_2018}. For DR, many proposed magnetic reservoirs exploit the damped, oscillatory motion of individual magnetic moments, as described by the Landau-Lifshitz-Gilbert (LLG) equation of motion (DR-LLG). These dynamics have high MHz--THz frequencies and ns decay times for ferromagnetic materials, making them well-suited to high-speed data processing applications. RC is also attractive for real-time signal processing, where reservoir timescales must match those of the external signals, whether of low or high frequency. However, as the dynamics of DR-LLG systems occur on nanosecond timescales, they are too fast for many real-time tasks; one must use external electronics to "speed up" data input or improve long-term dependencies via delay lines. Effectively, we treat the magnetic devices as non-linear activation functions with short-term temporal dependencies\cite{riou__2019}. 
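To make the time-multiplexed ("delay line") input scheme concrete, the following Python sketch maps a one-dimensional signal onto the virtual nodes of a single dynamical node. The tanh nonlinearity, the simplified per-node feedback standing in for the device's intrinsic relaxation, and all parameter values are illustrative assumptions rather than a model of any specific device.

```python
import numpy as np

def delay_line_states(s, n_virtual=50, feedback=0.5, scale=1.0, seed=0):
    """Time-multiplex a 1-D signal into virtual-node states of a single
    nonlinear node (delay-line RC sketch; tanh stands in for the
    physical node's nonlinear response)."""
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1, 1, n_virtual)   # fixed random input mask
    x = np.zeros(n_virtual)                # virtual-node states
    states = np.empty((len(s), n_virtual))
    for t, u in enumerate(s):
        for k in range(n_virtual):
            # each virtual node sees the masked input plus its own
            # previous (delayed) state
            x[k] = np.tanh(scale * mask[k] * u + feedback * x[k])
        states[t] = x
    return states

# A scalar sine input is expanded into a 50-dimensional state sequence,
# which a linear readout could then be trained on.
states = delay_line_states(np.sin(np.linspace(0, 8 * np.pi, 200)))
print(states.shape)
```

The expanded state matrix plays the role of the multiple simultaneous outputs that an IOD-N device would provide natively.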
Other magnetic devices do not naturally relax their state without applied stimuli; external clocking stimuli determine the timescales of these dynamically driven (DR-D) systems. By choosing the clock frequency, these systems can operate at any timescale \emph{longer} than intrinsic magnetization dynamics. Therefore, they are naturally well-suited to real-time data analysis but may be less energy efficient than DR-LLG devices. A final class of reservoirs directly exploits thermally activated magnetization dynamics to provide transitions between magnetic states (DR-T). These are interesting as they directly exploit aggregated thermal effects to increase energy efficiency, whereas, in most device proposals, thermal effects introduce stochasticity, reducing performance in computational tasks. Furthermore, as the timescales of thermal activation can be changed dramatically (down to \textasciitilde10s of nanoseconds\cite{thermalrelaxns}) by changing the size of the systems' energy barriers, it should be possible to tune these systems' dynamics to be compatible with a variety of real-time tasks. However, their stability to variations in operating temperature requires careful exploration. In the following section, we briefly review the wide range of device proposals within this framework and discuss their other potential merits and limitations. \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{Fig2-Review.pdf} \caption{Classification of magnetic reservoir proposals by Input/Output Dimensionality (IOD) and Dynamical Response (DR). IOD: IOD-1D = single dynamical node, IOD-N = multiple spatial dimensions. DR: DR-D = driven by external clock stimulus, DR-T = dynamics governed by thermal activation/relaxation, DR-LLG = dynamics governed by LLG equation. References in bold red text are experimental demonstrations; all other references are simulation-based demonstrations. 
Red arrows represent systems where the state-of-the-art is an IOD-1D demonstration, but there are clear {\it in materia} approaches available to create IOD-N reservoirs. Key: STOs = spin torque oscillators, DWO = domain wall oscillators, SkyrOsc = skyrmion oscillator, SPE = super-paramagnet ensemble, NRE = nanoring ensemble, Magnonic = magnonic reservoir, SkyTex = skyrmion texture, ASI = artificial spin ice. } \label{fig:taxonomy} \end{figure} \subsection{Nanomagnetic Oscillators} Spin torque oscillators\cite{7505988} (STOs) (IOD-1D, DR-LLG) use the same magnetic tunnel junction (MTJ) technology that forms the basis of contemporary MRAM devices\cite{kent_worledge_2015}. At the most basic level, MTJs consist of two thin ferromagnetic layers separated by a thin insulating barrier in a “spin valve” configuration. One of the ferromagnetic layers is free to change its magnetization direction (free layer). The other is “pinned” into a fixed state (pinned layer) by an adjacent antiferromagnetic layer. Passing a DC electrical current through the multilayer excites oscillation of the magnetization direction of the free layer due to spin torque effects\cite{slonczewski_1996,berger_1996}, with frequencies in the range 100s of MHz to 10s of GHz, depending on the details of the oscillator’s design and stimuli applied to it. When the free layer magnetization oscillates, it produces oscillations in the electrical resistance of the MTJ via the tunnel magnetoresistance (TMR) effect. TMR can be detected as voltage signals with amplitudes as large as 10s of mV\cite{tsunegi__2016}. The amplitude of STO oscillations varies non-linearly with current and typically decays over timescales of $\sim$100s of ns\cite{Torrejon++2017}. 
Torrejon \emph{et al.}\cite{Torrejon++2017} demonstrated RC experimentally with a single sub-micrometer STO device, using the time-multiplexed approach of Appeltant \emph{et al.}\cite{appeltant_2011} Input signals are given to the STO by modulating the amplitude of the DC driving current, with the readout being the power output of the STO. Using this approach, the authors achieved state-of-the-art performance when classifying spoken digits from the TI-46 database\cite{TI-46}. Alternative input and output approaches (e.g., frequency modulated input, phase modulated output) can also create richer reservoir transformations and improve performance in tasks\cite{Markovic++2019}. STOs have many attractive properties. Foremost among these is that MTJs are a well-established commercial technology and are fully compatible with conventional CMOS platforms, providing a clear path to the realization of devices. Furthermore, they can be scaled down substantially from the sub-micrometer dimensions studied by Torrejon \emph{et al.} to \textasciitilde10 nm, creating device designs that are both dense and energy efficient (\textasciitilde1 µW per STO). While recent demonstrations have focused on time-multiplexed RC schemes, interconnections between STOs allow them to couple to each other\cite{slavin_2009,houshang_2015}, potentially facilitating \emph{N}-dimensional reservoirs. Current approaches to neuromorphic computation with STOs have used external electrical interconnects to achieve this\cite{romera_2018}. Still, STOs can interact/synchronize via magnetic interactions \cite{houshang_2015,zahedinejad_2019}, allowing for simpler and more elegant device designs. Other types of magnetic oscillators can also be used as reservoirs. 
Ababei \emph{et al.} used simulations to show that a single magnetic domain wall (DW) oscillating within a geometrically defined potential well in a nickel nanowire can create a reservoir capable of classifying a variety of different signals\cite{ababei__2021} (IOD-1D, DR-LLG). In this approach, the DW's dynamics are dictated by device geometry and, therefore, should be highly tunable. Furthermore, DWs naturally produce monopole-like magnetic fields\cite{PhysRevB.81.020410,hayward_2010}, allowing inter-device interactions to expand reservoir dimensionality. In a similar modeling study, Jiang \emph{et al.} used the dynamics of a single magnetic skyrmion (i.e., a topologically protected "bubble" of non-uniform magnetization) within a geometrically defined potential to make an effective reservoir\cite{jiang_2019} (IOD-1D, DR-LLG). \subsection{Magnonic Systems} When driven at microwave frequencies, magnetic materials exhibit phase-coherent collective excitations known as spin waves (SWs), the quasiparticle of which is the magnon. The frequencies of SWs depend strongly on both material properties and induced magnetic anisotropies imposed by the system's geometry. The magnetic damping parameter of a material quantifies how efficiently SWs dissipate into the lattice and must be minimized by using materials such as permalloy (NiFe) or Yttrium Iron Garnet (YIG)\cite{haldar2021functional} to reduce losses. Boundaries and interfaces within a material allow for complex SW interference patterns to form, akin to reservoir work involving the pattern of water waves in a bucket\cite{fernando2003pattern}. This high degree of tunability provides a rich parameter space for useful computation. At the same time, the intrinsic spatial variation of interference effects makes spin waves an ideal phenomenon for developing IOD-N reservoirs. As these approaches directly exploit magnetization dynamics, they all have class DR-LLG. 
Papp \emph{et al.}\cite{papp2021characterization} used micromagnetic simulations to characterize the computational potential of a simulated SW reservoir based on a film of YIG (IOD-N, DR-LLG) using task-agnostic metrics. Modulating an RF excitation from a waveguide on one side of the film provides the input. The output is the time-averaged signal response at points across the system. Patterned dots of material with perpendicular magnetic anisotropy (PMA) on the surface of the YIG provided a non-uniform magnetic field, which locally altered the SW dispersion, resulting in a non-linear response. The system's response strongly depends on the regime in which the SWs are driven. For example, too high an input excitation would drive the system toward chaos. Nakane \emph{et al.} suggest that magnetoelastic effects in multiferroic systems could provide energy-efficient excitation of spin wave reservoirs \cite{Nakane++2018,Nakane++2019,Nakane++2021}. In another simulation-based study, Dale \emph{et al.} explored the limits of magnonic RC \cite{dale2021computing} by considering thin films of Co, Fe, and Ni with \textasciitilde100 nm lateral dimensions (IOD-N, DR-LLG). These were split into a regular grid of up to 900 nodes of 5~nm $\times$ 5~nm, which were excited with local magnetic fields for data input and with the local 3D magnetization state of each node providing output. SWs reflect from edges forming interference patterns that provide a complex, transient transformation of input data. For larger numbers of nodes at 0~K, the system achieves impressive task-agnostic metric scores (see section \ref{CHARC}) and an error of about 1\% for a NARMA-30 task. As expected, the introduction of temperature to the simulation drastically reduced performance. Experimental realization of an equivalent device would be highly challenging, and cooling devices to cryogenic temperatures is unlikely to be energy efficient. 
Hence, further work is required to explore device designs that are feasible to fabricate and robust to higher temperatures. Physical devices based on SWs are challenging to realize, partly because devices operate at non-zero temperatures, which can alter magnon behavior\cite{agrawal2013direct}. Watt \emph{et al.} experimentally demonstrated an SW-based system with a time-multiplexed active ring resonator approach \cite{watt2020reservoir,watt2021implementing,watt2021enhancing} (IOD-1D, DR-LLG). The system consists of two antennas on each side of a strip of YIG: one to excite SWs and the other to detect them. The amplified microwave output signal is fed back into the input antenna to shift the phase of the frequencies within the YIG. An increase in gain stabilizes the SWs up to the threshold at which chaotic behavior occurs. This time-delayed transition to a steady-state condition acts as a fading memory within the system without needing external time-delayed input\cite{watt2021implementing}. Magnonic systems provide a potential platform for fast, low-power reservoir computing. However, they require high-quality growth of insulating magnetic films such as YIG and may show the best performance at low temperatures. Further work, particularly on experimental SW-based devices, is needed to explore their potential fully. \subsection{Artificial Spin Ice Systems} Artificial spin ice (ASI) arrays consist of magnetically-bistable nanoscale islands of soft magnetic materials (e.g. permalloy) arranged into tightly spaced, periodic lattices of various geometries\cite{skjaervo_2019}. Magnetostatic fields created by the elements in these lattices mean that any given nanomagnet's free energy depends strongly on its magnetization's direction relative to its neighbors. Thus, the physics of ASIs are emergent, with complex collective behaviors deriving from simple interactions at an array's vertices. 
They provide a rich playground to explore various physical phenomena, including phase transitions, emergent magnetic monopoles, and magnetic frustration. Dynamics in these experiments are typically driven by applying external magnetic fields or directly heating the arrays. Studies have explored a wide range of geometries, including square lattices\cite{wang_2007}, kagome lattices\cite{wills_2002}, and pinwheel lattices\cite{gliga_2017}. Fully-connected ASIs can also be created where exchange interactions mediate interactions between vertices, and switching occurs by the propagation of DWs\cite{mellado_2010}. ASIs are particularly effective systems for RC. They consist of large numbers of spatially distributed elements that interact strongly with their neighbors without the need for layers of interconnects, offering a natural platform for realizing IOD-N reservoirs. Their complex and highly tunable dynamics (e.g., via their large geometric phase space) promise a wealth of non-linear transforms of input data. Their dynamics are typically "clocked" by external stimuli, making them examples of DR-D systems. Initial simulation-based studies by Jensen \emph{et al.} show that the large binary state space of ASIs can be fully exploited computationally\cite{Jensen++2018:spinice} and that even subsampled representations of the magnetic state retain substantial computational power when used as outputs\cite{Jensen++2020:spinice} (IOD-N, DR-D). Other simulation studies have demonstrated that data can be input using the configurations of individual, or small groups, of islands\cite{zhou2020reservoir,hon_2021,Nomura_2019}. These studies provide strong evidence that the large numbers of interacting, binary degrees of freedom in ASIs are a genuine asset for creating IOD-N RC platforms. There are substantial challenges to experimentally demonstrating the computational abilities of ASIs. 
While it is possible to envision ASIs constructed from dense arrays of individually addressable MTJs that would facilitate data input and output, the fabrication of such devices is beyond what is achievable in most research laboratories. Thus, alternative methods must be used to determine how the microstates of ASIs vary when subjected to complex field sequences. Gartside \emph{et al.} have used ferromagnetic resonance measurements to "fingerprint" the microstates of an ASI\cite{https://doi.org/10.48550/arxiv.2107.08941}. Their novel approach led to the first experimental demonstration of RC using an ASI to perform signal reconstruction and time series prediction tasks (IOD-N, DR-D). Globally applied magnetic fields were used to "clock" the ASI-based reservoirs. Still, such fields would likely be energy intensive for device-level implementations, and alternative clocking methods, e.g., spin or spin-orbit torque effects, will be necessary. While the potential strength of ASIs as reservoirs stems from interactions between elements, Welbourne \emph{et al.} have shown that collections of magnetic islands are capable of computation even in the non-interacting limit~\citep{Welbourne++2021}. In a simulation study, the authors used ensembles of voltage-controlled super-paramagnetic islands as time-multiplexed reservoirs, demonstrating high performance in both chaotic series prediction and spoken digit recognition tasks (IOD-1D, DR-T). Energy consumption was estimated to be \textasciitilde24 fJ per input, which makes the proposed devices attractive for edge computing applications where low power consumption is vital. However, RC systems contain multiple components beyond the reservoir material itself. Further research is needed to understand how the total power consumption is related to that of the reservoir itself. \subsection{Skyrmion and Domain Wall Ensembles} Magnetic nanostructures can support a variety of stable, non-uniform magnetization textures. 
Examples of such textures are domain walls and magnetic skyrmions that exhibit complex dynamics and strong interactions when placed in close proximity. Skyrmions are topologically-protected bubble-like magnetization textures stabilized in magnetic materials that exhibit strong Dzyaloshinskii-Moriya interactions\cite{fert_2017}. These can be found in single crystal bulk magnetic materials with non-centrosymmetric lattices (e.g., MnSi\cite{muhlbau_2009}) or in thin film systems that lack inversion symmetry (e.g., Pt/Co/Ir multilayers\cite{moreau_2016}). Skyrmions can be displaced at relatively low current densities using spin-orbit torques and produce unique electrical signatures via the topological Hall effect\cite{fert_2017}. In extended systems, skyrmion textures/fabrics can be formed; these interpolate between particle-like individual skyrmions and complex domain structures bounded by chiral domain walls. Pinna \emph{et al.} have studied the feasibility of reservoir computing with skyrmion textures using micromagnetic simulations\cite{Pinna++2020} (IOD-N, DR-LLG). These were excited using spin torque effects by passing current between two electrical contacts. The readout could be either (i) a time-multiplexed sampling of the device's anisotropic magnetoresistance (AMR) or (ii) multiple spatially-resolved samples of the textures' magnetization configurations. The authors showed that the device could classify sine and square waves within random sequences provided that the dynamics of the input signals were well-matched to those of the skyrmions' dynamics, which were in the GHz regime. However, there are a variety of hurdles still to be overcome for experimental realizations. Chief amongst these is that for temperatures above T = 100 K, thermal noise obscures the AMR signals\cite{Pinna++2020}, indicating a need for alternative readout mechanisms. 
Dawidek \emph{et al.} have proposed an alternative reservoir design that exploits stochastic interactions between domain walls in a patterned array of interconnected, micron-scale Ni$_{80}$Fe$_{20}$ rings~\citep{Dawidek++2021}. At remanence, each ring in the array typically contained two 180° DWs, which could be driven continuously around the rings' tracks by applying rotating magnetic fields\cite{negoita2012}. Stochastic interactions between DWs at the array's junctions led to both mechanisms for DWs being annihilated from the array and new DW pairs being nucleated, with the balance of these mechanisms depending strongly on the amplitude of the rotating applied field. Thus, the array exhibited a field-dependent emergent response similar to that observed in ASIs. Averaging magnetic behavior over many rings transformed the individual rings' stochastic response into a rich, non-linear, and deterministic aggregate response. Dawidek \emph{et al.} first used a range of experimental techniques to demonstrate that the ring arrays had the basic physical properties required for reservoir computing. They then used a phenomenological model of their dynamics to demonstrate the classification of digits from the TI-46 database of spoken digits via a time-multiplexed approach, with data being input to the array using the amplitude of a continuously rotating applied field\cite{negoita2012,doi:10.1063/1.4812388} (IOD-1D, DR-D). A recent study by the same team has provided an experimental demonstration of RC with an electrically contacted ring array\cite{vidamourexpt}, where AMR measurements probed the states of the rings. Interconnected ring arrays have several features that make them highly attractive as reservoirs. Like ASIs, they have numerous geometrical parameters that could tune their dynamic responses. Furthermore, as they consist of many interacting magnetic elements, they offer obvious routes to creating IOD-N reservoirs. 
However, data input by rotating magnetic fields is unlikely to be energy efficient, and alternative approaches exploiting, e.g., spin-orbit torques, will need to be explored\cite{fukami_2016}. \section{\label{sec:train}Reservoir Training Methods} In the previous section, we covered a range of nanomagnetic systems suitable for reservoir computing. Here, we discuss how to train the output layer that receives the reservoir activity to solve various tasks. We present the most popular reservoir training method, known as ridge regression, which requires accumulating all training data and training the output layer in one step. We also mention a recent technique applicable in an "online learning" setup, where the algorithm progressively adapts its parameters as new data are collected. This new technique enables the reservoir to learn tasks sequentially, which may allow its usage in lifelong learning situations. Assume that we provide the reservoir with an M-dimensional input signal $\mathbf{s}_i(t)$, where $i$ is an index on the $N_{\rm data}$ different inputs that we can give to the reservoir. Then, $\mathbf{x}_i(t)$ is an N-dimensional variable that represents measurements in the physical reservoir (or, in the traditional neural network setting, the activities of the reservoir neurons) as a response to input signal $\mathbf{s}_i(t)$. For each datapoint $i$, we wish to find common parameters ({\it weights}) $\mathbf{W}_{\rm out}$, where $\mathbf{W}_{\rm out}$ is a matrix with dimensions $\rm K \times \rm N$, so that $\mathbf{W}_{\rm out}\mathbf{x}_i(t)=\mathbf{y}_i(t)$, with $\mathbf{y}_i(t)$ being the K-dimensional desired output signal. We then construct $\mathbf{X}$ and $\tilde{\mathbf{Y}}$ matrices of dimensions $\rm N \times N_{\rm data}$ and $\rm K \times N_{\rm data}$ respectively, obtained through concatenation (across columns) of the measurements (neuron activities) and the desired outputs. We assume $\rm K=1$, meaning we have one output. 
To calculate the parameters $\mathbf{W}_{\rm out}$, we minimize the error function E of the system's output: \begin{align} \rm E=\left(\mathbf{W}_{\rm out} \mathbf{X}-\tilde{\mathbf{Y}}\right) \left(\mathbf{W}_{\rm out} \mathbf{X}-\tilde{\mathbf{Y}}\right)^{\rm T} +\beta \mathbf{W}_{\rm out}\mathbf{W}_{\rm out}^{\rm T} \label{Eq:Error_ridge} \end{align} where $\beta$ is the scaling factor of a term known as the L2 penalty, which penalizes large weights. This method is known as ridge regression and is the one most commonly used in reservoir computing applications. We can find a closed-form solution to this minimization problem by setting the gradient of $\rm E$ equal to zero: \begin{align} \mathbf{W}_{\rm out}=\tilde{\mathbf{Y}}\mathbf{X}^{\rm T} \left(\mathbf{X}\mathbf{X}^{\rm T}+\beta \mathbf{I}_{\rm N}\right)^{\rm -1} \label{Eq:Ridge} \end{align} The solution holds for $\rm K>1$ since we independently minimize every output. While attractive for its simplicity, the ridge regression algorithm is not appropriate for scenarios where the training dataset is not fixed {\it a priori} but increases over time. In particular, robotics applications, reinforcement learning, and {\it lifelong learning} scenarios require algorithms that continuously update their parameters as new data become available. Moreover, solving Eq.~\ref{Eq:Ridge} can be challenging when the matrix to be inverted is very large or rank deficient. For this reason, previous research has also adapted iterative learning algorithms to minimize a generic error function, which is not constrained to the mean-squared error. Given an arbitrary cost function $\rm E$, the output weights are optimized iteratively through gradient descent: \begin{align} \mathbf{W}_{\rm out}(n+1)=\mathbf{W}_{\rm out}(n)-\eta \nabla_{\mathbf{W}_{\rm out}} \rm E \end{align} where $n$ is the iteration number. 
Alternatively, we can use complex gradient descent methods such as RMSProp or Adam \cite{ruder2016overview, kingma2014adam}, which exploit the first- and second-order moments of the derivatives. Such iterative algorithms are known as {\it online} methods. More recently, a sparse online learning algorithm (SpaRCe) has been proposed\cite{Manneschi2021:Sparse}. SpaRCe introduces one threshold per neuron, learned by minimizing the same cost function as the output weights. SpaRCe boosts the performance of online learning in reservoirs applied to classification problems while alleviating the issue of catastrophic forgetting. The latter is a fundamental problem in machine learning; new knowledge overrides older memories when the algorithm learns tasks sequentially. Catastrophic forgetting imposes additional challenges when considering the application of machine learning in {\it lifelong learning} scenarios and is a particularly significant problem for recurrent networks. SpaRCe performs exceptionally well in cases where the reservoir measurements are highly correlated. Since this method does not affect the reservoir dynamics, it synergizes well with {\it in materia} reservoirs. Although more time-consuming than the one-step regression, it may enable functionalities that are not possible otherwise, as it improves performance over standard "online methods" in classification problems. Despite these recent advances in training methods, and while we consider reservoir computing a promising paradigm for {\it in materia} computing, we do not expect that single reservoirs will be able to compete with more complex structures in general. However, it is possible to achieve competitive performance for specific problems. 
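As a concrete illustration of both training schemes, the following Python sketch implements the closed-form solution of Eq.~\ref{Eq:Ridge} and a plain squared-error version of the iterative update above on synthetic data. The variable names, toy dimensions, and hyperparameters are our own assumptions, not taken from any particular reservoir.

```python
import numpy as np

def ridge_readout(X, Y, beta=1e-4):
    """Closed-form ridge solution W_out = Y X^T (X X^T + beta I)^{-1};
    X has shape (N, N_data), Y has shape (K, N_data)."""
    N = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + beta * np.eye(N))

def online_step(W, x, y_target, lr=0.01):
    """One gradient-descent step on E = ||W x - y||^2 for a single
    sample, i.e. W(n+1) = W(n) - lr * dE/dW."""
    err = W @ x - y_target
    return W - lr * 2.0 * np.outer(err, x)

# Toy data: synthetic "reservoir states" X and targets from a known readout.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 500))     # N = 20 measurements, 500 samples
W_true = rng.standard_normal((1, 20))  # K = 1 output
Y = W_true @ X                         # noiseless targets

W_ridge = ridge_readout(X, Y, beta=1e-6)   # one-shot batch solution

W_online = np.zeros((1, 20))               # stream samples one at a time
for _ in range(5):                         # a few passes over the stream
    for t in range(X.shape[1]):
        W_online = online_step(W_online, X[:, t], Y[:, t])

print(np.allclose(W_ridge, W_true, atol=1e-3),
      np.allclose(W_online, W_true, atol=1e-2))
```

Both routes recover the same readout on this toy problem; the online version never needs the full data matrix in memory, which is the property that matters for lifelong learning settings.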
In a comparative study\cite{Manneschi2021:Sparse} between hierarchical reservoirs and a well-established recurrent network architecture known as Long Short-Term Memory (LSTM) with the same number of learnable parameters, the reservoirs achieved better performance in the permuted sequential MNIST task. The reservoir learning rule does not need to unravel dependencies in time when finding gradients, reducing the algorithmic complexity by a factor of T compared to the LSTM, where T is the length of input signals (here 784). These advantages in terms of complexity are expected to translate to reduced energy costs. \section{Simulation tools} Many tools are available to model nanoscale magnetic systems, ranging from general-purpose, full-physics simulators to high-level, special-purpose phenomenological models. These tools are essential to developing magnetic RC platforms; experimental demonstrations require challenging device fabrication and subsequent high-throughput characterization of the devices' responses to large quantities of input data. Given these challenges and the wealth of systems and phenomena useful for RC, simulation-based approaches are attractive for scoping functionality. However, simulations of RC also have their challenges. RC requires modeled devices to receive extended streams of input stimuli over long timescales, which comes at a high computational expense. Furthermore, there is usually a trade-off between the accuracy with which the simulation approach replicates physical phenomena (e.g., magnetization dynamics, the effects of temperature, and materials defects) and its computational cost. We will briefly review the different simulation approaches used to model RC in magnetic materials and discuss where they are best applied. \subsection{General purpose physical simulators} General-purpose physical simulators are powerful modeling software packages that can model a diverse range of nanomagnetic systems. 
{\bf Atomistic Solvers}, such as \textsc{Vampire}\cite{Evans++2014:Vampire}, allow atomic scale simulation of magnetic materials. Magnetic moments are assumed to be localized to atomic sites, and their dynamics are modeled classically via the LLG equation. Modeling materials with this exquisite fidelity allows physically accurate simulations of thermal effects, defects, interfacial interactions, and non-uniform spin textures but at a very high computational cost; it is prohibitively costly to simulate devices with dimensions $>$100 nm. Consequently, atomistic models are generally poorly suited to exploring RC unless the systems in question are smaller than those we could typically study experimentally\cite{dale2021computing}. {\bf Micromagnetic solvers}, such as OOMMF\cite{OOMFuserguide}, NMag\cite{NMag2007}, and {\sc MuMax3}\cite{Vansteenkiste++2014}, model magnetization as a continuous vector field $\mathbf{M}(\mathbf{r})$, using finite difference or finite element numerical methods. Typically, a model is discretized into individual cells smaller than the exchange length (i.e., the characteristic length scale of a domain wall). Within these cells, we consider magnetization to be uniform. Cells are usually a few nanometers in size and thus represent the magnetic moments of several hundred atoms each. As in atomistic solvers, dynamics are modeled classically via the LLG equation. Thermal effects may be introduced by including a thermal noise term, resulting in a Langevin thermostat for the system\cite{Leliaert2017}. Since we assume that each cell has a fixed magnetic moment, this approach is limited to temperatures away from the Curie temperature, where we expect large fluctuations in the length of the moment. The cells in micromagnetic approaches are typically two orders of magnitude larger than those in atomistic simulations. Therefore, they are substantially less computationally expensive to run. 
Systems with lateral dimensions of order $\mu$m are easily accessible, especially when using GPU-accelerated packages such as {\sc MuMax3}. While these can be used to model RC in modestly sized systems\cite{ababei__2021,Pinna++2020}, the sheer amount of input data required for training can present computational challenges. They are also poorly suited to simulating large systems such as large ASIs or interconnected ring ensembles. Micromagnetic simulations are often best suited to validating the outputs of higher-level simulators or to training fast, machine learning-based models of system behavior\cite{chen_2022}. \subsection{Special-purpose phenomenological simulators} The limited applicability of general-purpose simulators to modeling RC stems from the many degrees of freedom they must model. However, simulators specialized to systems of a given class can describe the basic physical behaviors with substantially fewer degrees of freedom. For example, each island in a typical ASI would consist of \textasciitilde2000 cells with 2 degrees of freedom each if simulated within a micromagnetic framework. At a phenomenological level, it could be represented by a single bistable vector within an Ising model. The GPU-accelerated flatspin simulator\cite{jensen2022flatspin} takes this approach. It models the dynamics of ASIs as collections of bistable nanomagnets arranged on a lattice, approximated as point dipoles interacting through dipole-dipole coupling. With these approximations, it is possible to model systems comprising millions of islands. Model predictions were validated against experimental results and other models, and allowed simulations demonstrating the applicability of ASIs to RC at modest computational cost\cite{Jensen++2018:spinice}. RingSim\cite{Dawidek++2021,vidamour2022quantifying}, a simulator designed to predict the behaviors of interconnected nanoring ensembles, takes a similar phenomenological approach.
The simulator follows agent-based modeling principles: the active agents are domain walls (DWs) that are instanced into the model and interact stochastically with a rotating field and with other DWs situated in neighboring rings. With this model, it was possible to demonstrate the feasibility of performing RC with a system that would be entirely inaccessible using standard micromagnetic approaches\cite{Dawidek++2021,vidamour2022quantifying}. Simple phenomenological models have been used to model a range of other systems, including STOs\cite{furuta_2018}, DW oscillators\cite{ababei__2021}, and superparamagnet ensembles\cite{Welbourne++2021}. These models are similar in that they sacrifice the detail and accuracy of their descriptions of the physics to reduce computational expense. Such tradeoffs are appropriate for studies aiming to demonstrate the basic feasibility of RC with a given system as a stepping stone to experimental studies; even predictions from highly detailed atomistic or micromagnetic models are expected to show some variance from real-world devices. \section{Characterization beyond benchmark tasks}\label{CHARC} The suitability of nanomagnetic systems for RC is usually established by performing standard benchmark tasks such as time series prediction or speech recognition (for a review of some key benchmarks, see the supplementary material). Evaluating reservoirs in this way provides only limited characterization; different tasks require different computational properties. Thus, strong performance in a single task guarantees neither broader usefulness as a reservoir nor scalability to more complex problems. In principle, one may achieve a better understanding by measuring task-agnostic reservoir metrics, which characterize a reservoir's computational properties beyond specific benchmarks.
Three commonly used metrics are Kernel Rank (KR) \cite{legenstein2007edge}, Generalisation Rank (GR) \cite{legenstein2007edge}, and Linear Memory Capacity (MC) \cite{Jaeger2002,Dambre2012}. KR measures the ability of a reservoir to separate different inputs into different reservoir states. GR is the ability of a reservoir to generalize similar inputs to the same reservoir states, and MC is the amount of linear memory within the system. Other metrics have also been proposed\cite{Love_2021}, and careful research will be required to establish which groupings offer the most informative characterizations of a reservoir's computational properties. The optimal values of these metrics are highly task-dependent. For example, a system with a high GR is sensitive to noise in its inputs, whereas one with a low GR is less so. Depending on the task, this may be a desired or undesired property: a task with noisy inputs benefits from a low GR, whereas a task requiring sensitivity to fine differences in the input benefits from a high GR. Nonetheless, knowledge of these metrics can help optimize reservoir design for a specific problem. For instance, if a task requires a particular memory length, knowing which device designs provide the appropriate timescales would lead to a more efficient design process than fabricating several reservoirs and testing them on the specific task. A step in this direction is CHARC \cite{Dale++2019:PRSA}, a framework for exploring the behavior spaces of families of dynamical systems. Traditional search-based methods look for good solutions to a given problem. Instead, CHARC explores the entire behavior space to characterize how well a given set of systems (such as the nanomagnetic systems in this paper) exhibits the various dynamical properties usable for solving specific problems. CHARC defines the space of behaviors by a set of $n$ user-supplied metrics that define an $n$-dimensional \textit{behavior space}.
It then explores the input parameters to determine the range of behaviors accessible in this space. Using a range is more appropriate for characterizing a system's overall potential than optimizing the parameters for some specific behavior. CHARC uses a novelty search algorithm\cite{Lehman2008,lehman2010efficiently}, a purely explorative evolutionary algorithm, to find sets of input parameters that result in relatively uniformly distributed behaviors over the behavior space. The system is characterized by the volume of behavior space it can access. CHARC is typically applied to a 3-dimensional behavior space defined by KR, GR, and MC, but it also allows the configuration of alternative measures; the authors of CHARC make no claim that these measures are the best for mapping a behavior space \cite{Dale++2019:PRSA}. Given a sufficiently fast and accurate simulator, CHARC can be used to identify potentially compelling phenomena that can then be tested in hardware experiments. The results of these experiments can then refine the simulator, creating a closed software-improvement loop. \section{Challenges and outlook} {\bf Experimental Realizations} Thus far, most studies have explored nanomagnetic RC only in simulation. It is now critical that the most promising proposals are transferred to experimental demonstrations. The challenges here are not a lack of methods to input signals into materials or to measure their responses (these are well established) but the complexity of the proposed devices and the measurement infrastructure required for proof-of-principle experiments. The latter needs to apply and measure signal trains in substrate-compatible formats at speeds up to GHz. While these challenges are substantial, robust functionality can be demonstrated only via these experimental prototypes under real-world conditions and constraints.
While we expect a system computing using material dynamics to be inherently more efficient, such prototypes will allow an accurate measurement of energy consumption\cite{zhong2022memristor} and drive future device improvements. {\bf Scalability} Once experiments demonstrate basic functionality, it is essential to examine the scalability of proposed RC systems. For example, for simple IOD-1D, time-multiplexed implementations of RC, it will be essential to examine how computational power is enriched if these devices create IOD-N networks, either via external interconnects or via {\it in materia} interactions. One needs to explore how computational power scales as the size and complexity of systems increase. This scaling will be particularly critical when exploiting {\it in materia} interactions, as these have natural length scales beyond which individual inputs and outputs of a reservoir will not directly interact. Meta-reservoirs, i.e., systems consisting of multiple interconnected reservoirs with different computational properties, should also be explored. Such architectures are likely to have substantially greater power than their constituent parts\cite{Manneschi_2021}. In all of these cases, simulations will be an essential tool for exploration, allowing evaluation of the ultimate computational potential of a material system by ignoring, in the first instance, the physical confines of interfacing. {\bf Algorithms} The simplicity of the training algorithms RC uses is another critical element of its popularity in the spintronics community. However, this simplicity also has drawbacks; training RC online with the simplest algorithms was challenging until recent methods\cite{Manneschi2021:Sparse} improved performance through a modest, efficiently managed increase in algorithmic complexity: a small price to pay for improved learning speed and resilience to catastrophic forgetting.
Similarly, to achieve {\it Scalability}, we need to optimize the interconnectivity between the reservoirs or their timescales\cite{manneschi2021exploiting}. Typically, however, techniques for finding appropriate parameters require precise mathematical models of the reservoir, and for spintronic devices such models may not always be available. Techniques that allow for automated tuning of the parameters of mathematically agnostic reservoirs will be transformative. {\bf Evaluation} Task-agnostic metrics offer a powerful platform for understanding the computational properties of potential reservoirs. With the wealth of nanomagnetic systems available for this purpose, careful evaluation of these metrics will be essential for understanding their relative strengths and weaknesses. We do not believe such evaluation will reveal a single system as inherently superior; a wealth of factors must be considered, including power consumption, operating speed, and production cost. More likely, a thorough evaluation of device concepts will reveal which applications they would best suit, whether in low-power edge-computing systems or high-throughput data co-processors, and how nanomagnetic RC systems compare to other competitor technologies. In all cases, it will be essential to recognize the heterotic nature of RC, i.e., conventional electronic systems must augment the reservoir to create input and output layers, each with their own constraints and overheads. {\bf Applications} So far, reservoir-based spintronic devices have solved simple benchmark problems. While this is inevitable at the early stages of research, such toy problems serve only as proofs of concept. They are insufficient for evaluating the reservoirs and for attracting more general interest in the technology. Identifying more challenging tasks within application areas where spintronic devices may be transformative is necessary.
At this stage, it is hard to imagine that spintronic-based RC will serve in general-purpose devices; we expect that there are particular niche areas for which it may be suited. For instance, in the context of edge computing, a promising direction may be that of \emph{smart sensors}, where we would like to offload low-energy preprocessing onto the chip. More generally, RC may also boost existing methods where additional memory is helpful, while adding only a small overhead. For instance, in robotics, the advantages of augmenting existing architectures with a reservoir have been demonstrated in the problem of visual place recognition\cite{ozdemir2022echovpr}. Here, interfacing spintronics technology with other hardware may be crucial for the further development of the devices. \section*{Supplementary Material} The supplementary material describes the Echo State Network, a fundamental neural network reservoir model, and some typical benchmarks used in reservoir computing. \begin{acknowledgments} DAA, TJH, LM, CS, ITV, EV acknowledge funding from the EPSRC MARCH project EP/V006339/1. DG, MFKHM, SOK, SS, MAT acknowledge funding from the EPSRC MARCH project EP/V006029/1; SOK, SS, MAT also acknowledge partial funding from the EPSRC SpInspired project EP/R032823/1. DAA, TJH, MOAE and EV also acknowledge funding from the EPSRC project EP/S009647/1. DAA, TJH, GV acknowledge Horizon 2020 FET-Open SpinEngine (Agreement no 861618). ITV acknowledges a DTA-funded Ph.D. studentship from EPSRC. CW acknowledges doctoral funding from the Department of Computer Science, University of York. \end{acknowledgments}
\section{Introduction} Single image super-resolution (SISR) aims at reconstructing an accurate high-resolution (HR) image from a low-resolution (LR) image. Since deep learning made big progress in computer vision, many SISR algorithms based on deep Convolutional Neural Networks (CNNs) have been proposed in recent years. The powerful feature representation and end-to-end training of CNNs have produced a huge breakthrough in SISR. Dong \etal~\cite{dong2014learning} first proposed SRCNN, introducing a three-layer CNN for image SR. Kim \etal increased the number of layers to 20 in VDSR~\cite{kim2016accurate} and DRCN~\cite{kim2016deeply}, making notable improvements over SRCNN. Generally, the deeper the network is, the more powerful the representation it has. However, as network depth grows, vanishing and exploding gradients become the main problems hindering performance. This problem was alleviated when He \etal~\cite{he2016deep} proposed the residual network (ResNet) and Huang \etal~\cite{Huang_2017_CVPR} proposed the densely connected network (DenseNet). Many large-scale networks were then introduced in SISR, such as SRResNet~\cite{ledig2017photo}, EDSR~\cite{lim2017enhanced}, SRDenseNet~\cite{tong2017image}, RDN~\cite{zhang2018residual}, etc. These methods aim at building a deeper network to increase performance. Other methods such as RCAN~\cite{zhang2018image} and SAN~\cite{dai2019second} try to learn the correlations of the features in the middle layers. WDSR~\cite{yu2018wide} allows for better network performance with less computational effort. AWSRN~\cite{wang2019lightweight} applies an adaptively weighted network: weight adaptation is achieved by multiplying the residual convolution branch and the residual skip connection by trainable coefficients.
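The adaptive weighting idea can be sketched as a residual block whose convolutional branch and skip path are each scaled by a trainable scalar. The NumPy sketch below is a single-channel illustration with hypothetical names and values, not AWSRN's actual implementation:

```python
import numpy as np

def conv3x3(x, w):
    """Minimal 'same'-padded 3x3 cross-correlation over one 2-D feature map."""
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += w[i, j] * xp[i:i + H, j:j + W]
    return out

def adaptive_residual_block(x, w1, w2, lam_res, lam_skip):
    """y = lam_res * F(x) + lam_skip * x, with lam_* as trainable scalars."""
    f = conv3x3(np.maximum(conv3x3(x, w1), 0), w2)  # conv-ReLU-conv branch
    return lam_res * f + lam_skip * x

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))
w1, w2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
y = adaptive_residual_block(x, w1, w2, lam_res=0.5, lam_skip=1.0)
```

In a full network both scalars would be learned jointly with the convolution weights, letting each block rebalance its residual branch against its skip path.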
Since the performance of dense connections is better than that of residual connections~\cite{lim2017enhanced}~\cite{zhang2018residual}, we develop an adaptive dense connection method to enhance the efficiency of feature learning. There is a similar global SKIP, a single sub-pixel convolution, in WDSR~\cite{yu2018wide} and AWSRN~\cite{wang2019lightweight}. Although the SKIP is intended to recover low frequencies, no practical measure constrains its training. We present an adaptive densely connected super-resolution reconstruction algorithm (ADCSR). The algorithm is divided into two parts: BODY and SKIP. By pre-training the SKIP, the BODY is able to focus on reconstructing high-frequency information. ADCSR obtains state-of-the-art SISR performance under the bicubic degradation setting. Our main contributions are threefold: (1) WDSR~\cite{yu2018wide} is optimized using adaptive dense connections; experiments were carried out by initializing the adaptive parameters and optimizing the models, and the performance of the network is greatly improved as a result. (2) We propose the AFSL module to perform image SR through adaptive sub-pixel convolution. (3) We develop a method that pre-trains SKIP first and then trains the entire network jointly, so that the BODY focuses on the reconstruction of high-frequency details to improve network performance. \section{Related Works} SISR has important applications in many fields, such as security and surveillance imaging~\cite{zou2011very}, medical imaging~\cite{shi2013cardiac}, and image generation~\cite{karras2017progressive}. The simplest methods are interpolation-based, such as linear interpolation, bicubic interpolation, and so on. These methods estimate each missing pixel of the HR image as a weighted average of known pixel points in the LR image. Interpolation works well in the smooth parts of an image, but it works poorly in edge regions, causing ringing and blurring.
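The idea behind such interpolation can be sketched with bilinear (rather than bicubic, for brevity) upsampling, where each HR pixel is a distance-weighted average of its known LR neighbours:

```python
import numpy as np

def bilinear_upscale(img, r):
    """Bilinear interpolation: each missing HR pixel is a distance-weighted
    average of the four nearest known LR pixels."""
    H, W = img.shape
    ys = np.linspace(0, H - 1, H * r)       # HR sample positions in LR coords
    xs = np.linspace(0, W - 1, W * r)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]   # fractional distances
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)]
            + (1 - wy) * wx * img[np.ix_(y0, x1)]
            + wy * (1 - wx) * img[np.ix_(y1, x0)]
            + wy * wx * img[np.ix_(y1, x1)])
```

Because the output is always a convex combination of known pixels, no new high-frequency detail can be created, which is exactly why interpolation blurs edges.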
Additionally, learning-based and reconstruction-based methods are more complex, such as sparse coding~\cite{yang2010image}, neighborhood embedded regression~\cite{chang2004super}~\cite{timofte2013anchored}, random forests~\cite{schulter2015fast}, etc. Dong \etal first proposed a Convolutional Neural Network (CNN)-based super-resolution reconstruction network (SRCNN)~\cite{dong2014learning}, whose performance was better than the most advanced algorithms at the time. Later, Shi \etal proposed a sub-pixel convolution super-resolution reconstruction network~\cite{shi2016real}. The network contains several convolutional layers to learn LR image features, and reconstruction is performed using the proposed sub-pixel convolutional layer, so the image can be reconstructed directly from the convolutional features of the deep network. Lim \etal proposed an enhanced deep residual network (EDSR)~\cite{lim2017enhanced}, which achieved a significant performance gain through a deeper network. Other deep networks, like RDN~\cite{zhang2018residual} and MemNet~\cite{tai2017memnet}, are based on dense blocks. Some networks focus on feature correlations in the channel dimension, such as RCAN~\cite{zhang2018image} and SAN~\cite{dai2019second}. The WDSR~\cite{yu2018wide} proposed by Yu \etal draws two conclusions. First, when the parameters and calculations are the same, the model with more features before the activation function has better performance. Second, weight normalization (WN layer) can improve the accuracy of the network. In WDSR, there is a wider channel before the activation function of each residual block. Wang \etal proposed an adaptively weighted super-resolution network (AWSRN) based on WDSR~\cite{yu2018wide}. It designs a local fusion block for more efficient residual learning. Besides, an adaptively weighted multi-scale module is developed for reconstructing features, and it has superior performance among methods with roughly equal parameter counts. Cao \etal
proposed an improved deep residual network (IDRN)~\cite{cao2019fast}. It makes simple and effective modifications to the structure of residual blocks and skip-connections, and a new energy-aware training loss, EA-Loss, was proposed. It employs lightweight networks to achieve fast and accurate results. The SR feedback network (SRFBN)~\cite{li2019feedback} proposed by Li \etal applies an RNN with constraints to process feedback information and performs feature reuse. The deep plug-and-play SR network (DPSR)~\cite{zhang2019deep} proposed by Zhang \etal can process LR images with arbitrary blur kernels. Zhang \etal~\cite{zhang2019zoom} obtained real sensor data by optical zoom for model training. Xu \etal~\cite{xu2019towards} generated training data by simulating the digital camera imaging process. Their experiments have shown that SR using raw data helps to restore fine detail and clear structure. \begin{figure*}[htp] \centering \includegraphics[width=1\linewidth]{moedl} \caption{The architecture of our proposed Adaptive Densely Connected Super-Resolution Reconstruction network (ADCSR). The top is the ADCSR, the middle shows the ADRU and the AFSL, and the bottom shows the ADRB. } \label{fig:moedl} \end{figure*} \section{Our Model} \subsection{Network Architecture} As shown in Figure \ref{fig:moedl}, our ADCSR mainly consists of two parts: SKIP and BODY. The SKIP uses only sub-pixel convolution~\cite{shi2016real}. The BODY includes multiple ADRUs (adaptive dense residual units), a GFF (global feature fusion) layer~\cite{zhang2018residual}, and an AFSL (adaptive feature sub-pixel reconstruction) layer. The model takes the RGB patches from the LR image as input. On the one hand, the HR image is reconstructed by SKIP using the low-frequency information of the LR image. On the other hand, the image is reconstructed by BODY using the high-frequency information of the LR image. We obtain the final complete reconstructed HR image by combining the results of SKIP and BODY.
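Both SKIP and the AFSL build on sub-pixel convolution. The underlying pixel-shuffle rearrangement can be sketched in NumPy as follows (channel-first layout assumed; in practice a convolution producing $C r^2$ channels precedes this step):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) feature map into a (C, H*r, W*r) image,
    i.e. the sub-pixel upsampling step of a sub-pixel convolution."""
    C2, H, W = x.shape
    C = C2 // (r * r)
    x = x.reshape(C, r, r, H, W)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(C, H * r, W * r)

x = np.arange(16).reshape(4, 2, 2)   # 4 channels of a 2x2 feature map
hr = pixel_shuffle(x, 2)             # -> shape (1, 4, 4)
```

Each output pixel at position $(rh+i, rw+j)$ is taken from channel $i\,r+j$ at LR position $(h, w)$, so upsampling costs only a memory rearrangement.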
SKIP consists of a single or multiple sub-pixel convolutions with a convolution kernel size of 5. We have: \begin{equation} HR_{SKIP}=f_{sub-conv5}(I_{LR})\label{equo1} \end{equation} where $HR_{SKIP}$ represents the output of the SKIP part, $I_{LR}$ denotes the input LR image, and $f_{sub-conv5}$ represents the sub-pixel convolution whose kernel size is 5. In the BODY, first, we use a convolution layer to extract the shallow features from the LR image: \begin{equation} F_{f}=f_{conv3}(I_{LR})\label{equo2} \end{equation} where $f_{conv3}$ represents the feature extraction convolution, whose kernel size is 3. Second, we use several ADRUs to extract the deep features. There are four ADRBs (adaptive dense residual blocks) connected through adaptive dense connections in each ADRU. The features are merged by the LFF (local feature fusion) layer and combined with a skip connection as the output of the ADRU. Each ADRB combines four convolution units using the same adaptive dense connection structure as the ADRU. The convolution units adopt a structure similar to that of WDSR~\cite{yu2018wide}, including two wide-activation convolution layers and one LeakyReLU layer. After that, we fuse features by the LFF, which, combined with a skip connection, forms the output of the ADRB. GFF fuses the outputs of multiple ADRUs by means of concatenation and convolution. \begin{equation} X_{ADRUk+1}=b_{k}X_{ADRUk}+a_{k}f_{ADRUk}(X_{ADRUk})\label{equo3} \end{equation} where $X_{ADRUk}$ denotes the input feature map of the $k$th ADRU, $f_{ADRUk}$ means the function of the $k$th ADRU, and $a_{k}$, $b_{k}$ are adaptive trainable weights. \begin{equation} \begin{split} Y_{ADRUk}=&a_{k}f_{ADRUk}(X_{ADRUk})\\ Y_{ADRUlast}=&b_{last}X_{ADRUlast}+\\ &a_{last}f_{ADRUlast}(X_{ADRUlast}) \end{split} \label{equo4} \end{equation} where $Y_{ADRUk}$ means the output of the $k$th ADRU, and $Y_{ADRUlast}$ represents the output of the last ADRU, which includes the skip connection.
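The recurrence of \eqref{equo3} can be sketched with scalar adaptive weights and a placeholder unit transform; everything below (the transform, the weight values, the map size) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_adru(x, w):
    """Toy stand-in for an ADRU; any feature-map transform would do here."""
    return np.tanh(w * x)

# X_{k+1} = b_k * X_k + a_k * f_k(X_k), with a_k, b_k trainable scalars
# (the values below are illustrative initializations, not learned values).
a = [0.9, 0.8, 0.7, 0.6]
b = [1.0, 1.0, 1.0, 1.0]
w = rng.normal(size=4)

x = rng.normal(size=(16, 16))
outputs = []                      # Y_k, later concatenated by the GFF
for k in range(4):
    y = a[k] * f_adru(x, w[k])
    outputs.append(y)
    x = b[k] * x + y              # input to the next ADRU
```

In the real network $a_k$ and $b_k$ are learned jointly with the convolution weights, so each unit can decide how much of its transform versus its input to pass on.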
The third part of BODY uses the GFF to combine the outputs of all ADRUs, fusing features with two convolution layers: \begin{equation} \begin{split} F_{GFF}=f_{conv3}(f_{conv1}(&concat(Y_{ADRU1},\\ &...,Y_{ADRUn})))\label{equo5} \end{split} \end{equation} where $concat$ means feature concatenation. Finally, the image is upsampled via the AFSL. The AFSL consists of four sub-pixel convolution branches of different scales with convolution kernel sizes of 3, 5, 7, and 9, respectively. The output is obtained through the concatenation layer and a single-layer convolution. \begin{equation} \begin{split} F_{AFSL}=f_{conv1}(&concat(f_{sub-conv3}(F_{GFF}),\\ &f_{sub-conv5}(F_{GFF}),\\ &f_{sub-conv7}(F_{GFF}),\\ &f_{sub-conv9}(F_{GFF})))\label{equo6} \end{split} \end{equation} In the second stage of BODY, the feature amplification layer is also implemented by a single convolution layer. The whole BODY is: \begin{equation} \begin{split} HR_{BODY}=F_{AFSL}((F_{GFF}(F_{f}(I_{LR}))))\label{equo7} \end{split} \end{equation} where $HR_{BODY}$ represents the output of BODY. The whole network can be expressed by formula \eqref{equo8}: \begin{equation} HR=HR_{BODY}+HR_{SKIP}\label{equo8} \end{equation} \subsection{ADRB and ADRU} We demonstrate the superiority of the adaptive dense connection structure in Section 4. To use as much adaptive residual structure as possible, we split the ADRU into ADRBs using adaptive dense connections and split each ADRB into densely connected convolution units. At the same time, to get better results with a smaller parameter count, we use the residual block of WDSR as our convolution unit. As shown in Figure \ref{fig:moedl}, the ADRB and the ADRU have a similar connection structure. The ADRB contains four convolution units, each of which can be represented by equation \eqref{equo9}: \begin{equation} f_{conv\_unit}=f_{2conv3}(LeakyRelu(f_{1conv3}(x)))\label{equo9} \end{equation} where $x$ means the input of the convolution unit.
The kernel size of $f_{1conv3}$ is $[3,3,feats,3\times feats]$, and that of $f_{2conv3}$ is $[3,3,3\times feats,feats]$, where $feats$ is the number of input channels of the convolution unit. The whole ADRB can be expressed by equation \eqref{equo10}. \begin{equation}\label{equo10} \begin{split} &Y_{1}=f_{conv\_unit1}(x)\\ &X_{1}=a_{01}Y_{1}+b_{01}x\\ &Y_{2}=f_{conv\_unit2}(X_{1})\\ &X_{2}=a_{12}Y_{2}+b_{12}Y_{1}+b_{02}x\\ &Y_{3}=f_{conv\_unit3}(X_{2})\\ &X_{3}=a_{23}Y_{3}+b_{23}Y_{2}+b_{13}Y_{1}+b_{03}x\\ &Y_{4}=f_{conv\_unit4}(X_{3})\\ &\begin{split} f_{ADRB}=f_{conv1}(&concat(a_{34}Y_{4},b_{34}Y_{3},\\ &b_{24}Y_{2},b_{14}Y_{1},b_{04}x))+x \end{split} \end{split} \end{equation} where $f_{conv\_unit1}$ denotes the first convolution unit and $x$ denotes the input of the ADRB. $a_{mn}$, $b_{mn}$ are adaptive trainable weights, $X_i$ denotes the input of the $(i+1)$th convolution unit, and $Y_{j}$ represents the output of the $j$th convolution unit. The whole ADRU can be formulated by equation \eqref{equo11}. \begin{equation}\label{equo11} \begin{split} &Y_{1}=f_{ADRB1}(x)\\ &X_{1}=a_{01}Y_{1}+b_{01}x\\ &Y_{2}=f_{ADRB2}(X_{1})\\ &X_{2}=a_{12}Y_{2}+b_{12}Y_{1}+b_{02}x\\ &Y_{3}=f_{ADRB3}(X_{2})\\ &X_{3}=a_{23}Y_{3}+b_{23}Y_{2}+b_{13}Y_{1}+b_{03}x\\ &Y_{4}=f_{ADRB4}(X_{3})\\ &\begin{split} f_{ADRU}=f_{conv1}(&concat(a_{34}Y_{4},b_{34}Y_{3},\\ &b_{24}Y_{2},b_{14}Y_{1},b_{04}x))+x \end{split} \end{split} \end{equation} \subsection{Implementation} In this section, we give specific implementation details. In SKIP, the convolution kernel size of the sub-pixel convolutional layer is 5. The convolution kernel size of the LFF in BODY is 1. The two convolution kernel sizes of the GFF are 1 and 3, respectively. In the AFSL, the convolution kernel sizes are 3, 5, 7, and 9. All other convolution kernel sizes are set to 3. There are 4 ADRUs in BODY.
The number of output channels in the feature extraction layer, the convolution units, the LFF, and the GFF is 128, while the 4 sub-pixel convolutions and the final output layer in the AFSL have 3 output channels. The stride is 1 throughout the network, and LeakyReLU is used as the activation function. \section{Experiments} \subsection{Adaptive dense connections} We propose a structure for adaptive dense connections, the ADRU, and verify its performance through experiments. In the experiment, we designed three models. The model parameters are the same, and the calculations are roughly equal. The structure of the models is similar to an ADCSR consisting of a single ADRU. These three models are: \\ a. Add LFF~\cite{zhang2018residual} on WDSR~\cite{yu2018wide} (to obtain the same model depth);\\ b. Add a dense connection based on a;\\ c. Add parameter adaptation based on b.\\ The three models have the same training parameters. We train our models with the dataset DIV2K~\cite{lim2017enhanced}. We also compare the performance on the standard benchmark dataset B100~\cite{martin2001database}. The number of epochs is 200. The learning rate is $1\times10^{-4}$ and is halved every 100 epochs. As shown in Figure \ref{fig:adc}, networks with dense connections and parameter adaptation have the highest performance under the same conditions. \begin{figure}[htp] \centering \includegraphics[scale=0.3]{ADC.png} \caption{Convergence analysis of tests on B100 with scaling factor $\times2$ for the different model structures} \label{fig:adc} \end{figure} \subsection{Adaptive feature sub-pixel reconstruction layer (AFSL)} We test the reconstruction layer in BODY. We have designed a new reconstruction module, the AFSL. To verify the performance of the module, we designed a straightforward model for comparison experiments. The model only includes the feature extraction layer and the reconstruction layer.
As shown in Figure \ref{fig:afsl}, the reconstruction layers compared are sub-pixel convolution~\cite{shi2016real}, AWMS~\cite{wang2019lightweight}, and AFSL. We performed the task on scale $\times2$. The feature extraction layers and experimental parameters of the models are the same. We tested the models on B100~\cite{martin2001database} and Urban100~\cite{huang2015single}. At the same time, we also analyzed the differences in the number of FLOPs and model parameters. The results are shown in Table \ref{tab:tafsl}. We can see that AWMS and AFSL require more calculations and parameters than sub-pixel convolution, but their performance is better. With the same settings and computational cost, the performance of AFSL is slightly better than that of AWMS. \begin{figure}[htp] \centering \includegraphics[scale=0.3]{AFSL.png} \caption{Test model and structural comparison of the three reconstruction layers} \label{fig:afsl} \end{figure} \begin{table} \centering \caption{Performance comparison of the three reconstruction layers} \begin{tabular}[htp]{ccccc} \hline &B100&Urban100&FLOPs&Params\\ \hline Sub-conv&30.402&27.750&0.02G&9K\\ AWMS&30.590&27.956&0.30G&128K\\ AFSL&30.592&27.958&0.30G&128K\\\hline \end{tabular} \label{tab:tafsl} \end{table} \subsection{Pre-training SKIP} We have explored a training method that pre-trains SKIP separately before training the entire model. This training method makes SKIP focus on the reconstruction of low-frequency information, while BODY focuses on high-frequency information reconstruction. We employ the same model, that is, an ADCSR containing a single ADRU, with the same training parameters, but train it in different ways:\\ a. Train the entire network directly;\\ b. First pre-train SKIP, then train the whole network jointly;\\ c. First pre-train SKIP, then set SKIP to be untrainable when training the entire network.
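The intuition behind option b can be illustrated with a 1-D toy analogue (all signals and bases below are illustrative): a smooth ``SKIP'' model is fitted first, so the ``BODY'' only has to fit the high-frequency residual.

```python
import numpy as np

t = np.linspace(0, 1, 200)
# Toy target: a smooth component plus a small high-frequency component.
target = np.sin(2 * np.pi * t) + 0.1 * np.sin(40 * np.pi * t)

# Stage 1: pre-train "SKIP" alone (here: a least-squares fit of a smooth basis).
skip_basis = np.stack([np.ones_like(t), t, np.sin(2 * np.pi * t)], axis=1)
w_skip, *_ = np.linalg.lstsq(skip_basis, target, rcond=None)
skip_out = skip_basis @ w_skip

# Stage 2: "BODY" is fitted on what SKIP cannot explain, i.e. the residual.
residual = target - skip_out
body_basis = np.sin(40 * np.pi * t)[:, None]   # high-frequency features
w_body, *_ = np.linalg.lstsq(body_basis, residual, rcond=None)
body_out = body_basis @ w_body

reconstruction = skip_out + body_out
```

Because stage 1 already captures the smooth content, stage 2 spends its capacity entirely on the residual detail; this mirrors how pre-training SKIP pushes the BODY toward high frequencies.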
Figure \ref{fig:skipstop1} compares the images and image spectra of the SKIP and BODY outputs for models a and b. By comparing the output images, it can be seen that the BODY of the pre-trained-SKIP model focuses on learning the texture and edge details of the image. From the comparison of the output spectra of the BODY part, the spectrogram of the pre-trained-SKIP model is darker near the center and brighter around the edges. This shows that the proposed method makes the BODY use more high-frequency information and less low-frequency information. \begin{figure}[htp] \centering \includegraphics[scale=0.18]{SKIPSTOP.png} \caption{Results of pre-training SKIP on the SKIP output} \label{fig:skipstop1} \end{figure} Figure \ref{fig:skipstop2} is a comparison of the test curves of the model on B100 under the different training methods. We found that the networks whose SKIP was pre-trained achieved higher performance, and the performance of methods b and c is similar. \begin{figure}[htp] \centering \includegraphics[scale=0.3]{SKIPSTOP2.png} \caption{Convergence analysis of tests on B100 with scaling factor $\times2$ for the different training methods} \label{fig:skipstop2} \end{figure} \subsection{Training settings} We train our network with the DIV2K and Flickr2K datasets~\cite{lim2017enhanced}. The training set has a total of 3,450 images, without data augmentation. DIV2K is composed of 800 images for training and 100 images each for testing and validation. Flickr2K has 2,650 training images. The input patch size is $48\times48$. SKIP is pre-trained separately, and then the entire network is trained jointly. The initial learning rate is $1\times10^{-4}$. When the learning rate drops to $5\times10^{-7}$, the training stops. We adopt the L1 loss to optimize our model. We train the network of scale $\times2$ first. Subsequently, when training the networks of scale $\times3$ and $\times4$, the BODY parameters of the scale-$\times2$ model are loaded (excluding the parameters of the AFSL).
We train the model on an NVIDIA RTX 2080Ti GPU. PyTorch 1.1.0, CUDA 10.0, and cuDNN 7.5.0 are used as the deep learning environment. \begin{table*}[htp] \begin{center} \caption{Quantitative evaluation of competing methods. We report the performance of state-of-the-art algorithms on widely used publicly available datasets, in terms of PSNR (in dB) and SSIM. The best results are highlighted with {\color{red}red} color while the {\color{blue}blue} color represents the second-best result.} \label{tab:qecm} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{1.8cm}{method} & \multirow{2}{*}{scale} &\multicolumn{2}{|c|}{Set5~\cite{bevilacqua2012low}}&\multicolumn{2}{|c|}{Set14~\cite{yang2010image}}&\multicolumn{2}{|c|}{B100~\cite{martin2001database}}&\multicolumn{2}{|c|}{Urban100~\cite{huang2015single}} &\multicolumn{2}{|c|}{manga109~\cite{matsui2017sketch}} \\ \cline{3-12} & &PSNR &SSIM &PSNR &SSIM &PSNR &SSIM &PSNR &SSIM &PSNR &SSIM \\ \hline\hline \multirow{11}{1.8cm}{Bicubic\newline SRCNN~\cite{dong2014learning}\newline VDSR~\cite{kim2016accurate}\newline LapSRN~\cite{lai2017deep}\newline MemNet~\cite{tai2017memnet}\newline EDSR~\cite{lim2017enhanced}\newline RDN~\cite{zhang2018residual}\newline RCAN~\cite{zhang2018image}\newline SAN~\cite{dai2019second}\newline ADCSR\newline ADCSR+}&\multirow{11}{*}{$\times2$}&33.66&0.9299&30.24&0.8688&29.56&0.8431&26.88&0.8403&30.80&0.9299 \\ & &36.33&0.9542&32.45&0.9067&31.36&0.8879&29.50&0.8946&35.60&0.9663\\ & &37.53&0.9590&33.05&0.9130&31.90&0.8960&30.77&0.9140&37.22&0.9750\\ & &37.52&0.9591&33.08&0.9130&31.08&0.8950&30.41&0.9101&37.27&0.9740\\ & &37.78&0.9597&33.28&0.9142&32.08&0.8978&31.31&0.9195&37.72&0.9740\\ & &38.11&0.9602&33.92&0.9195&32.32&0.9013&32.93&0.9351&39.10&0.9773\\ & &38.24&0.9614&34.01&0.9212&32.34&0.9017&32.89&0.9353&39.18&0.9780\\ & &38.27&0.9614&34.12&0.9216&32.41&0.9027&33.34&0.9384&39.44&0.9786\\ & &38.31&{\color{red}0.9620}&34.07&0.9213&32.42&0.9028&33.10&0.9370&39.32&0.9792\\
&&{\color{blue}38.33}&{\color{blue}0.9619}&{\color{blue}34.48}&{\color{blue}0.9250}&{\color{blue}32.47}&{\color{blue}0.9033}&{\color{blue}33.61}&{\color{blue}0.9410}&{\color{blue}39.84}&{\color{blue}0.9798}\\ &&{\color{red}38.38}&{\color{red}0.9620}&{\color{red}34.52}&{\color{red}0.9252}&{\color{red}32.50}&{\color{red}0.9036}&{\color{red}33.75}&{\color{red}0.9418}&{\color{red}39.97}&{\color{red}0.9800}\\ \hline\hline \multirow{11}{1.8cm}{Bicubic\newline SRCNN~\cite{dong2014learning}\newline VDSR~\cite{kim2016accurate}\newline LapSRN~\cite{lai2017deep}\newline MemNet~\cite{tai2017memnet}\newline EDSR~\cite{lim2017enhanced}\newline RDN~\cite{zhang2018residual}\newline RCAN~\cite{zhang2018image}\newline SAN~\cite{dai2019second}\newline ADCSR\newline ADCSR+}&\multirow{11}{*}{$\times3$}&30.39&0.8682&27.55&0.7742&27.21&0.7385&24.46&0.7349&26.95&0.8556 \\ & &32.75&0.9090&29.30&0.8215&28.41&0.7863&26.24&0.7989&30.48&0.9117\\ & &33.67&0.9210&29.78&0.8320&28.83&0.7990&27.14&0.8290&32.01&0.9340\\ & &33.82&0.9227&29.87&0.8320&28.82&0.7980&27.07&0.8280&32.21&0.9350\\ & &34.09&0.9248&30.00&0.8350&28.96&0.8001&27.56&0.8376&32.51&0.9369\\ & &34.65&0.9280&30.52&0.8462&29.25&0.8093&28.80&0.8653&34.17&0.9403\\ & &34.71&0.9296&30.57&0.8468&29.26&0.8093&28.80&0.8653&34.13&0.9484\\ & &34.74&0.9255&30.65&0.8482&29.32&0.8111&29.09&0.8702&34.44&0.9499\\ & &34.75&0.9300&30.59&0.8476&29.33&0.8112&28.93&0.8671&34.30&0.9494\\ &&{\color{blue}34.86}&{\color{blue}0.9305}&{\color{blue}30.81}&{\color{blue}0.8505}&{\color{blue}29.40}&{\color{blue}0.8127}&{\color{blue}29.44}&{\color{blue}0.8767}&{\color{blue}34.95}&{\color{blue}0.9521}\\ &&{\color{red}34.93}&{\color{red}0.9310}&{\color{red}30.88}&{\color{red}0.8514}&{\color{red}29.43}&{\color{red}0.8133}&{\color{red}29.57}&{\color{red}0.8784}&{\color{red}35.11}&{\color{red}0.9528}\\ \hline\hline \multirow{11}{1.8cm}{Bicubic\newline SRCNN~\cite{dong2014learning}\newline VDSR~\cite{kim2016accurate}\newline LapSRN~\cite{lai2017deep}\newline 
MemNet~\cite{tai2017memnet}\newline EDSR~\cite{lim2017enhanced}\newline RDN~\cite{zhang2018residual}\newline RCAN~\cite{zhang2018image}\newline SAN~\cite{dai2019second}\newline ADCSR\newline ADCSR+}&\multirow{11}{*}{$\times4$}&28.42&0.8104&26.00&0.7027&25.96&0.6675&23.14&0.6577&24.89&0.7866 \\ & &30.45&0.8628&27.50&0.7513&26.90&0.7101&24.52&0.7221&27.58&0.8555\\ & &31.35&0.8830&28.02&0.7680&27.29&0.7251&25.18&0.7540&28.83&0.8870\\ & &31.54&0.8850&28.19&0.7720&27.32&0.7270&25.21&0.7560&29.09&0.8900\\ & &31.74&0.8893&28.26&0.7723&27.40&0.7281&25.50&0.7630&29.42&0.8942\\ & &32.46&0.8968&28.80&0.7876&27.71&0.7420&26.64&0.8033&31.02&0.9148\\ & &32.47&0.8990&28.81&0.7871&27.72&0.7419&26.61&0.8028&31.00&0.9173\\ & &32.63&0.9002&28.87&0.7889&27.77&0.7436&26.82&0.8087&30.40&0.9082\\ & &32.64&0.9003&28.92&0.7888&27.78&0.7436&26.79&0.8068&31.18&0.9169\\ &&{\color{blue}32.77}&{\color{blue}0.9013}&{\color{blue}29.02}&{\color{blue}0.7917}&{\color{blue}27.86}&{\color{blue}0.7457}&{\color{blue}27.15}&{\color{blue}0.8174}&{\color{blue}31.76}&{\color{blue}0.9212}\\ &&{\color{red}32.82}&{\color{red}0.9020}&{\color{red}29.09}&{\color{red}0.7930}&{\color{red}27.90}&{\color{red}0.7466}&{\color{red}27.27}&{\color{red}0.8197}&{\color{red}31.98}&{\color{red}0.9232}\\ \hline \end{tabular} \end{center} \end{table*} \begin{figure*}[htp] \centering \includegraphics[scale=0.5]{X4.png} \caption{Visual results with bicubic degradation model ($\times4$) on Urban100} \label{fig:cmpic} \end{figure*} \begin{figure*}[htp] \centering \includegraphics[scale=0.35]{dssr.png} \caption{Two-stage adaptive dense connection super-resolution reconstruction network (DSSR)} \label{fig:dssr} \end{figure*} \subsection{Results with Bicubic Degradation} In order to verify the validity of the model, we compare the performance on five standard benchmark datasets: Set5~\cite{bevilacqua2012low}, Set14~\cite{yang2010image}, B100~\cite{martin2001database}, Urban100~\cite{huang2015single}, and
manga109~\cite{matsui2017sketch}. In terms of PSNR, SSIM, and visual effects, we compare our models with the state-of-the-art methods including Bicubic, SRCNN~\cite{dong2014learning}, VDSR~\cite{kim2016accurate}, LapSRN~\cite{lai2017deep}, MemNet~\cite{tai2017memnet}, EDSR~\cite{lim2017enhanced}, RDN~\cite{zhang2018residual}, RCAN~\cite{zhang2018image}, and SAN~\cite{dai2019second}. We also adopt the self-ensemble strategy~\cite{lim2017enhanced} to further improve our ADCSR and denote the self-ensembled ADCSR as ADCSR+. The results are shown in Table \ref{tab:qecm}. As can be seen from the table, the PSNR and SSIM of our algorithm at scales $\times2$, $\times3$, and $\times4$ exceed the current state of the art. Figure \ref{fig:cmpic} shows a qualitative comparison of our models with Bicubic, SRCNN~\cite{dong2014learning}, VDSR~\cite{kim2016accurate}, LapSRN~\cite{lai2017deep}, MSLapSRN~\cite{lai2018fast}, EDSR~\cite{lim2017enhanced}, RCAN~\cite{zhang2018image}, and SAN~\cite{dai2019second}. The images of SRCNN, EDSR, and RCAN are derived from the authors' open-source models and code. Test images for VDSR, LapSRN, MSLapSRN, and SAN are provided by their respective authors. In the comparison of img044 in Figure \ref{fig:cmpic}, the image reconstructed by our algorithm is sharp and close to the original image. On img004, our algorithm also has a better visual effect. \section{AIM2019: Extreme Super-Resolution Challenge} This work was initially proposed for participation in the AIM2019 Extreme Super-Resolution Challenge. The goal of the contest is to super-resolve an input image by a magnification factor of $\times16$, hence the name extreme super-resolution. Our model for the challenge is an improved ADCSR: a two-stage adaptive dense connection super-resolution reconstruction network (DSSR). As shown in Figure \ref{fig:dssr}, DSSR consists of two parts, SKIP and BODY. SKIP is a simple sub-pixel convolution~\cite{shi2016real}.
The BODY part is divided into two stages. The first stage includes a feature extraction layer, multiple ADRUs (adaptive dense residual units), a GFF (global feature fusion) layer~\cite{zhang2018residual}, and an AFSL (adaptive feature sub-pixel reconstruction layer). The second stage includes a feature amplification layer, an ADRB (adaptive dense residual block), and an AFSL. During the training of DSSR, the network converges slowly because of its size, so we divide the network into two parts for training to speed up convergence. When training DSSR, we first train the SKIP. The trained ADCSR network is then loaded as pre-training parameters when training the entire network; at the same time, the first-stage feature extraction layer and each ADRU are frozen. During this period, the GFF, the AFSL, and the subsequent second-stage parameters are trained at the normal learning rate of $1\times10^{-4}$. Finally, we train the entire network once the learning rate is small. We train DSSR with the DIV8K dataset; other training settings are the same as for ADCSR. Our model's final result on the full-resolution DIV8K test images ($\times16$) is PSNR = 26.79 dB and SSIM = 0.7289. \section{Conclusions} We propose an adaptive densely connected super-resolution reconstruction algorithm (ADCSR). The algorithm is divided into two parts: BODY and SKIP. BODY improves the utilization of convolutional features through adaptive dense connections. We also explore an adaptive feature sub-pixel reconstruction layer (AFSL) to reconstruct the features of the BODY output. We pre-train SKIP so that the BODY focuses on learning high-frequency features. Several comparative experiments demonstrate the effectiveness of the proposed improvements. On the standard datasets, comparisons of PSNR, SSIM, and visual quality show that the proposed algorithm is superior to state-of-the-art algorithms. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} The optical flux during the {\sl prompt} $\gamma$-ray emission is 1--4 orders of magnitude larger than the extrapolation of the burst spectrum to optical for GRB 990123 (figure 2 of Galama et al 1999), GRB 061126 (figure 5 of Perley et al 2008), GRB 080319B (figure 3 of Racusin et al 2008), and for GRBs 060111B, 060927, 061007, 061121, 071003, 080413, and 080810 (as can be inferred from the optical and GRB properties listed in Table 1). This may suggest that the optical and burst emissions arise from different radiation processes, synchrotron emission dominating the optical {\sl counterpart} and inverse-Compton scatterings producing the 10 keV--10 MeV emission (model b2 of M\'esz\'aros \& Rees 1997, Panaitescu \& M\'esz\'aros 2000). An essential {\sl assumption} for the synchrotron self-Compton interpretation of GRBs is that the optical counterpart and burst emissions arise from the same relativistic ejecta. For GRB 080319B, whose optical counterpart emission was well-sampled (Karpov et al 2008), that assumption is supported by the broad correlation of optical and $\gamma$-ray prompt light-curves (Stamatikos et al 2008). A correlation between burst and optical counterpart emissions is also possible in the internal shock model (Rees \& M\'esz\'aros 1994) for GRBs if the pair of reverse and forward shocks produced by the interaction of relativistic shells radiate in the optical and at sub-MeV, as was proposed by Yu, Wang \& Dai (2008) for GRB 080319B. The latter model requires that, for all pairs of interacting shells, the Lorentz factor ratio is very large (above 1000), but it produces a weaker GeV emission from inverse-Compton scatterings than does the second upscattering of the former model.
A tight correlation of GRB and optical counterpart fluctuations is not expected in either model, as the spectra of the two emission components (synchrotron and inverse-Compton, or just synchrotron from reverse and forward shocks, respectively) may peak, sometimes, far from the corresponding observing band-passes (optical and $\gamma$-ray) and not yield a pulse in that photon range. From the optical and $\gamma$-ray properties of the prompt emissions of GRB 080319B, Kumar \& Panaitescu (2008) have inferred that the upscattering of GRB photons (i.e. the second inverse-Compton scattering of the primary synchrotron photons) should have produced a GeV photon yield over the burst duration of thousands of photons for Fermi's Large Area Telescope (LAT) and hundreds of photons for Agile's Gamma-Ray Imaging Detector (GRID), with the second-scattering GeV--TeV emission accompanying GRB 080319B containing 10 times more energy than that released at sub-MeV by the first scattering. If the synchrotron self-Compton process were at work in other bursts with an optical counterpart dimmer than that of GRB 080319B, then the Compton parameter for the second scattering could be substantially larger than for GRB 080319B, leading to bursts that radiate much more energy in the GeV than at sub-MeV (Piran, Sari \& Zou 2008); however, synchrotron peak energies well away from the optical can substantially reduce the Compton parameter of the second scattering and its GeV flux. Currently, the observational evidence for a prompt emission component peaking above 10 MeV (as is possible in the synchrotron self-Compton model for GRBs) is modest. The spectra of 15 GRBs measured by the Energetic Gamma-Ray Experiment Telescope (EGRET) calorimeter on the Compton Gamma-Ray Observatory up to 100 MeV (Kaneko et al 2008) show only 3 such cases. One of them is GRB 941017 (Gonzalez et al 2003), whose $\nu F_\nu$ spectrum rises up to 100 MeV; the others are GRBs 930506 and 980923.
A prompt GeV flux that exceeds the extrapolation of the burst spectrum to higher energies has also been detected by EGRET for GRB 940217 (Hurley et al 1994). At the other end, the most notable evidence provided by EGRET for the absence of higher energy emission is for GRB 930131 (Sommer et al 1994), whose power-law spectrum extends up to 1 GeV. We also note that the prompt emissions above 100 MeV of two recent bursts measured by Fermi-LAT lie on the extrapolation of the MeV spectrum. Double upscattering of the synchrotron emission is not the only model that can yield a prompt GeV emission. Previously proposed models for a {\sl prompt} GeV emission include the more "mundane" synchrotron and inverse-Compton from internal shocks (e.g. Papathanassiou \& M\'esz\'aros 1996), inverse-Compton emission from the reverse-shock (e.g. Granot \& Guetta 2003) or from the forward-shock (e.g. Wang, Dai \& Lu 2001), and upscattering of reverse-shock synchrotron photons in the forward-shock (Pe'er \& Waxman 2004), as well as some more "exotic" and uncertain ones (e.g. synchrotron emission from ultra-high energy protons or from the electrons and muons formed by the decay of photo-pions produced by those protons -- Asano \& Inoue 2007). Evidently, a comparison of the optical, sub-MeV, and GeV emissions with model-expected correlations will be required to distinguish among the various processes proposed for the higher energy component. In this paper, we develop the formalism by which optical counterpart and prompt burst measurements can be used to infer the GeV flux accompanying GRBs and apply it to the bursts with optical counterpart measurements (detections or upper limits) to calculate the bolometric GRB output. As shown below, these quantities depend strongly on the peak energy of the primary synchrotron spectrum.
The direct determination of that quantity through optical and near-infrared observations of the prompt emission and the measurement of the GeV prompt flux can then be used to test the synchrotron self-Compton model for GRBs. If the peak energy of the synchrotron spectrum cannot be determined observationally, then the GeV and optical fluxes and spectral slopes can be used to perform a weaker test of that model. The following calculations for the synchrotron self-Compton emissions are general and do not depend on the dissipation mechanism (i.e. type of shock) which accelerates relativistic electrons and produces magnetic fields. It could be the external reverse shock which propagates into the relativistic ejecta, if that mechanism can account for the burst variability, or it could be internal shocks in a variable outflow, as was proposed by Sari \& Piran (1999) and M\'esz\'aros \& Rees (1999), respectively, to explain the bright optical counterpart of GRB 990123. The physical parameters of the synchrotron self-Compton model required to account for the optical and sub-MeV emissions of that particular burst, GRB 990123, were inferred by Panaitescu \& Kumar (2007). As for GRB 080319B, it was found that the peak energy of the synchrotron spectrum was not far from the optical. \section{Formalism} In the synchrotron self-Compton model for the GRB emission, the peak energy and peak flux of the first inverse-Compton scattering are the peak energy $\varepsilon_\gamma$ and flux $F_\gamma$ of the GRB spectrum. The peak energy $\varepsilon_p$ and flux $F_p$ of the primary synchrotron spectrum could be measured directly with robotic telescopes performing multiband observations of the optical counterpart only if $\varepsilon_p$ falls in the optical bandpass (i.e.
$\varepsilon_p \sim 2$ eV), but otherwise remain unknown (optical counterpart measurements yield a relation between $F_p$ and $\varepsilon_p$). Both quantities $F_p$ and $\varepsilon_p$ are needed to calculate the typical energy $\gamma_p m_e c^2$ of the radiating electrons and the optical thickness $\tau_e$ to electron scattering of the radiating medium, which, together with the $F_\gamma$ and $\varepsilon_\gamma$ of the first scattering, lead to the peak energy $\varepsilon_{GeV}$ and flux $F_{GeV}$ of the second inverse-Compton scattering. The last two quantities set the GeV prompt flux, thus observations by Fermi-LAT and Agile-GRID of the GeV emission accompanying GRBs can be used in conjunction with the optical counterpart and burst measurements to test the synchrotron self-Compton model for GRBs. In this section, we relate the properties of the twice upscattered emission to those of the prompt optical and $\gamma$-rays. The peak energy $\varepsilon_{GeV}$ of the second inverse-Compton emission spectrum and the peak flux $F_{GeV}$ at $\varepsilon_{GeV}$ are related to those of the first inverse-Compton by \begin{equation} \varepsilon_{GeV} = \gamma_p^2 \varepsilon_\gamma \;, \quad F_{GeV} = \tau_e F_\gamma \label{Gev} \end{equation} with $\gamma_p = (\varepsilon_\gamma/\varepsilon_{peak})^{1/2}$ and $\tau_e = F_\gamma/F_{peak}$ relating the peak energies and fluxes of the first inverse-Compton spectrum to those of the spectrum of the {\sl received} synchrotron emission, $\varepsilon_{peak}$ and $F_{peak}$.
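As an illustration of equation (\ref{Gev}), the short sketch below evaluates the second-scattering peak. The fiducial numbers (a 200 keV burst peak, a synchrotron peak at 2 eV, and a synchrotron peak flux $10^3$ times the GRB peak flux) are assumptions chosen only for the example, not measurements of any burst.

```python
# Evaluate eq. (1): eps_GeV = gamma_p^2 * eps_gamma, F_GeV = tau_e * F_gamma,
# with gamma_p = (eps_gamma/eps_peak)^(1/2) and tau_e = F_gamma/F_peak.
# Energies in eV; fluxes in arbitrary common units.

def second_ic_peak(eps_gamma, F_gamma, eps_peak, F_peak):
    """Peak energy (eV) and peak flux of the twice-upscattered component."""
    gamma_p = (eps_gamma / eps_peak) ** 0.5   # typical electron Lorentz factor
    tau_e = F_gamma / F_peak                  # electron-scattering optical depth
    return gamma_p ** 2 * eps_gamma, tau_e * F_gamma

# fiducial (assumed) values: eps_gamma = 200 keV, eps_peak = 2 eV,
# synchrotron peak flux 10^3 times the GRB peak flux
eps_GeV, F_GeV = second_ic_peak(2e5, 1.0, 2.0, 1e3)
# gamma_p ~ 316, so the second scattering peaks near 2e10 eV = 20 GeV,
# with a peak flux 10^3 times below the GRB peak flux
```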
If the emitting fluid is optically thin to synchrotron self-absorption at the peak energy $\varepsilon_p$ of the synchrotron {\sl emissivity}, then $\varepsilon_{peak}=\varepsilon_p$ and $F_{peak}=F_p$; however, if the optical thickness to synchrotron self-absorption at $\varepsilon_p$ is above unity, the received spectrum peaks at the synchrotron self-absorption energy $\varepsilon_a$ (i.e. $\varepsilon_{peak} = \varepsilon_a$). Thus \begin{equation} \gamma_p = \left\{ \begin{array}{ll} \hspace*{-2mm} (\varepsilon_\gamma/\varepsilon_p)^{1/2} & \tau_p < 1 \\ \hspace*{-2mm} (\varepsilon_\gamma/\varepsilon_a)^{1/2} & \tau_p > 1 \end{array} \right. \; , \quad \tau_e = \left\{ \begin{array}{ll} \hspace*{-2mm} F_\gamma/F_p & \tau_p < 1 \\ \hspace*{-2mm} F_\gamma/F_a & \tau_p > 1 \end{array} \right. \label{gpte} \end{equation} where $F_a$ is the synchrotron flux at $\varepsilon_a$ and \begin{equation} \tau_p = \frac{5e \tau_e}{\sigma_e B \gamma_p^5} \end{equation} is the optical thickness to synchrotron self-absorption at the peak energy $\varepsilon_p$, $\sigma_e$ being the cross-section for electron scattering (in the Thomson regime) and $B$ the magnetic field strength.
The value of $B$ can be inferred from the synchrotron peak energy: \begin{equation} \varepsilon_p = \frac{eh}{4m_ec} \frac{\gamma_p^2 B \Gamma}{z+1} \end{equation} taking into account the relativistic boost of photon energy by the Lorentz factor $\Gamma$ of the source, which leads to \begin{equation} \tau_p = \frac{52.6\; {\rm MeV}}{\varepsilon_p} \frac{\Gamma\tau_e}{(z+1)\gamma_p^3} \;. \label{taup} \end{equation} Therefore, to find $\gamma_p$ and $\tau_e$, the quantities $\varepsilon_p$, $\varepsilon_a$, $F_p$, and $F_a$ must be constrained from the counterpart optical flux (which is the only measurable quantity directly pertaining to the synchrotron emission). As that provides only one constraint, we shall express the following results as a function of the peak energy $\varepsilon_p$, and consider separately the $\tau_p < 1$ and $\tau_p > 1$ cases. \begin{figure} \centerline{\psfig{figure=fig1.eps,width=7cm}} \caption{ Power-law spectra of synchrotron and first inverse-Compton emissions for an emitting plasma which is optically thin ($\tau_p <1$, upper panel) and optically thick ($\tau_p >1$, lower panel) at the emissivity peak energy $\varepsilon_p$, and for electrons with a power-law energy distribution above the energy corresponding to the synchrotron characteristic energy $\varepsilon_p$. Spectral power-law indices ($d\log F_\varepsilon/ d\log \varepsilon$) are indicated. Below the peak of the spectrum, the slopes are $\alpha=1/3$ and $\alpha=5/2$ for $\tau_p <1$ and $\tau_p >1$, respectively.
However, we allow an arbitrary slope $-\beta < \alpha < 1/3\,(5/2)$ to accommodate the diversity of low-energy slopes measured for the GRB emission (which is the first inverse-Compton component), which may arise if the electron distribution is a pure power-law only above $\gamma_p$. GRB observations determine the peak energy and flux ($\varepsilon_\gamma$, $F_\gamma$) of the first inverse-Compton emission, while optical counterpart measurements set a constraint on the spectral peak properties ($\varepsilon_p$, $F_p$) of the synchrotron emission which depends on the location of the optical bandpass relative to the break energies $\varepsilon_p$ (emissivity peak) and $\varepsilon_a$ (self-absorption).} \label{f1} \end{figure} For $\tau_p < 1$, equations (\ref{gpte}) and (\ref{taup}) yield \begin{equation} \tau_p = k \frac{F_\gamma}{F_p} \varepsilon_p^{1/2}\;, \quad k \equiv \frac {52.6\, \Gamma\; {\rm MeV}}{(z+1) \varepsilon_\gamma^{3/2}} \;.
\label{tp1} \end{equation} In this case, the self-absorption energy is \begin{equation} \varepsilon_a = \varepsilon_p \tau_p^{3/5} < \varepsilon_p \label{ea1} \end{equation} and the optical counterpart flux $F_o$ is (Figure \ref{f1}, upper panel) \begin{equation} F_o = F_p \left\{ \begin{array}{ll} (\varepsilon_{\rm o}/\varepsilon_a)^2 (\varepsilon_a/\varepsilon_p)^\alpha & \varepsilon_{\rm o} < \varepsilon_a < \varepsilon_p \\ (\varepsilon_{\rm o}/\varepsilon_p)^\alpha & \varepsilon_a < \varepsilon_{\rm o} < \varepsilon_p \\ (\varepsilon_p/\varepsilon_{\rm o})^\beta & \varepsilon_a < \varepsilon_p < \varepsilon_{\rm o} \end{array} \right. \label{Fo1} \end{equation} where $\alpha$ and $\beta$ are the spectral slopes below and above $\varepsilon_p$ of the synchrotron emissivity, which are the same as the low- and high-energy slopes of the first inverse-Compton GRB spectrum: $F_\varepsilon \propto \varepsilon^\alpha$ and $F_\varepsilon \propto \varepsilon^{-\beta}$, respectively. Substituting $F_p$ in equation (\ref{tp1}) and using equation (\ref{ea1}), one finds \begin{equation} \tau_p = \left( k \frac{F_\gamma}{F_o} \frac{\varepsilon_{\rm o}^2}{\varepsilon_p^{3/2}} \right)^{5/(11-3\alpha)} \;. \label{tp1a} \end{equation} for the $\varepsilon_{\rm o} < \varepsilon_a < \varepsilon_p$ case. Then, the starting condition $\tau_p <1$ is equivalent to $\varepsilon_p > \varepsilon_1$ with \begin{equation} \varepsilon_1 \equiv (k' \varepsilon_{\rm o}^{-2} )^{-2/3} \;, \quad k' \equiv \frac{F_o}{k F_\gamma} \end{equation} while the assumption $\varepsilon_{\rm o} < \varepsilon_a$ is equivalent to $\varepsilon_p > \varepsilon_2$ with \begin{equation} \varepsilon_2 \equiv (k' \varepsilon_{\rm o}^{\frac{5}{3}-\alpha})^{6/(13-6\alpha)} \;. \end{equation} For the $\varepsilon_a < \varepsilon_{\rm o} < \varepsilon_p$ case, one obtains \begin{equation} \tau_p = k \frac{F_\gamma}{F_o} \frac{\varepsilon_{\rm o}^\alpha}{\varepsilon_p^{\alpha-\frac{1}{2}}} \label{tp2} \end{equation} while for the $\varepsilon_a < \varepsilon_p < \varepsilon_{\rm o}$ case \begin{equation} \tau_p = k \frac{F_\gamma}{F_o} \frac{\varepsilon_p^{\beta+\frac{1}{2}}}{\varepsilon_{\rm o}^\beta} \label{tp3} \end{equation} the $\tau_p <1$ condition requiring that $\varepsilon_p < \varepsilon_3$, where \begin{equation} \varepsilon_3 \equiv (k' \varepsilon_{\rm o}^\beta)^{2/(2\beta+1)} \;. \end{equation} The three reference photon energies $\varepsilon_1$, $\varepsilon_2$, and $\varepsilon_3$ depend only on observables and allow the selection of the photon energy ordering given in equation (\ref{Fo1}): \begin{equation} \left\{ \begin{array}{lll} \varepsilon_{\rm o} < \varepsilon_a < \varepsilon_p & {\rm if} & \varepsilon_1, \varepsilon_2 < \varepsilon_p \\ \varepsilon_a < \varepsilon_{\rm o} < \varepsilon_p & {\rm if} & \varepsilon_{\rm o} < \varepsilon_p < \varepsilon_1 \\ \varepsilon_a < \varepsilon_p < \varepsilon_{\rm o} & {\rm if} & \varepsilon_p < \varepsilon_3, \varepsilon_{\rm o} \end{array} \right. \label{case1} \end{equation} Thus, given a peak energy $\varepsilon_p$, equation (\ref{case1}) identifies the ordering of $\varepsilon_{\rm o}$, $\varepsilon_a$, and $\varepsilon_p$, from which $\tau_p$ can be determined using equations (\ref{tp1a}), (\ref{tp2}), or (\ref{tp3}), further leading to $\varepsilon_a$ through equation (\ref{ea1}), then to the peak flux $F_p$ of equation (\ref{Fo1}), then to the $\tau_e$ of equation (\ref{gpte}) and, finally, to the spectral peak flux of the second inverse-Compton scattering of equation (\ref{Gev}) as a function of $\varepsilon_p$.
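The inversion chain just described lends itself to a short numerical sketch. The routine below implements only the $\tau_p < 1$ branch, with energies in eV and fluxes in arbitrary common units; the function name and the fiducial numbers in the usage example are illustrative assumptions, not fits to any burst.

```python
# tau_p < 1 branch of the inversion chain: from the observables and a trial
# synchrotron peak energy eps_p, select the ordering of (eps_o, eps_a, eps_p)
# via the reference energies e1, e2, e3, solve for tau_p, then recover
# eps_a, F_p, tau_e and the second-scattering peak (eps_GeV, F_GeV).

def thin_case_chain(eps_p, eps_o, eps_g, F_o, F_g, Gamma, z, alpha, beta):
    k = 5.26e7 * Gamma / ((z + 1.0) * eps_g ** 1.5)   # 52.6 MeV expressed in eV
    kp = F_o / (k * F_g)                              # k' of the text
    # reference energies fixing the ordering of eps_o, eps_a, eps_p
    e1 = (kp * eps_o ** -2.0) ** (-2.0 / 3.0)
    e2 = (kp * eps_o ** (5.0 / 3.0 - alpha)) ** (6.0 / (13.0 - 6.0 * alpha))
    e3 = (kp * eps_o ** beta) ** (2.0 / (2.0 * beta + 1.0))
    if eps_p > e1 and eps_p > e2:                 # eps_o < eps_a < eps_p
        tau_p = (k * F_g / F_o * eps_o ** 2 / eps_p ** 1.5) ** (5.0 / (11.0 - 3.0 * alpha))
    elif eps_o < eps_p < e1:                      # eps_a < eps_o < eps_p
        tau_p = k * F_g / F_o * eps_o ** alpha * eps_p ** (0.5 - alpha)
    elif eps_p < e3 and eps_p < eps_o:            # eps_a < eps_p < eps_o
        tau_p = k * F_g / F_o * eps_p ** (beta + 0.5) / eps_o ** beta
    else:
        raise ValueError("no consistent tau_p < 1 ordering for this eps_p")
    eps_a = eps_p * tau_p ** 0.6                  # self-absorption energy
    # invert the optical-flux relation for the synchrotron peak flux F_p
    if eps_o < eps_a:
        F_p = F_o * (eps_a / eps_o) ** 2 * (eps_p / eps_a) ** alpha
    elif eps_o < eps_p:
        F_p = F_o * (eps_p / eps_o) ** alpha
    else:
        F_p = F_o * (eps_o / eps_p) ** beta
    tau_e = F_g / F_p
    return tau_p, eps_a, F_p, tau_e, eps_g ** 2 / eps_p, tau_e * F_g

# fiducial (assumed) inputs: eps_p = 10 eV, eps_o = 2 eV, eps_gamma = 200 keV,
# a bright optical counterpart (F_o/F_g = 1000), Gamma = 300, z = 1
out = thin_case_chain(10.0, 2.0, 2e5, 1e3, 1.0, 300.0, 1.0, 0.0, 1.25)
```

For these inputs the chain lands in the first ordering and returns a self-consistent $\tau_p \approx 0.13 < 1$, with the second-scattering peak at $\varepsilon_\gamma^2/\varepsilon_p = 4$ GeV.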
For $\tau_p >1$, we are interested in calculating the self-absorption energy $\varepsilon_a$ and the synchrotron flux $F_a$ at $\varepsilon_a$ as a function of $\varepsilon_p$. From equations (\ref{gpte}) and (\ref{taup}), one obtains \begin{equation} \tau_p = k \frac{F_\gamma}{F_a} \frac{\varepsilon_a^{3/2}}{\varepsilon_p} \label{tpthick} \end{equation} with $\varepsilon_a$ being \begin{equation} \varepsilon_a = \varepsilon_p \tau_p^{\frac{2}{2\beta+5}} > \varepsilon_p \;. \label{ea2} \end{equation} The optical counterpart flux is related to $F_a$ through (Figure \ref{f1}, lower panel) \begin{equation} F_o = F_a \left\{ \begin{array}{ll} (\varepsilon_{\rm o}/\varepsilon_p)^2 (\varepsilon_p/\varepsilon_a)^\alpha & \varepsilon_{\rm o} < \varepsilon_p < \varepsilon_a \\ (\varepsilon_{\rm o}/\varepsilon_a)^\alpha & \varepsilon_p < \varepsilon_{\rm o} < \varepsilon_a \\ (\varepsilon_a/\varepsilon_{\rm o})^\beta & \varepsilon_p < \varepsilon_a < \varepsilon_{\rm o} \end{array} \right.
\label{Fo2} \end{equation} Continuing in a similar way as shown above for $\tau_p <1$, one finds that the ordering of energies in equation (\ref{Fo2}) is set by $\varepsilon_p$ as follows \begin{equation} \left\{ \begin{array}{lll} \varepsilon_{\rm o} < \varepsilon_p < \varepsilon_a & {\rm if} & \varepsilon_{\rm o} <\varepsilon_p < \varepsilon_2 \\ \varepsilon_p < \varepsilon_{\rm o} < \varepsilon_a & {\rm if} & \varepsilon_4 < \varepsilon_p < \varepsilon_{\rm o} \\ \varepsilon_p < \varepsilon_a < \varepsilon_{\rm o} & {\rm if} & \varepsilon_3 < \varepsilon_p < \varepsilon_4 \end{array} \right. \label{case2} \end{equation} where \begin{equation} \varepsilon_4 \equiv (k' \varepsilon_{\rm o}^{\beta+1})^{2/(2\beta+3)} \;.
\end{equation} Then, the optical thickness to self-absorption at the spectral peak of the synchrotron emissivity is \begin{equation} \tau_p = \left\{ \begin{array}{lll} (k \frac{F_\gamma}{F_o} \varepsilon_{\rm o}^2/\varepsilon_p^{3/2})^{(\beta+5/2)/(\beta+\alpha+1)} & \varepsilon_{\rm o} < \varepsilon_p < \varepsilon_a \\ (k \frac{F_\gamma}{F_o} \varepsilon_{\rm o}^\alpha/\varepsilon_p^{\alpha-1/2})^{(\beta+5/2)/(\beta+\alpha+1)} & \varepsilon_p < \varepsilon_{\rm o} < \varepsilon_a \\ (k \frac{F_\gamma}{F_o} \varepsilon_p^{\beta+1/2}/\varepsilon_{\rm o}^\beta)^{\beta+5/2} & \varepsilon_p < \varepsilon_a < \varepsilon_{\rm o} \\ \end{array} \right. \label{taup1} \end{equation} Equations (\ref{taup1}), (\ref{ea2}), and (\ref{Fo2}) allow the calculation of $\varepsilon_a (\varepsilon_p)$ and $F_a(\varepsilon_p)$. The reference energies $\varepsilon_1$, $\varepsilon_2$, $\varepsilon_3$, and $\varepsilon_4$ have the same form $\varepsilon(x) = (k' \varepsilon_{\rm o}^x)^{2/(2x+1)}$ with $x=-2,\frac{5}{3} -\alpha,\beta,\beta+1$, respectively, thus $\varepsilon(x)$ has a singularity at $x=-1/2$. For $k' < \varepsilon_{\rm o}^{1/2}$, i.e. $F_o/F_\gamma < 52.6\, \Gamma\, \varepsilon_{\rm o}^{1/2}\, {\rm MeV}/[(z+1) \varepsilon_\gamma^{3/2}]$, it can be shown that $d\varepsilon(x)/dx > 0$ and the ordering of the reference energies is $\varepsilon_3 < \varepsilon_4 < \varepsilon_{\rm o} < \varepsilon_1$ (the relative location of $\varepsilon_2$ depending on $\alpha$). For $k' > \varepsilon_{\rm o}^{1/2}$, $d\varepsilon(x)/dx < 0$ and $\varepsilon_1 < \varepsilon_{\rm o} < \varepsilon_4 < \varepsilon_3$.
The Klein-Nishina effect on the second inverse-Compton scattering is important if the energy of the first-scattering (GRB) photon, as measured in the electron frame, is comparable to or larger than the electron rest-mass energy, i.e. if $(z+1)(\varepsilon_\gamma/\Gamma) \gamma_p > m_e c^2$, where the typical electron Lorentz factor $\gamma_p$ is obtained using equation (\ref{gpte}). Considering only the $\tau_p < 1$ case, for which $\gamma_p = (\varepsilon_\gamma/\varepsilon_p)^{1/2}$, the Klein-Nishina effect is important for $\varepsilon_p < \varepsilon_{kn}$ with \begin{equation} \varepsilon_{kn} \equiv \left( \frac{z+1}{\Gamma m_e c^2}\right)^{\hspace*{-1mm}2} \hspace*{-1mm}\varepsilon_\gamma^3 = 3 \left(\frac{\varepsilon_\gamma}{200\,{\rm keV}}\right)^{\hspace*{-1mm}3} \hspace*{-1mm} \left(\frac{z+1}{3}\right)^{\hspace*{-1mm}2} \hspace*{-1mm} \left(\frac{\Gamma}{300}\right)^{\hspace*{-1mm}-2} \hspace*{-1mm} {\rm eV} \;. \end{equation} Thus, the Klein-Nishina effect is expected to be important only if the peak energy of the synchrotron spectrum is below optical. In this case, the energy of the twice-upscattered photon is $\varepsilon_{GeV} = \Gamma m_e c^2 \gamma_p/(z+1)$ (lower than given in equation \ref{Gev}) and the peak flux of the second inverse-Compton emission is diminished by the decreased scattering cross-section, $\tau_{e,kn} \simeq \tau_e (\varepsilon_p/\varepsilon_{kn})^{1/2} < \tau_e$. For $\varepsilon_a < \varepsilon_p < \varepsilon_{\rm o}$, equations (\ref{Fo1}) and (\ref{gpte}) lead to $\tau_e = (F_\gamma/F_o) (\varepsilon_p/\varepsilon_{\rm o})^\beta$, and the Compton parameter for the second scattering is $Y_{GeV} = \tau_{e,kn}\, \varepsilon_{GeV}/\varepsilon_\gamma = \tau_e\, \varepsilon_\gamma/\varepsilon_{kn}$. 
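As a quick numerical check, the scaling above for $\varepsilon_{kn}$ can be evaluated directly. The sketch below is illustrative only (the function name and defaults are ours, set to the fiducial values quoted in the equation):

```python
def eps_kn_eV(eps_gamma_keV=200.0, z=2.0, Gamma=300.0):
    """Klein-Nishina threshold energy (eV): synchrotron peak energies below
    eps_kn give a KN-suppressed second inverse-Compton scattering."""
    return (3.0 * (eps_gamma_keV / 200.0) ** 3
                * ((z + 1.0) / 3.0) ** 2
                * (Gamma / 300.0) ** -2)

eps_kn_eV()  # 3 eV for the fiducial parameters
```

For a harder burst spectrum ($\varepsilon_\gamma = 400$ keV) at the same redshift and Lorentz factor, the threshold rises by a factor $2^3 = 8$, consistent with the $\varepsilon_\gamma^3$ scaling.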
For $\varepsilon_p$ above optical, the Klein-Nishina effect is negligible and $Y_{GeV} = Y_\gamma = (F_\gamma \varepsilon_\gamma)/ (F_p \varepsilon_p)$, where $Y_\gamma$ is the Compton parameter for the first scattering and $F_p$ is given by equation (\ref{Fo1}) for $\varepsilon_a < \varepsilon_{\rm o} < \varepsilon_p$. Thus, the fluence of the twice-upscattered emission, $\Phi_{GeV} = Y_{GeV} \Phi_\gamma$, is \begin{equation} \Phi_{GeV} = \frac{\Phi_\gamma^2}{t_\gamma F_o} \times \left\{ \begin{array}{ll} \hspace*{-2mm} \left(\frac{\displaystyle \Gamma m_e c^2}{\displaystyle z+1}\right)^2 \frac{\displaystyle \varepsilon_p^\beta}{\displaystyle \varepsilon_\gamma^3 \varepsilon_{\rm o}^\beta} & \varepsilon_p < \varepsilon_{kn} \\ \hspace*{-2mm} \frac{\displaystyle \varepsilon_{\rm o}^\alpha}{\displaystyle \varepsilon_p^{\alpha+1}} & \varepsilon_{kn} < \varepsilon_p \end{array} \right. \label{PhiGeV} \end{equation} where $\Phi_\gamma$ and $t_\gamma$ are the GRB fluence and duration, respectively. Therefore, for fixed properties of the prompt optical and GRB emissions, the fluence of the GeV emission and its Compton parameter increase as $\varepsilon_p^\beta$ for $\varepsilon_p < \varepsilon_{kn}$ and decrease as $\varepsilon_p^{-(\alpha+1)}$ for $\varepsilon_p > \varepsilon_{kn}$, being maximal when the peak energy of the synchrotron spectrum is in or close to the optical ($\varepsilon_p = \varepsilon_{kn} \simeq \varepsilon_{\rm o}$). 
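The piecewise dependence of equation (\ref{PhiGeV}) on $\varepsilon_p$ can be sketched as a relative scaling, normalised to the maximum at $\varepsilon_p = \varepsilon_{kn}$ (the normalisation and parameter names are ours, for illustration only):

```python
def phi_gev_relative(eps_p, eps_kn=3.0, alpha=0.0, beta=1.5):
    """Relative GeV fluence vs. synchrotron peak energy (both in eV):
    rises as eps_p**beta below eps_kn, falls as eps_p**-(alpha+1) above it."""
    if eps_p < eps_kn:
        return (eps_p / eps_kn) ** beta
    return (eps_kn / eps_p) ** (alpha + 1.0)
```

The two branches join continuously at $\varepsilon_p = \varepsilon_{kn}$, where the GeV output peaks, as stated in the text.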
Using equation (\ref{PhiGeV}) to assess the effect of the observables on the expected fluence of the second scattering, the GeV fluence is expected to be (1) {\sl correlated} with the burst fluence (which is quite trivial, as the GeV photons are the upscattered burst photons) and (2) {\sl anticorrelated} with the optical counterpart flux, burst duration, and peak energy of the GRB spectrum (if $\varepsilon_p$ is below optical), with the caveats that these correlations (a) should be weakened, and could even be wiped out, by variations in $\varepsilon_p$ from burst to burst (as $\varepsilon_p$ affects the GeV flux strongly) and (b) could be affected by correlations among the optical and burst properties. Figure \ref{f2} shows the dependence of some characteristics of the second inverse-Compton scattering (peak energy $\varepsilon_{GeV} = \gamma_p^2 \varepsilon_\gamma$ and upscattered self-absorption energy $\varepsilon_A = \gamma_p^4 \varepsilon_a$ in the upper panel, Compton parameter $Y_{GeV} = (\varepsilon_{GeV} F_{GeV})/ (\varepsilon_\gamma F_\gamma)$ in the middle panel) on the peak energy $\varepsilon_p$ of the synchrotron spectrum, calculated with the aid of the equations above. Aside from the unknown $\varepsilon_p$, the optical and GRB spectral peak fluxes, $F_o$ and $F_\gamma$, are the major factors affecting the brightness of the twice-upscattered emission, the other parameters and observables having a lesser effect. 
\begin{figure} \psfig{figure=fig2.eps,width=7cm} \caption{Dependence of the twice-upscattered self-absorption energy $\varepsilon_A$ and peak photon energy $\varepsilon_{GeV}$ (top panel), the Compton parameter for the second scattering $Y_{GeV}$ (middle panel), and the synchrotron self-absorption energy $\varepsilon_a$ (bottom panel) on the peak energy $\varepsilon_p$ of the synchrotron emissivity, for three likely ratios of the (synchrotron) optical counterpart flux $F_o$ to the (first inverse-Compton) GRB flux $F_\gamma$ at the peak of the GRB spectrum. Other parameters are set to average values for the GRBs of Table 1 for which optical counterpart measurements have been obtained: $F_\gamma = 0.3$ mJy and GRB peak energy $\varepsilon_\gamma = 200$ keV (low- and high-energy GRB spectral slopes $\alpha=0$ and $\beta = 1.5$ were assumed); for the calculation of the Klein-Nishina effect on the upscattered emission, a redshift $z=2$ and a source Lorentz factor $\Gamma=300$ were assumed. $\varepsilon_p$ is bounded above by the condition that the 10 keV flux (lower bound of the bandpass of most burst detectors) is dominated by the first inverse-Compton component. } \label{f2} \end{figure} The GeV emission spectrum has the same shape as given in equations (\ref{Fo1}) and (\ref{Fo2}), except that upscattering of the self-absorbed part of the synchrotron spectrum yields a flatter one, $F_\varepsilon \propto \varepsilon$ (Panaitescu \& M\'esz\'aros 2000). The middle panel of Figure \ref{f2} shows the expected behaviour of the second scattering's Compton parameter, which peaks for $\varepsilon_p \sim 1$ eV. 
As $Y_{GeV}$ is a measure of the GeV fluence, a measurement of the prompt GeV fluence yields two solutions for the unknown peak $\varepsilon_p$ of the synchrotron spectrum (see also equation \ref{PhiGeV}). The real solution can be found using the 1--100 GeV spectrum: a hard $F_\varepsilon \propto \varepsilon$ spectrum (resulting from twice upscattering synchrotron photons below self-absorption) or one of slope $\alpha$ up to tens of GeV indicates that $\varepsilon_p \lesssim 1$ eV, while a soft spectrum of slope $\beta$ above 1--10 GeV is expected for $\varepsilon_p \gtrsim 10$ eV. Then, compatibility with the spectrum of the optical counterpart emission, which the bottom panel of Figure \ref{f2} shows should be optically thin for $\varepsilon_p \lesssim 1$ eV and thick for $\varepsilon_p \gtrsim 1$ eV, offers a possible test of the synchrotron self-Compton model for the burst emission. A stronger test can be done if the peak energy of the GeV spectrum is measured, as in this case the GeV peak energy and fluence provide two independent constraints on the peak energy of the synchrotron spectrum. 
\section{Application to bursts with optical counterpart measurements} Without knowing the peak energy of the synchrotron spectrum, we estimate the GeV output and total energetics of bursts for which optical counterpart measurements exist for $\varepsilon_p = 1$ eV, which maximizes the GeV prompt output, and for $\varepsilon_p = 0.01$ eV (values above optical also reduce the GeV output, an upper limit of 100 eV on $\varepsilon_p$ being imposed by requiring that the 10 keV prompt emission is dominated by the first scattering and not by the high-energy tail of the synchrotron spectrum). The relevant properties of the optical and GRB prompt emissions of bursts with optical counterpart measurements are listed in Table 1. For more than half of those bursts, only upper limits on the optical counterpart flux have been obtained, the upper limit listed in Table 1 being the deepest available over an integration time that is within the burst emission or up to a factor two in time after the last GRB peak. Optical counterpart measurements have been corrected for the often modest Galactic dust extinction (given in the last column), but may be affected by a more substantial extinction in the host galaxy, which could be estimated, in some cases, from the optical afterglow spectrum. \begin{table*} \vspace*{-3mm} \caption{Gamma-ray (columns 3--7) and optical (counterpart in column 8, afterglow in columns 9 and 10) properties of GRBs with optical counterpart measurements. $t_\gamma$ = burst duration, $\Phi_\gamma$ = burst fluence, $\varepsilon_\gamma$ = GRB peak energy, $\alpha$ = slope of GRB spectrum below $\varepsilon_\gamma$ ($F_\varepsilon \propto \varepsilon^\alpha$). For GRBs in boldface, the optical counterpart flux is more than 10 times larger than the extrapolation of the GRB spectrum to optical energies. 
``Wh'' is for UVOT's white filter, ``Un'' for unfiltered. } \vspace*{-2mm} \begin{tabular}{ccccccccccc} \hline GRB & redshift & $t_\gamma$ & $\Phi_\gamma$ & band & $\alpha$ & $\varepsilon_\gamma$ & OC flux & $t_b$ & decay & E(B-V) \\ & & (s) & (cgs) & (keV)& & (keV) & (mag) & (d) & index & \\ \hline {\bf080810}&3.35&140& 1.7e-5 &20-1000&-0.2& 550 & Un=13.2 &$>$5.6 & 1.40 & 0.03 \\ 080802 & &176& 1.3e-6 &15-150 &-0.8& & Wh$>$20 & & & 0.80 \\ 080607 &3.04&85 & 8.9e-5 &20-4000&-0.1& 420 & R=15.2 &$>$0.6 & & 0.02 \\ 080603B&2.69&70 & 4.5e-6 &20-1000&-0.2& 100 & Un=14.1 & 0.6 & 3.05 & 0.01 \\ {\bf080413}&2.44&55 & 4.8e-6 &15-1000&-0.2& 170 & Un=12.8 &$>$0.15& 1.2 & 0.16 \\ {\bf080319B}&0.94&57 & 5.7e-4 &20-7000& 0.2& 650 & V=6.0 &$>$20 & 1.33 & 0.01 \\ 080310 &2.43&365& 2.3e-6 &15-150 & &$<$30& R=17 & 2 & 2.4 & 0.04 \\ 080307 & &64 & 7.3e-7 &15-150 &-0.4& & R$>$16.9&$>$0.06& 0.7 & 0.03 \\ 080229 & &64 & 9.0e-6 &15-150 &-0.9& & R$>$14.7& & & 0.15 \\ 080212 & &123& 2.9e-6 &15-150 &-0.6& & R$>$17.8&$>$0.6 & 0.4 & 0.16 \\ 080205 & &107& 2.1e-6 &15-150 &-1.1& & Un=18.1 & & & 0.09 \\ 071031 &2.69&180& 9.0e-7 &15-150 & &$<$30& R=15 &$>$0.13& 0.55 & 0.01 \\ 071025 & &109& 6.5e-6 &15-150 &-0.8& & R$>$17.3&$>$0.2 & 1.8 & 0.08 \\ 071011 & &61 & 2.2e-6 &15-150 &-0.4& & R$>$16.9&$>$0.15& 0.7 & 0.91 \\ {\bf071003}&1.60&30 & 1.2e-5 &20-4000& 0.1& 800& Un=12.8 &$>$7.9 & 1.60 & 0.15 \\ 070808 & &32 & 1.2e-6 &15-150 &-0.5& &Un$>$16.2& & & 0.02 \\ 070721B&3.63&32 & 2.1e-6 &15-150 &-0.3& & Wh=15.9 & & & 0.02 \\ 070621 & &40 & 4.3e-6 &15-150 &-0.6& &Un$>$16.6& & & 0.05 \\ 070616 & &402& 1.9e-5 &15-150 &-0.9& 100& V=16.5 & & & 0.40 \\ 070521 &0.55&55 & 1.8e-5 &20-1000& 0.1& 220& R$>$17.1& & & 0.03 \\ 070429 & &163& 9.2e-7 &15-150 &-1.1& &Un$>$16.2& & & 0.17 \\ 070420 & &120& 2.6e-5 &20-1000&-0.1& 170& R=16.2 &$>$0.15& 0.88 & 0.52 \\ 070419B& &91 & 1.1e-5&100-1000& 0.1& &Wh$>$18.5& & & 0.09 \\ 070419A&0.97&116& 5.6e-7 &15-150 & &$<$30& R$>$18.6&$>$3.7 & 0.99 & 0.03 \\ 070411 &2.95&101& 2.5e-6 &15-150 
&-0.7& & R=17.9 &$>$5.8 & 1.11 & 0.29 \\ 070306 &1.50&210& 5.5e-6 &15-150 &-0.7& &Wh$>$19.8& & & 0.03 \\ 070220 & &30 & 1.1e-5 &20-2000&-0.2& 300&Wh$>$19.6& & & 0.90 \\ 070208 &1.17&48 & 4.3e-7 &15-150 &-1.0& &Un$>$18.7&$>$0.3 & 0.55 & 0.01 \\ 070129 & &460& 3.1e-6 &15-150 &-1.0& & V$>$17.3& & & 0.14 \\ 061222 & &100& 2.7e-5 &20-2000& 0.1& 280&Un$>$17.0& & & 0.10 \\ {\bf061126}&1.16&25 & 2.0e-5 &30-2000& 0.1& 935& R=12.93 &$>$1.8 & 0.99 & 0.18 \\ {\bf061121}&1.31&81 & 1.4e-5 &15-150 & 0.2& 455& Un=14.9 &$>$3.9 & 1.05 & 0.05 \\ 061110 &0.76&41 & 1.1e-6 &15-150 &-0.7& &Un$>$16.2& & & 0.09 \\ {\bf061007}&1.26&90 & 2.5e-4&20-10000& 0.3& 400& Un=13.6 &$>$1.7 & 1.70 & 0.02 \\ {\bf060927}&5.47&23 & 1.1e-6 &15-150 & 0.1& 70& Un=16.5 &$>$2.6 & 1.01 & 0.06 \\ 060904B&0.70&192& 1.7e-6 &15-150 &-0.7& & Un=17.3 &$>$1.9 & 1.02 & 0.17 \\ 060904A& &80 & 1.6e-5 &10-2000& 0.1& 160& R$>$16.5& & & 0.02 \\ 060814 &0.84&134& 2.7e-5 &20-1000&-0.4& 260&Wh$>$19.7& & & 0.04 \\ 060729 &0.54&116& 2.7e-6 &15-150 &-0.9& & Un=15.67& $>$28 & 1.27 & 0.05 \\ 060719 & &55 & 1.6e-6 &15-150 &-1.0& & z$>$16.6& & & 0.07 \\ 060714 &2.71&115& 3.0e-6 &15-150 &-1.0& & Wh=19.2 &$>$3.3 & 1.22 & 0.08 \\ 060607 &3.08&100& 2.6e-6 &15-150 &-0.5& & r=16.3 &$>$0.3 & 1.20 & 0.03 \\ 060602 &0.79&60 & 1.6e-6 &15-150 &-0.1& & R$>$15 & & & 0.03 \\ 060507 & &185& 4.1e-6 &15-150 &-0.8& &Un$>$15.5& & & 0.16 \\ 060418 &1.49&44 & 1.6e-5 &20-1100&-0.5& 230& z=15.3 &$>$1.2 & 1.25 & 0.22 \\ 060312 & &43 & 1.8e-6 &15-150 &-0.4& & R$>$14.6& & & 0.19 \\ 060210 &3.91&255& 7.7e-6 &15-150 &-0.5& & R$>$17.5& & & 0.09 \\ 060124 &2.30&710& 2.8e-5 &20-2000&-0.3& 335& V=17.08 &$>$6.2 & 1.42 & 0.14 \\ {\bf060111B}& &25 & 5.6e-8&100-1000&-0.5& & R=13.8 & & & 0.10 \\ 051117 & &140& 4.6e-6 &15-150 &-0.8& & V=20.0 & & & 0.02 \\ 051111 &1.55&31 & 8.4e-6&100-700 &-0.5& & R=13.2 &$>$1.0 & 1.62 & 0.16 \\ 051022 &0.80&200& 2.6e-4 &20-2000&-0.2& 510& R$>$17.4& & & 0.07 \\ 051001 & &190& 1.8e-6 &15-150 &-1.1& & R$>$16.2& & & 0.02 \\ 050915 & &53 & 
8.8e-7 &15-150 &-0.4& & R$>$17.4& & & 0.03 \\ 050904 &6.29&225& 5.4e-6 &15-150 &-0.4& 340& R=18.5 &$>$5.3 & 1.15 & 0.06 \\ 050822 & &102& 3.4e-6 &15-350 & &$<$30& R$>$16.6& & & 0.02 \\ 050714 & &40 & 6.2e-7 &20-200 & & & R$>$16.6& & & 2.09 \\ 050713 & &70 & 9.1e-6 &15-350 &-0.6& 310& R$>$17.7&$>$0.8 & 0.66 & 0.41 \\ 050520 & &80 & 2.4e-6 &20-200 & & & R$>$16.1& & & 0.02 \\ 050504 & &80 & 1.5e-6 &20-200 & & & R$>$16.0& & & 0.01 \\ 050408 &1.24&34 & 1.9e-6 &30-400 & & &Un$>$14.7&$>$5.1 & 0.70 & 0.03 \\ 050319 &3.24&10 & 8.0e-7 &15-350 &-1.2& & R=16.16 &$>$3.4 & 0.48 & 0.01 \\ 041219 & &540& 1.0e-4 &15-200 & & & R$>$19.4&$>$1.0 & 1.2 & 1.75 \\ {\bf990123}&1.61&63 & 5.1e-4 &20-1000& 0.2& 720& R=8.95 & 2.0 & 1.65 & 0.02 \\ \hline \end{tabular} \end{table*} The average optical flux for the 35 upper limits of Table 1 is 1 mJy, while that of the 19 counterpart detections is 3 mJy; thus, upper limits are, on average, 1 mag deeper than detections, but both averages have large dispersions (2.8 and 1.6 mag, respectively). The burst spectral peak energy $\varepsilon_\gamma$ has been measured for 27 of the GRBs in Table 1, the average being $\overline{\varepsilon_\gamma}= 210$ keV with a dispersion of 0.45 dex. For those bursts, the flux $F_\gamma$ at the peak, calculated from the GRB fluence and spectrum (if not known, the low-energy GRB spectral slope was assumed to be $\alpha=0$; the high-energy spectral slope at $\varepsilon > \varepsilon_\gamma$ was set at $\beta=1.5$), has an average $\overline{F_\gamma} = 0.3$ mJy with a dispersion of 0.5 dex. To calculate the GeV output for all bursts, we assume the average $\overline{\varepsilon_\gamma}$ for the 37 bursts without a reported peak energy. The average optical to GRB peak flux ratio, $F_o/F_\gamma$, which is an important parameter for the calculation of the GeV emission flux, is about the same for the bursts with known $\varepsilon_\gamma$ ($\overline{F_o/F_\gamma} = 30$) as for those with assumed $\varepsilon_\gamma = \overline{\varepsilon_\gamma}$ ($\overline{F_o/F_\gamma} = 15$), as can be seen in Figure \ref{f3}. 
\begin{figure} \centerline{\psfig{figure=fig3.eps,height=6cm}} \caption{ Distribution of the ratio of the optical counterpart flux to the GRB spectral peak flux for the bursts with optical counterpart measurements (listed in Table 1). For more than half of the bursts, only upper limits on the optical counterpart flux have been obtained, which set an upper limit on the $F_o/F_\gamma$ ratio, leading to a lower limit on the Compton parameter of the second scattering and on the GeV emission. For the bursts without a determined GRB peak energy $\varepsilon_\gamma$, we assumed $\varepsilon_\gamma = 200$ keV, the average value for the bursts of Table 1 with measured $\varepsilon_\gamma$. } \label{f3} \end{figure} \subsection{Expected GeV prompt flux} Using the equations of the previous section, we calculate the break energies of the twice-upscattered prompt emission and the peak flux of the GeV spectrum, and integrate over the piecewise power-law spectrum to obtain the 0.1--100 GeV prompt photon flux expected for the bursts of Table 1 in the synchrotron self-Compton model. The distribution of the resulting GeV fluxes is shown in Figure \ref{f4} for two values of $\varepsilon_p$. With its collecting area of thousands of ${\rm cm^2}$, the LAT onboard the Fermi satellite would detect hundreds to tens of thousands of photons during a 100 s burst if the peak energy of the synchrotron spectrum were at $\varepsilon_p = 1$ eV. However, the received GeV photon flux can be greatly affected by photon-photon attenuation, which depends primarily on the source radius $R$. Calculations show that, for the bursts of Table 1, photon-photon attenuation is negligible if $R>10^{16}$ cm, but suppresses the 0.1--100 GeV flux above $\varepsilon_p = 6\times 10^{-3} (F_o/1\,{\rm mJy})^{0.6} (F_\gamma/1\,{\rm mJy})^{-1.5} (R/10^{15}\, {\rm cm})^2$ eV if $R < 10^{16}$ cm. 
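The photon counts quoted above follow from a simple count-rate estimate; a minimal sketch (the effective area and duration are the representative values used in the text, not instrument-calibrated figures):

```python
def collected_photons(photon_flux_cm2_s, area_cm2=5000.0, duration_s=100.0):
    """Expected number of collected 0.1-100 GeV photons:
    photon flux x effective collecting area x burst duration."""
    return photon_flux_cm2_s * area_cm2 * duration_s
```

For example, a prompt flux of $10^{-4}\,{\rm ph\,cm^{-2}\,s^{-1}}$ over a 100 s burst yields $\sim 50$ collected photons for a $5000\,{\rm cm^2}$ instrument, before any photon-photon attenuation.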
Thus, the non-detection of a GeV prompt emission produced by the second scattering may be due to either (1) an intrinsically weak GeV output, when $\varepsilon_p$ is well below optical, in which case pair formation is negligible, or (2) photon-photon attenuation suppressing the intrinsically bright GeV emission produced when $\varepsilon_p$ is close to optical. \begin{figure} \centerline{\psfig{figure=fig4.eps,height=4.3cm}} \caption{ Distribution of the prompt 0.1--100 GeV photon flux resulting from the second scattering of the primary synchrotron emission, for two values of the peak energy $\varepsilon_p$ of the synchrotron emissivity, and for the optical counterpart and $\gamma$-ray prompt emission properties of the bursts listed in Table 1. Upper limits on the optical counterpart flux set lower limits on the GeV flux. For the Fermi-LAT area of $\sim 5000\,{\rm cm^2}$ and a burst lasting 100 s (the average of the durations given in Table 1), the photon fluxes shown in the left panel correspond to 0.2--50 GeV photons collected during the burst, while those in the right panel correspond to $10$--$10^5$ prompt GeV photons. The effect of photon-photon attenuation, which depends strongly on the source Lorentz factor and on the radius where the prompt emission is released, is expected to be negligible for $\varepsilon_p = 0.01$ eV (left panel), but could completely suppress the GeV photon flux for $\varepsilon_p = 1$ eV (right panel) if the source radius is less than about $10^{16}$ cm. } \label{f4} \end{figure} Fermi-LAT has detected more than 10 photons above 1 GeV during GRB 080916C (Tajima et al 2008) and a similar number of photons below 1 GeV during GRB 080825C (Bouvier et al 2008). 
Optical counterpart measurements are not available for these bursts, but the distributions shown in Figure \ref{f4} suggest that $\varepsilon_p$ was in the 0.01--1 eV range, as a burst-integrated count of 10 photons corresponds to $\lesssim 10^{-4}\, {\rm photons/cm^2 s}$, which is at the bright end of the distribution shown for $\varepsilon_p = 0.01$ eV (left panel) and at the dim end for $\varepsilon_p = 1$ eV (right panel). In fact, the above range for $\varepsilon_p$ set by the GeV observations of 080916C and 080825C is an upper limit, because the measured high-energy fluxes are consistent with the extrapolation of the sub-MeV burst spectrum to GeV, as can be shown using the burst fluences and high-energy spectral slopes reported by van der Horst \& Goldstein (2008). From equation (\ref{PhiGeV}), correlations are expected between the prompt GeV fluence $\Phi_{GeV}$ and the optical or GRB prompt emission properties (optical flux $F_o$, GRB fluence $\Phi_\gamma$, burst peak energy $\varepsilon_\gamma$), provided that the peak energy $\varepsilon_p$ of the synchrotron spectrum has a narrow distribution. For the 29 bursts of Table 1 with optical counterpart measurements, we find a significant correlation only between the expected GeV fluence and the observed sub-MeV fluence, if $\varepsilon_p$ is above optical (linear correlation coefficient $r (\log \Phi_{GeV}, \log \Phi_\gamma) \simeq 0.75$, corresponding to a $10^{-6}$ probability of a chance correlation), with the other expected correlations being much less significant, owing to the scatter in the optical and GRB properties and to correlations among them. 
The strongest such correlation found is that between the burst fluence and burst peak energy\footnotemark\ -- $r (\log \Phi_\gamma, \log \varepsilon_\gamma) = 0.70$, with best fit $\Phi_\gamma \propto \varepsilon_\gamma^{2.0}$ -- which weakens the expected $\Phi_{GeV} - \Phi_\gamma$ correlation when $\varepsilon_p$ is below optical, as in this case $\Phi_{GeV} \propto \Phi_\gamma^2/\varepsilon_\gamma^3$ (equation \ref{PhiGeV}). Other correlations that we found among the prompt optical and GRB properties, and which have a probability of chance occurrence less than 10 percent, are that the optical counterpart flux $F_o$ (1) correlates with the burst fluence $\Phi_\gamma$ and (2) anticorrelates with the burst duration $t_\gamma$, both of which weaken the $\Phi_{GeV} - F_o$ anticorrelation expected from equation (\ref{PhiGeV}), and (3) correlates with the GRB peak energy $\varepsilon_\gamma$, which strengthens the expected $\Phi_{GeV} - F_o$ anticorrelation. \footnotetext{ The $\Phi_\gamma - \varepsilon_\gamma$ correlation was first noticed by Lloyd, Petrosian \& Mallozzi (2000), who suggested that it arises from a correlation of intrinsic source properties (see also Amati et al 2000).} To assess the effect on the expected correlations of $\varepsilon_p$ not being universal, we assume that, for the 29 bursts of Table 1 with optical counterpart measurements, $\log \varepsilon_p$ has a uniform distribution between 0.01 eV and 100 eV, and find that such a distribution of $\varepsilon_p$ among bursts weakens the $\Phi_{GeV} - \Phi_\gamma$ correlation found for a universal $\varepsilon_p$ above optical, the linear correlation coefficient becoming $r (\log \Phi_{GeV}, \log \Phi_\gamma) \in (0.2,0.3)$, corresponding to a 10--30 percent probability of a chance correlation. 
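The linear correlation coefficients quoted here are ordinary Pearson coefficients computed on the logarithms of the observables; for reference, a self-contained version (illustrative, with made-up sample data in the tests):

```python
import math

def pearson_r(xs, ys):
    """Linear correlation coefficient, as used for r(log Phi_GeV, log Phi_gamma)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)
```

Perfectly correlated data give $r = 1$, perfectly anticorrelated data $r = -1$; values of $|r| \sim 0.2$--$0.3$ for $\sim 29$ points are consistent with chance at the 10--30 percent level, as stated above.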
We conclude that, if the synchrotron self-Compton model for the GRB emission is correct, then the measured prompt GeV fluence (produced by the second upscattering) is likely to be correlated with the burst fluence, and less likely to be correlated with other properties of the prompt emission (e.g. the optical counterpart flux), but we note that the strength of this conclusion depends on the actual width of the synchrotron peak energy distribution. \subsection{Burst energetics} For bursts with known redshift, the isotropic radiative output ${\cal E}_r$ (synchrotron + first inverse-Compton + second inverse-Compton) can be calculated from the total prompt fluence $\Phi = \Phi_{sy} + \Phi_\gamma + \Phi_{GeV} = (Y_\gamma^{-1} + 1 + Y_{GeV}) \Phi_\gamma$, with the GRB fluence $\Phi_\gamma$ in the 10 keV--10 MeV range calculated from the fluences reported in Table 1, using the burst spectrum. The resulting ${\cal E}_r$ for the 34 bursts of Table 1 with known redshift ranges from $10^{52}$ to $10^{54}$ erg, with an average of $10^{53.3}$ erg for the 20 bursts with known peak energy $\varepsilon_\gamma$ and of $10^{53.0}$ erg for all 34 bursts, assuming $\varepsilon_\gamma = 200$ keV when not known. \begin{figure} \centerline{\psfig{figure=fig5.eps,height=4.4cm}} \caption{ Distribution of lower limits on the collimation-corrected initial outflow energy (assuming a two-sided jet), for the optical counterpart and GRB prompt emission properties of Table 1, and for two possible values of the peak energy $\varepsilon_p$ of the synchrotron emissivity. A radiative efficiency $\eta$ of 50 percent was assumed for the prompt emission. The lower limit on the jet opening was determined from the latest epoch $t_b$ (Table 1) until which the optical afterglow light-curve decay does not exhibit the steepening expected from ``seeing'' the jet boundary, assuming that the ambient medium has the typical density expected for a Wolf-Rayet progenitor of long bursts. 
Whenever it cannot be determined through afterglow observations, $t_b = 1$ day was assumed. (The jet energy has a moderate dependence on these parameters: $E_j\propto (t_b/\eta)^{1/2}$.) Only the bursts of Table 1 with known redshift have been used, and $\varepsilon_\gamma = 200$ keV was assumed when not known. Compared to the $\varepsilon_p = 1$ eV case (right panel), the required jet energy decreases by a factor $\sim 10$ for a factor 100 increase or decrease in $\varepsilon_p$, as illustrated in the left panel for $\varepsilon_p = 0.01$ eV. } \label{f5} \end{figure} The true radiative output of GRBs depends on the degree of ejecta collimation. If the optical light-curve breaks (i.e. decay steepenings) observed at 0.3--3 days in a majority of well-monitored pre-Swift bursts (e.g. Zeh, Klose \& Kann 2006) are due to ``jet effects'' (i.e. the boundary of the jet becoming visible to the observer when its decreasing Lorentz factor $\Gamma$ reaches $\theta_j^{-1}$, the inverse of the jet half-opening angle, and jet lateral spreading beginning to affect the jet dynamics at about the same time), then the epoch $t_b$ of the light-curve break can be used to determine the jet opening $\theta_j$: \begin{equation} \theta_j = [\Gamma(t_b)]^{-1} = 0.096\, \left[\frac{t_{b,d}} {(z+1){\cal E}_{k,53}}\right]^{1/4} \; {\rm rad} \end{equation} where $t_b$ is measured in days and ${\cal E}_{k,53}$ is the isotropic-equivalent jet kinetic energy after the prompt phase, measured in units of $10^{53}$ erg. The derivation of the above result for the jet dynamics $\Gamma(t)$ assumes that the jet is decelerated by its interaction with the wind produced by a Wolf-Rayet star. 
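The jet half-angle of the equation above can be evaluated directly; a short sketch (the function name and argument conventions are ours):

```python
def theta_jet_rad(t_b_days, z, E_k_iso_1e53):
    """Jet half-opening angle for a wind-like medium:
    theta_j = 0.096 [t_b,d / ((z+1) E_k,53)]**(1/4) rad."""
    return 0.096 * (t_b_days / ((z + 1.0) * E_k_iso_1e53)) ** 0.25
```

The weak (quarter-power) dependence on $t_b$ and ${\cal E}_k$ means that the lower limits on $t_b$ in Table 1 translate into only mildly uncertain jet angles.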
The post-burst jet kinetic energy is not known but can be related to the GRB bolometric output ${\cal E}_r$ by assuming a burst radiative efficiency $\eta = {\cal E}_r/({\cal E}_r + {\cal E}_k)$. Then the collimation-corrected initial energy of the two-sided GRB jet is \begin{equation} 2E_k = \frac{1}{2} \theta_j^2 ({\cal E}_r + {\cal E}_k) = 10^{50.7} \left[ \frac{{\cal E}_{r,53}\, t_{b,d}}{(z+1)\eta(1-\eta)} \right]^{1/2} {\rm erg} \;. \label{Ejet} \end{equation} Monitoring of the optical emission of the GRB afterglows listed in Table 1 is somewhat limited, with nearly none of the optical light-curves displaying a jet-break until the last measurement, as shown by the slow optical decays $d\log F_\nu /d\log t$ listed in column 10 of Table 1 (decays faster than $t^{-2}$ are likely to be caused by a jet-break having occurred). Evidence for jet-breaks in the X-ray afterglow light-curves, consisting of a steepening to a decay faster than $t^{-2}$, is not considered here because the decoupled optical and X-ray afterglow light-curve behaviours seen in many cases (e.g. chromatic X-ray light-curve breaks) suggest that these two emissions sometimes arise from different mechanisms and/or parts of the relativistic outflow. Thus, for most afterglows, we have only a lower limit on $t_b$ (column 9 in Table 1), which yields a lower limit on the initial jet energy $E_k$. Figure \ref{f5} shows the distribution of the lower limits on $E_k$ for bursts with known redshift, assuming a GRB bolometric radiative efficiency $\eta=0.5$ (which minimizes the jet energy -- equation \ref{Ejet}) and $t_b = 1$ day whenever the available optical afterglow monitoring does not allow us to set even a lower limit on $t_b$. 
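Equation (\ref{Ejet}) can likewise be evaluated numerically; in this sketch (ours, not the paper's code) the default $\eta = 0.5$ is the value that minimizes the jet energy, as noted above:

```python
def jet_energy_erg(E_r_iso_1e53, t_b_days, z, eta=0.5):
    """Collimation-corrected initial energy of the two-sided jet, eq. (Ejet):
    2E_k = 10**50.7 [E_r,53 t_b,d / ((z+1) eta (1-eta))]**(1/2) erg."""
    return 10.0 ** 50.7 * (E_r_iso_1e53 * t_b_days
                           / ((z + 1.0) * eta * (1.0 - eta))) ** 0.5
```

Since the result scales as $[\eta(1-\eta)]^{-1/2}$, any efficiency other than $\eta = 0.5$ raises the inferred jet energy, so these estimates are conservative lower limits.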
For a given burst, the jet energy is maximal for $\varepsilon_p$ in the optical, as this value minimizes the synchrotron peak flux required to account for the observed optical counterpart flux, which maximizes the Compton parameter, the GeV output, and the total isotropic-equivalent burst output. The largest lower limit on the initial jet kinetic energy, obtained for $\varepsilon_p = 1$ eV, is $10^{53}$ erg, and is lower by a factor 10 for a factor 100 increase or decrease in $\varepsilon_p$. The energy that the long-GRB progenitor (a black hole plus accretion torus formed after the collapse of a Wolf-Rayet core) can deposit into a relativistic jet depends on the available energy reservoir (a torus with a rest-mass energy $1\, M_\odot c^2 = 2\times 10^{54}$ erg and a black hole with a comparable spin energy, for the collapsar model -- Woosley 1993) and on the efficiency at which the available energy is extracted and deposited into highly relativistic ejecta [e.g. magnetohydrodynamical energy extraction is limited to 5.7 percent (for a non-rotating black hole) or 42 percent (for a maximally rotating black hole) of the torus gravitational binding energy, and to up to 29 percent of the black-hole mass]. For an accretion rate of $0.1\, M_\odot {\rm s^{-1}}$ and a black-hole spin parameter $a=0.95$, Popham, Woosley \& Fryer (1999) obtain a maximum of $10^{52.3}$ erg for the energy of the outflow resulting from the annihilation of neutrinos and antineutrinos produced by dissipation in the torus. 
Using general relativistic MHD simulations of tori accreting onto Kerr black holes, Krolik, Hawley \& Hirose (2005) find that, at the radius of marginal stability, the Poynting flux produced by the Blandford-Znajek mechanism carries 0.25 percent of the accreted mass-energy for $a=0.5$ and 1 percent for $a=0.9$, rising to $\sim 10$ percent for $a=0.998$. Similar results, showing a jet efficiency that increases rapidly with the black-hole spin parameter, are obtained by McKinney (2005), who derives an upper limit of 6.8 percent on the jet efficiency for a maximally rotating black hole, corresponding to a jet energy of $10^{53.1}$ erg if the disk has a mass of $1\,M_\odot$. Thus, the jet energy expected in the collapsar model is less than $10^{53}$ erg, which implies that, if the sub-MeV emission of the GRBs listed in Table 1 was produced by upscattering of a lower energy emission, the peak energy $\varepsilon_p$ of the primary spectrum could not have been in the optical for all bursts (right panel of Figure \ref{f5}). Instead, if $\varepsilon_p$ were universal, the synchrotron peak energy must have been below 0.1 eV (left panel of Figure \ref{f5}) or above 10 eV, to yield lower limits on the jet energy that are sufficiently below the theoretical upper limit of $10^{53}$ erg. Values of $\varepsilon_p$ above 10 eV lead to even lower required jet energies, but then the 10 keV prompt flux could be dominated by the synchrotron emission instead of the first upscattering, and the resulting low-energy GRB spectrum would be softer than usually observed. For this reason, we consider that only $\varepsilon_p < 0.1$ eV is a possible solution for reducing the required jet energetics below the theoretical expectation for the collapsar model. 
\section{Conclusions} The occasional detection of an optical counterpart whose brightness exceeds the extrapolation of the GRB sub-MeV emission suggests that, if the two emissions arise from the same medium, the burst emission could be the first upscattering of the synchrotron spectrum that yields the optical counterpart. We find 10 such cases in a sample of 29 bursts with optical counterpart measurements, but the true fraction of over-bright optical counterparts could be larger because half of those 29 bursts have been observed only by Swift-BAT (Table 1), which underestimates the true hardness of the GRB spectral slope below the peak energy if that peak energy fell in BAT's relatively narrow bandpass. A straightforward expectation of the synchrotron self-Compton model for GRBs is that the upscattering of the burst photons yields a GeV--TeV prompt emission, whose brightness is found to depend strongly on the peak energy of the synchrotron spectrum. In this paper, we provided the formalism by which the GeV prompt emission from the second scattering is related to the sub-MeV emission from the first scattering and the optical emission from synchrotron, and applied that formalism to bursts with optical counterpart measurements, to estimate the expected GeV prompt fluxes, the bolometric GRB output and energetics, and the correlation of the expected GeV fluence with the burst and optical brightnesses. The synchrotron self-Compton model can be tested in the following ways. The measurement by Fermi-LAT or Agile-GRID of the 0.1--100 GeV fluence of the emission produced during the burst by the second upscattering, combined with the properties (peak flux and energy) of the prompt burst emission produced by the first upscattering, leads to two solutions for the location of the synchrotron peak energy (equation \ref{PhiGeV}).
The lower energy solution corresponds to a soft optical spectrum and a hard GeV spectrum, while the higher energy solution is identified with a hard, self-absorbed optical spectrum and a soft (falling) GeV spectrum. Future multicolour measurements of the optical prompt emission obtained by fast-response telescopes and measurements of the GeV emission by high-energy satellites will allow a test of consistency between the observed optical and GeV spectra and the above-mentioned model expectations. A stronger test will be possible if the peak energy of the second scattering emission spectrum is also determined, as the GeV flux and peak energy provide two independent determinations of the synchrotron spectral peak, for given properties of the $\gamma$-ray spectrum (see Figure \ref{f2}). From the expected GeV output and using the constraints on the outflow opening angle set by the afterglow optical light-curve, we have calculated lower limits on the collimated radiation output and the initial jet energy for 34 bursts with optical counterpart measurements and redshifts. The resulting jet energies are lower limits because one third of the optical counterpart measurements are upper limits, which lead to lower limits on the GeV output, and because most of the available coverage of optical afterglows sets only lower limits on the jet-break time, leading to lower limits on the jet half-angle. Figure \ref{f5} shows that the resulting lower limits on the double-jet initial energy range over two decades, with the largest value ($10^{53}$ erg) being obtained if the peak energy of the synchrotron spectrum is close to the optical (right panel), and with the lower limit on the jet energy decreasing by a factor of 10 for a factor of 100 decrease of the synchrotron peak energy (left panel).
Thus, the energetics required by the synchrotron self-Compton model for GRBs and the upper limit of $\sim 10^{53}$ erg expected for jets produced after the core-collapse of massive stars indicate that the peak energy of the synchrotron spectrum should often be well below the optical. For the 29 bursts with optical counterpart measurements, we find that, if the unknown peak energy of the synchrotron spectrum does not have a very wide distribution, the brightness of the second inverse-Compton scattering remains {\sl correlated} with the flux of the first upscattering and {\sl anticorrelated} with that of the primary synchrotron spectrum. Thus, the synchrotron self-Compton model for GRBs will be {\sl invalidated} if Fermi-LAT detects GeV prompt emission consistent with the extrapolation of the burst spectrum for bursts that are bright at sub-MeV energies and dim in the optical. This test of the synchrotron self-Compton model for GRBs applies only if GeV prompt photons are detected, because the lack of such detections in a given burst may not necessarily imply low GeV prompt fluxes, but could instead be due to the source being optically thick to photon-photon attenuation. \section*{Acknowledgments} The author acknowledges the great help in collecting the optical counterpart measurements provided by the {\sl GRBlog} site at {\sl http://grad40.as.utexas.edu} created by Robert Quimby and maintained together with Erin McMahon and Jeremy Murphy. \\ This work was supported by the US Department of Energy through the LANL/LDRD 20080039DR program.
\section{Introduction} In this paper, we present a new framework for idempotent analysis over tropical semirings based on a notion called tropical projection. The background of this paper comes from a few areas with intrinsic connections: \begin{enumerate} \item \emph{Idempotent analysis.} Idempotent analysis is an analysis theory over idempotent semirings, developed by Maslov and his collaborators in the 1980s, originally as a framework to describe the semiclassical limit of quantum mechanics via Maslov dequantization \cite{KM97,LM05}. Equivalent under negation, the min-plus algebra $({\mathbb R}_{\min},\odot,\oplus)$ and the max-plus algebra $({\mathbb R}_{\max},\odot,\boxplus)$, where ${\mathbb R}_{\min}={\mathbb R}\cup\{+\infty\}$, ${\mathbb R}_{\max}={\mathbb R}\cup\{-\infty\}$, $a\odot b:=a+b$, $a\oplus b:=\min(a,b)$ and $a\boxplus b:=\max(a,b)$, are the most intensively studied idempotent semirings. ($({\mathbb R}_{\min},\odot,\oplus)$ and $({\mathbb R}_{\max},\odot,\boxplus)$ are also commonly called tropical semirings nowadays, and sometimes the additive identities $+\infty$ and $-\infty$ are ignored.) Under a correspondence principle of idempotent analysis \cite{LM98}, many important results over the field of real or complex numbers have counterparts over idempotent semirings; perhaps the most striking instance is that the Legendre transform is the counterpart of the Fourier transform in idempotent analysis. \item \emph{Tropical geometry and tropical convexity.} As a rapidly developing subject in mathematics, tropical geometry is a theory of geometry over tropical semirings which can be described as a piecewise linear version of algebraic geometry \cite{MS15, Viro08}.
A framework for tropical geometry has been developed by Kontsevich-Soibelman \cite{KS01} and Mikhalkin \cite{Mikhalkin05} with applications in enumerative algebraic geometry and homological mirror symmetry, and another framework has been developed by Baker-Payne-Rabinoff \cite{BPR16,BPR13} based on Berkovich analytification \cite{Berkovich12}. The ``linearity'' features of tropical geometry can be captured by the notion of tropical convexity discussed by Develin and Sturmfels \cite{DS04}, which is intrinsically related to max-linear (or min-linear) systems \cite{Butkovivc10}. \item \emph{Chip-firing games, the divisor theory on finite graphs/metric graphs and reduced divisors.} The chip-firing game on a graph (or the abelian sandpile model) has become a huge topic in combinatorics \cite{CP18}, partly because it is related to many areas of mathematics and theoretical physics. Baker and Norine \cite{BN07} discovered that the classical Riemann--Roch theorem and Abel-Jacobi theory for algebraic curves have analogues for finite graphs, motivated by degenerating divisors on an algebraic curve $C$ to divisors on the dual graph of the special fiber of a semistable model of $C$. Their result was immediately extended to the context of abstract tropical curves, which are essentially metric graphs \cite{GK08}. It was soon realized that this remarkable work fits into the world of tropical geometry under the framework of \cite{BPR16}, and tropical proofs of some theorems in conventional algebraic geometry have been obtained. For example, Cools, Draisma, Payne and Robeva \cite{CDPR12} derived a new proof of the Brill--Noether Theorem based on Baker's specialization lemma \cite{Baker08} and an explicit computation of the Baker-Norine combinatorial rank function for a special type of metric graphs.
Note that because the degeneration of divisors on algebraic curves to finite graphs or metric graphs has some subtlety, even though the graph-theoretical Riemann--Roch theorem in \cite{BN07} is formulated after the algebro-geometric Riemann--Roch theorem, one can hardly borrow techniques from algebraic geometry, and so far there are only purely combinatorial proofs of the graph-theoretical Riemann--Roch theorem. Actually the divisor theory on finite graphs or metric graphs is closely related to chip-firing games, and the most important tool in Baker-Norine's original proof of the graph-theoretical Riemann--Roch theorem and in many follow-up works is a notion called reduced divisors (also named G-parking functions in combinatorics \cite{PS04}). \end{enumerate} Let $X$ be a topological space and $BC(X)$ be the space of all bounded and continuous real functions on $X$. Unlike most other works, which equip ${\mathbb R}$ (when the additive identities $\pm\infty$ are ignored) with only one of the min-plus and max-plus algebras, we put both $\oplus$ and $\boxplus$ (called the lower tropical addition and the upper tropical addition respectively) into our scenario. Then operations on $BC(X)$ can be induced from operations on ${\mathbb R}$. For $f,g\in BC(X)$ and $c\in{\mathbb R}$, the lower tropical addition $f\oplus g:=\min(f,g)$, upper tropical addition $f\boxplus g := \max(f,g)$ and tropical scalar multiplication $c\odot f :=c+f$ are simply defined by pointwise operations (see details in Subsection~\ref{SS:TOper}). The tropical projective space $\mathbb{TP}(X)$ is defined as $BC(X)$ modulo tropical scalar multiplication. Also, a subspace of $BC(X)$ closed under tropical scalar multiplication and lower tropical addition (respectively upper tropical addition), taken modulo tropical scalar multiplication, is a subspace of $\mathbb{TP}(X)$ which is said to be lower tropically convex (respectively upper tropically convex).
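For the toy case where $X$ is a finite set (so a function in $BC(X)$ is just a tuple of reals), the pointwise tropical operations and the passage to $\mathbb{TP}(X)$ can be sketched in a few lines of Python; the function names below are ours, not the paper's.

```python
# Minimal sketch (ours) of the pointwise tropical operations on functions
# over a finite set X, encoded as tuples of floats.

def t_lower_add(f, g):          # f (+) g = min(f, g) pointwise
    return tuple(min(a, b) for a, b in zip(f, g))

def t_upper_add(f, g):          # f [+] g = max(f, g) pointwise
    return tuple(max(a, b) for a, b in zip(f, g))

def t_scale(c, f):              # c (.) f = c + f pointwise
    return tuple(c + a for a in f)

def projectivize(f):            # canonical representative of [f]: shift min to 0
    m = min(f)
    return tuple(a - m for a in f)

f, g = (0.0, 3.0, 1.0), (2.0, 1.0, 4.0)
print(t_lower_add(f, g))        # (0.0, 1.0, 1.0)
print(t_upper_add(f, g))        # (2.0, 3.0, 4.0)
# tropical scalar multiplication acts trivially on TP(X):
print(projectivize(t_scale(5.0, f)) == projectivize(f))   # True
```

The last line illustrates why $\mathbb{TP}(X)=BC(X)$ modulo tropical scalar multiplication: shifting a function by a constant does not change its class.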
Note that conventionally in most works on tropical convexity, only the finite-dimensional case is explored, i.e., when $X$ is a finite set equipped with the discrete topology in our setting. Here we impose no such restriction. Moreover, $\mathbb{TP}(X)$ is a normed space whose norm is naturally defined as $\Vert [f]\Vert=\max(f)-\min(f)$ where $[f]\in\mathbb{TP}(X)$ is the equivalence class of $f\in BC(X)$ (Definition~\ref{D:TropNorm}). Actually $(\mathbb{TP}(X),\Vert\cdot \Vert)$ is also a Banach space (Proposition~\ref{P:TPBanach}). For an element $\gamma\in \mathbb{TP}(X)$ and a compact subset $T$ of $\mathbb{TP}(X)$ which is lower (or upper) tropically convex, unlike the case in conventional convex geometry, it is not generally true that the minimizer of the distance function between elements in $T$ and $\gamma$ is a singleton. Because of the degenerate nature of ``tropicalization'', similar phenomena are actually quite common in the ``tropical world''. In other words, one may only be able to develop a coarse analysis theory from the notion of the tropical norm. We resolve this non-uniqueness issue by introducing a notion of $B$-pseudonorms, which enables us to build a more refined analysis theory on a tropical projective space, as desired. More precisely, the $B$-pseudonorms on $\mathbb{TP}(X)$ with respect to some Borel measure $\mu$ on $X$ (Definition~\ref{D:pseudonorm}) are a series of pseudonorms $\llfloor \cdot \rrfloor_p$ and $\llceil \cdot \rrceil_p$ for $p\in[1,\infty]$. Here ``pseudo'' means not necessarily symmetric, i.e., in general $\llfloor \alpha \rrfloor_p\neq \llfloor -\alpha \rrfloor_p$ and $\llceil \alpha \rrceil_p\neq \llceil -\alpha \rrceil_p$ for $p\in[1,\infty)$. But we have $\llfloor \alpha \rrfloor_p = \llceil -\alpha \rrceil_p$ and $\llfloor \cdot \rrfloor_\infty=\llceil \cdot \rrceil_\infty=\Vert \cdot \Vert$. One of the main results of this paper is the following theorem.
(\textbf{A Restatement of Theorem~\ref{T:main}}) For a compact lower (respectively upper) tropically convex subset $T$ of $\mathbb{TP}(X)$ and an arbitrary element $\gamma$ in $\mathbb{TP}(X)$, there exists a unique element $\underline{\pi}_T(\gamma)$ (respectively $\overline{\pi}_T(\gamma)$) in $T$ which minimizes all the $B$-pseudonorm functions $\alpha\mapsto\llfloor \alpha-\gamma\rrfloor_p$ (respectively $\alpha\mapsto\llceil \alpha-\gamma\rrceil_p$) with $\alpha\in T$ for $p\in[1,\infty)$. Note that in the above theorem, $\underline{\pi}_T(\gamma)$ and $\overline{\pi}_T(\gamma)$ do not depend on $p$ as long as $p\in[1,\infty)$, and they are called the lower and upper tropical projections of $\gamma$ to $T$ respectively. Actually such tropical projections are even more intrinsic. Using the criteria of tropical projections in Corollary~\ref{C:CritTropProj}, we see that $\underline{\pi}_T(\gamma)$ and $\overline{\pi}_T(\gamma)$ can be characterized purely set-theoretically, which means that $\underline{\pi}_T(\gamma)$ and $\overline{\pi}_T(\gamma)$ are even independent of the underlying measure $\mu$ on $X$ (Remark~\ref{R:TropProjMeas}). Recall that the notion of $b$-functions is introduced in \cite{BS13}, where reduced divisors can be characterized as the minimizers of $b$-functions. In our scenario, $b$-functions are special cases of $B$-pseudonorms and thus reduced divisors are special cases of tropical projections (see detailed discussions in Subsection~\ref{SS:bFuncBPseudoNorm}). Just as techniques based on reduced divisors have shown tremendous power in studying the divisor theory on finite graphs and metric graphs, the notion of tropical projections and its features are also extremely useful for exploring the theory of tropical convexity analysis.
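The non-uniqueness of tropical-norm nearest points mentioned above, which motivates the introduction of $B$-pseudonorms, can already be seen for a three-point $X$. The Python sketch below (all example data is ours, chosen for illustration) exhibits a point $\gamma$ and a lower tropical segment $T$ for which a whole one-parameter family of points of $T$ realizes the minimal tropical distance.

```python
# Sketch (ours) of non-uniqueness of nearest points under the tropical norm.
# X = {1,2,3}; classes in TP(X) are 3-tuples normalized so that min = 0.
# T is the lower tropical segment between u and v: points [min(u, b + v)].

def normalize(f):                         # canonical representative of [f]
    m = min(f)
    return tuple(round(a - m, 9) for a in f)

def trop_dist(f, g):                      # || [f] - [g] || = max(f-g) - min(f-g)
    d = [a - b for a, b in zip(f, g)]
    return max(d) - min(d)

u, v, gamma = (0.0, 0.0, 4.0), (0.0, 4.0, 0.0), (0.0, 2.0, 2.0)

# sample the segment: [min(u, b + v)] for b in R (WLOG the coefficient on u is 0)
segment = {normalize(tuple(min(x, b + y) for x, y in zip(u, v)))
           for b in [k / 10 for k in range(-80, 81)]}

dists = {w: trop_dist(w, gamma) for w in segment}
best = min(dists.values())
minimizers = [w for w, d in dists.items() if abs(d - best) < 1e-9]
print(best, len(minimizers))    # minimal distance 2.0, attained at many points
```

Here the minimizing set consists of the two sub-segments $\{(0,t,0)\}_{t\in[0,2]}$ and $\{(0,0,t)\}_{t\in[0,2]}$, all at tropical distance 2 from $\gamma$; the theorem above selects a single point among them via the $B$-pseudonorms.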
Here we summarize some of the results that we have derived: \begin{enumerate} \item We show that for a lower (respectively upper) tropically convex set $W$ and a compact lower (respectively upper) tropically convex subset $T$ of $W$, there is always a strong deformation retraction from $W$ to $T$ such that the intermediate sets are also lower (respectively upper) tropically convex (Theorem~\ref{T:TropRetract}). A direct corollary is that $W$ itself must be contractible (Corollary~\ref{C:TropContr}). \item We propose a systematic approach for constructing compact tropically convex sets (Theorem~\ref{T:construction}), with the consequence that finitely generated tropical convex hulls (also called tropical polytopes) are always compact (Corollary~\ref{C:Polytope}). Moreover, we prove the following tropical version of Mazur's Theorem on conventional closed convex hulls (Theorem~\ref{T:TropMazur}): the closed tropical convex hull of a compact subset of $\mathbb{TP}(X)$ is also compact. \item We study several notions of independence (Definition~\ref{D:TropIndep} and Remark~\ref{R:TropIndep}) in the context of tropical projective spaces, including tropical weak independence, Gondran-Minoux independence and tropical independence (the last one is essential in Jensen-Payne's tropical proofs of the Gieseker-Petri Theorem \cite{JP14} and the maximal rank conjecture for quadrics \cite{JP16}). In particular, we provide a purely set-theoretical criterion for tropical weak independence (Theorem~\ref{T:CriterionTropIndep}) and discuss the extremals of tropically convex sets (Theorem~\ref{T:extremal}). \item We prove the following fixed-point theorem about tropical projections (Theorem~\ref{T:FixedPoint}): the tropical projections bouncing back and forth between a compact lower tropically convex set and a compact upper tropically convex set stabilize after at most two steps.
Note that unlike most of the other results in this paper, which have statements for lower and upper tropical convexity separately, this fixed-point theorem involves both lower and upper tropically convex sets. \end{enumerate} We apply this machinery of tropical convexity analysis to the divisor theory on metric graphs (or abstract tropical curves). \begin{enumerate} \item Using the potential theory on metric graphs (Appendix~\ref{S:potential}) and the language of chip-firing moves, we convert several definitions and statements in the previously developed theory of tropical convexity analysis to those in the context of the divisor theory on metric graphs, e.g., tropical convexity, tropical projection and its criterion (Theorem~\ref{T:TropProjDiv}), and tropical independence and its criterion (Theorem~\ref{T:FiniteCriterion}). \item In the divisor theory of algebraic curves, for each divisor $D$ on an algebraic curve, the complete linear system $|D|$ associated to $D$ is the projective space consisting of all effective divisors linearly equivalent to $D$, and a linear subspace of $|D|$ is called a linear system, whose rank is simply its dimension. However, different complete linear systems on different algebraic curves can degenerate as subsets of the same complete linear system on a single finite graph or metric graph. As a result, even though we may still define the complete linear system $|D|$ associated to a divisor $D$ on a metric graph as the set of all effective divisors linearly equivalent to $D$ (as in \cite{BN07} and all subsequent works), $|D|$ is not of pure dimension in general. This is an obstacle to defining linear systems as the ``linear'' subspaces of $|D|$ as in the case of algebraic curves. Here we note that $|D|$ is a tropical polytope and propose to define the linear systems on a metric graph as the tropical polytopes contained in complete linear systems (Definition~\ref{D:LinSys}).
\item For each point $q$ on a metric graph $\Gamma$ and each nonempty complete linear system $|D|$ on $\Gamma$, there exists a unique divisor in $|D|$ called the $q$-reduced divisor (Definition~\ref{D:RedDiv}), which is characterized in \cite{BS13} as the minimizer over $|D|$ of the so-called $b$-function with respect to $q$. Based on the observation that $b$-functions are just special $B$-pseudonorms, we give the following definition of reduced divisors for all linear systems (Definition~\ref{D:GeneralRed}), instead of only for complete ones as is conventional: the $q$-reduced divisor in a linear system $T$ is the tropical projection of $d\cdot(q)$ to $T$, where $d$ is the degree of $T$. In this sense, we naturally extend the notion of reduced divisor maps introduced in \cite{Amini13}, originally defined as maps from $\Gamma$ to a complete linear system $|D|$, to the more general setting where the target can be any linear system (Definition~\ref{D:RedDivMap}). \item We pay special attention to one-dimensional linear systems, which we call tropical trees (Definition~\ref{D:TropTree}). Using techniques based on tropical projections and reduced divisors, we provide criteria for (1) tropical trees (Proposition~\ref{P:TropTreeCrit}), (2) dominant tropical trees, which are tropical trees with support being the whole metric graph (Proposition~\ref{P:DomTreeCrit}), and (3) reduced divisor maps to dominant tropical trees (Proposition~\ref{P:CritRedDivMap}). We prove that a harmonic morphism \cite{BN09} from a modification of a metric graph $\Gamma$ to a metric tree essentially corresponds to the reduced divisor map from $\Gamma$ to a dominant tropical tree (Theorem~\ref{T:RedHarm}). \item Recall that the gonality of an algebraic curve is defined as the minimum degree of divisors of rank one or, equivalently, the minimum degree of finite maps from the algebraic curve to a projective line.
However, because of some subtlety in the process of divisor degeneration from curves to metric graphs, these two equivalent interpretations of curve gonality diverge into two non-equivalent notions of gonality: the divisorial gonality \cite{Baker08} and the stable gonality \cite{CKK15} for metric graphs. In particular, the divisorial gonality of $\Gamma$ is the minimum degree of divisors with Baker-Norine rank one, and the stable gonality of $\Gamma$ is the minimum degree of harmonic morphisms from modifications of $\Gamma$ to a metric tree (Definition~\ref{D:StaGonality}). Therefore, computing the stable gonality of $\Gamma$ is equivalent to finding the minimum degree of dominant tropical trees on $\Gamma$ (Proposition~\ref{P:SGon}). We mention that there is another recent independent work \cite{Kageyama18} which also computes stable gonality by examining linear systems. Other than the Baker-Norine combinatorial rank and the Caporaso algebraic rank \cite{Caporaso15}, we propose a new rank function for divisors on metric graphs, called the geometric rank, such that the stable gonality of $\Gamma$ can also be computed as the minimum degree of divisors of geometric rank one. \end{enumerate} Since idempotent mathematics has wide applications in optimization problems \cite{K94}, we expect that the notions of $B$-pseudonorms and tropical projections introduced in this paper open up new trends in idempotent optimization. Some of the results in this paper can be directly extended from tropical semirings to general idempotent semirings. In our follow-up work, we will investigate idempotent functional analysis and operator theory based on the notion of tropical projection. The rest of the paper can be divided into two parts. The first part consists of Sections~\ref{S:PreAna}-\ref{S:FixedPoint}, laying out the foundation of the theory of tropical convexity analysis built around tropical projections.
The second part consists of Section~\ref{S:AppMetGra}, in which we discuss in detail an application of our machinery to the divisor theory on metric graphs. More specifically, in Section~\ref{S:PreAna}, we give some preliminary results on the analysis of tropical projective spaces; in Section~\ref{S:Tconvexity}, we discuss the notion of tropical convexity and several types of independence in the context of tropical projective spaces; in Section~\ref{S:Bpseudonorm}, we define $B$-pseudonorms on tropical projective spaces and discuss the main theorem about tropical projections; in Section~\ref{S:TropRetract}, we investigate deformation retracts to tropically convex sets; in Section~\ref{S:Compact}, we provide a general approach for constructing compact tropically convex sets and prove the tropical version of Mazur's theorem on closed convex hulls; in Section~\ref{S:TropIndep}, we provide a set-theoretical criterion for tropical independence; in Section~\ref{S:FixedPoint}, we give a fixed-point theorem for tropical projections. Section~\ref{S:AppMetGra} is about the application of the whole machinery to the divisor theory on metric graphs: there we discuss the notions of tropical convexity, tropical projection and tropical independence on the divisor space of a metric graph, define a general notion of linear systems of divisors, define a notion of tropical trees as $1$-dimensional linear systems, study the relation between $b$-functions and $B$-pseudonorms, show that reduced divisors are essentially special tropical projections, give a definition of reduced divisors to linear systems in general, prove that harmonic morphisms from a metric graph to a metric tree are essentially reduced divisor maps to tropical trees, and finally introduce a new rank function, called the geometric rank function, on the space of divisors. \section{Preliminary Analysis on Tropical Projective Spaces} \label{S:PreAna} Let $X$ be a topological space.
Let $BC(X)$ be the real linear space of all bounded and continuous real functions on $X$. Let $BC^0(X)$ be the linear subspace of $BC(X)$ consisting of those bounded and continuous functions whose infimum and supremum on $X$ are both attained. Throughout this paper, we denote the infimum, supremum, minimum (when existing) and maximum (when existing) of a real-valued function $f$ on $X$ by $\inf(f)$, $\sup(f)$, $\min(f)$ and $\max(f)$ respectively. By abuse of notation, we let $\inf_{i\in I}(f_i)$ (respectively $\sup_{i\in I}(f_i)$) be the function on $X$ whose value at $x\in X$ is the infimum (respectively supremum) of $\{f_i(x)\}_{i\in I}$, and let $\min(f_1,\cdots,f_n)$ (respectively $\max(f_1,\cdots, f_n)$) be the function on $X$ whose value at $x\in X$ is the minimum (respectively maximum) of $f_1(x),\cdots,f_n(x)$. Moreover, we sometimes simply write the constant function on $X$ with value $c$ as $c$. Let $\Vert \cdot \Vert_\infty$ be the uniform norm on $BC(X)$. Recall that $(BC(X),\Vert \cdot \Vert_\infty)$ is a Banach space. \begin{lemma} \label{L:BCBC0} $(BC(X),\Vert \cdot \Vert_\infty)$ is the completion of $(BC^0(X),\Vert \cdot \Vert_\infty)$. \end{lemma} \begin{proof} To show that $BC^0(X)$ is dense in $BC(X)$, we need to show that any $f\in BC(X)$ is the uniform limit of a sequence $f_1,f_2,\cdots$ in $BC^0(X)$. Consider $f\in BC(X)\setminus BC^0(X)$ whose infimum and supremum are $a$ and $b$ respectively. Then we can choose a decreasing sequence $a_1,a_2,\cdots$ converging to $a$ and an increasing sequence $b_1,b_2,\cdots$ converging to $b$ such that $a_1\leq b_1$, taking $a_n>a$ whenever the infimum of $f$ is not attained and $b_n<b$ whenever the supremum of $f$ is not attained. Let $f_n=\max(a_n,\min(b_n,f))$. Then $f_n$ attains the minimum value $a_n$ and the maximum value $b_n$ and is clearly continuous, so $f_n\in BC^0(X)$. Moreover, the sequence $f_1,f_2,\cdots$ converges to $f$ uniformly. Therefore $BC^0(X)$ is dense in $BC(X)$.
\end{proof} We let $\mathbb{TP}^0(X):=BC^0(X)/\sim$ and $\mathbb{TP}(X):=BC(X)/\sim$ where $f_1\sim f_2$ if $f_1-f_2$ is a constant function. We call $\mathbb{TP}^0(X)$ and $\mathbb{TP} (X)$ the \emph{inner tropical projective space} and the \emph{(outer) tropical projective space} on $X$ respectively. Note that if $X$ is compact, then $BC(X)=BC^0(X)=C(X)$ and $\mathbb{TP}(X)=\mathbb{TP}^0(X)$ where $C(X)$ is the linear space of all continuous functions on $X$. For $f\in BC(X)$ and $V\subseteq BC(X)$, we have the following notations: \begin{enumerate} \item $[f]\in \mathbb{TP}(X)$ is the equivalence class of $f$ and we call $[f]$ the projectivization of $f$; \item $[V]:=\{[f]\mid f\in V\}$; \item $\underline{f}:=f-\inf(f)$ and $\overline{f}:=f-\sup(f)$; \item $X_{\min}(f):=f^{-1}(\inf(f))$ and $X_{\max}(f):=f^{-1}(\sup(f))$; \item For $\epsilon>0$, $X_{\min}^\epsilon(f):=\{x\in X\mid f(x)<\inf(f)+\epsilon\}$ and $X_{\max}^\epsilon(f):=\{x\in X\mid f(x)>\sup(f)-\epsilon\}$. \end{enumerate} We call $X_{\min}(f)$ and $X_{\max}(f)$ the minimizer and maximizer of $f$ respectively. Note that $X_{\min}(f)=\bigcap_{\epsilon>0} X_{\min}^\epsilon(f)$ and $X_{\max}(f)=\bigcap_{\epsilon>0} X_{\max}^\epsilon(f)$. Clearly if $f\in BC^0(X)$, then $X_{\max}(f)$ and $X_{\min}(f)$ are both nonempty. But if $f\in BC(X)\setminus BC^0(X)$, then at least one of $X_{\max}(f)$ and $X_{\min}(f)$ must be empty. Note that for each $g\in[f]$, $X_{\min}(f)=X_{\min}(g)$, $X_{\max}(f)=X_{\max}(g)$, $X_{\min}^\epsilon(f)=X_{\min}^\epsilon(g)$ and $X_{\max}^\epsilon(f)=X_{\max}^\epsilon(g)$. Hence by abuse of notation, we also write $X_{\min}([f]):=X_{\min}(f)$, $X_{\max}([f]):=X_{\max}(f)$, $X_{\min}^\epsilon([f]):=X_{\min}^\epsilon(f)$ and $X_{\max}^\epsilon([f]):=X_{\max}^\epsilon(f)$. 
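The minimizer/maximizer notation above can be made concrete for a finite $X$ with the discrete topology (so that every function lies in $BC^0(X)$); the following Python sketch uses our own naming, not the paper's.

```python
# Toy illustration (ours) of the minimizer/maximizer notation for a finite X,
# with a function f encoded as a dict from points of X to real values.

def X_min(f):                      # X_min(f) = f^{-1}(min f)
    m = min(f.values())
    return {x for x, v in f.items() if v == m}

def X_max(f):                      # X_max(f) = f^{-1}(max f)
    M = max(f.values())
    return {x for x, v in f.items() if v == M}

def X_min_eps(f, eps):             # X_min^eps(f) = {x : f(x) < min(f) + eps}
    m = min(f.values())
    return {x for x, v in f.items() if v < m + eps}

f = {'a': 0.0, 'b': 2.0, 'c': 0.0, 'd': 5.0}
print(X_min(f))                    # {'a', 'c'} (in some order)
print(X_max(f))                    # {'d'}
# the notation is invariant on the class [f]: shift f by a constant
g = {x: v + 7.0 for x, v in f.items()}
print(X_min(g) == X_min(f), X_max(g) == X_max(f))   # True True
```

This also illustrates why the notations $X_{\min}([f])$, $X_{\max}([f])$ and $X_{\min}^\epsilon([f])$ are well defined on equivalence classes.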
The following are several easily verifiable facts: \begin{enumerate} \item $\mathbb{TP}(X)$ and $\mathbb{TP}^0(X)$ are both real linear spaces with the zero element being the equivalence class $[0]$ of all constant functions on $X$ and $\lambda_1 [f_1]+\lambda_2 [f_2]:=[\lambda_1 f_1+\lambda_2 f_2]$ for all $\lambda_1,\lambda_2\in {\mathbb R}$; \item $-\overline{f}=\underline{-f}$; \item $X_{\min}(f)=X_{\min}(cf)$ and $X_{\max}(f)=X_{\max}(cf)$ for all $c>0$; \item $X_{\min}(f)=X_{\max}(-f)$; \item $X_{\min}^\epsilon(f)=X_{\min}^{c\epsilon}(cf)$ and $X_{\max}^\epsilon(f)=X_{\max}^{c\epsilon}(cf)$ for all $c>0$ and $\epsilon>0$; \item $X_{\min}^\epsilon(f)=X_{\max}^\epsilon(-f)$ for all $\epsilon>0$. \end{enumerate} Here we introduce a new notation ``$\Cap$''. For each $\alpha,\beta\in \mathbb{TP}(X)$, we say $X_{\min}(\alpha)\Cap X_{\min}(\beta)\neq\emptyset$ if for each $\epsilon>0$, $X_{\min}^\epsilon(\alpha)\bigcap X_{\min}^\epsilon(\beta)\neq\emptyset$, and say $X_{\min}(\alpha)\Cap X_{\min}(\beta)=\emptyset$ if there exists $\epsilon>0$ such that $X_{\min}^\epsilon(\alpha)\bigcap X_{\min}^\epsilon(\beta)=\emptyset$. Accordingly, we say $X_{\max}(\alpha)\Cap X_{\max}(\beta)\neq\emptyset$ if for each $\epsilon>0$, $X_{\max}^\epsilon(\alpha)\bigcap X_{\max}^\epsilon(\beta)\neq\emptyset$, and say $X_{\max}(\alpha)\Cap X_{\max}(\beta)=\emptyset$ if there exists $\epsilon>0$ such that $X_{\max}^\epsilon(\alpha)\bigcap X_{\max}^\epsilon(\beta)=\emptyset$. \begin{lemma} \label{L:XminXmax} For $\alpha,\beta\in \mathbb{TP}(X)$, if $X_{\min}(\alpha)\Cap X_{\min}(\beta)\neq\emptyset$, then $X_{\min}(\alpha)\bigcap X_{\min}(\beta)=X_{\min}(\alpha+\beta)$, and if $X_{\max}(\alpha)\Cap X_{\max}(\beta)\neq\emptyset$, then $X_{\max}(\alpha)\bigcap X_{\max}(\beta)=X_{\max}(\alpha+\beta)$.
\end{lemma} \begin{lemma} \label{L:XminXmaxTP0} For $\alpha,\beta\in \mathbb{TP}^0(X)$, \begin{enumerate} \item $X_{\min}(\alpha)\bigcap X_{\min}(\beta)=X_{\min}(\alpha+\beta)\neq\emptyset$ if and only if $X_{\min}(\alpha)\Cap X_{\min}(\beta)\neq\emptyset$; \item $X_{\min}(\alpha)\bigcap X_{\min}(\beta)=\emptyset$ if and only if $X_{\min}(\alpha)\Cap X_{\min}(\beta)=\emptyset$; \item $X_{\max}(\alpha)\bigcap X_{\max}(\beta)=X_{\max}(\alpha+\beta)\neq\emptyset$ if and only if $X_{\max}(\alpha)\Cap X_{\max}(\beta)\neq\emptyset$; \item $X_{\max}(\alpha)\bigcap X_{\max}(\beta)=\emptyset$ if and only if $X_{\max}(\alpha)\Cap X_{\max}(\beta)=\emptyset$. \end{enumerate} \end{lemma} \begin{proof} This can be easily verified from the definitions. \end{proof} \begin{lemma} \label{L:SpeIneq} Let $\alpha=[f],\beta=[g]\in \mathbb{TP}(X)$. Then \begin{enumerate} \item $\underline{f+g}\leq \underline{f}+\underline{g}$, with equality if and only if $X_{\min}(\alpha)\Cap X_{\min}(\beta)\neq\emptyset$; \item $\overline{f+g}\geq \overline{f}+\overline{g}$, with equality if and only if $X_{\max}(\alpha)\Cap X_{\max}(\beta)\neq\emptyset$. \end{enumerate} \end{lemma} \begin{proof} For (1), $\underline{f+g}=f+g-\inf(f+g)\leq f+g-\inf(f)-\inf(g)=\underline{f}+\underline{g}$, with equality if and only if $\inf(f+g)=\inf(f)+\inf(g)$, which holds if and only if $X_{\min}(\alpha)\Cap X_{\min}(\beta)\neq\emptyset$. For (2), $\overline{f+g}=f+g-\sup(f+g)\geq f+g-\sup(f)-\sup(g)=\overline{f}+\overline{g}$, with equality if and only if $\sup(f+g)=\sup(f)+\sup(g)$, which holds if and only if $X_{\max}(\alpha)\Cap X_{\max}(\beta)\neq\emptyset$. \end{proof} \begin{definition} \label{D:TropNorm} The \emph{tropical norm} on $\mathbb{TP}(X)$ is the function $\Vert \cdot \Vert:\mathbb{TP}(X)\to [0,\infty)$ defined by $\Vert [f] \Vert:=\sup(f)-\inf(f)=\sup(\underline{f})$ for all $[f]\in\mathbb{TP}(X)$.
\end{definition} \begin{lemma} \label{L:TropNorm} The tropical norm is a norm, i.e., \begin{enumerate} \item $\Vert c\alpha \Vert = |c| \Vert \alpha \Vert$ for all $c\in {\mathbb R}$ and $\alpha\in \mathbb{TP}(X)$. \item $\Vert \alpha \Vert = 0$ if and only if $\alpha=[0]$. \item $\Vert \alpha+\beta \Vert\leq \Vert \alpha\Vert +\Vert \beta \Vert$ for all $\alpha,\beta\in\mathbb{TP}(X)$, with equality if and only if $X_{\min}(\alpha)\Cap X_{\min}(\beta)\neq\emptyset$ and $X_{\max}(\alpha)\Cap X_{\max}(\beta)\neq\emptyset$. \end{enumerate} \end{lemma} \begin{proof} (1) and (2) are straightforward. For (3), let $\alpha=[f]$ and $\beta=[g]$. Then as in Lemma~\ref{L:SpeIneq}, $\inf(f+g)\geq\inf(f)+\inf(g)$, with equality if and only if $X_{\min}(\alpha)\Cap X_{\min}(\beta)\neq\emptyset$, and $\sup(f+g)\leq\sup(f)+\sup(g)$, with equality if and only if $X_{\max}(\alpha)\Cap X_{\max}(\beta)\neq\emptyset$. Therefore, $\Vert \alpha+\beta \Vert=\sup(f+g)-\inf(f+g)\leq \sup(f)+\sup(g)-\inf(f)-\inf(g)$, with equality if and only if $X_{\min}(\alpha)\Cap X_{\min}(\beta)\neq\emptyset$ and $X_{\max}(\alpha)\Cap X_{\max}(\beta)\neq\emptyset$. \end{proof} As usual, the tropical norm induces a metric $\rho$ on $\mathbb{TP}(X)$ where $\rho(\alpha,\beta)=\Vert \alpha- \beta \Vert$. In what follows, we always assume that $\mathbb{TP}(X)$ is equipped with this metric topology. \begin{lemma} \label{L:NormComparison} For $f\in BC(X)$, we have the following relations between $\Vert f\Vert_\infty$ and $\Vert[f]\Vert$: \begin{enumerate} \item $\Vert f \Vert_\infty =\Vert [f]\Vert$ if $\inf(f)=0$ or $\sup(f)=0$; \item $\Vert f \Vert_\infty \geq \Vert [f]\Vert$ if $\inf(f)\geq0$ or $\sup(f)\leq 0$; \item $\Vert f \Vert_\infty \leq \Vert [f]\Vert\leq 2 \Vert f \Vert_\infty$ if $\inf(f)\leq 0$ and $\sup(f)\geq 0$. \end{enumerate} \end{lemma} \begin{proof} This lemma can be easily verified.
\end{proof} \begin{proposition} \label{P:TPBanach} The normed space $(\mathbb{TP}(X),\Vert \cdot \Vert)$ is a real Banach space which is the completion of $(\mathbb{TP}^0(X),\Vert \cdot \Vert)$. \end{proposition} \begin{proof} We will use the inequalities in Lemma~\ref{L:NormComparison}. Let $\{[f_n]\}_n$ be a Cauchy sequence in $\mathbb{TP}(X)$. Then given any $\epsilon>0$, there exists $N>0$ such that $\Vert [f_n]-[f_m]\Vert<\epsilon$ for all $m,n\geq N$. Note that $\Vert \underline{f_n}-\underline{f_m} \Vert_\infty\leq \Vert [f_n]-[f_m]\Vert$ since $\inf(\underline{f_n}-\underline{f_m})\leq 0$ and $\sup(\underline{f_n}-\underline{f_m})\geq 0$. Therefore $\{\underline{f_n}\}_n$ is a Cauchy sequence in $BC(X)$ and converges to a function $f\in BC(X)$ since $(BC(X),\Vert \cdot \Vert_\infty)$ is complete. To show that $(\mathbb{TP}(X),\Vert \cdot \Vert)$ is a Banach space, it remains to show that $[f]$ is the limit of $\{[f_n]\}_n$, i.e., $\Vert [f_n]-[f]\Vert\to 0$ as $n\to \infty$. Note that it can be easily shown that $\inf:BC(X)\to {\mathbb R}$ is a continuous function and hence $\inf(f)=\lim\limits_{n\to \infty}\inf (\underline{f_n})=0$. It follows that $\Vert [f_n]-[f]\Vert \leq 2 \Vert \underline{f_n}-f \Vert_\infty\to 0$ as $n\to \infty$. Now to show that $(\mathbb{TP}^0(X),\Vert \cdot \Vert)$ is dense in $(\mathbb{TP}(X),\Vert \cdot \Vert)$, we note that $BC^0(X)$ is dense in $BC(X)$ by Lemma~\ref{L:BCBC0}. For each $[f]\in\mathbb{TP}(X)$, we choose a sequence $f_1,f_2,\cdots$ in $BC^0(X)$ converging to $f$ and claim that the sequence $[f_1],[f_2],\cdots$ in $\mathbb{TP}^0(X)$ converges to $[f]$. This follows from the fact that $\Vert [f_n]-[f]\Vert$ is always bounded by $2\Vert f_n-f\Vert_\infty$. 
\end{proof} \section{Tropical Convexity} \label{S:Tconvexity} \subsection{Tropical Operations} \label{SS:TOper} \begin{definition} For $c\in {\mathbb R}$ and elements $f,g\in {\mathbb R}^X$, we define the following tropical operations on ${\mathbb R}^X$: \begin{enumerate} \item \emph{lower tropical addition} $f\oplus g :=\min(f,g)$; \item \emph{upper tropical addition} $f\boxplus g :=\max(f,g)$; \item \emph{tropical scalar multiplication} $c\odot f :=c+f$; \item \emph{tropical multiplication} $f\otimes g:=f+g$ (if $c$ is also considered as a constant function, then $c\otimes f = c\odot f$); \item \emph{tropical division} $f\oslash g:=f-g$. \end{enumerate} The negation $-f=0\oslash f$ of $f$ is also called the \emph{tropical inverse} of $f$. \end{definition} \begin{lemma}\label{L:TropOper} The following are some properties of the above tropical operations. \begin{enumerate} \item All these tropical operations are closed in $BC(X)$. \item $\oplus$, $\boxplus$ and $\otimes$ are all commutative. \item $\oplus$ and $\boxplus$ are idempotent, i.e., $f\oplus f=f$ and $f\boxplus f = f$. \item If $f\leq g$ (i.e., $f(x)\leq g(x)$ for all $x\in X$), then $f\oplus g=f$ and $f\boxplus g=g$. \item $f\oplus(f\boxplus g) = f\boxplus(f\oplus g)=f$. \item $f\otimes(g\oplus h)=(f\otimes g)\oplus (f\otimes h)$. \item $f\otimes(g\boxplus h)=(f\otimes g)\boxplus (f\otimes h)$. \item $f\boxplus(g\oplus h)=(f\boxplus g)\oplus (f\boxplus h)$. \item $f\oplus(g\boxplus h)=(f\oplus g)\boxplus (f\oplus h)$. \item $(f\oplus g)\otimes (f\boxplus g)=f\otimes g$. \item $(-f)\oplus(-g)=-(f\boxplus g)$. \end{enumerate} \end{lemma} \begin{proof} (1)-(7) are straightforward to verify.
For (8) and (9), it can be verified that for each $x\in X$, $$(f\boxplus(g\oplus h))(x)=((f\boxplus g)\oplus (f\boxplus h))(x)= \begin{cases} f(x) & \mbox{if } g(x)\leq f(x)\ \mbox{or } h(x)\leq f(x) \\ g(x) &\mbox{if } f(x)\leq g(x)\leq h(x) \\ h(x) & \mbox{if } f(x)\leq h(x)\leq g(x) \end{cases}$$ and $$(f\oplus(g\boxplus h))(x)=((f\oplus g)\boxplus (f\oplus h))(x)= \begin{cases} f(x) & \mbox{if } g(x)\geq f(x)\ \mbox{or } h(x)\geq f(x) \\ g(x) &\mbox{if } f(x)\geq g(x)\geq h(x) \\ h(x) & \mbox{if } f(x)\geq h(x)\geq g(x). \end{cases}$$ For (10), we have $(f\oplus g)\otimes (f\boxplus g)=\min(f,g)+\max(f,g)=f+g=f\otimes g$. For (11), we have $(-f)\oplus(-g)=\min(-f,-g)=-\max(f,g)=-(f\boxplus g)$. \end{proof} \subsection{Tropical Paths and Segments} \begin{definition} \label{D:tpath} For each $\alpha=[f]$ and $\beta=[g]$ in $\mathbb{TP}(X)$, we define two types of \emph{tropical paths} from $\alpha$ to $\beta$. Let $d=\rho(\alpha,\beta)$ be the distance between $\alpha$ and $\beta$. \begin{enumerate} \item The \emph{lower tropical path} from $\alpha$ to $\beta$ is the injective map $P_{(\alpha,\beta)}:[0,d]\to \mathbb{TP}(X)$ given by $t\mapsto [\min(t,\underline{g-f})+f]$. \item The \emph{upper tropical path} from $\alpha$ to $\beta$ is the injective map $P^{(\alpha,\beta)}:[0,d]\to \mathbb{TP}(X)$ given by $t\mapsto [\max(-t,\overline{g-f})+f]$. \item Both lower and upper tropical paths are called \emph{tropical paths}. \end{enumerate} Respectively, we define two types of \emph{tropical segments} (also called \emph{t-segments}) connecting $\alpha$ and $\beta$ as follows. \begin{enumerate} \item The \emph{lower tropical segment} $\underline{\mathrm{tconv}}(\alpha,\beta)$ connecting $\alpha$ and $\beta$ is the image of $P_{(\alpha,\beta)}$ in $\mathbb{TP}(X)$; \item The \emph{upper tropical segment} $\overline{\mathrm{tconv}}(\alpha,\beta)$ connecting $\alpha$ and $\beta$ is the image of $P^{(\alpha,\beta)}$ in $\mathbb{TP}(X)$.
\item Both lower and upper tropical segments are called \emph{tropical segments}. \end{enumerate} \end{definition} \begin{remark} \label{R:TropPath} Clearly, $P_{(\alpha,\beta)}(0)=P^{(\alpha,\beta)}(0)=\alpha$ and $P_{(\alpha,\beta)}(d)=P^{(\alpha,\beta)}(d)=\beta$ by definition. Tropical paths can be translated by any $\gamma\in \mathbb{TP}(X)$ as follows: for each $t\in[0,d]$, $P_{(\alpha+\gamma,\beta+\gamma)}(t)=P_{(\alpha,\beta)}(t)+\gamma$ and $P^{(\alpha+\gamma,\beta+\gamma)}(t)=P^{(\alpha,\beta)}(t)+\gamma$. Furthermore, we can scale tropical paths as follows: for each $t\in[0,d]$ and $c\in[0,\infty)$, $P_{(c\alpha,c\beta)}(ct)=cP_{(\alpha,\beta)}(t)$ and $P^{(c\alpha,c\beta)}(ct)=cP^{(\alpha,\beta)}(t)$. \end{remark} \begin{remark} If $\alpha,\beta\in \mathbb{TP}^0(X)$, then the tropical segments $\underline{\mathrm{tconv}}(\alpha,\beta)$ and $\overline{\mathrm{tconv}}(\alpha,\beta)$ are both contained in $\mathbb{TP}^0(X)$. \end{remark} \begin{lemma} \label{L:TropLinear} For each $\alpha=[f]$ and $\beta=[g]$ in $\mathbb{TP}(X)$, $\underline{\mathrm{tconv}}(\alpha,\beta)=\{[a\odot f \oplus b\odot g]\mid a, b\in {\mathbb R}\}$ and $\overline{\mathrm{tconv}}(\alpha,\beta)=\{[a\odot f \boxplus b\odot g]\mid a, b\in {\mathbb R}\}$. Moreover, $\underline{\mathrm{tconv}}(\alpha,\beta)=\underline{\mathrm{tconv}}(\beta,\alpha)$ and $\overline{\mathrm{tconv}}(\alpha,\beta)=\overline{\mathrm{tconv}}(\beta,\alpha)$. \end{lemma} \begin{proof} Let $u=\inf(g-f)$ and $v=\sup(g-f)$. Note that $c\odot f\oplus g = c\odot f$ if and only if $c\odot f\boxplus g = g$ if and only if $c\leq u$, and $c\odot f\oplus g = g$ if and only if $c\odot f\boxplus g = c\odot f$ if and only if $c\geq v$. Thus $\{[a\odot f \oplus b\odot g]\mid a, b\in {\mathbb R}\}=\{[c\odot f \oplus g]\mid c\in [u,v]\}$ and $\{[a\odot f \boxplus b\odot g]\mid a, b\in {\mathbb R}\}=\{[c\odot f \boxplus g]\mid c\in [u,v]\}$. 
On the other hand, $\underline{\mathrm{tconv}}(\alpha,\beta)=\{[\min(t,\underline{g-f})+f]\mid t\in[0,v-u]\}$ and $\overline{\mathrm{tconv}}(\alpha,\beta)=\{[\max(-t,\overline{g-f})+f]\mid t\in[0,v-u] \}$. Now $\min(t,\underline{g-f})+f=\min(t,g-f-u)+f=\min(t+f,g-u)=t\odot f \oplus (-u)\odot g \sim (t+u)\odot f\oplus g$ and $\max(-t,\overline{g-f})+f=\max(-t,g-f-v)+f=\max(f-t,g-v)=(-t)\odot f \boxplus (-v)\odot g\sim (v-t)\odot f \boxplus g$. Since $t\in[0,v-u]$, we have $t+u,v-t\in[u,v]$. Thus we get $\underline{\mathrm{tconv}}(\alpha,\beta)=\{[c\odot f \oplus g]\mid c\in [u,v]\}=\{[a\odot f \oplus b\odot g]\mid a, b\in {\mathbb R}\}$ and $\overline{\mathrm{tconv}}(\alpha,\beta)=\{[c\odot f \boxplus g]\mid c\in [u,v]\}=\{[a\odot f \boxplus b\odot g]\mid a, b\in {\mathbb R}\}$. The commutativity of $\underline{\mathrm{tconv}}$ and $\overline{\mathrm{tconv}}$ also follows easily. \end{proof} \begin{remark} Lemma~\ref{L:TropLinear} actually says that the lower (respectively upper) tropical segment between $\alpha=[f]$ and $\beta=[g]$ is the projectivization of the lower (respectively upper) tropical linear space spanned by $f$ and $g$. \end{remark} \begin{lemma} \label{L:neg_tseg} For $\alpha,\beta$ in $\mathbb{TP}(X)$ with $d=\rho(\alpha,\beta)$, we have $-P_{(\alpha,\beta)}(t)=P^{(-\alpha,-\beta)}(t)$ for all $t\in[0,d]$. Moreover, $-\underline{\mathrm{tconv}}(\alpha,\beta)=\overline{\mathrm{tconv}}(-\alpha,-\beta)$. In other words, the tropical inverse of a lower tropical segment is an upper tropical segment and vice versa. \end{lemma} \begin{proof} Suppose $\alpha=[f]$ and $\beta=[g]$. Then $-P_{(\alpha,\beta)}(t)=-[\min(t,\underline{g-f})+f]=[-\min(t,\underline{g-f})-f]=[\max(-t,\overline{(-g)-(-f)})+(-f)]=P^{(-\alpha,-\beta)}(t)$. It follows that $-\underline{\mathrm{tconv}}(\alpha,\beta)=\overline{\mathrm{tconv}}(-\alpha,-\beta)$.
\end{proof} We will also use the following notation for tropical segments, which are respectively called closed, open and half-closed half-open (upper or lower) tropical segments, following the convention for intervals on the real line: \begin{align*} \underline{[\alpha,\beta]}&=\underline{\mathrm{tconv}}(\alpha,\beta),\ \overline{[\alpha,\beta]}=\overline{\mathrm{tconv}}(\alpha,\beta),\\ \underline{(\alpha,\beta)}&=\underline{\mathrm{tconv}}(\alpha,\beta)\setminus\{\alpha,\beta\},\ \overline{(\alpha,\beta)}=\overline{\mathrm{tconv}}(\alpha,\beta)\setminus\{\alpha,\beta\}, \\ \underline{(\alpha,\beta]}&=\underline{\mathrm{tconv}}(\alpha,\beta)\setminus\{\alpha\},\ \overline{(\alpha,\beta]}=\overline{\mathrm{tconv}}(\alpha,\beta)\setminus\{\alpha\}, \\ \underline{[\alpha,\beta)}&=\underline{\mathrm{tconv}}(\alpha,\beta)\setminus\{\beta\},\ \overline{[\alpha,\beta)}=\overline{\mathrm{tconv}}(\alpha,\beta)\setminus\{\beta\}. \end{align*} Note that unless otherwise specified, tropical segments are understood to be closed by default. \begin{proposition} \label{P:TropSeg} Let $\alpha,\beta$ be elements in $\mathbb{TP}(X)$ with $d=\rho(\alpha,\beta)$. We summarize some properties of tropical paths and segments as follows. \begin{enumerate} \item For each $\alpha', \beta'$ in $\underline{[\alpha,\beta]}$ (respectively in $\overline{[\alpha,\beta]}$), we have $\underline{[\alpha',\beta']} \subseteq \underline{[\alpha,\beta]}$ (respectively $\overline{[\alpha',\beta']} \subseteq \overline{[\alpha,\beta]}$). \item $P_{(\alpha,\beta)}(t)=P_{(\beta,\alpha)}(d-t)$ (respectively $P^{(\alpha,\beta)}(t)=P^{(\beta,\alpha)}(d-t)$) for all $t\in[0,d]$. \item $P_{(\alpha,\beta)}$ (respectively $P^{(\alpha,\beta)}$) is an isometry from $[0,d]$ to $\underline{[\alpha,\beta]}$ (respectively $\overline{[\alpha,\beta]}$). \item $\underline{[\alpha,\beta]}$ and $\overline{[\alpha,\beta]}$ are compact (and thus closed and bounded) subsets of $\mathbb{TP}(X)$.
\item The following are equivalent: \begin{enumerate} \item $\gamma\in \underline{[\alpha,\beta]}$ (respectively $\gamma\in \overline{[\alpha,\beta]}$); \item $\underline{[\alpha,\beta]}=\underline{[\alpha,\gamma]}\bigcup \underline{[\gamma,\beta]}$ \\(respectively $\overline{[\alpha,\beta]}=\overline{[\alpha,\gamma]}\bigcup \overline{[\gamma,\beta]}$); \item $X_{\min}(\alpha-\gamma)\bigcup X_{\min}(\beta-\gamma)=X$ \\(respectively $X_{\max}(\alpha-\gamma)\bigcup X_{\max}(\beta-\gamma)=X$). \end{enumerate} \item The intersection of any two lower (respectively upper) tropical segments is a lower (respectively upper) tropical segment. \item For two lower tropical segments $\underline{[\alpha_1,\beta_1]}$ and $\underline{[\alpha_2,\beta_2]}$, if $\beta_1\in \underline{(\alpha_2,\beta_2]}$ and $\alpha_2\in \underline{[\alpha_1,\beta_1)}$, then $\underline{[\alpha_1,\beta_1]}\bigcup\underline{[\alpha_2,\beta_2]}=\underline{[\alpha_1,\beta_2]}$. Respectively, for two upper tropical segments $\overline{[\alpha_1,\beta_1]}$ and $\overline{[\alpha_2,\beta_2]}$, if $\beta_1\in \overline{(\alpha_2,\beta_2]}$ and $\alpha_2\in \overline{[\alpha_1,\beta_1)}$, then $\overline{[\alpha_1,\beta_1]}\bigcup\overline{[\alpha_2,\beta_2]}=\overline{[\alpha_1,\beta_2]}$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item Let $\alpha=[f]$ and $\beta=[g]$. For $\alpha',\beta'\in \underline{[\alpha,\beta]}$, we may assume that $\alpha'=P_{(\alpha,\beta)}(t_1)=[f']$ and $\beta'=P_{(\alpha,\beta)}(t_2)=[g']$ where $f'=\min(t_1,\underline{g-f})+f$ and $g'=\min(t_2,\underline{g-f})+f$ for $0\leq t_1\leq t_2\leq d$. Note that $g'-f'=\underline{g'-f'}=\min(t_2,\underline{g-f})-\min(t_1,\underline{g-f})$ and $\rho(\alpha',\beta')=\Vert g'-f'\Vert = \max(g'-f')=t_2-t_1$.
Therefore, for $0\leq t\leq t_2-t_1$, \begin{align*} P_{(\alpha',\beta')}(t)&=[\min(t,g'-f')+f']=[\min(t+\min(t_1,\underline{g-f}),\min(t_2,\underline{g-f}))+f]\\ &=[\min(t+t_1,\underline{g-f})+f]=P_{(\alpha,\beta)}(t+t_1) \end{align*} which implies $\underline{[\alpha',\beta']} \subseteq \underline{[\alpha,\beta]}$. An analogous argument holds for $\overline{[\alpha',\beta']} \subseteq \overline{[\alpha,\beta]}$. \item For $t\in[0,d]$, \begin{align*} & P_{(\alpha,\beta)}(t)=[\min(t,\underline{g-f})+f]=[\min(t+f,\underline{g-f}+f)]=[\min(t+f,g-\min(g-f))] \\ &=[\min(t+f,d+g+\min(f-g))]=[\min(d-t,f-g-\min(f-g))+g]=P_{(\beta,\alpha)}(d-t). \end{align*} An analogous argument holds for $P^{(\alpha,\beta)}(t)=P^{(\beta,\alpha)}(d-t)$. \item Following the argument in (1), $\rho(P_{(\alpha,\beta)}(t_1),P_{(\alpha,\beta)}(t_2))=\rho(P^{(\alpha,\beta)}(t_1),P^{(\alpha,\beta)}(t_2))=|t_2-t_1|$ for all $0\leq t_1,t_2\leq d$ and thus $\underline{[\alpha,\beta]}$ and $\overline{[\alpha,\beta]}$ are both isometric to the interval $[0,d]$. \item It follows from (3) straightforwardly. \item Here we just show it for $\underline{[\alpha,\beta]}$ and the case for $\overline{[\alpha,\beta]}$ follows from a similar argument. The equivalence of (a) and (b) is clear by (1). Let $\alpha=[f]$, $\beta=[g]$ and $\gamma = [h]$. Now suppose $\gamma\in \underline{[\alpha,\beta]}$. We may let $h=\min(t,\underline{g-f})+f$ for some $t\in[0,d]$. Then $X_{\min}(\alpha-\gamma)=X_{\min}(f-h)=X_{\min}(-\min(t,\underline{g-f}))=\{x\in X\mid \underline{g-f}(x)\geq t\}$ and $X_{\min}(\beta-\gamma)=X_{\min}(g-h)=X_{\min}(g-f-\min(t,\underline{g-f}))=X_{\min}(\max(t,\underline{g-f}))=\{x\in X\mid \underline{g-f}(x)\leq t\}$. Therefore $X_{\min}(\alpha-\gamma)\bigcup X_{\min}(\beta-\gamma) =X$. Conversely, suppose $X_{\min}(g-h)\bigcup X_{\min}(f-h)=X_{\min}(g-h)\bigcup X_{\max}(h-f) =X$. Let $t=\Vert h-f\Vert$.
Then $\underline{h-f}=\min(t,\underline{g-f})$ and thus $\gamma=[h]=[\min(t,\underline{g-f})+f]=P_{(\alpha,\beta)}(t)$. \item Let $T_1$ and $T_2$ be two lower (respectively upper) tropical segments with $T$ being their intersection. Then by (1), whenever $\alpha,\beta\in T$, we must have $\underline{[\alpha,\beta]}\subseteq T$ (respectively $\overline{[\alpha,\beta]}\subseteq T$). Since $T$ is compact, $T$ must be a lower (respectively upper) tropical segment. \item Again, we only show the case of lower tropical segments; the case of upper tropical segments follows analogously. It suffices to show $\beta_1,\alpha_2\in \underline{[\alpha_1,\beta_2]}$, and the statement will then follow from (1). Suppose $\rho(\alpha_1,\beta_1)=d_1$, $\rho(\alpha_2,\beta_2)=d_2$. Note that $X_{\min}(\beta_1-\alpha_2)=X_{\min}(\beta_2-\alpha_2)$ since $\beta_1\in \underline{(\alpha_2,\beta_2]}$. Then $X_{\min}(\alpha_1-\alpha_2)\bigcup X_{\min}(\beta_2-\alpha_2)=X_{\min}(\alpha_1-\alpha_2)\bigcup X_{\min}(\beta_1-\alpha_2)=X$ since $\alpha_2\in\underline{[\alpha_1,\beta_1]}$. Thus $\alpha_2\in \underline{[\alpha_1,\beta_2]}$. Analogously we can show that $\beta_1\in \underline{[\alpha_1,\beta_2]}$. \end{enumerate} \end{proof} \subsection{Tropical Convexity and Several Types of Independence on Tropical Projective Spaces} \begin{definition} \label{D:TropConv} A subset $T$ of $\mathbb{TP}(X)$ is said to be \emph{lower tropically convex} (respectively \emph{upper tropically convex}) if for every $\alpha,\beta\in T$, the whole tropical segment $\underline{[\alpha,\beta]}$ (respectively $\overline{[\alpha,\beta]}$) connecting $\alpha$ and $\beta$ is contained in $T$. \end{definition} \begin{remark} By Proposition~\ref{P:TropSeg}(1), all (closed, open and half-closed half-open) lower and upper tropical segments are lower and upper tropically convex respectively. \end{remark} The following lemmas follow from Definition~\ref{D:TropConv} directly.
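As an aside, the metric and path formulas above lend themselves to a quick numerical sanity check on a finite set, as in Example~\ref{E:tconv} below. The following Python sketch (an illustration only, not part of the formal development; all function and variable names are ad hoc) takes $X=\{1,2,3\}$, so that $BC(X)\cong{\mathbb R}^3$ and a class $[f]$ is a vector modulo additive constants, and checks that the lower tropical path $t\mapsto[\min(t,\underline{g-f})+f]$ runs from $[f]$ to $[g]$ and is an isometry onto its image, as asserted in Proposition~\ref{P:TropSeg}(3).

```python
# Numerical sanity check on X = {1,2,3}: an element f of BC(X) is a list of
# three reals, and [f] is f modulo adding a constant.  Names are ad hoc.

def tnorm(f):
    """Tropical norm ||[f]|| = sup(f) - inf(f)."""
    return max(f) - min(f)

def rho(f, g):
    """Induced metric rho([f],[g]) = ||[f - g]|| (well defined on classes)."""
    return tnorm([a - b for a, b in zip(f, g)])

def lower_path(f, g, t):
    """A representative of P_{([f],[g])}(t) = [min(t, underline(g-f)) + f]."""
    gf = [b - a for a, b in zip(f, g)]
    m = min(gf)  # underline(g-f) = (g-f) - inf(g-f), so subtract m below
    return [min(t, x - m) + a for x, a in zip(gf, f)]

f = [0.0, 2.0, 5.0]
g = [1.0, 7.0, 4.0]
d = rho(f, g)
assert d == 6.0                            # sup(g-f) - inf(g-f) = 5 - (-1)

assert rho(lower_path(f, g, 0), f) == 0    # path starts at [f]
assert rho(lower_path(f, g, d), g) == 0    # path ends at [g]

# Isometry onto its image: rho(P(t1), P(t2)) = |t2 - t1| for t1, t2 in [0, d].
for t1 in (0.0, 1.0, 2.5):
    for t2 in (0.5, 3.0, d):
        assert abs(rho(lower_path(f, g, t1), lower_path(f, g, t2)) - abs(t2 - t1)) < 1e-12

print("lower tropical path: endpoint and isometry checks passed")
```

The upper tropical path $t\mapsto[\max(-t,\overline{g-f})+f]$ can be checked in the same way, replacing `min` by `max` and $\underline{g-f}$ by $\overline{g-f}=(g-f)-\sup(g-f)$.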
\begin{lemma} If $T$ is lower or upper tropically convex, then $T$ is connected. \end{lemma} \begin{lemma} $\mathbb{TP}^0(X)$ is both lower and upper tropically convex. \end{lemma} \begin{lemma} The intersection of an arbitrary collection of lower or upper tropically convex sets is lower or upper tropically convex respectively. \end{lemma} \begin{definition} \label{D:TropIndep} Let $S$ be a subset of $\mathbb{TP}(X)$. \begin{enumerate} \item The \emph{lower tropical convex hull $\underline{\mathrm{tconv}}(S)$} (respectively \emph{upper tropical convex hull $\overline{\mathrm{tconv}}(S)$}) generated by $S$ is the intersection of all lower (respectively upper) tropically convex subsets of $\mathbb{TP}(X)$ containing $S$, and we say $S$ is a \emph{generating set} of $\underline{\mathrm{tconv}}(S)$ (respectively $\overline{\mathrm{tconv}}(S)$). Clearly, $S$ is lower tropically convex if and only if $S=\underline{\mathrm{tconv}}(S)$, and $S$ is upper tropically convex if and only if $S=\overline{\mathrm{tconv}}(S)$. \item We say an (upper or lower) tropical convex hull is \emph{finitely generated} if it can be generated by a finite set. We also call a finitely generated (upper or lower) tropical convex hull an (upper or lower respectively) \emph{tropical polytope}. \item \begin{enumerate} \item If $x\notin \underline{\mathrm{tconv}}(S\setminus\{x\})$ (respectively $x\notin \overline{\mathrm{tconv}}(S\setminus\{x\})$) for every $x\in S$, then we say $S$ is \emph{lower (tropically) weakly independent} (respectively \emph{upper (tropically) weakly independent}). \item If $\underline{\mathrm{tconv}}(S_1)\bigcap \underline{\mathrm{tconv}}(S_2)= \emptyset$ (respectively $\overline{\mathrm{tconv}}(S_1)\bigcap \overline{\mathrm{tconv}}(S_2)= \emptyset$) for each partition $\{S_1,S_2\}$ of $S$ with $S_1,S_2\neq \emptyset$, then we say $S$ is \emph{lower Gondran-Minoux independent} (respectively \emph{upper Gondran-Minoux independent}).
\end{enumerate} \end{enumerate} \end{definition} \begin{lemma} \label{L:OpTropHull} Tropical convex hulls may be translated, dilated and tropically inverted as follows: \begin{enumerate} \item $\alpha+\underline{\mathrm{tconv}}(S)=\underline{\mathrm{tconv}}(\alpha+S)$ and $\alpha+\overline{\mathrm{tconv}}(S)=\overline{\mathrm{tconv}}(\alpha+S)$ for any $\alpha\in\mathbb{TP}(X)$; \item $c\cdot \underline{\mathrm{tconv}}(S)=\underline{\mathrm{tconv}}(c\cdot S)$ and $c\cdot \overline{\mathrm{tconv}}(S)=\overline{\mathrm{tconv}}(c\cdot S)$ for any $c\geq 0$; \item $-\underline{\mathrm{tconv}}(S)=\overline{\mathrm{tconv}}(-S)$ and $-\overline{\mathrm{tconv}}(S)=\underline{\mathrm{tconv}}(-S)$. \end{enumerate} \end{lemma} \begin{proof} This follows from Remark~\ref{R:TropPath} and Lemma~\ref{L:neg_tseg}. \end{proof} For $V\subseteq BC(X)$, we say $\widehat{\oplus}(V):=\{(c_1\odot f_1)\oplus\cdots\oplus (c_m\odot f_m)\mid m\in{\mathbb N}, c_i\in {\mathbb R},f_i\in V\}$ is the \emph{lower tropical torus} spanned by $V$ and $\widehat{\boxplus}(V):=\{(c_1\odot f_1)\boxplus\cdots\boxplus (c_m\odot f_m)\mid m\in{\mathbb N}, c_i\in {\mathbb R},f_i\in V\}$ is the \emph{upper tropical torus} spanned by $V$. By abuse of notation, for $S\subseteq\mathbb{TP}(X)$, we also write $\widehat{\oplus}(S):=\widehat{\oplus}(V_S)$ and $\widehat{\boxplus}(S):=\widehat{\boxplus}(V_S)$ where $V_S=\{f\in BC(X)\mid [f]\in S\}$. Note that Lemma~\ref{L:TropLinear} essentially says $\underline{\mathrm{tconv}}(\alpha,\beta)=[\widehat{\oplus}(\{\alpha,\beta\})]$ and $\overline{\mathrm{tconv}}(\alpha,\beta)=[\widehat{\boxplus}(\{\alpha,\beta\})]$. In the following theorem, we show that this is generally true for all $S\subseteq\mathbb{TP}(X)$. \begin{theorem} \label{T:TconvTlinear} For any $S\subseteq\mathbb{TP}(X)$, $\underline{\mathrm{tconv}}(S)=[\widehat{\oplus}(S)]$ and $\overline{\mathrm{tconv}}(S)=[\widehat{\boxplus}(S)]$.
Moreover, $\overline{\mathrm{tconv}}(\underline{\mathrm{tconv}}(S))=\underline{\mathrm{tconv}}(\overline{\mathrm{tconv}}(S))$ which is both lower and upper tropically convex. \end{theorem} \begin{proof} To show that $\underline{\mathrm{tconv}}(S)=[\widehat{\oplus}(S)]$, first we note that $[\widehat{\oplus}(S)]\subseteq \underline{\mathrm{tconv}}(S)$, i.e., $[f_1\oplus \cdots \oplus f_m]\in \underline{\mathrm{tconv}}(S)$ for all $m\in{\mathbb N}$ and $[f_1],\cdots, [f_m]\in S$. But this can be derived by applying Lemma~\ref{L:TropLinear} inductively on $m$. More precisely, suppose $[f_1\oplus \cdots \oplus f_m]\in \underline{\mathrm{tconv}}(S)$ is true for all $[f_1],\cdots, [f_m]\in S$. Then since $\underline{\mathrm{tconv}}(S)$ is lower tropically convex, $[f_1\oplus \cdots \oplus f_m\oplus f_{m+1}]\in \underline{\mathrm{tconv}}(S)$ must also be true for all $[f_1],\cdots, [f_m],[f_{m+1}]\in S$ by Lemma~\ref{L:TropLinear}. It remains to show that $[\widehat{\oplus}(S)]$ itself is lower tropically convex. Consider $\alpha=[f_1\oplus \cdots \oplus f_m]$ and $\beta=[g_1\oplus \cdots \oplus g_n]\in [\widehat{\oplus}(S)]$ where $[f_1],\cdots, [f_m],[g_1],\cdots, [g_n]\in S$. We have $\underline{\mathrm{tconv}}(\alpha,\beta)=[\widehat{\oplus}(\{\alpha,\beta\})]=\{[(a\odot f_1\oplus \cdots \oplus a\odot f_m) \oplus (b\odot g_1\oplus \cdots \oplus b\odot g_n)]\mid a,b\in {\mathbb R}\}\subseteq [\widehat{\oplus}(S)]$. Using an analogous argument, we can also show that $\overline{\mathrm{tconv}}(S)=[\widehat{\boxplus}(S)]$. Now let us show that $\overline{\mathrm{tconv}}(\underline{\mathrm{tconv}}(S))=\underline{\mathrm{tconv}}(\overline{\mathrm{tconv}}(S))$. By the above results, an element of $\overline{\mathrm{tconv}}(\underline{\mathrm{tconv}}(S))$ can be written as $\alpha=[(f_{11}\oplus \cdots \oplus f_{1n_1})\boxplus\cdots \boxplus (f_{m1}\oplus \cdots \oplus f_{mn_m})]$ for some $m,n_1,\cdots, n_m\in{\mathbb N}$ and $[f_{ij}]\in S$.
Using Lemma~\ref{L:TropOper}(8), $\alpha$ can also be written as $\alpha=[\oplus_{1\leq i_j\leq n_j,1\leq j\leq m}(f_{1i_1}\boxplus\cdots\boxplus f_{mi_m})]$ which lies in $\underline{\mathrm{tconv}}(\overline{\mathrm{tconv}}(S))$. Therefore, $\overline{\mathrm{tconv}}(\underline{\mathrm{tconv}}(S))\subseteq\underline{\mathrm{tconv}}(\overline{\mathrm{tconv}}(S))$. Analogously, using Lemma~\ref{L:TropOper}(9), we can show that $[(f_{11}\boxplus \cdots \boxplus f_{1n_1})\oplus\cdots \oplus (f_{m1}\boxplus \cdots \boxplus f_{mn_m})]=[\boxplus_{1\leq i_j\leq n_j,1\leq j\leq m}(f_{1i_1}\oplus\cdots\oplus f_{mi_m})]$ which implies $\underline{\mathrm{tconv}}(\overline{\mathrm{tconv}}(S))\subseteq\overline{\mathrm{tconv}}(\underline{\mathrm{tconv}}(S))$. \end{proof} \begin{remark} We will write $\underline{\overline{\mathrm{tconv}}}(S):=\underline{\mathrm{tconv}}(\overline{\mathrm{tconv}}(S))=\overline{\mathrm{tconv}}(\underline{\mathrm{tconv}}(S))$. \end{remark} \begin{corollary} \label{C:LocalFinite} Let $T$ be a lower (respectively upper) tropical convex hull generated by $S$. For each $x\in T$, there exists a finite subset $S_x$ of $S$ such that $x$ is in the lower (respectively upper) tropical convex hull generated by $S_x$. \end{corollary} \begin{proof} This is essentially a restatement of Theorem~\ref{T:TconvTlinear}. \end{proof} \begin{remark} \label{R:TropIndep} There are several different notions of ``linear'' independence defined over tropical semirings \cite{AGG09}. As studied in \cite{CR79}, in max-plus linear algebra, a family $V$ of vectors in ${\mathbb R}_{\max}^n$ is said to be \emph{weakly independent} if no vector in $V$ is a max-plus linear combination of the others. Then by Theorem~\ref{T:TconvTlinear}, this corresponds exactly to our definition of upper tropical weak independence in the context of tropical projective spaces in Definition~\ref{D:TropIndep}(3a).
Another notion of linear independence in max-plus linear algebra is Gondran-Minoux independence \cite{GM84}, which says that a family $V$ of vectors in ${\mathbb R}_{\max}^n$ is Gondran-Minoux independent if for any partition $\{V_1,V_2\}$ of $V$, the intersection of the max-plus linear spaces generated by $V_1$ and $V_2$ is trivial. Again, by Theorem~\ref{T:TconvTlinear}, this corresponds to Definition~\ref{D:TropIndep}(3b) in the context of tropical projective spaces. In tropical geometry, there is another notion of linear independence, which is usually called tropical independence \cite{RST05}. This notion of tropical independence was applied to linear systems on metric graphs by Jensen and Payne in their tropical proofs of the Gieseker-Petri Theorem \cite{JP14} and the maximal rank conjecture for quadrics \cite{JP16}. More precisely, $\{f_1,\cdots,f_n\}\subseteq BC(X)$ (or $\{[f_1],\cdots,[f_n]\}\subseteq \mathbb{TP}(X)$) is said to be \emph{lower tropically dependent} (respectively \emph{upper tropically dependent}) if there are real numbers $c_1,\cdots,c_n$ such that the minimum $\min\{f_1(x)+c_1,\cdots,f_n(x)+c_n\}$ (respectively the maximum $\max\{f_1(x)+c_1,\cdots,f_n(x)+c_n\}$) occurs at least twice at every point $x\in X$. Otherwise, $\{f_1,\cdots,f_n\}\subseteq BC(X)$ (or $\{[f_1],\cdots,[f_n]\}\subseteq \mathbb{TP}(X)$) is said to be \emph{lower tropically independent} (respectively \emph{upper tropically independent}). Gondran-Minoux independence is clearly a stronger notion than weak independence, while tropical independence is even stronger. Indeed, suppose $\{[f_1],\cdots,[f_n]\}\subseteq \mathbb{TP}(X)$ is lower Gondran-Minoux dependent (the case of upper Gondran-Minoux dependence follows analogously). Then without loss of generality, we may assume that there exists $$[f]\in \underline{\mathrm{tconv}}(\{[f_1],\cdots,[f_m]\})\bigcap \underline{\mathrm{tconv}}(\{[f_{m+1}],\cdots,[f_n]\})$$ for some $1\leq m \leq n-1$.
Then by Theorem~\ref{T:TconvTlinear}, this means that \begin{align*} f&=(c_1\odot f_1)\oplus\cdots\oplus (c_m\odot f_m)=\min\{f_1+c_1,\cdots,f_m+c_m\} \\ &=(c_{m+1}\odot f_{m+1})\oplus\cdots\oplus (c_n\odot f_n)=\min\{f_{m+1}+c_{m+1},\cdots,f_n+c_n\} \\ &=(c_1\odot f_1)\oplus\cdots\oplus (c_n\odot f_n) = \min\{f_1+c_1,\cdots,f_n+c_n\} \end{align*} for some $c_1,\cdots, c_n\in{\mathbb R}$. Therefore, for all $x\in X$, the minimum $\min\{f_1(x)+c_1,\cdots,f_n(x)+c_n\}$ is attained at both $f_i(x)+c_i$ and $f_j(x)+c_j$ for some $1\leq i\leq m$ and $m+1\leq j \leq n$. As a result, this means that $\{[f_1],\cdots,[f_n]\}\subseteq \mathbb{TP}(X)$ is lower tropically dependent. \qed \end{remark} \begin{example} \label{E:tconv} Let $X$ be the finite set $\{1,2,3\}$ with the discrete topology. Then $BC(X)=BC^0(X) = {\mathbb R}^3$ and $\mathbb{TP}(X) = \mathbb{TP}^0(X)={\mathbb R}^2$. In particular, an element $f$ in $BC(X)$ can be written as $f=(x_1,x_2,x_3)=x_1{\mathbf e}_1+x_2{\mathbf e}_2+x_3{\mathbf e}_3$ with $x_1,x_2,x_3\in{\mathbb R}$, and correspondingly the element $[f]$ in $\mathbb{TP}(X)$ can be written as $[f]=(x_1:x_2:x_3)$ under the tropical projective coordinates. Note that in tropical projective coordinates, $(x_1:x_2:x_3)=(y_1:y_2:y_3)$ if and only if $x_1-y_1=x_2-y_2=x_3-y_3$. Therefore, $[f]=(x_1:x_2:x_3)=(x_1-x_3:x_2-x_3:0)$ and we also write $[f]=(x_1-x_3, x_2-x_3)=(x_1-x_3){\mathbf e}_1+(x_2-x_3){\mathbf e}_2$. In this way, elements in the $x_1x_2$-plane are in one-to-one correspondence with elements in $\mathbb{TP}(X)$, i.e., the point $(x_1,x_2)$ in the $x_1x_2$-plane represents $(x_1:x_2:0)$ in $\mathbb{TP}(X)$. Let $\alpha=(x^\alpha_1:x^\alpha_2:0)=(x^\alpha_1,x^\alpha_2)$ and $\beta=(x^\beta_1:x^\beta_2:0)=(x^\beta_1,x^\beta_2)$ be two elements in $\mathbb{TP}(X)$. Then by definition, the distance $\rho(\alpha,\beta)= \max(x^\beta_1-x^\alpha_1, x^\beta_2-x^\alpha_2,0)-\min(x^\beta_1-x^\alpha_1, x^\beta_2-x^\alpha_2,0)$.
Depending on the relative positions of $\alpha$ and $\beta$, $\rho(\alpha,\beta)$ and the tropical paths from $\alpha$ to $\beta$ can be written as follows: \begin{enumerate} \item $x^\beta_1 - x^\alpha_1\geq x^\beta_2 - x^\alpha_2\geq 0$: \begin{align*} \rho(\alpha,\beta) &=x^\beta_1 - x^\alpha_1 \\ P_{(\alpha,\beta)}(t) &= \begin{cases} (x^\alpha_1+t,x^\alpha_2+t) & \mbox{if } 0\leq t\leq x^\beta_2 - x^\alpha_2 \\ (x^\alpha_1+t,x^\beta_2) &\mbox{if } x^\beta_2 - x^\alpha_2\leq t \leq x^\beta_1 - x^\alpha_1 \end{cases} \\ P^{(\alpha,\beta)}(t) &= \begin{cases} (x^\alpha_1+t,x^\alpha_2) & \mbox{if } 0\leq t\leq (x^\beta_1 - x^\alpha_1)-(x^\beta_2 - x^\alpha_2) \\ (x^\alpha_1+t,x^\beta_2-(x^\beta_1-x^\alpha_1)+t) &\mbox{if } (x^\beta_1 - x^\alpha_1)-(x^\beta_2 - x^\alpha_2) \leq t \leq x^\beta_1 - x^\alpha_1 \end{cases} \end{align*} \item $x^\beta_2 - x^\alpha_2\geq x^\beta_1 - x^\alpha_1\geq 0$: \begin{align*} \rho(\alpha,\beta) &=x^\beta_2 - x^\alpha_2 \\ P_{(\alpha,\beta)}(t) &= \begin{cases} (x^\alpha_1+t,x^\alpha_2+t) & \mbox{if } 0\leq t\leq x^\beta_1 - x^\alpha_1 \\ (x^\beta_1,x^\alpha_2+t) &\mbox{if } x^\beta_1 - x^\alpha_1\leq t \leq x^\beta_2 - x^\alpha_2 \end{cases} \\ P^{(\alpha,\beta)}(t) &= \begin{cases} (x^\alpha_1,x^\alpha_2+t) & \mbox{if } 0\leq t\leq (x^\beta_2 - x^\alpha_2)-(x^\beta_1 - x^\alpha_1) \\ (x^\beta_1-(x^\beta_2 - x^\alpha_2)+t,x^\alpha_2+t) &\mbox{if } (x^\beta_2 - x^\alpha_2)-(x^\beta_1 - x^\alpha_1) \leq t \leq x^\beta_2 - x^\alpha_2 \end{cases} \end{align*} \item $x^\beta_1 - x^\alpha_1\leq 0$ and $x^\beta_2 - x^\alpha_2\geq 0$: \begin{align*} \rho(\alpha,\beta) &=(x^\beta_2 - x^\alpha_2)-(x^\beta_1 - x^\alpha_1) \\ P_{(\alpha,\beta)}(t) &= \begin{cases} (x^\alpha_1-t,x^\alpha_2) & \mbox{if } 0\leq t\leq -(x^\beta_1 - x^\alpha_1) \\ (x^\beta_1,x^\alpha_2+(x^\beta_1 - x^\alpha_1)+t) &\mbox{if } -(x^\beta_1 - x^\alpha_1) \leq t \leq (x^\beta_2 - x^\alpha_2)-(x^\beta_1 - x^\alpha_1) \end{cases} \\ P^{(\alpha,\beta)}(t) &= \begin{cases} 
(x^\alpha_1,x^\alpha_2+t) & \mbox{if } 0\leq t\leq x^\beta_2 - x^\alpha_2 \\ (x^\alpha_1+(x^\beta_2 - x^\alpha_2)-t,x^\beta_2) &\mbox{if } x^\beta_2 - x^\alpha_2 \leq t \leq (x^\beta_2 - x^\alpha_2)-(x^\beta_1 - x^\alpha_1) \end{cases} \end{align*} \item $x^\beta_1 - x^\alpha_1\leq x^\beta_2 - x^\alpha_2\leq 0$: \begin{align*} \rho(\alpha,\beta) &=-(x^\beta_1 - x^\alpha_1) \\ P_{(\alpha,\beta)}(t) &= \begin{cases} (x^\alpha_1-t,x^\alpha_2) & \mbox{if } 0\leq t\leq (x^\beta_2 - x^\alpha_2) -(x^\beta_1 - x^\alpha_1)\\ (x^\alpha_1-t,x^\beta_2-(x^\beta_1-x^\alpha_1)-t) &\mbox{if } (x^\beta_2 - x^\alpha_2) -(x^\beta_1 - x^\alpha_1) \leq t \leq -(x^\beta_1 - x^\alpha_1) \end{cases} \\ P^{(\alpha,\beta)}(t) &= \begin{cases} (x^\alpha_1-t,x^\alpha_2-t) & \mbox{if } 0\leq t\leq -(x^\beta_2 - x^\alpha_2) \\ (x^\alpha_1-t,x^\beta_2) &\mbox{if } -(x^\beta_2 - x^\alpha_2)\leq t \leq -(x^\beta_1 - x^\alpha_1) \end{cases} \end{align*} \item $x^\beta_2- x^\alpha_2\leq x^\beta_1 - x^\alpha_1\leq 0$: \begin{align*} \rho(\alpha,\beta) &=-(x^\beta_2 - x^\alpha_2) \\ P_{(\alpha,\beta)}(t) &= \begin{cases} (x^\alpha_1,x^\alpha_2-t) & \mbox{if } 0\leq t\leq (x^\beta_1 - x^\alpha_1) -(x^\beta_2 - x^\alpha_2)\\ (x^\beta_1-(x^\beta_2 - x^\alpha_2)-t,x^\alpha_2-t) &\mbox{if } (x^\beta_1 - x^\alpha_1) -(x^\beta_2 - x^\alpha_2) \leq t \leq -(x^\beta_2 - x^\alpha_2) \end{cases}\\ P^{(\alpha,\beta)}(t) &= \begin{cases} (x^\alpha_1-t,x^\alpha_2-t) & \mbox{if } 0\leq t\leq -(x^\beta_1 - x^\alpha_1) \\ (x^\beta_1,x^\alpha_2-t) &\mbox{if } -(x^\beta_1 - x^\alpha_1)\leq t \leq -(x^\beta_2 - x^\alpha_2) \end{cases} \end{align*} \item $x^\beta_1 - x^\alpha_1\geq 0$ and $x^\beta_2 - x^\alpha_2\leq 0$: \begin{align*} \rho(\alpha,\beta) &=(x^\beta_1 - x^\alpha_1) -(x^\beta_2 - x^\alpha_2)\\ P_{(\alpha,\beta)}(t) &= \begin{cases} (x^\alpha_1,x^\alpha_2-t) & \mbox{if } 0\leq t\leq -(x^\beta_2 - x^\alpha_2) \\ (x^\alpha_1+(x^\beta_2 - x^\alpha_2)+t,x^\beta_2) &\mbox{if } -(x^\beta_2 - x^\alpha_2) \leq t \leq 
(x^\beta_1 - x^\alpha_1)-(x^\beta_2 - x^\alpha_2) \end{cases}\\ P^{(\alpha,\beta)}(t) &= \begin{cases} (x^\alpha_1+t,x^\alpha_2) & \mbox{if } 0\leq t\leq x^\beta_1 - x^\alpha_1 \\ (x^\beta_1,x^\alpha_2+(x^\beta_1 - x^\alpha_1)-t) &\mbox{if } x^\beta_1 - x^\alpha_1 \leq t \leq (x^\beta_1 - x^\alpha_1)-(x^\beta_2 - x^\alpha_2) \end{cases} \end{align*} \end{enumerate} \begin{figure} \centering \begin{tikzpicture}[scale=0.9] \begin{scope}[shift={(0,14.5)}] \draw (0,4) node[anchor=east] {(a)}; \begin{scope}[shift={(1,0)}] \draw (3,3.5) node[draw, anchor=south] {Lower Tropical Segments}; \draw[->,line width=0.8pt] (0,0) -- (5,0) node[right] {$x_1$}; \draw[->,line width=0.8pt] (0,0) -- (0,4) node[left] {$x_2$}; \coordinate (origin) at (2.5,2); \def1.5{1} \def0.5{0.5} \coordinate (A) at ($(origin)+(0,1.5)$); \coordinate (B) at ($(origin)+(1.5,0)$); \coordinate (C) at ($(origin)+(-1.5,-1.5)$); \coordinate (A1) at ($(A)+(0.5,0)$); \coordinate (B1) at ($(B)+(0.5,0)$); \coordinate (O1) at ($(origin)+(0.5,0)$); \coordinate (B2) at ($(B)+(0.5,-0.5)$); \coordinate (C2) at ($(C)+(0.5,-0.5)$); \coordinate (O2) at ($(origin)+(0.5,-0.5)$); \draw [line width=1.2pt] (A) -- (origin) -- (C); \draw [line width=1.2pt] (A1) -- (O1) -- (B1); \draw [line width=1.2pt] (B2) -- (O2) -- (C2); \fill [blue] (A) circle (3pt); \fill [blue] (B1) circle (3pt); \fill [blue] (C) circle (3pt); \fill [blue] (A1) circle (3pt); \fill [blue] (B2) circle (3pt); \fill [blue] (C2) circle (3pt); \draw [blue] (A) node[anchor=east] { $\alpha_1$}; \draw [blue] (A1) node[anchor=west] { $\alpha_2$}; \draw [blue] (B1) node[anchor=west] { $\beta_1$}; \draw [blue] (B2) node[anchor=west] { $\beta_2$}; \draw [blue] (C2) node[anchor=east] { $\gamma_1$}; \draw [blue] (C) node[anchor=east] { $\gamma_2$}; \end{scope} \begin{scope}[shift={(9,0)}] \draw (3,3.5) node[draw, anchor=south] {Upper Tropical Segments}; \draw[->,line width=0.8pt] (0,0) -- (5,0) node[right] {$x_1$}; \draw[->,line width=0.8pt] (0,0) -- (0,4) node[left] 
{$x_2$}; \coordinate (origin) at (2.5,1.5); \def1.5{1} \def0.5{0.5} \coordinate (A) at ($(origin)+(-1.5,0)$); \coordinate (B) at ($(origin)+(1.5,1.5)$); \coordinate (C) at ($(origin)+(0,-1.5)$); \coordinate (A1) at ($(A)+(0,0.5)$); \coordinate (B1) at ($(B)+(0,0.5)$); \coordinate (O1) at ($(origin)+(0,0.5)$); \coordinate (B2) at ($(B)+(0.5,0)$); \coordinate (C2) at ($(C)+(0.5,0)$); \coordinate (O2) at ($(origin)+(0.5,0)$); \draw [line width=1.2pt] (A) -- (origin) -- (C); \draw [line width=1.2pt] (A1) -- (O1) -- (B1); \draw [line width=1.2pt] (B2) -- (O2) -- (C2); \fill [blue] (A) circle (3pt); \fill [blue] (B1) circle (3pt); \fill [blue] (C) circle (3pt); \fill [blue] (A1) circle (3pt); \fill [blue] (B2) circle (3pt); \fill [blue] (C2) circle (3pt); \draw [blue] (A) node[anchor=east] { $\alpha_3$}; \draw [blue] (A1) node[anchor=east] { $\alpha_4$}; \draw [blue] (B1) node[anchor=west] { $\beta_3$}; \draw [blue] (B2) node[anchor=west] { $\beta_4$}; \draw [blue] (C2) node[anchor=west] { $\gamma_3$}; \draw [blue] (C) node[anchor=east] { $\gamma_4$}; \end{scope} \end{scope} \begin{scope}[shift={(0,5)}] \draw (0,9) node[anchor=east] {(b)}; \draw (9,8) node[anchor=south] {\large $\underline{\mathrm{tconv}}(\{A_i,B_i,C_i\})$}; \draw (9,3.3) node[anchor=south] {\large $\overline{\mathrm{tconv}}(\{A_i,B_i,C_i\})$}; \draw [line width=0.5pt] (0,-0.1) -- (16.8,-0.1) -- (16.8,9) -- (0,9) -- (0,-0.1); \draw [line width=0.5pt] (0,4.3) -- (16.8,4.3); \begin{scope}[shift={(0,4.5)}] \coordinate (origin) at (2,2); \def1.5{1.5} \coordinate (A1) at ($(origin)+(0,1.5)$); \coordinate (B1) at ($(origin)+(1.5,0)$); \coordinate (C1) at ($(origin)+(-1.5,-1.5)$); \draw [line width=1.2pt] (origin) -- (A1); \draw [line width=1.2pt] (origin) -- (B1); \draw [line width=1.2pt] (origin) -- (C1); \fill [blue] (A1) circle (3pt); \fill [blue] (B1) circle (3pt); \fill [blue] (C1) circle (3pt); \draw [blue] (A1) node[anchor=east] { $A_1$}; \draw [blue] (B1) node[anchor=north] {$B_1$}; \draw [blue] (C1) 
node[anchor=north] { $C_1$}; \end{scope} \begin{scope}[shift={(0,0)}] \coordinate (origin) at (2,2); \def1.5{1.5} \coordinate (A1) at ($(origin)+(0,1.5)$); \coordinate (B1) at ($(origin)+(1.5,0)$); \coordinate (C1) at ($(origin)+(-1.5,-1.5)$); \coordinate (a1) at ($(origin)+(1.5,1.5)$); \coordinate (b1) at ($(origin)+(0,-1.5)$); \coordinate (c1) at ($(origin)+(-1.5,0)$); \fill [black!30,opacity=1] (A1)--(a1)--(B1)--(b1)--(C1)--(c1); \draw [line width=1.2pt] (A1)--(a1)--(B1)--(b1)--(C1)--(c1)--(A1); \fill [blue] (A1) circle (3pt); \fill [blue] (B1) circle (3pt); \fill [blue] (C1) circle (3pt); \draw [blue] (A1) node[anchor=south] { $A_1$}; \draw [blue] (B1) node[anchor=west] {$B_1$}; \draw [blue] (C1) node[anchor=north] { $C_1$}; \end{scope} \begin{scope}[shift={(3,4.5)}] \coordinate (origin) at (2,2.5); \def1.5{1.5} \def1{1} \coordinate (A2) at ($(origin)+(0,1.5-1)$); \coordinate (B2) at ($(origin)+(1.5,0)$); \coordinate (C2) at ($(origin)+(1-1.5,-1.5)$); \coordinate (b2) at ($(origin)+(1,0)$); \coordinate (c2) at ($(origin)+(0,-1)$); \fill [black!30,opacity=1] (origin) -- (b2) -- (c2) -- (origin); \draw [line width=1.2pt] (c2) -- (A2); \draw [line width=1.2pt] (origin) -- (B2); \draw [line width=1.2pt] (C2) -- (b2); \fill [blue] (A2) circle (3pt); \fill [blue] (B2) circle (3pt); \fill [blue] (C2) circle (3pt); \draw [blue] (A2) node[anchor=south] { $A_2$}; \draw [blue] (B2) node[anchor=south] {$B_2$}; \draw [blue] (C2) node[anchor=north] { $C_2$}; \end{scope} \begin{scope}[shift={(3,0)}] \coordinate (origin) at (2,2.5); \def1.5{1.5} \def1{1} \coordinate (A2) at ($(origin)+(0,1.5-1)$); \coordinate (B2) at ($(origin)+(1.5,0)$); \coordinate (C2) at ($(origin)+(1-1.5,-1.5)$); \coordinate (a2) at ($(origin)+(1.5,1.5-1)$); \coordinate (b2) at ($(origin)+(0,-1.5)$); \coordinate (c2) at ($(origin)+(1-1.5,0)$); \fill [black!30,opacity=1] (A2)--(a2)--(B2)--(b2)--(C2)--(c2); \draw [line width=1.2pt] (A2)--(a2)--(B2)--(b2)--(C2)--(c2)--(A2); \fill [blue] (A2) circle (3pt); 
\fill [blue] (B2) circle (3pt); \fill [blue] (C2) circle (3pt); \draw [blue] (A2) node[anchor=south] { $A_2$}; \draw [blue] (B2) node[anchor=west] {$B_2$}; \draw [blue] (C2) node[anchor=north] { $C_2$}; \end{scope} \begin{scope}[shift={(6,4.5)}] \coordinate (origin) at (1.7,2.5); \def1.5{1.5} \coordinate (A3) at (origin); \coordinate (B3) at ($(origin)+(1.5,0)$); \coordinate (C3) at ($(origin)+(0,-1.5)$); \fill [black!30,opacity=1] (A3) -- (B3) -- (C3) -- (A3); \draw [line width=1.2pt] (A3) -- (B3) -- (C3) -- (A3); \fill [blue] (A3) circle (3pt); \fill [blue] (B3) circle (3pt); \fill [blue] (C3) circle (3pt); \draw [blue] (A3) node[anchor=south] { $A_3$}; \draw [blue] (B3) node[anchor=south] {$B_3$}; \draw [blue] (C3) node[anchor=north] { $C_3$}; \end{scope} \begin{scope}[shift={(6,0)}] \coordinate (origin) at (1.7,2.5); \def1.5{1.5} \coordinate (A3) at (origin); \coordinate (B3) at ($(origin)+(1.5,0)$); \coordinate (C3) at ($(origin)+(0,-1.5)$); \fill [black!30,opacity=1] (A3) -- (B3) -- (C3) -- (A3); \draw [line width=1.2pt] (A3) -- (B3) -- (C3) -- (A3); \fill [blue] (A3) circle (3pt); \fill [blue] (B3) circle (3pt); \fill [blue] (C3) circle (3pt); \draw [blue] (A3) node[anchor=south] { $A_3$}; \draw [blue] (B3) node[anchor=south] {$B_3$}; \draw [blue] (C3) node[anchor=north] { $C_3$}; \end{scope} \begin{scope}[shift={(9,4.5)}] \coordinate (origin) at (1.9,2.5); \def1.5{1.5} \def1{1} \coordinate (A4) at ($(origin)+(1-1.5,0)$); \coordinate (B4) at ($(origin)+(1.5,1.5-1)$); \coordinate (C4) at ($(origin)+(0,-1.5)$); \coordinate (a4) at ($(origin)+(0,1.5-1)$); \coordinate (b4) at ($(origin)+(1.5,0)$); \coordinate (c4) at ($(origin)+(1-1.5,-1.5)$); \fill [black!30,opacity=1] (A4)--(a4)--(B4)--(b4)--(C4)--(c4); \draw [line width=1.2pt] (A4)--(a4)--(B4)--(b4)--(C4)--(c4)--(A4); \fill [blue] (A4) circle (3pt); \fill [blue] (B4) circle (3pt); \fill [blue] (C4) circle (3pt); \draw [blue] (A4) node[anchor=south east] { $A_4$}; \draw [blue] (B4) node[anchor=south] {$B_4$}; 
\draw [blue] (C4) node[anchor=north] { $C_4$}; \end{scope} \begin{scope}[shift={(9,0)}] \coordinate (origin) at (1.9,2.5); \def1.5{1.5} \def1{1} \coordinate (A4) at ($(origin)+(1-1.5,0)$); \coordinate (B4) at ($(origin)+(1.5,1.5-1)$); \coordinate (C4) at ($(origin)+(0,-1.5)$); \coordinate (b4) at ($(origin)+(1,0)$); \coordinate (c4) at ($(origin)+(0,-1)$); \fill [black!30,opacity=1] (origin) -- (b4) -- (c4) -- (origin); \draw [line width=1.2pt] (A4) -- (b4); \draw [line width=1.2pt] (origin) -- (C4); \draw [line width=1.2pt] (c4) -- (B4); \fill [blue] (A4) circle (3pt); \fill [blue] (B4) circle (3pt); \fill [blue] (C4) circle (3pt); \draw [blue] (A4) node[anchor=south] { $A_4$}; \draw [blue] (B4) node[anchor=south] {$B_4$}; \draw [blue] (C4) node[anchor=north] { $C_4$}; \end{scope} \begin{scope}[shift={(12,4.5)}] \coordinate (origin) at (2.5,2); \def1.5{1.5} \coordinate (A5) at ($(origin)+(-1.5,0)$); \coordinate (B5) at ($(origin)+(1.5,1.5)$); \coordinate (C5) at ($(origin)+(0,-1.5)$); \coordinate (a5) at ($(origin)+(0,1.5)$); \coordinate (b5) at ($(origin)+(1.5,0)$); \coordinate (c5) at ($(origin)+(-1.5,-1.5)$); \fill [black!30,opacity=1] (A5)--(a5)--(B5)--(b5)--(C5)--(c5); \draw [line width=1.2pt] (A5)--(a5)--(B5)--(b5)--(C5)--(c5)--(A5); \fill [blue] (A5) circle (3pt); \fill [blue] (B5) circle (3pt); \fill [blue] (C5) circle (3pt); \draw [blue] (A5) node[anchor=east] { $A_5$}; \draw [blue] (B5) node[anchor=west] {$B_5$}; \draw [blue] (C5) node[anchor=north] { $C_5$}; \end{scope} \begin{scope}[shift={(12,0)}] \coordinate (origin) at (2.5,2); \def1.5{1.5} \coordinate (A5) at ($(origin)+(-1.5,0)$); \coordinate (B5) at ($(origin)+(1.5,1.5)$); \coordinate (C5) at ($(origin)+(0,-1.5)$); \draw [line width=1.2pt] (origin) -- (A5); \draw [line width=1.2pt] (origin) -- (B5); \draw [line width=1.2pt] (origin) -- (C5); \fill [blue] (A5) circle (3pt); \fill [blue] (B5) circle (3pt); \fill [blue] (C5) circle (3pt); \draw [blue] (A5) node[anchor=south] {$A_5$}; \draw [blue] 
(B5) node[anchor=north west] {$B_5$}; \draw [blue] (C5) node[anchor=west] { $C_5$}; \end{scope} \end{scope} \begin{scope}[shift={(0,0)}] \draw (0,4) node[anchor=east] {(c)}; \begin{scope}[shift={(0,0)}] \coordinate (origin) at (2,2); \def1.5{1.5} \draw [blue, line width=2pt] (origin) circle (1.5); \draw ($(origin)+(0,1.8)$) node[anchor=south] {\large $S$}; \end{scope} \begin{scope}[shift={(4,0)}] \coordinate (origin) at (2,2); \def1.5{1.5} \coordinate (N) at ($(origin)+(0,1.5)$); \coordinate (S) at ($(origin)+(0,-1.5)$); \coordinate (E) at ($(origin)+(1.5,0)$); \coordinate (W) at ($(origin)+(-1.5,0)$); \coordinate (NE) at ($(origin)+(1.5,1.5)$); \coordinate (SW) at ($(origin)+(-1.5,-1.5)$); \coordinate (Nw) at ($(origin)+({(1-sqrt(2))*1.5},1.5)$); \coordinate (nW) at ($(origin)+(-1.5,{(sqrt(2)-1)*1.5})$); \coordinate (nw) at ($(origin)+({-(sqrt(2)/2)*1.5},{(sqrt(2)/2)*1.5})$); \coordinate (Se) at ($(origin)+({(sqrt(2)-1)*1.5},-1.5)$); \coordinate (sE) at ($(origin)+(1.5,{(1-sqrt(2))*1.5})$); \coordinate (se) at ($(origin)+({(sqrt(2)/2)*1.5},{-(sqrt(2)/2)*1.5})$); \fill [black!30,opacity=1] (S) -- (SW) -- (W) -- (S); \fill [black!30,opacity=1] (E) -- (sE) -- (se) -- (E); \fill [black!30,opacity=1] (N) -- (Nw) -- (nw) -- (N); \fill [black!30,opacity=1] (origin) circle (1.5); \draw [line width=1.2pt] (S) -- (SW) -- (W); \draw [line width=1.2pt] (E) -- (sE) -- (se); \draw [line width=1.2pt] (N) -- (Nw) -- (nw); \draw [blue, line width=1.2pt] (origin) circle (1.5); \draw ($(origin)+(0,1.8)$) node[anchor=south] {\large $\underline{\mathrm{tconv}}(S)$}; \end{scope} \begin{scope}[shift={(8,0)}] \coordinate (origin) at (2,2); \def1.5{1.5} \coordinate (N) at ($(origin)+(0,1.5)$); \coordinate (S) at ($(origin)+(0,-1.5)$); \coordinate (E) at ($(origin)+(1.5,0)$); \coordinate (W) at ($(origin)+(-1.5,0)$); \coordinate (NE) at ($(origin)+(1.5,1.5)$); \coordinate (SW) at ($(origin)+(-1.5,-1.5)$); \coordinate (Nw) at ($(origin)+({(1-sqrt(2))*1.5},1.5)$); \coordinate (nW) at 
($(origin)+(-1.5,{(sqrt(2)-1)*1.5})$); \coordinate (nw) at ($(origin)+({-(sqrt(2)/2)*1.5},{(sqrt(2)/2)*1.5})$); \coordinate (Se) at ($(origin)+({(sqrt(2)-1)*1.5},-1.5)$); \coordinate (sE) at ($(origin)+(1.5,{(1-sqrt(2))*1.5})$); \coordinate (se) at ($(origin)+({(sqrt(2)/2)*1.5},{-(sqrt(2)/2)*1.5})$); \fill [black!30,opacity=1] (N) -- (NE) -- (E) -- (N); \fill [black!30,opacity=1] (W) -- (nW) -- (nw) -- (W); \fill [black!30,opacity=1] (S) -- (Se) -- (se) -- (S); \fill [black!30,opacity=1] (origin) circle (1.5); \draw [line width=1.2pt] (N) -- (NE) -- (E); \draw [line width=1.2pt] (W) -- (nW) -- (nw); \draw [line width=1.2pt] (S) -- (Se) -- (se); \draw [blue, line width=1.2pt] (origin) circle (1.5); \draw ($(origin)+(0,1.8)$) node[anchor=south] {\large $\overline{\mathrm{tconv}}(S)$}; \end{scope} \begin{scope}[shift={(12,0)}] \coordinate (origin) at (2,2); \def1.5{1.5} \coordinate (N) at ($(origin)+(0,1.5)$); \coordinate (S) at ($(origin)+(0,-1.5)$); \coordinate (E) at ($(origin)+(1.5,0)$); \coordinate (W) at ($(origin)+(-1.5,0)$); \coordinate (NE) at ($(origin)+(1.5,1.5)$); \coordinate (SW) at ($(origin)+(-1.5,-1.5)$); \coordinate (Nw) at ($(origin)+({(1-sqrt(2))*1.5},1.5)$); \coordinate (nW) at ($(origin)+(-1.5,{(sqrt(2)-1)*1.5})$); \coordinate (nw) at ($(origin)+({-(sqrt(2)/2)*1.5},{(sqrt(2)/2)*1.5})$); \coordinate (Se) at ($(origin)+({(sqrt(2)-1)*1.5},-1.5)$); \coordinate (sE) at ($(origin)+(1.5,{(1-sqrt(2))*1.5})$); \coordinate (se) at ($(origin)+({(sqrt(2)/2)*1.5},{-(sqrt(2)/2)*1.5})$); \fill [black!30,opacity=1] (NE) -- (Nw) -- (nW) -- (SW) -- (Se) -- (sE) -- (NE); \fill [black!30,opacity=1] (origin) circle (1.5); \draw [line width=1.2pt] (NE) -- (Nw) -- (nW) -- (SW) -- (Se) -- (sE) -- (NE); \draw [blue,line width=1.2pt] (origin) circle (1.5); \draw ($(origin)+(0,1.8)$) node[anchor=south] {\large $\underline{\overline{\mathrm{tconv}}}(S)$}; \end{scope} \end{scope} \end{tikzpicture} \caption{(a) Some examples of lower and upper tropical segments. 
(b) Tropical convex hulls generated by three points $A_i$, $B_i$ and $C_i$ with different relative positions for $i=1,\ldots,5$. (c) Tropical convex hulls generated by a circle $S$.} \label{F:tconv} \end{figure} Figure~\ref{F:tconv}(a) shows some lower and upper tropical segments in $\mathbb{TP}(X)$ represented in the $x_1x_2$-plane. In particular, on the left panel the lower tropical paths from $\gamma_1$ to $\beta_2$, from $\gamma_2$ to $\alpha_1$, from $\beta_1$ to $\alpha_2$, from $\beta_2$ to $\gamma_1$, from $\alpha_1$ to $\gamma_2$ and from $\alpha_2$ to $\beta_1$ correspond to Cases (1)--(6) above, respectively; on the right panel the upper tropical paths from $\alpha_4$ to $\beta_3$, from $\gamma_3$ to $\beta_4$, from $\gamma_4$ to $\alpha_3$, from $\beta_3$ to $\alpha_4$, from $\beta_4$ to $\gamma_3$ and from $\alpha_3$ to $\gamma_4$ correspond to Cases (1)--(6) above, respectively. Figure~\ref{F:tconv}(b) shows the lower and upper tropical polytopes generated by the triples $\{A_i,B_i,C_i\}$ for $i=1,\cdots,5$. That these sets are (lower or upper) tropically convex can be verified directly by checking that the (lower or upper) tropical segment connecting any two points of such a set is fully contained in that set. More specifically, we suppose \begin{align*} & A_1 = (0,1),\ B_1=(1,0),\ C_1=(-1,-1);\quad & A_2 = (0,1),\ B_2=(1,\frac{2}{3}),\ C_2=(-\frac{1}{3},-\frac{1}{3}); \\ & A_3 = (0,1),\ B_3=(1,1),\ C_3=(0,0);\quad & A_4 = (-\frac{1}{3},\frac{2}{3}),\ B_4=(1,1),\ C_4=(0,-\frac{1}{3}); \\ & A_5 = (-1,0),\ B_5=(1,1),\ C_5=(0,-1). \end{align*} There are a few observations about these sets worth mentioning here: \begin{enumerate} \item All these sets are compact subsets of ${\mathbb R}^2$ under the tropical metric topology, which can be verified straightforwardly in this case.
In fact, in Section~\ref{S:Compact} we show that in general all tropical polytopes are compact (Corollary~\ref{C:Polytope}). \item Depending on the relative positions of $A_i$, $B_i$ and $C_i$, the tropical polytopes can be purely $1$-dimensional ($\underline{\mathrm{tconv}}(\{A_1,B_1,C_1\})$ and $\overline{\mathrm{tconv}}(\{A_5,B_5,C_5\})$), purely $2$-dimensional ($\underline{\mathrm{tconv}}(\{A_3,B_3,C_3\})$, $\underline{\mathrm{tconv}}(\{A_4,B_4,C_4\})$, $\underline{\mathrm{tconv}}(\{A_5,B_5,C_5\})$, $\overline{\mathrm{tconv}}(\{A_1,B_1,C_1\})$, $\overline{\mathrm{tconv}}(\{A_2,B_2,C_2\})$ and $\overline{\mathrm{tconv}}(\{A_3,B_3,C_3\})$), or not of pure dimension ($\underline{\mathrm{tconv}}(\{A_2,B_2,C_2\})$ and $\overline{\mathrm{tconv}}(\{A_4,B_4,C_4\})$). \item Note that $\underline{\mathrm{tconv}}(\{A_3,B_3,C_3\})=\overline{\mathrm{tconv}}(\{A_3,B_3,C_3\})$, $\underline{\mathrm{tconv}}(\{A_4,B_4,C_4\})=\overline{\mathrm{tconv}}(\{A_2,B_2,C_2\})$, and $\underline{\mathrm{tconv}}(\{A_5,B_5,C_5\})=\overline{\mathrm{tconv}}(\{A_1,B_1,C_1\})$, which are all both lower and upper tropically convex. Therefore, we conclude that \begin{align*} \underline{\overline{\mathrm{tconv}}}(\{A_3,B_3,C_3\})=\underline{\mathrm{tconv}}(\{A_3,B_3,C_3\})=\overline{\mathrm{tconv}}(\{A_3,B_3,C_3\}), \\ \underline{\overline{\mathrm{tconv}}}(\{A_4,B_4,C_4\})=\underline{\overline{\mathrm{tconv}}}(\{A_2,B_2,C_2\})=\underline{\mathrm{tconv}}(\{A_4,B_4,C_4\})=\overline{\mathrm{tconv}}(\{A_2,B_2,C_2\}), \\ \underline{\overline{\mathrm{tconv}}}(\{A_5,B_5,C_5\})=\underline{\overline{\mathrm{tconv}}}(\{A_1,B_1,C_1\})=\underline{\mathrm{tconv}}(\{A_5,B_5,C_5\})=\overline{\mathrm{tconv}}(\{A_1,B_1,C_1\}). \end{align*} \end{enumerate} Figure~\ref{F:tconv}(c) shows a circle $S$ and the tropical convex hulls $\underline{\mathrm{tconv}}(S)$, $\overline{\mathrm{tconv}}(S)$ and $\underline{\overline{\mathrm{tconv}}}(S)$ generated by $S$. Note that $\underline{\overline{\mathrm{tconv}}}(S)$ is a hexagon.
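As a sanity check, the explicit path parametrizations listed at the beginning of this example are easy to evaluate numerically. The following Python sketch is purely illustrative (the helper lower_path is hypothetical and covers only the case $x^\beta_1 - x^\alpha_1\leq x^\beta_2 - x^\alpha_2\leq 0$); it verifies that $P_{(\alpha,\beta)}(0)=\alpha$ and $P_{(\alpha,\beta)}(\rho(\alpha,\beta))=\beta$.

```python
# Illustrative check of the lower tropical path P_{(alpha,beta)} in the case
# x^beta_1 - x^alpha_1 <= x^beta_2 - x^alpha_2 <= 0; the helper below is a
# hypothetical transcription of the piecewise formulas of this example.
def lower_path(alpha, beta, t):
    a1, a2 = alpha
    b1, b2 = beta
    d1, d2 = b1 - a1, b2 - a2
    assert d1 <= d2 <= 0, "this sketch covers only one case of the analysis"
    if t <= d2 - d1:                  # first branch: move in the -x_1 direction
        return (a1 - t, a2)
    return (a1 - t, b2 - d1 - t)      # second branch: move diagonally in -(1,1)

alpha, beta = (2.0, 1.0), (0.0, 0.5)  # here d1 = -2 <= d2 = -0.5 <= 0
rho = -(beta[0] - alpha[0])           # rho(alpha, beta) = -(x^beta_1 - x^alpha_1)
print(lower_path(alpha, beta, 0.0))   # (2.0, 1.0): the path starts at alpha
print(lower_path(alpha, beta, rho))   # (0.0, 0.5): the path ends at beta
```

Analogous spot checks can be written for the remaining cases of the analysis; the two branches also agree at the breakpoint $t=(x^\beta_2-x^\alpha_2)-(x^\beta_1-x^\alpha_1)$.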
In this case, $S$ is an infinite compact set and $\underline{\mathrm{tconv}}(S)$, $\overline{\mathrm{tconv}}(S)$ and $\underline{\overline{\mathrm{tconv}}}(S)$ are all compact. It is true in general that the closed tropical convex hull generated by a compact set is also compact (Theorem~\ref{T:TropMazur}); this is the tropical version of Mazur's theorem, proved in Section~\ref{S:Compact}. \begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw (-0.3,4) node[anchor=east] {(a)}; \begin{scope} \draw[->,line width=0.8pt] (0,0) -- (2,0) node[right] {$x_1$}; \draw[->,line width=0.8pt] (0,0) -- (0,3) node[left] {$x_2$}; \def\x{3.5*0.3}; \def\y{3.5*0.9}; \fill [blue] (0,0) circle (3pt); \draw [blue,line width=1.5pt] (0,0) -- (\x,\y); \draw ($(1,3.5)$) node[anchor=south] {\large $U$}; \end{scope} \begin{scope}[shift={(4,0)}] \def\x{3.5*0.3}; \def\y{3.5*0.9}; \fill [black!30,opacity=1] (0,0) -- (\x,\y) -- (\y,\y) -- cycle; \draw [line width=1.5pt] (0,0) -- (\y,\y); \fill [blue] (0,0) circle (3pt); \draw [blue,line width=1.5pt] (0,0) -- (\x,\y); \draw ($(1.5,3.5)$) node[anchor=south] {\large $\underline{\mathrm{tconv}}(U)$}; \end{scope} \begin{scope}[shift={(8.5,0)}] \def\x{3.5*0.3}; \def\y{3.5*0.9}; \fill [black!30,opacity=1] (0,0) -- (0,\y) -- (\x,\y) -- cycle; \draw [line width=1.5pt] (0,0) -- (0,\y); \fill [blue] (0,0) circle (3pt); \draw [blue,line width=1.5pt] (0,0) -- (\x,\y); \draw ($(0.5,3.5)$) node[anchor=south] {\large $\overline{\mathrm{tconv}}(U)$}; \end{scope} \begin{scope}[shift={(11,0)}] \def\x{3.5*0.3}; \def\y{3.5*0.9}; \fill [black!30,opacity=1] (0,0) -- (\y,\y) -- (0,\y) -- cycle; \draw [line width=1.5pt] (0,\y) -- (0,0) -- (\y,\y); \fill [blue] (0,0) circle (3pt); \draw [blue,line width=1.5pt] (0,0) -- (\x,\y); \draw ($(1.5,3.5)$) node[anchor=south] {\large $\underline{\overline{\mathrm{tconv}}}(U)$}; \end{scope} \end{scope} \begin{scope}[shift={(0,-5)}] \draw (-0.3,4.2) node[anchor=east] {(b)}; \begin{scope} \draw[->,line width=0.8pt] (0,0)
-- (2,0) node[right] {$x_1$}; \draw[->,line width=0.8pt] (0,0) -- (0,3) node[left] {$x_2$}; \def\genx{0.3}; \def\geny{0.9}; \def\n{3}; \def6{6} \coordinate (gen) at (\genx,\geny); \foreach \i in {0,1,...,\n} \coordinate (gen\i) at ($(\i*\genx,\i*\geny)$); \foreach \i in {0,1,...,\n} \fill [blue] (gen\i) circle (3pt); \foreach \i in {0,1,...,\n} \draw [blue] (gen\i) node[anchor=south west] {$A\i$}; \draw ($(1.25,3.5)$) node[anchor=south] {\large $V=\{A_0,\cdots\}$}; \end{scope} \begin{scope}[shift={(4,0)}] \def\genx{0.3}; \def\geny{0.9}; \def\n{3}; \def6{6} \coordinate (gen) at (\genx,\geny); \foreach \i in {0,1,...,\n} \coordinate (gen\i) at ($(\i*\genx,\i*\geny)$); \foreach \i in {0,...,\n} \coordinate (mid\i) at ($(gen\i)+(\genx,\genx)$); \coordinate (V) at ($(\n*\geny+\geny-\genx,\n*\geny+\genx)$); \fill [black!30,opacity=1] (mid0) node{} \foreach \i in {1,...,\n} {--(gen\i)--(mid\i) node{}} --(V) -- cycle; \draw [line width=1.5pt] (mid0) node{} \foreach \i in {1,...,6}{ -- \ifodd\i ++(0,\geny-\genx) \else ++(\genx,\genx) \fi node{}}; \draw [line width=1.5pt] (0,0) -- (V); \foreach \i in {0,1,...,\n} \fill [blue] (gen\i) circle (3pt); \draw ($(1.5,3.5)$) node[anchor=south] {\large $\underline{\mathrm{tconv}}(V)$}; \end{scope} \begin{scope}[shift={(8.5,0)}] \def\genx{0.3}; \def\geny{0.9}; \def\n{3}; \def6{6} \coordinate (gen) at (\genx,\geny); \foreach \i in {0,1,...,\n} \coordinate (gen\i) at ($(\i*\genx,\i*\geny)$); \foreach \i in {0,...,\n} \coordinate (mid\i) at ($(gen\i)+(0,\geny-\genx)$); \coordinate (V) at ($(0,\n*\geny+\geny-\genx)$); \fill [black!30,opacity=1] (mid0) node{} \foreach \i in {1,...,\n} {--(gen\i)--(mid\i) node{}} --(V) -- cycle; \draw [line width=1.5pt] (mid0) node{} \foreach \i in {1,...,6}{ -- \ifodd\i ++(\genx,\genx) \else ++(0,\geny-\genx) \fi node{}}; \draw [line width=1.5pt] (0,0) -- (V); \foreach \i in {0,1,...,\n} \fill [blue] (gen\i) circle (3pt); \draw ($(0.5,3.5)$) node[anchor=south] {\large $\overline{\mathrm{tconv}}(V)$}; 
\end{scope} \begin{scope}[shift={(11,0)}] \def\genx{0.3}; \def\geny{0.9}; \def\n{3}; \def6{6} \coordinate (gen) at (\genx,\geny); \foreach \i in {0,1,...,\n} \coordinate (gen\i) at ($(\i*\genx,\i*\geny)$); \coordinate (V1) at ($(\n*\geny+\geny-\genx,\n*\geny+\genx)$); \coordinate (V2) at ($(0,\n*\geny+\genx)$); \fill [black!30,opacity=1] (0,0) -- (V1) --(V2) -- cycle; \draw [line width=1.5pt] (V2) -- (0,0) -- (V1); \foreach \i in {0,1,...,\n} \fill [blue] (gen\i) circle (3pt); \draw ($(1.5,3.5)$) node[anchor=south] {\large $\underline{\overline{\mathrm{tconv}}}(V)$}; \end{scope} \end{scope} \end{tikzpicture} \caption{Examples of non-compact tropical convex sets: (a) Tropical convex hulls generated by the ray $x_2=3x_1$ in the first quadrant. (b) Tropical convex hulls generated by points $(x_1,3x_1)$ for $x_1=0,1, \cdots$.} \label{F:tconv2} \end{figure} Now let us consider some tropical convex hulls generated by noncompact sets. Figure~\ref{F:tconv2}(a) shows the tropical convex hulls generated by the ray $U$ defined by $x_2=3x_1$ with $x_1\geq 0$. Note that for any two points $\alpha$ and $\beta$ in $U$, the lower tropical segment $\underline{[\alpha,\beta]}$ is of type $\underline{[\alpha_1,\gamma_2]}$ in Figure~\ref{F:tconv}(a) and the upper tropical segment $\overline{[\alpha,\beta]}$ is of type $\overline{[\beta_4,\gamma_3]}$ in Figure~\ref{F:tconv}(a). For comparison, Figure~\ref{F:tconv2}(b) shows the tropical convex hulls generated by the countable set $V$ defined by $x_2=3x_1$ with $x_1=0,1,\cdots$. \qed \end{example} \section{$B^p$-Pseudonorms and Tropical Projections} \label{S:Bpseudonorm} \subsection{The Definition of $B^p$-Pseudonorms} Now suppose the underlying space $X$ is a locally compact Hausdorff space equipped with a finite nontrivial Borel measure $\mu$. To specify the measure $\mu$, we also write $\mathbb{TP}(X)$ as $\mathbb{TP}(X,\mu)$. Note that all functions in $BC(X)$ are $\mu$-measurable and $\mu$-integrable. 
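For orientation before the formal definition, the quantities involved can be previewed in the simplest situation: a hypothetical finite set $X$ carrying the counting measure, so that the integrals below reduce to finite sums. The following Python sketch is illustrative only and is not part of the formal development.

```python
# Illustrative sketch (assumption: X is a finite set with counting measure).
# For a representative f of a class [f], the two quantities below are
# || f - inf(f) ||_p and || sup(f) - f ||_p respectively.
def floor_p(f, p):
    """Lower p-pseudonorm of [f]: || f - min(f) ||_p."""
    m = min(f)
    return sum((v - m) ** p for v in f) ** (1.0 / p)

def ceil_p(f, p):
    """Upper p-pseudonorm of [f]: || max(f) - f ||_p."""
    M = max(f)
    return sum((M - v) ** p for v in f) ** (1.0 / p)

f = [0.0, 1.0, 3.0]
# For p = 1 the two quantities sum to mu(X) * (max(f) - min(f)), since
# (f - min(f)) - (f - max(f)) is the constant max(f) - min(f).
print(floor_p(f, 1) + ceil_p(f, 1))   # 3 * (3 - 0) = 9.0
# As p grows, both quantities approach max(f) - min(f).
print(round(floor_p(f, 50), 6))       # close to 3.0
```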
Then we can define some pseudonorms and pseudometrics on $\mathbb{TP}(X,\mu)$ (here ``pseudo'' means not necessarily symmetric). \begin{definition} \label{D:pseudonorm} For a given $1\leq p \leq\infty$, the \emph{$\underline{p}$-pseudonorm} or \emph{$\underline{B}^p$-pseudonorm} on $\mathbb{TP}(X,\mu)$ is the function $\llfloor\cdot\rrfloor_p:\mathbb{TP}(X,\mu)\to[0,\infty)$ defined by $\llfloor[f]\rrfloor_p:=\Vert \underline{f}\Vert_p$, where $\Vert \cdot\Vert_p$ is the $p$-norm on $BC(X)$. More precisely, for all $[f]\in \mathbb{TP}(X,\mu)$, when $1\leq p <\infty$, $\llfloor[f]\rrfloor_p=\left(\int_X (\underline{f})^pd\mu\right)^{1/p}=\left(\int_X (f-\inf(f))^pd\mu\right)^{1/p}$, and the $\infty$-pseudonorm $\llfloor\cdot\rrfloor_\infty$ on $\mathbb{TP}(X,\mu)$ is defined as $\llfloor[f]\rrfloor_\infty=\lim\limits_{p\to\infty}\llfloor[f]\rrfloor_p$. We also define the \emph{$\overline{p}$-pseudonorm} or \emph{$\overline{B}^p$-pseudonorm} on $\mathbb{TP}(X,\mu)$ as $\llceil[f]\rrceil_p=\Vert \overline{f}\Vert_p=\Vert -\overline{f}\Vert_p=\left(\int_X (-\overline{f})^pd\mu\right)^{1/p}=\left(\int_X (\sup(f)-f)^pd\mu\right)^{1/p}$ with $p\in[1,\infty)$ and $\llceil[f]\rrceil_\infty=\lim\limits_{p\to\infty}\llceil[f]\rrceil_p$. Both the $\underline{p}$-pseudonorm and the $\overline{p}$-pseudonorm are called \emph{$p$-pseudonorms} or \emph{$B^p$-pseudonorms}. All $B^p$-pseudonorms are also simply called \emph{$B$-pseudonorms}. \end{definition} \subsection{Null Sets} \begin{lemma} \label{L:null} For $p\in[1,\infty)$, $\llfloor[f]\rrfloor_p=0$ if and only if $\underline{f}=0$ almost everywhere, and $\llceil[f]\rrceil_p=0$ if and only if $\overline{f}=0$ almost everywhere. \end{lemma} \begin{proof} Recall that for a non-negative $\mu$-measurable function $F$, $\int_X F d\mu=0$ if and only if $F=0$ almost everywhere (i.e., $\mu(\{x\in X\mid F(x)\neq 0\})=0$). By letting $F=(\underline{f})^p$ and $F=(-\overline{f})^p$ respectively, the statement follows.
\end{proof} We introduce the following notation for null sets: \begin{enumerate} \item $\underline{{\mathcal N}}(X,\mu):=\{[f]\in \mathbb{TP}(X,\mu)\mid \underline{f}=0\ \text{almost everywhere}\}$; \item $\overline{{\mathcal N}}(X,\mu):=\{[f]\in \mathbb{TP}(X,\mu)\mid \overline{f}=0\ \text{almost everywhere}\}$; \item ${\mathcal N}(X,\mu):=\{[f]\in \mathbb{TP}(X,\mu)\mid f\ \text{is equal to a constant almost everywhere}\}$. \end{enumerate} These sets will also be denoted simply by $\underline{{\mathcal N}}$, $\overline{{\mathcal N}}$ and ${\mathcal N}$ respectively when $X$ and $\mu$ are clear from the context. \begin{lemma} \label{L:null2} The following are some basic properties of $\underline{{\mathcal N}}$, $\overline{{\mathcal N}}$ and ${\mathcal N}$. \begin{enumerate} \item $[f]\in\underline{{\mathcal N}}$ if and only if $\llfloor[f]\rrfloor_p=0$ for some $p\in[1,\infty)$ if and only if $\llfloor[f]\rrfloor_p=0$ for all $p\in[1,\infty)$. \item $[f]\in\overline{{\mathcal N}}$ if and only if $\llceil[f]\rrceil_p=0$ for some $p\in[1,\infty)$ if and only if $\llceil[f]\rrceil_p=0$ for all $p\in[1,\infty)$. \item $\underline{{\mathcal N}}=-\overline{{\mathcal N}}$. \item $\underline{{\mathcal N}}\bigcap\overline{{\mathcal N}}=\{[0]\}$. \item $\underline{{\mathcal N}}$ (respectively $\overline{{\mathcal N}}$) is a positive cone, i.e., $\underline{{\mathcal N}}\bigcap(-\underline{{\mathcal N}}) =\{[0]\}$ (respectively $\overline{{\mathcal N}}\bigcap(-\overline{{\mathcal N}}) =\{[0]\}$), and if $\alpha,\beta\in\underline{{\mathcal N}}$ (respectively $\overline{{\mathcal N}}$) and $a,b\geq 0$, then $a\alpha+b\beta\in\underline{{\mathcal N}}$ (respectively $a\alpha+b\beta\in\overline{{\mathcal N}}$). \item ${\mathcal N}$ is a linear subspace of $\mathbb{TP}(X,\mu)$ spanned by $\underline{{\mathcal N}}(X,\mu)\bigcup\overline{{\mathcal N}}(X,\mu)$. \item ${\mathcal N}=\{[0]\}$ if and only if $\underline{{\mathcal N}}=\{[0]\}$ if and only if $\overline{{\mathcal N}}=\{[0]\}$.
\end{enumerate} \end{lemma} \begin{proof} (1) and (2) follow from Lemma~\ref{L:null} directly. For (3), we observe that $[f]\in\underline{{\mathcal N}}$ if and only if $\llfloor[f]\rrfloor_1=\Vert \underline{f}\Vert_1=0$ if and only if $\llceil-[f]\rrceil_1=\Vert \overline{-f}\Vert_1=0$ if and only if $-[f]\in\overline{{\mathcal N}}$. For (4), if $\underline{f}=0$ and $\overline{f}=0$ almost everywhere, then the constant function $\underline{f}-\overline{f}=\sup(f)-\inf(f)$ vanishes almost everywhere and hence is identically zero, so $f$ is constant and $[f]=[0]$. Using (3) and (4), (5) can be easily verified by definition. For (6), it is straightforward to verify that ${\mathcal N}$ is a linear subspace containing both $\underline{{\mathcal N}}(X,\mu)$ and $\overline{{\mathcal N}}(X,\mu)$. It remains to show that each $[f]\in {\mathcal N}$ can be written as $[f]=[g]+[h]$ where $[g]\in \underline{{\mathcal N}}(X,\mu)$ and $[h]\in \overline{{\mathcal N}}(X,\mu)$. By definition, we know that $f=c$ for some constant $c$ almost everywhere. We let $g=\max(f,c)$ and $h=\min(f,c)$. Then it is clear that $[f]=[g]+[h]$, $[g]\in \underline{{\mathcal N}}(X,\mu)$ and $[h]\in \overline{{\mathcal N}}(X,\mu)$. (7) follows from (3), (4) and (6). \end{proof} We say that ${\mathcal N}$ is trivial if ${\mathcal N}=\{[0]\}$. \begin{lemma} If the measure of every nonempty open subset of $X$ is nonzero, then ${\mathcal N}$ is trivial. \end{lemma} \begin{proof} If $[f]\in \underline{{\mathcal N}}$, then $\mu(\{x\in X\mid \underline{f}(x)\neq 0\})=0$ by definition. Since $\underline{f}$ is continuous, the set $\{x\in X\mid \underline{f}(x)\neq 0\}$ is open; having measure zero, it must be empty by hypothesis, and thus $[f]=[0]$. By Lemma~\ref{L:null2}(7), ${\mathcal N}$ is trivial. \end{proof} \subsection{Basic Properties of $B^p$-Pseudonorms} \begin{proposition} \label{P:BnormProperty} We summarize some properties of the $B^p$-pseudonorms as follows. \begin{enumerate} \item $\llceil\alpha\rrceil_p=\llfloor-\alpha\rrfloor_p$. \item For $1\leq p <\infty$, $\llfloor\alpha\rrfloor_p \leq \mu(X)^{1/p} \Vert \alpha\Vert$ and $\llceil\alpha\rrceil_p \leq \mu(X)^{1/p} \Vert \alpha\Vert$.
\item The $\underline{p}$-pseudonorms and $\overline{p}$-pseudonorms are continuous functions. \item Fixing $\alpha\in\mathbb{TP}(X)$, the functions $\mu(X)^{-1/p}\llfloor\alpha\rrfloor_p$ and $\mu(X)^{-1/p}\llceil\alpha\rrceil_p$ are nondecreasing with respect to $p\in [1,\infty]$. \item $\llfloor\alpha\rrfloor_1+\llceil\alpha\rrceil_1=\llfloor\alpha\rrfloor_1+\llfloor -\alpha\rrfloor_1=\llceil\alpha\rrceil_1+\llceil-\alpha\rrceil_1=\mu(X)\cdot\Vert \alpha \Vert$. \item $\llfloor\alpha\rrfloor_\infty=\llceil\alpha\rrceil_\infty=\Vert \alpha \Vert$. \item $\llfloor c\alpha\rrfloor_p=c\llfloor \alpha\rrfloor_p$ and $\llceil c\alpha\rrceil_p=c\llceil \alpha\rrceil_p$ when $c>0$. \item For $p\in[1,\infty)$, $\alpha=[f]$ and $\beta=[g]$, if $\underline{f}\leq\underline{g}$ almost everywhere, then $ \llfloor \alpha\rrfloor_p \leq \llfloor \beta\rrfloor_p$, and if $\overline{f}\geq\overline{g}$ almost everywhere, then $ \llceil \alpha\rrceil_p \leq \llceil \beta\rrceil_p$. \item (The triangle inequalities) For $1\leq p \leq\infty$, $\llfloor \alpha+\beta\rrfloor_p \leq \llfloor \alpha\rrfloor_p +\llfloor \beta\rrfloor_p$ and $\llceil \alpha+\beta\rrceil_p \leq \llceil \alpha\rrceil_p +\llceil \beta\rrceil_p$. \item $\llfloor \alpha+\beta\rrfloor_1 = \llfloor \alpha\rrfloor_1 +\llfloor \beta\rrfloor_1$ if and only if $ X_{\min}(\alpha)\Cap X_{\min}(\beta) \neq\emptyset$. \item $\llceil \alpha+\beta\rrceil_1 = \llceil \alpha\rrceil_1 +\llceil \beta\rrceil_1$ if and only if $ X_{\max}(\alpha)\Cap X_{\max}(\beta) \neq\emptyset$. \item $\Vert \alpha+\beta \Vert= \Vert \alpha\Vert +\Vert \beta \Vert$ if and only if $\llfloor \alpha+\beta\rrfloor_1 = \llfloor \alpha\rrfloor_1 +\llfloor \beta\rrfloor_1$ and $\llceil \alpha+\beta\rrceil_1 = \llceil \alpha\rrceil_1 +\llceil \beta\rrceil_1$.
\item For any $p\in[1,\infty)$, $\alpha\in\mathbb{TP}(X)$, $\beta_1\in\underline{{\mathcal N}}$ and $\beta_2\in\overline{{\mathcal N}}$, if $X_{\min}(\alpha)\Cap X_{\min}(\beta_1)\neq \emptyset$ and $ X_{\max}(\alpha)\Cap X_{\max}(\beta_2) \neq\emptyset$, then $\llfloor \alpha+\beta_1\rrfloor_p = \llfloor \alpha\rrfloor_p$ and $\llceil \alpha+\beta_2\rrceil_p = \llceil \alpha\rrceil_p$. \item[] For (14) and (15), we suppose that ${\mathcal N}(X,\mu)$ is trivial. \item For $p\in(1,\infty)$, $\llfloor \alpha+\beta\rrfloor_p = \llfloor \alpha\rrfloor_p +\llfloor \beta\rrfloor_p$ if and only if $\llceil \alpha+\beta\rrceil_p = \llceil \alpha\rrceil_p +\llceil \beta\rrceil_p$ if and only if either $\alpha=[0]$ or $\beta=c\alpha$ for some $c\geq0$. \item For any $p\in[1,\infty)$, $\alpha=[f]$ and $\beta=[g]$, if $\underline{f}\lneqq\underline{g}$, then $ \llfloor \alpha\rrfloor_p < \llfloor \beta\rrfloor_p$, and if $\overline{f}\gneqq\overline{g}$, then $ \llceil \alpha\rrceil_p < \llceil \beta\rrceil_p$. \end{enumerate} \end{proposition} \begin{proof} Let $\alpha=[f]$. For (1), we have $\llceil\alpha\rrceil_p=\llceil[f]\rrceil_p=\Vert \overline{f}\Vert_p=\Vert -\overline{f}\Vert_p=\Vert \underline{-f}\Vert_p=\llfloor[-f]\rrfloor_p=\llfloor-\alpha\rrfloor_p$. For (2), $\llfloor\alpha\rrfloor_p=\left(\int_X (\underline{f})^pd\mu\right)^{1/p}\leq \left(\int_X (\max(f)-\min(f))^pd\mu\right)^{1/p}=\left(\mu(X)\Vert \alpha \Vert^p\right)^{1/p}=\mu(X)^{1/p} \Vert \alpha\Vert$. Moreover, $\llceil \alpha\rrceil_p =\llfloor -\alpha \rrfloor_p\leq\mu(X)^{1/p} \Vert -\alpha\Vert = \mu(X)^{1/p} \Vert \alpha\Vert$. (3) follows from (2) directly. For (4), we know from Jensen's inequality that $\mu(X)^{-1/p}\Vert F\Vert_p$ is non-decreasing in $p$ for any function $F$. Then (4) follows by letting $F$ be $\underline{f}$ and $-\overline{f}$ respectively. For (5), note that $\underline{f}-\overline{f}$ is a constant function of value $\max(f)-\min(f)=\Vert \alpha \Vert$.
Therefore $\llfloor\alpha\rrfloor_1+\llceil\alpha\rrceil_1=\int_X (f-\min(f))d\mu + \int_X (\max(f)-f)d\mu=\int_X(\max(f)-\min(f))d\mu=\mu(X)\cdot\Vert \alpha \Vert$. For (6), $\llfloor[f]\rrfloor_\infty=\lim\limits_{p\to\infty}\llfloor[f]\rrfloor_p=\lim\limits_{p\to\infty}\Vert \underline{f}\Vert_p=\Vert\underline{f}\Vert_\infty=\Vert [f]\Vert$. Analogously, $\llceil [f] \rrceil_\infty = \llfloor[-f]\rrfloor_\infty=\Vert [-f] \Vert=\Vert [f] \Vert$. (7) and (8) follow from the definition of pseudonorms directly. Now let $\alpha=[f]$ and $\beta=[g]$. For (9), we have $\llfloor \alpha+\beta\rrfloor_p=\Vert \underline{f+g}\Vert_p\leq\Vert \underline{f}+\underline{g}\Vert_p\leq \Vert \underline{f}\Vert_p+\Vert \underline{g}\Vert_p=\llfloor \alpha\rrfloor_p+\llfloor\beta\rrfloor_p$ and $\llceil \alpha+\beta\rrceil_p = \llfloor -\alpha-\beta\rrfloor_p \leq \llfloor -\alpha\rrfloor_p +\llfloor -\beta\rrfloor_p=\llceil \alpha\rrceil_p +\llceil \beta\rrceil_p$. For (10), note that $\underline{f+g}+(\min(f+g)-\min(f)-\min(g))=\underline{f}+\underline{g}$ where $\min(f+g)-\min(f)-\min(g)\geq0$. Therefore, $\llfloor \alpha+\beta\rrfloor_1 + (\min(f+g)-\min(f)-\min(g))\mu(X)= \llfloor \alpha\rrfloor_1 +\llfloor\beta\rrfloor_1$. As in Lemma~\ref{L:SpeIneq}, the statement follows from the fact that $\min(f+g)-\min(f)-\min(g)=0$ if and only if $ X_{\min}(\alpha)\Cap X_{\min}(\beta) \neq\emptyset$. Moreover, (11) can be derived by replacing $\alpha$ and $\beta$ with $-\alpha$ and $-\beta$ respectively in the above argument. (12) follows from (5), (10) and (11) directly. For (13), suppose $\alpha=[f]$, $\beta_1=[g_1]$ and $\beta_2=[g_2]$. Let $Y_1=X_{\min}(g_1)$ and $Y_2=X_{\max}(g_2)$. Then $\mu(Y_1)=\mu(Y_2)=\mu(X)$ and thus for any non-negative $\mu$-measurable function $F$, $\int_X F d\mu=\int_{Y_1} F d\mu=\int_{Y_2} F d\mu$. 
Since $X_{\min}(\alpha)\Cap X_{\min}(\beta_1)\neq \emptyset$ and $ X_{\max}(\alpha)\Cap X_{\max}(\beta_2) \neq\emptyset$, we have $\underline{f+g_1}=\underline{f}+\underline{g_1}$ and $\overline{f+g_2}=\overline{f}+\overline{g_2}$. Therefore, \begin{align*} \llfloor \alpha+\beta_1\rrfloor_p &= \left(\int_X (\underline{f+g_1})^pd\mu\right)^{1/p}= \left(\int_X (\underline{f}+\underline{g_1})^pd\mu\right)^{1/p} \\ &= \left(\int_{Y_1} (\underline{f}+\underline{g_1})^pd\mu\right)^{1/p}=\left(\int_{Y_1} (\underline{f})^pd\mu\right)^{1/p} =\left(\int_X(\underline{f})^pd\mu\right)^{1/p}=\llfloor \alpha\rrfloor_p \end{align*} and \begin{align*} \llceil \alpha+\beta_2\rrceil_p &= \left(\int_X (-\overline{f+g_2})^pd\mu\right)^{1/p}= \left(\int_X (-\overline{f}-\overline{g_2})^pd\mu\right)^{1/p} \\ &= \left(\int_{Y_2} (-\overline{f}-\overline{g_2})^pd\mu\right)^{1/p}=\left(\int_{Y_2} (-\overline{f})^pd\mu\right)^{1/p} =\left(\int_X(-\overline{f})^pd\mu\right)^{1/p}=\llceil \alpha\rrceil_p. \end{align*} For (14), clearly if either $\alpha=[0]$ or $\beta=c\alpha$ for some $c\geq0$, then $\llfloor \alpha+\beta\rrfloor_p = \llfloor \alpha\rrfloor_p +\llfloor \beta\rrfloor_p$ and $\llceil \alpha+\beta\rrceil_p = \llceil \alpha\rrceil_p +\llceil \beta\rrceil_p$ by definition of pseudonorms. Now suppose $\llfloor \alpha+\beta\rrfloor_p = \llfloor \alpha\rrfloor_p +\llfloor \beta\rrfloor_p$. Then $\Vert \underline{f+g}\Vert_p=\Vert\underline{f}\Vert_p+\Vert\underline{g}\Vert_p$ and we must have $\Vert \underline{f}+\underline{g}\Vert_p=\Vert\underline{f}\Vert_p+\Vert\underline{g}\Vert_p$ since $\Vert \underline{f+g}\Vert_p\leq \Vert \underline{f}+\underline{g}\Vert_p\leq \Vert\underline{f}\Vert_p+\Vert\underline{g}\Vert_p$. Recall that by the Minkowski inequality, $\Vert \underline{f}+\underline{g}\Vert_p\leq\Vert\underline{f}\Vert_p+\Vert\underline{g}\Vert_p$ with equality for $1< p <\infty$ if and only if either $\underline{f}=0$ or $\underline{g}=c\underline{f}$ for some $c\geq0$. 
It follows that either $\alpha=[0]$ or $\beta=c\alpha$ for some $c\geq0$. For the case $\llceil \alpha+\beta\rrceil_p = \llceil \alpha\rrceil_p +\llceil \beta\rrceil_p$, a similar argument applies. (15) is a special case of (8) where the inequalities are strict. Suppose $\underline{f}\lneqq\underline{g}$. Then it is easy to see that in this case $\underline{g-f}=\underline{g}-\underline{f}\gneqq 0$. Since ${\mathcal N}(X,\mu)$ is trivial and $\beta-\alpha\neq[0]$, we know that $\llfloor \beta-\alpha\rrfloor_p> 0$ for all $p\in[1,\infty)$. Now $ \llfloor \alpha\rrfloor_p^p+ \llfloor \beta-\alpha\rrfloor_p^p = \int_X( (\underline{f})^p+ (\underline{g-f})^p)d\mu\leq \int_X( (\underline{f}+\underline{g-f})^p)d\mu= \int_X(\underline{g})^pd\mu= \llfloor \beta\rrfloor_p^p$ and thus $\llfloor \alpha\rrfloor_p< \llfloor \beta\rrfloor_p$. Analogously, suppose $\overline{f}\gneqq\overline{g}$ which implies $\overline{g-f}=\overline{g}-\overline{f}\lneqq 0$. Again, $\llceil \beta-\alpha\rrceil_p> 0$ for all $p\in[1,\infty)$ and we get $ \llceil \alpha\rrceil_p^p+ \llceil \beta-\alpha\rrceil_p^p = \int_X( (-\overline{f})^p+ (-\overline{g-f})^p)d\mu \leq \int_X( (-\overline{f}-\overline{g-f})^p)d\mu= \int_X(-\overline{g})^pd\mu= \llceil \beta\rrceil_p^p$ which means $\llceil \alpha\rrceil_p < \llceil \beta\rrceil_p$. \end{proof} \subsection{The Main Theorem of Tropical Projections} For the rest of the paper, we assume that $X$ is a locally compact Hausdorff space, $\mu$ is a Borel measure on $X$ such that $\mu(X)\in(0,\infty)$ and ${\mathcal N}(X,\mu)$ is trivial. 
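For concreteness, we record a small worked example on a two-point space verifying Proposition~\ref{P:BnormProperty}(5) and the strictness case of (10); the specific values are chosen only for illustration.

\begin{example} Let $X=\{x_1,x_2\}$ with the counting measure $\mu$, so that $\mu(X)=2$ and ${\mathcal N}(X,\mu)$ is trivial. Writing elements of $\mathbb{TP}(X)$ by their values at $(x_1,x_2)$, let $\alpha=[(0,3)]$ and $\beta=[(1,0)]$. Then $\underline{(0,3)}=(0,3)$ and $\overline{(0,3)}=(-3,0)$, so $\llfloor\alpha\rrfloor_1=\llceil\alpha\rrceil_1=3$ and $\Vert\alpha\Vert=3$, and indeed $$\llfloor\alpha\rrfloor_1+\llceil\alpha\rrceil_1=6=\mu(X)\cdot\Vert\alpha\Vert$$ as in Proposition~\ref{P:BnormProperty}(5). Moreover, $X_{\min}(\alpha)=\{x_1\}$ and $X_{\min}(\beta)=\{x_2\}$ are disjoint, and correspondingly the triangle inequality is strict: since $\alpha+\beta=[(1,3)]$ and $\underline{(1,3)}=(0,2)$, we get $$\llfloor\alpha+\beta\rrfloor_1=2<4=\llfloor\alpha\rrfloor_1+\llfloor\beta\rrfloor_1.$$ \end{example}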
\begin{theorem} \label{T:main} For a compact subset $T$ of $\mathbb{TP}(X)$ and an arbitrary element $\gamma$ in $\mathbb{TP}(X)$, consider the following real-valued functions on $T$ defined by $\gamma$ using the $p$-pseudonorms with respect to $\mu$: \begin{enumerate} \item $\Theta^{(T,\gamma)}_\infty:T \to [0,\infty)$ given by $\alpha\mapsto\Vert\alpha-\gamma\Vert$, \item $\underline{\Theta}^{(T,\gamma)}_p:T \to [0,\infty)$ with $p\in[1,\infty)$ given by $\alpha\mapsto\llfloor \alpha-\gamma\rrfloor_p$ and \item $\overline{\Theta}^{(T,\gamma)}_p:T \to [0,\infty)$ with $p\in[1,\infty)$ given by $\alpha\mapsto\llceil \alpha-\gamma\rrceil_p$. \end{enumerate} We have the following conclusions: \begin{enumerate} \item Suppose $T$ is lower tropically convex. For each element $\gamma\in \mathbb{TP}(X)$, there is a unique element $\underline{\pi}_T(\gamma)$, called the lower tropical projection of $\gamma$ to $T$, which minimizes $\underline{\Theta}^{(T,\gamma)}_p$ for all $p\in[1,\infty)$. Moreover, the minimizer of $\Theta^{(T,\gamma)}_\infty$ is compact and lower tropically convex, and it contains $\underline{\pi}_T(\gamma)$. \item Suppose $T$ is upper tropically convex. For each element $\gamma\in \mathbb{TP}(X)$, there is a unique element $\overline{\pi}_T(\gamma)$, called the upper tropical projection of $\gamma$ to $T$, which minimizes $\overline{\Theta}^{(T,\gamma)}_p$ for all $p\in[1,\infty)$. Moreover, the minimizer of $\Theta^{(T,\gamma)}_\infty$ is compact and upper tropically convex, and it contains $\overline{\pi}_T(\gamma)$. \item Suppose $T$ is both lower and upper tropically convex. Then for each element $\gamma\in \mathbb{TP}(X)$, the minimizer of $\Theta^{(T,\gamma)}_\infty$ is also both lower and upper tropically convex. In addition, $\underline{\pi}_T(\gamma)=\overline{\pi}_T(\gamma)$ if and only if the minimizer of $\Theta^{(T,\gamma)}_\infty$ is identical to the singleton $\{\underline{\pi}_T(\gamma)\}=\{\overline{\pi}_T(\gamma)\}$. 
\end{enumerate} \end{theorem} \begin{proof} We will use Proposition~\ref{P:working}. Recall that a closed subset of a complete metric space is complete. Moreover, since $T$ is compact and the $p$-pseudonorms are continuous, the minimizers of $\Theta^{(T,\gamma)}_\infty$, $\underline{\Theta}^{(T,\gamma)}_p$ and $\overline{\Theta}^{(T,\gamma)}_p$ are nonempty. \begin{enumerate} \item Choose an element $\alpha$ from the nonempty minimizer of $\underline{\Theta}^{(T,\gamma)}_p$ for some $p\in[1,\infty)$. We first need to show that the minimizer of $\underline{\Theta}^{(T,\gamma)}_p$ is actually the singleton $\{\alpha\}$. For each element $\beta$ in $T$, consider the lower tropical path $P_{(\alpha,\beta)}$ from $\alpha$ to $\beta$. Note that since $T$ is lower tropically convex, the whole segment $\underline{[\alpha,\beta]}$ is contained in $T$. By Cases (1) and (2) of Proposition~\ref{P:working}, $\underline{\eta}_p(t)=\llfloor P_{(\alpha,\beta)}(t)-\gamma\rrfloor_p=\underline{\Theta}^{(T,\gamma)}_p(P_{(\alpha,\beta)}(t))$ is either strictly increasing or strictly decreasing at $t=0$. But since $\alpha=P_{(\alpha,\beta)}(0)$ minimizes $\underline{\Theta}^{(T,\gamma)}_p$, only Case (1) can happen and $\underline{\eta}_p(t)$ must be strictly increasing. This means that the minimizer of $\underline{\Theta}^{(T,\gamma)}_p$ is exactly the singleton $\{\alpha\}$. Moreover, for all $p\in[1,\infty)$, the minimizers of $\underline{\Theta}^{(T,\gamma)}_p$ are all identical to $\{\alpha\}$ and we can just let $\underline{\pi}_T(\gamma)=\alpha$. This is because the condition $X_{\min}(\beta-\alpha)\Cap X_{\min}(\alpha-\gamma) \neq\emptyset$ for Case (1) is independent of $p$. For the same reason, $\underline{\pi}_T(\gamma)$ minimizes $\Theta^{(T,\gamma)}_\infty$. Let $T_{\min}$ be the minimizer of $\Theta^{(T,\gamma)}_\infty$. Then $T_{\min}$ is compact since $T_{\min}$ is a closed subset of the compact set $T$. 
To show that $T_{\min}$ is lower tropically convex, we choose two elements $\alpha$ and $\beta$ from $T_{\min}$. Then also by Cases (1) and (2) of Proposition~\ref{P:working}, the function $\underline{\eta}_\infty(t)=\llfloor P_{(\alpha,\beta)}(t)-\gamma\rrfloor_\infty=\Vert P_{(\alpha,\beta)}(t)-\gamma \Vert=\Theta^{(T,\gamma)}_\infty(P_{(\alpha,\beta)}(t))$ must be a constant function with value being the minimum of $\Theta^{(T,\gamma)}_\infty$. This means that the whole segment $\underline{[\alpha,\beta]}$ is contained in $T_{\min}$. Hence $T_{\min}$ is lower tropically convex. \item The proof is analogous to that of (1), employing instead Cases (3) and (4) of Proposition~\ref{P:working} and considering the functions $\overline{\eta}_p(t)=\overline{\Theta}^{(T,\gamma)}_p(P^{(\alpha,\beta)}(t))$ and $\overline{\eta}_\infty(t)=\Theta^{(T,\gamma)}_\infty(P^{(\alpha,\beta)}(t))$. Again, we let $\overline{\pi}_T(\gamma)=\alpha$ and only Case (3) can happen for all $p\in[1,\infty]$. \item It is clear from (1) and (2) that the minimizer of $\Theta^{(T,\gamma)}_\infty$ must also be both lower and upper tropically convex when $T$ is both lower and upper tropically convex. It remains to show that if $\underline{\pi}_T(\gamma)=\overline{\pi}_T(\gamma)=\alpha$, then the minimizer of $\Theta^{(T,\gamma)}_\infty$ must also be the singleton $\{\alpha\}$. Actually, as in the above arguments for (1) and (2), we know that Cases (1) and (3) of Proposition~\ref{P:working} happen simultaneously for all $\beta\in T$. Then by Lemma~\ref{L:TropNorm}, $$\Vert\beta-\gamma\Vert=\Vert\beta-\alpha\Vert + \Vert\alpha-\gamma\Vert$$ since $X_{\min}(\beta-\alpha)\Cap X_{\min}(\alpha-\gamma) \neq\emptyset$ and $X_{\max}(\beta-\alpha)\Cap X_{\max}(\alpha-\gamma) \neq\emptyset$. If $\beta\neq\alpha$, then $\Theta^{(T,\gamma)}_\infty(\beta)=\Vert\beta-\gamma\Vert>\Vert\alpha-\gamma\Vert=\Theta^{(T,\gamma)}_\infty(\alpha)$. 
This means that the minimum of $\Theta^{(T,\gamma)}_\infty$ is $\Vert\alpha-\gamma\Vert$ and the minimizer of $\Theta^{(T,\gamma)}_\infty$ is the singleton $\{\alpha\}$. \end{enumerate} \end{proof} \begin{remark} Accordingly, $\underline{\pi}_T$ and $\overline{\pi}_T$ can be considered as maps from $\mathbb{TP}(X)$ to $T$ which are called \emph{lower and upper tropical projections} respectively. \end{remark} \begin{remark} \label{R:NoCompact} The existence of $\underline{\pi}_T(\gamma)$ (or $\overline{\pi}_T(\gamma)$) in Theorem~\ref{T:main} is guaranteed by the compactness of $T$. If this compactness condition is withdrawn, then as long as we know that the minimizer of $\underline{\Theta}^{(T,\gamma)}_p$ when $T$ is lower tropically convex (or of $\overline{\Theta}^{(T,\gamma)}_p$ when $T$ is upper tropically convex) is nonempty for some $p\in[1,\infty)$, the existence and uniqueness of $\underline{\pi}_T(\gamma)$ (or $\overline{\pi}_T(\gamma)$ respectively) are still guaranteed. We conjecture that the theorem still holds if $T$ is only assumed to be a closed, rather than compact, subset of $\mathbb{TP}(X)$. \end{remark} \begin{proposition} \label{P:working} Let $\alpha,\beta$ be distinct elements in $\mathbb{TP}(X)$ such that $\rho(\alpha,\beta)=d$. For $p\in[1,\infty]$ and $\gamma\in\mathbb{TP}(X)$, consider the functions $\underline{\eta}_p(t)=\llfloor P_{(\alpha,\beta)}(t)-\gamma\rrfloor_p$ and $\overline{\eta}_p(t)=\llceil P^{(\alpha,\beta)}(t)-\gamma\rrceil_p$ for $t\in[0,d]$. Then we have the following cases: \begin{enumerate} \item $X_{\min}(\beta-\alpha)\Cap X_{\min}(\alpha-\gamma) \neq\emptyset$: In this case, $\underline{\eta}_\infty(t)$ is non-decreasing and $\underline{\eta}_p(t)$ with $p\in[1,\infty)$ is strictly increasing for $t\in[0,d]$. Moreover, for $t\in[0,d]$, $\underline{\eta}_1(t)=\llfloor P_{(\alpha,\beta)}(t)-\alpha\rrfloor_1+\llfloor \alpha-\gamma\rrfloor_1$. 
\item $X_{\min}(\beta-\alpha)\Cap X_{\min}(\alpha-\gamma)=\emptyset$: In this case, $\underline{\eta}_\infty(t)$ is non-increasing at $t=0$ and $\underline{\eta}_p(t)$ with $p\in[1,\infty)$ is strictly decreasing at $t=0$. \item $X_{\max}(\beta-\alpha)\Cap X_{\max}(\alpha-\gamma) \neq\emptyset$: In this case, $\overline{\eta}_\infty(t)$ is non-decreasing and $\overline{\eta}_p(t)$ with $p\in[1,\infty)$ is strictly increasing for $t\in[0,d]$. Moreover, for $t\in[0,d]$, $\overline{\eta}_1(t)=\llceil P^{(\alpha,\beta)}(t)-\alpha\rrceil_1+\llceil \alpha-\gamma\rrceil_1$. \item $ X_{\max}(\beta-\alpha)\Cap X_{\max}(\alpha-\gamma)=\emptyset$: In this case, $\overline{\eta}_\infty(t)$ is non-increasing at $t=0$ and $\overline{\eta}_p(t)$ with $p\in[1,\infty)$ is strictly decreasing at $t=0$. \end{enumerate} \end{proposition} \begin{remark} We say a function $f(t)$ is non-decreasing (resp. non-increasing, strictly increasing, strictly decreasing, or locally constant) at $t_0$ if there exists $\delta>0$ such that $f(t)$ is non-decreasing (resp. non-increasing, strictly increasing, strictly decreasing, or constant) on $[t_0,t_0+\delta]$. \end{remark} \begin{proof} Let $\alpha=[f]$, $\beta=[g]$ and $\gamma=[h]$. Recall that by definition, $P_{(\alpha,\beta)}(t)= [\min(t,\underline{g-f})+f]$ and $P^{(\alpha,\beta)}(t)= [\max(-t,\overline{g-f})+f]$ for $t\in[0,d]$. Then $P_{(\alpha,\beta)}(t)-\alpha=[\min(t,\underline{g-f})]$, $P^{(\alpha,\beta)}(t)-\alpha=[\max(-t,\overline{g-f})]$, $P_{(\alpha,\beta)}(t)-\gamma=[\min(t,\underline{g-f})+f-h]$ and $P^{(\alpha,\beta)}(t)-\gamma=[\max(-t,\overline{g-f})+f-h]$. Moreover, for $t\in(0,d]$, $X_{\min}(P_{(\alpha,\beta)}(t)-\alpha)=X_{\min}(\min(t,\underline{g-f}))=X_{\min}(\beta-\alpha)=X_{\min}(g-f)$ and $X_{\max}(P^{(\alpha,\beta)}(t)-\alpha)=X_{\max}(\max(-t,\overline{g-f}))=X_{\max}(\beta-\alpha)=X_{\max}(g-f)$. For (1), suppose $X_{\min}(\beta-\alpha)\Cap X_{\min}(\alpha-\gamma) \neq\emptyset$. 
Then for all $t\in(0,d]$, $X_{\min}(P_{(\alpha,\beta)}(t)-\alpha)\Cap X_{\min}(\alpha-\gamma)=X_{\min}(\beta-\alpha)\Cap X_{\min}(\alpha-\gamma)\neq \emptyset$. Moreover, $P_{(\alpha,\beta)}(t)-\gamma=[\min(t,\underline{g-f})+f-h]$ and in this case, $\underline{\min(t,\underline{g-f})+f-h}=\min(t,\underline{g-f})+\underline{f-h}$. Now $\underline{\eta}_\infty(t)=\Vert [\min(t,\underline{g-f})+\underline{f-h}]\Vert=\sup(\min(t,\underline{g-f})+\underline{f-h})$ is clearly non-decreasing for $t\in[0,d]$. In addition, as in Proposition~\ref{P:BnormProperty}(10), this implies that for $t\in[0,d]$, $\underline{\eta}_1(t)=\llfloor P_{(\alpha,\beta)}(t)-\gamma\rrfloor_1=\llfloor P_{(\alpha,\beta)}(t)-\alpha\rrfloor_1+\llfloor \alpha-\gamma\rrfloor_1$. To show that $\underline{\eta}_p(t)$ with $p\in[1,\infty)$ is strictly increasing for $t\in[0,d]$, consider $t_1,t_2\in[0,d]$ such that $t_1<t_2$ and we claim that $\underline{\eta}_p(t_1)=\llfloor [\min(t_1,\underline{g-f})+\underline{f-h}]\rrfloor_p<\underline{\eta}_p(t_2)=\llfloor [\min(t_2,\underline{g-f})+\underline{f-h}]\rrfloor_p$. Note that $\min(t_1,\underline{g-f})+\underline{f-h}\lneqq \min(t_2,\underline{g-f})+\underline{f-h}$ and the claim follows from Proposition~\ref{P:BnormProperty}(15). For (2), suppose $ X_{\min}(\beta-\alpha)\Cap X_{\min}(\alpha-\gamma)=\emptyset$. In this case, we note that $\beta-\gamma=[g-h]=[\underline{g-f}+\underline{f-h}]$ and $\underline{g-f}+\underline{f-h}=\underline{g-h}+\delta$ where $\delta$ must be strictly larger than $0$. Again recall that $P_{(\alpha,\beta)}(t)-\gamma=[\min(t,\underline{g-f})+f-h]$ and in this case we claim that $\underline{\min(t,\underline{g-f})+f-h}=\min(0,\underline{g-f}-t)+\underline{f-h}=\min(\underline{f-h},\underline{g-f}+\underline{f-h}-t)$ for all $t\in[0,\delta]$. Actually this is implied by $\min (\underline{g-f}+\underline{f-h}-t)=\min(\underline{g-h}+\delta-t)\geq 0$ for $t\in[0,\delta]$. 
Therefore $\underline{\eta}_\infty(t)=\Vert [\min(0,\underline{g-f}-t)+\underline{f-h}]\Vert=\sup(\min(0,\underline{g-f}-t)+\underline{f-h})$ is clearly non-increasing for $t\in[0,\delta]$. Now consider $t_1,t_2\in[0,\delta]$ such that $t_1<t_2$. We claim $\underline{\eta}_p(t_1)=\llfloor [\min(0,\underline{g-f}-t_1)+\underline{f-h}]\rrfloor_p>\underline{\eta}_p(t_2)=\llfloor [\min(0,\underline{g-f}-t_2)+\underline{f-h}]\rrfloor_p$. Note that $\min(0,\underline{g-f}-t_1)+\underline{f-h}\gneqq\min(0,\underline{g-f}-t_2)+\underline{f-h}$ and again the claim follows from Proposition~\ref{P:BnormProperty}(15). For (3) and (4), we let $\alpha=-\alpha'$, $\beta=-\beta'$ and $\gamma=-\gamma'$. Then $X_{\max}(\beta-\alpha)=X_{\min}(\beta'-\alpha')$, $X_{\max}(\alpha-\gamma)=X_{\min}(\alpha'-\gamma')$, $X_{\max}(\beta-\gamma)=X_{\min}(\beta'-\gamma')$, $P^{(\alpha,\beta)}(t)=-P_{(\alpha',\beta')}(t)$ and $\overline{\eta}_p(t)=\llceil P^{(\alpha,\beta)}(t)-\gamma\rrceil_p=\llceil -P_{(\alpha',\beta')}(t)+\gamma'\rrceil_p=\llfloor P_{(\alpha',\beta')}(t)-\gamma'\rrfloor_p$. By replacing $\alpha$, $\beta$ and $\gamma$ with $\alpha'$, $\beta'$ and $\gamma'$ respectively in (1) and (2), we can derive (3) and (4) respectively. \end{proof} We have the following corollary of Proposition~\ref{P:working}, which summarizes some concrete criteria for the lower and upper tropical projections stated in Theorem~\ref{T:main}. \begin{corollary} [\textbf{Criteria for Tropical Projections}] \label{C:CritTropProj} Let $T$ be a subset (not necessarily compact) of $\mathbb{TP}(X)$. Let $\gamma\in\mathbb{TP}(X)$ and $\alpha\in T$. \begin{enumerate}[(a)] \item If $T$ is lower tropically convex, then the following are equivalent: \begin{enumerate}[(1)] \item $\alpha=\underline{\pi}_T(\gamma)$. \item For every $p\in[1,\infty)$ and every $\beta\in T$, the function $\underline{\eta}_p(t)=\llfloor P_{(\alpha,\beta)}(t)-\gamma\rrfloor_p$ is strictly increasing for $t\in[0,\rho(\alpha,\beta)]$. 
\item For every $p\in[1,\infty)$ and every $\beta\in T$, the function $\underline{\eta}_p(t)=\llfloor P_{(\alpha,\beta)}(t)-\gamma\rrfloor_p$ is strictly increasing at $t=0$. \item For every $\beta\in T$, $\llfloor \beta-\gamma\rrfloor_1=\llfloor \beta-\alpha\rrfloor_1+\llfloor \alpha-\gamma\rrfloor_1$. \item For every $\beta\in T$, $X_{\min}(\beta-\alpha)\Cap X_{\min}(\alpha-\gamma)\neq\emptyset$. \item For every $\beta\in T$ such that $\beta\neq \alpha$, $X_{\min}(\alpha-\beta)\Cap X_{\min}(\beta-\gamma)=\emptyset$. \end{enumerate} \item If $T$ is upper tropically convex, then the following are equivalent: \begin{enumerate}[(1)] \item $\alpha=\overline{\pi}_T(\gamma)$. \item For every $p\in[1,\infty)$ and every $\beta\in T$, the function $\overline{\eta}_p(t)=\llceil P^{(\alpha,\beta)}(t)-\gamma\rrceil_p$ is strictly increasing for $t\in[0,\rho(\alpha,\beta)]$. \item For every $p\in[1,\infty)$ and every $\beta\in T$, the function $\overline{\eta}_p(t)=\llceil P^{(\alpha,\beta)}(t)-\gamma\rrceil_p$ is strictly increasing at $t=0$. \item For every $\beta\in T$, $\llceil \beta-\gamma\rrceil_1=\llceil \beta-\alpha\rrceil_1+\llceil \alpha-\gamma\rrceil_1$. \item For every $\beta\in T$, $X_{\max}(\beta-\alpha)\Cap X_{\max}(\alpha-\gamma)\neq\emptyset$. \item For every $\beta\in T$ such that $\beta\neq \alpha$, $X_{\max}(\alpha-\beta)\Cap X_{\max}(\beta-\gamma)=\emptyset$. \end{enumerate} \item The lower and upper tropical projections are independent of the Borel measure $\mu$ on $X$ as long as $\mu(X)\in(0,\infty)$ and ${\mathcal N}(X,\mu)$ is trivial. \end{enumerate} \end{corollary} \begin{proof} All the criteria in (a) and (b) easily follow from Proposition~\ref{P:working}. For (c), we note that the set-theoretical criteria (a5), (a6), (b5) and (b6) actually do not depend on the underlying measure $\mu$ itself. 
\end{proof} \begin{remark} \label{R:TropProjMeas} In Theorem~\ref{T:main}, we see that minimizers of all the functions defined with respect to the $\underline{p}$-pseudonorms and $\overline{p}$-pseudonorms for all $p\in[1,\infty)$ coincide as the lower or upper tropical projections respectively. However, one may note that the $\underline{p}$-pseudonorms and $\overline{p}$-pseudonorms with $p\in[1,\infty)$ depend on the underlying measure $\mu$ on $X$. On the other hand, Corollary~\ref{C:CritTropProj}(c) says that actually we can expect more: the tropical projections are so intrinsic that they are even independent of the measure $\mu$ on $X$. \end{remark} \begin{example} \label{E:bpseudonorm} \begin{figure} \centering \includegraphics[scale=1]{Figure_bpseudonorm.pdf} \caption{Tropical norm and some $B^p$-pseudonorms on $\mathbb{TP}(\{{\mathbf e}_1,{\mathbf e}_2,{\mathbf e}_3\})$ represented by the $x_1x_2$-plane (as in Example~\ref{E:tconv}).} \label{F:bpseudonorm} \end{figure} Consider $\mathbb{TP}(X)$ with $X=\{{\mathbf e}_1,{\mathbf e}_2,{\mathbf e}_3\}$ which is represented by the $x_1x_2$-plane as in Example~\ref{E:tconv}. We associate $X$ with a measure $\mu$ such that $\mu({\mathbf e}_i)\in(0,\infty)$ for $i=1,2,3$. 
For $\alpha=(x_1,x_2)$ and $p\in[1,\infty)$, we have \begin{align*} \Vert\alpha\Vert &= \begin{cases} \max(x_1,x_2) & \mbox{if } \min(x_1,x_2)\geq 0 \\ x_2-x_1 & \mbox{if } x_1\leq 0 \leq x_2 \\ x_1-x_2 & \mbox{if } x_2\leq 0 \leq x_1 \\ -x_1 & \mbox{if } x_1\leq x_2 \leq 0 \\ -x_2 & \mbox{if } x_2\leq x_1 \leq 0 \end{cases} \\ \llfloor\alpha\rrfloor_p &= \begin{cases} (\mu({\mathbf e}_1)x_1^p+\mu({\mathbf e}_2)x_2^p)^{1/p} & \mbox{if } \min(x_1,x_2)\geq 0 \\ (\mu({\mathbf e}_2)(x_2-x_1)^p+\mu({\mathbf e}_3)(-x_1)^p)^{1/p} & \mbox{if } \min(x_2,0)\geq x_1 \\ (\mu({\mathbf e}_1)(x_1-x_2)^p+\mu({\mathbf e}_3)(-x_2)^p)^{1/p} & \mbox{if } \min(x_1,0)\geq x_2 \end{cases} \\ \llceil\alpha\rrceil_p &= \begin{cases} (\mu({\mathbf e}_1)(-x_1)^p+\mu({\mathbf e}_2)(-x_2)^p)^{1/p} & \mbox{if } \max(x_1,x_2)\leq 0 \\ (\mu({\mathbf e}_2)(x_1-x_2)^p+\mu({\mathbf e}_3)(x_1)^p)^{1/p} & \mbox{if } \max(x_2,0)\leq x_1 \\ (\mu({\mathbf e}_1)(x_2-x_1)^p+\mu({\mathbf e}_3)(x_2)^p)^{1/p} & \mbox{if } \max(x_1,0)\leq x_2 \end{cases} \end{align*} by definition of the tropical norm and $B^p$-pseudonorms (Definition~\ref{D:pseudonorm}). In particular, Figure~\ref{F:bpseudonorm} shows the 2D plots of the functions $\Vert (x_1,x_2) \Vert$, $\llfloor (x_1,x_2) \rrfloor_1$, $\llfloor (x_1,x_2) \rrfloor_2$, $\llfloor (x_1,x_2) \rrfloor_4$, $\llceil (x_1,x_2) \rrceil_1$, $\llceil (x_1,x_2) \rrceil_2$ and $\llceil (x_1,x_2) \rrceil_4$ for $x_1\in[-5,5]$ and $x_2\in[-5,5]$ where $\mu({\mathbf e}_1)=\mu({\mathbf e}_2)=\mu({\mathbf e}_3)=1/3$. Moreover, in the first subfigure, we let $O=(0,0)$, $A=(-1,3)$, $B=(-1,1)$, $C=(2,1)$, $D=(2,3)$, $E=(0,1)$ and $F=(1,1)$. Let $T$ be the rectangle with vertices $A$, $B$, $C$ and $D$. Then it can be easily verified that $T=\underline{\overline{\mathrm{tconv}}}(\{A,C\})$ which is compact and both lower and upper tropically convex. 
Then $\Theta^{(T,O)}_\infty(\alpha) = \Vert\alpha\Vert$, $\underline{\Theta}^{(T,O)}_p(\alpha) =\llfloor \alpha \rrfloor_p$ and $\overline{\Theta}^{(T,O)}_p(\alpha) =\llceil \alpha \rrceil_p$. In the second subfigure, we plot the curves of $\Vert \alpha\Vert$, $\llfloor \alpha \rrfloor_1$, $\llfloor \alpha\rrfloor_2$, $\llfloor \alpha \rrfloor_4$, $\llceil \alpha \rrceil_1$, $\llceil \alpha \rrceil_2$ and $\llceil \alpha \rrceil_4$ for all $\alpha$ in the segment $BC$. We note that the minimizer of $\Theta^{(T,O)}_\infty$ is the segment $EF$ which is also compact and both lower and upper tropically convex, the minimizers of $\underline{\Theta}^{(T,O)}_1$, $\underline{\Theta}^{(T,O)}_2$ and $\underline{\Theta}^{(T,O)}_4$ are all identical to the singleton $\{E\}$, and the minimizers of $\overline{\Theta}^{(T,O)}_1$, $\overline{\Theta}^{(T,O)}_2$ and $\overline{\Theta}^{(T,O)}_4$ are all identical to the singleton $\{F\}$. Actually, Theorem~\ref{T:main} tells us that for all $p\in[1,\infty)$, the minimizers of $\underline{\Theta}^{(T,O)}_p$ are identical to a singleton which must be $\{E\}$, and the minimizers of $\overline{\Theta}^{(T,O)}_p$ are identical to a singleton which must be $\{F\}$. Therefore, the lower tropical projection $\underline{\pi}_T(O)$ and upper tropical projection $\overline{\pi}_T(O)$ of $O$ to $T$ are $E$ and $F$ respectively. \end{example} \subsection{Basic Properties of Tropical Projections} The following proposition shows some basic properties of tropical projection maps. \begin{proposition} \label{P:TropProj} Let $T$ be a compact subset of $\mathbb{TP}(X)$. 
\begin{enumerate} \item If $T$ is lower tropically convex (respectively upper tropically convex), then $\alpha_0+cT$ is lower tropically convex (respectively upper tropically convex) for each $\alpha_0\in\mathbb{TP}(X)$ and $c>0$, and $\underline{\pi}_{\alpha_0+cT}(\alpha_0+c\alpha)=\alpha_0+c\underline{\pi}_T(\alpha)$ (respectively $\overline{\pi}_{\alpha_0+cT}(\alpha_0+c\alpha)=\alpha_0+c\overline{\pi}_T(\alpha)$) for all $\alpha\in\mathbb{TP}(X)$. \item If $T$ is lower tropically convex (which is equivalent to saying that $-T$ is upper tropically convex), then $\overline{\pi}_{-T}(-\alpha)=-\underline{\pi}_T(\alpha)$ for all $\alpha\in\mathbb{TP}(X)$. \item If $T$ is lower tropically convex (respectively upper tropically convex), then $\underline{\pi}_T(\alpha)=\alpha$ (respectively $\overline{\pi}_T(\alpha)=\alpha$) for all $\alpha\in T$. \item If $T$ is lower tropically convex (respectively upper tropically convex), then for each $\alpha\in\mathbb{TP}(X)$ and each $\beta\in \underline{[\alpha,\underline{\pi}_T(\alpha)]}$ (respectively $\beta\in \overline{[\alpha,\overline{\pi}_T(\alpha)]}$), we have $\underline{\pi}_T(\beta)=\underline{\pi}_T(\alpha)$ (respectively $\overline{\pi}_T(\beta)=\overline{\pi}_T(\alpha)$). \item If $T$ is lower tropically convex, then for each $\alpha,\beta\in \mathbb{TP}(X)$, $\rho(\underline{\pi}_T(\alpha), \underline{\pi}_T(\beta))\leq\rho(\alpha,\beta)$ and the equality holds if and only if $\alpha=\overline{\pi}_L(\underline{\pi}_T(\alpha))$ and $\beta=\overline{\pi}_L(\underline{\pi}_T(\beta))$ where $L=\overline{[\alpha,\beta]}$. Accordingly, if $T$ is upper tropically convex, then for each $\alpha,\beta\in \mathbb{TP}(X)$, $\rho(\overline{\pi}_T(\alpha), \overline{\pi}_T(\beta))\leq\rho(\alpha,\beta)$ and the equality holds if and only if $\alpha=\underline{\pi}_L(\overline{\pi}_T(\alpha))$ and $\beta=\underline{\pi}_L(\overline{\pi}_T(\beta))$ where $L=\underline{[\alpha,\beta]}$. 
\item For $\alpha,\beta,\gamma\in\mathbb{TP}(X)$, if $T$ is lower tropically convex and $\rho(\underline{\pi}_T(\alpha), \underline{\pi}_T(\beta))=\rho(\alpha,\beta)=\rho(\alpha,\gamma)+\rho(\gamma,\beta)$, then $\rho(\alpha,\gamma)= \rho(\underline{\pi}_T(\alpha), \underline{\pi}_T(\gamma))$ and $\rho(\gamma,\beta)= \rho(\underline{\pi}_T(\gamma), \underline{\pi}_T(\beta))$; if $T$ is upper tropically convex and $\rho(\overline{\pi}_T(\alpha), \overline{\pi}_T(\beta))=\rho(\alpha,\beta)=\rho(\alpha,\gamma)+\rho(\gamma,\beta)$, then $\rho(\alpha,\gamma)= \rho(\overline{\pi}_T(\alpha), \overline{\pi}_T(\gamma))$ and $\rho(\gamma,\beta)= \rho(\overline{\pi}_T(\gamma), \overline{\pi}_T(\beta))$. \end{enumerate} \end{proposition} \begin{remark} The existence of $\underline{\pi}_{T}(\alpha)$ and $\overline{\pi}_{T}(\alpha)$ in the above proposition is not guaranteed for all $\alpha \in \mathbb{TP}(X)$ if we do not assume the compactness of $T$. As in Remark~\ref{R:NoCompact} for Theorem~\ref{T:main}, if the compactness assumption in the above proposition is withdrawn, the statements still hold provided that, whenever $\underline{\pi}_{T}(\cdot)$ or $\overline{\pi}_{T}(\cdot)$ is mentioned, its existence is assumed. \end{remark} \begin{proof} (1) follows easily from Lemma~\ref{L:OpTropHull}(1) and (2), Proposition~\ref{P:BnormProperty}(7), and Corollary~\ref{C:CritTropProj}~(a4) and (b4). (2) follows easily from Lemma~\ref{L:OpTropHull}(3), Proposition~\ref{P:BnormProperty}(1), and Corollary~\ref{C:CritTropProj}~(a4) and (b4). (3) is also clear by Corollary~\ref{C:CritTropProj}~(a4) and (b4). 
For (4), we note that if $T$ is lower tropically convex (respectively upper tropically convex), then for $\beta\in \underline{[\alpha,\underline{\pi}_T(\alpha)]}$ (respectively $\beta\in \overline{[\alpha,\overline{\pi}_T(\alpha)]}$), we have $X_{\min}(\underline{\pi}_T(\alpha)-\alpha)\subseteq X_{\min}(\underline{\pi}_T(\alpha)-\beta)$ (respectively $X_{\max}(\overline{\pi}_T(\alpha)-\alpha)\subseteq X_{\max}(\overline{\pi}_T(\alpha)-\beta)$). Therefore, $\underline{\pi}_T(\beta)=\underline{\pi}_T(\alpha)$ (respectively $\overline{\pi}_T(\beta)=\overline{\pi}_T(\alpha)$) by Corollary~\ref{C:CritTropProj}~(a5) (respectively (b5)). For (5), we use Proposition~\ref{P:BnormProperty}(1)(5)(9), Corollary~\ref{C:CritTropProj}~(a4) and (b4), and see that when $T$ is lower tropically convex, \begin{align*} &\rho(\underline{\pi}_T(\alpha), \underline{\pi}_T(\beta)) \mu(X) \\ &=\llfloor \underline{\pi}_T(\alpha)-\underline{\pi}_T(\beta)\rrfloor_1 + \llfloor \underline{\pi}_T(\beta)-\underline{\pi}_T(\alpha)\rrfloor_1\\ &= (\llfloor \underline{\pi}_T(\alpha)-\beta\rrfloor_1- \llfloor \underline{\pi}_T(\beta)-\beta\rrfloor_1)+(\llfloor \underline{\pi}_T(\beta)-\alpha\rrfloor_1- \llfloor \underline{\pi}_T(\alpha)-\alpha\rrfloor_1) \\ &= (\llfloor \underline{\pi}_T(\alpha)-\beta\rrfloor_1-\llfloor \underline{\pi}_T(\alpha)-\alpha\rrfloor_1)+(\llfloor \underline{\pi}_T(\beta)-\alpha\rrfloor_1-\llfloor \underline{\pi}_T(\beta)-\beta\rrfloor_1) \\ &\leq \llfloor \alpha-\beta\rrfloor_1+\llfloor \beta-\alpha\rrfloor_1\\ &=\rho(\alpha, \beta) \mu(X). 
\end{align*} with equality if and only if $\llfloor \underline{\pi}_T(\alpha)-\beta\rrfloor_1=\llfloor \underline{\pi}_T(\alpha)-\alpha\rrfloor_1+\llfloor \alpha-\beta\rrfloor_1$ and $\llfloor \underline{\pi}_T(\beta)-\alpha\rrfloor_1=\llfloor \underline{\pi}_T(\beta)-\beta\rrfloor_1+\llfloor \beta-\alpha\rrfloor_1$ if and only if $\llceil \beta- \underline{\pi}_T(\alpha)\rrceil_1=\llceil \alpha-\underline{\pi}_T(\alpha)\rrceil_1+\llceil \beta-\alpha\rrceil_1$ and $\llceil \alpha-\underline{\pi}_T(\beta)\rrceil_1=\llceil \beta-\underline{\pi}_T(\beta)\rrceil_1+\llceil \alpha-\beta\rrceil_1$ if and only if $\alpha$ is the upper tropical projection of $\underline{\pi}_T(\alpha)$ to $\overline{[\alpha,\beta]}$ and $\beta$ is the upper tropical projection of $\underline{\pi}_T(\beta)$ to $\overline{[\alpha,\beta]}$. Accordingly, we can prove the case when $T$ is upper tropically convex. For (6), we have $\rho(\alpha,\beta)=\rho(\underline{\pi}_T(\alpha), \underline{\pi}_T(\beta))\leq\rho(\underline{\pi}_T(\alpha), \underline{\pi}_T(\gamma))+\rho(\underline{\pi}_T(\gamma), \underline{\pi}_T(\beta))\leq \rho(\alpha,\gamma)+\rho(\gamma,\beta)=\rho(\alpha,\beta)$ which implies $\rho(\alpha,\gamma)= \rho(\underline{\pi}_T(\alpha), \underline{\pi}_T(\gamma))$ and $\rho(\gamma,\beta)= \rho(\underline{\pi}_T(\gamma), \underline{\pi}_T(\beta))$. The case when $T$ is upper tropically convex can be proved analogously. \end{proof} \begin{proposition} \label{P:SeqTropProj} Let $T$ and $T'$ be compact lower tropically convex (respectively upper tropically convex) subsets of $\mathbb{TP}(X)$ such that $T'\subseteq T$. Then for each $\alpha\in \mathbb{TP}(X)$, $\underline{\pi}_{T'}(\alpha)=\underline{\pi}_{T'}(\underline{\pi}_T(\alpha))$ (respectively $\overline{\pi}_{T'}(\alpha)=\overline{\pi}_{T'}(\overline{\pi}_T(\alpha))$). \end{proposition} \begin{proof} We will only prove the case of lower tropical convexity while the case of upper tropical convexity can be proved analogously. 
To prove $\underline{\pi}_{T'}(\alpha)=\underline{\pi}_{T'}(\underline{\pi}_T(\alpha))$, it suffices to show that $\llfloor \beta-\alpha \rrfloor_1 = \llfloor \beta-\underline{\pi}_{T'}(\underline{\pi}_T(\alpha)) \rrfloor_1 +\llfloor \underline{\pi}_{T'}(\underline{\pi}_T(\alpha))-\alpha \rrfloor_1 $ for every $\beta\in T'$ by Corollary~\ref{C:CritTropProj}(a4). Actually, by applying Corollary~\ref{C:CritTropProj}(a4) to $T$ with respect to $\alpha$, we get $\llfloor \gamma-\alpha \rrfloor_1 = \llfloor \gamma-\underline{\pi}_T(\alpha) \rrfloor_1 +\llfloor\underline{\pi}_T(\alpha)-\alpha \rrfloor_1 $ for every $\gamma\in T$, and in particular $\llfloor \underline{\pi}_{T'}(\underline{\pi}_T(\alpha))-\alpha \rrfloor_1 = \llfloor \underline{\pi}_{T'}(\underline{\pi}_T(\alpha))-\underline{\pi}_T(\alpha) \rrfloor_1 +\llfloor\underline{\pi}_T(\alpha)-\alpha \rrfloor_1 $. Now applying Corollary~\ref{C:CritTropProj}(a4) to $T'$ with respect to $\underline{\pi}_T(\alpha)$, we get $\llfloor \beta-\underline{\pi}_T(\alpha)\rrfloor_1 = \llfloor \beta-\underline{\pi}_{T'}(\underline{\pi}_T(\alpha)) \rrfloor_1 +\llfloor\underline{\pi}_{T'}(\underline{\pi}_T(\alpha))-\underline{\pi}_T(\alpha)\rrfloor_1 $ for every $\beta\in T'$. Therefore, \begin{align*} & \llfloor \beta-\alpha \rrfloor_1 =\llfloor \beta-\underline{\pi}_T(\alpha)\rrfloor_1+\llfloor \underline{\pi}_T(\alpha)-\alpha\rrfloor_1 \\ &= \llfloor \beta-\underline{\pi}_{T'}(\underline{\pi}_T(\alpha)) \rrfloor_1 +\llfloor\underline{\pi}_{T'}(\underline{\pi}_T(\alpha))-\underline{\pi}_T(\alpha)\rrfloor_1+\llfloor\underline{\pi}_T(\alpha)-\alpha \rrfloor_1 \\ &= \llfloor \beta-\underline{\pi}_{T'}(\underline{\pi}_T(\alpha)) \rrfloor_1 +\llfloor \underline{\pi}_{T'}(\underline{\pi}_T(\alpha))-\alpha \rrfloor_1 \end{align*} for every $\beta\in T'$, which means that $\underline{\pi}_{T'}(\underline{\pi}_T(\alpha))$ is exactly the lower tropical projection of $\alpha$ in $T'$ as claimed.
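The identity just established can be sanity-checked numerically in a small finite model. The following Python sketch is purely illustrative and rests on assumptions not made in the text: it takes $X$ to be a finite set with counting measure, represents elements of $\mathbb{TP}(X)$ by integer vectors modulo additive constants, and assumes that the lower tropical projection onto a finitely generated lower tropical polytope is computed by the usual min-plus residuation formula (the least tropical combination of the generators lying coordinatewise above the given point).

```python
import numpy as np

def rho(a, b):
    # tropical metric on TP(X): the range (max minus min) of the difference
    d = a - b
    return int(d.max() - d.min())

def lower_proj(gens, x):
    # assumed residuation formula: the least min-plus combination of the
    # row vectors of `gens` lying coordinatewise above x
    c = np.max(x - gens, axis=1)              # c_i = max_j (x_j - v_{ij})
    return np.min(gens + c[:, None], axis=0)  # min_i (c_i + v_i)

rng = np.random.default_rng(0)
for _ in range(200):
    V = rng.integers(-8, 9, size=(4, 5))      # generators of T
    x = rng.integers(-8, 9, size=5)
    p_T = lower_proj(V, x)
    # T' is generated by the first two generators of T, so T' is contained
    # in T, and projecting to T' factors through projecting to T
    assert np.array_equal(lower_proj(V[:2], x), lower_proj(V[:2], p_T))
    # the projection should be at least as close to x as any single generator
    assert rho(x, p_T) <= min(rho(x, v) for v in V)
```

Here $T'$ is generated by a subset of the generators of $T$, so $T'\subseteq T$, and the first assertion checks $\underline{\pi}_{T'}(\alpha)=\underline{\pi}_{T'}(\underline{\pi}_T(\alpha))$ coordinatewise.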
\end{proof} For some initial results on tropical projections, we consider projections to tropical segments in the following lemmas. \begin{lemma} \label{L:TropProjSegment} Let $\beta_1,\beta_2,\gamma$ be elements in $\mathbb{TP}(X)$ and let $\alpha=\underline{\pi}_{\underline{[\beta_1,\beta_2]}}(\gamma)$ (respectively $\alpha=\overline{\pi}_{\overline{[\beta_1,\beta_2]}}(\gamma)$). Then for all $\beta\in\underline{[\beta_1,\beta_2]}$, $X_{\min}(\beta-\gamma)\subseteq X_{\min}(\alpha-\gamma)=X_{\min}(\beta_1-\gamma)\bigcup X_{\min}(\beta_2-\gamma)$ (respectively for all $\beta\in\overline{[\beta_1,\beta_2]}$, $X_{\max}(\beta-\gamma)\subseteq X_{\max}(\alpha-\gamma)=X_{\max}(\beta_1-\gamma)\bigcup X_{\max}(\beta_2-\gamma)$). \end{lemma} \begin{proof} We will prove the case of lower tropical convexity while the case of upper tropical convexity can be proved analogously. Since $\alpha=\underline{\pi}_{\underline{[\beta_1,\beta_2]}}(\gamma)$, we have $$X_{\min}(\beta_1-\gamma)=X_{\min}(\beta_1-\alpha)\bigcap X_{\min}(\alpha-\gamma),$$ $$X_{\min}(\beta_2-\gamma)=X_{\min}(\beta_2-\alpha)\bigcap X_{\min}(\alpha-\gamma),$$ $$X_{\min}(\beta-\gamma)=X_{\min}(\beta-\alpha)\bigcap X_{\min}(\alpha-\gamma)$$ by Corollary~\ref{C:CritTropProj} and Lemma~\ref{L:XminXmax}. In addition, $X_{\min}(\beta_1-\alpha)\bigcup X_{\min}(\beta_2-\alpha)=X$ by Proposition~\ref{P:TropSeg}(5) and therefore \begin{align*} X_{\min}(\beta-\gamma)&=X_{\min}(\beta-\alpha)\bigcap X_{\min}(\alpha-\gamma)\subseteq X_{\min}(\alpha-\gamma) \\ &=(X_{\min}(\beta_1-\alpha)\bigcap X_{\min}(\alpha-\gamma))\bigcup(X_{\min}(\beta_2-\alpha)\bigcap X_{\min}(\alpha-\gamma)) \\ &=X_{\min}(\beta_1-\gamma)\bigcup X_{\min}(\beta_2-\gamma). \end{align*} \end{proof} \begin{lemma} \label{L:SegSum} Let $\alpha,\beta,\gamma$ be elements in $\mathbb{TP}(X)$.
Then $\rho(\alpha,\gamma)=\rho(\alpha,\beta)+\rho(\beta,\gamma)$ if and only if $\beta=\underline{\pi}_{\underline{[\alpha,\beta]}}(\gamma)=\underline{\pi}_{\underline{[\beta,\gamma]}}(\alpha)$ if and only if $\beta=\overline{\pi}_{\overline{[\alpha,\beta]}}(\gamma)=\overline{\pi}_{\overline{[\beta,\gamma]}}(\alpha)$. \end{lemma} \begin{proof} By Proposition~\ref{P:BnormProperty}, $\rho(\alpha,\gamma)=\rho(\alpha,\beta)+\rho(\beta,\gamma)$ if and only if $\llfloor \alpha-\gamma\rrfloor_1 =\llfloor \alpha-\beta\rrfloor_1+\llfloor \beta-\gamma \rrfloor_1 $ and $\llfloor \gamma-\alpha\rrfloor_1 =\llfloor \gamma-\beta\rrfloor_1+\llfloor \beta-\alpha \rrfloor_1 $ if and only if $\llceil \alpha-\gamma\rrceil_1 =\llceil \alpha-\beta\rrceil_1+\llceil \beta-\gamma \rrceil_1 $ and $\llceil \gamma-\alpha\rrceil_1 =\llceil \gamma-\beta\rrceil_1+\llceil \beta-\alpha \rrceil_1 $. We then note that \begin{enumerate} \item $\llfloor \alpha-\gamma\rrfloor_1 =\llfloor \alpha-\beta\rrfloor_1+\llfloor \beta-\gamma \rrfloor_1 $ if and only if $\beta=\underline{\pi}_{\underline{[\alpha,\beta]}}(\gamma)$; \item $\llfloor \gamma-\alpha\rrfloor_1 =\llfloor \gamma-\beta\rrfloor_1+\llfloor \beta-\alpha \rrfloor_1 $ if and only if $\beta=\underline{\pi}_{\underline{[\beta,\gamma]}}(\alpha)$; \item $\llceil \alpha-\gamma\rrceil_1 =\llceil \alpha-\beta\rrceil_1+\llceil \beta-\gamma \rrceil_1 $ if and only if $\beta = \overline{\pi}_{\overline{[\alpha,\beta]}}(\gamma)$; \item $\llceil \gamma-\alpha\rrceil_1 =\llceil \gamma-\beta\rrceil_1+\llceil \beta-\alpha \rrceil_1 $ if and only if $\beta = \overline{\pi}_{\overline{[\beta,\gamma]}}(\alpha)$. 
\end{enumerate} Combining these four equivalences completes the proof. \end{proof} \subsection{Balls and Tropical Convex Functions} For any $\alpha\in\mathbb{TP}(X)$, we say that ${\mathcal B}(\alpha,r):=\{\beta\in \mathbb{TP}(X)\mid \rho(\alpha,\beta)\leq r\}$ and ${\mathcal B}^0(\alpha,r):=\{\beta\in \mathbb{TP}(X)\mid \rho(\alpha,\beta)< r\}$ are respectively the \emph{closed ball} and \emph{open ball} of radius $r$ centered at $\alpha$. Moreover, for a function $f$ on $\mathbb{TP}(X)$ and a function $g$ on a subset $S$ of $\mathbb{TP}(X)$, we write ${\mathcal L}_{*r}(f):=\{\alpha\in\mathbb{TP}(X)\mid f(\alpha)*r\}$ and ${\mathcal L}^S_{*r}(g):=\{\alpha\in S\mid g(\alpha)*r\}$ where $*$ can be $=$, $<$, $\leq$, $>$ or $\geq$. Note that in this sense, ${\mathcal B}(\alpha,r)={\mathcal L}_{\leq r}(\rho(\alpha,\cdot))={\mathcal L}_{\leq r}(\Vert \cdot -\alpha\Vert)$ and ${\mathcal B}^0(\alpha,r)={\mathcal L}_{< r}(\rho(\alpha,\cdot))={\mathcal L}_{< r}(\Vert \cdot-\alpha\Vert)$. \begin{lemma} Let $\alpha\in\mathbb{TP}(X)$, $r\geq 0$ and $p\in[1,\infty]$. \begin{enumerate} \item ${\mathcal L}_{\leq r}(\llfloor \cdot-\alpha\rrfloor_p)$ and ${\mathcal L}_{< r}(\llfloor \cdot-\alpha\rrfloor_p)$ are lower tropically convex. \item ${\mathcal L}_{\leq r}(\llceil \cdot-\alpha\rrceil_p)$ and ${\mathcal L}_{< r}(\llceil \cdot-\alpha\rrceil_p)$ are upper tropically convex. \item ${\mathcal B}(\alpha,r)$ and ${\mathcal B}^0(\alpha,r)$ are both lower and upper tropically convex. \end{enumerate} \end{lemma} \begin{proof} Let $\beta_1$ and $\beta_2$ be elements in $\mathbb{TP}(X)$. Let $\beta_0=\underline{\pi}_{\underline{[\beta_1,\beta_2]}}(\alpha)$. Then by Theorem~\ref{T:main} and Proposition~\ref{P:working}, for all $\beta\in\underline{[\beta_0,\beta_1]}$, $\llfloor \beta_0-\alpha\rrfloor_p\leq \llfloor \beta-\alpha\rrfloor_p \leq \llfloor \beta_1-\alpha\rrfloor_p$, and for all $\beta\in\underline{[\beta_0,\beta_2]}$, $\llfloor \beta_0-\alpha\rrfloor_p\leq \llfloor \beta-\alpha\rrfloor_p \leq \llfloor \beta_2-\alpha\rrfloor_p$.
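The segment inequalities above can be checked numerically in a small finite model. The Python sketch below is illustrative only; it assumes a five-point $X$ with counting measure, restricts to $p\in\{1,\infty\}$, and parametrizes the lower tropical segment by pointwise minima of shifted representatives — these are assumptions about the setting rather than statements from the text.

```python
import numpy as np

def bfloor(v, p):
    # lower B-pseudonorm in the assumed finite model with counting measure:
    # the l^p norm of v shifted so that its minimum becomes 0
    w = v - v.min()
    return w.max() if p == np.inf else w.sum()  # p = inf or p = 1 only

rng = np.random.default_rng(1)
for _ in range(200):
    b1, b2, a = (rng.integers(-6, 7, size=5) for _ in range(3))
    # assumed parametrization of the lower tropical segment between b1, b2:
    # pointwise minima of shifted representatives, endpoints included
    segment = [np.minimum(b1 + t, b2) for t in range(-15, 16)]
    for p in (1, np.inf):
        bound = max(bfloor(b1 - a, p), bfloor(b2 - a, p))
        # sub-level sets of bfloor(. - a, p) are lower tropically convex
        assert all(bfloor(b - a, p) <= bound for b in segment)
```

In this model $\llfloor\cdot\rrfloor_\infty$ recovers the tropical norm, matching Proposition~\ref{P:BnormProperty}(6).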
In sum, $\llfloor \beta-\alpha\rrfloor_p\leq \max(\llfloor \beta_1-\alpha\rrfloor_p,\llfloor \beta_2-\alpha\rrfloor_p)$. Then (1) follows from Definition~\ref{D:TropConv}. Using an analogous argument with respect to the case of upper tropical convexity, we can derive (2). For (3), we note that ${\mathcal B}(\alpha,r)={\mathcal L}_{\leq r}(\Vert \cdot -\alpha\Vert)={\mathcal L}_{\leq r}(\llfloor \cdot -\alpha\rrfloor_\infty)={\mathcal L}_{\leq r}(\llceil \cdot -\alpha\rrceil_\infty)$ (Proposition~\ref{P:BnormProperty}(6)). Then (3) follows from (1) and (2). \end{proof} \begin{example} \begin{figure} \centering \includegraphics[scale=1]{Figure_contour.pdf} \caption{Contour lines of the tropical norm $\Vert\cdot\Vert$ and the $B$-pseudonorms $\llfloor\cdot \rrfloor_1$ and $\llfloor\cdot \rrfloor_2$ on $\mathbb{TP}(\{{\mathbf e}_1,{\mathbf e}_2,{\mathbf e}_3\})$ represented by the $x_1x_2$-plane (as in Example~\ref{E:tconv} and Example~\ref{E:bpseudonorm}).} \label{F:contour} \end{figure} Figure~\ref{F:contour} shows some contour lines of $\Vert\cdot\Vert$, $\llfloor\cdot \rrfloor_1$ and $\llfloor\cdot \rrfloor_2$ under the same assumptions as in Example~\ref{E:bpseudonorm}. We observe that the contour lines of $\Vert\cdot\Vert$ are hexagons which enclose the balls centered at the origin, the contour lines of $\llfloor\cdot \rrfloor_1$ are triangles which enclose the sub-level sets ${\mathcal L}_{\leq r}(\llfloor \cdot\rrfloor_1)$ for different $r$'s, and the contour lines of $\llfloor\cdot \rrfloor_2$ are arrowhead-shaped curves which enclose the sub-level sets ${\mathcal L}_{\leq r}(\llfloor \cdot\rrfloor_2)$ for different $r$'s. \end{example} \begin{definition} \label{D:TropConvFunc} Let $T$ be a lower tropically convex subset (respectively upper tropically convex subset) of $\mathbb{TP}(X)$.
Then we say a function $f$ on $T$ is \emph{lower tropically convex} (respectively \emph{upper tropically convex}) on $T$ if for each $\alpha,\beta\in T$ and each $t\in[0,1]$, we have $f(P_{(\alpha,\beta)}(td))\leq tf(\alpha)+(1-t)f(\beta)$ (respectively $f(P^{(\alpha,\beta)}(td))\leq tf(\alpha)+(1-t)f(\beta)$) where $d=\rho(\alpha,\beta)$. \end{definition} \begin{lemma} If $T$ is a lower tropically convex subset (respectively upper tropically convex subset) of $\mathbb{TP}(X)$ and $f$ is a lower tropically convex (respectively upper tropically convex) function on $T$, then ${\mathcal L}^T_{\leq r}(f)$ and ${\mathcal L}^T_{< r}(f)$ are lower tropically convex (respectively upper tropically convex). \end{lemma} \begin{proof} This can be easily verified from Definition~\ref{D:TropConv} and Definition~\ref{D:TropConvFunc}. \end{proof} \section{Tropical Retracts and the Contractibility of Tropical Convex Sets} \label{S:TropRetract} \begin{definition} Let $W\subseteq\mathbb{TP}(X)$ be lower (respectively upper) tropically convex. Let $T$ be a compact lower (respectively upper) tropical convex subset of $W$. Then we say a strong deformation retraction $h:[0,1]\times W \rightarrow W$ of $W$ onto $T$ is a \emph{lower (respectively upper) tropical retraction} if at each $t\in[0,1]$, the set $h(t,W)$ is lower (respectively upper) tropically convex. In this sense, we say $T$ is a \emph{lower (respectively upper) tropical retract} of $W$. \end{definition} For $\gamma\in\mathbb{TP}(X)$ and $S\subseteq\mathbb{TP}(X)$, we write $\rho(\gamma,S):=\inf_{\beta\in S}\rho(\gamma,\beta)$. \begin{theorem} \label{T:TropRetract} For each lower (respectively upper) tropically convex set $W\subseteq\mathbb{TP}(X)$ and each compact lower (respectively upper) tropical convex subset $T$ of $W$, there exists a lower (respectively upper) tropical retraction of $W$ onto $T$.
\end{theorem} \begin{proof} Here we give a proof for the case of lower tropical convexity, while a proof for the case of upper tropical convexity can be derived analogously. We will explicitly construct such a tropical retraction $h:[0,1]\times W \rightarrow W$. In particular, for each $\gamma\in W$, we want $h(0,\gamma)=\gamma$ and $h(1,\gamma)=\underline{\pi}_T(\gamma)$. Note that since $T$ is compact and lower tropically convex, we must have $\rho(\gamma,T)=\min_{\beta\in T}\rho(\gamma,\beta)=\rho(\gamma,\underline{\pi}_T(\gamma))$ by Theorem~\ref{T:main}. Let $\kappa=\sup_{\gamma\in W}\rho(\gamma,T)$. Note that $T$ must be bounded since it is compact, and $\kappa=\infty$ when $W$ is not bounded. Let $\phi:[0,\kappa]\to [0,1]$ be any continuous strictly decreasing function such that $\phi(0)=1$ and $\phi(\kappa)=0$ (we allow $\kappa$ to be $\infty$). Then we define $h$ in the following way: $$ h(t, \gamma) = \begin{cases} \gamma, & \text{for } 0 \leq t < \phi(\rho(\gamma,T))\\ P_{(\gamma,\underline{\pi}_T(\gamma))}(\rho(\gamma,T)-\phi^{-1}(t)), & \text{for } \phi(\rho(\gamma,T)) \leq t \leq 1. \end{cases}$$ In other words, at $t=\phi(\rho(\gamma,T))$, the point at $\gamma$ starts to retract towards $\underline{\pi}_T(\gamma)$ along $\underline{[\gamma,\underline{\pi}_T(\gamma)]}$ and at $t=1$, it hits $\underline{\pi}_T(\gamma)$. It is clear that $h(t,\gamma)=\gamma$ for all $t\in[0,1]$ and $\gamma\in T$. Now, to show $h$ is actually a lower tropical retraction of $W$ onto $T$, it remains to show that $h(t,W)$ is lower tropically convex for all $t\in[0,1]$ and $h$ is continuous. To show that $h(t,W)$ is lower tropically convex, we note that the distance $\rho(h(t, \gamma),T)$ from $h(t, \gamma)$ to $T$ is at most $\phi^{-1}(t)$. More precisely, $\rho(h(t, \gamma),T)=\phi^{-1}(t)$ when $t\geq \phi(\rho(\gamma,T))$ and $\rho(h(t, \gamma),T)=\rho(\gamma,T)$ when $t\leq \phi(\rho(\gamma,T))$.
Therefore, $h(t,W)$ is identical to the sublevel set $\{\gamma\in W\mid \rho(\gamma,T) \leq \phi^{-1}(t)\}$ of $\rho(\cdot,T)$. Then for each $\gamma_1,\gamma_2\in h(t,W)$, we have $\rho(\gamma_1,T)\leq \phi^{-1}(t)$ and $\rho(\gamma_2,T)\leq \phi^{-1}(t)$. To prove that $h(t,W)$ is lower tropically convex, it remains to show that for each $\gamma\in \underline{[\gamma_1,\gamma_2]}$, $\rho(\gamma,T)\leq \phi^{-1}(t)$. Let $\alpha_1=\underline{\pi}_T(\gamma_1)$ and $\alpha_2=\underline{\pi}_T(\gamma_2)$. Let $\alpha=\underline{\pi}_{\underline{[\alpha_1,\alpha_2]}}(\gamma)$. Then by Proposition~\ref{P:DistIneq}(2), $$\rho(\gamma,T)\leq\rho(\gamma,\alpha)\leq\max(\rho(\gamma_1,\alpha_1),\rho(\gamma_2,\alpha_2))=\max(\rho(\gamma_1,T),\rho(\gamma_2,T))\leq \phi^{-1}(t).$$ Now let us show that $h$ is continuous, i.e., $h(t_n,\gamma_n)\to h(t,\gamma)$ as $t_n\to t$ and $\gamma_n\to \gamma$. By the triangle inequality, $$\rho(h(t_n,\gamma_n),h(t,\gamma))\leq\rho(h(t_n,\gamma_n),h(t_n,\gamma))+\rho(h(t_n,\gamma),h(t,\gamma)).$$ We first note that $\rho(h(t_n,\gamma),h(t,\gamma))\leq \min(\rho(\gamma,T),|\phi^{-1}(t)-\phi^{-1}(t_n)|)$. Since $\phi$ and $\phi^{-1}$ are continuous, $\rho(h(t_n,\gamma),h(t,\gamma))\to 0$ as $t_n\to t$. Write $\gamma'_n=h(t_n,\gamma_n)$ and $\gamma'=h(t_n,\gamma)$. We then claim that $\rho(\gamma'_n,\gamma')\leq 2\rho(\gamma_n,\gamma)$ which is sufficient to guarantee the continuity of $h$. Let $\alpha_n = \underline{\pi}_T(\gamma_n)=\underline{\pi}_T(\gamma'_n)$ and $\alpha = \underline{\pi}_T(\gamma)=\underline{\pi}_T(\gamma')$. Then $\rho(\gamma_n,\alpha_n)=\rho(\gamma_n,T)$ and $\rho(\gamma,\alpha)=\rho(\gamma,T)$. Note that $\rho(\alpha_n,\alpha)\leq \rho(\gamma_n,\gamma)$ by Proposition~\ref{P:TropProj}(5). Case (a): $\phi^{-1}(t) \geq \max(\rho(\gamma_n,T),\rho(\gamma,T))$. In this case, $\gamma'_n=\gamma_n$, $\gamma'=\gamma$, and thus $\rho(\gamma'_n,\gamma')=\rho(\gamma_n,\gamma)$.
Case (b): $ \min(\rho(\gamma_n,T),\rho(\gamma,T))\leq \phi^{-1}(t) \leq \max(\rho(\gamma_n,T),\rho(\gamma,T))$. Without loss of generality, we may assume $\rho(\gamma_n,T)\leq\rho(\gamma,T)$ which means $\rho(\gamma_n,\alpha_n)=\rho(\gamma_n,T)\leq\phi^{-1}(t) \leq\rho(\gamma,T)=\rho(\gamma,\alpha)$. Then $\gamma'_n=\gamma_n$ and $\rho(\gamma',\alpha)=\rho(\gamma',T)=\phi^{-1}(t)$. Let $\gamma'' =\underline{\pi}_{\underline{[\gamma,\alpha]}}(\gamma_n)$ and $\alpha'' =\underline{\pi}_{\underline{[\gamma,\alpha]}}(\alpha_n)$. Depending on the relative positions of $\gamma'$ and $\gamma''$ in $\underline{[\gamma,\alpha]}$, there are two subcases: Subcase (b1): $\gamma''\in\underline{[\alpha,\gamma']}$. Then $\rho(\gamma'_n,\gamma')=\rho(\gamma_n,\gamma')\leq \rho(\gamma_n,\gamma)$. Subcase (b2): $\gamma'\in\underline{[\alpha,\gamma'']}$. Note that $\rho(\alpha'',\alpha)\leq \rho(\alpha_n,\alpha)$, $\rho(\gamma'',\alpha'')\leq \rho(\gamma'_n,\alpha_n)$ (Proposition~\ref{P:TropProj}(5)) and $\rho(\gamma_n,\alpha_n)\leq\phi^{-1}(t) =\rho(\gamma',\alpha)\leq\rho(\gamma'',\alpha)$. Therefore, \begin{align*} \rho(\gamma'',\gamma') &=\rho(\gamma'',\alpha)-\rho(\gamma',\alpha)\leq(\rho(\gamma'',\alpha'')+\rho(\alpha'',\alpha))-\rho(\gamma',\alpha) \\ &\leq(\rho(\gamma'_n,\alpha_n)+\rho(\alpha_n,\alpha))-\rho(\gamma',\alpha)\leq \rho(\alpha_n,\alpha)\leq\rho(\gamma_n,\gamma). \end{align*} In addition, we have $\rho(\gamma_n,\gamma'')\leq\rho(\gamma_n,\gamma).$ It follows $$\rho(\gamma'_n,\gamma')=\rho(\gamma_n,\gamma')\leq\rho(\gamma_n,\gamma'')+\rho(\gamma'',\gamma')\leq 2\cdot\rho(\gamma_n,\gamma).$$ Case (c): $\phi^{-1}(t) \leq \min(\rho(\gamma_n,T),\rho(\gamma,T))$. Then $\rho(\gamma'_n,\alpha_n)=\rho(\gamma'_n,T) =\rho(\gamma',T)=\rho(\gamma',\alpha)=\phi^{-1}(t)$. Let $\gamma''=\underline{\pi}_{\underline{[\gamma,\alpha]}}(\gamma'_n)$ and $\gamma''_n=\underline{\pi}_{\underline{[\gamma_n,\alpha_n]}}(\gamma')$. 
Depending on the relative positions of $\gamma'$ and $\gamma''$ in $\underline{[\gamma,\alpha]}$ and the relative positions of $\gamma'_n$ and $\gamma''_n$ in $\underline{[\gamma_n,\alpha_n]}$, there are four subcases: Subcase (c1): $\gamma''\in \underline{[\gamma',\alpha]}$ and $\gamma''_n\in \underline{[\gamma'_n,\alpha_n]}$. Then by Proposition~\ref{P:DistIneq}(1), $\rho(\gamma'_n,\gamma')\leq \rho(\gamma_n,\gamma)$. Subcase (c2): $\gamma''\in \underline{[\gamma',\gamma]}$ and $\gamma''_n\in \underline{[\gamma'_n,\gamma_n]}$. Then again by Proposition~\ref{P:DistIneq}(1), $\rho(\gamma'_n,\gamma')\leq \rho(\alpha_n,\alpha)\leq \rho(\gamma_n,\gamma)$. Subcase (c3): $\gamma''\in \underline{[\gamma',\gamma]}$ and $\gamma''_n\in \underline{[\gamma'_n,\alpha_n]}$. Analyzing as in Subcase (b2) by introducing $\alpha'' =\underline{\pi}_{\underline{[\gamma,\alpha]}}(\alpha_n)$, we can derive $\rho(\gamma'',\gamma')\leq \rho(\alpha_n,\alpha)\leq\rho(\gamma_n,\gamma)$. Moreover, $\rho(\gamma'_n,\gamma'')\leq \max(\rho(\alpha_n,\alpha),\rho(\gamma_n,\gamma))=\rho(\gamma_n,\gamma)$ by Proposition~\ref{P:DistIneq}(2). Therefore, $\rho(\gamma'_n,\gamma')\leq\rho(\gamma'_n,\gamma'')+\rho(\gamma'',\gamma')\leq 2\cdot\rho(\gamma_n,\gamma)$. Subcase (c4): $\gamma''\in \underline{[\gamma',\alpha]}$ and $\gamma''_n\in \underline{[\gamma'_n,\gamma_n]}$. Analogous to the analysis in Subcase (c3), we get $\rho(\gamma'_n,\gamma')\leq\rho(\gamma'_n,\gamma''_n)+\rho(\gamma''_n,\gamma')\leq 2\cdot\rho(\gamma_n,\gamma)$. \end{proof} \begin{corollary} \label{C:TropContr} All lower or upper tropical convex sets are contractible. \end{corollary} \begin{proof} Apply Theorem~\ref{T:TropRetract} while letting $T$ be a singleton.
\end{proof} \begin{proposition} \label{P:DistIneq} For $\alpha_1,\alpha_2,\gamma_1,\gamma_2\in \mathbb{TP}(X)$, consider $\beta_1\in \underline{[\alpha_1,\gamma_1]}$ and $\beta_2\in \underline{[\alpha_2,\gamma_2]}$ (respectively consider $\beta_1\in \overline{[\alpha_1,\gamma_1]}$ and $\beta_2\in \overline{[\alpha_2,\gamma_2]}$). \begin{enumerate} \item Let $\alpha'_1=\underline{\pi}_{\underline{[\alpha_1,\gamma_1]}}(\beta_2)$ and $\alpha'_2=\underline{\pi}_{ \underline{[\alpha_2,\gamma_2]}}(\beta_1)$. If $\alpha'_1\in \underline{[\alpha_1,\beta_1]}$ and $\alpha'_2\in \underline{[\alpha_2,\beta_2]}$, then $\rho(\beta_1,\beta_2)\leq \rho(\gamma'_1,\gamma'_2)$ for all $\gamma'_1\in \underline{[\beta_1,\gamma_1]}$ and $\gamma'_2\in \underline{[\beta_2,\gamma_2]}$. (Respectively, let $\alpha'_1=\overline{\pi}_{\overline{[\alpha_1,\gamma_1]}}(\beta_2)$ and $\alpha'_2=\overline{\pi}_{ \overline{[\alpha_2,\gamma_2]}}(\beta_1)$. If $\alpha'_1\in \overline{[\alpha_1,\beta_1]}$ and $\alpha'_2\in \overline{[\alpha_2,\beta_2]}$, then $\rho(\beta_1,\beta_2)\leq \rho(\gamma'_1,\gamma'_2)$ for all $\gamma'_1\in \overline{[\beta_1,\gamma_1]}$ and $\gamma'_2\in \overline{[\beta_2,\gamma_2]}$.) \item If $\beta_1=\underline{\pi}_{\underline{[\alpha_1,\gamma_1]}}(\beta_2)$, then $\rho(\beta_1,\beta_2)\leq \max(\rho(\alpha_1,\alpha_2),\rho(\gamma_1,\gamma_2))$. If in addition $\alpha_1=\alpha_2$, then $\rho(\beta_1,\beta_2)\leq \rho(\gamma'_1,\gamma'_2)$ for all $\gamma'_1\in \underline{[\beta_1,\gamma_1]}$ and $\gamma'_2\in \underline{[\beta_2,\gamma_2]}$. (Respectively, if $\beta_1=\overline{\pi}_{\overline{[\alpha_1,\gamma_1]}}(\beta_2)$, then $\rho(\beta_1,\beta_2)\leq \max(\rho(\alpha_1,\alpha_2),\rho(\gamma_1,\gamma_2))$. If in addition $\alpha_1=\alpha_2$, then $\rho(\beta_1,\beta_2)\leq \rho(\gamma'_1,\gamma'_2)$ for all $\gamma'_1\in \overline{[\beta_1,\gamma_1]}$ and $\gamma'_2\in \overline{[\beta_2,\gamma_2]}$.) 
\item If $\alpha_1=\alpha_2=\alpha$ and $\rho(\alpha,\beta_1)=\rho(\alpha,\beta_2)$, then $\rho(\beta_1,\beta_2)\leq \rho(\gamma'_1,\gamma'_2)$ for all $\gamma'_1\in \underline{[\beta_1,\gamma_1]}$ and $\gamma'_2\in \underline{[\beta_2,\gamma_2]}$. (Respectively, if $\alpha_1=\alpha_2=\alpha$ and $\rho(\alpha,\beta_1)=\rho(\alpha,\beta_2)$, then $\rho(\beta_1,\beta_2)\leq \rho(\gamma'_1,\gamma'_2)$ for all $\gamma'_1\in \overline{[\beta_1,\gamma_1]}$ and $\gamma'_2\in \overline{[\beta_2,\gamma_2]}$.) \end{enumerate} \end{proposition} \begin{proof} Here we give proofs for the cases of lower tropical convexity. The proofs for the cases of upper tropical convexity can be derived analogously. For (1), based on the relative locations of the points, we have the following equalities by Corollary~\ref{C:CritTropProj}: \begin{align*} &\llfloor \gamma'_1-\beta_2\rrfloor_1 \\ &=\llfloor \gamma'_1-\alpha'_1\rrfloor_1+\llfloor \alpha'_1-\beta_2\rrfloor_1 \\ &=\llfloor \gamma'_1-\beta_1\rrfloor_1+\llfloor \beta_1-\alpha'_1\rrfloor_1+\llfloor \alpha'_1-\beta_2\rrfloor_1 \\ &=\llfloor \gamma'_1-\beta_1\rrfloor_1+\llfloor \beta_1-\beta_2\rrfloor_1, \end{align*} and analogously $ \llfloor \gamma'_2-\beta_1\rrfloor_1=\llfloor \gamma'_2-\beta_2\rrfloor_1+\llfloor \beta_2-\beta_1\rrfloor_1$. Therefore, \begin{align*} \rho(\beta_1,\beta_2)\mu(X) &= \llfloor \beta_1-\beta_2\rrfloor_1+\llfloor \beta_2-\beta_1\rrfloor_1 \\ &=(\llfloor \gamma'_1-\beta_2\rrfloor_1-\llfloor \gamma'_1-\beta_1\rrfloor_1)+(\llfloor \gamma'_2-\beta_1\rrfloor_1-\llfloor \gamma'_2-\beta_2\rrfloor_1) \\ &=(\llfloor \gamma'_1-\beta_2\rrfloor_1-\llfloor \gamma'_2-\beta_2\rrfloor_1)+(\llfloor \gamma'_2-\beta_1\rrfloor_1-\llfloor \gamma'_1-\beta_1\rrfloor_1) \\ &\leq \llfloor \gamma'_1-\gamma'_2\rrfloor_1 +\llfloor \gamma'_2-\gamma'_1\rrfloor_1 \\ &=\rho(\gamma'_1,\gamma'_2)\mu(X). \end{align*} For (2), we want to apply (1). Note that $\alpha'_1=\beta_1=\underline{\pi}_{\underline{[\alpha_1,\gamma_1]}}(\beta_2)$.
Thus by (1), if $\alpha'_2=\underline{\pi}_{\underline{[\alpha_2,\gamma_2]}}(\beta_1)$ lies in $\underline{[\alpha_2,\beta_2]}$, then $\rho(\beta_1,\beta_2)\leq \rho(\gamma_1,\gamma_2)$, and if $\alpha'_2$ lies in $\underline{[\gamma_2,\beta_2]}$, then $\rho(\beta_1,\beta_2)\leq \rho(\alpha_1,\alpha_2)$. This means $\rho(\beta_1,\beta_2)\leq \max(\rho(\alpha_1,\alpha_2),\rho(\gamma_1,\gamma_2))$. Now if in addition $\alpha=\alpha_1=\alpha_2$, then we claim that $\alpha'_2=\underline{\pi}_{\underline{[\alpha_2,\gamma_2]}}(\beta_1)$ lies in $\underline{[\alpha_2,\beta_2]}$ which means that $\rho(\beta_1,\beta_2)\leq \rho(\gamma'_1,\gamma'_2)$ for all $\gamma'_1\in \underline{[\beta_1,\gamma_1]}$ and $\gamma'_2\in \underline{[\beta_2,\gamma_2]}$ by (1). Actually using Proposition~\ref{P:TropProj}(5) twice, we can derive $\rho(\alpha,\alpha'_2)\leq \rho(\alpha,\beta_1) \leq \rho(\alpha,\beta_2)$. For (3), we can derive $\rho(\alpha,\alpha'_1)\leq \rho(\alpha,\beta_2)=\rho(\alpha,\beta_1)$ and $\rho(\alpha,\alpha'_2)\leq \rho(\alpha,\beta_1)=\rho(\alpha,\beta_2)$ by using Proposition~\ref{P:TropProj}(5). Again, we can apply (1). \end{proof} \section{Compact Tropical Convex Sets} \label{S:Compact} \subsection{A Construction of Compact Tropical Convex Sets} \begin{theorem}\label{T:construction} Let $T,T'\subseteq\mathbb{TP}(X)$ be lower (respectively upper) tropical convex sets. Then $\underline{\mathrm{tconv}}(T\bigcup T')=\bigcup_{\alpha\in T,\alpha'\in T'}\underline{\mathrm{tconv}}(\alpha,\alpha')$ (respectively $\overline{\mathrm{tconv}}(T\bigcup T')=\bigcup_{\alpha\in T,\alpha'\in T'}\overline{\mathrm{tconv}}(\alpha,\alpha')$).
If $T$ and $T'$ are compact in addition, then $\underline{\mathrm{tconv}}(T\bigcup T')$ (respectively $\overline{\mathrm{tconv}}(T\bigcup T')$) is compact and each $\beta\in\underline{\mathrm{tconv}}(T\bigcup T')$ (respectively $\beta\in\overline{\mathrm{tconv}}(T\bigcup T')$) lies on the tropical segment $\underline{[\underline{\pi}_T(\beta),\underline{\pi}_{T'}(\beta)]}$ (respectively $\overline{[\overline{\pi}_T(\beta),\overline{\pi}_{T'}(\beta)]}$). \end{theorem} \begin{proof} We give a proof for the lower tropical convexity case and the proof for the upper tropical convexity case can be derived analogously. Denote $\bigcup_{\alpha\in T,\alpha'\in T'}\underline{\mathrm{tconv}}(\alpha,\alpha')$ by $\tilde{T}$. Then clearly $\tilde{T}\subseteq\underline{\mathrm{tconv}}(T\bigcup T')$. We claim that $\underline{\mathrm{tconv}}(T\bigcup T')\subseteq\tilde{T}$. Note that by Theorem~\ref{T:TconvTlinear}, $\underline{\mathrm{tconv}}(T\bigcup T')=\widehat{\oplus}(T\bigcup T')$ which means that for each $[g]\in\underline{\mathrm{tconv}}(T\bigcup T')$, we can write $g=(c_1\odot f_1)\oplus\cdots\oplus (c_m\odot f_m)\oplus (c'_1\odot f'_1)\oplus\cdots\oplus (c'_n\odot f'_n)$ for some $m,n\in {\mathbb Z}^+$, $c_i,c'_j\in{\mathbb R}$, $[f_i]\in T$ and $[f'_j]\in T'$. Let $f=(c_1\odot f_1)\oplus\cdots\oplus (c_m\odot f_m)$ and $f'=(c'_1\odot f'_1)\oplus\cdots\oplus (c'_n\odot f'_n)$. Then $[f]\in T$, $[f']\in T'$ and $g=f\oplus f'$. Thus $[g]\in\underline{[[f],[f']]}$ which implies $\underline{\mathrm{tconv}}(T\bigcup T')\subseteq\tilde{T}$ as claimed. Recall that a metric space is compact if and only if it is complete and totally bounded. Now let us show that if in addition $T$ and $T'$ are complete and totally bounded, then $\tilde{T}$ is also complete and totally bounded. First, we claim that for each $\beta\in\tilde{T}$, we must have $\beta\in\underline{[\alpha,\alpha']}$ where $\alpha=\underline{\pi}_T(\beta)$ and $\alpha'=\underline{\pi}_{T'}(\beta)$. 
We may assume that $\beta\in\underline{[\alpha_0,\alpha'_0]}$ for some $\alpha_0\in T$ and $\alpha'_0\in T'$. Then by Corollary~\ref{C:CritTropProj}(a5) and Lemma~\ref{L:XminXmax}, we have $X_{\min}(\alpha_0-\beta)\subseteq X_{\min}(\alpha-\beta)$ and $X_{\min}(\alpha'_0-\beta)\subseteq X_{\min}(\alpha'-\beta)$. Therefore, $X_{\min}(\alpha-\beta)\bigcup X_{\min}(\alpha'-\beta)\supseteq X_{\min}(\alpha_0-\beta)\bigcup X_{\min}(\alpha'_0-\beta)=X$ which means that $\beta\in\underline{[\alpha,\alpha']}$ by Proposition~\ref{P:TropSeg}(5). Now we want to show that $\tilde{T}$ is complete. Let $\beta_1,\beta_2,\cdots$ be a Cauchy sequence in $\tilde{T}$, i.e., $\rho(\beta_m,\beta_n)\to 0$ as $m,n\to\infty$. We claim that there exists $\beta\in\tilde{T}$ such that $\rho(\beta_n,\beta)\to 0$ as $n\to\infty$, which implies the completeness of $\tilde{T}$. Since $T$ and $T'$ are compact, we may let $\alpha_i=\underline{\pi}_T(\beta_i)$ and $\alpha'_i=\underline{\pi}_{T'}(\beta_i)$. Note that $\beta_i\in\underline{[\alpha_i,\alpha'_i]}$. Moreover, $\alpha_1,\alpha_2,\cdots$ is a Cauchy sequence in $T$ and $\alpha'_1,\alpha'_2,\cdots$ is a Cauchy sequence in $T'$ since $\rho(\alpha_m,\alpha_n)\leq \rho(\beta_m,\beta_n)$ and $\rho(\alpha'_m,\alpha'_n)\leq \rho(\beta_m,\beta_n)$ by Proposition~\ref{P:TropProj}(5). Now let $\alpha$ be the limit of $\alpha_1,\alpha_2,\cdots$ and $\alpha'$ be the limit of $\alpha'_1,\alpha'_2,\cdots$. Consider the tropical segment $\underline{[\alpha,\alpha']}$ and let $\gamma_i = \underline{\pi}_{\underline{[\alpha,\alpha']}}(\beta_i)$. Again $\gamma_1,\gamma_2,\cdots$ is a Cauchy sequence in $\underline{[\alpha,\alpha']}$ by Proposition~\ref{P:TropProj}(5). We let $\beta$ be the limit of $\gamma_1,\gamma_2,\cdots$ and claim $\beta$ is also the limit of $\beta_1,\beta_2,\cdots$. Note that $\rho(\beta_n,\beta) \leq \rho(\beta_n,\gamma_n)+\rho(\gamma_n,\beta)$.
By Proposition~\ref{P:DistIneq}(2), we have $\rho(\beta_n,\gamma_n)\leq \max(\rho(\alpha_n,\alpha),\rho(\alpha'_n,\alpha'))$. Now $\rho(\alpha_n,\alpha)\to 0$, $\rho(\alpha'_n,\alpha')\to 0$ and $\rho(\gamma_n,\beta)\to 0$ as $n\to\infty$, which implies $\rho(\beta_n,\beta)\to 0$ as $n\to\infty$ as claimed. Next, we want to show that $\tilde{T}$ is totally bounded, i.e., for every real $\epsilon>0$, there exists a finite cover of $\tilde{T}$ by open balls of radius $\epsilon$. We start with a finite cover of $T$ by open balls ${\mathcal B}^0(\alpha_i,\epsilon/2)$ of radius $\epsilon/2$ centered at $\alpha_i\in T$ for $i=1,\cdots,n$, and a finite cover of $T'$ by open balls ${\mathcal B}^0(\alpha'_j,\epsilon/2)$ of radius $\epsilon/2$ centered at $\alpha'_j\in T'$ for $j=1,\cdots,m$. Then for each $\underline{[\alpha_i,\alpha'_j]}$, we have a finite cover by open balls ${\mathcal B}^0(\beta^{(i,j)}_{k^{(i,j)}},\epsilon/2)$ of radius $\epsilon/2$ centered at $\beta^{(i,j)}_{k^{(i,j)}}\in \underline{[\alpha_i,\alpha'_j]}$ for $k^{(i,j)}=1,\ldots,m^{(i,j)}$. We claim that there is a finite cover of $\tilde{T}$ by open balls ${\mathcal B}^0(\beta^{(i,j)}_{k^{(i,j)}},\epsilon)$ of radius $\epsilon$ centered at $\beta^{(i,j)}_{k^{(i,j)}}\in\tilde{T}$ for $i=1,\ldots,n$, $j=1,\ldots,m$ and $k^{(i,j)}=1,\ldots,m^{(i,j)}$. For any $\beta\in\tilde{T}$, there exist $\alpha\in T$ and $\alpha'\in T'$ such that $\beta\in\underline{[\alpha,\alpha']}$. Suppose $\alpha\in {\mathcal B}^0(\alpha_i,\epsilon/2)$ for some $1\leq i\leq n$ and $\alpha'\in {\mathcal B}^0(\alpha'_j,\epsilon/2)$ for some $1\leq j\leq m$. Let $\gamma=\underline{\pi}_{\underline{[\alpha_i,\alpha'_j]}}(\beta)$ and suppose $\gamma\in {\mathcal B}^0(\beta^{(i,j)}_{k^{(i,j)}},\epsilon/2)$ for some $\beta^{(i,j)}_{k^{(i,j)}}$. 
Then by Proposition~\ref{P:DistIneq}(2), we have $$\rho(\beta,\beta^{(i,j)}_{k^{(i,j)}})\leq \rho(\beta,\gamma)+\rho(\gamma,\beta^{(i,j)}_{k^{(i,j)}})\leq \max(\rho(\alpha,\alpha_i),\rho(\alpha',\alpha'_j))+\rho(\gamma,\beta^{(i,j)}_{k^{(i,j)}})<\epsilon/2+\epsilon/2=\epsilon.$$ Therefore $\beta$ lies in ${\mathcal B}^0(\beta^{(i,j)}_{k^{(i,j)}},\epsilon)$ which means $\tilde{T}$ is covered by this finite collection of open balls as claimed. \end{proof} \begin{corollary} \label{C:Polytope} Let $T_1,\cdots,T_n$ be compact subsets of $\mathbb{TP}(X)$ which are also lower (respectively upper) tropically convex. Then $\underline{\mathrm{tconv}}(T_1\bigcup \cdots\bigcup T_n)$ (respectively $\overline{\mathrm{tconv}}(T_1\bigcup \cdots\bigcup T_n)$) is compact. In particular, all lower and upper tropical polytopes are compact. \end{corollary} By the corollary, since tropical polytopes are compact, we can always apply tropical projections from $\mathbb{TP}(X)$ to any tropical polytope. \subsection{Closed Tropical Convex Hulls and The Tropical Version of Mazur's Theorem} Recall that the closed (conventional) convex hull of a compact subset $S$ of a Banach space is also compact (Mazur's theorem). Note that $\mathbb{TP}(X)$ is a Banach space and here we prove an analogue of Mazur's theorem for tropical convexity. \begin{proposition} \label{P:closure} The topological closure $cl(T)$ of any lower (respectively upper) tropical convex set $T$ is lower (respectively upper) tropically convex. \end{proposition} \begin{proof} We show here the case for lower tropical convexity and the proof for the case of upper tropical convexity can be derived analogously. Let $\alpha$ and $\beta$ be elements in $cl(T)$. Then there is a sequence $\alpha_1,\alpha_2,\cdots$ in $T$ converging to $\alpha$ and a sequence $\beta_1,\beta_2,\cdots$ in $T$ converging to $\beta$. Then for each $\gamma\in\underline{[\alpha,\beta]}$, let $\gamma_n=\underline{\pi}_{\underline{[\alpha_n,\beta_n]}}(\gamma)$. 
By Proposition~\ref{P:DistIneq}(2), we see that $\rho(\gamma,\gamma_n)\leq\max(\rho(\alpha,\alpha_n),\rho(\beta,\beta_n))$. Therefore, $\rho(\gamma,\gamma_n)\to 0$ as $n\to\infty$ since $\rho(\alpha,\alpha_n)\to 0$ and $\rho(\beta,\beta_n)\to 0$ as $n\to\infty$. Now since $\gamma_1,\gamma_2,\cdots$ is a sequence in $T$, we conclude that $\gamma\in cl(T)$ which means $cl(T)$ is also lower tropically convex. \end{proof} \begin{definition} \label{D:ClTropConv} Let $S$ be a subset of $\mathbb{TP}(X)$. The \emph{closed lower tropical convex hull $\underline{\mathrm{TCONV}}(S)$} (respectively \emph{closed upper tropical convex hull $\overline{\mathrm{TCONV}}(S)$}) generated by $S$ is the intersection of all closed lower (respectively upper) tropically convex subsets of $\mathbb{TP}(X)$ containing $S$. \end{definition} \begin{lemma} $\underline{\mathrm{TCONV}}(S)=cl(\underline{\mathrm{tconv}}(S))$, $\overline{\mathrm{TCONV}}(S)=cl(\overline{\mathrm{tconv}}(S))$ and $\underline{\mathrm{TCONV}}(\overline{\mathrm{TCONV}}(S))=\overline{\mathrm{TCONV}}(\underline{\mathrm{TCONV}}(S))=\underline{\mathrm{TCONV}}(\overline{\mathrm{tconv}}(S))=\overline{\mathrm{TCONV}}(\underline{\mathrm{tconv}}(S))=cl(\underline{\overline{\mathrm{tconv}}}(S))$. \end{lemma} \begin{proof} The statements follow from Definition~\ref{D:ClTropConv} and Proposition~\ref{P:closure} directly. \end{proof} \begin{remark} We will write $\underline{\overline{\mathrm{TCONV}}}(S):=cl(\underline{\overline{\mathrm{tconv}}}(S))$. \end{remark} Before discussing the tropical version of Mazur's theorem, we will need the following proposition which is a generalization of Proposition~\ref{P:DistIneq}(2). \begin{proposition} \label{P:GeneralIneq} Consider $\alpha_1,\cdots,\alpha_n,\beta_1,\cdots,\beta_n\in\mathbb{TP}(X)$ and let $d_i=\rho(\alpha_i,\beta_i)$ for $i=1,\cdots,n$. Let $T=\underline{\mathrm{tconv}}(\{\beta_1,\cdots,\beta_n\})$ (respectively $T=\overline{\mathrm{tconv}}(\{\beta_1,\cdots,\beta_n\})$).
Then for each $\alpha\in \underline{\mathrm{tconv}}(\{\alpha_1,\cdots, \alpha_n\})$, $\rho(\alpha,T)=\rho(\alpha,\underline{\pi}_T(\alpha))\leq \max(d_1,\cdots,d_n)$ (respectively for each $\alpha\in \overline{\mathrm{tconv}}(\{\alpha_1,\cdots, \alpha_n\})$, $\rho(\alpha,T)=\rho(\alpha,\overline{\pi}_T(\alpha))\leq \max(d_1,\cdots,d_n)$). \end{proposition} \begin{proof} First we note that $\rho(\alpha,T)=\rho(\alpha,\underline{\pi}_T(\alpha))$ is always true by Theorem~\ref{T:main}. We proceed by induction on $n$; the base case $n=1$ is immediate since $\rho(\alpha_1,\beta_1)=d_1$. Suppose the statement is true for $n$. Now let us consider $\alpha_1,\cdots,\alpha_{n+1},\beta_1,\cdots,\beta_{n+1}\in\mathbb{TP}(X)$ and let $d_i=\rho(\alpha_i,\beta_i)$ for $i=1,\cdots,n+1$. Let $S_n=\underline{\mathrm{tconv}}(\{\alpha_1,\cdots,\alpha_n\})$, $S_{n+1}=\underline{\mathrm{tconv}}(\{\alpha_1,\cdots,\alpha_{n+1}\})$, $T_n=\underline{\mathrm{tconv}}(\{\beta_1,\cdots,\beta_n\})$ and $T_{n+1}=\underline{\mathrm{tconv}}(\{\beta_1,\cdots,\beta_{n+1}\})$. For each $\alpha\in S_{n+1}$, let $\alpha_0=\underline{\pi}_{S_n}(\alpha)$ and $\beta_0=\underline{\pi}_{T_n}(\alpha_0)$. Then by the induction hypothesis, since $\alpha_0$ is an element in $S_n$, we have $\rho(\alpha_0,T_n)=\rho(\alpha_0,\beta_0)\leq \max(d_1,\cdots,d_n)$. Moreover, we have $\alpha\in\underline{[\alpha_0,\alpha_{n+1}]}$ by Theorem~\ref{T:construction}. Let $\gamma = \underline{\pi}_{\underline{[\beta_0,\beta_{n+1}]}}(\alpha)$. Then $\rho(\alpha,T_{n+1})\leq \rho(\alpha,\gamma)\leq \max(\rho(\alpha_0,\beta_0),\rho(\alpha_{n+1},\beta_{n+1}))\leq \max(d_1,\cdots,d_{n+1})$ where the second inequality follows from Proposition~\ref{P:DistIneq}(2). The case for upper tropical convexity can be proved analogously.
\end{proof} \begin{theorem}[The Tropical Version of Mazur's Theorem] \label{T:TropMazur} If $S$ is a compact subset of $\mathbb{TP}(X)$, then the closed tropical convex hulls $\underline{\mathrm{TCONV}}(S)$, $\overline{\mathrm{TCONV}}(S)$ and $\underline{\overline{\mathrm{TCONV}}}(S)$ are all compact. \end{theorem} \begin{proof} Since $\underline{\mathrm{TCONV}}(S)$ is closed in the Banach space $\mathbb{TP}(X)$ (Proposition~\ref{P:TPBanach}), we know that $\underline{\mathrm{TCONV}}(S)$ is complete. So to show that $\underline{\mathrm{TCONV}}(S)$ is compact, it suffices to show that it is totally bounded. First we show that $\underline{\mathrm{tconv}}(S)$ is totally bounded. Since $S$ is compact and thus totally bounded, for each $\epsilon>0$, $S$ can be finitely covered by open balls ${\mathcal B}^0(\beta_i,\epsilon/2)$ of radius $\epsilon/2$ centered at $\beta_i\in S$ for $i=1,\cdots,n$. Now consider the lower tropical polytope $\underline{\mathrm{tconv}}(\{\beta_1,\cdots,\beta_n\})$ which is also compact by Corollary~\ref{C:Polytope}. Then, enlarging the list if necessary, we may assume that $\underline{\mathrm{tconv}}(\{\beta_1,\cdots,\beta_n\})$ can be finitely covered by open balls ${\mathcal B}^0(\beta_i,\epsilon/2)$ of radius $\epsilon/2$ centered at $\beta_i\in\underline{\mathrm{tconv}}(\{\beta_1,\cdots,\beta_n\})$ for $i=1,\cdots,N$ where $N\geq n$. Now let $\alpha$ be any element in $\underline{\mathrm{tconv}}(S)$. We may assume that $\alpha$ is contained in a lower tropical polytope $\underline{\mathrm{tconv}}(\{\alpha_1,\cdots,\alpha_m\})$ with $\alpha_i\in S$ for $i=1,\cdots,m$ by Corollary~\ref{C:LocalFinite}. Then there is a function $\phi:\{1,\cdots,m\}\to\{1,\cdots, n\}$ such that $\rho(\alpha_i,\beta_{\phi(i)})<\epsilon/2$ for $i=1,\cdots,m$.
Therefore, if $\gamma$ is the lower tropical projection of $\alpha$ to $\underline{\mathrm{tconv}}(\{\beta_1,\cdots,\beta_N\})$, then $\rho(\alpha,\gamma)=\rho(\alpha,\underline{\mathrm{tconv}}(\{\beta_1,\cdots,\beta_N\}))\leq \rho(\alpha,\underline{\mathrm{tconv}}(\{\beta_{\phi(1)},\cdots,\beta_{\phi(m)}\}))\leq \max\{\rho(\alpha_i,\beta_{\phi(i)})\mid i=1,\cdots,m\}<\epsilon/2$ by Proposition~\ref{P:GeneralIneq}. Again, since $\gamma$ is an element of $\underline{\mathrm{tconv}}(\{\beta_1,\cdots,\beta_N\})$, there must be some $1\leq i(\alpha)\leq N$ such that $\rho(\gamma,\beta_{i(\alpha)})< \epsilon/2$. Therefore, $\rho(\alpha,\beta_{i(\alpha)})\leq \rho(\alpha,\gamma)+\rho(\gamma,\beta_{i(\alpha)})<\epsilon$. As a conclusion, $\underline{\mathrm{tconv}}(S)$ can be finitely covered by open balls ${\mathcal B}^0(\beta_i,\epsilon)$ of radius $\epsilon$ centered at $\beta_i$ for $i=1,\cdots,N$ which implies that $\underline{\mathrm{tconv}}(S)$ is totally bounded. As $\underline{\mathrm{TCONV}}(S)=cl(\underline{\mathrm{tconv}}(S))$, each element $\alpha\in \underline{\mathrm{TCONV}}(S)$ is approachable by a sequence $\alpha_1,\alpha_2,\cdots$ in $\underline{\mathrm{tconv}}(S)$. Note that by the above argument, $\rho(\alpha_i,\{\beta_1,\cdots,\beta_N\})< \epsilon$. Therefore, $\rho(\alpha,\{\beta_1,\cdots,\beta_N\})=\lim\limits_{i\to\infty} \rho(\alpha_i,\{\beta_1,\cdots,\beta_N\})\leq \epsilon$. This means that $\underline{\mathrm{TCONV}}(S)$ can be finitely covered by open balls ${\mathcal B}^0(\beta_i,2\epsilon)$ of radius $2\epsilon$ centered at $\beta_i$ for $i=1,\cdots,N$. Thus $\underline{\mathrm{TCONV}}(S)$ is totally bounded, and therefore compact. The compactness of $\overline{\mathrm{TCONV}}(S)$ and $\underline{\overline{\mathrm{TCONV}}}(S)$ can be derived analogously.
\end{proof} \begin{remark} If $\mathbb{TP}^0(X)\neq \mathbb{TP}(X)$, the above theorem does not hold in general when restricted to $\mathbb{TP}^0(X)$, i.e., the compactness of $S\subseteq\mathbb{TP}^0(X)$ does not guarantee the compactness of the tropical convex sets $\underline{\mathrm{TCONV}}(S)\bigcap \mathbb{TP}^0(X)$, $\overline{\mathrm{TCONV}}(S)\bigcap \mathbb{TP}^0(X)$ and $\underline{\overline{\mathrm{TCONV}}}(S)\bigcap \mathbb{TP}^0(X)$. The reason is that $\mathbb{TP}^0(X)$, unlike $\mathbb{TP}(X)$, is not necessarily a Banach space. \end{remark} \section{A Criterion for Tropical Weak Independence} \label{S:TropIndep} The following theorem provides a set-theoretical criterion for tropical weak independence, which is a generalization of Proposition~\ref{P:TropSeg} and Lemma~\ref{L:TropProjSegment}. \begin{theorem} \label{T:CriterionTropIndep} Let $T\subseteq\mathbb{TP}(X)$ be a lower (respectively upper) tropical convex hull generated by $S\subseteq \mathbb{TP}(X)$. Then for any $\gamma\in\mathbb{TP}(X)$, $\gamma$ lies in $T$ if and only if there is a finite subset $\{\beta_1,\cdots, \beta_n\}$ of $S$ such that $\bigcup_{i=1}^nX_{\min}(\beta_i-\gamma)=X$ (respectively $\bigcup_{i=1}^nX_{\max}(\beta_i-\gamma)=X$). Furthermore, for each element $\beta$ in $T$, $X_{\min}(\beta-\gamma)\subseteq X_{\min}(\underline{\pi}_T(\gamma)-\gamma)=\bigcup_{i=1}^nX_{\min}(\beta_i-\gamma)$ (respectively $X_{\max}(\beta-\gamma)\subseteq X_{\max}(\overline{\pi}_T(\gamma)-\gamma)=\bigcup_{i=1}^nX_{\max}(\beta_i-\gamma)$). \end{theorem} \begin{proof} We prove here the lower tropical convexity case and the proof of the upper tropical convexity case follows analogously. By Corollary~\ref{C:LocalFinite}, any element $\gamma$ in $T$ must be contained in a lower tropical polytope with generators in a finite subset of $S$. So we only need to show the case when $S$ is itself finite. We proceed by induction on the number of generators, the base case of a single generator being immediate.
Suppose the statements are true for all lower tropical convex hulls generated by $n$ elements. Now consider a lower tropical polytope $T=\underline{\mathrm{tconv}}(\{\beta_1,\cdots, \beta_n, \beta_{n+1}\})$ generated by $n+1$ elements and let $T'=\underline{\mathrm{tconv}}(\{\beta_1,\cdots, \beta_n\})$. For $\gamma\in \mathbb{TP}(X)$, let $\alpha=\underline{\pi}_T(\gamma)$. Then by Theorem~\ref{T:construction}, there exists $\alpha'\in T'$ such that $\alpha\in \underline{[\alpha',\beta_{n+1}]}$. This implies that $X_{\min}(\alpha'-\alpha)\bigcup X_{\min}(\beta_{n+1}-\alpha)=X$. Then by the induction hypothesis, $X_{\min}(\alpha'-\alpha)\subseteq \bigcup_{i=1}^nX_{\min}(\beta_i-\alpha)$. Thus $\bigcup_{i=1}^{n+1}X_{\min}(\beta_i-\alpha)=X$. In addition, $X_{\min}(\beta_i-\gamma)=X_{\min}(\alpha-\gamma)\bigcap X_{\min}(\beta_i-\alpha)$ for $i=1,\cdots,n+1$. Therefore \begin{align*} X_{\min}(\alpha-\gamma)&=X_{\min}(\alpha-\gamma)\bigcap(\bigcup_{i=1}^{n+1}X_{\min}(\beta_i-\alpha)) \\ &= \bigcup_{i=1}^{n+1}(X_{\min}(\alpha-\gamma)\bigcap X_{\min}(\beta_i-\alpha)) \\ &= \bigcup_{i=1}^{n+1}X_{\min}(\beta_i-\gamma). \end{align*} Since $\gamma\in T$ exactly when $\gamma=\alpha$, i.e., when $X_{\min}(\alpha-\gamma)=X$, this also implies that $\gamma\in T$ if and only if $\bigcup_{i=1}^{n+1}X_{\min}(\beta_i-\gamma)=X$. \end{proof} \begin{corollary} Let $S,S_1,\cdots,S_n$ be finite subsets of $\mathbb{TP}(X)$ such that $S=S_1\bigcup \cdots \bigcup S_n$. Let $T=\underline{\mathrm{tconv}}(S)$ (respectively $T=\overline{\mathrm{tconv}}(S)$) and for $i=1,\cdots,n$, let $T_i = \underline{\mathrm{tconv}}(S_i)$ (respectively $T_i = \overline{\mathrm{tconv}}(S_i)$). For $\gamma\in \mathbb{TP}(X)$, $\gamma\in T$ if and only if $\gamma\in\underline{\mathrm{tconv}}(\{\underline{\pi}_{T_1}(\gamma),\cdots,\underline{\pi}_{T_n}(\gamma)\})$ (respectively $\gamma\in\overline{\mathrm{tconv}}(\{\overline{\pi}_{T_1}(\gamma),\cdots,\overline{\pi}_{T_n}(\gamma)\})$).
\end{corollary} \begin{proof} Note that by Theorem~\ref{T:CriterionTropIndep}, $X_{\min}(\underline{\pi}_{T_i}(\gamma)-\gamma)=\bigcup_{\beta\in S_i}X_{\min}(\beta-\gamma)$ for $i=1,\cdots,n$. Then $\bigcup_{i=1}^nX_{\min}(\underline{\pi}_{T_i}(\gamma)-\gamma)=\bigcup_{\beta\in S_1\bigcup \cdots \bigcup S_n}X_{\min}(\beta-\gamma)=\bigcup_{\beta\in S}X_{\min}(\beta-\gamma)$. Again, by Theorem~\ref{T:CriterionTropIndep}, this means that $\gamma\in T$ if and only if $\gamma\in\underline{\mathrm{tconv}}(\{\underline{\pi}_{T_1}(\gamma),\cdots,\underline{\pi}_{T_n}(\gamma)\})$. The case for upper tropical convexity can be shown analogously. \end{proof} Let $T$ be a lower (respectively upper) tropical convex set. For $\alpha\in T$, if $\alpha\notin \underline{\mathrm{tconv}}(T\setminus\{\alpha\})$ (respectively $\alpha\notin \overline{\mathrm{tconv}}(T\setminus\{\alpha\})$), then we say $\alpha$ is an \emph{extremal} of $T$. It is clear from the definition that any generating set of $T$ must contain all the extremals of $T$ and that the set of extremals of $T$ is (lower or upper) tropically independent. \begin{theorem} \label{T:extremal} Every lower (respectively upper) tropical polytope $T$ contains finitely many extremals. The set of all extremals of $T$ generates $T$ and is minimal among all generating sets of $T$. \end{theorem} \begin{proof} We will prove the case for lower tropical convexity and the case for upper tropical convexity can be proved analogously. Since $T$ is a lower tropical polytope, we may choose a finite generating set $S$ of $T$, i.e., $\underline{\mathrm{tconv}}(S)=T$. Choose a subset $V$ of $S$ such that $\underline{\mathrm{tconv}}(V)=T$ and $V$ is lower tropically independent, i.e., $\underline{\mathrm{tconv}}(V\setminus\{\alpha\})\neq T$ for all $\alpha\in V$. Note that this is possible since $S$ is finite.
We claim that $V$ must be the set of all extremals of $T$, i.e., $V=\{\alpha\in T\mid \underline{\mathrm{tconv}}(T\setminus\{\alpha\})= T\setminus\{\alpha\}\}$. First we note that all extremals must be contained in $V$ by definition. Now let us show that all elements of $V$ are extremals of $T$. Let $V=\{\alpha_1,\cdots,\alpha_n\}$. Without loss of generality, we will show that $\alpha_1$ is an extremal of $T$. We know that $\underline{\mathrm{tconv}}(V)=T$ and $\underline{\mathrm{tconv}}(V\setminus\{\alpha_1\})\neq T$. To show that $\alpha_1$ is an extremal of $T$, we will need to show that $T\setminus\{\alpha_1\}$ is lower tropically convex or equivalently $\alpha_1\notin \underline{\mathrm{tconv}}(T\setminus\{\alpha_1\})$. Actually it suffices to show that for arbitrary $\beta_1$ and $\beta_2$ in $T\setminus\{\alpha_1\}$, $\alpha_1\notin \underline{[\beta_1,\beta_2]}$. Let $T'=\underline{\mathrm{tconv}}(\{\alpha_2,\cdots,\alpha_n\})$. By Theorem~\ref{T:construction}, there exist $\gamma_1$ and $\gamma_2$ in $T'$ such that $\beta_1\in\underline{[\alpha_1,\gamma_1]}$ and $\beta_2\in\underline{[\alpha_1,\gamma_2]}$. Then $X_{\min}(\beta_1-\alpha_1)=X_{\min}(\gamma_1-\alpha_1)$ and $X_{\min}(\beta_2-\alpha_1)=X_{\min}(\gamma_2-\alpha_1)$. Note that $\alpha_1\notin T'$ since $V$ is lower tropically independent. Hence $\bigcup_{i=2}^nX_{\min}(\alpha_i-\alpha_1)\neq X$ by Theorem~\ref{T:CriterionTropIndep}. Moreover, $X_{\min}(\gamma_1-\alpha_1)\subseteq \bigcup_{i=2}^nX_{\min}(\alpha_i-\alpha_1)$ and $X_{\min}(\gamma_2-\alpha_1)\subseteq \bigcup_{i=2}^nX_{\min}(\alpha_i-\alpha_1)$ by Theorem~\ref{T:CriterionTropIndep}. Then $X_{\min}(\beta_1-\alpha_1) \bigcup X_{\min}(\beta_2-\alpha_1) =X_{\min}(\gamma_1-\alpha_1) \bigcup X_{\min}(\gamma_2-\alpha_1) \subseteq \bigcup_{i=2}^nX_{\min}(\alpha_i-\alpha_1) \neq X$ which means that $\alpha_1\notin \underline{[\beta_1,\beta_2]}$.
\end{proof} \begin{remark} The statements in Theorem~\ref{T:extremal} are not generally true for arbitrary tropical convex sets. For example, if $X=\{1,\cdots,n\}$, then the whole space $\mathbb{TP}(X)={\mathbb R}^{n-1}$ does not contain any extremals. \end{remark} \section{A Fixed Point Theorem for Tropical Projections} \label{S:FixedPoint} \begin{theorem} \label{T:FixedPoint} Let $S$ and $T$ be compact subsets of $\mathbb{TP}(X)$ and suppose $S$ is lower tropically convex and $T$ is upper tropically convex. Let $S_0=\{\underline{\pi}_S(\gamma)\mid \gamma\in T\}$ and $T_0=\{\overline{\pi}_T(\gamma)\mid \gamma\in S\}$. Then $S_0$ and $T_0$ are isometric under $\overline{\pi}_T\mid _{S_0}:S_0\to T_0$ and $\underline{\pi}_S\mid _{T_0}:T_0\to S_0$ which are inverse maps. \end{theorem} \begin{proof} For each $\alpha\in S_0$, there exists $\gamma\in T$ such that $\alpha=\underline{\pi}_S(\gamma)$. Let $\beta=\overline{\pi}_T(\alpha)$ and $\alpha'=\underline{\pi}_S(\beta)$. We want to show that $\alpha=\alpha'$. Using Theorem~\ref{T:main} and Corollary~\ref{C:CritTropProj}, we have the following relations: \begin{align} &\llfloor \alpha-\alpha'\rrfloor_1 + \llfloor \alpha' - \beta\rrfloor_1 = \llfloor \alpha - \beta\rrfloor_1; \\ &\llfloor \alpha'-\alpha\rrfloor_1 + \llfloor \alpha - \gamma\rrfloor_1 = \llfloor \alpha' - \gamma\rrfloor_1; \\ &\llceil \gamma-\beta\rrceil_1 + \llceil \beta - \alpha\rrceil_1 = \llceil \gamma - \alpha\rrceil_1.
\end{align} Then we have \begin{align*} 0\leq &\rho(\alpha,\alpha')\mu(X)=\llfloor \alpha-\alpha'\rrfloor_1+\llfloor \alpha'-\alpha\rrfloor_1 \quad\text{(Proposition~\ref{P:BnormProperty}(5))}\\ &=(\llfloor \alpha - \beta\rrfloor_1-\llfloor \alpha' - \beta\rrfloor_1)+(\llfloor \alpha' - \gamma\rrfloor_1- \llfloor \alpha - \gamma\rrfloor_1)\quad \text{(by (1) and (2))}\\ &=(\llceil \beta - \alpha\rrceil_1-\llceil \beta-\alpha' \rrceil_1)+(\llceil \gamma-\alpha' \rrceil_1- \llceil \gamma-\alpha\rrceil_1) \quad\text{(Proposition~\ref{P:BnormProperty}(1))}\\ &= ((\llceil \gamma - \alpha\rrceil_1-\llceil \gamma - \beta\rrceil_1)-\llceil \beta-\alpha' \rrceil_1)+(\llceil \gamma-\alpha' \rrceil_1- \llceil \gamma-\alpha\rrceil_1) \quad \text{(by (3))}\\ &= \llceil \gamma-\alpha' \rrceil_1-(\llceil \gamma - \beta\rrceil_1+\llceil \beta-\alpha' \rrceil_1) \\ & = \llceil(\gamma - \beta) +(\beta-\alpha' ) \rrceil_1-(\llceil \gamma - \beta\rrceil_1+\llceil \beta-\alpha' \rrceil_1) \leq 0 \quad\text{(Proposition~\ref{P:BnormProperty}(9))}. \end{align*} Therefore $\rho(\alpha,\alpha')$ must be $0$ which means that $\alpha=\alpha'$ as claimed. Using an analogous argument, we can show that $\beta=\overline{\pi}_T(\underline{\pi}_S(\beta))$ for each $\beta\in T_0$. \end{proof} \section{An Application to the Divisor Theory on Metric Graphs} \label{S:AppMetGra} \subsection{Tropical Convexity for Divisors and ${\mathbb R}$-Divisors} Let $\Gamma$ be a compact metric graph with finite edge lengths. We also denote the set of points of $\Gamma$ by $\Gamma$ for simplicity. Let $\Div(\Gamma)$ be the free abelian group on $\Gamma$ and $\RDiv(\Gamma) = \Div(\Gamma)\otimes\mR$. Following convention, we call the elements of $\Div(\Gamma)$ \emph{divisors} (or \emph{${\mathbb Z}$-divisors} when we want to emphasize the integer coefficients) and elements of $\RDiv(\Gamma)$ \emph{$\mR$-divisors}.
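The basic divisor bookkeeping just introduced (degree, support and the effective decomposition, formalized in the list that follows) can be sketched computationally. The following Python snippet is an illustrative stand-in of ours, not part of the paper's formal development: it models an $\mR$-divisor with finite support as a point-to-coefficient map.

```python
from collections import defaultdict

def make_divisor(pairs):
    """An R-divisor with finite support as a point -> coefficient map
    (zero coefficients are dropped)."""
    D = defaultdict(float)
    for p, m in pairs:
        D[p] += m
    return {p: m for p, m in D.items() if m != 0}

def degree(D):
    # The degree is the sum of all coefficients.
    return sum(D.values())

def support(D):
    # The support is the set of points with nonzero coefficient.
    return set(D)

def effective_parts(D):
    """The unique decomposition D = D_plus - D_minus into effective
    divisors with disjoint supports."""
    D_plus = {p: m for p, m in D.items() if m > 0}
    D_minus = {p: -m for p, m in D.items() if m < 0}
    return D_plus, D_minus

D = make_divisor([("p", 2.0), ("p", 1.0), ("q", -0.5)])
print(degree(D))           # 2.5
print(effective_parts(D))  # ({'p': 3.0}, {'q': 0.5})
```

For instance, $D=3(p)-0.5(q)$ above has degree $2.5$, effective part $3(p)$ and noneffective part $0.5(q)$, matching the decomposition $D=D^+-D^-$.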
An $\mR$-divisor on $\Gamma$ can be written as $D=\sum_{p\in\Gamma}m_p\cdot(p)$ where $m_p\in{\mathbb R}$ is called the \emph{value} of $D$ at $p\in\Gamma$ which is zero for all but finitely many points $p\in \Gamma$. We also write the value of $D$ at $p\in\Gamma$ as $D(p)$. Moreover, for an ${\mathbb R}$-divisor $D=\sum_{p\in\Gamma}m_p\cdot(p)$, \begin{enumerate} \item the \emph{degree} of $D$ is $\sum_{p\in\Gamma}m_p$; \item the \emph{support} of $D$, denoted by $\supp(D)$, is the set of points $p\in\Gamma$ such that $m_p\neq 0$; \item $D$ is the \emph{zero divisor} if and only if $\supp(D)=\emptyset$; \item $D$ is a ${\mathbb Z}$-divisor if and only if $m_p$ is an integer for all $p\in\Gamma$; \item we say $D$ is \emph{effective} if $m_p\geq 0$ for all $p\in\Gamma$; \item $D$ can be uniquely written as $D^+-D^-$ where $D^+$ and $D^-$ are both effective divisors which have disjoint supports and are called the \emph{effective part} and \emph{noneffective part} of $D$ respectively. \end{enumerate} Note that there is a natural partial ordering on $\RDiv(\Gamma)$. We say $D'\leq D$ if $D-D'$ is effective. Let $\DivPlus(\Gamma)$ and $\RDivPlus(\Gamma)$ be the semigroups of effective $\mZ$-divisors and effective $\mR$-divisors respectively. If $d$ is a nonnegative integer, denote the set of divisors of degree $d$ by $\DivD(\Gamma)$ and the set of effective divisors of degree $d$ by $\DivPlusD(\Gamma)$. If $d$ is a nonnegative real, denote the set of ${\mathbb R}$-divisors of degree $d$ by $\RDivD(\Gamma)$ and the set of effective ${\mathbb R}$-divisors of degree $d$ by $\RDivPlusD(\Gamma)$. In this section, we will explore a tropical convexity theory on $\RDivPlusD(\Gamma)$. To make a connection to the whole theory developed in the previous sections, we will need to relate ${\mathbb R}$-divisors to elements in $\mathbb{TP}(\Gamma)$ (note that $\mathbb{TP}^0(\Gamma)=\mathbb{TP}(\Gamma)$ since $\Gamma$ is compact). 
This can be realized using a potential theory on metric graphs \cite{BF06,BR07,BS13} with some results briefly summarized in Appendix~\ref{S:potential}. Let $CPA(\Gamma)\subset C(\Gamma)$ be the vector space consisting of all continuous piecewise-affine functions on $\Gamma$. If $D=\sum_{p\in\Gamma}m_p\cdot(p)\in\RDiv$, we let $\delta_D:=\sum_{p\in\Gamma}m_p\cdot\delta_p$ with $\delta_p$ the Dirac measure at $p$. Consider $D_1,D_2\in\RDivPlusD(\Gamma)$. Then based on the potential theory on $\Gamma$, there exists a piecewise-linear function $f_{D_2-D_1}\in CPA(\Gamma)$ such that $\Delta f_{D_2-D_1} = \delta_{D_2}-\delta_{D_1}$ which is unique up to a constant translation. We say $f_{D_2-D_1}$ is an \emph{associated function} of $D_2-D_1$, and in this sense, we may associate the unique element $[f_{D_2-D_1}]$ in $\mathbb{TP}(\Gamma)$ to $D_2-D_1$. On the other hand, for each $f\in[f_{D_2-D_1}]$, we say $\divf(f)=\divf([f]):=D_2-D_1$ is the \emph{associated divisor} of $f$ or $[f]$. In particular, the value of $\divf(f)$ at $p\in\Gamma$ is the sum of slopes of $f$ along all incoming tangent directions at $p$. Note that $[f_{D_2-D_1}]+[f_{D_1-D_0}]=[f_{D_2-D_0}]$ for all $D_0,D_1,D_2\in \RDivPlusD(\Gamma)$. More precisely, if $D_1=(q)$ and $D_2=(p)$ for some $p,q\in\Gamma$, then $\underline{f_{D_2-D_1}}(x)=j_q(x,p)$ (see the definition of $j_q(x,p)$ in Appendix~\ref{S:potential}). Now let $D_1=\sum_{i=1}^{d_1} m_{1,i}\cdot(p_{1,i})$ and $D_2=\sum_{i=1}^{d_2} m_{2,i}\cdot(p_{2,i})$ such that $D_1,D_2\in\RDivPlusD(\Gamma)$ (this means $d=\sum_{i=1}^{d_1} m_{1,i}=\sum_{i=1}^{d_2} m_{2,i}$). Then by the linearity of the Laplacian, for an arbitrary $q\in\Gamma$, $\sum_{i=1}^{d_1}m_{1,i}\cdot j_q(x,p_{1,i})$, $\sum_{i=1}^{d_2}m_{2,i}\cdot j_q(x,p_{2,i})$, $\sum_{i=1}^{d_2}m_{2,i}\cdot j_q(x,p_{2,i})-\sum_{i=1}^{d_1}m_{1,i}\cdot j_q(x,p_{1,i})$ are associated functions of $D_1-d\cdot(q)$, $D_2-d\cdot(q)$ and $D_2-D_1$ respectively. 
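The construction of an associated function can be made concrete on a finite graph with unit edge lengths, viewed as a discretization of $\Gamma$. The Python sketch below is ours and purely illustrative: it solves the discrete analogue $Lf=\delta_p-\delta_q$ of $\Delta f_{(p)-(q)}=\delta_p-\delta_q$, where $L$ is the graph Laplacian, grounding the solution at $q$ with a small Gaussian-elimination routine.

```python
def solve(M, b):
    # Gaussian elimination with partial pivoting, for small dense systems.
    n = len(b)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, n):
            factor = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= factor * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def potential(adj, p, q):
    """Discrete analogue of the associated function of (p) - (q): solve
    L f = delta_p - delta_q for the graph Laplacian L, grounded at q."""
    n = len(adj)
    L = [[(sum(adj[i]) if i == j else 0) - adj[i][j] for j in range(n)]
         for i in range(n)]
    b = [0.0] * n
    b[p], b[q] = 1.0, -1.0
    keep = [i for i in range(n) if i != q]  # ground the solution: f[q] = 0
    x = solve([[L[i][j] for j in keep] for i in keep], [b[i] for i in keep])
    f = [0.0] * n
    for i, xi in zip(keep, x):
        f[i] = xi
    return f

# A 4-cycle with unit edge lengths: one unit of current enters at node 2
# and exits at node 0; the two parallel paths split it symmetrically.
A = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
f = potential(A, p=2, q=0)
print([round(v, 6) for v in f])  # [0.0, 0.5, 1.0, 0.5]
```

The symmetric output agrees with the electrical-network interpretation recalled in the text: the potential rises linearly along both arcs from the grounded node to the node where current enters.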
To have a quick understanding, one may think of $\Gamma$ as an electrical network with resistances given by the edge lengths. Then an associated function of $D_2-D_1$ is the electrical potential function on $\Gamma$ (with an arbitrary point of $\Gamma$ grounded) when $m_{2,i}$ units of current enter the network at $p_{2,i}$ for all $i=1,\cdots,d_2$ and $m_{1,i}$ units of current exit the network at $p_{1,i}$ for all $i=1,\cdots,d_1$. We may define $\iota:\RDivPlusD(\Gamma)\times \RDivPlusD(\Gamma)\to \mathbb{TP}(\Gamma)$ by $\iota(D_1,D_2)=[f_{D_2-D_1}]$. Moreover, fixing an arbitrary ${\mathbb R}$-divisor $D_0$ in $\RDivPlusD(\Gamma)$, the map $\iota_{D_0}:\RDivPlusD(\Gamma)\to\mathbb{TP}(\Gamma)$ with $D\mapsto [f_{D-D_0}]$ is an embedding of $\RDivPlusD(\Gamma)$ into $\mathbb{TP}(\Gamma)$. Note that $\divf(\iota_{D_0}(D))=D-D_0$. Now we can translate the tropical convexity theory from $\mathbb{TP}(\Gamma)$ to $\RDivPlusD(\Gamma)$ based on the maps $\iota$ and $\iota_{D_0}$. \begin{enumerate} \item The \emph{tropical metric} on $\RDivPlusD(\Gamma)$ is defined by the distance function $\rho(D_1,D_2):=\rho(\iota_{D_0}(D_1), \iota_{D_0}(D_2))=\Vert [f_{D_2-D_0}]-[f_{D_1-D_0}]\Vert=\Vert\iota(D_1,D_2)\Vert=\Vert [f_{D_2-D_1}]\Vert = \max(f_{D_2-D_1})-\min(f_{D_2-D_1})$. \item Let $l=\rho(D_1,D_2)$. Let $\alpha=\iota_{D_0}(D_1)$ and $\beta=\iota_{D_0}(D_2)$. The \emph{tropical path} from $D_1$ to $D_2$ in $\RDivPlusD(\Gamma)$ is defined as a map $P_{D_2-D_1}:[0,l]\to \RDivPlusD(\Gamma)$ given by \begin{align*} &P_{D_2-D_1}(t):=\divf(P_{(\alpha,\beta)}(t))+D_0 \\ &=\divf([\min(t,\underline{f_{D_2-D_0}-f_{D_1-D_0}})+f_{D_1-D_0}])+D_0 \\ &=\divf([\min(t,\underline{f_{D_2-D_1}})])+\divf([f_{D_1-D_0}])+D_0 \\ &=\divf([\min(t,\underline{f_{D_2-D_1}})])+D_1.
\end{align*} Note that the value of $\divf([\min(t,\underline{f_{D_2-D_1}})])$ at each point $p\in\Gamma$ is at least $-D_1(p)$, and thus $P_{D_2-D_1}(t)$ is effective of degree $d$ which means $P_{D_2-D_1}$ is well-defined. In particular, $P_{D_2-D_1}(0)=D_1$ and $P_{D_2-D_1}(l) = D_2$. The image of $P_{D_2-D_1}$, denoted by $\mathrm{tconv}(D_1,D_2)$ or $[D_1,D_2]$, is called the \emph{tropical segment} connecting $D_1$ and $D_2$. \item It is easy to see that the tropical metric and the tropical paths on $\RDivPlusD(\Gamma)$ are independent of the choice of $D_0$ in the embedding $\iota_{D_0}$. \item A set $T\subseteq\RDivPlusD(\Gamma)$ is \emph{tropically convex} if for every $D_1,D_2\in T$, the whole tropical segment $[D_1,D_2]$ is contained in $T$. For a subset $S$ of $\RDivPlusD(\Gamma)$, the \emph{tropical convex hull} generated by $S$, denoted by $\mathrm{tconv}(S)$, is the intersection of all tropical convex sets in $\RDivPlusD(\Gamma)$ containing $S$. If $S$ is finite, we also call $\mathrm{tconv}(S)$ a \emph{tropical polytope}. \item All theorems about lower tropical convex sets in $\mathbb{TP}(X)$ in the previous sections can be applied to tropical convex sets in $\RDivPlusD(\Gamma)$. \end{enumerate} \begin{remark} Recall that we have defined two types of tropical paths, the lower ones and upper ones, for $\mathbb{TP}(X)$ in Definition~\ref{D:tpath}. But here we have essentially only one way to define tropical paths for $\RDivPlusD(\Gamma)$. As an analogy to Definition~\ref{D:tpath}, one may want to call $P_{D_2-D_1}$ the lower tropical path from $D_1$ to $D_2$, and define the upper tropical path from $D_1$ to $D_2$ as $P^{D_2-D_1}(t):=\divf(P^{(\alpha,\beta)}(t))+D_0$. But an issue here is that in general $P^{D_2-D_1}(t)$ is not contained in $\RDivPlusD(\Gamma)$. Actually it can be verified that $P^{D_2-D_1}(t)=D_1+D_2-P_{D_2-D_1}(l-t)$ which is in general not effective.
There are a few more points worth mentioning here: \begin{enumerate} \item In the above, we use lower tropical paths in $\mathbb{TP}(\Gamma)$ to define tropical paths in $\RDivPlusD(\Gamma)$. But we can also use upper tropical paths in $\mathbb{TP}(\Gamma)$ to give an equivalent definition of tropical paths in $\RDivPlusD(\Gamma)$. Let $\alpha'=[f_{D_0-D_1}]$ and $\beta'=[f_{D_0-D_2}]$. Then $P_{D_2-D_1}(t)=D_0-\divf(P^{(\alpha',\beta')}(t))$. \item As for $\RDivPlusD(\Gamma)$, we may also embed $\RDivD(\Gamma)$ into $\mathbb{TP}(\Gamma)$ by fixing an ${\mathbb R}$-divisor $D_0$ in $\RDivD(\Gamma)$ and sending each ${\mathbb R}$-divisor $D$ in $\RDivD(\Gamma)$ to $[f_{D-D_0}]$ in $\mathbb{TP}(\Gamma)$. Therefore a tropical convexity theory for $\RDivD(\Gamma)$ can be derived from the tropical convexity theory for $\mathbb{TP}(\Gamma)$, just as we did for $\RDivPlusD(\Gamma)$. But in this case there is a difference: both lower tropical paths $P_{D_2-D_1}(t):=\divf(P_{(\alpha,\beta)}(t))+D_0$ and upper tropical paths $P^{D_2-D_1}(t):=\divf(P^{(\alpha,\beta)}(t))+D_0$ can be defined for $\RDivD(\Gamma)$ since the ${\mathbb R}$-divisors in the paths are not required to be effective and are fully contained in $\RDivD(\Gamma)$. \item Throughout the rest of this section, we will focus our discussion on the tropical convexity theory on $\RDivPlusD(\Gamma)$ instead of that on $\RDivD(\Gamma)$, and stick to the translation of theorems for lower tropical convexity on $\mathbb{TP}(\Gamma)$ to those for the tropical convexity on $\RDivPlusD(\Gamma)$. \end{enumerate} \end{remark} Now suppose $d$ is an integer and let us consider $\DivPlusD(\Gamma)$ which is a subset of $\RDivPlusD(\Gamma)$. As is conventional, a function $f\in CPA(\Gamma)$ is said to be \emph{rational} if $f$ is piecewise-linear with integral slopes. Clearly, $\divf(f)$ is a divisor of degree $0$.
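The associated divisor of a rational function, and the resulting linear equivalences, can likewise be illustrated on a finite graph with unit edge lengths standing in for $\Gamma$. The snippet below is an illustrative sketch of ours (not the paper's formalism): at each vertex it sums the incoming slopes of a vertex function, so a function with integral slopes produces a degree-$0$ ${\mathbb Z}$-divisor $\divf(f)$ and hence a linear equivalence $D\sim D+\divf(f)$.

```python
def divisor_of(adj, f):
    """Associated divisor of a vertex function on a unit-edge graph:
    at each vertex v, sum the incoming slopes f(v) - f(w) over neighbors w."""
    n = len(adj)
    return [sum(adj[v][w] * (f[v] - f[w]) for w in range(n)) for v in range(n)]

# A 4-cycle; f has integral slopes along every edge, i.e. f is "rational".
A = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
f = [0, 1, 2, 1]
D = divisor_of(A, f)
print(D, sum(D))  # [-2, 0, 2, 0] 0
```

Here $\divf(f)$ has degree $0$ and exhibits the linear equivalence of $2(v_0)$ and $2(v_2)$ on the $4$-cycle, in the spirit of the chip-firing description given later in this section.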
For each $D_1,D_2\in\DivD(\Gamma)$, we say $D_1$ is \emph{linearly equivalent} to $D_2$ (denoted $D_1\sim D_2$) if $f_{D_2-D_1}$ is rational. Note that the linear equivalence is an equivalence relation on $\Div(\Gamma)$ and we denote the linear equivalence class of a divisor $D$ by $[D]$. The \emph{complete linear system} $|D|$ associated to $D\in\DivD(\Gamma)$ is the set of all effective divisors linearly equivalent to $D$ and we say the degree of $|D|$ is $d$. The following are some facts about $\DivPlusD(\Gamma)$: \begin{enumerate} \item $\DivPlusD(\Gamma)$ with the metric topology induced from the tropical metric on $\RDivPlusD(\Gamma)$ is homeomorphic to the $d$-th symmetric product $\Gamma^d/S_d$ of $\Gamma$ with the symmetric product topology. We prove this statement in Appendix~\ref{S:EquiTop}. \item For each $D\in\DivPlusD(\Gamma)$, $|D|$ is contained in $\DivPlusD(\Gamma)$. \item For $D_1,D_2\in \DivPlusD(\Gamma)$, $D_1\sim D_2$ if and only if $[D_1,D_2]\subseteq \DivPlusD(\Gamma)$. \item By definition, an equivalent way to say that a subset $T$ of $\DivPlusD(\Gamma)$ is tropically convex is that $T$ is tropical-path-connected. \item $\DivPlusD(\Gamma)$ is not tropical-path-connected in general, and the nonempty complete linear systems of degree $d$ are exactly the tropical-path-connected components in $\DivPlusD(\Gamma)$. \item All complete linear systems $|D|$ are tropical polytopes which can be generated by the finite set of extremals of $|D|$. (See \cite{HMY12} for the definition and finiteness of extremals of complete linear systems. This notion of extremals agrees with our notion of extremals in Section~\ref{S:TropIndep} in the case of $|D|$.) Note that by Corollary~\ref{C:Polytope}, complete linear systems are compact. \end{enumerate} \begin{remark} The divisor theory on finite graphs can be considered as a discretization of the divisor theory on metric graphs. 
Actually the divisor theory for $\Div(G)$ where $G$ is a finite graph is closely related to the divisor theory for $\Div(\Gamma)$ where $\Gamma$ is the metric graph obtained by assigning unit length to every edge of $G$ \cite{Luo11,HKN13}. \end{remark} \begin{remark} \label{R:ChipFiring} Another way to think of linear equivalence on $\Div(\Gamma)$ is to use the so-called \emph{chip-firing moves}. We say a non-constant rational function $f$ on $\Gamma$ is \emph{primitive} if $\Gamma\setminus(\Gmin(f)\bigcup \Gmax(f))$ is a disjoint union of open segments where $\Gmin(f)$ and $\Gmax(f)$ are the minimizer and maximizer of $f$ respectively. Note that these open segments form a cut of the metric graph. Then $\divf(f)=D^+-D^-$ where $D^+$ and $D^-$ are the effective part and noneffective part of $\divf(f)$ respectively. Now consider a divisor $D$ of degree $d$. Then $D'=D+\divf(f)=D^++(D-D^-)$ is also a divisor of degree $d$ which is linearly equivalent to $D$. This also means $f_{D'-D}=f$. Note that $\supp(D^+)=\partial \Gmax(f)$ and $\supp(D^-)=\partial \Gmin(f)$. Let $l=\max(f)-\min(f)=\rho(D,D')$. We may visualize the evolution from $D$ to $D'$ as follows: \begin{enumerate} \item For each point $x\in\supp(D)$ such that $D(x)\geq 0$, we put $D(x)$ chips at the point $x$. If $D(x)<0$, we simply say that the point $x\in\Gamma$ is in debt of $-D(x)$ chips. In this sense, the divisor $D$ is represented by the configuration of chips on $\Gamma$. \item Now we take a chip-firing move from $\Gmin(f)$ to $\Gmax(f)$ along the cut formed by the open segments in $\Gamma\setminus(\Gmin(f)\bigcup \Gmax(f))$.
More precisely, for each point $x\in\partial \Gmin(f)$ and the outgoing tangent direction ${\mathbf t}$ at $x$ along an open segment in $\Gamma\setminus(\Gmin(f)\bigcup \Gmax(f))$ adjacent to $x$, take out $m_{\mathbf t}$ chips at $x$ where $m_{\mathbf t}$ is the slope of $f$ along ${\mathbf t}$ which is an integer (this might make the point $x$ in debt) and move these $m_{\mathbf t}$ chips along ${\mathbf t}$ at the speed of $1/m_{\mathbf t}$. After a period of time $l$, we get a new configuration of chips which represents the divisor $D'$. \end{enumerate} We call the above process the \emph{chip-firing move} associated to the primitive rational function $f$ or the chip-firing move from $D$ to $D'$ or the chip-firing move from $D$ of \emph{distance} $l$. By the \emph{direction} of this chip-firing move, we mean the process of moving $m_{\mathbf t}$ chips from $x$ into $\Gmin(f)^c$ at the speed of $1/m_{\mathbf t}$ for each $x\in\partial \Gmin(f)$ and each tangent direction ${\mathbf t}$ at $x$ shooting into $\Gmin(f)^c$ (ignoring the distance information). We also call $\Gmin(f)$ the \emph{base} of this chip-firing move or of this chip-firing move direction. If, in addition, $D$ is effective and $D\geq D^-$, then we always have enough chips on the boundary of $\Gmin(f)$ to fire and $D'=D^++(D-D^-)$ is effective, i.e., this chip-firing move makes no point in debt. We call it an \emph{effective chip-firing move} from $D$. Moreover, it can be easily verified that any rational function is a finite sum of primitive rational functions. Therefore, two divisors $D,D'\in \Div(\Gamma)$ are linearly equivalent if and only if $D'$ can be reached from $D$ via finitely many chip-firing moves. If in addition $D$ and $D'$ are effective, then it can also be shown that all the intermediate chip-firing moves can be chosen to be effective, i.e., the firings are within $|D|$. Actually, one way to choose the chip-firing moves is along the tropical path from $D$ to $D'$.
Furthermore, the above notions of chip-firing moves, chip-firing move directions, effective chip-firing moves, and bases of chip-firing moves can be straightforwardly generalized to the case of ${\mathbb R}$-divisors by allowing the slopes $m_{\mathbf t}$ of the corresponding functions to be real numbers. \end{remark} The divisor theory on finite graphs or metric graphs is an analogue of the divisor theory on algebraic curves, while in the latter one studies not only complete linear systems but also linear systems in general. Here we give the following definition of linear systems in the context of metric graphs. \begin{definition} \label{D:LinSys} For a divisor $D\in\DivPlusD(\Gamma)$, a tropical polytope $T\subseteq |D|$ containing $D$ is called a \emph{linear system} associated to $D$. We say the \emph{degree} of $T$ is $d$. For convenience, we also allow a linear system to be an empty set which is called the \emph{empty linear system}. \end{definition} \begin{example} \label{E:LinearSys} In Figure~\ref{F:LinearSys}, we consider a metric circle $\Gamma$ and effective divisors of degree $3$ on $\Gamma$. Suppose the total length of $\Gamma$ is $2\pi$. A point $x$ on $\Gamma$ can be represented by its polar angle $\theta(x)\in {\mathbb R}/2\pi$. Let $v_1$, $v_2$, $v_3$, $w_{12}$, $w_{23}$ and $w_{13}$ be points on $\Gamma$ such that $\theta(v_1)=7\pi/6$, $\theta(v_2)=\pi/2$, $\theta(v_3)=-\pi/6$, $\theta(w_{12})=5\pi/6$, $\theta(w_{23})=\pi/6$ and $\theta(w_{13})=-\pi/2$ respectively. Let $D_0=(v_1)+(v_2)+(v_3)$. Then one can verify that a divisor $D=(x_1)+(x_2)+(x_3)\in \DivPlusThree(\Gamma)$ is linearly equivalent to $D_0$ if and only if $\theta(x_1)+\theta(x_2)+\theta(x_3)=3\pi/2\ \mod 2\pi$.
Then $|D_0|$ can be represented by the locus of $\theta(x_1)+\theta(x_2)+\theta(x_3)=3\pi/2\ \mod 2\pi$ in the quotient of $({\mathbb R}/2\pi)^3$ by the natural action of the symmetric group $S_3$, which can actually be viewed as an equilateral triangle centered at $D_0$ with vertices $D_1=3(v_1)$, $D_2=3(v_2)$ and $D_3=3(v_3)$ as shown in Figure~\ref{F:LinearSys}. Moreover, the midpoint of the side $D_1D_2$ corresponds to the divisor $D_{12}=2(w_{12})+(v_3)$, the midpoint of the side $D_2D_3$ corresponds to the divisor $D_{23}=2(w_{23})+(v_1)$, and the midpoint of the side $D_1D_3$ corresponds to the divisor $D_{13}=2(w_{13})+(v_2)$. In Figure~\ref{F:LinearSys}, we also show some sub-linear systems of $|D_0|$. For example, one can verify that the tropical segment $[D_1,D_3]$ is exactly the side $D_1D_3$ of the triangle, the tropical segment $[D_1,D_{23}]$ is exactly the median $D_1D_{23}$ of the triangle, and the tropical segment $[D_{12},D_{23}]$ is the union of $[D_0,D_{12}]$ and $[D_0,D_{23}]$, which are straight segments in the medians $D_3D_{12}$ and $D_1D_{23}$ respectively. 
In addition, \begin{enumerate} \item the tropical path from $D_1$ to $D_3$ corresponds to exactly one chip-firing move where one chip moves from $v_1$ to $v_2$ along the segment through $w_{12}$ at the speed of two units and two chips move from $v_1$ to $v_3$ along the segment through $w_{13}$ at the speed of one unit, \item the tropical path from $D_1$ to $D_{23}$ corresponds to exactly one chip-firing move where one chip moves from $v_1$ to $w_{23}$ along the segment through $v_2$ at the speed of one unit and one chip moves from $v_1$ to $w_{23}$ along the segment through $v_3$ at the speed of one unit, and \item the tropical path from $D_{12}$ to $D_{23}$ corresponds to two chip-firing moves: the first is the chip-firing move from $D_{12}$ to $D_0$ where one chip moves from $w_{12}$ to $v_1$ at the speed of one unit and one chip moves from $w_{12}$ to $v_2$ at the speed of one unit, and the second is the chip-firing move from $D_0$ to $D_{23}$ where one chip moves from $v_2$ to $w_{23}$ at the speed of one unit and one chip moves from $v_3$ to $w_{23}$ at the speed of one unit. \end{enumerate} Moreover, the linear systems $\mathrm{tconv}(\{D_{12},D_{23},D_{13}\})$, $\mathrm{tconv}(\{D_{12},D_2,D_{13}\})$ and $\mathrm{tconv}(\{D_{12},D_2,D_{23}\})$ are also illustrated, which are purely $1$-dimensional, not of pure dimension, and purely $2$-dimensional, respectively. 
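The mod-$2\pi$ criterion above is easy to check numerically. The following Python snippet (our own illustration, not part of the formal development) verifies that all the divisors named in this example satisfy $\theta(x_1)+\theta(x_2)+\theta(x_3)=3\pi/2\ \mod 2\pi$ and hence lie in $|D_0|$:

```python
from math import pi

# Polar angles of the marked points on the metric circle (total length 2*pi).
theta = {"v1": 7*pi/6, "v2": pi/2, "v3": -pi/6,
         "w12": 5*pi/6, "w23": pi/6, "w13": -pi/2}

# Degree-3 divisors from the example, written as multisets of points.
divisors = {
    "D0": ["v1", "v2", "v3"],
    "D1": ["v1"] * 3, "D2": ["v2"] * 3, "D3": ["v3"] * 3,
    "D12": ["w12", "w12", "v3"],
    "D23": ["w23", "w23", "v1"],
    "D13": ["w13", "w13", "v2"],
}

def lin_equiv_to_D0(points, tol=1e-9):
    """Criterion from the example: sum of angles == 3*pi/2 (mod 2*pi)."""
    r = (sum(theta[p] for p in points) - 3 * pi / 2) % (2 * pi)
    return min(r, 2 * pi - r) < tol  # distance to 0 on the circle R/2*pi

# Every divisor listed above is linearly equivalent to D0 ...
assert all(lin_equiv_to_D0(pts) for pts in divisors.values())
# ... while a generic effective divisor of degree 3 is not.
assert not lin_equiv_to_D0(["v1", "v1", "v2"])
```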
\begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw (-1,2.5) node {\large $\Gamma$}; \coordinate (center) at (0,0.3); \def1.5{1.2}; \coordinate (v1) at ($(center)+(210:1.5)$); \coordinate (v2) at ($(center)+(90:1.5)$); \coordinate (v3) at ($(center)+(-30:1.5)$); \coordinate (w12) at ($(center)+(150:1.5)$); \coordinate (w23) at ($(center)+(30:1.5)$); \coordinate (w13) at ($(center)+(-90:1.5)$); \coordinate (th1) at ($(center)+(0:1.5)$); \coordinate (th2) at ($(center)+(135:1.5)$); \draw[thick] (center) circle[radius=1.5]; \draw [dashed,line width = .8pt] (center) -- (th1); \draw [dashed,line width = .8pt] (center) -- (th2); \pic [draw, ->, "$\theta$", line width = .5pt, angle radius=0.3cm, angle eccentricity=1.8] {angle = th1--center--th2}; \fill[blue] (v1) circle[radius=2pt]; \fill[blue] (v2) circle[radius=2pt]; \fill[blue] (v3) circle[radius=2pt]; \fill[red] (w12) circle[radius=2pt]; \fill[red] (w23) circle[radius=2pt]; \fill[red] (w13) circle[radius=2pt]; \draw [anchor=north east] (v1) node {$v_1$}; \draw [anchor=south] (v2) node {$v_2$}; \draw [anchor=north west] (v3) node {$v_3$}; \draw [anchor=south east] (w12) node {$w_{12}$}; \draw [anchor=south west] (w23) node {$w_{23}$}; \draw [anchor=north] (w13) node {$w_{13}$}; \end{scope} \begin{scope}[shift={(4,0)}] \draw (-1,2.5) node {\large $|D_0|$}; \coordinate (center) at (0,0); \def1.5{1.5}; \coordinate (D1) at ($(center)+(210:1.5)$); \coordinate (D2) at ($(center)+(90:1.5)$); \coordinate (D3) at ($(center)+(-30:1.5)$); \coordinate (D12) at ($(center)+(150:1.5/2)$); \coordinate (D23) at ($(center)+(30:1.5/2)$); \coordinate (D13) at ($(center)+(-90:1.5/2)$); \fill [black!30,opacity=1] (D1) --(D2) -- (D3)--(D1); \fill[blue] (center) circle[radius=2pt]; \fill[blue] (D1) circle[radius=2pt]; \fill[blue] (D2) circle[radius=2pt]; \fill[blue] (D3) circle[radius=2pt]; \fill[red] (D12) circle[radius=2pt]; \fill[red] (D23) circle[radius=2pt]; \fill[red] (D13) circle[radius=2pt]; \draw [anchor=east] (center) 
node {$D_0$}; \draw [anchor=north] (D1) node {$D_1$}; \draw [anchor=south] (D2) node {$D_2$}; \draw [anchor=north] (D3) node {$D_3$}; \draw [anchor=south east] (D12) node {$D_{12}$}; \draw [anchor=south west] (D23) node {$D_{23}$}; \draw [anchor=north] (D13) node {$D_{13}$}; \end{scope} \begin{scope}[shift={(8,0)}] \draw (-0.5,2.5) node {$[D_1,D_3]$}; \coordinate (center) at (0,0); \def1.5{1.5}; \coordinate (D1) at ($(center)+(210:1.5)$); \coordinate (D2) at ($(center)+(90:1.5)$); \coordinate (D3) at ($(center)+(-30:1.5)$); \coordinate (D12) at ($(center)+(150:1.5/2)$); \coordinate (D23) at ($(center)+(30:1.5/2)$); \coordinate (D13) at ($(center)+(-90:1.5/2)$); \fill [black!30,opacity=0.3] (D1) --(D2) -- (D3)--(D1); \draw [line width = 1.8pt] (D1) -- (D3); \fill[black] (D1) circle[radius=2pt]; \fill[black] (D3) circle[radius=2pt]; \draw [anchor=north] (D1) node {$D_1$}; \draw [anchor=north] (D3) node {$D_3$}; \end{scope} \begin{scope}[shift={(12,0)}] \draw (0,2.5) node {$[D_1,D_{23}]$}; \coordinate (center) at (0,0); \def1.5{1.5}; \coordinate (D1) at ($(center)+(210:1.5)$); \coordinate (D2) at ($(center)+(90:1.5)$); \coordinate (D3) at ($(center)+(-30:1.5)$); \coordinate (D12) at ($(center)+(150:1.5/2)$); \coordinate (D23) at ($(center)+(30:1.5/2)$); \coordinate (D13) at ($(center)+(-90:1.5/2)$); \fill [black!30,opacity=0.3] (D1) --(D2) -- (D3)--(D1); \draw [line width = 1.8pt] (D1) -- (D23); \fill[black] (center) circle[radius=2pt]; \fill[black] (D1) circle[radius=2pt]; \fill[black] (D23) circle[radius=2pt]; \draw [anchor=north west] (center) node {$D_0$}; \draw [anchor=north] (D1) node {$D_1$}; \draw [anchor=south west] (D23) node {$D_{23}$}; \end{scope} \begin{scope}[shift={(0,-4.5)}] \draw (-0.2,2.2) node {$[D_{12},D_{23}]$}; \coordinate (center) at (0,0); \def1.5{1.5}; \coordinate (D1) at ($(center)+(210:1.5)$); \coordinate (D2) at ($(center)+(90:1.5)$); \coordinate (D3) at ($(center)+(-30:1.5)$); \coordinate (D12) at ($(center)+(150:1.5/2)$); \coordinate 
(D23) at ($(center)+(30:1.5/2)$); \coordinate (D13) at ($(center)+(-90:1.5/2)$); \fill [black!30,opacity=0.3] (D1) --(D2) -- (D3)--(D1); \draw [line width = 1.8pt] (D12) --(center)-- (D23); \fill[black] (center) circle[radius=2pt]; \fill[black] (D12) circle[radius=2pt]; \fill[black] (D23) circle[radius=2pt]; \draw [anchor=north] (center) node {$D_0$}; \draw [anchor=south east] (D12) node {$D_{12}$}; \draw [anchor=south west] (D23) node {$D_{23}$}; \end{scope} \begin{scope}[shift={(4,-4.5)}] \draw (-0.2,2.2) node {$\mathrm{tconv}(\{D_{12},D_{23},D_{13}\})$}; \coordinate (center) at (0,0); \def1.5{1.5}; \coordinate (D1) at ($(center)+(210:1.5)$); \coordinate (D2) at ($(center)+(90:1.5)$); \coordinate (D3) at ($(center)+(-30:1.5)$); \coordinate (D12) at ($(center)+(150:1.5/2)$); \coordinate (D23) at ($(center)+(30:1.5/2)$); \coordinate (D13) at ($(center)+(-90:1.5/2)$); \fill [black!30,opacity=0.3] (D1) --(D2) -- (D3)--(D1); \draw [line width = 1.8pt] (D12) --(center)-- (D23); \draw [line width = 1.8pt] (center)-- (D13); \fill[black] (center) circle[radius=2pt]; \fill[black] (D12) circle[radius=2pt]; \fill[black] (D23) circle[radius=2pt]; \fill[black] (D13) circle[radius=2pt]; \draw [anchor=north east] (center) node {$D_0$}; \draw [anchor=south east] (D12) node {$D_{12}$}; \draw [anchor=south west] (D23) node {$D_{23}$}; \draw [anchor=north] (D13) node {$D_{13}$}; \end{scope} \begin{scope}[shift={(8,-4.5)}] \draw (0,2.2) node {$\mathrm{tconv}(\{D_{12},D_2,D_{13}\})$}; \coordinate (center) at (0,0); \def1.5{1.5}; \coordinate (D1) at ($(center)+(210:1.5)$); \coordinate (D2) at ($(center)+(90:1.5)$); \coordinate (D3) at ($(center)+(-30:1.5)$); \coordinate (D12) at ($(center)+(150:1.5/2)$); \coordinate (D23) at ($(center)+(30:1.5/2)$); \coordinate (D13) at ($(center)+(-90:1.5/2)$); \fill [black!30,opacity=0.3] (D1) --(D2) -- (D3)--(D1); \fill [black!100,opacity=1] (D12) --(center)-- (D2) -- cycle; \draw [line width = 1.8pt] (center)-- (D13); \draw [line width = 1.8pt] 
(D12)--(center)-- (D2); \fill[black] (D12) circle[radius=2pt]; \fill[black] (D2) circle[radius=2pt]; \fill[black] (D13) circle[radius=2pt]; \draw [anchor=north east] (center) node {$D_0$}; \draw [anchor=south east] (D12) node {$D_{12}$}; \draw [anchor=east] (D2) node {$D_2$}; \draw [anchor=north] (D13) node {$D_{13}$}; \end{scope} \begin{scope}[shift={(12,-4.5)}] \draw (0.2,2.2) node {$\mathrm{tconv}(\{D_{12},D_2,D_{23}\})$}; \coordinate (center) at (0,0); \def1.5{1.5}; \coordinate (D1) at ($(center)+(210:1.5)$); \coordinate (D2) at ($(center)+(90:1.5)$); \coordinate (D3) at ($(center)+(-30:1.5)$); \coordinate (D12) at ($(center)+(150:1.5/2)$); \coordinate (D23) at ($(center)+(30:1.5/2)$); \coordinate (D13) at ($(center)+(-90:1.5/2)$); \fill [black!30,opacity=0.3] (D1) --(D2) -- (D3)--(D1); \fill [black!100,opacity=1] (D12) --(center)--(D23)-- (D2) -- cycle; \fill[black] (D12) circle[radius=2pt]; \fill[black] (D2) circle[radius=2pt]; \fill[black] (D23) circle[radius=2pt]; \draw [anchor=north] (center) node {$D_0$}; \draw [anchor=south east] (D12) node {$D_{12}$}; \draw [anchor=east] (D2) node {$D_2$}; \draw [anchor=south west] (D23) node {$D_{23}$}; \end{scope} \end{tikzpicture} \caption{A metric circle $\Gamma$, a complete linear system $|D_0|$ on $\Gamma$ and several linear systems in $|D_0|$.} \label{F:LinearSys} \end{figure} \end{example} \subsection{Tropical Projections and the Tropical Weak Independence Criterion for Divisors and ${\mathbb R}$-Divisors} As stated in the previous subsection, using the embedding map $\iota_{D_0}:\RDivPlusD(\Gamma)\to\mathbb{TP}(\Gamma)$ with $D\mapsto [f_{D-D_0}]$ where $D_0\in \RDivPlusD$, we may translate the theorems for lower tropical convexity on $\mathbb{TP}(\Gamma)$ to theorems for tropical convexity on $\RDivPlusD$ and this translation is essentially independent from the ${\mathbb R}$-divisor $D_0$ used for the embedding $\iota_{D_0}$. 
First we may extend the $B$-pseudonorms to all degree-zero ${\mathbb R}$-divisors $D$ by setting $\llfloor D \rrfloor_p := \llfloor f_D \rrfloor_p$ for all $p\in[1,\infty]$. In particular, $\llfloor D_2-D_1 \rrfloor_p = \llfloor f_{D_2-D_1} \rrfloor_p=\llfloor \iota_{D_0}(D_2)-\iota_{D_0}(D_1)\rrfloor_p$ for all $D_1,D_2\in \RDivPlusD$. Then we can rewrite the tropical projection theorem (Theorem~\ref{T:main} and Corollary~\ref{C:CritTropProj}) as follows: \begin{theorem} \label{T:TropProjDiv} For a compact tropically convex subset $T$ of $\RDivPlusD(\Gamma)$ and an arbitrary element $E$ in $\RDivPlusD(\Gamma)$, consider the real-valued functions $\Theta^{(T,E)}_p:T \to [0,\infty)$ with $p\in[1,\infty]$ given by $D\mapsto\llfloor D-E\rrfloor_p$. In particular, $\Theta^{(T,E)}_\infty(D)= \rho(D,E)$. \begin{enumerate} \item There is a unique element $\pi_T(E)$, called the tropical projection of $E$ to $T$, which minimizes $\Theta^{(T,E)}_p$ for all $p\in[1,\infty)$. \item The set of minimizers of $\Theta^{(T,E)}_\infty$ is compact and tropically convex, and it contains $\pi_T(E)$. \item The following are equivalent: \begin{enumerate} \item $D_0=\pi_T(E)$; \item For each $D\in T$, $\llfloor D-E\rrfloor_1=\llfloor D-D_0\rrfloor_1+\llfloor D_0-E\rrfloor_1$; \item For each $D\in T$, $\Gmin(f_{D-D_0})\bigcap\Gmin(f_{D_0-E})\neq\emptyset$; \item For each $D\in T$, $\Gmin(f_{D-D_0})\bigcap\Gmin(f_{D_0-E})=\Gmin(f_{D-E})$; \item For each $D\in T$ such that $D\neq D_0$, $\Gmin(f_{D_0-D})\bigcap\Gmin(f_{D-E})=\emptyset$. \end{enumerate} \end{enumerate} \end{theorem} \begin{corollary} \label{C:TropProjDiv} For a compact tropically convex subset $T$ of $\RDivPlusD(\Gamma)$, let $cT+F:=\{c\cdot D+F\mid D\in T\}$ for $F\in\RDivPlusM(\Gamma)$ and $c>0$. Then $cT+F$ is a compact tropically convex subset of $\RDivPlusDM(\Gamma)$ and $\pi_{cT+F}(c\cdot E+F)=c\cdot \pi_T(E)+F$ for all $E\in\RDivPlusD(\Gamma)$. 
\end{corollary} \begin{proof} This is an interpretation of Proposition~\ref{P:TropProj}(1) for $\RDivPlus(\Gamma)$. Let $D_0=\pi_T(E)$. Then for each $D\in T$, $\Gmin(f_{D-D_0})\bigcap\Gmin(f_{D_0-E})\neq\emptyset$ by Theorem~\ref{T:TropProjDiv}. Since $\Gmin(f_{(c\cdot D+F)-(c\cdot D_0+F)})=\Gmin(c\cdot f_{D-D_0})=\Gmin(f_{D-D_0})$ and $\Gmin(f_{(c\cdot D_0+F)-(c\cdot E+F)})=\Gmin(c\cdot f_{D_0-E})=\Gmin(f_{D_0-E})$, we must have for each $c\cdot D+F \in cT+F$, $$\Gmin(f_{(c\cdot D+F)-(c\cdot D_0+F)})\bigcap \Gmin(f_{(c\cdot D_0+F)-(c\cdot E+F)})\neq \emptyset,$$ which means $\pi_{cT+F}(c\cdot E+F)=c\cdot D_0+F$. \end{proof} The following proposition is the version of Proposition~\ref{P:SeqTropProj} for $\RDivPlusD$. \begin{proposition} \label{P:SeqTropProjDiv} Let $T$ and $T'$ be compact tropically convex subsets of $\RDivPlusD(\Gamma)$ such that $T'\subseteq T$. Then for each $E\in\RDivPlusD(\Gamma)$, $\pi_{T'}(E)=\pi_{T'}(\pi_T(E))$. \end{proposition} The following theorem is the version of Theorem~\ref{T:CriterionTropIndep} and Theorem~\ref{T:extremal} for $\RDivPlusD$. \begin{theorem}\label{T:FiniteCriterion} Let $T\subseteq\RDivPlusD$ be a tropical polytope generated by $D_1,\cdots,D_n$, and let $E$ be an ${\mathbb R}$-divisor of degree $d$. \begin{enumerate} \item $E\in T$ if and only if $\bigcup_{i=1}^n\Gmin(f_{D_i-E})=\Gamma$. \item If $D_0=\pi_T(E)$ and $D$ is an arbitrary ${\mathbb R}$-divisor in $T$, then $\Gmin(f_{D-E})\subseteq\Gmin(f_{D_0-E})=\bigcup_{i=1}^n\Gmin(f_{D_i-E})$. \item Let $S$ be the set of extremals of $T$. Then $S\subseteq \{D_1,\cdots,D_n\}$ and $T=\mathrm{tconv}(S)$. \end{enumerate} \end{theorem} Note that in the above theorem, if we restrict further to $\DivPlusD(\Gamma)$ with $T$ being a linear system (Definition~\ref{D:LinSys}), then item (1) gives a finitely verifiable criterion for telling whether an arbitrary divisor $D\in \DivPlusD(\Gamma)$ is contained in $T$. Moreover, we have the following immediate corollary. 
\begin{corollary} Let $T$ be a linear system, i.e., $T=\mathrm{tconv}(\{D_1,\cdots,D_n\})$ where $D_1,\cdots,D_n$ are linearly equivalent divisors. Then for each ${\mathbb R}$-divisor $E\in\RDivPlusD\setminus \DivPlusD$, we must have $\bigcup_{i=1}^n\Gmin(f_{D_i-E})\neq\Gamma$. \end{corollary} \begin{proof} This follows from Theorem~\ref{T:FiniteCriterion}, knowing that $E$ cannot be an element of $T$. \end{proof} \begin{example} Let $T$ be the linear system $\mathrm{tconv}(\{D_{12},D_2,D_{13}\})$ in Figure~\ref{F:LinearSys}. Here we will use Theorem~\ref{T:FiniteCriterion} to verify that $D_0\in T$ and $D_{23}\notin T$. We note that $\Gmin(f_{D_{12}-D_0})$ is the segment $v_1v_3v_2$, $\Gmin(f_{D_2-D_0})$ is the segment $v_1v_3$, $\Gmin(f_{D_{13}-D_0})$ is the segment $v_1v_2v_3$, $\Gmin(f_{D_{12}-D_{23}})$ is the point $w_{23}$, $\Gmin(f_{D_2-D_{23}})$ is the segment $v_1v_3w_{23}$ and $\Gmin(f_{D_{13}-D_{23}})$ is the point $w_{23}$. Thus $\Gmin(f_{D_{12}-D_0})\bigcup \Gmin(f_{D_2-D_0}) \bigcup \Gmin(f_{D_{13}-D_0}) =\Gamma$, which means $D_0\in T$, and $\Gmin(f_{D_{12}-D_{23}})\bigcup \Gmin(f_{D_2-D_{23}}) \bigcup \Gmin(f_{D_{13}-D_{23}})$ is the segment $v_1v_3w_{23}$, which means $D_{23}\notin T$ by Theorem~\ref{T:FiniteCriterion}. \end{example} \subsection{Reduced Divisors: from $b$-Functions to $B$-Pseudonorms} \label{SS:bFuncBPseudoNorm} Let us first recall the definition of reduced divisors on a metric graph $\Gamma$. Let $X$ be a closed subset of $\Gamma$. For each $p\in X$, we denote the number of segments leaving $X$ at $p$ by $\mathrm{outdeg}_X(p)$. Note that $\mathrm{outdeg}_X(p)=0$ for all $p\in X\setminus \partial X$. \begin{definition} \label{D:RedDiv} Fix a point $q\in \Gamma$. We say a divisor $D\in\Div(\Gamma)$ is \emph{$q$-reduced} if \begin{enumerate} \item $D(x)\geq 0$ for all $x\in \Gamma\setminus\{q\}$ and \item for every closed subset $X$ of $\Gamma\setminus\{q\}$, there exists a point $p\in\partial X$ such that $D(p)<\mathrm{outdeg}_X(p)$. 
\end{enumerate} \end{definition} The most important property of reduced divisors is that for each point $q\in\Gamma$ and each divisor $D$, there exists a unique $q$-reduced divisor $D_q$ in the linear equivalence class $[D]$. In particular, if $D$ is effective, then $D_q\in|D|$. Adapted from an algorithm in the context of sandpile models \cite{Dhar90}, there is a classical algorithm commonly called Dhar's algorithm which can efficiently determine whether a divisor is $q$-reduced for finite graphs and metric graphs \cite{Luo11}. Baker and Shokrieh \cite{BS13} obtained an elegant characterization of reduced divisors for finite graphs using the energy pairing. In particular, they introduced the notion of the $b_q$-function on the divisor group of a finite graph $G$, where $q$ is an arbitrary vertex of $G$, and proved that an effective divisor $D$ is $q$-reduced if and only if $D$ minimizes the $b_q$-function restricted to the complete linear system $|D|$ on $G$ (Theorem~4.14 in \cite{BS13}). An interesting observation is that the natural translation of the $b_q$-function to the context of metric graphs is exactly the $\underline{B}^1$-pseudonorm of $f_{D-d\cdot(q)}$, where $D$ is a divisor of degree $d$ on $\Gamma$ (actually we name $B$-pseudonorms after $b$-functions). On one hand, the $q$-reduced divisors in the context of metric graphs should also be minimizers of the $b_q$-function on complete linear systems, as proved in the context of finite graphs in \cite{BS13}. On the other hand, we know that $|D|$ is a tropical polytope and thus we can take the tropical projection of any effective divisor $E$ of degree $d$ to $|D|$, which minimizes all the $B$-pseudonorms $\llfloor \cdot - E\rrfloor_p$ restricted to $|D|$. This actually means that the $q$-reduced divisor in $|D|$ should be exactly the tropical projection of the divisor $d\cdot(q)$ to $|D|$. We give a precise characterization in the following proposition. 
\begin{proposition} \label{P:RedTroPro} Consider a complete linear system $|D|\subseteq \DivPlusD(\Gamma)$ on a metric graph $\Gamma$. For an arbitrary point $q\in\Gamma$, the following are equivalent: \begin{enumerate} \item $D_0\in |D|$ is $q$-reduced. \item The base of any possible effective chip-firing move from $D_0$ contains $q$. \item For each $D'\in |D|$, $\Gmin(f_{D'-D_0})$ contains $q$. \item For each $D'\in |D|$, $\Gmin(f_{D'-D_0})\bigcap \Gmin(f_{D_0-d\cdot (q)})\neq\emptyset$. \item $D_0=\pi_{|D|}(d\cdot (q))$. \end{enumerate} \end{proposition} \begin{proof} (1)$\Leftrightarrow$(2): By Remark~\ref{R:ChipFiring}, to make an effective chip-firing move from $D_0$ with base $X$, we must have enough chips at all boundary points of $X$, i.e., for each point $p\in\partial X$, $D_0(p)\geq D^-(p)\geq \mathrm{outdeg}_X(p)$ where $D^-$ is the noneffective part of the primitive rational function with respect to the chip-firing move. Therefore, by Definition~\ref{D:RedDiv}, the base of any effective chip-firing move from $D_0$ must contain $q$, since $D_0$ is $q$-reduced. (2)$\Leftrightarrow$(3): This is because on one hand chip-firing moves are associated to primitive rational functions and on the other hand $\Gmin(f_{D'-D_0})$ for each $D'\in |D|$ must coincide with the base of some effective chip-firing move from $D_0$. (3)$\Leftrightarrow$(4): This follows from Lemma~\ref{L:CritEqui}. (4)$\Leftrightarrow$(5): This follows from Theorem~\ref{T:TropProjDiv}. \end{proof} \begin{lemma} \label{L:CritEqui} For each $q\in\Gamma$ and $D,D'\in\RDivPlusD(\Gamma)$ with $d>0$, $\Gmin(f_{D'-D})\bigcap \Gmin(f_{D-d\cdot (q)})\neq \emptyset$ if and only if $q\in\Gmin(f_{D'-D})$. \end{lemma} \begin{proof} Clearly $\Gmin(f_{D-d\cdot (q)})$ must contain $q$. So if $q\in\Gmin(f_{D'-D})$, then $\Gmin(f_{D'-D})\bigcap \Gmin(f_{D-d\cdot (q)})\neq \emptyset$. Suppose $\Gamma_1,\cdots,\Gamma_m$ are the connected components of $\Gamma\setminus\{q\}$. 
We note that for $i=1,\cdots,m$, \begin{enumerate} \item $\Gmin(f_{D-d\cdot (q)})\bigcap \Gamma_i\neq \emptyset$ if and only if $\Gmin(f_{D-d\cdot (q)})\bigcap \Gamma_i=\Gamma_i$ if and only if $\supp(D)\bigcap \Gamma_i=\emptyset$, and \item if $q\notin\Gmin(f_{D'-D})$, then $\Gmin(f_{D'-D})\bigcap \Gamma_i\neq \emptyset$ if and only if $\supp(D)\bigcap \Gamma_i\neq\emptyset$. \end{enumerate} Now suppose $\Gmin(f_{D'-D})\bigcap \Gmin(f_{D-d\cdot (q)})\neq \emptyset$. If $q\notin\Gmin(f_{D'-D})$, then there must be some $\Gamma_i$ such that $\Gmin(f_{D'-D})\bigcap \Gamma_i\neq \emptyset$ and $\Gmin(f_{D-d\cdot (q)})\bigcap \Gamma_i\neq \emptyset$. Then by (1), $\Gmin(f_{D-d\cdot (q)})\bigcap \Gamma_i\neq \emptyset$ means that $\supp(D)\bigcap \Gamma_i=\emptyset$, while by (2), $\Gmin(f_{D'-D})\bigcap \Gamma_i\neq \emptyset$ means that $\supp(D)\bigcap \Gamma_i\neq\emptyset$, a contradiction. \end{proof} \begin{remark} Letting $X$ be the vertex set of a finite graph $G$ equipped with the counting measure, we can also use our theory to study the $b$-functions on $G$ directly. In this way, the $b_q$-function is exactly the $\underline{B}^1$-pseudonorm of $f_{D-d\cdot(q)}$, where $q$ is a vertex of $G$ and $D$ is a divisor of degree $d$ on $G$. Note that Remark~4.15 of \cite{BS13} provides variations of the $b_q$-function in which the weights of different vertices can be different, as long as they are all non-negative. Such variations do not affect the fact that the $q$-reduced divisors are the minimizers of the $b_q$-function restricted to complete linear systems. This fact is exactly reflected in Corollary~\ref{C:CritTropProj}(c), which says tropical projections are independent of the measure on $X$, since different measures on $X$ can be considered as different distributions of weights on the vertices. 
\end{remark} Proposition~\ref{P:RedTroPro} tells us that the $q$-reduced divisor in a complete linear system $|D|$ of degree $d$ is nothing but the tropical projection of the divisor $d\cdot(q)$ to $|D|$. Therefore we have the following natural generalization of the notion of reduced divisors. \begin{definition} \label{D:GeneralRed} Let $T$ be a compact tropically convex subset of $\RDivPlusD$. Then for each $q\in\Gamma$, the \emph{$q$-reduced ${\mathbb R}$-divisor} in $T$ is the tropical projection of $d\cdot(q)$ to $T$. \end{definition} We can derive the following proposition as an analogue of Proposition~\ref{P:RedTroPro}. \begin{proposition} \label{P:GeneralRed} Consider a compact tropically convex subset $T$ of $\RDivPlusD(\Gamma)$. For an arbitrary point $q\in\Gamma$, the following are equivalent: \begin{enumerate} \item $D_0$ is the $q$-reduced ${\mathbb R}$-divisor in $T$. \item The base of any possible effective chip-firing move inside $T$ from $D_0$ contains $q$. \item For each $D\in T$, $\Gmin(f_{D-D_0})$ contains $q$. \item For each $D\in T$, $\Gmin(f_{D-D_0})\bigcap \Gmin(f_{D_0-d\cdot (q)})\neq\emptyset$. \item $D_0=\pi_T(d\cdot (q))$. \end{enumerate} \end{proposition} \begin{proof} The equivalence of (1) and (5) follows from Definition~\ref{D:GeneralRed}. The equivalence of (2)--(5) follows from arguments analogous to those in the proof of Proposition~\ref{P:RedTroPro}, with the notion of chip-firing moves generalized to ${\mathbb R}$-divisors (Remark~\ref{R:ChipFiring}). \end{proof} \begin{corollary}\label{C:RedVal} Let $q$ be an arbitrary point in $\Gamma$ and $T$ be a compact tropically convex subset of $\RDivPlusD(\Gamma)$. For each $D\in T$, the value of $D$ at $q$ is at most the value of the $q$-reduced ${\mathbb R}$-divisor in $T$ at $q$. \end{corollary} \begin{proof} Let $D_0$ be the $q$-reduced ${\mathbb R}$-divisor in $T$. Then by Proposition~\ref{P:GeneralRed}, we see that for each $D\in T$, $\Gmin(f_{D-D_0})$ contains $q$. This actually implies that the value of $D-D_0$ at $q$ is non-positive. 
\end{proof} \begin{remark} \label{R:RedVal} We can actually be more precise about Corollary~\ref{C:RedVal}. As in the above proof, let $B=\Gmin(f_{D-D_0})$. If $q$ is a boundary point of $B$, then $D(q)<D_0(q)$, and if $q$ is not a boundary point of $B$, then $D(q)=D_0(q)$. \end{remark} The following proposition says that reduced ${\mathbb R}$-divisors are invariant under ``translation'' and scaling of the corresponding compact tropically convex set. \begin{proposition} Let $T$ be a compact tropically convex subset of $\RDivPlusD$, let $F$ be an effective ${\mathbb R}$-divisor of degree $m$, and let $c>0$. Let $cT+F:=\{c\cdot D+F\mid D\in T\}\subseteq \RDivPlusDM$. Then $\pi_{cT+F}((cd+m)\cdot (q))=\pi_{cT+F}(cd\cdot (q)+F)=c\cdot \pi_T(d\cdot (q))+F$ for each $q\in\Gamma$. \end{proposition} \begin{proof} First we note that $\pi_{cT+F}(cd\cdot (q)+F)=c\cdot \pi_T(d\cdot (q))+F$ follows from Corollary~\ref{C:TropProjDiv} directly. Now let $D_0$ be the $q$-reduced ${\mathbb R}$-divisor $\pi_T(d\cdot (q))$ in $T$. Then by Proposition~\ref{P:GeneralRed}, for each $D\in T$, $\Gmin(f_{D-D_0})$ contains $q$. Therefore, for each $c\cdot D+F\in cT+F$, we have $q\in \Gmin(f_{(c\cdot D+F)-(c\cdot D_0+F)})=\Gmin(f_{D-D_0})$. Using Proposition~\ref{P:GeneralRed} again, this means that $c\cdot D_0+F$ is exactly the $q$-reduced ${\mathbb R}$-divisor $\pi_{cT+F}((cd+m)\cdot (q))$ in $cT+F$. \end{proof} \begin{example} \label{E:Red} Consider the metric circle $\Gamma$ in Figure~\ref{F:LinearSys}. Let us use Proposition~\ref{P:GeneralRed}(2) to verify that some divisors are reduced with respect to certain points. \begin{enumerate} \item Consider the complete linear system $|D_0|$. 
There are three possible effective chip-firing move directions from $D_1$: \begin{enumerate} \item along the direction of $D_1D_2$, i.e., two chips move from $v_1$ towards $w_{12}$ at the speed of one unit and one chip moves from $v_1$ towards $w_{13}$ at the speed of two units; \item along the direction of $D_1D_3$, i.e., two chips move from $v_1$ towards $w_{13}$ at the speed of one unit and one chip moves from $v_1$ towards $w_{12}$ at the speed of two units; and \item along the direction of $D_1D_0$, i.e., one chip moves from $v_1$ towards $w_{12}$ and one chip moves from $v_1$ towards $w_{13}$, both at the speed of one unit. \end{enumerate} The bases of all the above chip-firing move directions are identical to $\{v_1\}$. Therefore, the $v_1$-reduced divisor in $|D_0|$ is $D_1$ by Proposition~\ref{P:GeneralRed}(2). It can be verified analogously that the $v_2$-reduced divisor in $|D_0|$ is $D_2=3(v_2)$, and the $v_3$-reduced divisor in $|D_0|$ is $D_3=3(v_3)$. \item Consider the tropical segment $[D_1,D_3]$. We observe that an effective chip-firing move inside $[D_1,D_3]$ from $D_1$ can only be along the direction of $D_1D_3$, whose base is the point $v_1$, and an effective chip-firing move inside $[D_1,D_3]$ from $D_3$ can only be along the direction of $D_3D_1$, whose base is the point $v_3$. Therefore, $D_1$ and $D_3$ are the $v_1$-reduced and $v_3$-reduced divisors in $[D_1,D_3]$ respectively. Moreover, an effective chip-firing move inside $[D_1,D_3]$ from $D_{13}$ can be either along the direction of $D_{13}D_1$, whose base is the segment $w_{13}v_3v_2$, or along the direction of $D_{13}D_3$, whose base is the segment $w_{13}v_1v_2$. Both bases contain $w_{13}$ and $v_2$, which means that $D_{13}$ is the reduced divisor in $[D_1,D_3]$ with respect to both $w_{13}$ and $v_2$. \item Let $T=\mathrm{tconv}(\{D_{12},D_{23},D_{13}\})$. 
We observe that an effective chip-firing move inside $T$ from $D_0$ should be along one of the directions of $D_0D_{12}$, $D_0D_{23}$ and $D_0D_{13}$, whose bases are the segment $v_1v_3v_2$, the segment $v_2v_1v_3$ and the segment $v_1v_2v_3$ respectively. This means that the reduced divisors in $T$ with respect to $v_1$, $v_2$ and $v_3$ are all identical to $D_0$. From $D_{12}$, there is only one chip-firing move direction, which is along $D_{12}D_0$ with base the point $w_{12}$. Therefore, $D_{12}$ is the $w_{12}$-reduced divisor in $T$. Analogously, $D_{23}$ and $D_{13}$ are the $w_{23}$-reduced and $w_{13}$-reduced divisors in $T$ respectively. \end{enumerate} \end{example} \subsection{Reduced Divisor Maps and Tropical Trees} By sending each point $q$ of a metric graph $\Gamma$ to the $q$-reduced divisor in a complete linear system $|D|$ on $\Gamma$, we can naturally define a map, called the reduced divisor map, from $\Gamma$ to $|D|$, which was originally studied in \cite{Amini13}. Using the generalized notion of reduced divisors introduced in the previous subsection, we can broaden the notion of reduced divisor maps. \begin{definition} \label{D:RedDivMap} Let $T$ be a compact tropically convex subset of $\RDivPlusD$. The map $\Red_T:\Gamma\to T$ given by $q\mapsto \pi_T(d\cdot (q))$ is called the \emph{reduced divisor map} from $\Gamma$ to $T$. \end{definition} The following proposition is a direct corollary of Proposition~\ref{P:SeqTropProjDiv}. \begin{proposition} \label{P:SeqRedjDiv} Let $T$ and $T'$ be compact tropically convex subsets of $\RDivPlusD(\Gamma)$ such that $T'\subseteq T$. Then for each $q\in\Gamma$, $\Red_{T'}(q)=\pi_{T'}(\Red_T(q))$. \end{proposition} Clearly reduced divisor maps are continuous, since tropical projections are continuous. In the following discussions, we will focus on reduced divisor maps to linear systems. 
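For finite graphs, the test for $q$-reducedness via Dhar's algorithm, mentioned in the previous subsection, admits a very short implementation. The Python sketch below (our own naming and data layout, for illustration only) burns the graph starting from $q$: an unburnt vertex burns once the burnt edges incident to it outnumber its chips, and $D$ is $q$-reduced exactly when the fire consumes every vertex.

```python
from collections import Counter

def is_q_reduced(adj, D, q):
    """Dhar's burning algorithm on a finite multigraph.

    adj[v] is a Counter mapping each neighbor of v to the edge
    multiplicity; D maps vertices to chip counts (assumed >= 0 away
    from q).  Start a fire at q; an unburnt vertex burns as soon as
    the number of burnt edges incident to it exceeds its chip count.
    D is q-reduced iff the fire consumes the whole graph.
    """
    burnt = {q}
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v in burnt:
                continue
            incident = sum(m for u, m in adj[v].items() if u in burnt)
            if incident > D.get(v, 0):
                burnt.add(v)
                changed = True
    return len(burnt) == len(adj)

# Triangle graph on vertices a, b, c with single edges: if a closed set
# holds as many chips as its outdegree at every boundary vertex, the
# fire stalls there and D is not q-reduced.
triangle = {v: Counter(u for u in "abc" if u != v) for v in "abc"}
assert is_q_reduced(triangle, {"b": 0, "c": 0}, "a")
assert not is_q_reduced(triangle, {"b": 2, "c": 0}, "a")
```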
\begin{figure} \centering \begin{tikzpicture}[scale=0.9] \begin{scope} \draw (-1.5,1.5) node[anchor=east] {(a)}; \begin{scope} [shift = {(1,0)}] \coordinate (center) at (0,0); \def1.5{1.5}; \coordinate (v1) at ($(center)+(210:1.5)$); \coordinate (v2) at ($(center)+(90:1.5)$); \coordinate (v3) at ($(center)+(-30:1.5)$); \coordinate (w12) at ($(center)+(150:1.5)$); \coordinate (w23) at ($(center)+(30:1.5)$); \coordinate (w13) at ($(center)+(-90:1.5)$); \draw[thick] (center) circle[radius=1.5]; \fill[blue] (v1) circle[radius=3pt]; \fill[blue] (v2) circle[radius=3pt]; \fill[blue] (v3) circle[radius=3pt]; \draw [anchor=north east] (v1) node {$v_1$}; \draw [anchor=south] (v2) node {$v_2$}; \draw [anchor=north west] (v3) node {$v_3$}; \draw [anchor=north east] (w13) node {$w_{13}$}; \coordinate (center) at (0,-3.5); \def1.5{1.5}; \coordinate (D1) at ($(center)+(210:1.5)$); \coordinate (D2) at ($(center)+(90:1.5)$); \coordinate (D3) at ($(center)+(-30:1.5)$); \coordinate (D12) at ($(center)+(150:1.5/2)$); \coordinate (D23) at ($(center)+(30:1.5/2)$); \coordinate (D13) at ($(center)+(-90:1.5/2)$); \fill [black!30,opacity=0.3] (D1) --(D2) -- (D3)--(D1); \draw [line width = 1.8pt] (D1)-- (D3); \fill[black] (D1) circle[radius=2pt]; \fill[black] (D3) circle[radius=2pt]; \fill[black] (D13) circle[radius=2pt]; \draw [anchor=north] (D1) node {$D_1$}; \draw [anchor=north] (D3) node {$D_3$}; \draw [anchor=north] (D13) node {$D_{13}$}; \def0.25{0.2} \draw [->, dashed, line width = 1pt] (v1)-- ($(D1)+(0,0.25)$); \draw [->, dashed, line width = 1pt] (v3)-- ($(D3)+(0,0.25)$); \draw [->, dashed, line width = 1pt] (v2)-- ($(D13)+(0,0.25)$); \fill[red] (w13) circle[radius=3pt]; \end{scope} \begin{scope} [shift= {(6,0)}] \coordinate (center) at (0,0); \def1.5{1.5}; \coordinate (v1) at ($(center)+(210:1.5)$); \coordinate (v2) at ($(center)+(90:1.5)$); \coordinate (v3) at ($(center)+(-30:1.5)$); \fill[black] (v1) circle[radius=3pt]; \fill[black] (v2) circle[radius=3pt]; \fill[black] (v3) 
circle[radius=3pt]; \draw [anchor=north east] (v1) node {$v_1$}; \draw [anchor=south] (v2) node {$v_2$}; \draw [anchor=north west] (v3) node {$v_3$}; \coordinate (D1) at ($(v1)+(0,-3)$); \coordinate (D3) at ($(v3)+(0,-3)$); \draw [line width = 1.8pt] (D1)-- (D3); \fill[black] (D1) circle[radius=2pt]; \fill[black] (D3) circle[radius=2pt]; \draw [anchor=north] (D1) node {$D_1$}; \draw [anchor=north] (D3) node {$D_3$}; \path [line width = 1pt] (v1) edge node[pos=0.5,left]{$1$} (v2) (v2) edge node[pos=0.5,right]{$1$} (v3) (v1) edge node[pos=0.5,below]{$2$} (v3); \draw [->, dashed, line width = 1pt] ($(center)+(0,-1.5)$)-- ($(center)+(0,-3.5)$); \end{scope} \end{scope} \begin{scope} [shift={(0,-7)}] \draw (-1.5,1.5) node[anchor=east] {(b)}; \begin{scope}[shift={(0,-0.5)}] \coordinate (center) at (0,0); \def1.5{1.5}; \coordinate (v1) at ($(center)+(210:1.5)$); \coordinate (v2) at ($(center)+(90:1.5)$); \coordinate (v3) at ($(center)+(-30:1.5)$); \coordinate (w12) at ($(center)+(150:1.5)$); \coordinate (w23) at ($(center)+(30:1.5)$); \coordinate (w13) at ($(center)+(-90:1.5)$); \draw [name path= circle, thick] (center) circle (1.5); \fill[blue] (v1) circle[radius=3pt]; \fill[blue] (v2) circle[radius=3pt]; \fill[blue] (v3) circle[radius=3pt]; \fill[red] (w12) circle[radius=3pt]; \fill[red] (w23) circle[radius=3pt]; \fill[red] (w13) circle[radius=3pt]; \draw [anchor=north east] (v1) node {$v_1$}; \draw [anchor=south] (v2) node {$v_2$}; \draw [anchor=north west] (v3) node {$v_3$}; \draw [anchor=south east] (w12) node {$w_{12}$}; \draw [anchor=south west] (w23) node {$w_{23}$}; \draw [anchor=north] (w13) node {$w_{13}$}; \draw[blue, dashed] (center) -- (v1); \draw[blue, dashed] (center) -- (v2); \draw[blue, dashed] (center) -- (v3); \foreach \i in {1,2,3} {\coordinate (P) at ($(v1)+(150:\i*1.5/4)$) ; \coordinate (Q) at ($(v2)+(150:\i*1.5/4)$) ; \coordinate (O) at ($(center)+(150:\i*1.5/4)$) ; \path [name path=OP] (O) -- (P); \path [name path=OQ] (O) -- (Q); \draw [name 
intersections={of=circle and OP}] (intersection-1) coordinate (P1); \draw [name intersections={of=circle and OQ}] (intersection-1) coordinate (Q1); \draw [dashed] (P1)-- (O) -- (Q1); }
\foreach \i in {1,2,3} {\coordinate (P) at ($(v2)+(30:\i*1.5/4)$) ; \coordinate (Q) at ($(v3)+(30:\i*1.5/4)$) ; \coordinate (O) at ($(center)+(30:\i*1.5/4)$) ; \path [name path=OP] (O) -- (P); \path [name path=OQ] (O) -- (Q); \draw [name intersections={of=circle and OP}] (intersection-1) coordinate (P1); \draw [name intersections={of=circle and OQ}] (intersection-1) coordinate (Q1); \draw [dashed] (P1)-- (O) -- (Q1); }
\foreach \i in {1,2,3} {\coordinate (P) at ($(v1)+(-90:\i*1.5/4)$) ; \coordinate (Q) at ($(v3)+(-90:\i*1.5/4)$) ; \coordinate (O) at ($(center)+(-90:\i*1.5/4)$) ; \path [name path=OP] (O) -- (P); \path [name path=OQ] (O) -- (Q); \draw [name intersections={of=circle and OP}] (intersection-1) coordinate (P1); \draw [name intersections={of=circle and OQ}] (intersection-1) coordinate (Q1); \draw [dashed] (P1)-- (O) -- (Q1); }
\coordinate (center) at (0,-3.5);
\coordinate (D1) at ($(center)+(210:1.5)$); \coordinate (D2) at ($(center)+(90:1.5)$); \coordinate (D3) at ($(center)+(-30:1.5)$);
\coordinate (D12) at ($(center)+(150:1.5/2)$); \coordinate (D23) at ($(center)+(30:1.5/2)$); \coordinate (D13) at ($(center)+(-90:1.5/2)$);
\fill [black!30,opacity=0.3] (D1) --(D2) -- (D3)--(D1);
\draw [line width = 1.8pt] (D12) --(center)-- (D23); \draw [line width = 1.8pt] (center)-- (D13);
\fill[black] (center) circle[radius=2pt]; \fill[black] (D12) circle[radius=2pt]; \fill[black] (D23) circle[radius=2pt]; \fill[black] (D13) circle[radius=2pt];
\draw [anchor=north east] (center) node {$D_0$}; \draw [anchor=south east] (D12) node {$D_{12}$}; \draw [anchor=south west] (D23) node {$D_{23}$}; \draw [anchor=north] (D13) node {$D_{13}$};
\end{scope}
\begin{scope}[shift={(5,0)}, x = {(1cm,0cm)}, y = {(0.5cm,0.8cm)}, z = {(0cm,1cm)}]
\coordinate (center) at (0,0,-3.5);
\coordinate (D12) at ($(center)+(150:1.5)$); \coordinate (D23) at ($(center)+(30:1.5)$); \coordinate (D13) at ($(center)+(-90:1.5)$);
\coordinate (v1) at (0,0,0); \coordinate (v2) at (0,0,0.4); \coordinate (v3) at (0,0,-0.4);
\coordinate (w12) at ($(v2)+(150:1.5)$); \coordinate (w23) at ($(v1)+(30:1.5)$); \coordinate (w13) at ($(v3)+(-90:1.5)$);
\draw [line width = 1.8pt] (center)-- (D12); \draw [line width = 1.8pt] (center)-- (D23); \draw [line width = 1.8pt] (center)-- (D13);
\draw [line width = 1pt] (w12)-- (v2) -- (w23); \draw [line width = 1pt] (w23)-- (v3) -- (w13); \draw [line width = 1pt] (w13)-- (v1) -- (w12);
\fill[black] (v1) circle[radius=2pt]; \fill[black] (v2) circle[radius=2pt]; \fill[black] (v3) circle[radius=2pt];
\fill[black] (w12) circle[radius=2pt]; \fill[black] (w23) circle[radius=2pt]; \fill[black] (w13) circle[radius=2pt];
\fill[black] (D12) circle[radius=2pt]; \fill[black] (D23) circle[radius=2pt]; \fill[black] (D13) circle[radius=2pt];
\draw [anchor=east] (v1) node {$v_1$}; \draw [anchor=south] (v2) node {$v_2$}; \draw [anchor=north west] (v3) node {$v_3$};
\draw [anchor=south] (w12) node {$w_{12}$}; \draw [anchor=south] (w23) node {$w_{23}$}; \draw [anchor=west] (w13) node {$w_{13}$};
\draw [anchor=north west] (center) node {$D_0$}; \draw [anchor=east] (D12) node {$D_{12}$}; \draw [anchor=north] (D23) node {$D_{23}$}; \draw [anchor=north] (D13) node {$D_{13}$};
\draw [->, dashed, line width = 1pt] (v2)-- ($(center)+(0,0,0.25)$); \draw [->, dashed, line width = 1pt] (w12)-- ($(D12)+(0,0,0.25)$); \draw [->, dashed, line width = 1pt] (w23)-- ($(D23)+(0,0,0.25)$); \draw [->, dashed, line width = 1pt] (w13)-- ($(D13)+(0,0,0.25)$);
\end{scope}
\begin{scope}[shift={(10,0)}, x = {(1cm,0cm)}, y = {(0.5cm,0.8cm)}, z = {(0cm,1cm)}]
\coordinate (center) at (0,0,-3.5);
\coordinate (D12) at ($(center)+(150:1.5)$); \coordinate (D23) at ($(center)+(30:1.5)$); \coordinate (D13) at ($(center)+(-90:1.5)$); \coordinate
(v1) at (0,0,0); \coordinate (v2) at (0,0,0.4); \coordinate (v3) at (0,0,-0.4);
\coordinate (w12) at ($(v2)+(150:1.5)$); \coordinate (w23) at ($(v1)+(30:1.5)$); \coordinate (w13) at ($(v3)+(-90:1.5)$);
\coordinate (v12) at ($(v3)+(150:1.5)$); \coordinate (v23) at ($(v2)+(30:1.5)$); \coordinate (v13) at ($(v2)+(-90:1.5)$);
\draw [line width = 1.8pt] (center)-- (D12); \draw [line width = 1.8pt] (center)-- (D23); \draw [line width = 1.8pt] (center)-- (D13);
\draw [line width = 1pt] (w12)-- (v2) -- (w23); \draw [line width = 1pt] (w23)-- (v3) -- (w13); \draw [line width = 1pt] (w13)-- (v1) -- (w12);
\draw [red, line width = 1pt] (v2) -- (v13); \draw [red, line width = 1pt] (v1) -- (v23); \draw [red, line width = 1pt] (v3) -- (v12);
\fill[black] (v1) circle[radius=2pt]; \fill[black] (v2) circle[radius=2pt]; \fill[black] (v3) circle[radius=2pt];
\fill[black] (w12) circle[radius=2pt]; \fill[black] (w23) circle[radius=2pt]; \fill[black] (w13) circle[radius=2pt];
\fill[red] (v12) circle[radius=2pt]; \fill[red] (v23) circle[radius=2pt]; \fill[red] (v13) circle[radius=2pt];
\fill[black] (D12) circle[radius=2pt]; \fill[black] (D23) circle[radius=2pt]; \fill[black] (D13) circle[radius=2pt];
\draw [->, dashed, line width = .8pt] (v2)-- ($(center)+(0,0,0.25)$); \draw [->, dashed, line width = .8pt] (w12)-- ($(D12)+(0,0,0.25)$); \draw [->, dashed, line width = .8pt] (v23)-- ($(D23)+(0,0,0.25)$); \draw [->, dashed, line width = .8pt] (v13)-- ($(D13)+(0,0,0.25)$);
\end{scope}
\end{scope}
\end{tikzpicture}
\caption{Examples of reduced divisor maps and harmonic morphisms.}
\label{F:RedHarm}
\end{figure}
\begin{example} \label{E:RedMap} As discussed in Example~\ref{E:Red}(1), we see that $\Red_{|D_0|}(v_1)=D_1$, $\Red_{|D_0|}(v_2)=D_2$ and $\Red_{|D_0|}(v_3)=D_3$. Actually $\Red_{|D_0|}$ is an embedding of $\Gamma$ into $|D_0|$ where the image of $\Red_{|D_0|}$ is the circumference of $|D_0|$. Now consider the reduced divisor map to the tropical segment $[D_1,D_3]$.
In Example~\ref{E:Red}(2), we have shown that $\Red_{[D_1,D_3]}(v_1)=D_1$, $\Red_{[D_1,D_3]}(v_3)=D_3$ and $\Red_{[D_1,D_3]}(v_2)=\Red_{[D_1,D_3]}(w_{13})=D_{13}$. Note that from Example~\ref{E:LinearSys}, we know that in the chip-firing move from $D_1$ to $D_3$, two chips move from $v_1$ to $v_3$ along the segment $v_1w_{13}v_3$ at the speed of one unit and one chip moves from $v_1$ to $v_3$ along the segment $v_1v_2v_3$ at the speed of two units. Thus we may write $P_{D_3-D_1}(t) = 2\cdot (x)+(y)$ where $x$ is the point on the segment $v_1w_{13}v_3$ of distance $t/2$ from $v_1$ and $y$ is the point on the segment $v_1v_2v_3$ of distance $t$ from $v_1$ for $t\in[0,\rho(D_1,D_3)]$. One can verify that $\Red_{[D_1,D_3]}(x)=\Red_{[D_1,D_3]}(y)=P_{D_3-D_1}(t)$. This reduced divisor map to $[D_1,D_3]$ is also illustrated in Figure~\ref{F:RedHarm}(a). Now consider $T=\mathrm{tconv}(\{D_{12},D_{23},D_{13}\})$. In Example~\ref{E:Red}(3), we have shown that $\Red_T(v_1)=\Red_T(v_2)=\Red_T(v_3)=D_0$, $\Red_T(w_{12})=D_{12}$, $\Red_T(w_{23})=D_{23}$ and $\Red_T(w_{13})=D_{13}$. Actually, $T$ can be realized by gluing $v_1w_{12}$ with $v_2w_{12}$, $v_2w_{23}$ with $v_3w_{23}$, and $v_1w_{13}$ with $v_3w_{13}$, as illustrated in Figure~\ref{F:RedHarm}(b). More precisely, the divisors on the tropical path from $D_{12}$ to $D_0$ can be written as $P_{D_{12}-D_0}(t)=(x)+(y)+(v_3)$ where $x$ and $y$ are points on the segments $w_{12}v_1$ and $w_{12}v_2$ respectively, both of distance $t$ from $w_{12}$, where $t\in[0,\rho(D_{12},D_0)]$. Then $\Red_T(x)=\Red_T(y)=P_{D_{12}-D_0}(t)$. Similarly, we can derive the reduced divisor map restricted to the segment $v_2w_{23}v_3$ whose image is $[D_{23},D_0]$ and restricted to the segment $v_1w_{13}v_3$ whose image is $[D_{13},D_0]$. \end{example} Now let us study one-dimensional linear systems and the corresponding reduced divisor maps. For a linear system $T$, let $\supp(T)=\bigcup_{D\in T}\supp(D)$. We have the following notions.
\begin{definition} \label{D:TropTree} Let $T$ be a linear system on a metric graph $\Gamma$ generated by $D_1,\cdots,D_m$. We say that $T$ is a \emph{tropical tree} if $T=\bigcup_{i,j=1,\cdots,m}[D_i,D_j]$. A tropical tree $T$ is called \emph{dominant} if $\supp(T)=\Gamma$. For two tropical trees $T$ and $T'$, we say $T'$ is a tropical \emph{subtree} of $T$ if $T'\subseteq T$. We say a tropical tree $T$ is \emph{maximal} if the only tropical tree containing $T$ is $T$ itself. \end{definition} Clearly, since the intersection of two tropical segments is again a tropical segment (Proposition~\ref{P:TropSeg}(6)), a tropical tree $T=\mathrm{tconv}(\{D_1,\cdots,D_m\})$, when not a singleton, actually has a tree structure, and the extremals of $T$ are the leaves of $T$. Moreover, if $T'$ is a tropical subtree of $T$, denote the set of connected components of $T\setminus T'$ by ${\mathcal U}(T\setminus T')$. Note that the closure $cl(U)$ of each $U\in {\mathcal U}(T\setminus T')$ is also a tropical subtree of $T$ and the intersection of $T'$ and $cl(U)$ is a single point, which we call the attaching point between $T'$ and $cl(U)$. The following lemma says that the tropical projection from $T$ to $T'$ respects the natural retraction from $T$ to $T'$. \begin{lemma} \label{L:ConnCompRed} Let $T'$ be a tropical subtree of a tropical tree $T$. For $U\in {\mathcal U}(T\setminus T')$, let $D_0$ be the attaching point between $T'$ and $cl(U)$. Then $\pi_{T'}(D)=D_0$ for all $D\in cl(U)$. \end{lemma} \begin{proof} This is a corollary of Proposition~\ref{P:TropProj} (3) and (4). Actually for $D\in cl(U)$, $D_0$ must be contained in the tropical segment $[D,\pi_{T'}(D)]$ since $T$ is a tropical tree. Now by Proposition~\ref{P:TropProj} (3) and (4), $\pi_{T'}(D)=\pi_{T'}(D_0)=D_0$. \end{proof} The following proposition says that tropical trees can be tested locally.
\begin{proposition} \label{P:TropTreeCrit} A linear system $T$ is a tropical tree if and only if for each $D\in T$ and the bases $B_1,\cdots,B_s$ of all possible effective chip-firing moves inside $T$ from $D$, we have $B_i\bigcup B_j=\Gamma$ for all distinct $B_i,B_j\in\{B_1,\cdots,B_s\}$. \end{proposition} \begin{proof} Suppose $T$ is a tropical tree generated by linearly equivalent divisors $D_1,\cdots,D_m$ of degree $d$. We may assume that $D_1,\cdots,D_m$ are all extremals (or leaves) of $T$. Consider an arbitrary divisor $D\in T$. If $D$ is an extremal of $T$, then we may assume $D=D_1$ without loss of generality. In this way, $\bigcap_{i=2}^m [D,D_i]$ is a tropical segment $[D,D']$ for some $D'\in T$ which is distinct from $D$ (Proposition~\ref{P:TropSeg}(6)). This actually means that from $D$, there is exactly one effective chip-firing base $B_1=\Gmin(f_{D'-D})$. Now suppose that $D$ is not an extremal of $T$. Let $D'_1,\cdots,D'_s$ be all the distinct divisors close enough to $D$ such that $f_{D'_i-D}$ is a primitive rational function for $i=1,\cdots,s$. Note that $s$ is at least two since $D$ is not a leaf of $T$. Let $B_i=\Gmin(f_{D'_i-D})$ for $i=1,\cdots,s$. Then $B_1,\cdots,B_s$ are the bases of all possible effective chip-firing moves inside $T$ from $D$. Note that $B_i\neq \Gamma$. For each distinct $i,j=1,\cdots,s$, we know that $D\in[D'_i,D'_j]$, which means $B_i\bigcup B_j=\Gmin(f_{D'_i-D})\bigcup \Gmin(f_{D'_j-D})=\Gamma$ by Theorem~\ref{T:FiniteCriterion}. Conversely, suppose $D$ is a divisor in a linear system $T$ generated by the extremals $D_1,\cdots,D_m$ such that for the bases $B_1,\cdots,B_s$ of all possible effective chip-firing moves inside $T$ from $D$, we have $B_i\bigcup B_j=\Gamma$ for all distinct $B_i,B_j\in\{B_1,\cdots,B_s\}$. We claim that $D\in [D_i,D_j]$ for some $D_i,D_j\in \{D_1,\cdots,D_m\}$; this will imply that $T$ is a tropical tree. Let $B'_i=\Gmin(f_{D_i-D})$ for $i=1,\cdots,m$.
If there is some $B'_i$ identical to $\Gamma$, then $D$ must be the extremal $D_i$ of $T$. Now suppose $D$ is not an extremal of $T$, which means $B'_i\neq \Gamma$ for all $i=1,\cdots,m$. Then it is clear that $\{B'_1,\cdots,B'_m\}\subseteq \{B_1,\cdots,B_s\}$. Since $D\in T=\mathrm{tconv}(\{D_1,\cdots,D_m\})$, we must have $\bigcup_{i=1}^m B'_i = \Gamma$ by Theorem~\ref{T:FiniteCriterion}. This actually means that $B'_1,\cdots,B'_m$ cannot all be identical since they are proper subsets of $\Gamma$. Without loss of generality, we may assume $B'_1=B_1$ is distinct from $B'_2=B_2$. Then by assumption, $B'_1\bigcup B'_2 = B_1\bigcup B_2 =\Gamma$. Therefore, by applying Theorem~\ref{T:FiniteCriterion} again, we conclude that $D\in[D_1,D_2]$. \end{proof} \begin{remark} \label{R:TropTree} For a linear system $T$ and a divisor $D\in T$, let $C_1,\cdots, C_n$ be the connected components of $\Gamma\setminus \supp(D)$ and $B_1,\cdots,B_s$ be the bases of all possible effective chip-firing moves inside $T$ from $D$. Then a chip firing of base $B_i$ will move some chips on the boundary of $B_i$ into the complement $B_i^c$ of $B_i$. Note that $B_i^c$ must contain some ${\mathcal C}_i=C_{i_1}\bigcup \cdots \bigcup C_{i_{n_i}}$ as a dense subset. The local criterion for $D\in T$ with $T$ being a tropical tree (Proposition~\ref{P:TropTreeCrit}) is equivalent to saying that when $D$ is not an extremal of $T$, $s$ must be at least $2$ and for each distinct ${\mathcal C}_i=C_{i_1}\bigcup \cdots \bigcup C_{i_{n_i}}$ and ${\mathcal C}_j=C_{j_1}\bigcup \cdots \bigcup C_{j_{n_j}}$, we must have $\{C_{i_1}, \cdots , C_{i_{n_i}}\}\bigcap\{C_{j_1},\cdots,C_{j_{n_j}}\}=\emptyset$.
This also implies that if $T$ is a tropical tree and $D\in T$, then there is a one-to-one correspondence among the following objects: \begin{enumerate} \item the bases of all possible effective chip-firing moves inside $T$ from $D$, \item the directions of all possible effective chip-firing moves inside $T$ from $D$, \item all the outgoing tangent directions from $D$ in $T$, and \item the connected components of $T\setminus\{D\}$. \end{enumerate} For future discussions, we denote the set of bases of all possible effective chip-firing moves inside $T$ from $D$ by ${\mathcal B}_T(D)$, the set of outgoing tangent directions from $D$ in $T$ by $\Tan_T(D)$, and the set of components of $T\setminus \{D\}$ by ${\mathcal U}_T(D)$. Then we have a one-to-one correspondence among the elements of ${\mathcal B}_T(D)$, $\Tan_T(D)$ and ${\mathcal U}_T(D)$. Moreover, when talking about chip-firing directions, given an outgoing tangent direction ${\mathbf t}$ from $D$ in $T$, we may also say that $D$ takes an effective chip-firing move along ${\mathbf t}$. \end{remark} \begin{example} \label{E:TropTree} Again, we consider the linear systems on the metric circle $\Gamma$ in Figure~\ref{F:LinearSys}. Let $T=\mathrm{tconv}(\{D_{12},D_{23},D_{13}\})$ and $T'=\mathrm{tconv}(\{D_{12},D_2,D_{13}\})$. Using Definition~\ref{D:TropTree}, one can easily verify that $T$ is a tropical tree while $T'$ is not. Note that $D_0=(v_1)+(v_2)+(v_3)$ is a divisor in both $T$ and $T'$. There are three connected components $C_1$, $C_2$ and $C_3$ of $\Gamma\setminus \supp(D_0)$ where $C_1$ is the open segment between $v_2$ and $v_3$ through $w_{23}$, $C_2$ is the open segment between $v_1$ and $v_3$ through $w_{13}$, and $C_3$ is the open segment between $v_1$ and $v_2$ through $w_{12}$. Let us test $D_0$ locally based on Proposition~\ref{P:TropTreeCrit} and Remark~\ref{R:TropTree}.
\begin{enumerate} \item There are three directions of effective chip-firing moves from $D_0$ allowed in $T$: (1) the chips at $v_2$ and $v_3$ move towards $w_{23}$ at the same speed, whose base is $B_1=C_2\bigcup C_3\bigcup \{v_1,v_2,v_3\}$; (2) the chips at $v_1$ and $v_3$ move towards $w_{13}$ at the same speed, whose base is $B_2=C_1\bigcup C_3\bigcup \{v_1,v_2,v_3\}$; and (3) the chips at $v_1$ and $v_2$ move towards $w_{12}$ at the same speed, whose base is $B_3=C_1\bigcup C_2\bigcup \{v_1,v_2,v_3\}$. Clearly $B_i^c=C_i$ for $i=1,2,3$. Then $B_1\bigcup B_2=B_2\bigcup B_3=B_1\bigcup B_3=\Gamma$, or equivalently $C_1\bigcap C_2=C_2\bigcap C_3=C_1\bigcap C_3=\emptyset$. Therefore $D_0$ satisfies the local tropical tree condition for $T$. \item There are three directions of effective chip-firing moves from $D_0$ allowed in $T'$: (1) the chips at $v_1$ and $v_3$ move towards $w_{13}$ at the same speed, whose base is $B'_1=C_1\bigcup C_3\bigcup \{v_1,v_2,v_3\}$; (2) the chips at $v_1$ and $v_3$ move towards $v_2$ at the same speed, whose base is $B'_2=C_2\bigcup \{v_1,v_3\}$; and (3) the chips at $v_1$ and $v_2$ move towards $w_{12}$ at the same speed, whose base is $B'_3=C_1\bigcup C_2\bigcup \{v_1,v_2,v_3\}$. Note that ${B'_1}^c=C_2$, ${B'_2}^c=C_1\bigcup C_3\bigcup \{v_2\}$, and ${B'_3}^c=C_3$. Now $B'_2\bigcup B'_3=B'_3\neq \Gamma$, or equivalently $\{C_1,C_3\}\bigcap \{C_3\}=\{C_3\}\neq\emptyset$. Therefore, $D_0$ does not satisfy the local tropical tree condition for $T'$, which means that $T'$ cannot be a tropical tree. \end{enumerate} \end{example} \begin{lemma} \label{L:TropTreeSurj} For any tropical tree $T$, the corresponding reduced divisor map $\Red_T$ is surjective and the preimage of $D\in T$ under $\Red_T$ is $\bigcap_{B\in {\mathcal B}_T(D)}B$.
\end{lemma} \begin{proof} First we note that for each $p$ and $q$ in $\Gamma$ and any path $P$ connecting $p$ and $q$, the tropical segment $[\Red_T(p),\Red_T(q)]$ must be a subset of the image of $P$ under $\Red_T$ since $\Red_T$ is continuous. So it remains to show that the extremals $D_1,\cdots,D_m$ of $T$ are contained in the image of $\Red_T$. Consider an extremal $D_i$ of $T$. Note that there is exactly one base $B$ for all possible effective chip-firing moves inside $T$ from $D_i$. By Proposition~\ref{P:GeneralRed}, this means that $D_i$ is reduced with respect to all the points in $B$. Also by Proposition~\ref{P:GeneralRed}, in general we have $\Red_T^{-1}(D)=\bigcap_{B\in {\mathcal B}_T(D)}B$ for all $D\in T$. \end{proof} \begin{corollary} \label{C:RedFunc} Let $T$ be a tropical tree. For each $D_1,D_2\in T$, let $f$ be an associated function of $D_2-D_1$, i.e., $\divf(f)=D_2-D_1$. Consider a divisor $D=P_{D_2-D_1}(t)$ for some $t\in[0,\rho(D_1,D_2)]$. \begin{enumerate} \item $\Red_{[D_1,D_2]}^{-1}(D)=\underline{f}^{-1}(t)$, i.e., $\Red_{[D_1,D_2]}$ is exactly $P_{D_2-D_1} \circ \underline{f}$. \item If $\underline{f}^{-1}(t)$ is a finite set, then $\Red_T^{-1}(D)=\underline{f}^{-1}(t)$. \end{enumerate} \end{corollary} \begin{proof} Note that for $D$ in the interior of the tropical segment $[D_1,D_2]$, there are two directions of effective chip-firing moves from $D$ inside $[D_1,D_2]$, one towards $D_1$ with base $\{p\in\Gamma\mid \underline{f}(p)\geq t\}$ and the other towards $D_2$ with base $\{p\in\Gamma\mid \underline{f}(p)\leq t\}$. Then $\Red_{[D_1,D_2]}^{-1}(D)=\bigcap_{B\in {\mathcal B}_{[D_1,D_2]}(D)}B=\{p\in\Gamma\mid \underline{f}(p)\geq t\} \bigcap \{p\in\Gamma\mid \underline{f}(p)\leq t\}=\underline{f}^{-1}(t)$. The same result for $D$ being $D_1$ or $D_2$ follows from an analogous argument, except that only one direction of effective chip-firing moves from $D$ is allowed.
Now if $\underline{f}^{-1}(t)$ is a finite set, we can extend the result even to $\Red_T^{-1}(D)$. The reason is that the finiteness of $\underline{f}^{-1}(t)$ can guarantee that there are no more directions of effective chip-firing moves from $D$ inside $T$ than the directions only inside $[D_1,D_2]$. \end{proof} \begin{remark} For Corollary~\ref{C:RedFunc}(2), we note that the level sets $\underline{f}^{-1}(t)$ of $\underline{f}$ are generically finite, i.e., the set of $t$ such that $\underline{f}^{-1}(t)$ is infinite is a finite set. \end{remark} \begin{lemma} \label{L:RedComp} For each $q\in \Gamma$ and $U\in {\mathcal U}_T(\Red_T(q))$, all divisors in $U$ take the same value at $q$, which is at most the value of $\Red_T(q)$ at $q$. \end{lemma} \begin{proof} Let $D_0=\Red_T(q)$. By Corollary~\ref{C:RedVal}, we know that the value at $q$ of every divisor in $T$ is at most the value of $D_0$ at $q$. Now consider any two divisors $D_1$ and $D_2$ in a component $U$ of $T\setminus \{D_0\}$. Let $B$ be the base of effective chip-firing moves inside $T$ from $D_0$ corresponding to $U$ (Remark~\ref{R:TropTree}). Then $B=\Gmin(f_{D_2-D_0})=\Gmin(f_{D_1-D_0})$ and $q$ belongs to $B$ by Proposition~\ref{P:GeneralRed}. Since $T$ is a tropical tree, we may assume that $D_1$ is in the interior of the tropical segment $[D_0,D_2]$. Let $B' = \Gmin(f_{D_2-D_1}) $. Then $B$ is a proper subset of $\Gmax(f_{D_1-D_0})^c$, $B'$ is the closure of $\Gmax(f_{D_1-D_0})^c$, and we conclude that $q$ belongs to $B'\setminus \partial B'$. Thus the value of $D_2-D_1$ is $0$ at $q$. \end{proof} \begin{remark} \label{R:tauq} By Lemma~\ref{L:RedComp}, we can define a function $\tau_q$ on ${\mathcal U}_T(\Red_T(q))$ such that for each $U\in{\mathcal U}_T(\Red_T(q))$, $\tau_q(U)$ is the common value at $q$ of the divisors in $U$. We say $\tau_q$ is trivial if $\tau_q(U)=0$ for all $U\in {\mathcal U}_T(\Red_T(q))$. Moreover, let $T_q:=\{D\in T\mid q\in\supp(D)\}$.
We have the following cases for $T_q$: \begin{enumerate} \item If $q\notin \supp(\Red_T(q))$, then $T_q=\emptyset$. \item If $q\in \supp(\Red_T(q))$, then $T_q=\{\Red_T(q)\}\bigcup(\bigcup_{U\in {\mathcal U}_T(\Red_T(q)),\tau_q(U)>0} U)$. In particular, when in addition $\tau_q$ is trivial, $T_q$ is exactly the singleton $\{\Red_T(q)\}$. Moreover, there are only finitely many points $q\in\Gamma$ with nontrivial $\tau_q$. This can be proved by the following arguments. Let $Q$ be the set of points $q$ such that $\tau_q$ is nontrivial. Consider the extremals $D_1,\cdots,D_m$ of $T$. Then for each $q\in Q$, $T_q\bigcap \{D_1,\cdots,D_m\}$ must be nonempty, since each component $U$ with $\tau_q(U)>0$ contains some extremal $D_i$. If $Q$ is an infinite set, then there must be some $D_i\in \{D_1,\cdots,D_m\}$ such that there are infinitely many points $q\in Q$ with $D_i\in T_q$. But this means that $\supp(D_i)$ is an infinite set, which is impossible. \end{enumerate} \end{remark} The following proposition provides several criteria for a tropical tree being dominant. \begin{proposition} \label{P:DomTreeCrit} Let $T$ be a tropical tree. Then the following are equivalent: \begin{enumerate} \item $T$ is dominant. \item For each $p\in\Gamma$, $p\in\supp(\Red_T(p))$. \item The corresponding reduced divisor map $\Red_T$ is finite, i.e., the preimage of any divisor in $T$ is a finite set. \item (Local criterion) For each $D\in T$, we have $\supp(D)\bigcup(\bigcup_{B\in{\mathcal B}_T(D)} B^c)=\Gamma$ where $B^c$ is the complement of $B$ in $\Gamma$. \end{enumerate} \end{proposition} \begin{proof} (1)$\Leftrightarrow$(2): Suppose $T$ is dominant, i.e., for each $p\in \Gamma$, there exists a divisor $D\in T$ such that $p\in\supp(D)$. By Corollary~\ref{C:RedVal}, we know that the value of $\Red_T(p)$ at $p$ is at least the value of $D$ at $p$, which means $p\in\supp(\Red_T(p))$. Conversely, if $p\in\supp(\Red_T(p))$ for each $p\in\Gamma$, then $\supp(T)=\Gamma$ since $\Red_T(p)\in T$. (2)$\Leftrightarrow$(3): If $p\in\supp(\Red_T(p))$ for each $p\in\Gamma$, then clearly the preimage of each $D\in T$ under $\Red_T$ must be finite since it is a subset of $\supp(D)$.
Conversely, if there exists a point $q\in\Gamma$ such that $q\notin \supp(\Red_T(q))$, then we claim that the preimage of $\Red_T(q)$ under the reduced divisor map $\Red_T$ is an infinite set. Let $D_0=\Red_T(q)$. By Lemma~\ref{L:TropTreeSurj}, we know that $\Red_T^{-1}(D_0)=\bigcap_{B\in {\mathcal B}_T(D_0)}B\neq \emptyset$. Note that the boundary points of each $B\in {\mathcal B}_T(D_0)$ must be contained in $\supp(D_0)$. Now $q$ is contained in $\Red_T^{-1}(D_0)$ but $q\notin \supp(D_0)$. Let $C$ be the connected component of $\Gamma\setminus\supp(D_0)$ which contains $q$. Then we must have $C\subseteq \Red_T^{-1}(D_0)$, so $\Red_T^{-1}(D_0)$ is indeed an infinite set. (2)$\Leftrightarrow$(4): Recall that $\Red_T^{-1}(D)=\bigcap_{B\in{\mathcal B}_T(D)}B$ by Lemma~\ref{L:TropTreeSurj}. Therefore $\Red_T^{-1}(D)\subseteq \supp(D)$ is equivalent to $\supp(D)\bigcup(\bigcup_{B\in{\mathcal B}_T(D)} B^c)=\supp(D)\bigcup(\bigcap_{B\in{\mathcal B}_T(D)}B)^c=\Gamma$. \end{proof} Lemma~\ref{L:TropTreeSurj} says that the reduced divisor map to a tropical tree $T$ is surjective and provides a characterization of the inverse image of the reduced divisor map. By Proposition~\ref{P:DomTreeCrit}, we know that if in addition $T$ is dominant, then the preimage of each $D\in T$ under the reduced divisor map must be a subset of $\supp(D)$. The following proposition provides a criterion to characterize the preimage of $D\in T$ under $\Red_T$ more easily. \begin{proposition} \label{P:PreImagRed} For a dominant tropical tree $T$, a divisor $D\in T$ and a point $q\in\Gamma$, the following are equivalent: \begin{enumerate} \item $D=\Red_T(q)$. \item $q\in\supp(D)$ and $q$ is a boundary point of some $B\in{\mathcal B}_T(D)$. \item $q\in\supp(D)$ and there exists at least one outgoing tangent direction ${\mathbf t}\in\Tan_T(D)$ such that as $D$ takes an effective chip-firing move along ${\mathbf t}$, at least one chip at $q$ moves.
\end{enumerate} \end{proposition} \begin{proof} The equivalence of (2) and (3) follows from the one-to-one correspondence of ${\mathcal B}_T(D)$ and $\Tan_T(D)$ (Remark~\ref{R:TropTree}) and the fact that an effective chip-firing move along ${\mathbf t}$ moves chips on the boundary of the base of that chip-firing move. It remains to prove that $\Red^{-1}(D)=\bigcup_{B\in{\mathcal B}_T(D)}\partial B\subseteq \supp(D)$ where $\partial B$ is the set of boundary points of $B$ in $\Gamma$. By Lemma~\ref{L:TropTreeSurj}, we have $\Red_T^{-1}(D)=\bigcap_{B\in{\mathcal B}_T(D)}B$. Then $\Red_T^{-1}(D) \subseteq \supp(D)$ follows from Proposition~\ref{P:DomTreeCrit} directly. Note that this also means that $\Red_T^{-1}(D)$ is finite. Now by Proposition~\ref{P:TropTreeCrit}, for each $B\in {\mathcal B}_T(D)$, we must have $B^c\subseteq B'$ for all $B'\in {\mathcal B}_T(D)$ such that $B'\neq B$. This actually means that $\partial B\subseteq B'$ since $B'$ is closed in $\Gamma$. Therefore, $\bigcup_{B\in{\mathcal B}_T(D)}\partial B\subseteq \bigcap_{B\in{\mathcal B}_T(D)}B=\Red_T^{-1}(D)$. Now let $A=\bigcup_{B\in{\mathcal B}_T(D)}\partial B$ which is a finite set and suppose that there exists $q\in \Red_T^{-1}(D)\setminus A$. Let $C$ be the connected component of $\Gamma\setminus A$ which contains $q$. Then we must have $C\subseteq \Red_T^{-1}(D)$ since $q\in C\subseteq B$ for all $B\in{\mathcal B}_T(D)$. This contradicts the fact that $\Red_T^{-1}(D)$ is finite. \end{proof} \begin{corollary} Dominant tropical trees are maximal. \end{corollary} \begin{proof} Consider a dominant tropical tree $T$. Then by Lemma~\ref{L:TropTreeSurj}, we know that the reduced divisor map $\Red_T:\Gamma\to T$ is surjective. If $T$ is not maximal, then there exists a larger tropical tree $T'\supseteq T$ such that $T'\setminus T$ is nonempty. Consider a connected component $U$ of $T'\setminus T$. 
First we note that $\Red_{T'}^{-1}(U)$ must be an infinite subset of $\Gamma$ (this can be shown in various ways, for example as a simple consequence of Corollary~\ref{C:RedFunc} by considering a tropical segment contained in $U$). By Lemma~\ref{L:ConnCompRed} and Proposition~\ref{P:SeqRedjDiv}, this means that for all $p\in \Red_{T'}^{-1}(U)$, $\Red_T(p)=\pi_T(\Red_{T'}(p))=D_0$ with $D_0$ being the attaching point between $T'$ and $cl(U)$. However, by Proposition~\ref{P:DomTreeCrit}, it follows that $\Red_{T'}^{-1}(U)\subseteq \supp(D_0)$ which is impossible since $\Red_{T'}^{-1}(U)$ is an infinite set. Therefore, $T$ must be maximal. \end{proof} We have the following characterization of the reduced divisor map to a dominant tropical tree as a whole. \begin{proposition} \label{P:CritRedDivMap} Let $\omega$ be a continuous map from $\Gamma$ to a complete linear system. Let $T$ be the image of $\omega$. Let $T_p:=\{D\in T\mid p\in \supp(D)\}$. The following are equivalent. \begin{enumerate} \item $T$ is a dominant tropical tree and $\omega$ is the reduced divisor map to $T$. \item $T$ is a linear system and for each $p\in\Gamma$, we have $\omega(p)\in T_p$. \item $T$ is a linear system and there exists a dense subset $\Gamma_0$ of $\Gamma$ such that for each $p\in\Gamma_0$, $T_p$ is exactly the singleton $\{\omega(p)\}$. \end{enumerate} \end{proposition} \begin{proof} (1)$\Rightarrow$(2): This follows from Proposition~\ref{P:DomTreeCrit} directly. (2)$\Rightarrow$(3): Since the image $T$ of the continuous map $\omega$ is a linear system, it can be verified easily that $T$ must be a tropical tree (using Definition~\ref{D:TropTree} directly). Let $\Gamma_0:=\{p\in\Gamma\mid T_p=\{\omega(p)\}\}$. Then we want to show that $\Gamma_0$ is dense in $\Gamma$. 
Suppose for contradiction that $\Gamma_0$ is not dense in $\Gamma$, which means that there exists $q\in \Gamma\setminus \Gamma_0$ and a small enough closed neighborhood $N_q=\{p\in\Gamma\mid \dist(p,q)\leq\epsilon\}\subseteq \Gamma\setminus \Gamma_0$ with $\epsilon>0$. Here we suppose $\epsilon$ is small enough that $N_q$ is star-shaped with center $q$. Then $\omega(N_q)=\bigcup_{i=1}^k[\omega(q),E_i]$ where $E_i$'s are some divisors close enough to $\omega(q)$ in $T$. Note that $\omega(N_q)$ is tropically convex by this construction and cannot be the singleton $\{\omega(q)\}$. Otherwise, since $p\in\supp(\omega(p))$ for all $p\in N_q$, the support of $\omega(q)$ would be an infinite set, which is impossible. First we claim that for each $p\in \Gamma$, $T_p$ is tropically convex. For each $D_1,D_2\in T_p$, we have $D_1(p),D_2(p)\geq 1$. Note that the divisors on the tropical path from $D_1$ to $D_2$ are $P_{D_2-D_1}(t)=\divf([\min(t,\underline{f_{D_2-D_1}})])+D_1$ for $t\in[0,\rho(D_1,D_2)]$. Then for $t\in[0,\rho(D_1,D_2)]$, the value of $P_{D_2-D_1}(t)$ at $p$ is at least $\min(D_1(p),D_2(p))\geq 1$, which means that $P_{D_2-D_1}(t) \in T_p$. Now let us construct a sequence of divisors in $T$. We choose a divisor $D_1$ from the interior of the tropical segment $[\omega(q),E_1]$ (with $\omega(q)\neq E_1$). Then there must be a point $q_1\in N_q$ such that $D_1=\omega(q_1)$. This means that $T_{q_1}$ is not a singleton. Since $T_{q_1}$ is tropically convex, the set $T_{q_1}\bigcap [\omega(q),E_1]$ is a tropical segment which contains $D_1$ but is not the singleton $\{D_1\}$. Let $[D_1,F_1]$ with $D_1\neq F_1$ be a tropical segment contained in $T_{q_1}\bigcap [\omega(q),E_1]$. Choose a divisor $D_2$ from the interior of $[D_1,F_1]$. Again, there is a point $q_2\in N_q$ such that $D_2=\omega(q_2)$. Clearly $q_1\neq q_2$ since $D_1\neq D_2$. But we note that $\{q_1,q_2\}\subseteq \supp(D_2)$ since $D_2\in T_{q_1}$ and $D_2=\omega(q_2)$.
Moreover, we can keep doing this process and construct a sequence of divisors $D_1,D_2,\cdots$: \begin{enumerate} \item Suppose we have already derived the tropical segment $[D_i,F_i]\subseteq T_{q_i}\bigcap [D_{i-1},F_{i-1}]$. \item Choose a divisor $D_{i+1}$ from the interior of $[D_i,F_i]$ and find a point $q_{i+1}\in N_q$ such that $D_{i+1}=\omega(q_{i+1})$. \item The set $T_{q_{i+1}}\bigcap [D_i,F_i]$ is a tropical segment which contains $D_{i+1}$ but is not the singleton $\{D_{i+1}\}$. Let $[D_{i+1},F_{i+1}]$ with $D_{i+1}\neq F_{i+1}$ be a tropical segment contained in $T_{q_{i+1}}\bigcap [D_i,F_i]$. \item Let $i\leftarrow i+1$ and go to (1). \end{enumerate} In this way, we derive a nested sequence of tropical segments $[D_1,F_1]\supseteq[D_2,F_2]\supseteq \cdots$ where $[D_i,F_i]\subseteq T_{q_i}$. Note that all $q_i$'s are distinct since $D_i$'s are distinct. Then we conclude that $\{q_1,\cdots,q_i\}\subseteq \supp(D_i)$. But this is impossible since $\supp(D_i)$ must be a finite set. Therefore, $\Gamma_0$ must be a dense subset of $\Gamma$. (3)$\Rightarrow$(1): For $p\in\Gamma_0$, we must have $\omega(p)=\Red_T(p)$ since $\omega(p)$ is the only divisor in $T$ with $p$ belonging to $\supp(\omega(p))$ and the value of $\Red_T(p)$ at the point $p$ is the largest among the values of all divisors in $T$ at the point $p$ (Corollary~\ref{C:RedVal}). Now since $\omega$ and $\Red_T$ are both continuous maps which coincide when restricted to the dense subset $\Gamma_0$ of $\Gamma$, we conclude that $\omega=\Red_T$. By the continuity of $\Red_T$ and the equivalence of metric topology and symmetric product topology of $T$ (Appendix~\ref{S:EquiTop}), $p\in\supp(\Red_T(p))$ holds for all $p\in \Gamma$, which means that $T$ is a dominant tropical tree by Proposition~\ref{P:DomTreeCrit}. \end{proof} \begin{example} Let us reconsider Example~\ref{E:TropTree} about the tropical tree $T=\mathrm{tconv}(\{D_{12},D_{23},D_{13}\})$ shown in Figure~\ref{F:LinearSys}.
We note that $T$ is a dominant tropical tree (which can be verified in several ways, e.g., using Definition~\ref{D:TropTree} directly, using Proposition~\ref{P:DomTreeCrit} or Proposition~\ref{P:CritRedDivMap}). Now specifically let us consider the divisor $D_0\in T$. In Example~\ref{E:TropTree}, we have verified that $D_0$ satisfies the condition in the local criterion for tropical trees (Proposition~\ref{P:TropTreeCrit}). Here we will verify that $D_0$ satisfies the condition in the local criterion for dominant tropical trees (Proposition~\ref{P:DomTreeCrit}(4)). As shown in Example~\ref{E:TropTree}, there are three possible effective chip-firing directions from $D_0$ allowed in $T$: along $D_0D_{23}$ with base $B_1$ being the segment $v_2v_1v_3$, along $D_0D_{13}$ with the base $B_2$ being the segment $v_1v_2v_3$, and along $D_0D_{12}$ with the base $B_3$ being the segment $v_1v_3v_2$. Therefore, the local condition $\supp(D)\bigcup(\bigcup_{B\in{\mathcal B}_T(D)} B^c)=\Gamma$ in Proposition~\ref{P:DomTreeCrit}(4) is satisfied. Furthermore, by Proposition~\ref{P:PreImagRed}, we get $\Red_T^{-1}(D_0)=\{v_1,v_2,v_3\}$. \end{example} \subsection{Harmonic Morphisms to Trees} For a metric graph $\Gamma$ and a point $p\in\Gamma$, we denote the set of all outgoing tangent directions from $p$ in $\Gamma$ by $\Tan_\Gamma(p)$. \begin{definition} \label{D:Harmonic} Let $\Gamma$ and $\Gamma'$ be two metric graphs. \begin{enumerate} \item A \emph{pseudo-harmonic morphism} $\phi$ from $\Gamma$ to $\Gamma'$ is a continuous finite surjective piecewise-linear map with nonzero integral slopes. (Here we say $\phi$ is finite if $\phi^{-1}(y)$ is finite for all $y\in\Gamma'$.) 
In particular, for all points $p\in\Gamma$ and tangent directions ${\mathbf t}\in \Tan_\Gamma(p)$, the \emph{expansion factor} $d_{\mathbf t}(\phi)$ along ${\mathbf t}$ is the absolute value of the integral slope of $\phi$ along ${\mathbf t}$, i.e., the ratio of the distance between $\phi(x)$ and $\phi(y)$ in $\Gamma'$ to the distance between $x$ and $y$ in $\Gamma$ where $x$ and $y$ are points close enough to $p$ in the direction ${\mathbf t}$. \item We say a pseudo-harmonic morphism $\phi:\Gamma\to\Gamma'$ is \emph{harmonic} at a point $p\in\Gamma$ if $\phi$ satisfies the following \emph{balancing} condition: for any tangent direction ${\mathbf t}' \in \Tan_{\Gamma'}(\phi(p))$, the sum of the expansion factors $d_{\mathbf t}(\phi)$ over all tangent directions ${\mathbf t}$ in $\Tan_\Gamma(p)$ that map to ${\mathbf t}'$, i.e., the integer $$\sum_{{\mathbf t} \in \Tan_\Gamma(p),~{\mathbf t} \mapsto {\mathbf t}'}d_{\mathbf t}(\phi),$$ is independent of ${\mathbf t}'$ and is called the \emph{degree} of $\phi$ at $p$, denoted by $\deg_p(\phi)$. \item A pseudo-harmonic morphism $\phi:\Gamma\to\Gamma'$ is a \emph{harmonic morphism} if $\phi$ is harmonic at all $p\in\Gamma$. \item For a harmonic morphism $\phi:\Gamma\to\Gamma'$, we define the \emph{degree} of $\phi$ to be $$\deg(\phi):=\sum_{p \in \phi^{-1} (q)}\deg_p(\phi),$$ which can be shown to be independent of $q\in \Gamma'$. \end{enumerate} \end{definition} \begin{remark} \begin{enumerate} \item Harmonic morphisms and pseudo-harmonic morphisms can also be defined for metrized complexes (metric graphs with certain points associated with algebraic curves) \cite{ABBR15,ABBR15_2,LM18}. \item Recall that the genus of a metric graph $\Gamma$ is the first Betti number of $\Gamma$. For example, a metric tree has genus $0$ and a metric circle has genus $1$.
We say that a metric graph $\Gamma^{\Mod}$ is a modification of $\Gamma$ if $\Gamma$ is isometric to a subgraph of $\Gamma^{\Mod}$ and the genus of $\Gamma^{\Mod}$ is the same as the genus of $\Gamma$. This actually means that $\Gamma^{\Mod}$ can be realized by attaching metric trees to $\Gamma$ as extra ``branches''. For simplicity, we will treat $\Gamma$ just as a subgraph of $\Gamma^{\Mod}$. \item For a modification $\Gamma^{\Mod}$ of $\Gamma$, there is a natural retraction map $\gamma:\Gamma^{\Mod}\to\Gamma$. By abuse of notation, for each divisor $D=\sum_{p\in\Gamma^{\Mod}}m_p\cdot(p)$ on $\Gamma^{\Mod}$, we also write $\gamma(D)=\sum_{p\in\Gamma^{\Mod}}m_p\cdot(\gamma(p))$, which is a divisor on $\Gamma$ called the retraction of $D$. Note that for each $D_1$ and $D_2$ in $\Div(\Gamma^{\Mod})$, $D_1$ and $D_2$ are linearly equivalent as divisors on $\Gamma^{\Mod}$ if and only if $\gamma(D_1)$ and $\gamma(D_2)$ are linearly equivalent as divisors on $\Gamma$. \item Consider a harmonic morphism $\phi:\Gamma\to\Gamma'$. Then any divisor $D$ on $\Gamma'$ can be naturally pulled back to a divisor $\phi^*(D)=\sum_{p\in\Gamma}\deg_p(\phi) D(\phi(p))\cdot(p)$ on $\Gamma$. It can be easily verified that $\deg(\phi^*(D))=\deg(\phi)\deg(D)$. \end{enumerate} \end{remark} Now let us focus on pseudo-harmonic morphisms and harmonic morphisms to metric trees. The following lemma says that there is not much difference between pseudo-harmonic morphisms and harmonic morphisms to metric trees. \begin{lemma} \label{L:Pseudo} For a pseudo-harmonic morphism $\phi:\Gamma\to T$ where $T$ is a metric tree, there exists a harmonic morphism $\phi^{\Mod}:\Gamma^{\Mod}\to T$ where $\Gamma^{\Mod}$ is a modification of $\Gamma$ such that the restriction of $\phi^{\Mod}$ to $\Gamma$ is exactly $\phi$.
\end{lemma} \begin{proof} For a point $p\in \Gamma$, if the balancing condition in Definition~\ref{D:Harmonic}(2) is not satisfied, i.e., the integer $d_{p,{\mathbf t}'}=\sum_{{\mathbf t} \in \Tan_\Gamma(p),~{\mathbf t} \mapsto {\mathbf t}'}d_{\mathbf t}(\phi)$ is not independent of ${\mathbf t}' \in \Tan_{\Gamma'}(\phi(p))$, then we ``modify'' $\Gamma$ as follows: \begin{enumerate} \item Let $d_p:=\max_{{\mathbf t}' \in \Tan_{\Gamma'}(\phi(p))}(d_{p,{\mathbf t}'})$. \item Let ${\mathcal U}$ be the set of connected components of $T\setminus\{\phi(p)\}$. Note that elements of ${\mathcal U}$ are in one-to-one correspondence with elements of $\Tan_{T}(\phi(p))$. \item For each ${\mathbf t}' \in \Tan_{\Gamma'}(\phi(p))$, if $d_{p,{\mathbf t}'}<d_p$, then attach $d_p-d_{p,{\mathbf t}'}$ copies of the connected component $U\in {\mathcal U}$ corresponding to ${\mathbf t}'$ to $\Gamma$ at point $p$ as extra branches. \item The natural identification of each extra branch with the connected component below is incorporated into the morphism from the modification of $\Gamma$ to $T$. \end{enumerate} After these operations at $p$, we derive a new morphism which is balanced at $p$ of degree $d_p$. Note that there can only be finitely many unbalanced points. Applying the above procedure to all of them, we can derive a new metric graph $\Gamma^{\Mod}$ which is a modification of $\Gamma$ and a harmonic morphism $\phi^{\Mod}:\Gamma^{\Mod}\to T$ such that the restriction of $\phi^{\Mod}$ to $\Gamma$ is exactly $\phi$. \end{proof} \begin{remark} The modification in the above proof is not necessarily unique and can be quite flexible. For example, in Step (3), we may instead just attach a single branch to $\Gamma$ at $p$ which is a scaling of $U$ by a factor of $1/(d_p-d_{p,{\mathbf t}'})$.
Actually in Proposition~6.2 of \cite{LM18}, a pseudo-harmonic morphism from a metrized complex ${\mathfrak C}(\Gamma)$ to a genus-$0$ metrized complex ${\mathfrak C}(T)$ can be extended to a harmonic morphism from a modification of ${\mathfrak C}(\Gamma)$ to ${\mathfrak C}(T)$, where the corresponding modification of the underlying metric graph $\Gamma$ can be subtly adjusted to respect the finite morphisms between the associated algebraic curves. \end{remark} \begin{remark} We will call the harmonic morphism $\phi^{\Mod}:\Gamma^{\Mod}\to T$ an \emph{extension} of the pseudo-harmonic morphism in Lemma~\ref{L:Pseudo}. Note that depending on different allowable modifications of $\Gamma$, a pseudo-harmonic morphism can have various extensions to harmonic morphisms. \end{remark} Lemma~\ref{L:Pseudo} tells us that pseudo-harmonic morphisms and harmonic morphisms to metric trees are strongly correlated. The following theorem says that pseudo-harmonic morphisms to metric trees are exactly the reduced divisor maps to dominant tropical trees discussed in the previous subsection. \begin{theorem} \label{T:RedHarm} \begin{enumerate} \item For each dominant tropical tree $T\subseteq\DivPlusD(\Gamma)$, if $T$ is treated as a metric tree, the reduced divisor map $\Red_T:\Gamma\to T$ is a pseudo-harmonic morphism which can be extended to a degree-$d$ harmonic morphism, where $d$ is the degree of $T$. \item For each pseudo-harmonic morphism $\phi:\Gamma\to T$ where $T$ is a metric tree which can be extended to a degree-$d$ harmonic morphism, $T$ can be isometrically embedded into a degree-$d$ complete linear system as a dominant tropical tree and $\phi$ coincides with the reduced divisor map $\Red_T$. \end{enumerate} \end{theorem} \begin{proof} For (1), let $D_1,\cdots, D_m$ be the extremals of $T$. Note that the rational function $f_{D_i-D_j}$ is a piecewise linear function with integral slopes (possibly zero at certain points) for each $i,j=1,\cdots,m$. Let $\Gamma_{ij}=\Red_T^{-1}([D_i,D_j])$.
Note that $\Red_{[D_i,D_j]}=\pi_{[D_i,D_j]}\circ \Red_T$ by Proposition~\ref{P:SeqRedjDiv} and $\pi_{[D_i,D_j]}$ restricted to $T$ is exactly the natural retraction from $T$ to $[D_i,D_j]$ by Lemma~\ref{L:ConnCompRed}. Hence $\Red_T\mid_{\Gamma_{ij}}=\Red_{[D_i,D_j]}\mid_{\Gamma_{ij}}$. Recall that by Corollary~\ref{C:RedFunc}, $\Red_{[D_i,D_j]}=P_{D_j-D_i} \circ \underline{f_{D_j-D_i}}$ where the tropical path $P_{D_j-D_i}$ is an isometry from the segment $[0,\rho(D_i,D_j)]$ to the tropical segment $[D_i,D_j]$ and $f_{D_j-D_i}$ is a rational function associated to $D_j-D_i$. Consider a point $p\in\Gamma$ and a tangent direction ${\mathbf t}\in \Tan_\Gamma(p)$. Since $T$ is a dominant tropical tree which means that $\Red_T$ is a continuous finite surjection with $p\in\supp(\Red_T(p))$ for all $p\in\Gamma$ (Proposition~\ref{P:DomTreeCrit}), there is a tangent direction ${\mathbf t}' \in \Tan_T(\Red_T(p))$ which is the pushforward of ${\mathbf t}$ by $\Red_T$. Therefore we can choose a tropical path from $D_i$ to $D_j$ which goes through $\Red_T(p)$ along ${\mathbf t}'$. Note that $p\in\Gamma_{ij}$ and $\Red_T$ coincides with $\Red_{[D_i,D_j]}=P_{D_j-D_i} \circ \underline{f_{D_j-D_i}}$ over $\Gamma_{ij}$. Letting the expansion factor $d_{\mathbf t}(\Red_T)$ of $\Red_T$ at $p$ along ${\mathbf t}$ be the slope of $f_{D_j-D_i}$ at $p$ along ${\mathbf t}$ (which is always a positive integer), we conclude that $\Red_T$ is a pseudo-harmonic morphism from $\Gamma$ to $T$. Now let us extend $\Red_T$ to a degree-$d$ harmonic morphism. 
Recall that by Lemma~\ref{L:RedComp} and Remark~\ref{R:tauq}, for each $p\in \Gamma$ and $U\in {\mathcal U}_T(\Red_T(p))$ where ${\mathcal U}_T(\Red_T(p))$ is the set of connected components of $T\setminus \{\Red_T(p)\}$, all the divisors in $U$ take the same value at $p$ and therefore we can define a function $\tau_p$ on ${\mathcal U}_T(\Red_T(p))$ such that for each $U\in{\mathcal U}_T(\Red_T(p))$, $\tau_p(U)$ is the common value at $p$ of the divisors in $U$. Moreover, we say $\tau_p$ is trivial if $\tau_p(U)=0$ for all $U\in {\mathcal U}_T(\Red_T(p))$, and there are only finitely many $p\in\Gamma$ with nontrivial $\tau_p$. Since $T$ is a dominant tropical tree, the divisor $\Red_T(p)$ takes value at least $1$ at $p$. We will get a modification $\Gamma^{\Mod}$ of $\Gamma$ as follows. \begin{enumerate} \item First we note that for each point $p\in\Gamma$ and ${\mathbf t}' \in \Tan_{T}(\Red_T(p))$, the integer $d_{p,{\mathbf t}'}=\sum_{{\mathbf t} \in \Tan_\Gamma(p),~{\mathbf t} \mapsto {\mathbf t}'}d_{\mathbf t}(\Red_T)$ is exactly $\Red_T(p)(p)-\tau_p(U_{{\mathbf t}'})$ where $\Red_T(p)(p)$ is the value of the $p$-reduced divisor $\Red_T(p)$ at $p$ and $U_{{\mathbf t}'}$ is the connected component of $T\setminus \{\Red_T(p)\}$ corresponding to ${\mathbf t}'$. An interpretation of this is that the chip-firing move along ${\mathbf t}'$ consumes $d_{p,{\mathbf t}'}=\Red_T(p)(p)-\tau_p(U_{{\mathbf t}'})$ chips at $p$. \item The above argument implies that $\Red_T$ is harmonic at $p$ if and only if $\tau_p$ is trivial or a constant function. That is, for each possible effective chip-firing from $\Red_T(p)$ inside $T$, all chips at $\Red_T(p)$ are consumed. Therefore, the degree $\deg_p(\Red_T)$ of $\Red_T$ at $p$ is exactly $\Red_T(p)(p)$. \item We will modify $\Gamma$ at all those points $p$ with nontrivial $\tau_p$.
Note that this is a little different from what we have done in the proof of Lemma~\ref{L:Pseudo} where modifications are taken only at non-harmonic points (recall that $\tau_p$ being a constant function also implies that $\Red_T$ is harmonic at $p$). \item Let $p$ be a point with nontrivial $\tau_p$. For each ${\mathbf t}' \in \Tan_{T}(\Red_T(p))$, $U_{{\mathbf t}'}$ is the connected component of $T\setminus \{\Red_T(p)\}$ corresponding to ${\mathbf t}'$. We attach $\tau_p(U_{{\mathbf t}'})$ copies of $U_{{\mathbf t}'}$ to $\Gamma$ at $p$ as extra branches. \item After such attachments for all $p$ with nontrivial $\tau_p$, we obtain a modification $\Gamma^{\Mod}$ of $\Gamma$. Incorporating the natural identification of each extra branch with the connected component below, we extend the reduced divisor map $\Red_T$ to a new pseudo-harmonic morphism $\Red_T^{\Mod}:\Gamma^{\Mod}\to T$. \item $\Red_T^{\Mod}$ is then harmonic at all points $p\in\Gamma$, with degree $\Red_T(p)(p)$ at $p$. Since $\Red_T^{\Mod}$ is automatically harmonic of degree $1$ at the points in the newly attached branches, $\Red_T^{\Mod}$ is overall a harmonic morphism and the degree of $\Red_T^{\Mod}$ is identical to $d$, the degree of $T$. \end{enumerate} For (2), extend the pseudo-harmonic morphism $\phi:\Gamma\to T$ to a harmonic morphism $\phi^{\Mod}:\Gamma^{\Mod}\to T$ of degree $d$. Let $\gamma$ be the natural retraction of divisors on $\Gamma^{\Mod}$ to divisors on $\Gamma$. Choose an arbitrary point $x\in T$, let $D_x^{\Mod}$ be the pullback divisor of the divisor $(x)$ on $T$ by $\phi^{\Mod}$ and let $D_x=\gamma(D_x^{\Mod})$. Then $D_x$ is an effective divisor on $\Gamma$ of degree $d$ and clearly $p\in\supp(D_{\phi(p)})$ for all $p\in\Gamma$. For every two points $x_1,x_2\in T$, denote the segment connecting $x_1$ and $x_2$ in $T$ by $[x_1,x_2]_T$ and let $\rho$ be the distance between $x_1$ and $x_2$ in $T$.
We claim that the tropical segment $[D_{x_1},D_{x_2}]$ is exactly $\{D_x\mid x\in[x_1,x_2]_T\}$, which will imply that $T$ is a tropical tree with each $x\in T$ identified with $D_x$. By sending each $p\in\Gamma$ to the distance between $x$ and $x_1$, where $x$ is the retraction of $\phi(p)$ to the segment $[x_1,x_2]_T$, we can derive a rational function $f$ on $\Gamma$. It can be easily verified that $\divf(f)=D_{x_2}-D_{x_1}$ and $P_{D_{x_2}-D_{x_1}}(t)=D_x$ where $t\in[0,\rho]$ and $x$ is the unique point of distance $t$ to $x_1$ in the segment $[x_1,x_2]_T$. Now $T$ is a tropical tree and $p\in\supp(D_{\phi(p)})$ for all $p\in\Gamma$. Therefore, by Proposition~\ref{P:CritRedDivMap}, we conclude that $T$ is a dominant tropical tree and $\phi$ is exactly the reduced divisor map to $T$. \end{proof} \begin{example} In Figure~\ref{F:RedHarm}, we give two examples of reduced divisor maps with extensions to harmonic morphisms. In Example~\ref{E:RedMap}, we have discussed the reduced divisor map $\Red_{[D_1,D_3]}$ to the tropical segment $[D_1,D_3]$ and the reduced divisor map $\Red_T$ to the tropical tree $T=\mathrm{tconv}(\{D_{12},D_{23},D_{13}\})$. Note that $[D_1,D_3]$ and $T$ are both dominant tropical trees. Thus $\Red_{[D_1,D_3]}$ and $\Red_T$ are both pseudo-harmonic. \begin{enumerate} \item As shown in Figure~\ref{F:RedHarm}(a), the expansion factor of $\Red_{[D_1,D_3]}$ along the segment $v_1w_{13}v_3$ is $2$ and the expansion factor of $\Red_{[D_1,D_3]}$ along the segment $v_1v_2v_3$ is $1$, since the chip-firing from $D_1$ to $D_3$ moves two chips along $v_1w_{13}v_3$ and one chip along $v_1v_2v_3$. $\Red_{[D_1,D_3]}$ is harmonic at all points and thus is a harmonic morphism. \item The reduced divisor map $\Red_T$ is illustrated in Figure~\ref{F:RedHarm}(b) (middle panel).
By analyzing the chip-firing moves from $D_0$ to $D_{12}$, from $D_0$ to $D_{23}$ and from $D_0$ to $D_{13}$, we see that the expansion factors along all segments $v_1w_{12}$, $v_1w_{13}$, $v_2w_{12}$, $v_2w_{23}$, $v_3w_{13}$ and $v_3w_{23}$ are identical to $1$. Note that $\Red_T^{-1}(D_0)=\{v_1,v_2,v_3\}$. Therefore, the only non-harmonic points are $v_1$, $v_2$ and $v_3$. The right panel of Figure~\ref{F:RedHarm}(b) shows an extension of $\Red_T$ to a degree-$3$ harmonic morphism. The extra attachments in the modification of $\Gamma$ include an attachment of a copy of $D_0D_{23}$ at $v_1$, an attachment of a copy of $D_0D_{13}$ at $v_2$ and an attachment of a copy of $D_0D_{12}$ at $v_3$. \end{enumerate} \end{example} \subsection{Stable Gonality and Geometric Rank Functions} The gonality of an algebraic curve $C$ has several interpretations, such as the minimum degree of a rank-$1$ linear system on $C$ or the minimum degree of a morphism from $C$ to the projective line ${\mathbb P}^1$. Accordingly, there are several types of gonality defined on finite graphs or metric graphs, e.g., the divisorial gonality \cite{Baker08} and the stable gonality \cite{CKK15}. However, unlike curve gonality, these definitions of gonality in the case of finite graphs or metric graphs are not equivalent \cite{Caporaso14}. In their foundational paper on graph-theoretical Riemann-Roch theory~\cite{BN07}, Baker and Norine introduced a rank function on the set of divisors of a finite graph. In the context of metric graphs, the Baker-Norine rank function or combinatorial rank function $r_c:\Div(\Gamma) \to {\mathbb Z}$ on a metric graph $\Gamma$ is defined as follows: for a divisor $D\in \Div(\Gamma)$, if the complete linear system $|D|$ is empty, then $r_c(D)=-1$; otherwise, $r_c(D)$ is the greatest non-negative integer $r$ such that for every $E\in\DivPlusR(\Gamma)$, there exists a divisor $D'\in|D|$ such that $E\leq D'$.
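As an aside, the reduced divisors appearing throughout this section have a classical finite-graph counterpart: the $q$-reduced representative of a divisor class, computable by Dhar's burning algorithm. The following Python sketch illustrates this discrete analogue for a divisor that is effective away from the sink; the data structures and function names are our own and are not taken from the text.

```python
def burn(adj, D, q):
    """Dhar's burning algorithm: start a fire at the sink q; a vertex v
    burns once more than D[v] of its incident edges are burnt.
    Returns the set of unburnt vertices (empty iff D is q-reduced)."""
    burnt = {q}
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v in burnt:
                continue
            # number of (multi-)edges from v into the burnt region
            k = sum(m for u, m in adj[v].items() if u in burnt)
            if k > D[v]:
                burnt.add(v)
                changed = True
    return set(adj) - burnt


def reduce_divisor(adj, D, q):
    """q-reduced representative of D (assumed effective away from q):
    repeatedly fire the whole unburnt set until everything burns."""
    D = dict(D)
    while True:
        A = burn(adj, D, q)
        if not A:
            return D
        for v in A:
            for u, m in adj[v].items():
                if u not in A:  # chips leave A only across its boundary
                    D[v] -= m
                    D[u] += m
```

For instance, on a $3$-cycle with vertices $0,1,2$ and sink $0$, the divisor $2\cdot(1)$ reduces to $(0)+(2)$, corresponding to firing the vertex $1$ once.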
Essentially, this combinatorial rank function can be considered as a function on the set of complete linear systems since all divisors in a complete linear system have the same rank. The \emph{divisorial gonality} of $\Gamma$ is defined as the minimum degree $d$ such that there exists a complete linear system on $\Gamma$ of degree $d$ and rank at least $1$. Recall that we have defined linear systems in general as tropical polytopes contained in some complete linear systems in Definition~\ref{D:LinSys}. Therefore, the combinatorial rank function can be generalized as a function on the set of linear systems. \begin{definition} For a linear system $T$ on a metric graph $\Gamma$, the combinatorial rank $r_c(T)$ of $T$ is defined as follows: \begin{enumerate} \item If $T$ is the empty linear system, then $r_c(T)=-1$; \item Otherwise, $r_c(T)=\max\{r\mid\forall E\in \DivPlusR(\Gamma), \exists D'\in T\ \mbox{such that}\ E\leq D'\}$. \end{enumerate} \end{definition} The following notion of stable gonality of metric graphs comes from \cite{CKK15}. \begin{definition} \label{D:StaGonality} A metric graph $\Gamma$ is \emph{stably $d$-gonal} if it admits a degree-$d$ harmonic morphism from a modification $\Gamma^{\Mod}$ of $\Gamma$ to a metric tree. The \emph{stable gonality} of $\Gamma$ is the minimum degree $d$ such that there exists a harmonic morphism of degree $d$ from a modification of $\Gamma$ to a metric tree. \end{definition} Now we introduce a new rank function, which we call the \emph{geometric rank function}. \begin{definition} \label{D:GeoRank} For a linear system $T$ on a metric graph $\Gamma$, the \emph{geometric rank} $r_g(T)$ of $T$ is defined as follows: \begin{enumerate} \item If $T$ is the empty linear system, then $r_g(T)=-1$; \item Otherwise, $r_g(T)$ is the maximum of the integers $r$ such that there exists a map $\pi:\DivPlusR(\Gamma)\rightarrow T$ with the following conditions satisfied: \begin{enumerate} \item $\pi$ is continuous.
\item For every $E\in\DivPlusR(\Gamma)$, $E\leq\pi(E)$. \item The image $\pi(\DivPlusR(\Gamma))$ is tropically convex. \end{enumerate} In particular, we say $\pi$ is a \emph{rank-$r$} map to $T$. \end{enumerate} Moreover, the geometric rank of a divisor $D\in\Div(\Gamma)$ is defined to be the geometric rank of the complete linear system $|D|$. \end{definition} \begin{proposition} \label{P:SGon} $\Gamma$ is stably $d$-gonal if and only if $\Gamma$ admits a degree-$d$ dominant tropical tree, if and only if there exists a divisor $D\in\Div(\Gamma)$ of degree $d$ and geometric rank $1$. \end{proposition} \begin{proof} By Theorem~\ref{T:RedHarm}, we see that $\Gamma$ is stably $d$-gonal if and only if $\Gamma$ admits a degree-$d$ dominant tropical tree $T$. By Proposition~\ref{P:CritRedDivMap}, the reduced divisor map to $T$ is exactly a rank-$1$ map in the sense of Definition~\ref{D:GeoRank}. Therefore, the existence of a degree-$d$ dominant tropical tree is equivalent to the existence of a divisor $D\in\Div(\Gamma)$ of degree $d$ and geometric rank $1$. \end{proof} An immediate consequence of Proposition~\ref{P:SGon} is that the stable gonality is larger than or equal to the divisorial gonality, while the inequality can be strict, as shown in the following example.
\begin{figure} \centering \begin{tikzpicture} \begin{scope} \coordinate (center) at (0,0); \def1.5{1}; \def\sca{2.5}; \coordinate (v1) at ($(center)+(210:1.5)$); \coordinate (v2) at ($(center)+(90:1.5)$); \coordinate (v3) at ($(center)+(-30:1.5)$); \coordinate (u1) at ($(center)+(210:\sca*1.5)$); \coordinate (u2) at ($(center)+(90:\sca*1.5)$); \coordinate (u3) at ($(center)+(-30:\sca*1.5)$); \draw[thick] (center) circle[radius=1.5]; \fill[black] (v1) circle[radius=2pt]; \fill[black] (v2) circle[radius=2pt]; \fill[black] (v3) circle[radius=2pt]; \fill[black] (u1) circle[radius=2pt]; \fill[black] (u2) circle[radius=2pt]; \fill[black] (u3) circle[radius=2pt]; \foreach \i in {1,2,3} \path [thick] (u\i) edge (v\i) (u\i) edge [bend left=50] (v\i) (u\i) edge [bend right=50] (v\i); \draw [anchor=west] (v1) node {$v_1$}; \draw [anchor=north] (v2) node {$v_2$}; \draw [anchor=east] (v3) node {$v_3$}; \draw [anchor=north] (u1) node {$u_1$}; \draw [anchor=south] (u2) node {$u_2$}; \draw [anchor=north] (u3) node {$u_3$}; \end{scope} \begin{scope}[shift={(5,0)}] \coordinate (center) at (0,0); \def1.5{1.2}; \def\sca{2}; \coordinate (D1) at ($(center)+(210:1.5)$); \coordinate (D2) at ($(center)+(90:1.5)$); \coordinate (D3) at ($(center)+(-30:1.5)$); \coordinate (E1) at ($(center)+(210:\sca*1.5)$); \coordinate (E2) at ($(center)+(90:\sca*1.5)$); \coordinate (E3) at ($(center)+(-30:\sca*1.5)$); \coordinate (D12) at ($(center)+(150:1.5/2)$); \coordinate (D23) at ($(center)+(30:1.5/2)$); \coordinate (D13) at ($(center)+(-90:1.5/2)$); \fill [black!50,opacity=1] (D1) --(D2) -- (D3)-- (D1); \draw [black!50, line width = 1.8pt] (center) -- (E1); \draw [black!50, line width = 1.8pt] (center) -- (E2); \draw [black!50, line width = 1.8pt] (center) -- (E3); \draw [anchor=north] (D1) node {$D_1$}; \draw [anchor=east] (D2) node {$D_2$}; \draw [anchor=north] (D3) node {$D_3$}; \draw [anchor=north] (E1) node {$E_1$}; \draw [anchor=east] (E2) node {$E_2$}; \draw [anchor=north] (E3) node {$E_3$}; 
\end{scope} \begin{scope}[shift={(0,-5)}] \coordinate (center) at (0,0); \def1.5{1.2}; \def\sca{2}; \coordinate (D1) at ($(center)+(210:1.5)$); \coordinate (D2) at ($(center)+(90:1.5)$); \coordinate (D3) at ($(center)+(-30:1.5)$); \coordinate (E1) at ($(center)+(210:\sca*1.5)$); \coordinate (E2) at ($(center)+(90:\sca*1.5)$); \coordinate (E3) at ($(center)+(-30:\sca*1.5)$); \coordinate (D12) at ($(center)+(150:1.5/2)$); \coordinate (D23) at ($(center)+(30:1.5/2)$); \coordinate (D13) at ($(center)+(-90:1.5/2)$); \fill [black!10,opacity=1] (D1) --(D2) -- (D3)-- (D1); \draw [black!10, line width = 1.8pt] (center) -- (E2); \draw [black, line width = 1.8pt] (E1) -- (D1) -- (D3) -- (E3); \fill (D1) circle[radius=2pt]; \fill (D3) circle[radius=2pt]; \fill (E1) circle[radius=2pt]; \fill (E3) circle[radius=2pt]; \draw [anchor=north] (D1) node {$D_1$}; \draw [anchor=north] (D3) node {$D_3$}; \draw [anchor=north] (E1) node {$E_1$}; \draw [anchor=north] (E3) node {$E_3$}; \end{scope} \begin{scope}[shift={(5,-5)}] \coordinate (center) at (0,0); \def1.5{1}; \def\sca{2.5}; \coordinate (v1) at ($(center)+(210:1.5)$); \coordinate (v2) at ($(center)+(90:1.5)$); \coordinate (v3) at ($(center)+(-30:1.5)$); \coordinate (u1) at ($(center)+(210:\sca*1.5)$); \coordinate (u2) at ($(center)+(90:\sca*1.5)$); \coordinate (u3) at ($(center)+(-30:\sca*1.5)$); \path [black!20,thick] (u2) edge (v2) (u2) edge [bend left=50] (v2) (u2) edge [bend right=50] (v2); \fill[black!20] (u2) circle[radius=2pt]; \draw [black!20,anchor=south] (u2) node {$u_2$}; \draw[thick] (center) circle[radius=1.5]; \fill[black] (v1) circle[radius=2pt]; \fill[black] (v2) circle[radius=2pt]; \fill[black] (v3) circle[radius=2pt]; \fill[black] (u1) circle[radius=2pt]; \fill[black] (u3) circle[radius=2pt]; \foreach \i in {1,3} \path [thick] (u\i) edge (v\i) (u\i) edge [bend left=50] (v\i) (u\i) edge [bend right=50] (v\i); \draw [anchor=west] (v1) node {$v_1$}; \draw [anchor=north] (v2) node {$v_2$}; \draw [anchor=east] (v3) 
node {$v_3$}; \draw [anchor=north] (u1) node {$u_1$}; \draw [anchor=north] (u3) node {$u_3$}; \end{scope} \end{tikzpicture} \caption{An example of a metric graph with divisorial gonality $3$ and stable gonality $4$. A complete linear system $|D|$ whose combinatorial rank is $1$ and geometric rank is $0$ is shown. Divisors $D_1=3(v_1)$, $D_2=3(v_2)$, $D_3=3(v_3)$, $E_1=3(u_1)$, $E_2=3(u_2)$ and $E_3=3(u_3)$ are all elements of $|D|$.} \label{F:gonality} \end{figure} \begin{example} Figure~\ref{F:gonality} shows an example of a metric graph $\Gamma$ of divisorial gonality $3$ and stable gonality $4$. This example also appears in \cite{ABBR15_2} (Example~5.13). $\Gamma$ is a genus-$7$ metric graph made of a circle attached to three banana graphs of genus $2$. We let $u_1$, $u_2$, $u_3$, $v_1$, $v_2$ and $v_3$ be the vertices of $\Gamma$, with all edge lengths being identical. We consider the linearly equivalent divisors $D_1=3(v_1)$, $D_2=3(v_2)$, $D_3=3(v_3)$, $E_1=3(u_1)$, $E_2=3(u_2)$ and $E_3=3(u_3)$. In Figure~\ref{F:gonality}, the complete linear system $|D|$ which contains $D_1$, $D_2$, $D_3$, $E_1$, $E_2$ and $E_3$ is shown in the upper-right panel, and the tropical segment $[E_1,E_3]$ is shown in the lower-left panel with its support shown in the lower-right panel. As a tropical tree, $[E_1,E_3]$ is maximal but not dominant. On the other hand, $\supp(|D|)=\Gamma$ and it can be easily verified that $|D|$ is the only complete linear system of degree at most $3$ and combinatorial rank at least $1$. Therefore, the divisorial gonality of $\Gamma$ is $3$. On the other hand, $|D|$ contains no degree-$3$ dominant tropical tree, which means that the geometric rank of $|D|$ is $0$ and the stable gonality is strictly larger than $3$.
Actually, it can be verified that $\mathrm{tconv}(\{E_1+(u_2),E_3+(u_2), 2(w_{13})+2(x_1), 2(w_{13})+2(x_2), 2(w_{13})+2(x_3)\})$ is a dominant tropical tree of degree $4$, where $w_{13}$ is the middle point of $v_1v_3$ and the $x_i$'s are the middle points of the three edges between $u_2$ and $v_2$ respectively. Therefore the stable gonality is $4$ by Proposition~\ref{P:SGon}. \end{example} Lastly, we note that a dominant tropical tree, or the image of a rank-$1$ map in Definition~\ref{D:GeoRank}, is the degeneration of a linear series of rank $1$ on an algebraic curve $C$ which degenerates to $\Gamma$. We conjecture that, in general, the image of a rank-$r$ map is the degeneration of a linear series of rank $r$ on $C$. \section*{Acknowledgments} We are very thankful to Matthew Baker for his encouragement and support of broadening our original results on tropical convexity in the context of metric graphs \cite{Luo13} to the more general context of analysis. \bibliographystyle{alpha}
\section{Introduction} \label{sec:intro} \setcounter{equation}{0} First-order phase transitions lead to a variety of phenomena in the early Universe such as baryogenesis~\cite{Kuzmin:1985mm}, gravitational wave (GW) production~\cite{Witten:1984rs,Hogan:1986qda,Kosowsky:1991ua,Kosowsky:1992rz,Kosowsky:1992vn,Kamionkowski:1993fg}, and magnetogenesis~\cite{Vachaspati:1991nm}, among others. All these phenomena occur during the nucleation of bubbles, their expansion, and collisions. It is therefore important to understand the bubble dynamics in order to predict any possible signal of a first-order phase transition. The dynamics of bubble expansion is determined by the balance between pressure and friction to the walls: they receive pressure from the supercooled fluid, while friction arises from the particle species which receive masses from the change in the scalar field value. While there are discussions that friction is much more efficient than previously thought~\cite{Bodeker:2009qy, Bodeker:2017cim}, it is still important to understand the bubble behavior in a scalar-dominated system, since a scalar-dominated transition is likely to occur when the latent heat is much larger than the plasma energy density (e.g.~in near-conformal phase transitions with extreme supercooling~\cite{Randall:2006py,Espinosa:2008kw,Konstandin:2011dr,Hambye:2013sna,Jaeckel:2016jlh,Jinno:2016knw,Marzola:2017jzl,Iso:2017uuu,Chiang:2017zbz,vonHarling:2017yew,Bruggisser:2018mus,Bruggisser:2018mrt,Hambye:2018qjv,Baldes:2018emh,Hashino:2018wee,Prokopec:2018tnq,Brdar:2018num,Marzo:2018nov,Baratella:2018pxi,Fairbairn:2019xog}\footnote{ Ref.~\cite{Ellis:2019oqb} discusses the required value of $\alpha$ (latent heat density normalized by the plasma energy density just before the transition) for the scalar field to be dominant in a model having this property. Also, see Ref.~\cite{Ellis:2018mja} for the maximal value of $\alpha$ for polynomial potentials. }). 
If such a transition occurs, the energy released inside cosmological-scale bubbles accumulates mostly on the walls, and the resulting relativistic $\gamma$ factor is huge (say $\gtrsim 10^{10}$) at the time of collisions. Recent numerical simulations have been performed in Refs.~\cite{Child:2012qg,Braden:2014cra,Braden:2015vza,Bond:2015zfa,Cutting:2018tjt}, often with a focus on oscillons and gravitational wave production. However, simulating bubbles with a large $\gamma$ factor on the lattice is currently impossible in $3 + 1$ dimensions. Given this, the aim of the present work is to develop a method to understand relativistic bubble collisions analytically. We find that, in the relativistic limit, there is a simple governing equation that determines the wall behavior. In particular, there is the possibility that after the collision the scalar field bounces back to the symmetric phase and is trapped there. This equation tells us whether the trapping at the false vacuum occurs after collisions. This has a huge impact on the GW spectrum, since the GW spectrum takes quite different forms depending on whether the scalar field is damped at the collision point (in which case the envelope approximation~\cite{Kosowsky:1992rz,Kosowsky:1992vn,Huber:2008hg,Jinno:2016vai} should be appropriate) or the walls pass through each other mostly unhindered (in which case the flow approximation~\cite{Jinno:2017fby,Konstandin:2017sat} should be appropriate). Therefore, our study will help to identify the GW spectrum resulting from scalar-dominated transitions. The organization of the paper is as follows. In Sec.~\ref{sec:criterion} we first outline our setup and introduce the governing equation, which we call the ``trapping equation''. In Sec.~\ref{sec:test} we numerically check the validity of this equation with a variety of setups. In Sec.~\ref{sec:appl} we discuss the application of the trapping equation to $U(1)$ or $SU(2)$ breaking transitions.
In Sec.~\ref{sec:energy} we study the energy localization, which is important in determining the shape of the GW spectrum. Sec.~\ref{sec:conc} is devoted to conclusions. \section{The scalar dynamics after collisions} \label{sec:criterion} \setcounter{equation}{0} The goal of this paper is to establish a simple criterion that decides how the scalar field behaves after the collision of two highly-relativistic bubbles. For the most part, we will work in the planar approximation, which should be well justified during the first stages of the collision. The scalar field obeys the Klein-Gordon equation \begin{equation} \Box \phi + \frac{dV}{d\phi} = 0 \, , \end{equation} where we neglected all interactions but the self-interactions of the scalar field that are encoded in the scalar potential $V$. A soliton connects two local minima of the potential, and the form of the potential will determine the shape of the soliton while it is accelerating. One important point is that in the relativistic limit the kinetic term will dominate the dynamics, and the potential is actually irrelevant during the collision. Since the kinetic term only leads to a linear term in the equations of motion, the superposition of two solitons will persist even after the collision as long as the solitons collide with a highly-relativistic velocity. The solitons consist of an `inner' region where the scalar field $\phi$ has the values $\phi_{\rm left}$ and $\phi_{\rm right}$. The solitons are expanding into an `outer' region, where the scalar field has the value $\phi_{\rm outer}$. As long as the superposition of the solitons persists, the scalar field has to acquire the value \begin{equation} \phi_{\rm after} = \phi_{\rm left} + \phi_{\rm right} - \phi_{\rm outer} \, , \end{equation} after the collision. 
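This superposition rule can be illustrated with a minimal free-field lattice sketch (our own toy example, not the simulation code used for the figures; the grid and wall parameters are arbitrary): two counter-propagating walls evolved under the potential-free wave equation pass through each other, and the overlap region settles at $\phi_{\rm after} = \phi_{\rm left} + \phi_{\rm right} - \phi_{\rm outer} = 2$ for unit walls with $\phi_{\rm outer} = 0$.

```python
import numpy as np

def wall(x):
    # smooth wall profile interpolating between 0 and 1
    return 0.5 * (1.0 + np.tanh(2.0 * x))

def free_field(x, t):
    # right-moving wall (interior value 1 on its left) plus left-moving wall
    # (interior value 1 on its right); phi_outer = 0 in between
    return wall(-(x - 40.0 - t)) + wall(x - 60.0 + t)

x = np.linspace(0.0, 100.0, 2001)
dx = x[1] - x[0]
dt = dx                                  # Courant number 1: leapfrog transport is exact
phi_prev = free_field(x, -dt)
phi = free_field(x, 0.0)
for _ in range(int(round(20.0 / dt))):   # evolve to t = 20, after the walls cross
    lap = np.zeros_like(phi)
    lap[1:-1] = phi[2:] - 2.0 * phi[1:-1] + phi[:-2]
    phi_new = 2.0 * phi - phi_prev + (dt / dx) ** 2 * lap
    phi_new[0], phi_new[-1] = phi[0], phi[-1]    # field is constant at the edges
    phi_prev, phi = phi, phi_new

# between the walls the field sits at phi_after = 1 + 1 - 0 = 2
print(phi[len(x) // 2])
```

With a potential switched on, this superposed value would of course start rolling, which is exactly what the trapping equation below describes.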
This is exemplified in Fig.~\ref{fig:solitons} where we show the collision of two solitons and the collision of a soliton and an anti-soliton in a periodic potential (in a periodic potential $\phi_{\rm after}$ is again a minimum of the potential). \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{plots/solitons.pdf} \caption{\small The collision of two solitons (left) and the collision of a soliton and an anti-soliton (right) in a periodic potential. The walls are shown before (blue) and after (red) the collision. } \label{fig:solitons} \end{center} \end{figure} However, in the long run and in non-periodic potentials, the scalar field will not retain the value $\phi_{\rm after}$, since this is often not a local minimum of the potential. In essence, the scalar field will start rolling down the potential. The boundary conditions are set by the solitons flying apart, which induces an $SO(1,1)$ symmetric initial condition, and the solution can only depend on the $SO(1,1)$-invariant coordinate $s = \sqrt{t^2 - x^2}$. If the potential height at the $\phi$ value after settling down differs from $V(\phi_{\rm left})$ or $V(\phi_{\rm right})$, the corresponding wall can decelerate/accelerate, which breaks the $SO(1,1)$ symmetry. This will have a large impact on the gravitational wave spectrum created. For almost degenerate potential values, the walls will only lose energy through the expansion, and the model proposed in Refs.~\cite{Jinno:2017fby,Konstandin:2017sat} is likely to describe the GW production. If there is a large potential difference, the wall is quickly decelerated and the envelope approximation is more likely to describe the GW production~\cite{Kosowsky:1992rz,Kosowsky:1992vn,Huber:2008hg,Jinno:2016vai}. However, on time scales much smaller than the bubble separation, these effects are quite small and the $SO(1,1)$ symmetry is seen in most of our simulations (see Figs.~\ref{fig:spacetime} and \ref{fig:spacetime_example}). 
\begin{figure} \begin{center} \includegraphics[width=0.65\columnwidth]{plots/spacetime.pdf} \caption{\small Illustration of our setup. The bubble nucleates at $t = t_n$ at the position of the star. The bounce configuration is located along the thick black line. The scalar field configuration in the spacelike region from the nucleation point (red) is related to the bounce configuration (\ref{eq:phi_spacelike}), while the one in the timelike (blue) region is related to (\ref{eq:phi_timelike}). The collision time of the bubble $t_{\rm coll}$ is also indicated in the figure. To simulate two colliding bubbles, we impose reflecting boundary conditions at $x = x_{\rm coll}$, which is taken to be slightly larger than $t_{\rm coll} - t_n$. Our interest lies in the evolution of the system in the green region. We take the simulation end time $t_{\rm end}$ so that $t_{\rm end} - t_{\rm coll} \approx x_{\rm coll}$ to guarantee that the most relativistic components around $x \simeq x_{\rm coll}$ at $t = t_{\rm coll}$ propagate back to $x \simeq 0$ at $t = t_{\rm end}$. The expanding bubble has $SO(1,3)$ symmetry before $t = t_{\rm coll}$, while we approximate the system to be planar-symmetric after the collision to simplify the simulation. } \label{fig:spacetime} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\columnwidth]{plots/spacetime_example1.pdf} \includegraphics[width=0.48\columnwidth]{plots/spacetime_example2.pdf} \caption{\small A density plot of the scalar field in a hierarchical model (see Fig.~\ref{fig:Hierarchy_V}). Red and blue regions correspond to the false ($\phi = 0$) and true vacua ($\phi = 1$), respectively. {\bf Left side:} Before $t_{\rm coll}$, corresponding to the red and blue regions in Fig.~\ref{fig:spacetime}. {\bf Right side:} After $t_{\rm coll}$, corresponding to the green region in Fig.~\ref{fig:spacetime}. 
} \label{fig:spacetime_example} \end{center} \end{figure} Under these assumptions the dynamics of the scalar field close to the collision point is governed by the `trapping equation' \begin{equation} \partial_s^2 \phi + \frac{1}{s}\partial_s \phi + \frac{dV}{d\phi} = 0 \, . \label{eq:trapping} \end{equation} Here, the coordinate $s$ is the $SO(1,1)$ radial direction with the collision point at the origin, $s = \sqrt{t^2 - x^2}$. This equation can easily be solved numerically, which we will do in the next section and compare to scalar field simulations in order to test our hypothesis. \section{Testing the trapping equation} \label{sec:test} \setcounter{equation}{0} \subsection{Setup and initial conditions} \label{subsec:Setup} In order to test the trapping equation, we will assume planar symmetry for the colliding bubble walls. For the scalar configuration just before the collision, we use initial conditions that are derived from the $3 + 1$ dimensional setup (see Fig.~\ref{fig:spacetime}). Notice that in Secs.~\ref{subsec:Z2} and \ref{subsec:Z2mod} the potential minima are degenerate and the $3 + 1$ solutions become the exact soliton profiles in $1 + 1$ dimensions. After the collision, the scalar field fulfills the equation of motion \begin{equation} (\partial_t^2 - \partial_x^2) \phi + \frac{dV}{d\phi} = 0, \end{equation} where we assume planar symmetry. Another important quantity to track is the energy density, which is given by \begin{equation} \rho = \frac{1}{2}(\partial_t \phi)^2 + \frac{1}{2}(\partial_x \phi)^2 + V(\phi). \end{equation} This will be the relevant indicator of how gravitational waves are produced in the present scenario. A more detailed discussion of this topic will be given in Sec.~\ref{sec:energy}. The next question concerns the initial conditions of the scalar field before the collision, i.e.~the shape of the soliton during acceleration. 
Even though our actual simulation is only $1 + 1$ dimensional, we mainly use the $3 + 1$ dimensional shape of the soliton. These two configurations can be sizably different due to the friction term in the bounce equation. The evolution of the single bubble configuration after nucleation at $t = t_n$ is as follows: \begin{itemize} \item In the spacelike region (red) the scalar configuration is related to the bounce configuration $\bar{\phi}$ through $SO(1,3)$ symmetry. That is, \begin{equation} \phi(t,x) = \bar{\phi} \left( \sqrt{- (t - t_n)^2 + x^2} \right), \end{equation} where $\bar{\phi}(s = \sqrt{- (t - t_n)^2 + x^2})$ satisfies the bounce equation of motion: \begin{equation} \frac{d^2 \bar{\phi}}{ds^2} + \frac{3}{s} \frac{d\bar{\phi}}{ds} - \frac{dV}{d\bar{\phi}} = 0. \label{eq:phi_spacelike} \end{equation} \item In the timelike region (blue) the scalar configuration is again related through $SO(1,3)$ symmetry \begin{equation} \phi(t,x) = \tilde{\phi} \left( \sqrt{(t - t_n)^2 - x^2} \right), \end{equation} to the solution $\tilde{\phi}(s = \sqrt{(t - t_n)^2 - x^2})$ of the following equation of motion: \begin{equation} \frac{d^2 \tilde{\phi}}{ds^2} + \frac{3}{s} \frac{d\tilde{\phi}}{ds} + \frac{dV}{d\tilde{\phi}} = 0. \label{eq:phi_timelike} \end{equation} In particular, the field performs oscillations around the new local minimum of the potential. The extent of these oscillations is quite different in $3 + 1$ dimensions compared to $1 + 1$ dimensions, and we use the former as mentioned before even though the simulation is lower dimensional. \end{itemize} In the next subsections we will discuss a variety of models and test the trapping equation (\ref{eq:trapping}). First, we will discuss potentials with degenerate minima and a sizable barrier. In this case, trapping in the old phase is very likely. 
Next, we modify the potential beyond the new phase (but keep degenerate minima) and study for which parameters the simulation and the trapping equation predict a bounce back into the old phase. This allows for a quantitative test of the trapping equation. Then the opposite case is studied: potentials with a large hierarchy between the two phases and small barriers. First we study the idealized case of an infinitesimal barrier (that only enters in the initial conditions) and then move to more realistic models with and without a $Z_2$ symmetry. In the following numerical simulations we use the time discretization $\Delta t = 0.1 \Delta x$ except for Figs.~\ref{fig:Quartic_phi}, \ref{fig:Quartic_trapped}, \ref{fig:Quartic_escaped}, and \ref{fig:Quartic_escaped_limit}, in which we use $\Delta t = 0.05 \Delta x$. The spatial discretization $\Delta x$ is chosen depending on the setup. \subsection{Toy model 1: Simple $Z_2$ potential} \label{subsec:Z2} \begin{figure} \begin{center} \includegraphics[width=0.45\columnwidth]{plots/Z2_V.pdf} \caption{\small Potential shape of the simple and modified $Z_2$ model. The top line shows the $Z_2$-symmetric potential discussed in Sec.~\ref{subsec:Z2}, while the other lines belong to the modified potential in Sec.~\ref{subsec:Z2mod} with $\lambda = 0.1, 0.2, 0.3, 0.4$, and $0.5$. } \label{fig:Z2_V} \end{center} \vskip 0.5cm \begin{center} \includegraphics[width=0.48\columnwidth]{plots/Z2_phi_gamma=5.pdf} \includegraphics[width=0.48\columnwidth]{plots/Z2_phi_gamma=100.pdf} \caption{\small Simple $Z_2$ potential. Profile of $\phi$ for the $Z_2$ potential with a $\gamma$ factor of $5$ (left) and $100$ (right). The blue and red regions indicate the positive and negative vacua, respectively. } \label{fig:Z2_phi} \end{center} \end{figure} We first consider a $Z_2$-symmetric degenerate potential \begin{equation} V = \frac{1}{4}(\phi^2 - v^2)^2. 
\end{equation} As mentioned in Sec.~\ref{subsec:Setup}, the $3 + 1$ dimensional solutions reduce to the exact soliton profiles in $1 + 1$ dimensions. The profile is given by \begin{equation} \phi(t,x) = \pm \, v \, \tanh \left[ \frac{\gamma}{\sqrt{2}} \left( x - \sqrt{1 - \frac{1}{\gamma^2}} t \right) + \delta \right], \label{eq:tanh} \end{equation} with $\gamma$ being the relativistic $\gamma$ factor. We initially prepare soliton states according to (\ref{eq:tanh}) with the value $-v$ in the outer region and evolve the system with reflecting boundary conditions at $x = x_{\rm coll} = 10/v$. According to the trapping equation (\ref{eq:trapping}), the scalar field bounces back to the minimum at $\phi = -v$ after the collision in the large $\gamma$ limit. We show the time evolution for $\gamma = 5$ (small $\gamma$) and $100$ (large $\gamma$) in Fig.~\ref{fig:Z2_phi}. We evolve the system from $t = t_{\rm coll} = 0$ to $t = t_{\rm end} = 10/v$. We take $400 \gamma$ points for $0 < x < x_{\rm coll}$, so that $\Delta x = 1/(40 \gamma v)$ for the spatial discretization, and we choose the phase $\delta$ so that the argument inside the $\tanh$ becomes $5$ at the boundary $x = x_{\rm coll}$ at $t = t_{\rm coll}$. It is seen that, for both large and small $\gamma$, the scalar field bounces back to $\phi = -v$. This is consistent with the trapping equation (\ref{eq:trapping}). \subsection{Toy model 2: Modified $Z_2$ potential} \label{subsec:Z2mod} We next modify the $Z_2$-symmetric potential slightly to test the validity of the trapping equation quantitatively: \begin{equation} V = \left\{ \begin{matrix} \displaystyle \frac{1}{4}(\phi^2 - v^2)^2 & ~~ (\phi \leq v), \\[0.3cm] \displaystyle \frac{\lambda}{4}(\phi^2 - v^2)^2 & ~~ (\phi > v). \end{matrix} \right. \end{equation} Here $\lambda$ is a free parameter which controls the steepness for $\phi > v$ (see Fig.~\ref{fig:Z2_V}). We solve the same evolution equation as before with the same initial profile. 
Note that the potential modification for $\phi > v$ does not change the soliton profile. The motivation for making the potential shallower beyond the broken phase is to prevent the field from being driven back to the symmetric phase after the collision. According to the trapping equation (\ref{eq:trapping}), $\phi$ settles down to the positive minimum for $\lambda < \lambda_{\rm th} \simeq 0.186$, while it bounces back to the negative minimum for $\lambda > \lambda_{\rm th}$, where $\lambda_{\rm th}$ is a threshold value. Below we will see that this indeed holds in the large $\gamma$ limit. In Fig.~\ref{fig:Z2mod_phi}, we show the profile of $\phi$ with $\lambda = 0.1 < \lambda_{\rm th} \simeq 0.186$ (left) and $\lambda = 0.3 > \lambda_{\rm th}$ (right). The blue and red regions indicate that $\phi$ is in the positive and negative vacua, respectively. We evolve the system from $t = t_{\rm coll} = 0$ to $t = t_{\rm end} = 10/v$ with $200 \gamma$ points for $0 < x < x_{\rm coll}$. In contrast to Fig.~\ref{fig:Z2_phi}, $\phi$ settles down to the positive minimum for $\lambda = 0.1$, while $\phi$ bounces back to the negative minimum for $\lambda = 0.3$. The dependence on the parameters $\lambda$ and $\gamma$ is shown in Fig.~\ref{fig:Z2mod_lambda_gamma}. The red points denote parameters with a bounce back into the negative vacuum ($\phi (t = t_{\rm end}) < 0$), while the blue points denote parameters where $\phi$ stays in the positive vacuum ($\phi (t = t_{\rm end}) > 0$). The green-dashed line is $\lambda_{\rm th}$ as derived from the trapping equation. The boundary between the blue and red regions coincides very well with $\lambda_{\rm th}$ in the large $\gamma$ limit. 
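The threshold behavior can be reproduced by integrating the trapping equation (\ref{eq:trapping}) directly. The following SciPy sketch is our own illustration (with $v = 1$ and the regular initial value $\phi(0) = \phi_{\rm after} = 3v$ that follows from the superposition argument with $\phi_{\rm outer} = -v$):

```python
import numpy as np
from scipy.integrate import solve_ivp

def dV(phi, lam):
    # modified Z2 potential (v = 1): quartic coupling 1 for phi <= 1, lam for phi > 1
    return (lam if phi > 1.0 else 1.0) * phi * (phi**2 - 1.0)

def trapped_vacuum(lam, s_end=300.0):
    """Integrate the trapping equation phi'' + phi'/s + dV/dphi = 0 with the
    regular initial data phi(0) = phi_after = 3, phi'(0) = 0, and return the
    sign of the vacuum the field settles into (+1: stays at +v, -1: bounces
    back to -v)."""
    rhs = lambda s, y: [y[1], -y[1] / s - dV(y[0], lam)]
    sol = solve_ivp(rhs, (1e-6, s_end), [3.0, 0.0], rtol=1e-10, atol=1e-12)
    return np.sign(sol.y[0, -1])

print(trapped_vacuum(0.1), trapped_vacuum(0.3))
```

At $s = 300$ the friction term has damped the oscillation well below the barrier height, so the sign of $\phi$ identifies the final vacuum; $\lambda = 0.1$ and $0.3$ straddle $\lambda_{\rm th} \simeq 0.186$ as quoted above.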
\begin{figure} \begin{center} \includegraphics[width=0.48\columnwidth]{plots/Z2mod_lambda=01_gamma=100.pdf} \includegraphics[width=0.48\columnwidth]{plots/Z2mod_lambda=03_gamma=100.pdf} \caption{\small Profile of $\phi$ for the modified $Z_2$ potential with a $\gamma$ factor of $100$ with $\lambda = 0.1$ ($< \lambda_{\rm th} \simeq 0.186$, left) and $\lambda = 0.3$ ($> \lambda_{\rm th}$, right). The blue and red regions indicate the positive and negative vacua, respectively. For the former the scalar field does not bounce back to the vacuum at $\phi = -v$, while it does for the latter. Compare with the right panel of Fig.~\ref{fig:Z2_phi}. \label{fig:Z2mod_phi} } \end{center} \vskip 0.7cm \begin{center} \includegraphics[width=0.6\columnwidth]{plots/Z2mod_lambda_gamma.pdf} \caption{\small Modified $Z_2$ potential. The red and blue points are parameter points where $\phi (t = t_{\rm end}) \lessgtr 0$, respectively. The green-dashed line is $\lambda = \lambda_{\rm th} \simeq 0.186$ predicted by Eq.~(\ref{eq:trapping}). } \label{fig:Z2mod_lambda_gamma} \end{center} \end{figure} \subsection{Toy model 3: Hierarchical potential} \label{subsec:Quadratic} Next let us consider the opposite case to the previous one. We consider a potential where the false vacuum is located at $\phi = 0$ with an infinitesimally small trap. The true vacuum is located at $\phi = v$, and the potential shape around it is quadratic. Thus \begin{equation} V \simeq \frac{1}{2} m^2 (\phi - v)^2, \end{equation} up to small effects very close to the symmetric phase. As mentioned in Sec.~\ref{subsec:Setup}, we regard the system as $3 + 1$ dimensional (i.e.~spherical) until the collision, while we approximate it as $1 + 1$ dimensional (i.e.~planar) when we calculate the collision dynamics. 
For a quadratic potential, the initial conditions (\ref{eq:phi_spacelike}) and (\ref{eq:phi_timelike}) are solved analytically \begin{equation} \tilde{\phi}(s) = v \left[ 1 - \frac{2J_1(ms)}{ms} \right], \end{equation} where $J_1$ is the Bessel function. Therefore, the scalar configuration is given by \begin{equation} \phi(t,x) = v\left[ 1 - \frac{2J_1\left( m \sqrt{(t - t_n)^2 - x^2}\right)}{m \sqrt{(t - t_n)^2 - x^2}} \right]. \end{equation} For the collision, we again approximate the walls to be planar. Since the effect of the configuration in the spacelike region is infinitesimally small, we may approximate the configuration at the beginning of the simulation ($t = t_{\rm coll}$) as \begin{equation} \left. \phi(t,x) \right|_{t = t_{\rm coll}} \simeq \left\{ \begin{matrix} \displaystyle v \left[ 1 - \frac{2J_1\left( m \sqrt{(t_{\rm coll} - t_n)^2 - x^2}\right)}{m \sqrt{(t_{\rm coll} - t_n)^2 - x^2}} \right] ~~~~ (0 < x < t_{\rm coll} - t_n), \\[0.7cm] 0 ~~~~ (t_{\rm coll} - t_n < x), \end{matrix} \right. \label{eq:phiini} \end{equation} and also the time derivative is given by \begin{equation} \left. \partial_t \phi(t,x) \right|_{t = t_{\rm coll}} \simeq \left\{ \begin{matrix} \displaystyle \left. v \, \partial_t \left[ 1 - \frac{2J_1\left( m \sqrt{(t_{\rm coll} - t_n)^2 - x^2}\right)}{m \sqrt{(t_{\rm coll} - t_n)^2 - x^2}} \right] \right|_{t = t_{\rm coll}} ~~~~ (0 < x < t_{\rm coll} - t_n), \\[0.7cm] 0 ~~~~ (t_{\rm coll} - t_n < x). \end{matrix} \right. \label{eq:phiprini} \end{equation} We then study the time evolution of the system with these initial conditions\footnote{ The details of the setup are as follows. The system size is taken to be $x_{\rm coll} \equiv (t_{\rm coll} - t_n) + 5/\gamma/m$, and we evolve the system from $t = t_{\rm coll}$ to $t = t_{\rm coll} + x_{\rm coll} \equiv t_{\rm end}$ with reflecting boundary conditions at $x = x_{\rm coll}$. 
The scalar configuration is taken as \begin{equation} \phi(t,x) = \left\{ \begin{matrix} \displaystyle v \left[ 1 - \frac{2J_1\left( m \sqrt{(t - t_n)^2 - x^2}\right)}{m \sqrt{(t - t_n)^2 - x^2}} \right] ~~~~ (0 < x < t - t_n), \\ \displaystyle v \, e^{-m^2 \gamma^2 (x - (t - t_n))^2} \left[ - 1 + \frac{2J_1\left( m \sqrt{(t - t_n)^2 - (2(t - t_n) - x)^2}\right)}{m \sqrt{(t - t_n)^2 - (2(t - t_n) - x)^2}} \right] ~~~~ (t - t_n < x), \end{matrix} \right. \nonumber \end{equation} in order to ensure $\left. \partial_t \phi(t,x) \right|_{t = t_{\rm coll}, x = x_{\rm coll}} \simeq 0$. The initial conditions are given by $\left. \phi(t,x) \right|_{t = t_{\rm coll}}$ and $\left. \partial_t \phi(t,x) \right|_{t = t_{\rm coll}}$. }. Since no analytic profile for the initial scalar field is known, we define the $\gamma$ factor of the colliding bubble walls using the Lorentz contraction of the wall (see Fig.~\ref{fig:gamma_def}): At the nucleation time, we have the bounce configuration along $x$ direction (the thick black line), which has not yet reached the potential minimum. The scalar field reaches the minimum after some time ($\sim {\rm (typical~potential~mass~scale)}^{-1}$), which we denote $t_{\gamma = 1}$. We define $d_{\gamma = 1}$ as the spatial distance between the two points where the scalar field takes the minimum and the maximum at this time slice. We define $d_\gamma$ as the distance between such spatial points at the time slice $t = t_\gamma$. Then we define the $\gamma$ factor as the ratio of the two distances: \begin{equation} \gamma = \frac{d_{\gamma = 1}}{d_\gamma}. \end{equation} For the hierarchical potential we numerically find $t_{\gamma = 1} - t_n \simeq 3.83/m$. In the simulation, we identify $t_{\rm coll}$ as $t_\gamma$ for a given value of $\gamma$. For the spatial discretization we use $100\gamma^2$ points for $0 < x < x_{\rm coll}$. 
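The Bessel-function profile in the timelike region can be checked directly. A small sketch of ours (with $m = v = 1$) confirms that the field sits at the false vacuum on the light cone ($s \to 0$, where $2J_1(x)/x \to 1$) and performs decaying oscillations around the true vacuum at large $s$:

```python
import numpy as np
from scipy.special import j1

def phi_timelike(s, m=1.0, v=1.0):
    # regular solution of phi'' + (3/s) phi' + m^2 (phi - v) = 0:
    # phi(s) = v [1 - 2 J_1(m s) / (m s)]
    x = m * s
    return v * (1.0 - 2.0 * j1(x) / x)

s = np.linspace(1e-8, 60.0, 6001)
phi = phi_timelike(s)
print(phi[0])    # field starts at the false vacuum (phi ~ 0) on the light cone
print(phi[-1])   # decaying oscillation around the true vacuum v = 1 at large s
```

The $1/\sqrt{s}$ decay of the Bessel envelope is the $3 + 1$ dimensional friction effect mentioned above, which is absent for the exact $1 + 1$ dimensional solitons.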
Fig.~\ref{fig:Hierarchy_phi} displays the time evolution for $\gamma = 5$, $10$, and $20$ from top to bottom, respectively. The left panels show the value of $\phi$, while the right panels show the time evolution at the collision point $\phi(t, x = x_{\rm coll})$. In the right panels the blue lines are the actual evolution in our simulation, while the red lines are the prediction from the trapping equation, Eq.~(\ref{eq:trapping}) (which reduces to $\phi \simeq v \left[ 1 + J_0(m(t - t_{\rm offset})) \right]$ with $t_{\rm offset} = (3.83 + 5)/(\gamma m)$). In the offset, the contribution $3.83/(\gamma m)$ comes from $t_{\gamma}$ as seen in Fig.~\ref{fig:gamma_def}, and the contribution $5/(\gamma m)$ comes from the offset between the collision and the initial time. We see that the prediction nicely matches the actual evolution for large $\gamma$. \begin{figure} \begin{center} \includegraphics[width=0.5\columnwidth]{plots/Hierarchy_V.pdf} \caption{\small Potential shape of the hierarchical model. } \label{fig:Hierarchy_V} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.6\columnwidth]{plots/gamma_def.pdf} \caption{\small Definition of the wall $\gamma$ factor in this paper. } \label{fig:gamma_def} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.45\columnwidth]{plots/Hierarchy_phi_gamma=5.pdf} \hskip 0.5cm \includegraphics[width=0.4\columnwidth]{plots/Hierarchy_phi0_gamma=5.pdf} \vskip 0.5cm \includegraphics[width=0.45\columnwidth]{plots/Hierarchy_phi_gamma=10.pdf} \hskip 0.5cm \includegraphics[width=0.4\columnwidth]{plots/Hierarchy_phi0_gamma=10.pdf} \vskip 0.5cm \includegraphics[width=0.45\columnwidth]{plots/Hierarchy_phi_gamma=20.pdf} \hskip 0.5cm \includegraphics[width=0.4\columnwidth]{plots/Hierarchy_phi0_gamma=20.pdf} \caption{\small Hierarchical potential. {\bf Left side:} Time evolution of $\phi$ for $\gamma = 5,10$, and $20$ from top to bottom. 
{\bf Right side:} Time evolution of $\phi(t,x = x_{\rm coll})$ for $\gamma = 5,10$, and $20$ from top to bottom. (Blue) Numerical evolution. (Red) Prediction from Eq.~(\ref{eq:trapping}). } \label{fig:Hierarchy_phi} \end{center} \end{figure} \subsection{Toy model 4: Simple quartic potential} \label{subsec:Quartic} Next, we consider a more realistic potential, namely \begin{equation} V = a v^2 \phi^2 - (2a + 4) \, v \, \phi^3 + (a + 3) \phi^4, \end{equation} where the coefficients are chosen so that $\phi = v$ is a local minimum with $V(v) = -v^4$. This potential has a local maximum at $\phi / v = a/(2a + 6)$. We also define the degeneracy parameter $\epsilon$ as \begin{equation} \epsilon = \frac{{\rm (barrier~height)} - {\rm (false~vacuum~height)}}{{\rm (barrier~height)} - {\rm (true~vacuum~height)}} = \frac{a^3(a+4)}{a^3(a+4)+16(a+3)^3}. \end{equation} The smaller $\epsilon$ is, the weaker the false-vacuum trapping becomes. In Fig.~\ref{fig:Quartic_V} we plot the potential for $\epsilon = 0.1$, $0.01$, and $0.001$. The trapping equation (\ref{eq:trapping}) predicts that $\phi$ is trapped at the false vacuum for $\epsilon \gtrsim \epsilon_{\rm th} \simeq 0.214$. For the numerical simulation we use the same definition for the $\gamma$ factor as in Fig.~\ref{fig:gamma_def}, and identify $t_{\rm coll}$ as $t_\gamma$ for a given value of $\gamma$. We use $50\gamma \times v x_{\rm coll}$ or $25\gamma \times v x_{\rm coll}$ (both $\propto \gamma^2$) points for the spatial discretization for $\gamma \leq 30$ or $\gamma > 30$, respectively. In Fig.~\ref{fig:Quartic_phi} we plot the time evolution of $\phi$ (left panels) and $\phi (t, x = x_{\rm coll})$ (right panels) for $\gamma = 40$ and $\epsilon = 0.5$, $0.1$, and $0.05$ from top to bottom. The blue (red) regions in the left panels correspond to the true (false) vacua, while the blue (red) lines in the right panels are the actual (predicted) time evolution. 
As predicted by Eq.~(\ref{eq:trapping}), $\phi$ is trapped at the false vacuum soon after the collision for $\epsilon = 0.5$, while it escapes for the other values of $\epsilon$. Also, the diamond-like pattern in the top-left panel can be understood as a consequence of trapping: Once trapping occurs, the wall receives negative pressure due to the phase difference across it. The pressure eventually stops the wall motion completely and then inverts it. The position of the turnback can be estimated by equating the energy per surface area at the collision time ($x_{\rm coll} \cdot v^4 / 3$) with the work per surface area exerted on the wall from collision to turnback ($\Delta x_{\rm turnback} \cdot v^4$) as $\Delta x_{\rm turnback} \simeq x_{\rm coll}/3$, which agrees well with the simulation. Note that this diamond-like pattern has already been observed in the literature (e.g. Refs.~\cite{Kosowsky:1991ua,Konstandin:2011ds,Braden:2014cra,Bond:2015zfa,Cutting:2018tjt}). Fig.~\ref{fig:Quartic_epsilon_gamma} shows the result of our parameter scan. The blue (red) points are the parameter values where $\phi$ escapes from (is trapped at) the false vacuum\footnote{ The criterion for trapping is as follows. The `energy' at the collision point $\left[ (\partial_t {\phi})^2/2 + V(\phi) \right]_{x = x_{\rm coll}}$ decreases after the collision. We numerically calculate the time when it drops to the value of the barrier height $\left[ (\partial_t {\phi})^2/2 + V(\phi) \right]_{x = x_{\rm coll}} = V (\phi / v = a / (2a + 6))$, and check whether $\phi$ is on the false- or true-vacuum side. We use the same criterion for the quartic $Z_2$ potential as well. }. The prediction of the trapping equation $\epsilon \gtrsim \epsilon_{\rm th} \simeq 0.214$ is indicated by the green line. We see that the boundary between the blue and red points approaches the green line in the large $\gamma$ limit. 
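The same kind of check can be performed for this potential by inverting $\epsilon(a)$ and integrating the trapping equation from $\phi(0) = \phi_{\rm after} = 2v$ (here $\phi_{\rm outer} = 0$). The following SciPy sketch is ours, with $v = 1$ and illustrative function names:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def eps_of_a(a):
    # closed-form degeneracy parameter of the simple quartic potential
    return a**3 * (a + 4.0) / (a**3 * (a + 4.0) + 16.0 * (a + 3.0)**3)

def a_of_eps(eps):
    # eps(a) runs from 0 to 1 for a > 0, so invert by bisection
    return brentq(lambda a: eps_of_a(a) - eps, 1e-6, 1e6)

def dV(phi, a):
    # V = a phi^2 - (2a + 4) phi^3 + (a + 3) phi^4 with v = 1
    return 2.0 * a * phi - 3.0 * (2.0 * a + 4.0) * phi**2 + 4.0 * (a + 3.0) * phi**3

def final_vacuum(eps, s_end=300.0):
    """Integrate the trapping equation with phi(0) = phi_after = 2v and
    classify the endpoint: 0 -> trapped at the false vacuum, 1 -> escaped
    to the true vacuum."""
    a = a_of_eps(eps)
    rhs = lambda s, y: [y[1], -y[1] / s - dV(y[0], a)]
    sol = solve_ivp(rhs, (1e-6, s_end), [2.0, 0.0], rtol=1e-10, atol=1e-12)
    return 0 if abs(sol.y[0, -1]) < 0.5 else 1

print(final_vacuum(0.5), final_vacuum(0.05))
```

The two sample values $\epsilon = 0.5$ and $0.05$ lie safely on either side of $\epsilon_{\rm th} \simeq 0.214$, matching the trapped and escaped panels of Fig.~\ref{fig:Quartic_phi}.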
\begin{figure} \begin{center} \includegraphics[width=0.5\columnwidth]{plots/Quartic_V.pdf} \caption{\small Quartic potential for $\epsilon = 0.1$ (blue), $0.01$ (red), and $0.001$ (green). } \label{fig:Quartic_V} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.45\columnwidth]{plots/Quartic_phi_eps=05_gamma=40.pdf} \hskip 0.5cm \includegraphics[width=0.4\columnwidth]{plots/Quartic_phi0_eps=05_gamma=40.pdf} \vskip 0.3cm \includegraphics[width=0.45\columnwidth]{plots/Quartic_phi_eps=01_gamma=40.pdf} \hskip 0.5cm \includegraphics[width=0.4\columnwidth]{plots/Quartic_phi0_eps=01_gamma=40.pdf} \vskip 0.3cm \includegraphics[width=0.45\columnwidth]{plots/Quartic_phi_eps=005_gamma=40.pdf} \hskip 0.5cm \includegraphics[width=0.4\columnwidth]{plots/Quartic_phi0_eps=005_gamma=40.pdf} \caption{\small Simple quartic potential. {\bf Left side:} Density plot of $\phi$ for $\epsilon = 0.5$, $0.1$, and $0.05$ and $\gamma = 40$ from top to bottom. Note that false-vacuum trapping is predicted from Eq.~(\ref{eq:trapping}) for $\epsilon = 0.5$. {\bf Right side:} Time evolution of $\phi (t, x = x_{\rm coll})$ for the parameter choice in the left panels. } \label{fig:Quartic_phi} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.6\columnwidth]{plots/Quartic_epsilon_gamma.pdf} \caption{\small Simple quartic potential. The blue and red points indicate that $\phi$ is trapped at the true and false vacua, respectively. The green line is the threshold value $\epsilon_{\rm th} \simeq 0.214$ predicted by Eq.~(\ref{eq:trapping}). } \label{fig:Quartic_epsilon_gamma} \end{center} \end{figure} \subsection{Toy model 5: Quartic $Z_2$ potential\label{sec:QuarticZ2}} \begin{figure} \begin{center} \includegraphics[width=0.5\columnwidth]{plots/QuarticZ2_V.pdf} \caption{\small Quartic $Z_2$ potential $V$ for $\epsilon = 0.1$ (blue), $0.01$ (red), and $0.001$ (green). 
} \label{fig:QuarticZ2_V} \end{center} \vskip 0.7cm \begin{center} \includegraphics[width=0.6\columnwidth]{plots/QuarticZ2_epsilon_gamma.pdf} \caption{\small Quartic $Z_2$ potential. The red and green points indicate that $\phi$ is trapped at the symmetric and negative vacua, respectively. The green line is the threshold value $\epsilon_{\rm th} \simeq 0.867$ predicted by Eq.~(\ref{eq:trapping}). } \label{fig:QuarticZ2_epsilon_gamma} \end{center} \end{figure} Finally, we consider a potential similar to the previous one but modified to have $Z_2$ symmetry: \begin{equation} V = a v^2 \, |\phi|^2 - (2a + 4) \, v \,|\phi|^3 + (a + 3)|\phi|^4. \end{equation} The potential is plotted in Fig.~\ref{fig:QuarticZ2_V}. The degeneracy parameter $\epsilon$ is defined in the same way as before. This setup is not realistic in that a domain wall forms after configurations of different sign collide with each other, so we study it as a toy model. In the following we make two positive same-sign configurations collide from opposite directions. The prediction from Eq.~(\ref{eq:trapping}) is that $\phi$ is trapped at the negative (opposite) vacuum for $\epsilon < \epsilon_{\rm th}$ with $\epsilon_{\rm th} \simeq 0.867$, while it settles down to the symmetric one (vanishing VEV) for $\epsilon > \epsilon_{\rm th}$. There are no parameter values where $\phi$ settles down to the positive vacuum. This means that the scalar field is likely to be trapped at the opposite vacuum unless the vacua are almost degenerate. Fig.~\ref{fig:QuarticZ2_epsilon_gamma} shows the result of the numerical simulation. The red and green markers indicate that $\phi$ is trapped at the symmetric and negative vacua, respectively, while the green line is the prediction of the trapping equation (\ref{eq:trapping}). As predicted by this equation, the scalar field is never trapped at the positive vacuum in the relativistic limit. Also, the boundary between the red and green regions approaches the green line in the relativistic limit. 
\section{Applications of the trapping equation} \label{sec:appl} \setcounter{equation}{0} In the last section we compared the results of the trapping equation with results from $1 + 1$ dimensional simulations. We could firmly establish that in the limit of highly-relativistic wall velocities, the trapping equation predicts the correct behavior of the scalar field not only qualitatively but also quantitatively rather well. In this section we will use the trapping equation to study more complicated setups, in particular setups with several scalar fields. In this case, lattice simulations might still be possible, but solving the trapping equation is almost trivial. We discuss trapping for scalar fields with a $U(1)$ and $SU(2)$ global symmetry. As scalar potential, we use the quartic $Z_2$ potential of Sec.~\ref{sec:QuarticZ2}: \begin{equation} V(|\phi|) = a v^2 \, |\phi|^2 - (2a + 4) \, v \,|\phi|^3 + (a + 3)|\phi|^4. \end{equation} In both cases the collision of two solitons is parametrized by the opening angle $\alpha$ between the two configurations. Correspondingly, the initial condition for the trapping equation (\ref{eq:trapping}) is modified to \begin{equation} |\phi_{\rm after} - \phi_{\rm outer}| = |\phi_{\rm left} + \phi_{\rm right} - 2\phi_{\rm outer}| = 2 \, v \, \cos \left( \frac{\alpha}{2} \right), \label{eq:initial_appl} \end{equation} where $\phi_{\rm left}$ and $\phi_{\rm right}$ are the values of $\phi$ inside the two colliding bubbles (the broken phase) and $\phi_{\rm outer}$ is the value in the symmetric phase (see Fig.~\ref{fig:alpha}). The case $\alpha = 0$ then corresponds to the case studied in the last section, while $\alpha = \pi$ corresponds to the collision of two scalar walls pointing in opposite directions in the $U(1)$ or $SU(2)$ field space. In the following we take $\phi_{\rm after}$ to be real and positive without loss of generality. Fig.~\ref{fig:U1} displays the results of the trapping equation (\ref{eq:trapping}) with the initial condition (\ref{eq:initial_appl}). 
The blue, red, and green regions indicate where $\phi$ is trapped at the positive, zero, and negative (opposite) vacua, respectively. As the false vacuum trapping becomes weaker ($\epsilon \to 0$), the scalar field becomes less likely to be trapped at the false vacuum. \begin{figure} \begin{center} \includegraphics[width=0.4\columnwidth]{plots/alpha.pdf} \caption{\small Opening angle $\alpha$ for the $U(1)$ case. The $SU(2)$ case is analogous. } \label{fig:alpha} \end{center} \vskip 0.5cm \begin{center} \includegraphics[width=0.6\columnwidth]{plots/U1.pdf} \caption{\small Prediction of the trapping equation (\ref{eq:trapping}) with the initial condition (\ref{eq:initial_appl}) for $U(1)$ or $SU(2)$ breaking potentials. The blue, red, and green regions correspond to $\phi = +1$, $0$, and $-1$ for $s \to \infty$. } \label{fig:U1} \end{center} \end{figure} \section{Energy dynamics after collisions} \label{sec:energy} \setcounter{equation}{0} Finally we discuss the energy dynamics after the collision. This is important because the energy distribution determines observable signatures like the GW spectrum. Indeed, the GW spectrum takes quite different forms when the scalar field instantly loses energy at the collision point~\cite{Kosowsky:1992vn,Huber:2008hg,Jinno:2016vai} and when energy propagates even after collisions~\cite{Jinno:2017fby,Konstandin:2017sat}. Therefore, our main interest lies in the degree of energy localization. For later use, let us first define $d_R$ as follows: \begin{align} d_R &\equiv {\rm minimum~value~of~the~spatial~interval} \nonumber \\ &~~~~ {\rm ~in~which~fraction~}R{\rm ~of~the~total~energy~is~localized}. \end{align} For example, $d_{0.5}$ and $d_{0.8}$ are the lengths of the smallest intervals in which $50\%$ and $80\%$ of the total energy is localized, respectively.
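Operationally, $d_R$ can be extracted from a discretized energy density with a standard two-pointer scan over all contiguous windows. A minimal sketch (our own implementation, assuming a uniform grid):

```python
import numpy as np

def d_R(rho, dx, R):
    """Length of the smallest contiguous interval containing a fraction R of the total energy.

    rho : energy density sampled on a uniform grid with spacing dx.
    """
    E = np.asarray(rho, dtype=float) * dx   # energy per cell
    target = R * E.sum()
    best = len(E)                           # window size in cells
    left, window = 0, 0.0
    for right in range(len(E)):
        window += E[right]
        # shrink the window from the left while it still holds the target fraction
        while left < right and window - E[left] >= target:
            window -= E[left]
            left += 1
        if window >= target:
            best = min(best, right - left + 1)
    return best * dx
```

For a uniform density on ten unit cells, `d_R(np.ones(10), 1.0, 0.5)` returns `5.0`, while a delta-like density concentrated in one cell gives a single cell width.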
In the following we present the ratio $d_R (t = t_{\rm end}) / d_R (t = t_{\rm coll})$, which parametrizes the degree of wall thickening during evolution from $t_{\rm coll}$ to $t_{\rm end}$. \begin{figure} \begin{center} \includegraphics[width=0.45\columnwidth]{plots/Z2mod_rho_lambda=01.pdf} \includegraphics[width=0.45\columnwidth]{plots/Z2mod_rho_lambda=03.pdf} \caption{\small Modified $Z_2$: Time evolution of $\rho$ for $\lambda = 0.1$ (left) and $0.3$ (right) with a $\gamma$ factor of $100$. The system evolves from the blue lines to the red lines. These plots correspond to the parameter choice of Fig.~\ref{fig:Z2mod_phi}. The collision occurs at $x \simeq 10/v$, where we impose reflecting boundary conditions. \label{fig:Z2_rho} } \vskip 0.5cm \includegraphics[width=0.45\columnwidth]{plots/Z2mod_ThicknessRatio_05.pdf} \includegraphics[width=0.45\columnwidth]{plots/Z2mod_ThicknessRatio_08.pdf} \caption{\small Modified $Z_2$: Thickness ratio $d_{0.5} (t_{\rm end}) / d_{0.5} (t_{\rm coll})$ (left) and $d_{0.8} (t_{\rm end}) / d_{0.8} (t_{\rm coll})$ (right) for $\lambda = 0.1$, $0.2$, $0.3$, $0.4$, and $0.5$ from the blue to the red lines. \label{fig:Z2mod_ThicknessRatio} } \end{center} \end{figure} {\bf Modified $Z_2$:} As a first example, consider the modified $Z_2$ potential in Sec.~\ref{subsec:Z2}. The time evolution of the energy density $\rho$ is displayed in Fig.~\ref{fig:Z2_rho} for $\lambda = 0.1$ (left) and $0.3$ (right). The $\gamma$ factor is taken to be $100$. Reflecting boundary conditions are imposed at $x \simeq 10/v$, and the system evolves from the blue to the red lines. Note that these parameters are the same as in Fig.~\ref{fig:Z2mod_phi}. The left panel corresponds to the case where the scalar field stays at the positive vacuum, while in the right panel it bounces back to the negative vacuum. We see that in both cases the energy is localized at the wall front even in the last time slice (the outermost profiles).
In Fig.~\ref{fig:Z2mod_ThicknessRatio} we plot the ratio $d_{0.5} (t_{\rm end}) / d_{0.5} (t_{\rm coll})$ (left) and $d_{0.8} (t_{\rm end}) / d_{0.8} (t_{\rm coll})$ (right). For any value of $\lambda$, the thickness ratio remains at ${\mathcal O}(1)$ values or gradually decreases as $\gamma$ increases. Note that the simulation time corresponds to the typical bubble size in a realistic situation. Since the initial wall thickness decreases as $\gamma^{-1}$, this means that the wall thickness after propagating over a distance comparable to the bubble radius also decreases as $\gamma^{-1}$, regardless of whether $\phi$ bounces back to the old phase or not. \begin{figure} \begin{center} \includegraphics[width=0.45\columnwidth]{plots/Hierarchy_rho_gamma=5.pdf} \includegraphics[width=0.45\columnwidth]{plots/Hierarchy_rho_gamma=20.pdf} \caption{\small Hierarchical potential: Time evolution of $\rho$ for $\gamma = 5$ (left) and $20$ (right). The collision occurs at the center, where we impose reflecting boundary conditions, and the system evolves from the blue to the red lines. \label{fig:Hierarchy_rho} } \vskip 1cm \includegraphics[width=0.6\columnwidth]{plots/Hierarchy_ThicknessRatio.pdf} \caption{\small Thickness ratio $d_{0.5}(t = t_{\rm end})/d_{0.5}(t = t_{\rm coll})$ (blue) and $d_{0.8}(t = t_{\rm end})/d_{0.8}(t = t_{\rm coll})$ (red) for the hierarchical potential. \label{fig:Hierarchy_ThicknessRatio} } \end{center} \end{figure} {\bf Hierarchical potential:} Next let us discuss the hierarchical potential from Sec.~\ref{subsec:Quadratic}. In Fig.~\ref{fig:Hierarchy_rho} we plot the time evolution of the energy density $\rho$ for $\gamma = 5$ (left) and $20$ (right). Just as in the previous example, the energy localization is still strong even in the last time slice. In Fig.~\ref{fig:Hierarchy_ThicknessRatio} we show the thickness ratio $d_{0.5}(t = t_{\rm end})/d_{0.5}(t = t_{\rm coll})$ (blue) and $d_{0.8}(t = t_{\rm end})/d_{0.8}(t = t_{\rm coll})$ (red).
We see that the thickness ratios approach ${\mathcal O}(1)$ values as $\gamma$ increases. Again, since the simulation time corresponds to the typical bubble size in a realistic situation, we expect that the wall thickness remains of order ${\rm (particle~physics~scale)}^{-1}$ even after the scalar field propagates over the typical bubble size. {\bf Quartic potential:} Let us finally study the quartic potential from Sec.~\ref{subsec:Quartic}. In this case the behavior of the scalar field is much more complicated than in the previous two examples. We take two parameter points $\epsilon = 0.5$ (Fig.~\ref{fig:Quartic_trapped}) and $0.05$ (Figs.~\ref{fig:Quartic_escaped} and \ref{fig:Quartic_escaped_limit}). The trapping equation (\ref{eq:trapping}) predicts that the scalar field is trapped at (escapes from) the false vacuum for the former (latter) potential at the initial stage. In Fig.~\ref{fig:Quartic_trapped} we show the case with a sizable barrier between the two phases ($\epsilon = 0.5$). The left panel (the same as the top-right panel of Fig.~\ref{fig:Quartic_phi}) shows the time evolution of $\phi$, while the right panel is the energy density distribution at $t = t_{\rm end}$. The $\gamma$ factor is taken to be $40$. We see from the left panel that the scalar field is indeed trapped in the false vacuum as Eq.~(\ref{eq:trapping}) predicts. We also see several collisions at $vt \simeq 0$, $34$, and $58$ caused by the trapping. Since the scalar field is again trapped in the false vacuum, the large pressure across the wall decelerates the wall and then accelerates it again for the subsequent collision. These multiple collisions result in the energy distribution in the right panel at the simulation end. The three peaks (from outside to inside) come from the first, second, and third collisions, respectively, while the energy localization at the center is the effect of trapping still continuing at the simulation end.
Interestingly, the outermost peak does not dominate the energy of the system: it carries only a fraction $0.241$ of the total energy, and this fraction does not change significantly even if $\gamma$ increases. Indeed, the fraction is $0.245$ and $0.246$ for $\gamma = 50$ and $60$, respectively. In addition, the distance between the outermost and inner peaks is stable against changes in $\gamma$, since it is determined by the condition ``(energy released until just before collision) $\simeq$ (energy stored in the false vacuum trapping)". Therefore, we conclude that the energy does not localize at the front if trapping occurs\footnote{ However, note two things: (1) The distance between the outermost and inner peaks is ${\mathcal O}(0.1) \times {\rm (bubble~radius)}$, which will not change significantly even in $3 + 1$ dimensional collisions. (2) Both peaks propagate at relativistic speeds. Together these imply that, after the energy peaks propagate over a distance much longer than the bubble radius at collisions, their distance is much shorter than the radius of the bubble-like structures. Therefore, the IR structure pointed out in Ref.~\cite{Jinno:2017fby} may appear in the GW spectrum. }. In Fig.~\ref{fig:Quartic_escaped} we show the case with a rather small barrier ($\epsilon = 0.05$). The $\gamma$ factor is taken to be the same as before, $\gamma = 40$. We see that the trapping does not occur, as Eq.~(\ref{eq:trapping}) predicts. In contrast to Fig.~\ref{fig:Quartic_trapped}, the outermost peaks are the highest ones in this case. The subsequent peaks carry a non-negligible fraction of the total energy, but they merge with the outermost ones in the large $\gamma$ limit. Fig.~\ref{fig:Quartic_escaped_limit} confirms this statement: the left and right panels show the energy localization for $\gamma = 50$ and $60$ with the same value of $\epsilon$, respectively. We clearly see that subdominant peaks merge with the outermost ones.
Therefore, we conclude that the energy localization and the propagation speed of the wall persist even after collisions if trapping does not occur. \begin{figure} \begin{center} \includegraphics[width=0.45\columnwidth]{plots/Energy_phi_eps=05_gamma=40.pdf} \includegraphics[width=0.45\columnwidth]{plots/Energy_rho_eps=05_gamma=40.pdf} \caption{\small Time evolution of $\phi$ (left) and the energy density at the simulation end $\rho(t = t_{\rm end})$ (right) for the quartic potential with $\epsilon = 0.5$ and $\gamma = 40$. The collision occurs at the position of the dashed line in the right panel. The trapping equation (\ref{eq:trapping}) predicts that $\phi$ is trapped at the false vacuum at the initial stage. } \label{fig:Quartic_trapped} \end{center} \vskip 0.5cm \begin{center} \includegraphics[width=0.45\columnwidth]{plots/Energy_phi_eps=005_gamma=40.pdf} \includegraphics[width=0.45\columnwidth]{plots/Energy_rho_eps=005_gamma=40.pdf} \caption{\small The same as Fig.~\ref{fig:Quartic_trapped} except that $\epsilon = 0.05$ and $\gamma = 40$. The trapping equation (\ref{eq:trapping}) predicts that $\phi$ escapes from the false vacuum. } \label{fig:Quartic_escaped} \end{center} \vskip 0.5cm \begin{center} \includegraphics[width=0.45\columnwidth]{plots/Energy_rho_eps=005_gamma=50.pdf} \includegraphics[width=0.45\columnwidth]{plots/Energy_rho_eps=005_gamma=60.pdf} \caption{\small How the right panel of Fig.~\ref{fig:Quartic_escaped} changes for different values of $\gamma$. The value of $\gamma$ is chosen to be $50$ and $60$ for the left and right panels, respectively. The inner peaks merge with the outermost peaks as $\gamma$ increases. } \label{fig:Quartic_escaped_limit} \end{center} \end{figure} In summary, there are several cases to consider for the energy distribution. If the false and true vacua are degenerate, the energy localization is still strong even after the walls propagate over a distance of order bubble radius after collision. 
This holds true regardless of whether the scalar field bounces back or not (modified $Z_2$), since the bubble wall does not decelerate after collision. When the two vacua are not degenerate, the energy distribution depends on the dynamics of the scalar field. If the false vacuum trapping does not occur, the energy remains localized in a region much thinner than the bubble size (as was the case for the hierarchical potential and the quartic potential with $\epsilon = 0.05$). If the false vacuum trapping occurs after collision, the scalar field feels a decelerating pressure and the energy gets dispersed from the relativistic front (as seen for the quartic potential with $\epsilon = 0.5$). The trapping equation (\ref{eq:trapping}) is hence a useful tool for determining the energy distribution after bubble collisions. \section{Conclusions} \label{sec:conc} \setcounter{equation}{0} In this paper we studied scalar field bubble collisions in first-order phase transitions in the relativistic regime. It is of great importance to understand the scalar field dynamics and the energy distribution in this case, since the shape of the GW spectrum differs significantly depending on whether the bubble walls instantly lose energy at the collision point or the energy propagates much further after collision. We proposed a ``trapping equation'' which describes the behavior of bubbles at the initial stage after the collision. The equation can be used to determine whether or not the scalar field bounces back and becomes trapped in the false vacuum. We extensively tested the validity of the trapping equation in a variety of setups in Sec.~\ref{sec:test} and compared it with scalar field simulations. We also discussed the implications of the trapping equation for $U(1)$ or $SU(2)$ breaking transitions in Sec.~\ref{sec:appl}, where scalar field simulations are more elaborate.
The false vacuum trapping has a major impact on the resulting GW spectrum, since it leads to a decelerating pressure on the propagating scalar field and therefore changes the extent of energy penetration after collision~\cite{Konstandin:2011ds}. Ultimately, the ``trapping equation'' determines which mechanism of GW production prevails after the phase transition: The so-called envelope approximation~\cite{Kosowsky:1992rz,Kosowsky:1992vn,Huber:2008hg,Jinno:2016vai} or the bulk flow model~\cite{Jinno:2017fby,Konstandin:2017sat}. \section*{Acknowledgment} The work of RJ was supported by Grants-in-Aid for JSPS Overseas Research Fellow (No. 201960698). This work is supported by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy -- EXC 2121 ``Quantum Universe'' -- 390833306.
\section{Introduction}% \label{sec:introduction} \subsection{Context and objectives}% \label{sec:context-objectives}% Stochastic modeling of complex systems involves a multilayered process where various sources of uncertainty are accounted for at different stages. In a goal-oriented framework, the ultimate focus of the model is to estimate a quantity of interest (QoI) that depends on model outputs in order to make predictions in support of risk management, regulatory assessment, performance optimization, safety or reliability engineering, and other critical decision tasks (\cite{OberkampfEtAl:2004cp, DupuisChowdhary:2013ae}). Since model outputs can be sensitive to distributional assumptions made throughout this process, the analysis of uncertainty in a QoI is essential. Additionally, applications in physically relevant settings often pose challenges not present in the conceptual formulation that demand incorporating data into the modeling process. The modeling of complex systems in physically relevant settings therefore requires goal-oriented uncertainty quantification (UQ) tools that allow one to distinguish sources of uncertainty present in the system with a view toward making robust, data-informed predictions. \begin{figure}[tb] \centering \includegraphics[width=0.8\textwidth]{FIG1.pdf} \caption{The modeling of steady-state subsurface flow in physically relevant settings demands goal-oriented UQ tools that allow one to distinguish different sources of uncertainty present in the system with a view toward making robust, data-informed predictions in support of decision tasks (adapted from \cite{OberkampfEtAl:2004cp}). 
A key challenge concerning the propagation of epistemic uncertainty from inputs to outputs is examined in detail in \cref{fig:propagation-uncertainty}.} \label{fig:layers-subsurface-flow-model} \end{figure} In the present work, we consider the modeling of steady-state subsurface flow, a complex system depicted in \cref{fig:layers-subsurface-flow-model} that has important applications in hydrology, carbon sequestration, and petroleum engineering (\cite{AndersonEtAl:2015gw, Ewing:1983rs, AarnesEtAl:2009ms, Dagan1989:ft}). The core mathematical problem involves a random partial differential equation (PDE) of elliptic type that describes the physics of steady-state flow. The stochastic coefficient in the PDE represents a conductivity field given by a geostatistical model with properties inferred from relevant data. Robust predictions for a QoI, and in particular techniques for quantifying the propagation of uncertainty in the geostatistical model, are critical to achieving the goal of informing decision tasks. However, in physically relevant settings the complexity of subsurface porosity and the sparsity of available data pose a number of challenges. We seek to address some of these challenges using techniques from applied probability. Information divergences, the primary tool that we will employ, have been successfully applied to problems in stochastic dynamics (\cite{DupuisChowdhary:2013ae, DupuisEtAl:2015ps, HarmandarisEtAl:2016vi, GourgouliasEtAl:2016sp}). Based on the Donsker--Varadhan variational principle (\cite{DupuisEllis:1997ld}), information divergences provide goal-oriented UQ bounds that first appeared in \cite{DupuisChowdhary:2013ae} and have since undergone various extensions and further analysis (\cite{LiXiu:2012fp, AtarChowdharyDupuis:2015rd, KatsoulakisEtAl:2017sc}).
In contrast to the applications considered in these works, obtaining robust bounds for our problem of interest demands a more nuanced implementation due to the multifaceted nature of the modeling task. Further, we wish to incorporate data into this process in a manner that complements existing inference procedures, enabling data-informed predictions. For our system of interest, we develop a novel application of hybrid information divergences to study the propagation of model-form uncertainty related to the inputs of the goal-oriented framework in \cref{fig:layers-subsurface-flow-model}. Model-form or epistemic uncertainty (\cite{Helton:1994uq, HoffmanHammonds:1994pu, Rowe:1994uu, FersonGinzburg:1996uq, Hora:1996ae, Parry:1996uq, PateCornell:1996uq}) in this context expresses ignorance in the nature of the geostatistical model due to lacking priors or incomplete information that is impacted by the sparsity of available data. This uncertainty represents a modeling error that we evaluate as the weak error between a QoI obtained from a nominal and an alternative geostatistical model for the inputs. The physical system we wish to study has many uncertain aspects and this modeling error coalesces with other sources of randomness and ultimately propagates to the estimation of QoIs and impacts decision tasks. Importantly, the hybrid nature of the information divergences employed here allows us to represent, aggregate, and distinguish various sources of uncertainty by treating distinct sources under different performance measures. These hybrid divergences have the form of a relative entropy penalized by a risk sensitive hybrid performance measure, similar in flavor to the Gibbs variational formula in statistical mechanics (\cite{Ellis:2006ed}), that strikes a balance between data-informed quantities (relative entropy) and observable dependent quantities (risk sensitive performance measures).
On the one hand, these bounds are robust in that for a fixed nominal model they bound all alternative models within a given information budget. On the other hand, these bounds are tight in that there exists an alternative model within a given information budget for which the bounds are attained as equality (\cite{AtarChowdharyDupuis:2015rd,GourgouliasEtAl:2017aa}). Further, there are connections with certain well-known concentration inequalities (\cite{DemboZeitouni:2010ld}) that can be leveraged for efficient computing; this latter approach was recently introduced and applied to model problems in \cite{GourgouliasEtAl:2017aa}. Numerical experiments are included here to demonstrate the application of this theory for (i) a straightforward UQ task in \cref{sec:screening-and-sa} concerning parametric sensitivity analysis and (ii) a more exploratory UQ task in \cref{sec:data-informed-bounds} featuring bounds for model misspecification due to sparse data that are not captured by small parametric perturbations. Although a mathematically rigorous hybrid modeling framework was first introduced in \cite{DupuisChowdhary:2013ae}, with variations in \cite{LiQiXiu:2014uq}, its full utility has not been explored in relation to the random PDE model investigated in this work or in relation to uncertainty arising from data. The hybrid divergences presented here provide a UQ framework that is well suited to the particular application of interest for a number of reasons. Firstly, the hybrid performance measures provide a natural way of representing and distinguishing various sources of uncertainty arising in the distinct layers of the modeling process that complement existing inference procedures. Secondly, the structure of the hybrid divergences allows UQ computations to be carried out non-intrusively, working with existing methods for the random PDE solver such as the popular Monte Carlo (MC) finite element method (FEM). 
Thirdly, the hybrid information divergence yields bounds that are, in some respect, the appropriate deliverable in the context of decision support due to their robustness which encapsulates ``worst-case'' scenarios within a rigorous formulation. In the remainder of this section, we formulate the model problem and some of the main UQ challenges that motivate our approach. \subsection{Formulation of the model problem}% \label{sec:model-problem}% Presently we detail the main layers comprising the subsurface flow system in \cref{fig:layers-subsurface-flow-model} each in turn. \subsubsection{Random PDE model} For a given probability space $(\Omega, \mathcal{F}, P} \newcommand{\Qb}{Q)$, we consider the random PDE in the unknown $u$, \begin{equation} \label{eq:model-rpde} - \nabla \cdot ( a(\omega, x) \nabla u(\omega, x)) = f(x), \quad \text{for } x \in \Gamma \subset \rset^d, \end{equation} subject to $u = u_0$ on $\partial \Gamma$, with given data $u_0$, source term $f: \Gamma \to \rset$, and conductivity $a:\Omega \times \Gamma \to \rset$. Arising from Darcy's law with continuity, \cref{eq:model-rpde} is a model for steady-state flow or diffusion. In subsurface hydrology, problem~\cref{eq:model-rpde} models time-independent groundwater flow where $u$ might represent a water head or the concentration of a contaminant (\cite{AndersonEtAl:2015gw, Dagan1989:ft}). The existence of a unique pathwise variational solution $u$ to \cref{eq:model-rpde} follows by assuming sufficiently regular data and boundedness of the conductivity (\cite{Charrier:2012}). In the sequel we approximate QoIs using the MC FEM. That is, we shall consider the standard, continuous piecewise-linear FEM approximation $\bar{u}(\omega) \approx u(\omega)$ of the pathwise variational solution of \cref{eq:model-rpde} and then use a MC method to approximate a QoI.
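The MC FEM pipeline just described can be sketched in one dimension. The following toy setup is our own (i.i.d. lognormal element conductivities stand in for the correlated geostatistical model discussed below, and all parameter values are placeholders): each MC sample draws a conductivity, assembles the P1 stiffness matrix, solves, and records a point value of the solution, from which a mean QoI and a failure probability are estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

def fem_solve(a_elem, f=1.0):
    """P1 FEM for -(a u')' = f on (0,1), u(0) = u(1) = 0, with a piecewise constant per element."""
    n = len(a_elem)                   # number of elements; n - 1 interior nodes
    h = 1.0 / n
    K = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        K[i, i] = (a_elem[i] + a_elem[i + 1]) / h
        if i + 1 < n - 1:
            K[i, i + 1] = -a_elem[i + 1] / h
            K[i + 1, i] = -a_elem[i + 1] / h
    F = np.full(n - 1, f * h)         # load vector for constant f
    return np.linalg.solve(K, F)      # u at the interior nodes

n_elem, n_mc, k = 50, 200, 0.15
mid = (n_elem - 1) // 2               # interior node at x = 0.5

# sanity check: for a = 1, the 1D nodal values are exact, u(0.5) = 0.125
assert abs(fem_solve(np.ones(n_elem))[mid] - 0.125) < 1e-10

# Monte Carlo over lognormal conductivity samples
samples = np.array([fem_solve(np.exp(rng.normal(0.0, 1.0, n_elem)))[mid]
                    for _ in range(n_mc)])
qoi_mean = samples.mean()             # estimate of E[u(x0)]
p_fail = (samples > k).mean()         # estimate of P(u(x0) > k), cf. the failure probability above
```

Being non-intrusive, the UQ machinery developed later only needs the per-sample outputs of such a solver, not its internals.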
This approach is favored for the application of interest since the finite element solution $z = (\bar{u}_n)$ is a possibly high dimensional random vector due to the low-regularity of the conductivity field. A discrete projection of the conductivity field $y = (\bar{a}_n) \approx a(\omega)$ is then used when forming the stiffness matrix in the FEM solver. Other approaches, such as the stochastic Galerkin (\cite{MatthiesKeese:2005}) or stochastic collocation (\cite{BabuskaNobileTempone:2007}) methods are typically advantageous when the conductivity possesses more regularity; being non-intrusive, our approach is also applicable using these methods. Regarding the analysis of FEM errors, \emph{a priori} estimates are available in \cite{CharrierScheichlTeckentrup:2013} and computable goal-oriented estimates for problems with rough stochastic conductivities are in \cite{Halletal:2016sc}. \subsubsection{QoIs} The goal of problem \cref{eq:model-rpde} is to estimate a QoI, \begin{equation*} \E_P} \newcommand{\Qb}{Q [g(u)] = \int_\Omega g(u(\omega,x)) \ddP} \newcommand{\Qb}{Q, \end{equation*} for a given goal functional $g$ in support of a decision task. The numerical experiments in this work focus on goal functionals that yield statistics of point estimates and indicator goal functionals $g(u)= \indic_A$ that correspond to failure probabilities, \begin{equation} \E_P} \newcommand{\Qb}{Q[\indic_{A}] = P} \newcommand{\Qb}{Q(A)\label{eq:failure-probability}, \end{equation} for events $A \subset \Omega$, such as $A :=\{\omega : u(\omega, x_0) > k \}$ that the solution at $x_0 \in \Gamma$ exceeds a threshold $k$. For example, in hydrology a QoI might be related to the average water head or the average concentration of a pollutant in a particular region of an aquifer. A QoI that is meant to provide a quantitative description of the system can be highly sensitive to distributional assumptions on the underlying geostatistical model. 
This poses a key challenge for the support of decision tasks and highlights the importance of understanding the propagation of modeling error due to the geostatistical model in order to quantify uncertainty in a QoI. \subsubsection{Geostatistical model}% \label{sec:geostatistical-model}% Subsurface materials are observed to be heterogeneous over each of the problem scales related to experimental measurements (\cite{Dagan:1986}). In applications to physically relevant settings, fully resolving a model for $a$ requires more data than is possible to acquire. Uncertainty in the problem data is subsequently captured through a geostatistical model for $a$ that typically takes the form of a log-normal random field that possesses low regularity. Such a field is characterized by its mean, $\mu(x) = \E[\log a(x)]$, and covariance, \begin{equation*} \label{eq:spatial-correlations} C(x,\tilde{x}) = \cov[\log a(x), \log a(\tilde{x})], \qquad \text{for } x, \tilde{x} \in \Gamma. \end{equation*} These quantities describe the spatial structure in terms of statistics between different locations and are nontrivial to model for heterogeneous media. In applications, these quantities are typically further assumed to have a parametric form where the hyperparameters that describe $\mu$ and $C$ are representative of physical properties that can theoretically be measured using relevant data. Inference procedures for the hyperparameters range from classical geostatistical methods, that rely on fitting variograms using likelihoods or moments, to more advanced Bayesian methodologies (\cite{GelfandEtAl:2010hb}). For applications in physically relevant settings, the complex physics of subsurface porosity and sparse data diminish our confidence in the form of the geostatistical model. In the next section, we further motivate and formulate these challenges. 
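As an illustration of the sampling step for such a field, a Gaussian log-conductivity with a stationary exponential covariance $C(x,\tilde{x}) = \sigma^2 e^{-|x - \tilde{x}|/\ell}$ (one common parametric choice; the hyperparameter values below are placeholders, not inferred from data) can be drawn by a Cholesky factorization of the covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

def lognormal_field(x, mu=0.0, sigma=1.0, ell=0.1):
    """Sample a = exp(g), with g Gaussian: mean mu, exponential covariance with length scale ell."""
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)
    L = np.linalg.cholesky(C + 1e-12 * np.eye(len(x)))  # small jitter for numerical stability
    g = mu + L @ rng.standard_normal(len(x))
    return np.exp(g)

x = np.linspace(0.0, 1.0, 200)
a = lognormal_field(x)   # one realization of the conductivity on the grid
```

For large grids or short correlation lengths, Cholesky sampling is typically replaced by Karhunen--Lo\`eve truncations or circulant embedding, but the interface (grid in, field realization out) is the same.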
\subsection{UQ challenges}% \label{sec:uq-challenges}% The rough log-normal random fields that are typically used to capture the heterogeneity of subsurface materials produce computational challenges at the level of the solver and when analyzing model outputs (\cref{fig:layers-subsurface-flow-model}). For example, generalized polynomial chaos methods exhibit slow convergence due to the log-normality (\cite{ErnstEtAl:2012pc}). As noted in \cite{CliffeGilesScheichlTeckentrup:2011}, the correlation lengths involved in the geostatistical model are typically short with respect to the problem domain, hence stochastic Galerkin methods yield high dimensional QoIs, but still too large to guarantee a separation of scales, an obstacle to stochastic homogenization techniques. In addition, the complex physics also affects the acquisition and reliability of data. The data used to identify the salient features of the geostatistical model are typically sparse due to costs arising from a number of confounding factors. In this context, data may come from either empirical or experimental measurements collected by domain scientists at different scales of the problem. Then relevant data, for instance, the permeability field, needs to be assimilated and inferred from these measurements. For example, a geostatistical model for the conductivity might be based on empirical porosity and water retention measurements from a controlled laboratory experiment in combination with variogram fitting methods (\cite{Durner:1994hc}) or on hydraulic head measurements from \emph{in situ} field tests in combination with a Bayesian inverse problem (\cite{McLaughlinTownley:1996ip}). In both cases, the geostatistical model is determined from data whose availability is limited and whose reliability should be questioned. The combination of these factors influences our confidence in the form of the geostatistical model.
We therefore view the geostatistical model as a source of model-form or epistemic uncertainty. Although theoretically reducible, eliminating epistemic uncertainty entirely for subsurface flow is not feasible due to the prohibitive cost of collecting sufficient data. Additionally, the physical system has other sources of randomness such as aleatoric uncertainty, or variability, in the model inputs, solver, and outputs and may have independent sources of epistemic uncertainty that arise in each of these layers. All of this randomness propagates to the QoI and influences the decision task. A less refined approach, in contrast to the one considered here, is to assume that epistemic uncertainty can be modeled by aleatoric uncertainty which is typically the case in a standard MC approximation when one assumes that a distribution for each uncertain aspect of the system exists (\cite{OberkampfEtAl:2004cp, DupuisChowdhary:2013ae}). The main UQ challenges postulated above are summarized as follows: \begin{compactitem}[\qquad $\circ$] \item represent and distinguish various sources of uncertainty in the system; \item propagate model-form uncertainty; \item inform decision tasks through robust data-informed deliverables; \item quantify impact of sparse data on predictions in a goal-oriented framework; and \item analyze heterogeneity of subsurface physics. \end{compactitem} The final challenge point, related to the solver layer of \cref{fig:layers-subsurface-flow-model}, leads to considerations that impact the random PDE model, such as UQ for multi-phase flows, and are beyond the scope of the present work. The focus of the present work is instead toward addressing the first four challenge points, that have important implications for the inputs, outputs, further analysis, and data layers of the model in \cref{fig:layers-subsurface-flow-model}. 
In particular, we address the propagation of model-form uncertainty by employing hybrid representations of information divergences, introduced in the next section, that allow us to represent and distinguish various sources of randomness. \section{Hybrid information divergences and model-form uncertainty}% \label{sec:hybr-inform-diverg}% \subsection{Propagation of model-form uncertainty} A key challenge addressed in this work concerns the propagation of model-form uncertainty within the goal-oriented framework in \cref{fig:layers-subsurface-flow-model}. We view the propagation of model-form uncertainty as a modeling error, \begin{equation} \label{eq:weak-error-observables} \mathcal{E}(\Qb, P} \newcommand{\Qb}{Q; g(\bar{u})) := \E_\Qb[g(\bar{u})] - \E_P} \newcommand{\Qb}{Q[g(\bar{u})], \end{equation} the weak error between a QoI evaluated under a nominal measure $P} \newcommand{\Qb}{Q$ and an alternative measure $\Qb$. We then seek representations for $P} \newcommand{\Qb}{Q$ and $\Qb$ that will allow us to track the propagation of uncertainty from model inputs to outputs as illustrated in \cref{fig:propagation-uncertainty}. We observe that although the propagation of model-form uncertainty is controlled by \cref{eq:weak-error-observables}, quantifying the propagation directly using \cref{eq:weak-error-observables} will be computationally infeasible (cf.\ \cref{cor:info-budget} below). \begin{figure} \centering \includegraphics[width=0.8\textwidth]{FIG2.pdf} \caption{Detail of the dashed box in \cref{fig:layers-subsurface-flow-model} that demonstrates model-form uncertainty in the geostatistical model propagating through the random PDE solver, with distribution \cref{eq:solution-distribution}, to a QoI. 
The propagation is controlled by the weak error \cref{eq:weak-error-observables} between a QoI evaluated under a nominal and alternative model with distributions \cref{eq:nominal-alternative-distributions}.} \label{fig:propagation-uncertainty} \end{figure} As the conductivity field $a$ is a source of epistemic uncertainty, the distribution of $y = (\bar{a}_n)$ is not known and it will therefore be of natural interest to compare the effects of different solver inputs as suggested in \cref{fig:propagation-uncertainty}. To this end we consider \begin{equation} \label{eq:nominal-alternative-distributions} \text{nominal } y \sim \gamma \quad\text{or}\quad \text{alternative } y \sim \lambda, \end{equation} for measures $\gamma$ and $\lambda$ on a Polish space $\mathcal{Y}$, arising from the geostatistical model(s) proposed for $a$. The discrete solution $z = (\bar{u}_n)$, resulting from a given algorithm for solving the random PDE, represents a source of variability as it is modeled by a stochastic process. Theoretically, $z$ has a known distribution given by $\nu$, on a Polish space $\mathcal{Z}$, that depends on the solver and $\bar{a}$. In general, even if the distribution of $a$ is prescribed it may not be tractable to sample from $\nu$ directly. Instead, the conditional distribution of $\bar{u}$ given $y$, that is, \begin{equation} \label{eq:solution-distribution} z = (\bar{u}_n) \sim \nu(\dd z \mid y), \end{equation} is a computable quantity that we will evaluate using a FEM solver as indicated in \cref{fig:propagation-uncertainty}. Stating our objective more formally, we consider \cref{eq:weak-error-observables} between a QoI sampled with respect to the nominal measure $P = \gamma \otimes \nu$ and alternative measure $Q = \lambda \otimes \nu$. 
The preceding analysis suggests that bounds for model outputs that distinguish these different sources of uncertainty can be obtained by concentrating on a special case of the hybrid bounds proposed in \cite{DupuisChowdhary:2013ae} that rely on the conditional distribution \cref{eq:solution-distribution}. \subsection{Representing and distinguishing sources of uncertainty}% \label{sec:hybr-perf-meas}% To achieve the desired bounds, we represent $g(\bar{u})$ in terms of a variational form \begin{equation*} g(\bar{u}) = h(y,z) \end{equation*} that distinguishes between the epistemic variable $y$, related to $\bar{a}$, and the aleatoric variable $z$, related to $\bar{u}$. That is, for $Q = \lambda \otimes \nu$ we define a hybrid performance measure $h$ by \begin{equation} \label{eq:performance-meas} \E_{\lambda\otimes\nu} [h] = \int_{\mathcal{Y}}\int_{\mathcal{Z}} h(y,z) \nu(\dd z \mid y) \lambda(\dd y) = \int_\Omega g(\bar{u}) \dd Q = \E_Q [g(\bar{u})]. \end{equation} For simplicity in the presentation we assume that $z = (\bar{u}_n)$ is only a source of variability and that $u_0$ and $f$ in \cref{eq:model-rpde} are deterministic; however, observe that the representation \cref{eq:performance-meas} can easily be extended to characterize various uncertain aspects of the system by introducing multiple integrals that aggregate and distinguish each independent source of randomness. A key observation is that the hybrid performance measure \cref{eq:performance-meas} can then be expressed as \begin{equation} \label{eq:equality-h-Hg} \E_{\lambda\otimes\nu} [h] = \E_\lambda [H^g]\,, \end{equation} where the random variable $H^g(y_j)$ is the marginal performance measure given by \begin{equation} \label{eq:hybrid-pm} H^g(y_j) = \int_\mathcal{Z} h(y_j,z) \nu(\dd z \mid y_j) \,, \end{equation} for $y_j \in \mathcal{Y}$. $H^g$ encodes the propagation of the model-form uncertainty to the ensemble solution of the random PDE (the outputs in \cref{fig:propagation-uncertainty}). 
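The marginal performance measure \cref{eq:hybrid-pm} admits a toy, non-intrusive nested Monte Carlo sketch. The Gaussian "solver" stand-in, the quadratic goal functional, and the sample sizes below are our own illustrative assumptions, not part of the subsurface experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

def marginal_pm(y, g, sample_z_given_y, n_inner=20_000):
    # Inner MC estimate of H^g(y) = int_Z h(y, z) nu(dz | y); the solver
    # enters only through sample_z_given_y, i.e. non-intrusively.
    z = sample_z_given_y(y, n_inner)
    return g(z).mean()

# Hypothetical stand-in for the solver layer: z | y ~ N(y, 0.1) with
# g(z) = z^2, so the exact marginal is H^g(y) = y^2 + 0.1.
sample_z = lambda y, n: y + np.sqrt(0.1) * rng.normal(size=n)
y = 1.5
estimate = marginal_pm(y, lambda z: z**2, sample_z)
assert abs(estimate - (y**2 + 0.1)) < 0.05
```

An outer MC loop over samples $y_j \sim \gamma$ then yields the sample mean of $H^g$, with each inner evaluation replaced by a call to the PDE solver in the actual application.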
Approximation of a QoI depending on the goal functional $g$ amounts to a standard MC estimate for the sample mean of $H^g$ where $H^g$ is computed in a non-intrusive manner using any available algorithm for the solver. Next we introduce an important information theoretic tool that will allow us to measure differences between models suggested for epistemic variables. \subsection{Relative entropy in UQ} The relative entropy, or Kullback--Leibler divergence, quantifies the discrepancy between two distributions (see for example \cite[\S A.5]{RasmussenWilliams:2006gp}). Given probability measures $\gamma$ and $\lambda$, such that $\lambda \ll \gamma$, the relative entropy of $\lambda$ with respect to $\gamma$ is \begin{equation*} \label{eq:re-def} \RE(\lambda \mid \gamma) = \int \log\frac{\dd \lambda (y)}{\dd \gamma (y)} \lambda(\dd y), \end{equation*} and we note that $\RE(\lambda \mid \gamma) \geq 0$ with equality if and only if $\lambda = \gamma$ almost everywhere (the Gibbs inequality). For distributions that belong to a general exponential family, closed formulas exist for the relative entropy (\cite{GilEtAl:2013rd,LieseVajda:2007sd}). In particular, for multivariate Gaussian distributions, $\lambda = \mathcal{N}(\mu_i, \Sigma_i)$ and $\gamma = \mathcal{N}(\mu_j, \Sigma_j)$, the relative entropy is given by \begin{equation} \label{eq:re-multivar-norm} \RE(\lambda \mid \gamma) = \frac{1}{2}\left(\log |\Sigma_j| - \log |\Sigma_i| + \trace\left(\Sigma_j^{-1}\Sigma_i\right) + (\mu_i - \mu_j)^\top \Sigma_j^{-1} (\mu_i - \mu_j) - d\right), \end{equation} where $d$ denotes the dimension of the Gaussian and $|\cdot|$ denotes the determinant. The relative entropy is related to the observable $H^g$ via the Legendre transform of the cumulant generating functional $\log \E_\gamma [e^{H^g}]$. 
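The closed formula \cref{eq:re-multivar-norm} is straightforward to evaluate numerically; a minimal sketch (the `slogdet` routine is used in place of direct determinants for numerical stability):

```python
import numpy as np

def gaussian_relative_entropy(mu_i, Sigma_i, mu_j, Sigma_j):
    # R(lambda | gamma) for lambda = N(mu_i, Sigma_i), gamma = N(mu_j, Sigma_j),
    # term by term as in the closed formula for multivariate Gaussians.
    d = len(mu_i)
    Sj_inv = np.linalg.inv(Sigma_j)
    dm = mu_i - mu_j
    logdet_i = np.linalg.slogdet(Sigma_i)[1]
    logdet_j = np.linalg.slogdet(Sigma_j)[1]
    return 0.5 * (logdet_j - logdet_i + np.trace(Sj_inv @ Sigma_i)
                  + dm @ Sj_inv @ dm - d)

# Gibbs inequality: the divergence vanishes when the measures coincide,
# and a pure mean shift m under unit covariance gives |m|^2 / 2.
mu, Sigma = np.zeros(3), np.eye(3)
assert abs(gaussian_relative_entropy(mu, Sigma, mu, Sigma)) < 1e-12
shifted = np.array([1.0, 0.0, 0.0])
assert abs(gaussian_relative_entropy(shifted, Sigma, mu, Sigma) - 0.5) < 1e-12
```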
This well-known connection between the relative entropy and the cumulant generating functional, a standard fact from large deviations theory, provides a representation that combines data-informed quantities with observable-dependent quantities and will be indispensable for studying the propagation of model-form uncertainty in \cref{fig:propagation-uncertainty}. The proof of the following lemma is well known and available in \cite{DupuisEllis:1997ld}. \begin{lemma} \label{lem:bound-variational-form} Let $H^g$ be measurable and bounded and let $\gamma$ be a probability measure on $(\Omega, \mathcal{F})$. Then \begin{equation*} \log \E_{\gamma} [ e^{H^g}] = \sup_{\lambda \ll \gamma} \left\{ \E_\lambda [H^g] - \RE(\lambda \mid \gamma) \right\}. \end{equation*} \end{lemma} \Cref{lem:bound-variational-form} is the key ingredient in the proof of \cref{thm:information-divergence}, concerning hybrid information divergences, that follows in the next section. \subsection{Goal-oriented hybrid divergences for modeling error} We begin by defining a hybrid risk sensitive performance measure, \begin{equation} \label{eq:risk-sensitive-pm} \Lambda_\gamma (c, H^g) := \frac{1}{c} \log \int_{\mathcal{Y}} e^{c\left(\int_{\mathcal{Z}} h(y,z) \nu(\dd z \mid y) - \E_\gamma [H^g] \right)} \gamma(\dd y) = \frac{1}{c} \log \E_\gamma [e^{c(H^g - \E_\gamma[H^g])}], \end{equation} that is a functional of the marginal performance measure $H^g$ evaluated with respect to the nominal measure $\gamma$. As suggested by the last equality in \cref{eq:risk-sensitive-pm}, $\Lambda_\gamma$ is a weighted cumulant generating functional for the centered observable $c(H^g - \E_\gamma[H^g])$. Using this notation we also define the goal-oriented hybrid information divergence \begin{equation} \label{eq:xi} \Xi (\lambda \mid \gamma; H^g) := \inf_{c>0} \left\{ \Lambda_\gamma(c,H^g) + \frac{1}{c} \RE(\lambda \mid \gamma)\right\}. 
\end{equation} That \cref{eq:xi} is a divergence, i.e.\ $\Xi(\lambda \mid \gamma ; H^g) \geq 0$ and $\Xi(\lambda \mid \gamma ; H^g) = 0$ if and only if $\lambda = \gamma$ almost everywhere or $H^g$ is constant $\gamma$-a.s., follows from the proof of Theorem~2.7 in \cite{DupuisEtAl:2015ps}. A bound for the weak error \cref{eq:weak-error-observables} specialized to \cref{eq:xi} is then formulated as follows. \begin{theorem}[Hybrid Information Divergence] \label{thm:information-divergence} For a probability measure $\gamma$, measurable $H^g$, and finite $\Lambda_\gamma(c, H^g)$ in a neighborhood about $c=0$, the bound \begin{equation} \label{eq:xi-bound} -\Xi (\lambda \mid \gamma; -H^g) \leq \E_{Q}[g(\bar{u})] - \E_{P}[g(\bar{u})] \leq \Xi (\lambda \mid \gamma; H^g) \end{equation} holds for any probability measure $\lambda$ such that $\RE(\lambda \mid \gamma) < \infty$ where $Q = \lambda \otimes \nu$ and $P = \gamma \otimes \nu$. Further, the goal-oriented divergence can be linearized with respect to the relative entropy, \begin{equation} \label{eq:xi-linearization} \Xi (\lambda\mid \gamma; \pm H^g) = \sqrt{\var_\gamma\left[H^g\right]}\sqrt{2\RE(\lambda\mid\gamma)} + O(\RE(\lambda\mid\gamma)). \end{equation} \end{theorem} \begin{proof} For any bounded and measurable observable $H^g$, replacing $H^g$ in \cref{lem:bound-variational-form} with $c(H^g - \E_\gamma[H^g])$, for $c > 0$, yields \begin{equation*} \log \E_\gamma [e^{c(H^g-\E_\gamma[H^g])}] = \sup_{\lambda \ll \gamma} \left\{ c (\E_\lambda [H^g] - \E_\gamma [H^g]) - \RE (\lambda \mid \gamma) \right\}. \end{equation*} This variational characterization implies \cref{eq:xi-bound} for any measurable and bounded $H^g$ and, following an argument given in \cite[p.~86]{DupuisEtAl:2015ps}, this bound can be extended to any measurable $H^g$. 
The linearization \cref{eq:xi-linearization} arises from an asymptotic expansion at $\RE(\lambda \mid \gamma) = 0$ and the details follow from the proofs of Lemma 2.11 and Theorem 2.12 in \cite{DupuisEtAl:2015ps}. \end{proof} In contrast to classical bounds derived from the Pinsker or Chapman--Robbins inequalities (\cite{CoverThomas:2006in, Tsybakov:2009np}), the hybrid information divergence in \cref{thm:information-divergence} is tight in that equality is attainable for a suitable $\lambda$ within a given relative entropy distance of the nominal model (for a full discussion on tightness see \cite{GourgouliasEtAl:2017aa}). We also note that the definition of the divergence in \cref{eq:xi} depends on $H^g$, a quantity intimately related to the propagation of epistemic uncertainty in \cref{fig:propagation-uncertainty} that is computable in a non-intrusive manner. In the sequel, we write \begin{equation*} \Xi_{+} := \Xi\, , \quad \text{and} \quad \Xi_{-}(\cdot \mid \cdot; H^g) := -\Xi(\cdot \mid \cdot; -H^g) \end{equation*} to have a short notation for distinguishing the upper bound from the lower bound. Next, we emphasize that \cref{eq:xi-bound} applies to all alternative models within an information budget. \begin{corollary} \label{cor:info-budget} Let the assumptions of \cref{thm:information-divergence} hold and let $\rho := \RE(\lambda \mid \gamma)$. Then \begin{equation} \label{eq:uq-bounds-rho} - \inf_{c>0} \left\{ \Lambda_\gamma(c,-H^g) + \frac{\rho}{c} \right\} \leq \E_Q[g(\bar{u})] - \E_P[g(\bar{u})] \leq \inf_{c>0} \left\{ \Lambda_\gamma(c,H^g) + \frac{\rho}{c} \right\} \end{equation} for all $Q = \eta \otimes \nu$ such that $\RE(\eta \mid \gamma) \leq \rho$. \end{corollary} To make a statement equivalent to \cref{cor:info-budget} relying on \cref{eq:weak-error-observables} alone is not computationally feasible as it would amount to evaluating \cref{eq:weak-error-observables} for all $Q$ such that $\RE(\eta \mid \gamma) \leq \rho$. 
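As a toy illustration of \cref{eq:xi-bound} (our own example, separate from the subsurface experiments): take $H^g(y) = y$ with nominal $\gamma = \mathcal{N}(0,1)$ and alternative $\lambda = \mathcal{N}(m,1)$, so that the weak error is exactly $m$, $\RE(\lambda \mid \gamma) = m^2/2$, and $\Lambda_\gamma(c, H^g) = c/2$ by the Gaussian cumulant generating function; the bound is then attained:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# H^g(y) = y, gamma = N(0,1), lambda = N(m,1): weak error = m,
# R(lambda|gamma) = m^2/2, Lambda_gamma(c, H^g) = c/2.
m = 0.7
rho = m**2 / 2.0
xi_plus = minimize_scalar(lambda c: c / 2.0 + rho / c,
                          bounds=(1e-6, 10.0), method="bounded").fun

weak_error = m  # E_lambda[H^g] - E_gamma[H^g]
assert weak_error <= xi_plus + 1e-6
# For this linear observable the bound is tight: Xi_+ = |m| exactly,
# which also matches the linearization sqrt(Var_gamma[H^g]) * sqrt(2 rho).
assert abs(xi_plus - abs(m)) < 1e-4
```

In this example the higher-order cumulants vanish, so the linearization \cref{eq:xi-linearization} is exact as well.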
In contrast, calculating \cref{eq:uq-bounds-rho} only requires the nominal measure $P = \gamma \otimes \nu$. \begin{remark} \label{rmk:data-processing} From the Data Processing Inequality (see for example \cite{GilEtAl:2013rd}), \begin{equation*} \label{eq:re-data-processing-eq} \RE (\lambda \mid \gamma) = \RE (T(\lambda)\mid T(\gamma)), \end{equation*} for any invertible transformation $T$. Thus, we can replace the distribution of the conductivity $\bar{a}$ with the distribution of $\log \bar{a}$ in any relative entropy statement without a loss of information. \end{remark} \begin{remark} The choice of the form of the information divergence needs to be aligned with our particular modeling goals. The original goal-oriented information divergence from \cite{DupuisEtAl:2015ps} defined on product measures has the form \begin{equation} \label{eq:xi-orig} \Xi(\lambda\otimes\nu \mid \gamma\otimes\nu; h) := \inf_{c>0} \left\{ \Lambda_{\gamma\otimes\nu}(c,h) + \frac{1}{c} \RE(\lambda \otimes\nu \mid \gamma \otimes\nu)\right\}, \end{equation} where $\Lambda_{\gamma\otimes\nu}$ is the standard risk sensitive performance measure given by \begin{equation} \label{eq:standard-pm} \Lambda_{\gamma\otimes\nu}(c,h) = \frac{1}{c} \log \E_{\gamma\otimes\nu}[e^{c(h-\E_{\gamma\otimes\nu}[h])}]. \end{equation} The performance measure \cref{eq:standard-pm} is not amenable to the present analysis concerning the propagation of model-form uncertainty as it incorporates epistemic and aleatoric variables in a balanced manner that does not conform with the asymmetrical way that we view these different sources of uncertainty. Moreover, in the present setting it may not be possible to sample $h$ directly while $H^g$ can be evaluated non-intrusively. 
To make a connection between the original divergence and hybrid divergence, we observe that \cref{eq:xi-orig} arises from a representation for $h$ whereas \cref{eq:xi} arises from a representation for $H^g$ (cf.\ \cref{eq:equality-h-Hg}). While both \cref{eq:xi-orig} and \cref{eq:xi} are valid, we have that \begin{equation*} \Xi (\lambda \mid \gamma; H^g) \le \Xi(\lambda\otimes\nu \mid \gamma\otimes\nu; h) \end{equation*} due to Jensen's inequality applied to the exponential of \cref{eq:hybrid-pm}. That is, for the present application \cref{eq:xi-orig} contains more uncertainty than the hybrid divergence \cref{eq:xi}. We refer to these bounds as hybrid following the terminology introduced in \cite{DupuisChowdhary:2013ae} as they allow one to consider different levels of confidence in a model. \end{remark} \subsection{Uncertainty intervals}% \label{sec:uncertainty-intervals}% We end this section with another way of viewing the hybrid information divergences that may be useful for data-informed prediction. For a given nominal model $P = \gamma \otimes \nu$, the hybrid information divergence bound \cref{eq:xi-bound} can be rewritten as \begin{equation*} \E_P[g(\bar{u})] - \Xi(\lambda \mid \gamma; - H^g) \leq \E_Q[g(\bar{u})] \leq \E_P[g(\bar{u})] + \Xi(\lambda \mid \gamma; H^g), \end{equation*} yielding an uncertainty interval for a QoI evaluated with respect to an alternative model $Q = \lambda \otimes \nu$, i.e.\ $\E_Q[g(\bar{u})] \in \E_P[g(\bar{u})] \pm \Xi(\lambda \mid \gamma; \pm H^g)$. For failure probabilities \cref{eq:failure-probability} (and the goal functionals $g_1$ and $g_2$ to appear in \cref{sec:screening-and-sa,sec:data-informed-bounds}), the hybrid information divergences have a particularly simple form that gives a confidence interval for the $Q$-probability of failure. 
\begin{theorem}[Uncertainty Interval for $Q$-failure] \label{thm:uq-interval-failure} For a nominal model $P = \gamma \otimes \nu$, let $g(\bar{u}) = \indic_A$ for $A \subset \Omega$ with $P(A) = p$ and let $\rho := \RE(\lambda \mid \gamma)$. Then, \begin{equation*} - \min_{c>0} \left\{ \frac{1}{c}\log(pe^{-c}+1-p) + \frac{\rho}{c}\right\} \leq Q(A) \leq \min_{c>0} \left\{ \frac{1}{c} \log(pe^c+1-p) + \frac{\rho}{c} \right\}, \end{equation*} for every alternative model $Q = \eta \otimes \nu$ such that $\RE(\eta \mid \gamma) \leq \rho$. \end{theorem} \begin{proof} The risk sensitive performance measure is \begin{equation} \label{eq:Lambda-failure-prob} \Lambda_\gamma(c, \pm H^g) = \frac{1}{c} \log \E_\gamma [e^{\pm c(H^g - \E_\gamma[H^g])}] = \frac{1}{c} \log (pe^{\pm c} + 1 - p) \mp p \end{equation} and thus the bounds follow immediately from \cref{eq:xi-bound}. \end{proof} The remainder of this paper focuses on applications of the hybrid information divergences to UQ. First, in \cref{sec:screening-and-sa} we apply \cref{thm:information-divergence} to derive bounds for parametric sensitivity analysis. Then in \cref{sec:data-informed-bounds} we examine a more exploratory UQ task and derive bounds for model misspecification due to sparse data. Finally in \cref{sec:efficient-sampling-risk}, we leverage the connection between certain concentration inequalities and the hybrid divergences for efficient computing. \section{The simplest UQ application: parametric sensitivity analysis}% \label{sec:screening-and-sa}% Presently we apply the tools presented in \cref{sec:hybr-inform-diverg} to sensitivity analysis when the model inputs are specified by a parametric geostatistical model. This setting represents the simplest UQ application of the hybrid information divergences in that we have tight control over the perturbations and hence over the alternative models under consideration. 
Although simple, the example nonetheless represents an important UQ task and allows us to demonstrate the tightness and robustness of the bounds derived from the hybrid information divergences. In principle, this approach can be applied to study non-parametric models, i.e.\ models that are infinite dimensional in the parameter space, such as a polynomial chaos representation of the conductivity in combination with a stochastic Galerkin method for the solver. However, we note that for our specific application of interest with a lognormal conductivity such an approximation is not guaranteed to converge due to Proposition 4.2 in \cite{ErnstEtAl:2012pc}. In the section that follows, we begin by providing notation and motivation for the parametric sensitivity analysis. In \cref{sec:si-and-small-perturbations} we give \cref{cor:screening-bound} containing a cheaply computed bound that can be used to efficiently screen for insensitive parameter directions. Then in \cref{sec:sa-reduced-model}, we apply \cref{thm:information-divergence} to obtain accurate and robust bounds for sensitivity analysis. Finally in \cref{sec:computability-of-bounds} we provide details on the implementation. We emphasize that although a one-dimensional example problem is considered, the techniques demonstrated easily scale to higher dimensions. \subsection{Parametric geostatistical models and sensitivity indices}% \label{sec:motivation-sa} For a given probability space $(\Omega, \mathcal{F}, P)$ we consider the two-point boundary value problem \begin{equation} \label{eq:1d-rpde} -(a^\theta (\omega, x) u^\prime(\omega, x))^\prime = 1, \qquad \text{for } x \in [0,1], \end{equation} subject to $u(\omega, 0) = 0$ and $a^\theta(\omega, 1) u'(\omega, 1) = 1$, where randomness enters only through a scalar-valued log-normal process $a^\theta$ that depends on a vector of hyperparameters $\theta \in \mathbf{R}^k$. 
In particular, we consider $\log a^\theta$ with mean $\mu$ and squared-exponential type two-point covariance function $C$ given by \begin{equation} \label{eq:cov-se-nugget} C(r) = \begin{cases} \sigma^2 e^{-|r/\sqrt{2}\ell|^2}, & \text{for } r > 0,\\ \tau^2 + \sigma^2, & \text{for } r = 0, \end{cases} \end{equation} where $r = |x - \tilde{x}|$ for $x,\tilde{x} \in [0,1]$. For the mean and covariance above, $a^\theta$ is a stationary, isotropic random field where the hyperparameters of interest are $\theta = (\mu, \sigma^2, \ell, \tau^2) \in \mathbf{R}^4$. In applications, these hyperparameters have geostatistical interpretations that play a role in fitting the model for $a^\theta$ from data; $\mu$ is related to the overall trend, $\sigma^2$ is related to the sill measurement, $\ell$ is related to the spatial correlation length, and $\tau^2$ is related to the nugget effect or microscale variability (\cite{GelfandEtAl:2010hb}). The sample paths of the process $a^\theta$ exhibit qualitatively different behavior across a range of hyperparameter values and it is therefore natural to question the sensitivity of a QoI with respect to parametric modeling assumptions on the conductivity field. With a view toward employing the hybrid information divergences in \cref{sec:hybr-inform-diverg}, we denote the distributions $(\bar{a}^\theta_n) \sim \gamma$ and $(\bar{a}^{\theta'}_n) \sim \gamma'$ where $\theta' = \theta + \epsilon v$ is a small perturbation for $\epsilon >0$ in the direction $v \in \mathbf{R}^4$ with $|v|=1$. Then we consider the joint probability measures $P^\theta = \gamma \otimes \nu$ and $P^{\theta'} = \gamma' \otimes \nu$ that correspond to the nominal and perturbed parameters of the geostatistical model where we denote the distribution of the corresponding finite element solution by $(\bar{u}_n) \sim \nu(\dd z \mid \cdot)$. 
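For concreteness, sample paths of $a^\theta$ with the covariance \cref{eq:cov-se-nugget} can be drawn from a Cholesky factorization of the covariance matrix on a grid; the grid size and hyperparameter values below are illustrative choices of our own, not those of the experiments:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_conductivity(x, mu, sigma2, ell, tau2, n_samples):
    # Gaussian field G with mean mu and squared-exponential-plus-nugget
    # covariance C(r); the conductivity samples are a = exp(G).
    r = np.abs(x[:, None] - x[None, :])
    C = sigma2 * np.exp(-(r / (np.sqrt(2.0) * ell)) ** 2) + tau2 * np.eye(len(x))
    L = np.linalg.cholesky(C)  # the nugget tau2 keeps C positive definite
    G = mu + (L @ rng.normal(size=(len(x), n_samples))).T
    return np.exp(G)

x = np.linspace(0.0, 1.0, 64)
a = sample_conductivity(x, mu=0.8, sigma2=4.0, ell=0.05, tau2=0.045,
                        n_samples=500)
assert a.shape == (500, 64) and (a > 0).all()
```

Note that the diagonal of `C` equals $\sigma^2 + \tau^2$, matching the $r = 0$ case of \cref{eq:cov-se-nugget}.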
We would like to understand the sensitivity of $\E_{P^\theta}[g(\bar{u})]$ with respect to distributional assumptions on $P^\theta$ and in particular to quantify worst-case scenarios concerning this sensitivity with a view toward informing decision tasks. For a given goal functional $g$, we define the sensitivity index, \begin{equation} \label{eq:parametric-si} \mathcal{S} (v, \theta; g) = \theta \lim_{\epsilon \to 0} \frac{ \E_{P^{\theta+\epsilon v}}[g(\bar{u})] - \E_{P^{\theta}}[g(\bar{u})]}{\epsilon}, \end{equation} that describes the sensitivity of a given goal functional $g$ with respect to $\theta$ in the direction $v$, provided $\mathcal{S}$ depends continuously on $\theta$. In the limit of small $\epsilon$, $\mathcal{S}$ converges to the logarithmic derivative $\partial_{\log \theta} \E_{P^\theta}[g(\bar{u})] = \theta \partial_\theta \E_{P^\theta}[g(\bar{u})]$, a scaling chosen to control for differences in the orders of magnitude of the hyperparameters. Computing a classical gradient approximation of $\mathcal{S}$ in each parameter direction for each QoI represents a nontrivial computational cost even for the simple model problem \cref{eq:1d-rpde}. A naive finite difference approximation of the sensitivity index would require sampling with respect to both $P^\theta$ and $P^{\theta^\prime}$ where each sample involves a call to a PDE solver for each direction $v$ in $\theta' = \theta + \epsilon v$. Moreover, such a gradient approximation introduces a bias error that must be taken into account; for a better approximation of the sensitivity, corresponding to small $\epsilon$, the variance of the approximation increases and therefore our confidence in it decreases. 
While reduced variance methods for gradient approximations exist (\cite{Glasserman:2003mc, GlassermanYao:1992gg}), our direction here is an altogether different one. In contrast, (\ref{eq:xi-bound}) in \cref{thm:information-divergence} yields tight, non-gradient based estimates for the sensitivity \cref{eq:parametric-si} that only need to be sampled with respect to the nominal model $P^\theta$. Before considering these more accurate bounds in \cref{sec:sa-reduced-model}, we first demonstrate in \cref{sec:si-and-small-perturbations} a cheaply computed bound, derived from \cref{eq:xi-linearization} in \cref{thm:information-divergence}, that can be used to screen for insensitive parameter directions. \subsection{Fast screening for small perturbations}% \label{sec:si-and-small-perturbations}% When the solver stage of the modeling process is computationally expensive, such as successive calls to a random PDE solver, efficiency can be gained by reducing the number of parameters to include in the full sensitivity analysis. Following from \cref{eq:xi-linearization} in \cref{thm:information-divergence}, we consider a linearization specialized to small perturbations that relies on the Fisher Information matrix (FIM). As the FIM can be computed cheaply, that is, without sampling, the linearized bound can be used to efficiently screen for insensitive parameter directions. We recall that the FIM for a parametric family of distributions $P^\theta$ is given by \begin{equation*} \label{eq:fim} \mathcal{I}(\theta) := \int_{\rset^d} \nabla_\theta \log p(x;\theta) (\nabla_\theta \log p(x;\theta))^\top p(x;\theta) \dd x, \end{equation*} where $p(x; \theta)$ is the density conditional on the value of $\theta$ (a classical definition from \cite{Wasserman:2013as}). 
\begin{corollary}[Efficient Screening] \label{cor:screening-bound} For a smooth parametric family $P^\theta$ and $\epsilon >0$, \begin{equation*} \label{eq:screening-bound} \frac{1}{\epsilon} |\E_{P^{\theta'}}[g(\bar{u})] - \E_{P^{\theta}}[g(\bar{u})] | \le \sqrt{\var_{\gamma}[H^g]} \sqrt{v^\top \mathcal{I}(\theta) v} + O(\epsilon) \end{equation*} where $\mathcal{I}(\theta)$ is the FIM associated with $P^\theta$. Hence \begin{equation*} |\mathcal{S}(v,\theta;g)| \leq \theta \sqrt{\var_{\gamma}[H^g]} \sqrt{v^\top \mathcal{I}(\theta) v}. \end{equation*} \end{corollary} \cref{cor:screening-bound} follows from the general non-infinitesimal linearization \cref{eq:xi-linearization} by noticing that the relative entropy has the expansion \begin{equation*} \RE(P^{\theta + \epsilon v} \mid P^\theta) = \frac{\epsilon^2}{2} v^\top \mathcal{I}(\theta)v + O(\epsilon^3) \end{equation*} when considering small perturbations to a smooth parametric family of probability measures; the complete proof follows from results in \cite{DupuisEtAl:2015ps}. A similar linearization has been used in chemical kinetics to screen for insensitive parameter directions in the situation where the number of parameters is large (\cite{ArampatzisEtAl:2015as, TsourtisEtAl:2015}). For both a multivariate normal distribution and log-normal distribution characterized by mean $\mu(\theta)$ and covariance $\Sigma(\theta)$, the $i,j$ component of the FIM can be expressed as \begin{equation} \label{eq:fim-log-norm} v_i^\top \mathcal{I}(\theta) v_j = \frac{\partial \mu^\top}{\partial \theta_i} \Sigma^{-1} \frac{\partial \mu}{\partial \theta_j} + \frac{1}{2} \trace\left(\Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_i} \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_j} \right), \end{equation} (see for example \cite{KomorowskiEtAl:2011sa}). 
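The trace term of \cref{eq:fim-log-norm} can be sketched numerically using central differences of $\Sigma(\theta)$; the grid, the hyperparameter values, and the restriction to parameters that do not affect the mean (so the first term of \cref{eq:fim-log-norm} drops out) are illustrative assumptions of our own:

```python
import numpy as np

def covariance(x, sigma2, ell, tau2):
    # Squared-exponential covariance with nugget, as in the text.
    r = np.abs(x[:, None] - x[None, :])
    return sigma2 * np.exp(-(r / (np.sqrt(2.0) * ell)) ** 2) + tau2 * np.eye(len(x))

def fim_diagonal(x, theta, i, h=1e-6):
    # 0.5 * tr(S^-1 dS_i S^-1 dS_i), with dSigma/dtheta_i approximated by
    # central differences; valid when the mean does not depend on theta_i.
    tp, tm = list(theta), list(theta)
    tp[i] += h
    tm[i] -= h
    dS = (covariance(x, *tp) - covariance(x, *tm)) / (2.0 * h)
    A = np.linalg.solve(covariance(x, *theta), dS)  # S^-1 dS_i
    return 0.5 * np.trace(A @ A)

x = np.linspace(0.0, 1.0, 50)
theta = [4.0, 0.05, 0.045]  # (sigma^2, ell, tau^2); illustrative values
J = [theta[i] * np.sqrt(fim_diagonal(x, theta, i)) for i in range(3)]
assert all(j > 0 for j in J)
```

Here `J` plays the role of the screening index defined next, restricted to the principal directions in $(\sigma^2, \ell, \tau^2)$, and requires no sampling.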
An analysis of the singular value decomposition of the FIM can then reveal arbitrary parameter directions that are relatively insensitive to perturbations. In \cref{fig:fim_SE_screening}, the screening index, \begin{equation} \label{eq:screening-index} J(i,i) = \theta_i \sqrt{ v_i^\top \mathcal{I}(\theta) v_i }, \end{equation} is calculated for $i \in \{\mu,\sigma^2,\ell,\tau^2\}$, the principal parameter directions corresponding to the diagonals of the FIM, and these indices are then compared across a range of nominal models for a fixed goal functional. \Cref{fig:fim_SE_screening} demonstrates the relative insensitivity of perturbations in $\mu$ and $\sigma^2$ over the range of nominal models where $\ell$ and $\tau^2$ vary for $\mu = 0.8$ and $\sigma^2 = 4$. $J$ is computed without sampling using expression \cref{eq:fim-log-norm} and identifies that the directions $\mu$ and $\sigma^2$ might be excluded from the full sensitivity analysis for the given goal functional since the screening index is small relative to the value for other directions. \begin{figure} \centering \includegraphics[scale=1]{FIG3.pdf} \caption{The index $J$ in \cref{eq:screening-index} depends on the FIM and can be computed cheaply (without sampling) thus providing an efficient method for screening parameters. In this instance, $J$ indicates the directions $\mu$ and $\sigma^2$ are insensitive compared to perturbations in $\ell$ and $\tau^2$ over nine different nominal models for a fixed QoI.} \label{fig:fim_SE_screening} \end{figure} \subsection{Robust bounds and worst-case scenarios}% \label{sec:sa-reduced-model}% Next we demonstrate bounds based on \cref{eq:xi-bound} in \cref{thm:information-divergence} that are more accurate than the linearized bounds at the cost of being more computationally expensive. 
To investigate the performance for parametric sensitivity analysis, we fix a nominal model $P^\theta$, with hyperparameters $\theta = (\mu=0.8, \sigma^2=4, \ell=0.005, \tau^2=0.045)$, and consider the sensitivity with respect to alternative models $P^{\theta+\epsilon v}$ corresponding to small perturbations in $\ell$ and $\tau^2$, which we denote by $\epsilon(\ell)$ and $\epsilon(\tau^2)$ (see also \cref{fig:mod1_relandscape}). We also fix the goal functionals $g_1(\bar{u}) = \indic_{\{\bar{u}(1) > 1.2\}}$, $g_2(\bar{u}) = \indic_{\{0.25 < \bar{u}(1) <0.75\}}$, and $g_3(\bar{u}) = \min (\bar{u}(1), 3)$. In \cref{fig:mod1_uq_bounds_ell,fig:mod1_uq_bounds_tau}, a scaled hybrid information divergence \cref{eq:xi} \begin{equation} \label{eq:xi-pm-star} \frac{\theta}{\epsilon} \, \Xi_{\pm} (\gamma' \mid \gamma; H^{g_i}), \end{equation} is compared to the reference quantity, \begin{equation} \label{eq:finite-diff} \hat{\Delta}(\epsilon, M; g_i) = \frac{\theta}{\epsilon} \left(E_{\gamma'}^M[g_i] - E_{\gamma}^M [g_i]\right), \end{equation} a finite difference approximation of the sensitivity $\mathcal{S}$ where \begin{equation} \label{eq:M-sample-avg} E_\gamma^M (f) = \frac{1}{M} \sum_{j=1}^{M} f(\omega_j) \end{equation} denotes the sample average based on $M$ independent and identically distributed samples of $f$ drawn with respect to $\gamma$. Each observation appearing in \cref{fig:mod1_uq_bounds_ell,fig:mod1_uq_bounds_tau} is based on the mean of $\num{e2}$ runs of $M=\num{e3}$ samples and the confidence intervals denote two standard deviations from the corresponding sample mean. 
\begin{figure} \centering \subfloat[]{\label{fig:mod1_uq_bounds_ell} \includegraphics[scale=0.9]{FIG4a.pdf}} \subfloat[]{\label{fig:mod1_uq_bounds_tau} \includegraphics[scale=0.905]{FIG4b.pdf}} \hfill \subfloat[]{\label{fig:mod1_relandscape} \includegraphics[scale=.77]{FIG4c.pdf}} \caption{In (a) and (b), the bounds \cref{eq:xi-pm-star} provide a tight estimate of the sensitivity index approximation \cref{eq:finite-diff}. The relative entropy landscape in (c) reveals that the information budget established by $\epsilon(\ell)$ contains the corresponding $\epsilon(\tau^2)$. Thus, we note the bounds \cref{eq:xi-pm-star} in (a) are robust, providing a worst-case scenario envelope for all perturbations within an information budget, as in \cref{cor:info-budget}.} \label{fig:mod1} \end{figure} We observe in \cref{fig:mod1_uq_bounds_ell,fig:mod1_uq_bounds_tau} that for each QoI (facet corresponding to rows), the bounds \cref{eq:xi-pm-star} provide an accurate estimate of the sensitivity for $\epsilon(\ell)$ and $\epsilon(\tau^2)$. In particular, the plots are suggestive of the tightness of the bounds derived from the hybrid information divergence. Further, a comparison of \cref{fig:mod1_uq_bounds_ell} to \cref{fig:mod1_uq_bounds_tau} indicates that the bounds \cref{eq:xi-pm-star} are robust. For this particular nominal model, the relative entropy landscape in \cref{fig:mod1_relandscape} shows that the perturbations $\epsilon(\tau^2)$ always fall within the information budget established by $\epsilon(\ell)$, that is, the level sets relating to $\epsilon(\ell)$ contain the corresponding $\epsilon(\tau^2)$. Thus, the bounds \cref{eq:xi-pm-star} in \cref{fig:mod1_uq_bounds_ell} are guaranteed to contain the bounds in \cref{fig:mod1_uq_bounds_tau} by \cref{cor:info-budget} and can be interpreted as giving the worst-case scenario for each QoI, providing a natural way to rigorously incorporate worst-case scenarios into the decision support framework in \cref{fig:layers-subsurface-flow-model}. 
\subsection{\emph{A posteriori} computability}% \label{sec:computability-of-bounds}% We emphasize that the components appearing in the argument of the optimization problem in \cref{eq:xi}, and in particular in the right-hand side of \cref{eq:xi-pm-star}, are \emph{a posteriori} computable quantities that represent significant computational savings over gradient approximations. Moreover, \cref{eq:xi-pm-star} incorporates worst-case scenarios that might not be efficiently observed using traditional estimates of the sensitivity. In the previous numerical experiment, we approximate \cref{eq:xi-pm-star} by \begin{equation*} \Xi_{\pm} (\gamma' \mid \gamma ; H^g) \approx \pm \xi(c^*, \pm H^g) \end{equation*} where $c^* = \argmin_{c>0} \xi (c, H^g)$ and \begin{equation} \label{eq:estimate-xi} \xi (c , H^g) := \hat{\Lambda}_\gamma(c, H^g) + \frac{1}{c} \bar{\RE}(\gamma' \mid \gamma) \end{equation} for suitable approximations $\hat{\Lambda}_\gamma$ and $\bar{\RE}$ of the risk sensitive performance measure and relative entropy, respectively. The optimal $c^*$ as a function of $\rho := \RE(\lambda \mid \gamma)$ has the representation \begin{equation} \label{eq:optimal-cstar-rho} c^*(\rho) = \frac{\sqrt{2 \rho}}{\sqrt{\var_\gamma[H^g]}} + O(\rho), \end{equation} which follows from equation (2.28) of \cite{DupuisEtAl:2015ps}. For perturbations resulting in small $\rho$, \cref{eq:optimal-cstar-rho} can be used; otherwise, we find the optimal $c^*$ by a one-dimensional Newton-Raphson method, a step that must be repeated for each QoI for every alternative model under consideration. The $\hat{\Lambda}_\gamma$ appearing in \cref{eq:estimate-xi} can be sampled using a standard MC approximation, \begin{equation*} \hat{\Lambda}_\gamma(c,H^g) = \frac{1}{c} \log E_\gamma^M (\exp\{c (H^g - \widehat{H^g})\}) \approx \Lambda_\gamma(c,H^g), \end{equation*} for $\widehat{H^g} := E_\gamma^M(H^g) \approx \E_\gamma[H^g]$ where $E_\gamma^M$ denotes the sample average \cref{eq:M-sample-avg}. 
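A compact sketch of this \emph{a posteriori} estimation (with a synthetic stand-in for the $H^g$ samples and sample sizes of our own choosing; the small-$\rho$ formula \cref{eq:optimal-cstar-rho} serves only as a cross-check):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def xi_hat(H_samples, rho):
    # Plug the MC estimate of Lambda_gamma into xi(c) = Lambda_hat(c) + rho/c
    # and minimise over c > 0; returns the estimate of Xi_+ and c*.
    Hc = H_samples - H_samples.mean()  # centre the observable
    xi = lambda c: np.log(np.mean(np.exp(c * Hc))) / c + rho / c
    res = minimize_scalar(xi, bounds=(1e-6, 50.0), method="bounded")
    return res.fun, res.x

# Synthetic stand-in for samples of H^g under the nominal model.
H = rng.normal(size=10_000)
rho = 0.02
xi_plus, c_star = xi_hat(H, rho)
# Cross-check c* against the small-rho expansion sqrt(2 rho / Var[H^g]).
assert abs(c_star - np.sqrt(2.0 * rho / H.var())) < 0.05
assert xi_plus > 0.0
```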
This quantity needs to be computed only once for each QoI, according to the nominal model $\gamma$, and can then be used as in \cref{eq:uq-bounds-rho} to test any number of alternative models within the established information budget as in \cref{cor:info-budget}. In contrast, the relative entropy appearing in \cref{eq:estimate-xi} needs to be computed for every alternative model under consideration. However, the relative entropy can be computed without sampling using the analytic formula \cref{eq:re-multivar-norm} together with \cref{rmk:data-processing} to replace the distribution of the conductivity with the corresponding Gaussian. In the preceding experiments, the approximation $\bar{\RE}(\gamma'\mid\gamma) \approx \RE(\gamma'\mid\gamma)$ is obtained by taking the Gaussians to have the same dimension as the finite element discretization. \begin{remark} \label{rmk:sampling-pm} For some QoIs, computing \cref{eq:xi-pm-star} using the sampling strategy for $\hat{\Lambda}_\gamma$ outlined above may not be feasible; in \cref{fig:mod1_uq_bounds_ell}, the estimator of $\Xi_{+}$ for $g_3$ is observed to have high variance. In \cref{sec:efficient-sampling-risk}, we demonstrate an alternative means of estimating $\Lambda_\gamma$ using concentration inequalities that results in reduced-variance predictions for certain goal functionals. \end{remark} \begin{remark} The formula \cref{eq:re-multivar-norm} depends on the discrete projection $(\bar{a}_n) \approx a$. In some instances $\Sigma$ may be close to singular, hampering the computation of the precision matrix $\Sigma^{-1}$ or the $\log$-determinant. In the numerical experiments presented here, such issues were easily addressed using a Cholesky-like covariance decomposition and facts about Toeplitz matrices.
Geostatistical models based on Markov random fields (\cite{RueHeld:2005}), as opposed to parametric covariance models, offer an approach that sidesteps this difficulty, and we note that the techniques outlined here also apply to conductivities given by Markov random fields (see \cite{GourgouliasEtAl:2017aa}). \end{remark} In the next section, we examine non-parametric perturbations to a geostatistical model. In particular, \cref{thm:information-divergence} yields tight and robust UQ bounds in the context of model misspecification due to sparse data. \section{Data-informed error bounds for non-parametric perturbations}% \label{sec:data-informed-bounds}% In the present section we consider model-form uncertainty in connection with misspecification of the geostatistical model due to missing or incomplete data. As emphasized in the introduction, data for our applications of interest are sparse. By allowing us to compare the effect that distributional assumptions on model inputs have on model outputs, the hybrid information divergence provides a link between data and decision tasks. In this vein we explore how the hybrid information divergences complement an existing inference procedure by providing robust, data-informed bounds that give a sense of worst-case scenarios under modeling errors. Next, we review the data set and the inference procedure used in our experiments. \subsection{Conductivity data and model problem}% \label{sec:cond-data-model-prob}% We utilize permeability data for a Brent sequence (\SI{365.76}{\meter} by \SI{670.56}{\meter} by \SI{51.816}{\meter}) from SPE10 model 2 in \cite{ChristieBlunt:2001sp}. Due to its importance in the North Sea petroleum industry, the Brent sequence is well studied from a geological perspective (\cite{Richards:1992bg}). The sequence has two distinct phases; the upper layers of the sequence comprise a Tarbert formation and the bottom layers comprise an Upper Ness formation.
The log-permeabilities of these two formations vary by several orders of magnitude and exhibit strikingly different spatial correlations. In our numerical experiments we will fit various geostatistical models based on data from a one-dimensional slice of the upper-most level of the Tarbert formation displayed in \cref{fig:conductivity_slice}. \begin{figure}[] \centering \subfloat[]{\label{fig:conductivity_slice} \includegraphics[scale=0.76]{FIG5a.pdf}} \hfill \subfloat[]{\label{fig:experiment2_RE_hist} \includegraphics[scale=0.79]{FIG5b.pdf}} \caption{In (a), a one-dimensional slice of the Tarbert formation data (from SPE10 model 2 in \cite{ChristieBlunt:2001sp}) used in numerical experiments in \cref{sec:data-informed-bounds} varies by orders of magnitude over the problem scale. In (b), the distribution for the relative entropy with respect to the family of alternative models \cref{eq:q-plus} depends in a nontrivial fashion on the nominal model.} \label{fig:experiment2} \end{figure} For the experiments that follow, we fix both a parametric form for the geostatistical model and an inference procedure. We then consider different geostatistical models fit from incomplete samples of the full data set. Specifically, we assume that the available log-data are Gaussian and then fit the parameters of the geostatistical model using the maximum likelihood method. For a given parametric model, this method selects the parameter values that maximize the likelihood of the observed data; this process is entirely automated by a number of software packages, and the present experiments use `\texttt{RandomFields}' (\cite{SchlatherEtAl:2015rf}) available in R. For convenience, we shall again use the covariance \cref{eq:cov-se-nugget} from \cref{sec:screening-and-sa}.
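As a concrete illustration of the kind of geostatistical model being fit, the following sketch draws Gaussian log-conductivity realizations under a squared-exponential covariance with a nugget term. The parameterization $k(x,y) = \sigma^2 \exp(-(x-y)^2/(2\ell^2)) + \tau^2 \delta_{xy}$ is an assumption here and may differ from the exact form of \cref{eq:cov-se-nugget}.

```python
import numpy as np

def sample_log_conductivity(x, sigma2=1.0, ell=0.1, tau2=1e-3,
                            n_samples=1, rng=None):
    """Draw Gaussian log-conductivity fields at grid points x under an
    assumed squared-exponential-plus-nugget covariance
        k(x, y) = sigma2 * exp(-(x - y)**2 / (2 * ell**2)) + tau2 * 1{x == y}.
    """
    if rng is None:
        rng = np.random.default_rng()
    d = np.subtract.outer(x, x)
    K = sigma2 * np.exp(-d**2 / (2.0 * ell**2)) + tau2 * np.eye(len(x))
    L = np.linalg.cholesky(K)            # nugget keeps K well conditioned
    z = rng.standard_normal((len(x), n_samples))
    return (L @ z).T                     # each row is one realization
```

The nugget $\tau^2$ plays a numerical role here as well: without it, the squared-exponential Gram matrix is notoriously ill conditioned and the Cholesky factorization can fail.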
Although we posit a parametric form, geostatistical models resulting from fits relying on incomplete observations of the full data set are not in general small parametric perturbations of one another. Even small changes to these discrete degrees of freedom may result in global changes to parameters and hyperparameters, in contrast to the localized sensitivity analysis in \cref{sec:screening-and-sa}. We again consider the one-dimensional model problem \cref{eq:1d-rpde}, but with a spatial interval and natural boundary condition scaled to match the conductivity data. The conductivity fields $(\bar{a}_n)$ are generated on a regular uniform mesh of $n$ equally spaced cells and this projection is used in forming the stiffness matrix for the FEM computation as well as the covariance matrices required for the relative entropy calculations (i.e.\ $n=d$). The FEM solution $(\bar{u}_{2n})$ is then computed using standard, piecewise linear elements on a coarse mesh with diameter $2n$. In the remainder of the present section, we describe two numerical experiments that use the hybrid information divergence \cref{eq:xi-bound} to obtain data-informed bounds. The first experiment in \cref{sec:model-misspecification} provides a sense of the modeling error due to misspecification stemming from incomplete data over a range of changes to the discrete degrees of freedom. In the second experiment in \cref{sec:finding-worst-case-scenarios}, we fix a nominal model based on a portion of the full data set and examine the distribution of the relative entropy to identify an information budget such that the hybrid information divergences give robust bounds that include worst-case scenarios. 
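For orientation, a structural sketch of the one-dimensional piecewise-linear FEM assembly used for the forward solves is shown below. It assumes a problem of the form $-(a u')' = f$ with a homogeneous Dirichlet condition on the left and a natural condition on the right, which may differ from the exact data of \cref{eq:1d-rpde}; it is a sketch of the discretization pattern, not of the production solver.

```python
import numpy as np

def fem_solve_1d(a_cells, f=1.0, length=1.0):
    """Piecewise-linear FEM for -(a u')' = f on (0, length), with
    u(0) = 0 and a natural (zero-flux) condition at the right endpoint.

    a_cells : conductivity, one (positive) value per mesh cell.
    """
    n = len(a_cells)
    h = length / n
    # stiffness matrix for nodes 1..n (node 0 eliminated by u(0) = 0)
    K = np.zeros((n, n))
    for i, a in enumerate(a_cells):      # cell i spans nodes i and i + 1
        k = a / h
        if i > 0:
            K[i - 1, i - 1] += k
            K[i - 1, i] -= k
            K[i, i - 1] -= k
        K[i, i] += k
    # consistent load vector for a constant source f
    b = np.full(n, f * h)
    b[-1] = f * h / 2.0                  # boundary node carries half a cell
    u = np.linalg.solve(K, b)
    return np.concatenate(([0.0], u))    # prepend the Dirichlet node
```

For constant $a$ and $f$ the nodal values of the linear-element solution are exact, which makes this sketch easy to verify against $u(x) = f(Lx - x^2/2)/a$.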
\subsection{Hybrid information divergences for model misspecification}% \label{sec:model-misspecification}% In \cref{fig:experiment1_g12,fig:experiment1_g3}, we demonstrate the sensitivity of the modeling error \cref{eq:weak-error-observables} with respect to changes in the geostatistical model resulting from the inclusion or exclusion of a small number of data points. This sensitivity is with respect to discrete changes to the degrees of freedom used to fit the parametric model and is not understood in the same sense as \cref{eq:parametric-si}. The weak errors in \cref{fig:experiment1_g12,fig:experiment1_g3} are between a nominal model, based on a portion of the available data set, and alternative models that correspond to including or excluding a fixed number of points from the data used to construct the nominal model. The data-informed bounds derived from the hybrid information divergence give tight and robust predictions for this weak error. \begin{figure}[] \centering \includegraphics[scale=1]{FIG6.pdf} \caption{For the failure probability goal functionals \cref{eq:data-goal-functionals-g1,eq:data-goal-functionals-g2}, the data-informed bounds \cref{eq:data-xi-pm-star} give a tight and robust prediction of the weak error \cref{eq:weak-error-observables} over a range of changes to the discrete degrees of freedom.} \label{fig:experiment1_g12} \end{figure} \begin{figure}[] \centering \includegraphics[scale=1]{FIG7.pdf} \caption{The data-informed bounds \cref{eq:data-xi-pm-star} provide a tight and robust estimate of the weak error \cref{eq:weak-error-observables} for an unbounded goal functional \cref{eq:data-goal-functionals-g3}, but have high variance due to the limitations of the sampling strategy for $\Xi_{+}$.
We emphasize that the predictions here and in \cref{fig:experiment1_g12} are robust in that $\Xi_{\pm}$ bound the weak error for all alternative models that fall within a given information budget, thus including a sense of worst-case scenarios.} \label{fig:experiment1_g3} \end{figure} In particular, we begin by fixing a data set for the nominal model $\gamma$ by sampling, uniformly at random, $50$ percent of the full data set depicted in \cref{fig:conductivity_slice}. We then fit a squared exponential covariance model \cref{eq:cov-se-nugget} using the maximum likelihood method. We also fit a collection of alternative models $\{\lambda_{10}, \dots, \lambda_{100}\}$, where $\lambda_q$ is related to a geostatistical model that is fit using $q$ percent of the full data set where a small number of points are added or deleted from the subset of data used for neighboring alternative models. For example, the data set used to construct $\lambda_{60}$ is formed by sampling $10$ percent of the data points from the full data set not included in $\lambda_{50}$ and then adding them to the partial set used for $\lambda_{50}$. In keeping with the notation used in previous sections, we then denote the nominal product measure $P = \gamma \otimes \nu$ and the alternatives $\Qb_q = \lambda_q \otimes \nu$. Thus, the weak errors displayed in \cref{fig:experiment1_g12,fig:experiment1_g3} correspond to alternative models related to a perturbation of the observed data set used to fit the models, that is, to a perturbation of discrete degrees of freedom. For this collection of nominal and alternative geostatistical models, the bounds \begin{equation} \label{eq:data-xi-pm-star} \Xi_{\pm} (\lambda_q \mid \gamma; H^g) \approx \pm \xi (c^*, \pm H^g) \end{equation} are computed using an expression similar to \cref{eq:estimate-xi} in the spirit of \cref{sec:computability-of-bounds}.
Box plots for \num{e2} observations of each bound and weak error, each based on $M = \num{e3}$ samples, are displayed in \cref{fig:experiment1_g12,fig:experiment1_g3} along with a trend line corresponding to the mean of the observations. The bounds and the weak errors are examined for the goal functionals, \begin{subequations} \begin{align} &g_1(\bar{u}) = \indic_{\{\bar{u}(x_1) > m\}}, \label{eq:data-goal-functionals-g1}\\ &g_2(\bar{u}) = \indic_{\{ m+s > \bar{u}(x_1) > m-s\}}, \quad \text{and} \label{eq:data-goal-functionals-g2}\\ &g_3(\bar{u}) = \bar{u}(x_1) / m , \label{eq:data-goal-functionals-g3} \end{align} \end{subequations} where $m$ is the sample average of $\bar{u}(x_1)$ at the right-hand endpoint of the domain and $s$ is the corresponding standard deviation. We note that for \cref{eq:data-goal-functionals-g1,eq:data-goal-functionals-g2} the weak error is bounded in $[-1,1]$, whereas for \cref{eq:data-goal-functionals-g3} it is unbounded. In \cref{fig:experiment1_g12}, related to \cref{eq:data-goal-functionals-g1,eq:data-goal-functionals-g2}, we observe that there is a fairly wide spread in the values for the weak errors corresponding to different alternative models. In this instance, the data-informed bounds \cref{eq:data-xi-pm-star} form a tight envelope around this spread. Even in the case of \cref{eq:data-goal-functionals-g3}, \cref{fig:experiment1_g3} illustrates that \cref{eq:data-xi-pm-star} gives a reliable estimate of the weak error. However, we observe that the estimator for $\Xi_+$ has high variance in this instance due to the sampling strategy used for the risk sensitive performance measure $\hat{\Lambda}_\gamma$. As noted in \cref{rmk:sampling-pm}, an alternative method will be discussed in \cref{sec:efficient-sampling-risk}. \subsection{Finding worst-case scenarios related to incomplete data} \label{sec:finding-worst-case-scenarios} Presently we examine worst-case scenarios due to changes in the discrete degrees of freedom.
The distribution of the relative entropy with respect to a training set is used to determine an information budget that yields robust, data-informed bounds encapsulating worst-case scenarios. \begin{figure}[] \centering \includegraphics[scale=1]{FIG8.pdf} \caption{Choosing an alternative model $\Qb_{\max}$ related to the maximum relative entropy observed in a training set yields robust, data-informed bounds $\Xi_{\pm}$ that include a sense of worst-case scenarios related to the impact of incomplete data on the modeling process for the nominal models $P_1$, $P_2$, and $P_3$ in \cref{fig:experiment2}.} \label{fig:experiment2_bounds} \end{figure} We consider three different nominal models, $P_1 = \gamma_1 \otimes \nu$, $P_2 = \gamma_2 \otimes \nu$, and $P_3 = \gamma_3 \otimes \nu$, that are related to fitting a parametric geostatistical model $\gamma_i$ to $70$ percent of the full data set sampled uniformly. In \cref{fig:conductivity_slice}, the full data set is displayed in addition to the corresponding ``gaps'' in the three different nominal models. As in previous sections, these models are fit to a squared exponential covariance model \cref{eq:cov-se-nugget} using the maximum likelihood method. For each nominal model, we build the training set \begin{equation} \label{eq:q-plus} \mathcal{Q}_i = \{ \Qb^+ = \lambda^+ \otimes \nu \}\, , \quad i=1, 2, 3\, , \end{equation} a collection of alternative models based on $\lambda^+$ that are fit to enlargements of the nominal model data (i.e.\ to $80$ percent of the full data set) where points are added by sampling those excluded from the nominal model set uniformly. The corresponding frequency distribution of the relative entropy for each $P_i$ with respect to $|\mathcal{Q}_i| = \num{e5}$ alternative models is displayed in \cref{fig:experiment2_RE_hist}.
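Each relative entropy evaluation against the training set reduces, for Gaussian models, to the standard closed form between multivariate normals, which we take to be the content of \cref{eq:re-multivar-norm}. A Cholesky-based sketch of this formula is given below (an illustrative implementation, stable as long as the nugget keeps the covariances well conditioned):

```python
import numpy as np

def gaussian_relative_entropy(m1, S1, m0, S0):
    """R(N(m1, S1) | N(m0, S0)), the closed form that replaces sampling.
    Computed via Cholesky factors (cf. the earlier remark on near-singular
    covariances and log-determinants)."""
    d = len(m1)
    L0 = np.linalg.cholesky(S0)
    L1 = np.linalg.cholesky(S1)
    # log-determinants from the Cholesky diagonals
    logdet0 = 2.0 * np.sum(np.log(np.diag(L0)))
    logdet1 = 2.0 * np.sum(np.log(np.diag(L1)))
    # tr(S0^{-1} S1) = || L0^{-1} L1 ||_F^2
    A = np.linalg.solve(L0, L1)
    trace_term = np.sum(A**2)
    # Mahalanobis term (m1 - m0)^T S0^{-1} (m1 - m0)
    y = np.linalg.solve(L0, m1 - m0)
    maha = y @ y
    return 0.5 * (trace_term + maha - d + logdet0 - logdet1)
```

Working with Cholesky factors avoids ever forming the precision matrix $\Sigma^{-1}$ or evaluating a determinant directly.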
The distributions, which in two cases are multi-modal, demonstrate the non-trivial dependence of the relative entropy on the nominal model under consideration. The tightness of the hybrid information divergence suggests that the modeling error \cref{eq:weak-error-observables} may cluster according to the peaks in the relative entropy distribution. Further, the information budget established by the maximum observed relative entropy can be used to bound all the alternative models in $\mathcal{Q}_i$ according to \cref{cor:info-budget}. From each $\mathcal{Q}_i$, we select four alternative models $\Qb_{\max} = \lambda_{\max} \otimes \nu$, $\Qb_{\min} = \lambda_{\min} \otimes \nu$, $\Qb_{\mathrm{mean}} = \lambda_{\mathrm{mean}} \otimes \nu$, and $\Qb_{\mathrm{med}} = \lambda_{\mathrm{med}} \otimes \nu$ that correspond to the maximum, minimum, mean, and median relative entropy with respect to $P_i$, respectively; see also \cref{fig:experiment2_RE_hist}. In \cref{fig:experiment2_bounds}, we observe once again that the bounds $\Xi_{\pm}$ yield robust predictions for the modeling error between the nominal and each of the alternative models. As expected, we observe that the weak errors corresponding to $\Qb_{\max}$ appear to be worst-case scenarios and that these errors are reliably contained in the envelope defined by $\Xi_{\pm}$. In the present setting, these goal-oriented bounds $\Xi_{\pm}$ represent data-informed quantities that encapsulate worst-case scenarios for the errors in misspecifying the geostatistical model due to epistemic uncertainty. \section{Efficient computation of hybrid information divergences}% \label{sec:efficient-sampling-risk}% In some instances, the computation of $\Xi$ can become infeasible due to the variance of the estimator.
This large variance phenomenon, observed for $\Xi_{+}$ for $g_3$ in \cref{fig:mod1_uq_bounds_ell,fig:experiment1_g3,fig:experiment2_bounds}, stems from the sampling strategy used for $\Lambda_\gamma$ and depends on an interaction between $H^g$ and the information budget. \subsection{Variance of the standard estimator for $\Xi$}% \label{sec:vari-estimator-xi+}% The variance of the standard MC estimator for $\Xi_{+}$ is given by \begin{equation*} \var_\gamma\left[\Xi_{+}(\lambda\mid\gamma; H^g)\right] = \frac{\var_\gamma [ e^{c^*H^g}]}{(c^*)^2 M (\E_\gamma [e^{c^*H^g}])^2}, \end{equation*} a quantity that depends exponentially on both $c^*$ and $H^g$. We recall that the optimal $c^*$ is linked to the information budget $\rho = \RE(\lambda \mid \gamma)$ by \cref{eq:optimal-cstar-rho}. For alternative models that are close in relative entropy to the nominal model, such as small parametric perturbations, \cref{eq:optimal-cstar-rho} provides a good approximation of the optimal $c^*$ up to first order in $\rho$. However, for alternative models that are a large relative entropy distance from the nominal model, we see from \cref{eq:optimal-cstar-rho} that the optimal $c^*$ grows at least linearly in $\rho$. Attempting to sample an estimator with large variance poses a difficulty for the present application of interest, as sampling involves calls to a random PDE solver. In such settings it is therefore of interest to find an alternative strategy to sampling $\Lambda_\gamma$. As suggested by the right-hand side of \cref{eq:risk-sensitive-pm}, $\Lambda_\gamma$ may have a known description as a cumulant generating functional for particular $H^g$. In other instances, $\gamma$ might have a form amenable to the numerical integration of $\E_{\gamma}[e^{c(H^g-\E_\gamma[H^g])}]$, for example via thermodynamic integration techniques (\cite{LelievreRoussetStoltz:2010fe}).
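The link between the information budget and the optimal $c^*$ can be checked numerically. The sketch below compares the leading-order formula \cref{eq:optimal-cstar-rho} against a direct one-dimensional minimization; a bounded search stands in for the Newton-Raphson step mentioned earlier, and the inputs are again assumed to be Monte Carlo samples of $H^g$ under the nominal model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cstar(H_samples, rho):
    """Optimal tilt c* minimizing Lambda_gamma(c, H^g) + rho / c.

    Returns the leading-order value sqrt(2 * rho / Var[H^g]) together with
    the numerically minimized value; the two agree for small budgets rho.
    """
    H = H_samples - H_samples.mean()
    c_asymptotic = np.sqrt(2.0 * rho / H.var())
    objective = lambda c: np.log(np.mean(np.exp(c * H))) / c + rho / c
    res = minimize_scalar(objective, bounds=(1e-8, 50.0), method="bounded")
    return c_asymptotic, res.x
```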
In the remainder of this section we indicate an alternative approach, recently introduced in \cite{GourgouliasEtAl:2017aa}, that relies on concentration inequalities from large deviations theory to bound $\Lambda_\gamma$. The concentration inequalities, at least in their simplest form, require bounded observables $H^g$ but rely on quantities that we are already likely to be sampling in our simulation, such as the expected value and the variance. Although the concentration inequalities discussed below require the observable to be bounded, we show that such bounds produce fairly reliable results even when the observable is merely finite and an artificial bound is imposed (see \cref{rmk:finite-observables}). We refer to \cite{GourgouliasEtAl:2017aa} for a complete discussion on UQ methods based on concentration inequalities for both bounded and unbounded observables in several model problems. \subsection{Concentration inequalities for risk sensitive performance measures}% \label{sec:conc-ineq}% We recall the following bound on the moment generating function of a random variable in terms of its first two moments (see e.g.\ \cite{DemboZeitouni:2010ld}). \begin{lemma}[Bennett] \label{lem:bennett} Suppose $X \leq b$ is a real-valued random variable with $m = \E[X]$ and $\E[(X-m)^2] \leq s^2$ for some $s > 0$. Then, for any $c \geq 0$, \begin{equation*} \label{eq:bennett} \E[e^{c X}] \leq e^{c m} \left( \frac{(b-m)^2}{(b-m)^2 +s^2}e^{- \frac{cs^2}{b-m}} + \frac{s^2}{(b-m)^2+s^2}e^{c(b-m)} \right). \end{equation*} \end{lemma} Thus we formulate a bound for $\Lambda_\gamma (c,H^g)$ where the estimator of this quantity does not involve sampling an exponentially large quantity, i.e.\ the moment generating functional.
\begin{theorem}[Concentration] \label{thm:concentration} For a bounded observable $H^g \leq \bar{b}$ and $c \geq 0$, \begin{equation*} \Lambda_\gamma(c, H^g) \leq \frac{1}{c} \log \left( \frac{(\bar{b}-\widehat{H^g})^2} {(\bar{b}-\widehat{H^g})^2+s_g^2} e^{-cs_g^2/(\bar{b}-\widehat{H^g})} + \frac{s_g^2}{(\bar{b}-\widehat{H^g})^2+s_g^2} e^{c(\bar{b}-\widehat{H^g})} \right), \end{equation*} where $\widehat{H^g} = \E_\gamma [H^g]$ and $s_g^2 = \var_\gamma[H^g]$. \end{theorem} \begin{proof} This follows immediately from \cref{lem:bennett} by considering the centered $X = H^g - \widehat{H^g}$ with $b = \bar{b} - \widehat{H^g}$, $m = \E[H^g - \widehat{H^g}] = 0$, and $s_g^2 = \E[X^2] = \var_\gamma[H^g]$. \end{proof} We note from \cref{thm:uq-interval-failure} that for failure probabilities the risk sensitive performance measure has the form \cref{eq:Lambda-failure-prob}; thus the bound appearing in \cref{thm:concentration} holds with equality. An immediate extension to \cref{lem:bennett} bounds the moment generating function in terms of its mean and support, and can be used when $H^g$ has both an upper and lower bound. \begin{lemma}[Bennett-$(a,b)$] \label{lem:bennett-ab} Suppose $X \in [a,b]$, for fixed $a<b$, is a real-valued random variable with $m = \E[X]$. Then for any $c \in \rset$, \begin{equation*} \E[e^{c X}] \leq \frac{m-a}{b-a} e^{cb} + \frac{b-m}{b-a} e^{ca}. \end{equation*} \end{lemma} We end by demonstrating these alternative bounds for the experiment in \cref{sec:screening-and-sa}.
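Both bounds are elementary to evaluate from sampled moments. The sketch below implements them directly; as a check, for a two-point observable such as a failure indicator both bounds coincide with the exact risk sensitive performance measure, consistent with the equality noted above.

```python
import numpy as np

def lambda_bennett(c, mean_H, var_H, b):
    """Upper bound on Lambda_gamma(c, H^g) from the Concentration theorem,
    for H^g <= b, using only the sampled mean and variance of H^g."""
    bm = b - mean_H                          # centered upper bound
    w = var_H / (bm**2 + var_H)              # weight on the upper endpoint
    return np.log((1 - w) * np.exp(-c * var_H / bm) + w * np.exp(c * bm)) / c

def lambda_bennett_ab(c, mean_H, a, b):
    """Upper bound from the Bennett-(a, b) lemma for a <= H^g <= b,
    written through the centered endpoints; valid for any real c."""
    p = (mean_H - a) / (b - a)
    return np.log(p * np.exp(c * (b - mean_H)) +
                  (1 - p) * np.exp(c * (a - mean_H))) / c
```

Neither function touches the moment generating functional itself, so there is no exponentially large quantity to sample.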
\subsection{Implementation for a parametric model}% \label{sec:conc-ineq-for-param-model}% In the spirit of \cref{eq:estimate-xi}, we let \begin{equation} \label{eq:conc-estimate} \zeta (c, H^g) := \bar{\Lambda}_\gamma(c, H^g) + \frac{1}{c}\bar{\RE}(\gamma' \mid \gamma), \end{equation} and obtain \begin{equation} \label{eq:risk-sensitive-bound-bennett} B_{+}(H^g) \approx \zeta (c^*, H^g) \qquad \text{and} \qquad C_{+}(H^g) \approx \zeta (c^*, H^g), \end{equation} for an optimal $c^*$ where $\bar{\Lambda}_\gamma(c ,H^g)$ in \cref{eq:conc-estimate} is approximated using \cref{thm:concentration} and \cref{lem:bennett-ab}, respectively. Corresponding lower bounds are derived in a similar manner. In \cref{fig:mod1_g3_xi_and_bennett}, we demonstrate all of the bounds for the goal functional $g_3$, noting that the bounds $B_{\pm}$ and $C_{\pm}$ have much smaller variance than the estimator for $\Xi_{\pm}$ and in each case form an envelope around the sensitivity for the QoI that remains tight and robust. \begin{figure}[] \centering \includegraphics[scale=1]{FIG9a.pdf} \hfill \includegraphics[scale=1.025]{FIG9b.pdf} \caption{The bounds \cref{eq:risk-sensitive-bound-bennett} based on the concentration inequalities in \cref{lem:bennett,lem:bennett-ab} provide a computationally efficient alternative to sampling when the standard MC estimator for $\Xi$ has high variance.} \label{fig:mod1_g3_xi_and_bennett} \end{figure} \begin{remark} \label{rmk:finite-observables} Although the concentration inequalities as quoted here are indicated only for a bounded QoI, we note that there exist other formulations for unbounded QoIs such as for sub-Gaussian random variables (\cite{GourgouliasEtAl:2017aa}). 
In practice, \cref{thm:concentration} is a useful computational tool for a finite QoI; we observe the tight bounds demonstrated in \cref{fig:mod1_g3_xi_and_bennett} are for $g_3(\bar{u}) = \min(\bar{u}(1), 3)$, where the cut-off was arbitrarily chosen using a training set of \num{e3} observations of $\bar{u}(1)$. \end{remark} \section{Conclusions}% The present work develops UQ tools for a random PDE model of steady-state subsurface flow in \cref{fig:layers-subsurface-flow-model} with potential impacts in hydrology, carbon sequestration, and petroleum engineering. These tools are realized through the novel application of hybrid information divergences that balance observable- and data-dependent quantities. The hybrid nature of the divergences allows us to represent and distinguish various sources of uncertainty entering into the model and ultimately to address the propagation of model-form or epistemic uncertainty in the geostatistical model, a key challenge, via the pathway in \cref{fig:propagation-uncertainty}. We derive tight and robust estimates for modeling errors from the hybrid information divergences and apply these to important UQ tasks, including parametric sensitivity analysis and model misspecification arising from sparse data. In particular, we demonstrate the use of these bounds for making data-informed predictions, such as quantifying the impact of incomplete data as in \cref{sec:data-informed-bounds}. The robustness, when interpreted as including worst-case scenarios within a given information budget, suggests that these bounds are an appropriate deliverable in the context of the decision support framework in \cref{fig:layers-subsurface-flow-model}. We emphasize that the bounds derived here are also goal-oriented and non-intrusive in nature, that is, they can be used in conjunction with any algorithm or solver for the random PDE problem in \cref{fig:layers-subsurface-flow-model}.
Finally, we also make connections between the hybrid information divergences and certain concentration inequalities from large deviations theory that can be leveraged for efficient computing. \bibliographystyle{siamplain}%
\section{Introduction}
In the quiet solar photosphere the energy stored in the magnetic field is comparable to the kinetic energy due to convective motions. This gives rise to a rich variety of phenomena that evolve at very short time and spatial scales. The IMaX instrument (Imaging Magnetograph eXperiment; Mart{\'\i}nez Pillet et al. 2011a) onboard the stratospheric balloon {\sc Sunrise} (Barthol et al. 2011; Solanki et al. 2012) has helped to uncover many of these phenomena (see e.g. Solanki et al. 2010), from resolving magnetic flux-tubes (Lagg et al. 2010) to finding vortex tubes (Steiner et al. 2010), vortex flows (Bonet et al. 2010), etc. One of those discoveries involves supersonic magnetic upflows (Borrero et al. 2010, 2012). These events are characterized by highly blue-shifted circular polarization signals that appear at the center or edges of granular cells and last for about 80 seconds. Similar events were subsequently also found in data from the SP instrument onboard the Hinode spacecraft (Mart{\'\i}nez Pillet et al. 2011b). The fact that magnetic fields of opposite polarity connected by horizontal fields appear in the vicinity of these events in 70 \% of the cases led us to surmise that they are caused by magnetic reconnection. In this paper we study them in more detail using a different data set from the IMaX instrument. A comparison between the old and new data sets is provided in Section 2. In Section 3 we describe the criterion employed to select events. The analysis technique, namely, the inversion of the observed Stokes profiles to retrieve the physical conditions of the solar atmosphere, is detailed in Section 4. Section 5 presents our results, while Section 6 briefly addresses the choice of model for the inversion. Finally, Section 7 presents our main conclusions.\\ \section{Instruments and observations}
The data employed in this work were recorded with the stratospheric balloon-borne observatory {\sc Sunrise} (Solanki et al.
2010; Barthol et al. 2011). {\sc Sunrise} was launched on June 8, 2009 from Kiruna (Sweden) and landed on June 13, 2009 on Somerset Island (Canada). During this time, {\sc Sunrise}'s 1-meter telescope took broad-band images in different spectral windows with the SUFI instrument (Gandorfer et al. 2011), and spectropolarimetric data of the solar photosphere with IMaX (Mart{\'\i}nez Pillet et al. 2011a). An average flight altitude of 35 km allowed {\sc Sunrise} to avoid more than 95 \% of the disturbances introduced by Earth's atmosphere. In addition, image motions due to wind during the flight were stabilized by the Correlation-Tracker and Wavefront Sensor (CWS; Berkefeld et al. 2011). Owing to the aforementioned advantages, IMaX spectropolarimetric data yielded a spatial resolution of 0.25" and a field-of-view of 50"$\times$50". Further image reconstruction based on phase diversity calibration of the PSF of the optical system improved the resolution to 0.15"-0.18".\\ In Borrero et al. (2010) we employed reconstructed IMaX data that included the four components of the Stokes vector ($I$, $Q$, $U$, $V$) measured at five wavelength positions across the \ion{Fe}{1} 5250.217 {\AA} spectral line. In the following we will refer to this observing mode as V5-6. In this work, however, we will use a different observing mode, referred to as L12-2. In this mode the intensity $I$ and circular polarization $V$ were measured in twelve (instead of five) wavelength positions. For reasons that will be explained later, we restricted ourselves to employing non-reconstructed data with a spatial resolution of \emph{only} 0.25".\\
These were located at $\Delta\lambda = [-80,-40,40,80,227]$ m{\AA} from line-center. Filled circles illustrate the wavelength positions scanned in the L12-2 observing mode. Here, the wavelength range goes from $-192.5$ m{\AA} to $+192.5$ m{\AA}, in twelve positions equidistantly distributed in steps of 35 m{\AA}.\\ \begin{center} \includegraphics[width=9cm]{fig1.ps} \figcaption{Comparison of the V5-6 (crosses) and L12-2 (filled circles) observing modes. The former mode records the four components of the Stokes vector, while the latter acquires Stokes $I$ and Stokes $V$ (see text for details). The solid-black line corresponds to the Fourier Transform Spectrometer data (FTS-atlas) from Kitt Peak observatory in Arizona. The effective Land\'e factors $g_{\rm eff}$ of each line are calculated under the LS approximation from the electronic configurations given in Table 1.} \end{center} The lack of linear polarization profiles $Q$ and $U$ in the L12-2 observing mode makes the polarimetric calibration of the data slightly more difficult to implement compared to V5-6. The instrument's calibration matrix had been theoretically calculated (Mart{\'\i}nez Pillet et al. 2011a) and experimentally confirmed at the INTA (Instituto Nacional de T\'ecnicas Aeroespaciales) facilities in Spain (Mart{\'\i}nez Pillet 2007; del Toro Iniesta \& Mart{\'\i}nez Pillet 2012) such that linear polarization cross-talk could be minimized by tuning the voltages of the nematic liquid crystals to the appropriate retardances. Unfortunately, this calibration stage was not performed while mated to the Sunrise telescope (main and secondary mirrors), which introduced more cross-talk than anticipated. Therefore, we expected to find a non-negligible contribution from $Q$ and $U$ in the measured linear combinations of $I$ and $V$. Fortunately, the events that we will focus on, the so-called {\it supersonic magnetic upflows in granular cells}, had already been analyzed using the V5-6 observing mode.
From that data we learned that these events have a negligible amount of linear polarization (although patches of enhanced linear polarization usually appear within 2" of these events; see for instance Fig.~4 in Borrero et al. 2010 and Figs.~3-7 in Borrero et al. 2012). This means that, even in our scenario, where there is significant cross-talk from $Q$ and $U$ into Stokes $V$, the circular polarization is not affected much. The uncorrected cross-talk does, however, slightly increase the noise level in the circular polarization.\\ IMaX used the L12-2 mode to observe the solar photosphere for about 60 minutes on June 10, 2009. The observations were recorded close to disk center, $\mu = \cos\Theta = 0.99$, where $\Theta$ corresponds to the heliocentric angle. Due to pointing problems, the time series was interrupted several times, and only about half of that time is usable. In total, there are 52 full scans of Stokes $I$ and $V$ in twelve wavelength positions. Each scan is recorded during a time interval of 33 seconds. The noise-level is estimated to be $10^{-3}$ in units of the average quiet Sun continuum intensity.\\ \section{Selection of Events}
In Borrero et al. (2010), where we employed V5-6, the supersonic magnetic upflows were detected in the circular polarization (Stokes $V$) at $\Delta \lambda = + 227$ m{\AA} (see Fig.~1). At this wavelength any signal can be produced by either a strong red-shift (downflow) from \ion{Fe}{1} 5250.217 {\AA} or by a strong blue-shift (upflow) from \ion{Fe}{1} 5250.653 {\AA}. In the aforementioned work, the signal was ascribed to the latter case because the Stokes $I$ signal from \ion{Fe}{1} 5250.217 {\AA} was blue-shifted.\\ Bearing this information in mind, we look for a strategy to find highly blue-shifted Stokes $V$ profiles in the L12-2 data. The first idea that comes to mind is to use the blue-most wavelength position in these data, $\Delta \lambda = -192.5$ m{\AA}, to find such upflows.
Unfortunately, this scanning position is not far enough towards the blue to guarantee that the observed signal at this wavelength is only due to large velocities, as it could also be caused by a magnetic field that shifts the $\sigma$-component of Stokes $V$ into this wavelength. Indeed, displaying $V(\lambda = \lambda_0-192.5~ \textrm{m}${\AA}$)$ reveals a pattern that closely resembles the network. Thus, this wavelength alone cannot be employed to uniquely identify large upflows.\\ In order to disentangle network elements from possible supersonic magnetic upflows, we propose a different strategy, in which we compare the circular polarization close to the spectral line core and the circular polarization close to the blue continuum. Let us refer to these two quantities as $V_{\rm line}$ and $V_{\rm c}$, respectively. They are defined as: \begin{eqnarray} V_{\rm line} = \frac{1}{3} \left[\sum_{i=4}^{i=6} V(\lambda_i) - \sum_{i=7}^{i=9} V(\lambda_i)\right]\\ V_{\rm c} = \frac{1}{2} \sum_{i=1}^{i=2} |V(\lambda_i)| \;, \end{eqnarray} \noindent where the index $i$ runs from the blue-most ($i=1$) to the red-most ($i=12$) scanning positions indicated by the filled circles in Fig.~1. We note that, in the definition of $V_{\rm line}$, we are subtracting Stokes $V$ in the red wing ($i=7,8,9$) from Stokes $V$ in the blue wing ($i=4,5,6$). This is done in order to obtain the polarity of the magnetic field vector, and has the additional benefit of partially canceling the noise.\\ In the top-left panel of Figure 2 we display $V_{\rm line}$ (normalized to the averaged quiet Sun continuum intensity $I_{\rm qs}$) over a portion of the full field-of-view from one of our available 52 snapshots. The regions of enhanced $V_{\rm line}$ correspond mostly to the network elements since the circular polarization close to the line-center is large. In the top-right panel of Figure 2 we plot, for the same region, the absolute value of the quotient of $V_{\rm c}$ and $V_{\rm line}$.
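For concreteness, the two diagnostics of Eqs.~(1) and (2), together with the selection threshold applied below, can be computed in a few lines (a minimal illustration; the function name, array layout, and NumPy implementation are ours, not part of the IMaX pipeline):

```python
import numpy as np

def select_supersonic_upflows(stokes_v, threshold=4.0):
    """Flag candidate supersonic magnetic upflows in an L12-2 Stokes V scan.

    stokes_v : array of shape (ny, nx, 12) holding Stokes V at the twelve
    wavelength positions (blue-most first), normalized to the average
    quiet-Sun continuum intensity.
    """
    # Eq. (1): blue wing (i=4..6) minus red wing (i=7..9); the scan index i
    # is 1-based in the text, so these map to Python slices [3:6] and [6:9].
    v_line = (stokes_v[..., 3:6].sum(axis=-1)
              - stokes_v[..., 6:9].sum(axis=-1)) / 3.0
    # Eq. (2): unsigned signal at the two blue-most (continuum-side) points.
    v_c = np.abs(stokes_v[..., 0:2]).sum(axis=-1) / 2.0
    # Selection criterion of Sect. 3: |V_c / V_line| > 4.
    return np.abs(v_c / v_line) > threshold, v_line, v_c
```

Pixels dominated by network fields have large $V_{\rm line}$ and hence small ratios, whereas highly blue-shifted profiles concentrate their signal in the blue-most points and yield large ratios.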
In this panel, the network appears as those regions where $\| V_{\rm c} / V_{\rm line}\| \rightarrow 0$. Regions where $\| V_{\rm c} / V_{\rm line}\| \gg 1$ denote Stokes $V$ profiles that are highly blue-shifted. Our selection criterion will consider as {\it supersonic magnetic upflows} any pixel in the field-of-view where $\| V_{\rm c} / V_{\rm line}\| > 4$. In Figure 2, those regions are indicated by the white contours. Note that the selected events also coincide with the center/edges of granular cells where the line-of-sight velocity is blue-shifted by about $-2$ km s$^{-1}$ (see black contours in the bottom-left and bottom-right panels in Figure 2). This confirms that the selected events have the same properties as those studied in Borrero et al. (2010).\\ The line-of-sight velocity $V_{\rm LOS}$ displayed in Figure 2 is obtained by calculating the center-of-gravity of Stokes $I$ and correcting for gravitational red-shift, convective blue-shift, and for the wavelength shift across the FOV due to the collimated configuration of the instrument (see Sect.~9.1 in Mart{\'\i}nez Pillet et al. 2011a). We mention this in order to avoid confusion with the $V_{\rm LOS}$ that will be used in the next sections of this paper, which will be inferred from the simultaneous fitting of Stokes $I$ and $V$.\\ It is also important to clarify that Figure 2 shows a somewhat exceptional situation, in which three events occur on a small portion of the full field-of-view. This case has been selected to highlight the properties of the selected events, but by no means corresponds to the typical case seen in the observations. In fact, from the 52 available snapshots, the aforementioned selection criterion selects 857 pixels, belonging to 122 events. This results in an average of 2.3 events in each 50"$\times$50" snapshot. Unfortunately, the pointing problems described in Sect.~2 prevented us from having a continuous time-series and therefore we cannot track each event in time.
Thus, some of those 122 events might correspond to the same one but at different times in their evolution. Consequently, we cannot compare these numbers with the occurrence rates obtained in Borrero et al. (2010).\\ \begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[width=9cm]{fig2a.ps} & \includegraphics[width=9cm]{fig2b.ps} \\ \includegraphics[width=9cm]{fig2c.ps} & \includegraphics[width=9cm]{fig2d.ps} \\ \end{tabular} \figcaption{{\it Top-left}: close-up ($28"\times 25"$) of a snapshot displaying the total circular polarization around the center of the spectral line $V_{\rm line}$ (Eq.~1) and normalized to the average continuum intensity over the quiet Sun $I_{\rm qs}$. {\it Top-right}: same FOV as before but showing $\| V_{\rm c} / V_{\rm line}\|$, where $V_{\rm c}$ corresponds to the circular polarization on the blue-wing of the spectral line (Eq.~2). {\it Bottom-left}: same FOV as before but showing the continuum intensity $I_{\rm c}$ normalized to the average continuum intensity over the quiet Sun $I_{\rm qs}$. {\it Bottom-right}: line-of-sight velocity $V_{\rm LOS}$ derived from the center-of-gravity of Stokes $I$. In all panels the black and white contours enclose the regions where $\| V_{\rm c} / V_{\rm line}\| > 4$. These patches contain pixels that are selected for our study (see text for details). While the upper panels are unreconstructed, the bottom ones have been subject, for visualization purposes, to image restoration (see Sect.~2).} \end{center} \end{figure*} \section{Inversion of Stokes profiles} Once we have the selected pixels that correspond to the possible {\it supersonic magnetic upflows in granules}, we proceed to extract physical information from their corresponding Stokes $I$ and $V$ profiles. This is done by means of the inversion of the radiative transfer equation employing the SIR (Stokes Inversion based on Response functions) code (Ruiz Cobo \& del Toro Iniesta 1992).
Starting with an initial model of the solar photosphere, SIR solves the radiative transfer equation to obtain the theoretical Stokes vector that arises from such a model. The observed Stokes vector is then compared to the theoretical one through a $\chi^2$ merit-function. Via a Levenberg-Marquardt method, the original model is then iteratively modified until $\chi^2$ reaches a minimum. At each iteration step, the Levenberg-Marquardt algorithm provides the perturbations in the physical parameters at several optical-depth positions called {\it nodes}, which are needed to produce a better fit to the observed Stokes profiles. Each node represents a free parameter in the inversion. The resulting model (from the $\chi^2$-minimization) can then be considered to represent the physical conditions present in the solar photosphere. For some recent reviews on this subject we refer the reader to del Toro Iniesta (2002), Bellot Rubio (2006) and Ruiz Cobo (2007). Since photon noise affects the results of the inversion rather negatively (see e.g. Borrero \& Kobel 2011, 2012), and owing to the fact that image reconstruction techniques slightly increase the noise in the observations, we use the unreconstructed data, with a spatial resolution of 0.25 arcsec, for the inversion (see Sect.~2).\\ Our inversions have been carried out with a 1-component model, in which the photosphere is considered to be laterally homogeneous within each selected pixel. Thus, we only need to consider the vertical variations of the physical parameters in the photosphere. These variations are often treated in terms of the dimensionless optical-depth at a reference wavelength of 5000 {\AA}, $\tau_5$, instead of the geometrical height $z$.
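The iterative $\chi^2$-minimization described above can be illustrated with a toy example (an assumption-laden sketch: the Gaussian ``synthesis'' below stands in for solving the radiative transfer equation, and a numerical Jacobian replaces SIR's analytic response functions):

```python
import numpy as np

def forward(params, wav):
    """Toy 'synthesis': a Gaussian absorption line with parameters
    (depth, shift, width), standing in for solving the RTE."""
    depth, shift, width = params
    return 1.0 - depth * np.exp(-((wav - shift) / width) ** 2)

def invert(obs, wav, p0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt fit of forward() to an observed profile."""
    p = np.asarray(p0, dtype=float)
    chi2 = np.sum((obs - forward(p, wav)) ** 2)
    n = p.size
    for _ in range(n_iter):
        # Numerical (forward-difference) Jacobian of the synthesis.
        eps = 1e-6
        J = np.stack([(forward(p + eps * np.eye(n)[k], wav)
                       - forward(p, wav)) / eps for k in range(n)], axis=1)
        r = obs - forward(p, wav)
        JtJ = J.T @ J
        # Marquardt-damped normal equations give the parameter perturbations.
        step = np.linalg.solve(JtJ + lam * np.diag(np.diag(JtJ)), J.T @ r)
        chi2_new = np.sum((obs - forward(p + step, wav)) ** 2)
        if chi2_new < chi2:              # accept: better fit, relax damping
            p, chi2, lam = p + step, chi2_new, lam * 0.3
        else:                            # reject: increase damping
            lam *= 10.0
    return p, chi2
```

Each accepted step plays the role of the node perturbations in SIR, and the loop terminates (here, after a fixed number of iterations) once $\chi^2$ no longer improves.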
In general, the physical parameters relevant for the formation of spectral lines are: temperature $T(\tau_5)$, line-of-sight velocity $V_{\rm LOS}(\tau_5)$, and the three components of the magnetic field vector: $B(\tau_5)$ (modulus of the magnetic field vector), $\gamma(\tau_5)$ (inclination of the magnetic field vector with respect to the observer's line-of-sight) and $\phi(\tau_5)$ (azimuth of the magnetic field vector in the plane perpendicular to the observer's line-of-sight). Other quantities, such as the gas $P_{\rm g}(\tau_5)$ and electron $P_{\rm e}(\tau_5)$ pressure, as well as the density $\rho(\tau_5)$, are derived from $T(\tau_5)$, the condition of hydrostatic equilibrium, and the equation of ideal gases with a variable mean molecular weight. In our inversions we allow for the following free parameters (nodes): three for $T(\tau_5)$, one for $B(\tau_5)$ (constant value with height), two for $\gamma(\tau_5)$, five for $V_{\rm LOS}(\tau_5)$, and finally one for the micro-turbulent velocity $V_{\rm mic}(\tau_5)$ (also constant with height). This adds up to a total of 12 free parameters. The full stratification of the physical parameters with $\tau_5$ is obtained via interpolation across the values at the nodes. In Section 6 we give more details about our choice of model and free parameters, and discuss its implications.\\ \begin{center} \includegraphics[width=9cm]{fig3a.ps}\\ \includegraphics[width=9cm]{fig3b.ps}\\ \figcaption{Observed Stokes $I$ (upper panel) and Stokes $V$ (bottom panel) profiles from the 857 selected pixels. For better visualization we have interpolated the data to a common wavelength grid and changed the sign in Stokes $V$ so that the lobe is always negative. All profiles are normalized to the average continuum intensity of the quiet Sun $I_{\rm qs}$.} \end{center} It is important to recall here that the L12-2 observing mode does not record the linear polarization profiles ($Q$, $U$; see Sect.~2).
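As a minimal illustration of the node scheme described above (assuming equidistant node placement over the optical-depth grid, and using linear interpolation as a stand-in for SIR's polynomial/spline interpolation):

```python
import numpy as np

def expand_nodes(node_values, logtau):
    """Expand values given at a few nodes to the full log(tau_5) grid.

    Nodes are placed equidistantly over the grid; a single node means a
    height-independent value (e.g. B or V_mic above), while more nodes
    are interpolated across the grid.
    """
    node_values = np.atleast_1d(np.asarray(node_values, dtype=float))
    if node_values.size == 1:               # constant with height
        return np.full_like(logtau, node_values[0])
    node_pos = np.linspace(logtau[0], logtau[-1], node_values.size)
    return np.interp(logtau, node_pos, node_values)
```

With this convention, the twelve free parameters listed above expand into full stratifications: for instance, two nodes in $\gamma(\tau_5)$ yield a linear trend with $\log\tau_5$, which is what permits $B_\parallel$ to change sign with height.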
In the case of strong magnetic fields it is plausible to recover, through the magneto-optical effects, the azimuth of the magnetic field vector $\phi$ from only Stokes $I$ and $V$ (Ruiz Cobo \& del Toro Iniesta 1992). However, it is unclear whether this is also the case when the circular polarization signals are weak ($V/I_{\rm qs} \leq 0.03$; see Fig.~1 and also Borrero \& Kobel 2011), and the spectral resolution is limited. Therefore, we do not invert the azimuth angle of the magnetic field vector $\phi(\tau_5)$, and instead we fix it at a value of zero. Finally, we note that IMaX's spectral transmission profile is fully considered in the inversion. This is done by convolving the theoretical Stokes vector, before it is compared to the observed Stokes vector at each iteration step, with the instrument's transmission curve (see Fig.~2 in Mart{\'\i}nez Pillet et al. 2011a).\\ In order to improve convergence, and to reduce the chances of the Levenberg-Marquardt algorithm falling into a local minimum, each of the 857 selected pixels is inverted a total of 100 times, where the initial values of the physical parameters are randomly chosen each time. From all those 100 independent inversions we retain only the one with the smallest value of $\chi^2$. In the inversion we include the effects of the three spectral lines present in Figure 1. This is done because it is not possible to rule out the possibility of the \ion{Co}{1} 5250.008 {\AA} and/or \ion{Fe}{1} 5250.653 {\AA} spectral lines entering the wavelength regions scanned by the L12-2 observing-mode (see filled circles in Figure 1), especially when considering events that involve large line-of-sight velocities\footnote{In fact, as mentioned in the first paragraph in Sect.~3, this already happened in the V5-6 data (see Borrero et al. 2010 for details).}.
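The random-initialization strategy can be sketched as follows, where `invert_fn` is a placeholder for the per-pixel inversion returning a model and its $\chi^2$ (names and the parameter range are illustrative, not the actual SIR interface):

```python
import numpy as np

def best_of_restarts(invert_fn, n_restarts=100, seed=0):
    """Run an inversion from many random initializations and keep only
    the solution with the smallest chi^2, mitigating local minima."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_restarts):
        p0 = rng.uniform(-1.0, 1.0, size=3)   # random initial parameters
        params, chi2 = invert_fn(p0)
        if best is None or chi2 < best[1]:
            best = (params, chi2)
    return best
```

Retaining the minimum-$\chi^2$ solution out of many restarts is a standard, if computationally expensive, safeguard against the Levenberg-Marquardt algorithm stalling in a local minimum.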
The atomic parameters for these three spectral lines are given in Table 1.\\ \begin{deluxetable}{cccccccc} \tablecaption{Atomic parameters of the spectral lines included in the inversion.} \tablehead{\colhead{Species} & \colhead{$\lambda_{\astrosun}$\tablenotemark{1}} & \colhead{$\chi_{\rm low}$} & \colhead{$\log(gf)$} & \colhead{Elec. conf.} & \colhead{$\sigma$}\tablenotemark{2} & \colhead{$\alpha$}\tablenotemark{2} & \colhead{$g_{\rm eff}$}\\ \colhead{} & \colhead{[{\AA}]} & \colhead{[eV]} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{}} \startdata \ion{Co}{1} & 5250.008 & 4.175 & $-$0.114 & ${^4}G_{5/2}-{^4}H_{7/2}$ & n/a & n/a & 0.785 \\ \ion{Fe}{1} & 5250.217 & 0.121 & $-$4.938 & ${^5}D_0-{^7}D_1$ & 207 & 0.253 & 3.0\\ \ion{Fe}{1} & 5250.653 & 2.198 & $-$2.198 & ${^5}P_2-{^5}P_3$ & 344 & 0.268 & 1.5\\ \enddata \tablenotetext{1}{$\lambda_{\astrosun}$ represents the central wavelength position of the spectral line on the Sun.} \tablenotetext{2}{$\sigma$ and $\alpha$ represent the atomic transition's cross-section (in units of Bohr's radius squared $a_0^2$) and velocity parameter, respectively, for collisions with neutral atoms under the ABO theory (Anstee \& O'Mara 1995; Barklem et al. 1998). Collisional data for \ion{Co}{1} are not available, and therefore we rely on Uns\"old's theory (Uns\"old 1955) to calculate the collisional broadening of the spectral line.} \end{deluxetable} \section{Inversion results} The inversion of the Stokes $I$ and $V$ profiles from the 857 selected pixels provides (among other physical parameters) the stratification with optical depth of the temperature $T(\tau_5)$, line-of-sight velocity $V_{\rm LOS}(\tau_5)$, magnetic field strength $B(\tau_5)$, and inclination $\gamma(\tau_5)$ in all selected pixels. Searching for similarities among all available results turns out to be a difficult task, as the inferred stratifications do not seem to follow, at first glance, any particular pattern.
However, if one looks at the optical-depth dependence of the line-of-sight component of the magnetic field vector $B_\parallel = B \cos \gamma$, one realizes that almost all pixels show a change in sign from $B_{\parallel} > 0$ to $B_{\parallel} < 0$ (or vice versa) at some optical-depth point $\tau_5$ (height in the atmosphere). This observation allows us to classify the different results as a function of the optical-depth where the polarity of the magnetic field reverses. In particular we distinguish three cases: polarity change at around $\log\tau_5 \approx -1$, $\log\tau_5 \approx -2$, and finally, possible polarity change at $\log\tau_5 < -3$. Hereafter, these families of solutions will be referred to as {\it family 1}, {\it family 2}, and {\it family 3}, respectively. Figures 4, 5 and 6 show the individual results as a function of $\tau_5$ from the inversion of all pixels belonging to each family (black-dashed lines). Each of these figures shows: the line-of-sight component of the magnetic field vector $B_\parallel$ (top-left panel), the temperature $T$ (top-right panel), and the line-of-sight velocity $V_{\rm LOS}$ (bottom-left panel). For the latter two physical parameters, $T(\tau_5)$ and $V_{\rm LOS}(\tau_5)$, we also show in solid-red lines the average stratification obtained from all the pixels belonging to a given family. For comparison purposes, the top-right panels in Figs.~4, 5 and 6 also display the average temperature stratification (blue-solid lines) in granules (Borrero \& Bellot Rubio 2002). This model is chosen because, as mentioned in Sect.~3, these events typically occur at the center or edges of granular cells (see also Fig.~2). In the following, we will describe each of the aforementioned families separately.\\ \subsection{Family 1: polarity change at $\log\tau_5 \approx -1$.} This family comprises 123 pixels out of the 857 selected ones (14.3 \%).
As imposed by our classification criterion, $B_\parallel$ changes sign at around $\log\tau_5 \approx -1$ (Fig.~4; top-left panel). The temperature shows an enhancement of about $400-600$ K in the mid- and upper-photosphere ($\log\tau_5 \in [-1.5,-3]$) with respect to a typical granule (Fig.~4; top-right panel). The line-of-sight velocity (Fig.~4; bottom-left panel) displays variations from extreme downflows ($V_{\rm LOS} \approx 12$\kms) in the upper-photosphere ($\log\tau_5 \approx -3$) to large upflows in the mid-photosphere ($V_{\rm LOS} \approx -7$\kms~ at $\log\tau_5 \approx -2$), and then back to downflows in the deep-photosphere ($V_{\rm LOS} \approx 3$\kms~ at $\log\tau_5 \approx 0$). Since the speed of sound in the solar photosphere is about $V_s \simeq 7$\kms, the inferred line-of-sight velocities are close to supersonic. Once we consider that $V_{\rm LOS}$ is only a lower limit of the total modulus of the velocity vector, the final velocities are likely to be much larger, hence supersonic. An additional feature is the fact that the line-of-sight velocity remains close to zero where the polarity of the magnetic field changes ($\log\tau_5 \approx -1$).\\ \begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[width=9cm]{fig4a.ps} & \includegraphics[width=9cm]{fig4b.ps} \\ \includegraphics[width=9cm]{fig4c.ps} & \includegraphics[width=9cm]{fig4d.ps} \\ \end{tabular} \figcaption{{\it Top-left panel}: line-of-sight component of the magnetic field vector as a function of the optical depth $B_\parallel(\tau_5)$. {\it Top-right panel}: temperature as a function of the optical depth $T(\tau_5)$. {\it Bottom-left panel}: line-of-sight velocity as a function of the optical depth $V_{\rm LOS}(\tau_5)$. Dashed-black lines show the results from the inversion of the Stokes profiles $I$ and $V$ from each of the 123 selected pixels that belong to {\it family 1}. The red-solid line shows the average obtained from the individual results.
Blue-solid line in the top-right panel corresponds to the temperature stratification in the granular model by Borrero \& Bellot Rubio (2002). {\it Bottom-right panel}: average of the observed (circles) and of the fitted (solid line) Stokes $V$ profiles in all pixels belonging to {\it family} 1. The mean value of $\chi^2$ from all the individual inversions is also indicated.} \end{center} \end{figure*} \subsection{Family 2: polarity change at $\log\tau_5 \approx -2$.} This family contains 434 of the 857 selected pixels (50.7 \%). As imposed by the classification criterion, $B_\parallel$ changes sign at around $\log\tau_5 \approx -2$ (Fig.~5; top-left panel). Similarly to {\it family 1}, the temperature in {\it family 2} also shows enhancements ($150-200$ K) compared to an average granule. Although not as large as in the first case, the increase occurs over all optical depths (Fig.~5; top-right panel). The line-of-sight velocities are again large, although in this case they always involve upflows (Fig.~5; bottom-left panel). These upflows are visible both in the upper-photosphere ($V_{\rm LOS} \approx -2$\kms~ at $\log\tau_5 \approx -3$), and in the deep-photosphere ($V_{\rm LOS} \approx -7$\kms~ at $\log\tau_5 \approx 0$). Again, the line-of-sight velocity remains close to zero where the magnetic field changes polarity ($\log\tau_5 \approx -2$).\\ \begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[width=9cm]{fig5a.ps} & \includegraphics[width=9cm]{fig5b.ps} \\ \includegraphics[width=9cm]{fig5c.ps} & \includegraphics[width=9cm]{fig5d.ps} \\ \end{tabular} \figcaption{Same as Figure 4 but showing the 434 pixels belonging to {\it family 2}.} \end{center} \end{figure*} \subsection{Family 3: polarity change at $\log\tau_5 < -3$?} There are 300 pixels in this family, which corresponds to 35.0 \% of the total number. As our classification criterion imposes, there is no real change in the polarity of the magnetic field vector (Fig.~6; top-left panel).
Interestingly, $B_\parallel$ decreases as $\log\tau_5$ decreases. If this decreasing trend continued towards higher photospheric layers, the polarity would eventually switch, although that would happen close to the temperature minimum. As in the two previous families, the temperature in the mid-photosphere ($\log\tau_5 \in [-1,-2]$) is enhanced with respect to the average temperature of a granule (Fig.~6; top-right panel). In this case, the enhancement is the largest ($\simeq 1000$ K) of the three studied families. Finally, the line-of-sight velocity changes from large upflows in the mid-photosphere, $V_{\rm LOS} \approx -8$\kms~ at $\log\tau_5 \approx -2$ (Fig.~6; bottom-left panel), to extreme downflows in the low-photosphere, $V_{\rm LOS} \approx 15-20$\kms~ at $\log\tau_5 \approx 0$. As in all previous families, the line-of-sight velocity drops to zero as the line-of-sight component of the magnetic field vector vanishes, which now happens in the upper-photosphere ($V_{\rm LOS} \rightarrow 0$ at $\log\tau_5 \approx -3$).\\ \begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[width=9cm]{fig6a.ps} & \includegraphics[width=9cm]{fig6b.ps} \\ \includegraphics[width=9cm]{fig6c.ps} & \includegraphics[width=9cm]{fig6d.ps} \\ \end{tabular} \figcaption{Same as Figure 4 but showing the 300 pixels belonging to {\it family 3}.} \end{center} \end{figure*} \section{Discussion on model choice} In this section we discuss the choice of model and free parameters in the inversion described in Section 4. To make this choice we look at the observed Stokes profiles and try to establish the most suitable model to fit them.
The first step is to realize that, as illustrated in Fig.~3, all 857 selected Stokes $V$ profiles feature the almost complete lack of one of the lobes (produced by the $\Delta M = \pm 1$ transitions in the Zeeman pattern) in the circular polarization.\\ The question is now to decide whether these kinds of profiles are really asymmetric or whether, on the other hand, they are symmetric but the missing lobe in Stokes $V$ is located at $\Delta \lambda < -200$ m{\AA}, hence lying just outside of the region scanned by IMaX. This question can be answered by looking at the wavelength region around $\Delta \lambda \approx 150-200$ m{\AA} in Figs. 4, 5 and 6 (bottom-right panels). The average observed Stokes profiles (filled circles) for all families around this region are negative and, although small, clearly above the noise level. We have established that this corresponds to the Stokes $V$ signal from the nearby \ion{Fe}{1} 5250.653 {\AA} spectral line by repeating all the inversions described in Section 4, but excluding this spectral line (see Table 1). In this case, the fitted profiles in the bottom-right panels in Figs.~4-5-6 (solid-black lines) fail to reproduce the negative values of Stokes $V$ around $\Delta \lambda \approx 150-200$ m{\AA}. With this in mind we now move to $\Delta \lambda < -200$ m{\AA} and conclude that, if there is a missing positive lobe in Stokes $V$ there, then the signal from \ion{Fe}{1} 5250.653 {\AA} at $\Delta \lambda \approx 150-200$ m{\AA} should also be positive.
However, it is negative, and therefore we conclude that the observed Stokes $V$ profiles from \ion{Fe}{1} 5250.653 {\AA} are indeed asymmetric and that there is no missing positive lobe at $\Delta \lambda < -200$ m{\AA}.\\ Having established that the observed circular polarization profiles are highly asymmetric, we follow Solanki \& Montavon (1993) and Landolfi \& Landi Degl'Innocenti (1996) and consider that those asymmetries are caused by the simultaneous effect of gradients in the line-of-sight velocity $V_{\rm LOS}$ and in the line-of-sight component of the magnetic field vector $B_\parallel = B \cos\gamma$. The large number of nodes allowed in $V_{\rm LOS}(\tau_5)$ (see Sect.~4) is a direct consequence of the need to fit these extremely asymmetric Stokes $V$ profiles.\\ Gradients in $B_\parallel$ have been included by, as already mentioned, allowing two nodes in $\gamma(\tau_5)$ and one node in $B(\tau_5)$. This combination is more general than allowing two nodes for $B(\tau_5)$ and only one for $\gamma(\tau_5)$: since the modulus of the magnetic field vector is defined as a positive quantity, an inversion with only one node in $\gamma(\tau_5)$ cannot yield solutions where $B_\parallel$ changes sign with optical depth. The possibility of obtaining solutions where $B_\parallel$ changes sign does not imply that all inferred stratifications will actually show it (e.g., {\it family} 3; see Sect.~5.3 and Fig.~6).\\ Since our discussion in the next section will rely heavily on the presence of a reversal in the polarity of the magnetic field, it is crucial to establish whether this feature is really needed to reproduce the observed profiles. To this end we once more repeated our inversions (as described in Sect.~4), but this time employing one node in $\gamma(\tau_5)$ and two in $B(\tau_5)$.
In this case, all retrieved stratifications in $B_{\parallel}(\tau_5)$ show the same sign at all optical depths (as in {\it family 3}). Interestingly, the quality of the fits worsens significantly. The average value of the merit function, $\tilde{\chi}^2$, doubles in those pixels that belonged to {\it families} 1 and 2. Meanwhile, $\tilde{\chi}^2$ also increases in those pixels that belonged to {\it family 3}, but comparatively less ($< 50$ \%). These results suggest that allowing two nodes in $\gamma(\tau_5)$ is necessary to successfully fit the observed Stokes profiles, and therefore that the inferred reversal in the polarity of the magnetic field $B_\parallel$ (in {\it families} 1 and 2) is not an artifact imposed by our choice of model, but rather a characteristic feature of these events.\\ Finally, as previously discussed, a model that contains only one component assumes that the solar photosphere is laterally homogeneous within each observed pixel, or at least, assumes that the vertical variations in the physical parameters play a more important role than the horizontal ones in the formation of the observed spectral line. While there is no guarantee that this is indeed the case, the high spatial resolution achieved by {\sc Sunrise}/IMaX makes this approximation a reasonable first step in the study of these events.\\ \section{Discussion and conclusions} The results presented here confirm our initial conclusions from our previous studies (Borrero et al. 2010, 2012) on these extremely shifted polarization signals. Namely, that: {\bf a)} they occur mostly at the center or edges of granular cells; {\bf b)} they are characterized by supersonic upward velocities; {\bf c)} they involve magnetized plasma; and {\bf d)} magnetic fields of opposite polarities are oftentimes ($\simeq 70$ \% of the cases) seen in their proximity ($\simeq 2"$). In addition to this, the inversion of the Stokes profiles reveals that these events seem to belong to three distinct families.
These families frequently present features such as: {\bf e)} a temperature enhancement of a few hundred Kelvin in the mid-photosphere; {\bf f)} a shift from supersonic upflows to supersonic downflows at some height in the photosphere; and {\bf g)} the presence of a reversal in the polarity of the magnetic field vector, also at some height in the photosphere, at the exact location where the event occurs.\\ Owing to their common features, and under the assumption that only one physical mechanism is responsible for all the observed events, it would be almost straightforward to consider magnetic reconnection as their probable cause: magnetic field lines of opposite polarity coalesce and the energy stored in the magnetic field is released into kinetic and thermal energy. The two different polarities would channel the plasma in different directions, giving rise to both positive and negative line-of-sight velocities (Rezaei et al. 2007; Cameron et al. 2011). Unfortunately, not all investigated pixels share the aforementioned properties. For instance, only some cases ({\it families} 1 and 3) show both positive and negative supersonic line-of-sight velocities $V_{\rm LOS}$, while {\it family} 2 possesses only $V_{\rm LOS} < 0$ (upflows). In addition, the reversal in the polarity of the magnetic field vector is not always present (e.g., {\it family} 3; see Fig.~6), and only in the case of {\it family 1} (14.3 \% of the cases) does the reversal in $B_\parallel$ occur at the same location as the change in the sign of $V_{\rm LOS}$.\\ On the one hand, taking into account that the interaction between magnetic fields and granular convection leads to a rich variety of phenomena, it is conceivable that the differences between the inferred families are caused by the underlying physical mechanism being different in each case. Although there are many possible candidates, a search across the available literature (Steiner et al. 1998; Cheung et al.
2008; and references therein) does not reveal any mechanism that reproduces the observational features of the events studied in this work, either in general or for the individual families. For instance, the supersonic flows predicted by Cattaneo et al. (1990) and later observed by Ryb\'ack et al. (2004) and Bellot Rubio (2009) occur above granules, but they are mostly horizontal and therefore their contribution to $V_{\rm LOS}$ is unlikely to be large. Flux-emergence processes described in Cheung et al. (2008) also take place in granules, but they do not seem to involve very large upflows. {\it Swaying motions} in flux tubes (Steiner et al. 1998) excite upward-propagating shock fronts, but they occur mainly above intergranular lanes. Finally, {\it vortex tubes} that were originally found in granules (Steiner et al. 2010) have recently been associated with fast upflows, but on nearby dark lanes (Yurchyshyn et al. 2011), and therefore they probably correspond to a different kind of event.\\ On the other hand, one could attempt to salvage the hypothesis of reconnection by adopting different views. For instance, we could argue that one cannot expect all families to be fully consistent with the classic picture of magnetic reconnection, because they might correspond to different stages in the temporal evolution of the events (see Cameron et al. 2011). In order to rule out or to confirm this possibility, one would need an uninterrupted, and possibly longer, time-series of L12-2 data. Hopefully, this will be possible during the upcoming second flight of {\sc Sunrise}/IMaX, which is scheduled to take place in the summer of 2013. It can also be argued that, even if events belonging to {\it family} 3 do not show a polarity reversal in the magnetic field, this reversal can indeed take place in the upper-photosphere ($\log\tau_5 < -3$; see Sect.~5.3). Moreover, even if the polarity reversal is not present on the same pixel, in Borrero et al.
(2010) we had already detected opposite polarities within 2" in 70\% of the events. In the future, it would be very interesting to combine IMaX observations with data from the upcoming IRIS mission, to study a possible relationship between these reconnection events and the presence of mostly unipolar regions (coronal holes) and/or type II spicules in the chromosphere (McIntosh et al. 2011).\\ \begin{acknowledgements} Comments from Oskar Steiner and Rolf Schlichenmaier are gratefully acknowledged. Thanks to Fatima Rubio for providing the heliocentric angle of the observations, and to Tino Riethm\"uller for pointing out an error in the identification of the \ion{Co}{1} spectral line in Figure 1. The German contribution to {\sc Sunrise} is funded by the Bundesministerium f\"{u}r Wirtschaft und Technologie through Deutsches Zentrum f\"{u}r Luft-und Raumfahrt e.V. (DLR), Grant No. 50~OU~0401, and by the Innovationsfond of the President of the Max Planck Society (MPG). The Spanish contribution has been funded by the Spanish MICINN under projects ESP2006-13030-C06 and AYA2009-14105-C06 (including European FEDER funds). \end{acknowledgements}
\section{Introduction} \label{Intro} The analysis of optical data at a wide frequency range collected by various astronomical surveys is a critical component used to study the origin and evolution of galaxies. Data on galaxy shape \citep{GalShape} and luminosity \citep{GalLum1,GalLum2} in various bands provide information about the evolution of galaxies at different cosmic times. As each band provides information about different characteristics of each object, stronger conclusions may be drawn from studies that incorporate data from a wide range of wavelengths. While a large range of optical wavelengths is covered by most modern surveys, such as the Dark Energy Survey \citep[DES;][]{DESDR1} and the Sloan Digital Sky Survey \citep[SDSS;][]{SDSSDR7,SDSSCoadd}, the depth, the footprint, and the signal-to-noise ratio (S/N) vary from survey to survey. In particular, these will be vastly improved with future surveys like the Legacy Survey of Space and Time (LSST) \citep{LSST}. As a result, feature extraction in a particular band may be difficult in certain regions due to incomplete field coverage by surveys with high-quality data within that band. \subsection*{Prior Work} In order to understand the underlying galaxy formation model and the physics behind galaxy properties, simulations are required to mimic observations; however, such simulations are computationally expensive. Synthetic image generation of individual objects via deep learning is an alternative method for synthetic sky catalog generation that avoids the time and computational expense of other physically driven simulations. Various neural network architectures have been used for this purpose, including variational autoencoders \citep{VAEGen1,VAEGen2,GalSimHub,AstroVaDEr} and generative adversarial networks (GANs) \citep{GANGen,LSSML}. While these methods efficiently generate mock galaxy images, the accuracy of the output images depends on that of the input images.
As a result, their physical information is fundamentally limited by the quality of the survey data they are trained with. In addition to image generation, autoencoders of various types have been used for a number of purposes in astronomy, including anomaly detection \citep{AEAnom} and object classification \citep{RGZoo,AstroVaDEr,SuperRAENN}. GANs have also been utilized for feature extraction \citep{GANWL} and anomaly detection \citep{MLAnom}. Two autoencoder architectures that are of particular importance for this work are convolutional autoencoders (CAEs) \citep{CAE} and denoising autoencoders (DAEs) \citep{DenoisingAE}. CAEs have been utilized in astronomy for purposes including classification/feature extraction \citep{CAEMerge,CAELens,CAEHSI} and anomaly detection \citep{MLAnom}. DAEs take an artificially corrupted image as input and are trained to reconstruct a distortion-free representation of it. DAEs are primarily used to eliminate noise from images \citep{DAEImg} and data \citep{DAEGravWave}, as well as for feature extraction \citep{DAEFX2,DAEFX1}. One little-explored alternative for improving the size and quality of survey datasets is through the use of feature transfer techniques across survey data. A feature transfer model is trained to recognize differences between features in corresponding image pairs $\mathcal{X}$ and $\mathcal{Y}$ from datasets $\mathbb{X}$ and $\mathbb{Y}$. Using an image from $\mathcal{X}'\in\mathbb{X}$ as input, the trained neural network can then be used to construct a representation of this image with the features characteristic of images in $\mathbb{Y}$. In the context of astronomy and astrophysics, feature transfer learning using conditional GANs has recently found application for data analysis and feature extraction. \citet{LineIntensity} developed a method to extract/reconstruct H$\alpha$ line intensity maps from noisy hydrodynamic simulation data. 
In addition, \citet{GANWL} used feature transfer techniques to extract information from weak lensing maps. However, other modified GAN architectures can be used for feature transfer learning; in particular, cycle-consistent generative adversarial networks (CycleGANs; these are described in Section \ref{Methodology}) are well suited for image analysis and generation. Developed by \citet{CycleGAN} and \citet{Pix2Pix}, CycleGANs have been used for image-to-image translation (feature transfer between paired or unpaired sets of images) \citep{CycleSemantics,CycleUltra,CycleVid,CycleFor,MolGAN}. However, there has been minimal exploration of generative models using feature transfer learning in astronomy and astrophysics. Recently, \citet{AstroCycle} used image-to-image translation to reconstruct high-frequency noise patterns characteristic of different astronomical surveys. The authors used several modified CycleGAN architectures with a semi-supervised training scheme using unpaired images to separate the signal and noise in images from two distinct surveys. A noise emulator is then used to reconstruct the noise patterns from each survey. The noise emulator can then be used to reconstruct images from a target dataset with said characteristic noise patterns. While several of their models were successful at emulating noise patterns, training using unpaired images hindered the reconstruction of small-scale features of the signal. Aside from that study, we have been unable to identify any other use of CycleGANs in astronomy and astrophysics. Methods other than feature transfer can, in principle, be used to generate representations of galaxies with altered parameters. In particular, fader networks \citep{FaderNetwork,InvGAN} have been used by \citet{FaderGen} for the purpose of testing hypotheses about mechanisms that drive galaxy formation. 
While this could be used as a method to transfer individual physical parameters of galaxies from one dataset to another, image reconstruction would not be feasible using this method because a large number of parameters must be known \textit{ab initio} to generate faithful representations of images in the target dataset. We propose a novel method of feature transfer between galaxy surveys using CAEs and CycleGANs that can be used to expand galaxy image catalogs and can be adapted to multiple wavelengths and resolutions. By training these architectures with images from DES DR1~\citep{DESDR1} paired with corresponding images from SDSS DR16~\citep{SDR16}, we demonstrate that information from DES images may be transferred to SDSS images, improving their S/N, contrast, and brightness. We show that the synthetic DES images reconstructed from SDSS images share the same characteristics as the true DES images, and that this consistency is retained when performing reconstructions using images from a separate set of lower-quality SDSS images which do not have a counterpart in the DES catalog. While other works have demonstrated that variational autoencoders \citep{VAEGen1,VAEGen2,GalSimHub,AstroVaDEr} and GANs \citep{GANGen,LSSML} are effective at generating realistic synthetic galaxy images and improving the S/N, these models both train and validate using images from the same dataset. Our method utilizes techniques that are generally similar to these; however, by using different data to train (SDSS) and validate (DES), we are able to generate false images that share the same morphological features as the SDSS images, but with a brightness and S/N more characteristic of the DES images. Like other generative models, this can be used to increase the size of survey datasets; however, this method generates false representations of real observed galaxies. 
This is beneficial when studying the properties of galaxies in a specific region that has not yet been covered by high-quality surveys. More importantly, transfer learning may allow for cross-band reconstruction: all surveys cover a limited range of wavelengths at sufficiently high quality for effective analysis, making feature extraction from particular bands impossible in certain regions of the sky. By training using images with fewer bands than the validation data, a feature transfer-based generative model may be able to generate synthetic representations of galaxies with a greater range of wavelength bands than the input image. This provides a method to allow more thorough analysis of galaxies in regions that lack sufficient band coverage. In this work, we demonstrate the creation of \texttt{Survey2Survey}, a neural network architecture used to transfer features between SDSS and DES galaxy images that can be easily generalized to other optical surveys or even across multiple wavelengths. The parameters of the SDSS and DES datasets used for training and validation are described in Section \ref{Data}. In Section \ref{Methodology}, we detail the CAE and CycleGAN architectures used. In Section \ref{Results}, we present qualitative and quantitative metrics of the accuracy of the reconstructed images, then summarize our findings in Section \ref{Conclusion}. \section{Data} \label{Data} In this section we describe the datasets used to carry out this study. We focused on optical data from the SDSS and DES surveys and their overlapping region in Stripe 82 \citep{S82}. All of the data used in this paper is publicly accessible via the respective survey websites. All images consisted of three layers (one layer for each RGB channel), where the brightness of each pixel $P_i$ was represented by a 32-bit float, $0 \leq P_i \leq 1$. 
Each SDSS image was 150\by150 \unit{pix}, and each DES image was 228\by228 \unit{pix}; the DES images were downscaled to match the dimensions of the SDSS images prior to training and reconstruction. After the reconstruction and prior to the analysis, each three-layer 150\by150 \unit{pix} image was reduced to a single layer by averaging over the RGB channels. \subsubsection*{SDSS} \label{SDSS} SDSS images were captured by the Ritchey-Chr\'etien altitude-azimuth telescope \citep{RitChTel}, the Ir\'en\'ee du Pont Telescope \citep{IreneeTel}, and the NMSU 1-Meter Telescope \citep{NMSUTel}. We selected a sample of galaxies in Stripe 82 that overlapped with the DES footprint, and randomly sampled data from outside that region and within the northern cap for a total of 25,000 galaxies. We chose galaxies with Petrosian magnitude limits $14 < R < 17.77$, redshift $z < 0.25$, and a resolution of $0.396 \unit{arcsec}/\!\unit{pix}$, using the galaxy flag produced by SDSS to select high-confidence galaxy images. Images of these galaxies were obtained from the SDSS cutout server\footnote{http://casjobs.sdss.org/ImgCutoutDR7}. \subsubsection*{DES and Overlap Region} \label{DES} DES uses the Dark Energy Camera \citep[DECam;][]{DECam} mounted at the Blanco 4m telescope at the Cerro Tololo Inter-American Observatory (CTIO) in Chile to observe $\roughly 5000 \deg^2$ of the southern sky in the $g$, $r$, $i$, $z$, and $Y$ broadband filters ranging from $\roughly 400\unit{nm}$ to $\roughly 1000\unit{nm}$ in wavelength. We used images from the Dark Energy Survey DR1 release \citep{DESDR1}, which comprises over 10,000 co-added tiles of $0.534 \deg^2$ with a resolution of $0.263 \unit{arcsec}/\unit{pix}$ and a depth reaching S/N $\roughly 10$ for extended objects up to $i_{AB}\,\roughly 23.1$. 
We selected DES galaxies using a combination of selection criteria based on the concentration and the error in the model magnitude, as recommended\footnote{https://des.ncsa.illinois.edu/releases/dr1/dr1-faq}, with $g < 17$, located in the Stripe 82 region \citep{S82} corresponding to roughly $300 \deg^2$ near the celestial equator. We selected all images from Stripe 82 that have an SDSS counterpart \citep{SDSSDR7,SDSSCoadd}. These images were obtained using the public DES cutout service\footnote{https://des.ncsa.illinois.edu/desaccess}. We removed images with incomplete coverage and cleaned the images of anomalies and contaminants such as stars using visual inspection. Each DES image was scaled to 150\by150 \unit{pix} to match the resolution of the SDSS images. We aligned the orientation and central pixels of each DES/SDSS image pair, and the final RGB composite was generated using the \citet{RGBCCD} prescription in order to closely match the SDSS colors. Figure \ref{fig:DSImages} shows examples of the galaxies selected, where we can see that the DES images appear brighter and more detailed than the SDSS images. The overlap region was used for training and validation; each SDSS image in the overlap region had a DES counterpart. In total, there were 5,538 RGB images in the overlap region. Of these, 5,000 SDSS/DES image pairs were used for training the models, while the remaining 538 were used as the validation dataset. Because of the large variation in the brightness and spatial extent of objects in the SDSS and DES datasets, we chose to use a training dataset that was $\approx 5\times$ larger than those used by both \citet{Pix2Pix} and \citet{CycleGAN}. The external dataset, which consisted of 25,076 images from outside of the Stripe 82 region, was used to provide evidence for the robustness of our methodology. These images were fainter and of lower S/N than the training and validation datasets. 
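The preprocessing described above (downscaling the 228\by228 \unit{pix} DES cutouts to the 150\by150 \unit{pix} SDSS grid and, after reconstruction, averaging the RGB channels into a single layer) can be sketched as follows. The nearest-neighbor resampling used here is an assumption for illustration; the text does not specify the interpolation scheme actually used.

```python
import numpy as np

def downscale_nearest(image, out_hw=(150, 150)):
    """Resample an (H, W, 3) image to out_hw using nearest-neighbor indexing.
    Stand-in for the unspecified interpolation used to match the SDSS grid."""
    h, w = image.shape[:2]
    rows = np.arange(out_hw[0]) * h // out_hw[0]
    cols = np.arange(out_hw[1]) * w // out_hw[1]
    return image[np.ix_(rows, cols)]

def to_single_layer(image):
    """Average the RGB channels, as done prior to the post-reconstruction analysis."""
    return image.mean(axis=-1)
```

After this step, each DES/SDSS cutout pair shares the same 150\by150 \unit{pix} footprint, so pixel-to-pixel losses are well defined.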
\begin{figure} \centering \includegraphics[width = \columnwidth]{s2d_figs/Collages/DSImg.jpg} \caption{Sample images used from the Dark Energy Survey (DES) (top row) and Sloan Digital Sky Survey (SDSS) (bottom row) datasets. More examples can be seen throughout the text.} \label{fig:DSImages} \end{figure} \section{Methodology} \label{Methodology} Convolutional Autoencoders (CAE) \citep{CAE} and Cycle-Consistent Generative Adversarial Networks (CycleGAN) \citep{CycleGAN,Pix2Pix} were used to generate synthetic galaxy images from the SDSS input images. Since the images were scaled, rotated, and centered so that each pair of pixels in a given image pair corresponded with one another, minimizing the loss function used for both models corresponded with minimizing the pixel-to-pixel differences between the reconstructed image and the DES target image. These two types of models differ in their implementation and objective function as described below. We did not apply any of the methods traditionally used to reduce overfitting and provide data augmentation, such as image rotations and translations, for either the CAE or the CycleGAN. Spatial transformations would have likely led to failure: as we intend to perform pixel-to-pixel translations, any misalignment of pixels would lead to the creation of an invalid mapping function. While this may not cause an issue in many other cases, spatial transformations on SDSS and DES images could drastically reduce the accuracy of the mapping function given the small spatial extent of the signal region relative to the background in many images. While this may have led to overfitting, the analysis of the external reconstructions provides evidence of the robustness of our method. 
As an initial application of image-to-image translation for false image generation, we chose to minimize the number of factors that could affect the pixel-to-pixel map; future research should be dedicated to establishing methods to ensure that overfitting does not occur. \subsection{Convolutional Autoencoders (CAE)} \label{CAE} An autoencoder is a neural network architecture comprising an encoder/decoder pair, typically used for unsupervised feature learning. The encoder compresses data from an input image using one or more hidden layers to isolate important features from that image, generating a latent space representation of that image with lower dimensionality. The decoder uses the information in the latent space to reconstruct a representation of the input image. The autoencoder is trained to optimize a loss function to minimize the difference between the source and reconstructed image. A convolutional autoencoder performs encoding and decoding using convolution filters: during the encoding stage, convolution filters are used to extract information from and decrease the dimensionality of the input image. Additional convolution filters are used to map the latent space representation to a reconstruction of the input image. Training is performed by iteratively modifying the weights of the convolution filters to minimize the differences between the source image and its reconstruction. Our CAE was implemented in \texttt{Keras} \citep{Keras} with a \texttt{Tensorflow} \citep{TensorFlow} backend, and was run on a 32 GB Tesla V100-PCIE GPU. Training over the course of 100 epochs (a value chosen via early stopping) took $\roughly 30$ minutes. Details about our architecture are shown in Table \ref{tab:CAEArch}. We intentionally did not substantially decrease the dimensionality of the latent space of each layer because of the complexity of the images we aimed to reproduce. 
The SDSS images were generally fainter and noisier, and objects in the DES dataset often had a greater number of pixels distinguishable from the background noise (i.e., the signal in DES images had a larger spatial extent than in the SDSS images), so it is unlikely that a low-dimensionality latent space would be capable of producing sufficiently detailed false images. \begin{table} \centering \setlength\tabcolsep{4pt} \setlength\extrarowheight{2pt} \begin{tabular}{|c|c|c|c|} \hline \textbf{Stage} & \textbf{Output Shape} & \textbf{Activation} & $\bm{N_{\textnormal{\textbf{P}}}}$ \\ \hline \textbf{Input} & 150\by150\by3 & \multicolumn{2}{c|}{N/A} \\ \hline \multirow{3}{*}{\textbf{Encoder}} & 150\by150\by128 & ReLU & 3584 \\ {} & 150\by150\by64 & ReLU & 73792 \\ {} & 150\by150\by32 & ReLU & 18464 \\ \hline \multirow{4}{*}{\textbf{Decoder}} & 150\by150\by32 & ReLU & 9248 \\ {} & 150\by150\by64 & ReLU & 18496 \\ {} & 150\by150\by128 & ReLU & 73856 \\ {} & 150\by150\by3 & Sigmoid & 3459 \\ \hline \end{tabular} \caption{CAE architecture used for image reconstruction. The initial input and final output images were 150\by150 \unit{pix} with 3 color channels; the output shape of each image in the encoder and decoder stages is length \ensuremath{\times} width \ensuremath{\times} number of filters. Each row in the encoder and decoder stages represents a single convolution layer with the specified activation function; convolution was performed using a 3\by3 kernel with a stride of 1 and zero padding. The image passed to the subsequent convolution layer had dimensions corresponding to that row's output shape. $N_{\textnormal{P}}$ is the number of trainable parameters in that layer. The number of filters and activation functions used were chosen through manual tuning.} \label{tab:CAEArch} \end{table} The RGB data from each image was separated into three layers, each of which was used to generate a unique set of filters. 
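As an illustration, the layer stack in Table \ref{tab:CAEArch} can be sketched in \texttt{Keras}. This is a minimal sketch: the layer widths, kernel size, padding, activations, optimizer, and loss follow the table and the surrounding text, while all other training details are simplified.

```python
# Minimal Keras sketch of the CAE in Table 1 (illustrative; training details simplified).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cae():
    model = models.Sequential([
        # Encoder: 3x3 convolutions, stride 1, zero ("same") padding, ReLU
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        # Decoder: mirror of the encoder; sigmoid keeps outputs in [0, 1]
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.Conv2D(3, 3, padding="same", activation="sigmoid"),
    ])
    model.build((None, 150, 150, 3))  # 150x150 pix RGB input
    model.compile(optimizer="adadelta", loss="mse")
    return model
```

With these settings the per-layer parameter counts reproduce the $N_{\textnormal{P}}$ column of the table (200,899 trainable parameters in total).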
The encoder and decoder both consisted of three hidden layers, each of which filtered the image data from the previous layer using 3\by3 \unit{pix} convolution filters (the number of filters in each layer is listed in Table \ref{tab:CAEArch}). These filters were initialized using randomly generated weights. Rectified Linear Unit (ReLU) activation functions were used for each layer of the encoder and decoder, and a sigmoid activation function was used during the final reconstruction phase. For each epoch, the input image $\vec{x}_0$ was an image from the SDSS catalog, while the target image $\vec{x}_T$ was the same object taken from the DES catalog. The difference between the reconstructed image $\vec{x}'_0$ and the target image was calculated using the mean squared error loss function \begin{align} \mathcal{L}\left(\vec{x}'_0, \vec{x}_T\right) &= \left\lVert \vec{x}_T - \vec{x}'_0 \right\rVert^2. \end{align} The \texttt{Adadelta} \citep{ADADELTA} optimizer was used to determine filter weights. At the conclusion of 100 training epochs, the trained algorithm was used to reconstruct the DES validation images from their corresponding SDSS image. \subsection{Cycle-Consistent Generative Adversarial Networks (CycleGAN)} \label{CycleGAN} \begin{figure*} \centering \begin{minipage}{\textwidth} \centering \includegraphics[width = \textwidth,keepaspectratio]{s2d_figs/Architecture/CycleGAN_Arch_Final.pdf} \end{minipage} \caption{A representation of the architecture of a CycleGAN. \textbf{1.} A false image representation $\hat{x}$ is generated from $y$, a member of the target dataset, via the mapping function $F$. \textbf{2.} $\hat{x}$ is mapped to a false image $\hat{y}$ via the mapping function $G$. \textbf{3.} The GAN loss function $\mathcal{L}_{\textnormal{GAN}}^X$ for discriminator $D_{\mathcal{X}}$ is calculated by comparing $x$ and $\hat{x}$. 
\textbf{4.} The backward cycle-consistency loss function $\mathcal{L}_{\textnormal{cyc}}^{\textnormal{b}}$ is calculated by comparing $\hat{y}$ to the true target image $y$ (the forward cycle-consistency loss function $\mathcal{L}_{\textnormal{cyc}}^{\textnormal{f}}$ is calculated similarly using $\hat{x}$ and $x$). In our case, to calculate $\mathcal{L}_{\textnormal{cyc}}^{\textnormal{f}}$, we generate a DES representation of an SDSS image $x$, then use $F$ to generate a false SDSS representation of that image. $\mathcal{L}_{\textnormal{cyc}}^{\textnormal{f}}$ quantifies the differences between the source SDSS image and the false SDSS image; its combination with $\mathcal{L}_{\textnormal{cyc}}^{\textnormal{b}}$ quantifies the error accumulated when the SDSS image completes a full ``cycle'' between the SDSS and DES image spaces ($\mathbb{X} \to \mathbb{Y} \to \mathbb{X}$). \textbf{5.} These loss functions are combined with $\mathcal{L}_{\textnormal{GAN}}^Y$ and $\mathcal{L}_{\textnormal{cyc}}^{\textnormal{f}}$ to calculate the total loss function $\mathcal{L}$. $F$ and $G$ are then updated to minimize $\mathcal{L}$. This process is repeated to optimize the neural network. \label{fig:GANArch}} \end{figure*} A Generative Adversarial Network (GAN) \citep{GAN} is an unsupervised or semi-supervised generative model consisting of a generator $G$ and discriminator $D$. $D$ is trained to distinguish between images from a training dataset of ``true'' images ($\mathcal{Y}$) and those generated by sampling from the latent space of $G$ ($\mathcal{X}$). Backpropagation of error from $D$ is used to generate a map $g:\mathcal{X}\to\mathcal{Y}$ from the latent space of $G$ to the ``true'' image dataset by minimizing a loss function $\mathcal{L}(G, D, \mathcal{X}, \mathcal{Y})$. After training, the GAN may be used to generate false images that replicate the features of $\mathcal{Y}$. 
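A toy numerical sketch of the standard adversarial objective just described, with discriminator outputs taken as probabilities (the function name and inputs are illustrative, not part of our implementation):

```python
import numpy as np

def gan_loss(d_real, d_fake):
    """Adversarial objective E[log D(y)] + E[log(1 - D(G(x)))].
    d_real: discriminator outputs on true images; d_fake: outputs on generated ones."""
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.log(d_real).mean() + np.log1p(-d_fake).mean()
```

The discriminator is trained to push this quantity up (confident scores on both sets), while the generator is trained to push it down by making its outputs indistinguishable from true images; at chance performance ($D = 0.5$ everywhere) the objective equals $-2\log 2$.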
A CycleGAN \citep{CycleGAN,Pix2Pix} is a variation of a traditional GAN that minimizes cycle-consistency loss through the addition of a second generator/discriminator pair; a diagram of this architecture is shown in Fig. \ref{fig:GANArch}. Images from $\mathcal{X}$ ($\mathcal{Y}$) are used to train discriminators $D_\mathcal{X}$ ($D_\mathcal{Y}$). The generators $G:\mathcal{X}\to\mathcal{Y}$ and $F:\mathcal{Y}\to\mathcal{X}$ are trained to extremize the adversarial loss function $\mathcal{L}_\textnormal{GAN}(G, D_\mathcal{Y}, \mathcal{X}, \mathcal{Y})$ for generator $G$, discriminator $D_\mathcal{Y}$, and datasets $\mathcal{X}$ and $\mathcal{Y}$ (and, analogously, $\mathcal{L}_\textnormal{GAN}(F, D_\mathcal{X}, \mathcal{Y}, \mathcal{X})$ for $F$). For the purposes of this project, we chose to use the loss function used by \citet{CycleGAN}: \begin{align} \label{eq:GLoss} \mathcal{L}_\textnormal{GAN}(G, D_\mathcal{Y}, \mathcal{X}, \mathcal{Y}) \kern2pt=\kern2pt &\mathbb{E}_{\kern0.5pt y\sim p_\textnormal{data}(y)}\left[\kern0.5pt\log D_\mathcal{Y}(y)\kern0.5pt\right] +\\ &\mathbb{E}_{\kern0.5pt x\sim p_\textnormal{data}(x)} \left[\kern0.5pt\log(1 - D_\mathcal{Y}(G(x)))\kern0.5pt\right] \nonumber \end{align} for images $x\in\mathcal{X}$ and $y\in\mathcal{Y}$, where $p_\textnormal{data}$ is the true data distribution. Each generator was trained to minimize $\mathcal{L}_\textnormal{GAN}$ against a discriminator trained to maximize it ($\min_G\max_{D_\mathcal{Y}}\mathcal{L}_\textnormal{GAN}(G, D_\mathcal{Y}, \mathcal{X}, \mathcal{Y})$ and $\min_F\max_{D_\mathcal{X}}\mathcal{L}_\textnormal{GAN}(F, D_\mathcal{X}, \mathcal{Y}, \mathcal{X})$). To constrain the space of possible mapping functions, a CycleGAN optimizes $F$ and $G$ by minimizing the forward and backward cycle-consistency error. For images $x\in\mathcal{X}$ and $y\in\mathcal{Y}$, let $x' = F(G(x))$ and $y' = G(F(y))$. Forward cycle consistency is achieved when the difference between $x'$ and $x$ is minimized (i.e. 
$F = G^{-1} + \epsilon_x$ for some small error $\epsilon_x$), indicating that the full translation cycle beginning in $\mathcal{X}$ reproduces a close approximation of $x$; backward cycle consistency is defined identically for images $y \in \mathcal{Y}$ and $G = F^{-1} + \epsilon_y$ for some small error $\epsilon_y$. An optimized CycleGAN will simultaneously minimize the forward and backward cycle-consistency error; this is equivalent to ensuring that $F$ and $G$ are bijective inverses of one another, limiting the size of the set of possible mapping functions. This improves the robustness of the neural network and decreases the amount of training required relative to many other GAN variations. We note that for our particular case we used the architecture described in \citet{Pix2Pix}, which is adapted from the unsupervised representation learning GAN architecture introduced in \citet{radford2016}. In particular, we highlight the use of a generator with skips and a Markovian discriminator. These additions helped with the translation process and limited the GAN discriminator to high-frequency structures, reducing the potential for artifacts. The cycle-consistency loss function $\mathcal{L}_\textnormal{cyc}(G, F)$ we used is defined as \begin{align} \label{CLoss} \mathcal{L}_\textnormal{cyc}(G, F) =\kern3pt &\mathbb{E}_{x\sim p_\textnormal{data}(x)}\left[\kern0.5pt\left\lvert F(G(x)) - x\right\rvert_1\right] + \\ &\mathbb{E}_{y\sim p_\textnormal{data}(y)}\left[\kern0.5pt\left\lvert G(F(y)) - y\right\rvert_1\right], \nonumber \end{align} where $\lvert A - B\rvert_1 = \sum_i\hspace{2pt}\lvert A_i - B_i\rvert$ is the pixel-to-pixel $L^1$-norm between images $A$ (SDSS) and $B$ (DES). 
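Eq. \eqref{CLoss} can be sketched numerically as follows, with the mapping functions passed as plain callables (a toy illustration, not the trained networks):

```python
import numpy as np

def l1_pixel(a, b):
    """Pixel-to-pixel L1 distance |A - B|_1."""
    return np.abs(np.asarray(a) - np.asarray(b)).sum()

def cycle_consistency_loss(G, F, xs, ys):
    """Forward term E_x |F(G(x)) - x|_1 plus backward term E_y |G(F(y)) - y|_1,
    estimated as sample means over batches xs and ys."""
    forward = np.mean([l1_pixel(F(G(x)), x) for x in xs])
    backward = np.mean([l1_pixel(G(F(y)), y) for y in ys])
    return forward + backward
```

When $F$ and $G$ are exact inverses the loss vanishes, which is the sense in which minimizing it pushes the two generators toward bijective inverses of one another.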
The full loss function used for training $F$ and $G$ was \begin{align} \mathcal{L}(G, F, D_\mathcal{X}, D_\mathcal{Y}) \kern2pt=\kern2pt &\mathcal{L}_\textnormal{GAN}(G, D_\mathcal{Y}, \mathcal{X}, \mathcal{Y}) \kern2pt+ \\ \nonumber &\mathcal{L}_\textnormal{GAN}(F, D_\mathcal{X}, \mathcal{Y}, \mathcal{X}) \kern2pt+ \\ &\lambda\kern0.5pt\mathcal{L}_\textnormal{cyc}(G, F) \nonumber \end{align} for some parameter $\lambda$, which describes the relative importance of the optimization of the adversarial and cycle-consistency errors. For this work, we set $\lambda = 0.2$. Image translation using a CycleGAN architecture provides a benefit over a traditional GAN by constraining the allowed mapping functions, ensuring that the generator pair $F$ and $G$ are approximate inverses. This benefits the translation between noisy images by making sure that the differences in noise patterns in $\mathcal{X}$ and $\mathcal{Y}$ are taken into account, helping the network distinguish signal from noise more easily after training on many images. \section{Results and Analysis} \label{Results} Here, we demonstrate that we can transfer information from DES images to their SDSS counterparts, generating synthetic images that are brighter, of higher quality, and have less noise, yet retain the morphological information contained within the source image. We begin with a qualitative analysis to understand properties of the reconstructed images, then quantify the brightness and noise level of the image datasets. We then use correlations between the light profiles of the source and reconstructed objects to establish the small-scale differences between the datasets. Finally, we combine this information with comparative quality assessments to establish that the image reconstruction process improves the image quality, brightens objects, and reduces background noise. We provide evidence for the robustness of the reconstruction process by comparing the statistics of the validation and external datasets. 
\subsection{Qualitative Analysis} \label{QualAnalysis} \begin{figure*} \centering \begin{minipage}{\textwidth} \centering \includegraphics[width = \textwidth]{s2d_figs/Collages/TestColl.jpg} \end{minipage} \caption{Examples of galaxy images from the validation dataset (from the Stripe 82 region). Each column shows an SDSS galaxy (row A), its DES counterpart (row B), and the DES image reconstruction by the CAE (row C) and CycleGAN (row D) methods. CAE and CycleGAN residuals (reconstruction - DES) are shown in rows E and F, respectively, while the CAE and CycleGAN pixel-to-pixel brightness increases (reconstruction - SDSS) are shown in rows G and H, respectively. \textbf{Note that to increase visibility, images in rows E, F, G and H were artificially enhanced with a power law transform (\boldmath$P_i\,\to\,{P_i}' = P_i^\gamma$ for each pixel \boldmath$P_i$).} In rows E and F, $\gamma = 0.3$, while in rows G and H, $\gamma = 0.5$. Additional galaxy samples can be found in Appendix \ref{GalSamp}.} \label{fig:TestCollage} \end{figure*} \begin{figure*} \centering \begin{minipage}{\textwidth} \centering \includegraphics[width = \textwidth]{s2d_figs/Collages/ExtColl.jpg} \end{minipage} \caption{Examples of galaxy images from the external dataset (from outside of the Stripe 82 overlap region). Each column shows an SDSS galaxy (row A) and its DES image reconstruction by the CAE (row B) and CycleGAN (row C) methods. The CAE and CycleGAN pixel-to-pixel brightness increases (reconstruction - SDSS) are shown in rows D and E, respectively. \textbf{Note that to increase visibility, images in rows D and E were artificially enhanced with a power law transform (\boldmath$P_i\,\to\,{P_i}' = P_i^\gamma$ for each pixel \boldmath$P_i$, where \boldmath$\gamma = 0.5$).} Additional galaxy samples can be found in Appendix \ref{GalSamp}.} \label{fig:ExtCollage} \end{figure*} Fig. 
\ref{fig:TestCollage} shows several examples of false images generated by the neural networks paired with their corresponding SDSS and DES images from the overlap region. These images were selected to demonstrate the wide variety of galaxy types and structures included in the validation sample, which were not included during training. Row A contains images from the SDSS catalog; the corresponding DES images are located in row B. Rows C and D contain the reconstructed CAE and CycleGAN images, respectively. We can observe that the DES images and the synthetic images in rows C and D are remarkably similar, where the small differences arise from the lack of structural resolution in the reconstructed objects. Qualitatively, the reconstructed images are generally blurrier than the corresponding DES images. However, images reconstructed by both models are generally brighter than their SDSS counterparts. In addition, the false images, particularly those generated by the CAE, are often less noisy than their SDSS and/or DES counterparts. Image residuals for the CAE and CycleGAN reconstructions are shown in rows E and F, respectively. These show the pixel-to-pixel brightness differences between the reconstructed and DES images; note that these images were artificially enhanced using a power law transform ($P_i\,\to\,{P_i}' = P_i^\gamma$ for each pixel $P_i$; in rows E and F, $\gamma = 0.3$, while in rows G and H, $\gamma = 0.5$). This was done so that the residual structure was visible. It appears that both neural networks isolated and enhanced the galaxy signal while affecting the background minimally or even reducing the noise. Both networks were also able to distinguish between separate structures in each image; this is particularly evident in the second column. Rows G and H show the pixel-to-pixel brightness increase provided by the CAE and CycleGAN reconstructions relative to the corresponding SDSS galaxies, respectively. 
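The display stretch used in the residual rows can be reproduced with a one-line transform (illustration only; it was applied for visibility and plays no role in the analysis):

```python
import numpy as np

def power_law_enhance(image, gamma):
    """P_i -> P_i**gamma for each pixel; gamma < 1 brightens faint structure."""
    return np.clip(np.asarray(image, dtype=float), 0.0, 1.0) ** gamma
```

With $\gamma = 0.5$, for example, a pixel at brightness 0.04 is lifted to 0.2, making faint residual structure visible without saturating bright pixels.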
Qualitatively, the CAE reconstructions are brighter than the CycleGAN images, and provided greater amplification to the internal structure of each galaxy. Interestingly, both networks consistently amplified the galaxy center more than other regions. This amplification was not exclusive to the central galaxy; rather, it was present in most regions the network identified as a signal region. Other example galaxies are included in Appendix \ref{GalSamp}. Figure \ref{fig:ExtCollage} shows examples of images from the external dataset (from outside of the Stripe82 region). These images are generally of lower quality; however, both reconstruction models succeeded in selecting and amplifying the objects of interest with little effect on the background, even maintaining much of the small-scale detail of the images (particularly in the fourth and seventh columns). As in Fig. \ref{fig:TestCollage}, the reconstructions generally increased the spatial extent of objects in the image. Notably, the CAE reconstruction appears to have removed an artifact from the SDSS image in the final column; this phenomenon is discussed in greater detail in Section \ref{ImQual}. \subsection{Dataset Properties} \label{BaseProp} Here, we quantify the brightness and quality of images from each dataset to use as baseline comparison metrics between the original input SDSS images, the DES target images, and the reconstructions. \subsubsection*{Pseudo-Flux Magnitude} In this work we have used the RGB images from SDSS and DES to test our architectures. Image brightness was quantified using the average pseudo-flux magnitude $F$ of each image. We refer to $F$ as the ``pseudo-flux magnitude'' because, while $F$ does not represent the physical flux magnitude (our images consisted solely of (r, g, b) channel pixel values), it acts as a proxy for this quantity due to the similarities between the two measurements. 
The pseudo-flux magnitude $F$ was defined by \begin{align} \label{eq:FluxMag} F &= 30 - 2.5\log\left(\hspace{-4pt}\sum_{{\hspace{6pt}r_i\,<\,r_{\textnormal{max}}}}\hspace{-10pt} \beta_i\hspace{2pt}\right) \\ &= 30 - 2.5\log\,\beta^\circ. \nonumber \end{align} Here, the pixel brightness $\beta_i$ describes the average of the red, green, and blue channel values in $P_i$ and $\beta^\circ$ is the total pixel brightness contained within an aperture of radius $r_{\textnormal{max}} = 75\unit{pix}$. A constant factor (zero point) of 30 was added to approximate the appearance of a physical magnitude distribution. Gaussian kernel density estimates (KDEs) for histograms of the pseudo-flux magnitudes are shown in Fig. \ref{fig:FluxMag}. {The first, second, and third quartiles were used as a conservative estimate of the spread of the distribution data; this was chosen due to the heavy skew of the distributions of the external data. However, they \textit{cannot} be used to determine whether there was a significant difference between the brightnesses of different datasets. This is because the differences in pseudo-flux magnitude must be evaluated on an \textit{image-to-image} basis, not by the relative frequency of each magnitude value. The pseudo-flux magnitude values for the reconstructions in the validation datasets were comparable to those of the DES dataset, showing an improvement in the brightness relative to the SDSS dataset. 
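Eq. \eqref{eq:FluxMag} can be computed directly from an RGB cutout; the sketch below takes the aperture to be a circle of radius $r_{\textnormal{max}} = 75$ pix about the image center, which is our reading of the definition:

```python
import numpy as np

def pseudo_flux_magnitude(image, r_max=75, zero_point=30.0):
    """F = 30 - 2.5 log10(beta_circ), where beta_circ sums the channel-averaged
    pixel brightness beta_i within r_max of the image center."""
    beta = np.asarray(image, dtype=float).mean(axis=-1)   # per-pixel brightness beta_i
    h, w = beta.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)  # radius from center
    return zero_point - 2.5 * np.log10(beta[r < r_max].sum())
```

Note that brighter images yield smaller $F$, so a leftward shift of a magnitude distribution corresponds to a brightness increase.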
In the external dataset, the pseudo-flux magnitude distributions for both reconstructions were shifted to the left of the SDSS distribution, indicating that the reconstruction process successfully increased the brightness of images from the external dataset.} {To quantify the image-to-image brightnesses and provide an error estimate, we define the mean relative flux difference $\Delta\Woline[0.85]{\mathcal{F}}^{\kern1pt ij}$ between datasets $i$ and $j$ as} \begin{align} \label{eq:FluxDiff} \ensuremath{\Delta\Woline[0.85]{\mathcal{F}}^{\kern1pt ij}} = \frac{1}{N_{\textnormal{img}}}\sum_m^{N_{\textnormal{img}}}\frac{F_m^{\kern1pt j} - F_m^{\kern1pt i}}{F_m^{\kern1pt i}}, \end{align} {where $F_m^{\kern1pt i}$ and $F_m^{\kern1pt j}$ are the pseudo-flux magnitudes of corresponding images from among the $N_{\textnormal{img}}$ images in datasets $\mathbb{X}_i$ and $\mathbb{X}_j$, respectively.} {The values of \ensuremath{\Delta\Woline[0.85]{\mathcal{F}}^{\kern1pt ij}} for the external and validation datasets are shown in Table \ref{tab:FluxRat} (scaled by a factor of $10^3$ to enhance readability); the error was estimated using the standard deviation of $\frac{F_m^{\kern1pt j} - F_m^{\kern1pt i}}{F_m^{\kern1pt i}}$. Quartiles were used to estimate the error of the pseudo-flux magnitude plot (Figure \ref{fig:FluxMag}) due to the clear skew of the data, for which the variance would not provide an adequate representation of the spread. However, the distribution of \ensuremath{\Delta\Woline[0.85]{\mathcal{F}}^{\kern1pt ij}} was more symmetric than those of $F$, allowing the use of the standard deviation as an estimate of error.} {The only dataset pair in which there was not a significant difference between the fluxes was CycleGAN vs. DES. This implies that the CycleGAN reconstructions decreased the pseudo-flux magnitude of the SDSS images to match that of the DES images. It should be noted that the results from this table seem to be in conflict with those from Figure \ref{fig:FluxMag}.
This is not unexpected: Figure \ref{fig:FluxMag} shows the distribution of the relative frequency of \textit{individual} pseudo-flux magnitudes, while the values in Table \ref{tab:FluxRat} show an \textit{image-to-image} comparison of the pseudo-flux magnitudes. Hence, the values of Table \ref{tab:FluxRat} provide an appropriate measure of the differences in the pseudo-flux magnitudes of the reconstructions relative to their source images.} {Notably, there was not a statistically significant difference between the values of \ensuremath{\Delta\Woline[0.85]{\mathcal{F}}^{\kern1pt ij}} for $\mathbb{X}_i, \mathbb{X}_j = \textnormal{SDSS, CAE}$ in the validation and external datasets; the same is also true for $\mathbb{X}_i, \mathbb{X}_j = \textnormal{SDSS, CycleGAN}$. Hence, the decrease in pseudo-flux magnitude provided by the reconstructions was similar for both the validation and external datasets. This provides evidence for the robustness of our method: the increase in the brightnesses of the false images was the same regardless of the brightnesses of the input images.} \begin{figure*} \centering \begin{minipage}{\textwidth} \centering \includegraphics[width = \textwidth]{s2d_figs/single/flux.png} \end{minipage} \caption{Top: Pseudo-flux magnitudes (defined in Eqn. \eqref{eq:FluxMag}) for the validation and external data. Bottom: The first, second, and third quartiles of the corresponding datasets, providing a measure of spread. SDSS images tended to be fainter than the DES and reconstructed images.
Note that the quartiles cannot be used as a measure of error/statistical significance because this plot does not provide a representation of the image-to-image differences in $F$; this is discussed in greater detail in the text.} \label{fig:FluxMag} \end{figure*} \begin{table} \centering \setlength\tabcolsep{0pt} \setlength\extrarowheight{0pt} \begin{tabularx}{\columnwidth}{@{}|C|C|C|C|@{}} \hline \rule{0pt}{\the\baselineskip}$\ensuremath{\Delta\Woline[0.85]{\mathcal{F}}^{\kern1pt ij}} \times 10^3$ & External & \multicolumn{2}{|c|}{Validation} \\[2pt] \hline \diagbox[width=\dimexpr0.25\columnwidth\relax]{ \vspace{1pt}\hspace{4pt}$\mathbb{X}_j$}{ \vspace{-6pt}$\mathbb{X}_i$\hspace{6pt}} & \vspace{-7pt}SDSS & \vspace{-7pt}SDSS & \vspace{-7pt}DES \\ \hline \rule{0pt}{\dimexpr\baselineskip-1pt\relax} {CAE} & {$-40.65 \pm 5.59$} & {$-41.90 \pm 5.54$} & {$-9.36 \pm 6.72$} \\%[2pt] {CycleGAN} & {$-30.44 \pm 7.28$} & {$-29.33 \pm 6.28$} & {$\mathit{3.65 \pm 8.22}$} \\%[2pt] {DES} & {N/A} & {$-32.80 \pm 9.13$} & {N/A} \\%[2pt] \hline \end{tabularx} \caption{The mean proportional difference \ensuremath{\Delta\Woline[0.85]{\mathcal{F}}^{\kern1pt ij}} (scaled by a factor of $10^3$) in the pseudo-flux magnitudes (defined in Eq. \eqref{eq:FluxDiff}) between each of the image sets; the standard deviation was used to estimate the error. The only dataset pair that does not show a significant difference in $F$ is CycleGAN vs. DES.} \label{tab:FluxRat} \end{table} \begin{figure*} \centering \begin{minipage}{\textwidth} \centering \includegraphics[width = \textwidth]{s2d_figs/single/snr.png} \end{minipage} \caption{Top: KDEs of the histograms of the mean S/N (defined in Eqn. \eqref{eq:SNR}) for the validation and external data. Bottom: The first, second, and third quartiles of the distributions. Both reconstruction models were effective at increasing the S/N. 
Note that, as described in Figure \ref{fig:FluxMag}, the quartile bars cannot be used to determine statistical significance; see the text for further explanation.} \label{fig:SNR} \end{figure*} \subsubsection*{Signal-to-Noise Ratio} As a metric for image quality, we measured the average signal-to-noise ratio (S/N) of images in each dataset. We define the mean S/N as \begin{align} \label{eq:SNR} \textnormal{S/N} &= \frac{\mu^\circ_\beta}{\sigma^\circ_\beta}, \end{align} where $\mu^\circ_\beta$ $\left(\sigma^\circ_\beta\right)$ is the mean (standard deviation) of the pixel brightness $\beta$ for pixels within a radius of $r_{\textnormal{max}} = 75\unit{pix}$. {In Fig. \ref{fig:SNR}, we show KDEs for histograms of the mean S/N, along with the first, second, and third quartiles, which are used as a measure of spread; however, as discussed in the analysis of Table \ref{tab:FluxRat}, they cannot be used as a measure of error. On average, both reconstruction models were effective at boosting the S/N relative to the SDSS images, and the S/N for the CycleGAN reconstructions nearly matched that of the DES images. Denoising autoencoders have been used to reduce the amount of noise in images \citep{DenoisingAE}, so it is not surprising that the S/N in CAE images was greater than that of the target images.} {In Table \ref{tab:SNRRat}, we list the mean proportional differences in the signal-to-noise ratios between image sets $i$ and $j$ to summarize the results from Figure \ref{fig:SNR}. As in Eq.
\eqref{eq:FluxDiff}, we define the mean proportional difference between image set $i$ and $j$ as} \begin{align} \label{eq:SNRDiff} \ensuremath{\Delta\Woline[0.85]{\mathcal{S}}^{\kern1pt ij}} = \frac{1}{N_{\textnormal{img}}}\sum_m^{N_{\textnormal{img}}}\frac{\left[\textnormal{S/N}\right]_m^{\kern1pt j} - \left[\textnormal{S/N}\right]_m^{\kern1pt i}}{\left[\textnormal{S/N}\right]_m^{\kern1pt i}}, \end{align} {where $\left[\textnormal{S/N}\right]_m^{\kern1pt i}$ and $\left[\textnormal{S/N}\right]_m^{\kern1pt j}$ are the signal-to-noise ratios (as defined in Eqn. \eqref{eq:SNR}) of corresponding image pairs from among the $N_{\textnormal{img}}$ images in datasets $\mathbb{X}_i$ and $\mathbb{X}_j$, respectively. In Table \ref{tab:SNRRat}, we can see that, for the validation dataset, the CycleGAN reconstructions did not provide a significant increase in the S/N relative to the DES images ($\ensuremath{\Delta\Woline[0.85]{\mathcal{S}}^{\kern1pt ij}} - \sigma < 0 < \ensuremath{\Delta\Woline[0.85]{\mathcal{S}}^{\kern1pt ij}}$ for $\mathbb{X}_i, \mathbb{X}_j = \textnormal{DES, CycleGAN}$, where $\sigma$ is the standard deviation of \ensuremath{\Delta\Woline[0.85]{\mathcal{S}}^{\kern1pt ij}}). However, the CAE reconstructions did provide a significant increase over the DES images. The second and third columns from the left ($\mathbb{X}_i = \textnormal{SDSS}$ for the external and validation data, respectively) indicate that the reconstructions provided a significant increase in the S/N of their corresponding SDSS images. Moreover, there was not a significant difference between the value of \ensuremath{\Delta\Woline[0.85]{\mathcal{S}}^{\kern1pt ij}} for $\mathbb{X}_i, \mathbb{X}_j = \textnormal{SDSS, CAE}$ in the validation and external datasets; this relationship is the same for $\mathbb{X}_i, \mathbb{X}_j = \textnormal{SDSS, CycleGAN}$.
This implies that image reconstruction via feature translation using our architectures provides a robust method to generate false galaxy images that share the same S/N as DES galaxies in this study.} \begin{table} \centering \setlength\tabcolsep{0pt} \setlength\extrarowheight{0pt} \begin{tabularx}{\columnwidth}{@{}|C|C|C|C|@{}} \hline \rule{0pt}{\the\baselineskip}$\Delta\Woline[0.95]{\mathcal{S}}^{\kern1pt ij}$ & External & \multicolumn{2}{|c|}{Validation} \\[2pt] \hline \diagbox[width=\dimexpr0.25\columnwidth\relax]{ \vspace{1pt}\hspace{4pt}$\mathbb{X}_j$}{ \vspace{-6pt}$\mathbb{X}_i$\hspace{6pt}} & \vspace{-7pt}SDSS & \vspace{-7pt}SDSS & \vspace{-7pt}DES \\ \hline \rule{0pt}{\dimexpr\baselineskip-1pt\relax} {CAE} & {$5.138 \pm 2.719$} & {$2.893 \pm 1.819$} & {$0.710 \pm 0.317$} \\%[2pt] {CycleGAN} & {$2.067 \pm 1.001$} & {$1.334 \pm 0.716$} & {$\mathit{0.069 \pm 0.125}$} \\%[2pt] {DES} & {N/A} & {$1.224 \pm 0.779$} & {N/A} \\%[2pt] \hline \end{tabularx} \caption{The mean proportional difference in the signal-to-noise ratios (see Eqn. \eqref{eq:SNRDiff}) between each of the image sets; the standard deviation was used to estimate the error. As in Table \ref{tab:FluxRat}, the only dataset pair that does not show a significant difference in S/N is CycleGAN vs. DES.} \label{tab:SNRRat} \end{table} \subsection{Pseudo-Luminosity Profile} \label{StrucAnalysis} \begin{figure*} \centering \begin{minipage}{\textwidth} \centering \includegraphics[width = \textwidth]{s2d_figs/single/lum_prof_log_log.png} \end{minipage} \caption{Pseudo-luminosity profiles $\D[F]{S}$ for the validation (left) and external (right) data; $\D[F]{S}$ is defined in Eqn. \eqref{eq:LumProf}. The solid line represents $\D[F]{S}$, while the dotted lines show $\D[F]{S} \pm \hat{\sigma}$, where $\hat{\sigma}$ is the sample standard deviation. 
There was no statistically significant difference between the pseudo-luminosity profiles for any dataset.} \label{fig:LumProf} \end{figure*} In Section \ref{BaseProp}, we showed that the image reconstructions provided a significant increase in the brightness and S/N of their corresponding SDSS images, matching those of the DES images. We also demonstrated that the improvements in $F$ and S/N were not heavily dependent on the dataset from which the source image was taken, providing evidence for the robustness of our method. Now, we will compare the pseudo-luminosity profiles of the objects in these images to characterize the structure of the objects themselves. The pseudo-luminosity profile $\D[F]{S}$, which is analogous to the luminosity profile in observed data, is defined by \begin{align} \label{eq:LumProf} \D[F(r)]{S} &= \frac{1}{2\pi r{\Delta r}}\hspace{-4pt}\sum_{\hspace{4pt}r_i\in\textnormal{Ann}(r)}\hspace{-8pt}F_i \\ &= \frac{F_{\textnormal{Ann}(r)}}{2\pi r{\Delta r}}, \nonumber \end{align} where $F_{\textnormal{Ann}(r)}$ is the total flux contained within an annulus-shaped aperture $\textnormal{Ann}(r)$ with central radius $r$ and area $S = 2\pi r{\Delta r}$, where ${\Delta r} = 1\unit{pix}$. Plots of the pseudo-luminosity profile for the validation and external datasets are shown in Fig. \ref{fig:LumProf}. {Bootstrapping was used to estimate the sample variance $\hat{\sigma}^2$; the dotted lines represent $\D[F]{S} \pm \hat{\sigma}$. $\hat{\sigma}^2$ was estimated by resampling each dataset 1000 times; for the validation dataset, the sample size was 50, while for the external dataset, the sample size was 2500. In both the external and validation datasets, there was no significant difference between the pseudo-luminosity profiles of any image datasets, and from the pseudo-flux magnitude results (Figure \ref{fig:FluxMag}), we know that the reconstructions were generally brighter than their SDSS counterparts.
This implies that the reconstructions improved the brightness quality of the SDSS images without losing information about the object's brightness profile distribution.} \subsection{Image Quality Comparison} \label{ImQual} \begin{figure*} \centering \begin{minipage}{\textwidth} \centering \includegraphics[width = \textwidth]{s2d_figs/comp/luma.png} \end{minipage} \caption{Top: Mean luminance index $\bar{\ell}$ (defined in Eqn. \eqref{eq:lcs}) for the validation and external data. Bottom: The first, second, and third quartiles of each distribution. $\bar{\ell}$ describes the similarities in brightness between two images at small scales ($\sim 10 \unit{pix}$). The robustness of the method is indicated by the similarities in the validation and external distributions in the left-hand plot. In the right-hand plot, both reconstruction models increased $\bar{\ell}$ by a similar amount, indicating that, at small scales, the brightness increase provided by the two models was similar. This supports the conclusions drawn from the pseudo-flux magnitude in Fig. \ref{fig:FluxMag} and Table \ref{tab:FluxRat}. Note that unlike in Figures \ref{fig:FluxMag} and \ref{fig:SNR}, the quartiles can be used to determine statistical significance because $\ell$ calculations provide direct image-to-image comparisons.} \label{fig:LumId} \end{figure*} \begin{figure*} \centering \begin{minipage}{\textwidth} \centering \includegraphics[width = \textwidth]{s2d_figs/comp/cont.png} \end{minipage} \caption{Top: Mean contrast index $\bar{c}$ (defined in Eqn. \eqref{eq:lcs}) for the validation and external data. Bottom: The first, second, and third quartiles of each distribution. $\bar{c}$ describes the relative sharpness of two images at small scales ($\sim 10 \unit{pix}$). The robustness of the method is indicated by the similarities in the validation and external distributions in the left-hand plot.
In the right-hand plot, $\bar{c}$ was generally lower for the CAE reconstructions than for the CycleGAN reconstructions, implying that the CAE images were generally blurrier than the CycleGAN images. This confirms the qualitative observations about the images described in Section \ref{QualAnalysis} (see Fig. \ref{fig:TestCollage}). Note that unlike in Figures \ref{fig:FluxMag} and \ref{fig:SNR}, the quartiles can be used to determine statistical significance because $c$ calculations provide direct image-to-image comparisons.} \label{fig:ContId} \end{figure*} \begin{figure*} \centering \begin{minipage}{\textwidth} \centering \includegraphics[width = \textwidth]{s2d_figs/comp/struc.png} \end{minipage} \caption{Top: Mean cross-correlation index $\bar{s}$ (defined in Eqn. \eqref{eq:lcs}) for the validation and external data. Bottom: The first, second, and third quartiles of each distribution. $\bar{s}$ describes the similarities between the structure of two images at small scales ($\sim 10 \unit{pix}$), providing a measure of the faithfulness of the reconstruction. The robustness of the method is indicated by the similarities in the validation and external distributions in the left-hand plot. In the right-hand plot, $\bar{s}$ was generally lower for the CycleGAN reconstructions than for the CAE reconstructions, implying that the CAE architecture more accurately recreated small-scale details of the DES images, providing a more accurate reconstruction of the morphological properties of the image. Note that unlike in Figures \ref{fig:FluxMag} and \ref{fig:SNR}, the quartiles can be used to determine statistical significance because $s$ calculations provide direct image-to-image comparisons.} \label{fig:StrucId} \end{figure*} \begin{figure*} \centering \begin{minipage}{\textwidth} \centering \includegraphics[width = \textwidth]{s2d_figs/comp/mssim.png} \end{minipage} \caption{Top: Mean structural similarity index (defined in Eqn. 
\eqref{eq:MSSIM}) for the validation and external data. Bottom: The first, second, and third quartiles of each distribution. The MSSIM, which is the mean of the product of $\ell$, $c$, and $s$, provides a metric for the overall relative image quality. The robustness of the method is indicated by the similarities in the validation and external distributions in the left-hand plot. From the right-hand plot, we can see that the overall quality of the CAE images was similar to that of the DES images, while the quality of the CycleGAN reconstructions was further removed from that of the DES images. Note that unlike in Figures \ref{fig:FluxMag} and \ref{fig:SNR}, the quartiles can be used to determine statistical significance because MSSIM calculations provide direct image-to-image comparisons.} \label{fig:SSIM} \end{figure*} As shown in Section \ref{BaseProp}, the brightnesses and S/N of the reconstructions were greater than or not significantly different from those of the DES images, and in Section \ref{StrucAnalysis}, we showed that the brightness increase provided by the reconstruction has little effect on the radial profiles of the objects. Now, we will characterize how effective each reconstruction model is at amplifying the image signal, reducing background noise, improving image quality, and retaining the morphological information contained within the original image. We also highlight several notable images from the external dataset that show that CAE reconstructions may help remove image artifacts. The mean structural similarity index (MSSIM) \citep{SSIM} is a method used to compare image quality that takes into account differences in brightness, sharpness, and small-scale features. The MSSIM is defined by the product of the luminance index $\ell$, contrast index $c$, and cross-correlation index $s$.
For a pair of images $\vec{X}$ and $\vec{Y}$, where each respective entry $\vec{X}_{ij}$ and $\vec{Y}_{ij}$ is the pixel brightness $\beta_{ij}$ of pixel $P_{ij}$, let $\vec{x}_{ij}$ $\left(\vec{y}_{ij}\right)$ be an 11\by11 window centered around entry $\vec{X}_{ij}$ $\left(\vec{Y}_{ij}\right)$. After smoothing $\vec{x}_{ij}$ $\left(\vec{y}_{ij}\right)$ by an 11-tap Gaussian filter, define $\ell$, $c$, and $s$ as \begin{align} \label{eq:lcs} \ell\left(\vec{x}_{ij}, \vec{y}_{ij}\right) &= \frac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1} \nonumber\\ c\left(\vec{x}_{ij}, \vec{y}_{ij}\right) &= \frac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \\ s\left(\vec{x}_{ij}, \vec{y}_{ij}\right) &= \frac{2\sigma_{xy} + C_2}{2\sigma_x\sigma_y + C_2}. \nonumber \end{align} Then the structural similarity index (SSIM) can be calculated as \begin{align} \textnormal{SSIM}\left(\vec{x}_{ij}, \vec{y}_{ij}\right) &= \ell\left(\vec{x}_{ij}, \vec{y}_{ij}\right) c\left(\vec{x}_{ij}, \vec{y}_{ij}\right) s\left(\vec{x}_{ij}, \vec{y}_{ij}\right) \\ &= \frac{\left(2\mu_x\mu_y + C_1\right) \left(2\sigma_{xy} + C_2\right)}{\left(\mu_x^2 + \mu_y^2 + C_1\right) \left(\sigma_x^2 + \sigma_y^2 + C_2\right)}. \nonumber \end{align} Here, $\mu_x$ ($\mu_y$) is the mean of $\vec{x}_{ij}$ $\left(\vec{y}_{ij}\right)$, $\sigma_x^2$ $\left(\sigma_y^2\right)$ is the variance of $\vec{x}_{ij}$ $\left(\vec{y}_{ij}\right)$, $\sigma_{xy}$ is the covariance, and $C_1 = (0.01 R_D)^2$ and $C_2 = (0.03 R_D)^2$ are stabilization constants, where $R_D$ is the dynamic range of the image (in our case, $R_D = 1$). Then the MSSIM is defined by \begin{align} \label{eq:MSSIM} \textnormal{MSSIM} = \frac{1}{N_P^2}\sum_{i,j}^{N_P}\textnormal{SSIM}\left(\vec{x}_{i,j},\vec{y}_{i,j}\right) \end{align} and the mean luminance, contrast, and cross-correlation indices ($\bar{\ell}$, $\bar{c}$, and $\bar{s}$, respectively) are defined similarly.
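For reference, the per-window indices of Eq. \eqref{eq:lcs} can be sketched in NumPy for a single window pair. This is a minimal illustration rather than the implementation used here: the function name is our own, and the Gaussian pre-smoothing of the windows used in the full MSSIM is omitted for brevity.

```python
import numpy as np

def lcs_indices(x, y, dynamic_range=1.0):
    """Luminance (ell), contrast (c), and cross-correlation (s) indices
    for one pair of image windows x, y; SSIM is their product.
    Gaussian weighting of the windows is omitted in this sketch."""
    C1 = (0.01 * dynamic_range) ** 2   # stabilization constants
    C2 = (0.03 * dynamic_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    sigma_x, sigma_y = np.sqrt(var_x), np.sqrt(var_y)
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    ell = (2 * mu_x * mu_y + C1) / (mu_x**2 + mu_y**2 + C1)
    c = (2 * sigma_x * sigma_y + C2) / (var_x + var_y + C2)
    s = (2 * cov_xy + C2) / (2 * sigma_x * sigma_y + C2)
    return ell, c, s
```

For identical windows, all three indices (and hence the SSIM) equal 1. The full MSSIM, including Gaussian weighting, is provided by \texttt{scikit-image} via \texttt{skimage.metrics.structural\_similarity}.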
KDEs of histograms for $\bar{\ell}$, $\bar{c}$, $\bar{s}$, and MSSIM for the overlap and external data are shown in Figures \ref{fig:LumId}, \ref{fig:ContId}, \ref{fig:StrucId}, and \ref{fig:SSIM}, respectively. As the SDSS galaxies differ substantially in brightness and radii from their counterparts, it is not valid to use $\bar{\ell}$, $\bar{c}$, $\bar{s}$, and MSSIM as image quality metrics for DES/SDSS and reconstruction/SDSS image pairs. However, if the reconstruction process is robust, the distributions for DES/SDSS pairs and reconstruction/SDSS pairs should be consistent in the validation and external datasets. Hence, we will use reconstruction/DES and reconstruction/reconstruction measurements to quantify the reconstruction quality and reconstruction/SDSS measurements as metrics for robustness. {The mean luminance index $\bar{\ell}$ is a measure of the differences in the pixel-to-pixel brightness of two (smoothed) images. The reconstruction/DES distributions for $\bar{\ell}$ were similar in shape, and there was not a significant difference between their medians, indicating that they had similar brightness qualities to one another; this is consistent with the pseudo-flux magnitude results (Figure \ref{fig:FluxMag} and Table \ref{tab:FluxRat}). The brightness qualities of the reconstructions relative to their SDSS counterparts were extremely similar to one another in both the validation and external distributions, implying that both models were equally effective at increasing the image brightnesses.} {While the $\bar{\ell}$ values for the external dataset cannot be interpreted as measures of the image brightness qualities, they can be used to support the robustness of the reconstruction process. The shapes of the CAE/SDSS $\bar{\ell}$ distributions for the validation and external datasets were similar to one another, and there was not a significant difference between their medians; the same is true for CycleGAN/SDSS.
This implies that the brightness quality improvement was consistent for both the validation and external SDSS datasets, providing evidence for the robustness of this method.} {The mean contrast index $\bar{c}$ describes the average difference in smoothness between small cut-outs of image pairs. For the validation data, the CycleGAN reconstructions had a significantly higher contrast index than the CAE reconstructions, implying that the sharpness of the CycleGAN images was more consistent with that of the DES images. This confirms that the CAE reconstructions tended to smooth the images, leading to the blurriness seen in Figures \ref{fig:TestCollage} and \ref{fig:ExtCollage}. The robustness of the reconstructions can again be seen in the lack of a significant difference between the validation and external distributions for the reconstruction/SDSS $\bar{c}$ distributions. Note that the differences between the S/N for each image set likely contributed to the value of $\bar{c}$; however, the qualitative sharpness of the reconstructions is consistent with the conclusions drawn from $\bar{c}$.} {The mean cross-correlation index $\bar{s}$ is a measure of the deviations in the small-scale structure between two images; large values of $\bar{s}$ indicate that, after normalizing for the brightness and sharpness, the morphological features of the images at small scales are similar to (strongly correlated with) one another. The $\bar{s}$ distributions for the validation data indicate that the CAE reconstructions are significantly more closely correlated with their DES counterparts at small scales than the CycleGAN reconstructions. This implies that CAE reconstruction preserves more information at small scales than CycleGAN.} The combination of these quantities yields the MSSIM distributions seen in Fig. \ref{fig:SSIM}.
This metric indicates that the overall quality of the CAE images was comparable to that of the CycleGAN reconstructions; however, the breakdown in terms of $\bar{\ell}$, $\bar{c}$, and $\bar{s}$ suggests that the reconstruction methods provide differing benefits. Specifically, CycleGAN reconstructions are generally sharper than their CAE counterparts, while CAE reconstructions preserve more information at small scales in the image. \begin{figure*} \centering \begin{minipage}{\textwidth} \centering \includegraphics[width = \textwidth]{s2d_figs/NRem/nrem_ext.jpg} \end{minipage} \caption{A selection of several notable objects from the external dataset. In each of these images, it appears that the CAE reconstructions may have removed artifacts from the image. {The reconstructed objects may have been generated through inpainting.} \label{fig:NRemExt}} \end{figure*} \begin{figure*} \centering \begin{minipage}{\textwidth} \centering \includegraphics[width = \textwidth]{s2d_figs/NRem/nrem_test.jpg} \end{minipage} \caption{{A selection of several notable objects from the validation dataset. In each of these images, it appears that the CAE reconstructions may have removed large artifacts from the image. Note that both reconstructions may have used inpainting to generate the images in column 1, while the CAE reconstruction of the central object in column 2 appears to have removed the corrupted region of that object.}} \label{fig:NRemTest} \end{figure*} Finally, we would like to highlight several unique images from the external dataset; these are shown in Figure \ref{fig:NRemExt}. These images were found through visual inspection of images with the lowest reconstruction/SDSS $\bar{\ell}$, $\bar{c}$, $\bar{s}$, and/or MSSIM values in the external dataset. {Each image in Figure \ref{fig:NRemExt} is heavily corrupted by artifacts; however, the CAE reconstructions appear to have removed these artifacts at the cost of blurring the objects in the image. 
These results are consistent with studies of denoising autoencoders \citep{DenoisingAE}, which have proven effective at smoothing brightness/color variations, removing artifacts, and restoring corrupted images. As the base architecture of a denoising autoencoder is similar to that of our encoder/decoder pair, it is not surprising that the image reconstructions were effective at removing these artifacts. The CycleGAN reconstructions, however, failed to consistently remove these artifacts, though they did succeed in amplifying the brightness of these objects.} {Figure \ref{fig:NRemTest} shows several images from the validation dataset that contain artifacts. The images in row 1 were found due to their extreme reconstruction/SDSS $\bar{\ell}$, $\bar{c}$, $\bar{s}$, and MSSIM values; however, those in row 2 were found via manual inspection of the validation dataset. This was to be expected because the Stripe82 dataset is generally of higher quality than the external dataset.} {In row 1, it appears that the CAE reconstruction removed the artifact, albeit at the cost of blurring the central object. The artifact in row 2 consists of a blue streak passing through the upper-left edge of the central object. As in column 4 of Figure \ref{fig:NRemExt}, there is no signal in this region of the CAE reconstruction, implying that little or no inpainting was performed.} {As the validation data and training data were taken from the same population, it is likely that the training data had a similar incidence of corrupted images as the validation data. As a result, it is unlikely that either neural network was trained sufficiently to accurately extract the signal from the heavily corrupted images in Figure \ref{fig:NRemExt}, implying that objects recovered in these images likely resulted from inpainting.
While outside the scope of this work, the improvement in the quality of the images in Figure \ref{fig:NRemExt}, especially given the lack of training on corrupted images, warrants a more thorough analysis of the effectiveness of corrupted image reconstruction using our CAE architecture.} \section{Conclusions} \label{Conclusion} In this work, we demonstrated the viability of robust cross-survey galaxy image translation using neural networks and generative models. Using the pseudo-flux magnitude (Section \ref{BaseProp}) and mean luminance index $\bar{\ell}$ (Section \ref{ImQual}), we show that the average brightnesses of the reconstructions more closely match DES images than their SDSS source images while preserving the structural information contained within the source galaxy (Section \ref{StrucAnalysis}). {In Section \ref{BaseProp}, we also demonstrated that the reconstruction process improved the signal-to-noise ratio of the source images. The signal-to-noise ratio of the CycleGAN images closely correlated with that of the DES images, while the CAE images improved this quantity relative to the DES images; this behavior is expected because autoencoders have been shown to be effective at reducing the amount of noise in images \citep{DenoisingAE}.} Together, these imply that our method can be used to improve image brightness and signal strength using image-to-image translation. In Section \ref{ImQual}, we discuss the pros and cons of each reconstruction method using the mean contrast index $\bar{c}$ and cross-correlation index $\bar{s}$. We found that CycleGAN reconstructions were sharper, while CAE reconstructions more accurately reproduced the structure of DES galaxies at length scales on the order of several pixels at the cost of being slightly blurrier. Finally, we highlighted several instances in which the reconstructions appear to have removed large artifacts. 
{We find evidence for the robustness of our method by performing reconstructions on images from the SDSS catalog in the external region, which contains objects without a DES counterpart. Though these images were fainter and had lower S/N than images from the overlap region (Stripe82), the large- and small-scale statistics of these image reconstructions were similar to those in the overlap region, implying that the reconstruction process accurately created DES representations of these objects. However, there is the possibility that our model was overfit due to our choice to avoid factors that may impact the accuracy of the map between SDSS and DES images in the Stripe82 region.} {While this only constitutes an initial application, our results show that feature transfer learning shows promise as a method for false galaxy image generation. This has great implications for the analysis of astronomical survey data: assuming that there is a sufficiently large sample of corresponding SDSS and DES image pairs, one could improve the brightness and S/N of many images from the SDSS catalog, decreasing the amount of error and improving the statistical power of analyses. Additionally, this provides an important advantage over other generative models used to supplement survey data: while other methods generate false images that share the properties of the images in the data set of interest, feature-to-feature translation provides representations of observed galaxies, offering a way to extend both the size and the sky coverage of galaxy surveys.} The reconstruction pipeline we developed solely constitutes an initial exploration, but the efficiency and robustness of the reconstruction process shows promise as a method for generating or improving survey data. {While SDSS and DES data were used in this work, we expect that this may be applicable to other surveys, particularly for deeper surveys such as LSST \citep{LSST}.
All quantities calculated were derived solely from the mean of the (r, g, b) channel pixel values of survey images; however, we anticipate that similar methods could be used for the generation of false images with physical observables consistent with those of survey images.} In addition, our methodology could be expanded to enable cross-wavelength or band-to-band translation. A neural network could be trained with a feature set containing fewer bands than the target dataset, generating a map between each pair of bands in the training and target data. {The trained network could be used to supplement survey data by generating realistic reconstructions of image data in frequency bands not probed by that survey. We intend to explore these applications in future work using DES DR2 data, which contains more images and has a greater field depth than DES DR1 \citep{DESDR2}}. \section{Acknowledgments} \noindent This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1746047. M. Carrasco Kind has been supported by NSF Grant AST-1536171. \subsection*{Author contribution} B. Buncher: Data analysis, figure creation, writing, and editing. \\ A. N. Sharma: AI model creation, data collection, figure creation, writing, and editing. \\ M. Carrasco Kind: Oversight, data collection, writing, and editing. \subsection*{Software Used} This research made use of \texttt{matplotlib} \citep{matplotlib}, \texttt{numpy} \citep{numpy1,numpy2}, \texttt{scikit-image} \citep{skimage}, \texttt{SciPy} \citep{SciPy}, and \texttt{seaborn} \citep{seaborn}. This research made use of \texttt{Astropy},\footnote{http://www.astropy.org} a community-developed core Python package for Astronomy \citep{astropy1,astropy2}. This research made use of \texttt{Photutils}, an \texttt{Astropy} package for detection and photometry of astronomical sources \citep{photutils}.
\section*{Data Availability Statement} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras}
\section{Introduction} There are advantages in the use of high dimensional entangled systems in quantum information processing and communication. Such systems with dimensions larger than 2 allow for a higher information capacity \cite{walborn} and lead to increased security in quantum cryptography \cite{bp2000}. Quantum entanglement is an important resource in many quantum information protocols. However, it is fragile, since it decays through interaction with noisy environments \cite{konrad,tiersch}, such as atmospheric turbulence. The orbital angular momentum (OAM) states of photons are a suitable candidate for the implementation of high dimensional quantum systems for use in high dimensional quantum key distribution \cite{zeil2006,mafu} and long-range quantum communication \cite{capraro} through free-space. Unlike the polarization of light, which offers a two-level Hilbert space, the OAM of photons provides an infinite-dimensional Hilbert space. Fortuitously, photonic states produced in spontaneous parametric down-conversion (SPDC) are naturally entangled in their OAM degrees of freedom due to the conservation of OAM \cite{arnaut,franke,mair}. The decay of OAM entanglement in a qubit pair evolving in turbulence has been studied both theoretically \cite{sr,qturb4,qturb3,ipe, toddbrun,leonhard} and experimentally \cite{pors,malik,oamturb,qkdturb}. These studies show that qubit OAM entanglement decays more slowly in weak scintillation for modes with higher OAM-values. While the study of the turbulence-induced decay of OAM entanglement in qubit pairs can help one to understand the basic behavior of OAM entanglement in turbulence, it offers little practical benefit over the use of the polarization states of single photons, which are less affected by turbulence. The point of using the OAM states of light is to have high dimensional quantum states. Therefore, it makes sense that one should consider high dimensional states in the OAM basis for these studies. 
The effect of atmospheric turbulence on such high dimensional OAM entanglement has received little attention. The only case where the evolution of OAM entanglement for a high dimensional quantum state has been considered is a theoretical investigation involving qutrits \cite{bruenner}. It appears that the effects of turbulence on the OAM entanglement for high dimensional quantum states have not yet been considered experimentally. For this reason, we report here on the theoretical and experimental results of a study into the effects of atmospheric turbulence on high dimensional OAM entanglement. Using SPDC, we prepare photon pairs that are entangled in their OAM degrees of freedom. One of these photons is propagated through turbulence, while allowing the other to propagate undisturbed through free-space (without turbulence). The turbulence is implemented as a phase-only distortion on a single phase screen, using a spatial light modulator (SLM). By implication, we use the single phase screen (SPS) approximation of the atmospheric scintillation process \cite{paterson}, which is only valid under weak scintillation conditions \cite{turbsim}. In spite of this restriction, the SPS model is still currently the most widely used model for the study of photonic quantum states propagating in turbulence \cite{sr,qturb4,qturb3, toddbrun,leonhard,pors,malik,oamturb}. Our results show a good agreement between theory and experiment. There are various aspects that pose challenges to the experimental study of the propagation of high dimensional OAM entangled states through turbulence. To determine the high dimensional quantum state, one needs to perform a quantum state tomography, which requires a number of individual measurements that increases rapidly with the increasing number of dimensions \cite{tomo}. 
Moreover, due to the randomness of atmospheric turbulence, one has to repeat these measurements a reasonable number of times and average the results to obtain a meaningful statistical description of the evolution of the quantum state. In other words, an investigation of the decay of high dimensional entanglement in turbulence is a very time-consuming process. In our experiment, we consider qutrits --- three-dimensional states --- and perform a full state tomography for each output state obtained from different realizations and different strengths of the turbulence. While three-dimensional states allow us to observe the effects of high dimensions, we are able to perform the experiments over a reasonable period. An important purpose of this work is to show that we are able to obtain a good agreement between the experimental results and the theoretical predictions that one can compute using the SPS approximation. Here, we use the Laguerre-Gaussian (LG) basis as the measurement basis, because it enables us to obtain relatively simple analytical expressions. For accurate comparison, the experimental measurements need to implement precisely that which is assumed in the calculations. As a result, we measure in the exact LG basis and not in the helical basis where the amplitude variation of the mode is ignored, apart from the Gaussian envelope that is imposed by the overlap in the optical fibre. For this reason, we use complex amplitude modulation \cite{arrizon1} on the SLMs to perform the projective measurements required for the quantum state tomography. The paper is organized as follows. Various theoretical aspects are discussed in Sec.~\ref{agter}. The experimental setup is discussed in Sec.~\ref{opstel}, followed by a discussion of the results in Sec.~\ref{resdis}. Conclusions are provided in Sec.~\ref{concl}. \section{Theory} \label{agter} \subsection{SPS model} In the experiment, we pass one of the photons through turbulence that is simulated by a random phase modulation using an SLM. 
Hence, the appropriate theory with which these experimental results are to be compared is the SPS approximation. For the biphoton case with only one photon propagating through turbulence, the elements of the output density matrix in the SPS approximation are given by \cite{paterson,lindb} \begin{eqnarray} \rho_{mnpq} & = & \int E_m^*({\bf x}_1) E_p^*({\bf x}_2) E_n({\bf x}_3) E_q({\bf x}_4) \nonumber \\ & & \times \psi({\bf x}_1,{\bf x}_2) \psi^*({\bf x}_3,{\bf x}_4) \exp\left[-\frac{1}{2} D_{\theta} (\Delta x) \right] \nonumber \\ & & \times {\rm d}^2 x_1\ {\rm d}^2 x_2\ {\rm d}^2 x_3\ {\rm d}^2 x_4 , \label{spsint} \end{eqnarray} where $\Delta x= |{\bf x}_1-{\bf x}_3|$ and the input field $\psi({\bf x}_1,{\bf x}_2)$ is the biphoton state obtained from the SPDC process (simply called the SPDC state). The output modes, in terms of which the density matrix is defined, are given by $E_n({\bf x}) = \braket{{\bf x}}{n}$, where $\ket{n}$ represents the chosen basis for the output density matrix. The turbulence is represented by the phase structure function $D_{\theta}(\cdot)$. The photon in the $A$-system ($B$-system) is associated with the coordinate vectors ${\bf x}_1$ and ${\bf x}_3$ (${\bf x}_2$ and ${\bf x}_4$) and with the indices $m$ and $n$ ($p$ and $q$). \subsection{Structure function} In the Kolmogorov theory \cite{scintbook}, the phase structure function can be expressed as \begin{equation} D_{\theta}(x) = 6.88 \left(\frac{x}{r_0}\right)^{5/3} , \label{strukt} \end{equation} in terms of the Fried parameter \cite{fried}, \begin{equation} r_0 = 0.185 \left(\frac{\lambda^2}{C_n^2 z}\right)^{3/5} . \label{fried} \end{equation} Here $C_n^2$ is the structure constant and $\lambda$ is the wavelength of the down-converted photons. For degenerate SPDC, $\lambda=2\lambda_{\rm p}$, where $\lambda_{\rm p}$ is the pump wavelength. Due to the power of $5/3$ in Eq.~(\ref{strukt}), the integral in Eq.~(\ref{spsint}) is not easy to evaluate. 
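Before turning to that approximation, note that Eqs.~(\ref{strukt}) and (\ref{fried}) themselves are straightforward to evaluate numerically. The short sketch below is ours, not part of the original analysis; the structure constant and path length are hypothetical values, while 710~nm is the degenerate down-converted wavelength (twice the 355~nm pump used in the experiment):

```python
def fried_parameter(cn2, z, lam):
    """Fried parameter r0 = 0.185 (lam^2 / (Cn^2 z))^(3/5)."""
    return 0.185 * (lam**2 / (cn2 * z))**(3.0 / 5.0)

def structure_kolmogorov(x, r0):
    """Kolmogorov phase structure function D_theta(x) = 6.88 (x / r0)^(5/3)."""
    return 6.88 * (x / r0)**(5.0 / 3.0)

# Hypothetical turbulence parameters: Cn^2 = 1e-14 m^(-2/3) over a 1 km path.
r0 = fried_parameter(cn2=1e-14, z=1.0e3, lam=710e-9)
d_half_cm = structure_kolmogorov(0.005, r0)   # structure function at 5 mm separation
```

Stronger turbulence (larger $C_n^2 z$) shrinks $r_0$ and therefore increases $D_{\theta}$ at fixed separation, consistent with the $5/3$ power law.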
Therefore, we use the quadratic structure function approximation \cite{leader}, which implies that one can replace $x^{5/3}\rightarrow x^2$. The resulting quadratic structure function then takes the form \begin{equation} D'_{\theta}(x) = 6.88 \left(\frac{x^2}{w_{\rm p}^{1/3}r_0^{5/3}}\right) . \label{struktq} \end{equation} The extra factor of $w_{\rm p}^{-1/3}$ is necessary to retain a dimensionless argument in the exponential function in Eq.~(\ref{spsint}). Here $w_{\rm p}$ is the radius of the pump beam waist. \subsection{Input (SPDC) state} The input state in the SPS calculation is the SPDC state. For the purpose of the theoretical calculation, one can represent the SPDC state as the product of the pump profile in the Fourier domain and the phase matching function that governs the SPDC process in the nonlinear crystal. The pump is assumed to be a Gaussian beam. The details of the phase matching function are not important, because, as it turns out, the experimental conditions are such that the phase matching function plays a diminished role. (See the discussion of the thin-crystal limit below.) For that reason, we follow \cite{eberly} and use the (analytically more tractable) Gaussian function model to represent the phase matching function. 
The resulting SPDC input state in the Fourier domain can therefore be represented by \begin{eqnarray} \Psi'_{\rm spdc}({\bf a}_{\rm s},{\bf a}_{\rm i}) & = & \braket{{\bf a}_{\rm s},{\bf a}_{\rm i}}{\Psi_{\rm spdc}} \nonumber \\ & = & {\cal P'} \exp \left(-\pi^2 w_{\rm p}^2 |{\bf a}_{\rm s}+{\bf a}_{\rm i}|^2\right) \nonumber \\ & & \times \exp \left(-\frac{1}{2} \pi^2 w_{\rm p}^2 \beta |{\bf a}_{\rm s}-{\bf a}_{\rm i}|^2 \right) , \label{spdcin} \end{eqnarray} where ${\cal P'}$ is a normalization constant, ${\bf a}$ is a two-dimensional spatial frequency vector (related to the transverse propagation vector by ${\bf k}_{\perp} = 2\pi{\bf a}$), and the subscripts $s$ and $i$ denote the two (`signal' and `idler') down-converted photons, respectively. Furthermore, we defined the dimensionless parameter \begin{equation} \beta = \frac{n_{\rm o}L\lambda_{\rm p}}{\pi w_{\rm p}^2} = \frac{n_{\rm o}L}{z_{Rp}} , \label{beta} \end{equation} where $L$ and $n_{\rm o}$ are the length and the ordinary refractive index of the nonlinear crystal, respectively, and $z_{Rp}$ is the Rayleigh range of the pump beam ($\pi w_{\rm p}^2/\lambda_{\rm p}$). In most experiments, the Rayleigh range of the pump beam is several orders of magnitude larger than the thickness of the nonlinear crystal, $L\ll z_{Rp}$. This leads to the so-called {\em thin crystal limit}, where the phase matching function is only evaluated at the origin. It implies that one can set $\beta=0$ and drop the phase matching function in the calculations. However, the phase matching function may help to regularize the integrals and can be removed at the end by taking the limit $\beta\rightarrow 0$. 
The inverse Fourier transform of the SPDC state, given in Eq.~(\ref{spdcin}), is \begin{eqnarray} \Psi_{\rm spdc}({\bf x}_{\rm s},{\bf x}_{\rm i}) & = & {\cal F}^{-1} \left\{ \Psi'_{\rm spdc}({\bf a}_{\rm s},{\bf a}_{\rm i}) \right\} \nonumber \\ & = & {\cal P} \exp \left(-\frac{|{\bf x}_{\rm s}+{\bf x}_{\rm i}|^2}{4w_{\rm p}^2} -\frac{|{\bf x}_{\rm s}-{\bf x}_{\rm i}|^2}{2 w_{\rm p}^2\beta} \right) . \nonumber \\ \label{spdcinx} \end{eqnarray} The normalization constant ${\cal P}$ contains ${\cal P}'$ and includes additional dimensional parameters. These normalization constants will eventually drop out of the expression when the projected state is renormalized. \subsection{Effective pump width} The coincidence counts that are produced in the experiment can be predicted in the thin-crystal limit by the following three-way overlap integral \begin{equation} C_{\ell} \propto \int m_p({\bf x}) m_s^*({\bf x}) m_i^*({\bf x})\ {\rm d}^2x , \label{oorvl} \end{equation} where $m_{p,s,i}({\bf x})$ represent the mode profiles of the pump, signal and idler beams. The pump is a Gaussian beam, with a particular beam radius $w_{\rm p}'$. However, one needs to take into account, in the overlap calculation, the Gaussian overlap functions coming from the coupling of the light into the single-mode fibres (SMFs). Due to the way in which the measurement basis is encoded on the SLMs, one cannot incorporate the Gaussians from the SMFs into the measurement mode profiles. Therefore, we combine them with the pump profile to produce an effective (Gaussian) pump profile, with an effective pump mode size, given by \begin{equation} \frac{1}{w_{\rm p}^2} = \frac{1}{w_{\rm p}'^2} + \frac{2}{w_{\rm SMF}^2} , \label{pompwyd} \end{equation} where $w_{\rm SMF}$ is the radius of the SMF Gaussian functions. Henceforth, $w_{\rm p}$ represents the effective pump mode size instead of the original pump mode size. \subsection{Output (LG) modal basis} The basis for the output density matrix is chosen to be the LG modes. 
In terms of normalized coordinates, these modes are given by \begin{eqnarray} E_{p,\ell}(u,v,t) & = & {\cal N} \frac{(u \pm iv)^{|\ell|} (1+it)^p}{(1-it)^{p+|\ell|+1}} \exp \left( \frac{u^2+v^2}{it-1} \right) \nonumber \\ & & \times L_p^{|\ell|} \left[ \frac{2(u^2+v^2)}{1+t^2} \right] , \label{lgm} \end{eqnarray} where $p$ and $\ell$ are the radial index and the azimuthal index (the $\pm$ sign is given by the sign of $\ell$), respectively, $L_p^{|\ell|}(\cdot)$ represents the associated Laguerre polynomials and ${\cal N}$ is a normalization constant given by \begin{equation} {\cal N} = \left[ \frac{2^{|\ell|+1}p!}{\pi (p+|\ell|)!} \right]^{1/2} . \label{lgn} \end{equation} The normalized coordinates are given by $u=x/w_0$, $v=y/w_0$ and $t=z/z_R=z\lambda/(\pi w_0^2)$, in terms of the waist radius $w_0$ and the Rayleigh range $z_R$ of the output basis. The LG modes are OAM eigenstates \cite{allen}. This implies that an LG beam as a whole (and every photon in it) has a well-defined OAM. The amount of OAM in the beam is proportional to the azimuthal index $\ell$. OAM is conserved in the SPDC process \cite{arnaut,franke,mair}. This means that the sum of OAM of the signal and the idler photons equals the OAM of the pump photon. As a result, the SPDC state is entangled in terms of OAM. Hence, without turbulence, the output density matrix in the LG basis would be that of a highly entangled biphoton state. We only use LG modes with $p=0$ for the output basis. Since we use the SPS approximation, we also set $z=t=0$. The output space is restricted to a three-dimensional basis for each photon. The basis elements are chosen symmetrically to be $\{E_{0,{-\ell}},E_{0,0},E_{0,\ell}\}$. For our experiment, we consider the three cases, $\ell=1,2,3$. To expedite the calculations, we use a generating function for the LG modes \cite{ipe,pindex,noncol}. 
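As a numerical sanity check (ours, not part of the paper's derivation), one can verify that differentiating such a generating function with respect to its generating parameter reproduces the $p=0$, $t=0$ LG profiles of Eq.~(\ref{lgm}) up to the normalization ${\cal N}$. The $\mu$-derivative at zero is evaluated here via the Cauchy integral formula on a circle; the waist and evaluation point are arbitrary:

```python
import numpy as np
from math import factorial

def deriv_at_zero(f, ell, radius=1.0, n=64):
    """ell-th derivative of an analytic f at 0 via the Cauchy integral formula,
    f^(ell)(0) = ell!/(2 pi i) * contour integral of f(mu)/mu^(ell+1) dmu,
    discretized with the (spectrally accurate) trapezoid rule on a circle."""
    theta = 2.0 * np.pi * np.arange(n) / n
    mu = radius * np.exp(1j * theta)
    return factorial(ell) * np.mean(f(mu) / mu**ell)

w0 = 1.3          # arbitrary waist (hypothetical value)
x, y = 0.3, -0.2  # arbitrary transverse evaluation point

def G(mu):
    """Generating function for the p = 0, t = 0 LG modes (positive-ell branch)."""
    return np.exp((x + 1j * y) * mu / w0 - (x**2 + y**2) / w0**2) / w0

# Closed form (up to the normalization N): (u + i v)^ell exp(-(u^2+v^2)) / w0
u, v = x / w0, y / w0
for ell in range(4):
    closed_form = (u + 1j * v)**ell * np.exp(-(u**2 + v**2)) / w0
    assert abs(deriv_at_zero(G, ell) - closed_form) < 1e-12
```

Because the generating function is entire in $\mu$, the trapezoid rule on the contour converges to machine precision with only a few dozen samples.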
For $p=0$ and $z=t=0$, the generating function is given by \begin{equation} {\cal G}_{\pm} = \frac{1}{w_0} \exp \left[ \frac{(x\pm i y) \mu}{w_0} - \frac{x^2+y^2}{w_0^2} \right] , \label{genlg} \end{equation} where $\mu$ is the generating parameter for the azimuthal index and the $\pm$ sign in the exponent is determined by the sign of $\ell$. To generate a particular mode, one uses \begin{equation} E_{0,\ell}({\bf x}) = {\cal N} \left. \partial_{\mu}^{|\ell|} {\cal G}_{\pm} \right|_{\mu=0} , \label{simodes} \end{equation} where ${\cal N}$ is given in Eq.~(\ref{lgn}) with $p=0$. Without turbulence, the (pure) state that one would obtain in the output within the ($3\times 3$)-dimensional space in which we measure can be expressed as \begin{equation} \ket{\Psi} = {\cal A}_0 \ket{0}_A \ket{0}_B + {\cal A}_{|\ell|} \left( \ket{\ell}_A \ket{{-\ell}}_B + \ket{{-\ell}}_A \ket{\ell}_B \right) , \label{suiwer} \end{equation} where the coefficients ${\cal A}_0$ and ${\cal A}_{|\ell|}$ are determined by the OAM spectrum of the SPDC state. For a maximally entangled state ${\cal A}_0={\cal A}_{|\ell|}=1/\sqrt{3}$. However, for the SPDC state we generally have ${\cal A}_0>{\cal A}_{|\ell|}$, which implies that the initial state is less than maximally entangled. Nevertheless, with appropriate experimental conditions, the initial state can be close to maximally entangled. \subsection{Generating function for density matrix elements} Substituting Eqs.~(\ref{struktq}), (\ref{spdcinx}) and (\ref{genlg}) into Eq.~(\ref{spsint}), we evaluate the eight integrals. One then imposes the thin crystal limit by setting $\beta=0$. The result is a generating function for the elements of the density matrix, given by \begin{eqnarray} {\cal G}_{\rho} & = & M_0 M_1 \exp\left[ (M_0-M_1) \left(S^{(+)}_{13}\mu_1\mu_3 \right. \right. \nonumber \\ & & \left. + S^{(+)}_{14}\mu_1\mu_4 + S^{(+)}_{23}\mu_2\mu_3 + S^{(+)}_{24}\mu_2\mu_4\right) \nonumber \\ & & \left. 
+(M_0+M_1) \left(S^{(-)}_{12}\mu_1\mu_2 + S^{(-)}_{34}\mu_3\mu_4\right)\right] , \label{elmgen} \end{eqnarray} where we neglect an overall constant, $\mu_1,\ldots,\mu_4$ are four generating parameters, respectively associated with the four indices $m,n,p,q$ in Eq.~(\ref{spsint}) and where \begin{eqnarray} M_0 & = & \frac{1}{2(\alpha+2)} \\ M_1 & = & \frac{1}{2(\alpha+2+\alpha\xi)} \\ S^{(\pm)}_{mn} & = & \left\{\begin{array}{cl} 1 & {\rm for} ~~ {\rm sign}(\ell_m) = \pm {\rm sign}(\ell_n) \\ 0 & {\rm otherwise} \\ \end{array} \right. , \end{eqnarray} with \begin{eqnarray} \alpha & = & \frac{w_0^2}{w_{\rm p}^2} , \label{alphadef} \\ \xi & = & 6.88 \left(\frac{w_{\rm p}}{r_0}\right)^{5/3} . \end{eqnarray} To generate a particular matrix element, using Eq.~(\ref{elmgen}), one needs to compute Eq.~(\ref{simodes}) for each of the four $\ell$-values that are associated with $m,n,p,q$. \subsection{Negativity} There is currently no known entanglement measure for high dimensional states that is easy to compute and gives the exact amount of entanglement in the state. The concurrence \cite{wootters}, which is relatively easy to calculate for qubits, is much more complicated to compute when it is generalized to arbitrary dimensions. Nevertheless, one needs to quantify the high dimensional entanglement in order to understand the effects of turbulence on high dimensional OAM entangled states. Such an understanding is necessary to enable the successful implementation of a free-space quantum communication channel with high dimensional OAM states. In our experiment, we use the negativity to quantify the entanglement. The negativity is defined as \begin{equation} {\cal E} = \frac{1}{2} \sum_n (|\lambda_n|-\lambda_n) , \end{equation} where $\lambda_n$ denotes the eigenvalues of the partial transpose of the density matrix. 
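Numerically, this definition amounts to only a few lines. The sketch below (ours, not from the paper) takes the partial transpose over the second subsystem by index reshuffling and sums the magnitudes of the negative eigenvalues:

```python
import numpy as np

def negativity(rho, dims):
    """Negativity E = (1/2) sum_n (|lambda_n| - lambda_n), with lambda_n the
    eigenvalues of the partial transpose of rho over the second subsystem."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    # Partial transpose over B: rho'[(i,j),(k,l)] = rho[(i,l),(k,j)]
    rho_pt = r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)
    lam = np.linalg.eigvalsh(rho_pt)  # PT of a Hermitian matrix is Hermitian
    return 0.5 * np.sum(np.abs(lam) - lam)

# Maximally entangled qutrit pair: (|00> + |11> + |22>)/sqrt(3), negativity (d-1)/2 = 1
d = 3
psi = np.eye(d).reshape(-1) / np.sqrt(d)
rho = np.outer(psi, psi.conj())
```

For a maximally entangled pair in dimension $d$ the negativity is $(d-1)/2$, so the qutrit value of 1 sets the scale for the experimental curves reported later.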
To compute the partial transpose of a bipartite density matrix, one exchanges the rows and columns for one of the two partites, while leaving those of the other one unchanged. The expressions for the negativity that we obtained from our theoretical calculations for $\ell=1,2,3$ are provided in Appendix \ref{negexp}. The curves of the negativity obtained from these theoretical calculations are presented together with the experimental results in Sec.~\ref{resdis}. \section{Experimental setup} \label{opstel} The experimental setup is shown in Fig.~\ref{setup}. A mode-locked laser source with a wavelength of 355~nm, an average power of 350~mW and a repetition rate of 80~MHz pumps a 3~mm-thick type I BBO crystal to produce noncollinear, degenerate photon pairs via SPDC. A small noncollinear angle ($\sim 3$ degrees between signal and idler beams) is used to improve the OAM bandwidth \cite{romero,noncol}. The pump beam has a radius of 0.24~mm at the crystal. The plane of the crystal is imaged onto SLMs in the signal and idler beams, respectively, with a magnification of $\times 4$ (4-f system with $f_{1} = 100$~mm and $f_{2} = 400$~mm not shown). The SLM planes are re-imaged with a demagnification factor of $\times 375$ (4-f system with $f_{3} = 750$~mm and $f_{4} = 2$~mm not shown) onto single-mode fibres (SMFs). The radii of the backprojected beams emitted from the SMFs onto the crystal plane are $\sim 0.26$~mm, giving an effective pump mode size of 0.15~mm. The down-converted light beams pass through 10~nm bandwidth interference filters (IF) before coupling into the SMFs. Avalanche photo diodes (APDs) at the ends of the SMFs are used to register the photon pairs with the aid of a coincidence counter (CC). The measured coincidence counts are accumulated over a 2~s integration time, with a gating time of 12.5~ns (based on the repetition rate). 
Projective measurements are performed with the aid of the SLMs, by selecting particular pairs of LG modes (and superpositions of LG modes) for detection. We employ techniques for complex amplitude modulation \cite{arrizon1} on the phase-only SLMs to ensure that the modulation involves the exact LG mode functions and not only their phase functions. The size of the modes on the SLM is 0.45~mm, which gives $\alpha=0.59$. The atmospheric turbulence is simulated in the experiment by adding a random phase fluctuation to the encoded modal basis functions on one of the SLMs. This random phase function is computed with \cite{knepp,mf1,dainty} \begin{equation} \theta({\bf x}) = \frac{1}{\Delta} {\cal F}^{-1} \left\{\chi({\bf a}) \left[\Phi_{\theta}({\bf a}) \right]^{1/2} \right\} , \end{equation} where ${\cal F}^{-1}\{\cdot\}$ is the two-dimensional inverse Fourier transform, $\Delta$ is the sample spacing in the frequency domain and $\chi({\bf a})$ is a frequency domain delta-correlated zero-mean Gaussian random complex function. If we assume that $\theta({\bf x})$ is real-valued, then $\chi^*({\bf a})=\chi(-{\bf a})$. However, by allowing $\theta({\bf x})$ to be complex, one obtains two phase functions --- the real and the imaginary parts of $\theta({\bf x})$ --- with each calculation. \begin{figure}[th] \includegraphics{opstspdc.eps} \caption{Experimental setup used to perform high dimensional quantum state tomography on the quantum state after passing through SPS turbulence.} \label{setup} \end{figure} The phase power spectral density $\Phi_{\theta}$ is related to the refractive index power spectral density $\Phi_n$ through \begin{equation} \Phi_\theta({\bf a}) = 2\pi k^2 z \Phi_n(2\pi{\bf a},0) , \end{equation} where $k=2\pi/\lambda$ is the wavenumber (not to be confused with $|{\bf k}|$ below). 
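As an illustrative sketch of this recipe (ours, not the authors' code), the following generates a pair of phase screens with the FFT method. Two details are assumptions of this sketch: the phase PSD is written directly in terms of the Fried parameter as $\Phi_{\theta}(f) = 0.023\, r_0^{-5/3} f^{-11/3}$ (the standard Kolmogorov reduction), and the overall discrete normalization follows one common convention, since conventions differ between references:

```python
import numpy as np

def kolmogorov_phase_screens(n, dx, r0, seed=0):
    """Two random phase screens (the real and imaginary parts of one complex
    screen), via a discrete approximation of theta = (1/Delta) F^{-1}{chi sqrt(Phi_theta)}."""
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * dx)                  # frequency-domain sample spacing Delta
    f = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy)
    fr[0, 0] = np.inf                    # suppress the undefined piston (zero-frequency) term
    # Kolmogorov phase PSD in terms of the Fried parameter (f in cycles per metre)
    psd = 0.023 * r0**(-5.0 / 3.0) * fr**(-11.0 / 3.0)
    chi = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    theta = n**2 * df * np.fft.ifft2(chi * np.sqrt(psd))
    return theta.real, theta.imag        # each part is a valid screen
```

Each call yields the two screens mentioned above; since the screen scales linearly with $\sqrt{\Phi_{\theta}}$, its amplitude grows as $r_0^{-5/6}$ when the turbulence strengthens.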
We use the refractive index power spectral density in Kolmogorov theory, given by \cite{kolmog,scintbook} \begin{equation} \Phi_n({\bf k}) = 0.033~C_n^2 |{\bf k}|^{-11/3} , \end{equation} to calculate the phase screen. Subgrid sample points are added to the Fourier domain representation of the phase function to ensure that the calculated random phase functions can reproduce the Kolmogorov structure function reliably \cite{dainty}. \section{Results and discussion} \label{resdis} We considered values of $W=w_0/r_0$, representing the scintillation strength of the random phase function, in the range 0 to 1.5. For each value of $W$, we computed 25 different sets of phase functions, corresponding to different realizations of the simulated turbulent medium. A quantum state tomography \cite{thew} is performed for each realization to reconstruct the bipartite qutrit density matrix. The final density matrix representing the state of the two photons is calculated, for each value of $W$, by averaging the density matrices obtained from the 25 realizations. From the averaged density matrices, we then compute the negativity. The results are shown in Fig.~\ref{negqut}. One can see that the negativity decreases gradually with increasing $W$. When only one of the two photons propagates through turbulence, the theoretical value of the negativity never reaches zero. We see that the experimentally obtained negativities follow the same trend, at least up to $W=1.5$. The initial values for the negativity in the three graphs decrease gradually as the value of $|\ell|$ increases. This is an indication of the OAM spectrum that is produced in the SPDC process and measured in terms of the chosen measurement basis. It also depends on the value of $\alpha$. \begin{figure}[ht] \includegraphics{qutritso.eps} \caption{The negativity is shown as a function of $W$ for $\ell=1,2,3$, respectively. The solid lines are theoretical curves and the points are experimental results. 
The error bars indicate the statistical error due to the Poisson photon statistics.} \label{negqut} \end{figure} For the case when $\ell=1$, we produced graphic representations of the three averaged density matrices, for which $W=0, 0.68, 1.5$, respectively. These are shown in Fig.~\ref{digt}. The density matrices for pairs of entangled qutrits are $9\times 9$ matrices. The magnitudes of the central (highest) elements are $0.45$, $0.36$ and $0.24$ for the three respective cases shown in Fig.~\ref{digt}. In the case where $W=0$, one can see the input state having a high purity. The varying heights (magnitudes of the elements) indicate that the input state is not maximally entangled, due to the modal spectrum that is produced in the SPDC process. For $W=0.68$, we see that other elements in the density matrix start to grow at the cost of the elements representing the original input state. Finally, for $W=1.5$, one observes that the other elements in the matrix, in particular those on the diagonal, start to dominate over the elements of the original input state. \begin{figure}[ht] \includegraphics{compell.eps} \caption{Comparison of the theoretical negativity curves as a function of $W$.} \label{thneg} \end{figure} The three theoretical curves from the three graphs in Fig.~\ref{negqut} are shown together in Fig.~\ref{thneg}. One observes that, contrary to the case with qubits \cite{sr}, a higher value of $|\ell|$ does not give a better performance during propagation through turbulence. In fact, the curves cross each other at particular values of $W$. The original trend, where higher values of $|\ell|$ performed better in turbulence, applied to qubits (Bell states) within the SPS approximation (weak scintillation). It has been shown previously that this trend does not apply in strong scintillation \cite{turbsim}. Here, it is shown that the trend also does not apply for higher dimensional states in weak scintillation. 
As a result, one would not expect to see the trend for higher dimensional states in strong scintillation. \section{Conclusion} \label{concl} A quantum state that is produced by type I SPDC is entangled in its spatial degrees of freedom. This entanglement manifests strongly in the OAM degrees of freedom, because the Schmidt basis of this state is an OAM basis. Here, we study the evolution of this OAM entangled state through turbulence, within the weak scintillation conditions appropriate for an SPS approximation. The study includes both theoretical predictions, based on calculations using the SPS model, and experimental work, using an SLM to modulate one of the two entangled photons with a random phase function to simulate the turbulence. \begin{figure}[ht] \includegraphics{wees2.eps} \caption{Graphic representations of the density matrices for $W=0, 0.68, 1.5$, respectively. The diagonal elements of the density matrices are horizontally arranged. } \label{digt} \end{figure} Our experimental results for the evolution of high dimensional quantum states in an OAM basis show that the amount of entanglement between two qutrits does not decay more slowly when higher $|\ell|$-values are considered. This is in contrast to the situation that was found for qubits in weak scintillation, namely that the entanglement between two qubits evolving in turbulence is more robust when higher $|\ell|$-values are considered. Comparing the results that we obtain here for the qutrits with those that were previously obtained for the qubits \cite{oamturb} under the same conditions, we see that the entanglement for the qutrits decays more quickly as a function of $W$ than the entanglement for the qubits. The benefit of using higher dimensional states (better security and higher information capacity) may therefore be offset by a poorer performance in turbulence. This would play an important role in the design of a free-space QKD system. 
\vskip 10 mm \section*{Acknowledgements} This work was done with funding support from the CSIR and NRF.
\section{Introduction} \label{intro} Simulation of dynamical systems has become an essential part of the development of science to study complex physical phenomena. However, as the ever increasing need for accuracy has led to ever larger dimensional dynamical systems, this increased dimension often makes the desired numerical simulations prohibitively expensive to perform. Model order reduction (MOR) is one remedy for this predicament. MOR tackles this issue by constructing a much lower dimensional representation of the corresponding full-order dynamical system, which is cheap to simulate yet provides high fidelity, i.e., a good approximation to the original quantity of interest. In many applications such as optimization, design, control, uncertainty quantification, and inverse problems, the dynamics of the system are defined by a set of parameters that describe initial conditions, material properties, etc. Since carrying out model reduction for every parameter value is not computationally feasible, the goal in the parameterized setting is to construct a parametric reduced model that can approximate one or more quantities of interest well for the whole parameter range of interest. This leads to the parametric model reduction framework. For more specific details on both parametric and nonparametric model reduction, we refer the reader to \cite{antoulas2001asurvey,baur2014model,antoulas2005approximation,benner2015survey,benner2017model,hesthaven2016certified} and the references therein. 
In this paper, we will focus on large-scale bilinear systems parametrized with the parameter vector ${\bf p} \in \mathbb{R}^\nu$ and represented in state-space form \begin{align} \left\{ \begin{array}{l} \label{binon} {\bf E} ( {\bf p} ) \dot{ {\bf x} } (t;{\bf p}) = \displaystyle {\bf A} ({\bf p}) {\bf x} (t;{\bf p}) + \sum_{j=1}^m {\bf N}_j ({\bf p}) {\bf x} (t;{\bf p}) u_j (t) + {\bf B} ({\bf p}) {\bf u} (t), \\[1ex] {\bf y} (t;{\bf p}) = {\bf C} ({\bf p}) {\bf x} (t;{\bf p}), \end{array} \right. \end{align} where ${\bf x}(t;{\bf p}) \in \mathbb{R}^n$, ${\bf y}(t;{\bf p}) \in \mathbb{R}^\ell$, and ${\bf u}(t) = [u_1(t),~u_2(t),\ldots,~u_m(t)]^\top\in \mathbb{R}^m$ denote the states, outputs (measurements/quantities of interest), and inputs (excitation/forcing) of the bilinear dynamical system, respectively. Thus, the corresponding state-matrices have the dimensions ${\bf E}({\bf p}),{\bf A}({\bf p}), {\bf N}_j({\bf p}) \in \mathbb{R}^{n\times n}$, for $j=1,\dots m$, ${\bf B}({\bf p}) \in \mathbb{R}^{n\times m}$, and ${\bf C}({\bf p}) \in \mathbb{R}^{\ell\times n}$. In this paper, we assume that the matrix ${\bf E}({\bf p})$ is nonsingular for every parameter value ${\bf p} \in \mathbb{R}^\nu$. Bilinear systems of the form \eqref{binon} appear in a variety of applications such as the study of biological species and nuclear fission, are used in the context of stochastic control problems, and frequently appear in modeling nonlinear phenomena of small magnitude; see, for instance, \cite{rugh1981nonlinear,mohler1991nonlinear,mohler1970natural,weiner1980sinusoidal,hartmann2013balanced,benner2011lyapunov,benner2017dual}. We are interested in large-scale settings where simulating/solving \eqref{binon} for a wide variety of inputs $ {\bf u} (t) $ and parameters ${\bf p}$ solely to determine the output $ {\bf y} (t;{\bf p}) $ is too expensive. 
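To make the structure of \eqref{binon} concrete, a minimal explicit-Euler time stepper for a fixed parameter value is sketched below; this is a toy illustration with made-up matrices, not a method proposed in this paper:

```python
import numpy as np

def simulate_bilinear(E, A, N_list, B, C, u, x0, t):
    """Explicit-Euler stepping of E x' = A x + sum_j N_j x u_j(t) + B u(t), y = C x."""
    Einv = np.linalg.inv(E)       # fine for a toy problem; use solves/factorizations at scale
    x = np.array(x0, dtype=float)
    ys = [C @ x]
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        uk = np.atleast_1d(u(t[k]))
        rhs = A @ x + B @ uk
        for j, Nj in enumerate(N_list):
            rhs = rhs + (Nj @ x) * uk[j]   # bilinear coupling of state and j-th input
        x = x + dt * (Einv @ rhs)
        ys.append(C @ x)
    return np.array(ys)

# Toy data (made up for illustration): n = 2 states, one input, one output
E = np.eye(2)
A = np.array([[-1.0, 0.2], [0.0, -2.0]])
N1 = 0.1 * np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
t = np.linspace(0.0, 5.0, 501)
y = simulate_bilinear(E, A, [N1], B, C, lambda s: np.array([np.cos(s)]), np.zeros(2), t)
```

The ${\bf N}_j {\bf x} u_j$ terms are what distinguish the output from that of the purely linear system, and are exactly what makes the dynamics input-dependent in a nonlinear way.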
Therefore, our goal is to construct a reduced parametric bilinear system of order $r \ll n$ in state-space form \begin{align} \widetilde{\Sigma} : \ \left\{ \begin{array}{l} \label{brom} {\bf \widetilde{E}} ( {\bf p} ) \dot{ {\bf \widetilde{x}} } (t;{\bf p}) = \displaystyle {\bf \widetilde{A}} ( {\bf p} ) {\bf \widetilde{x}} (t;{\bf p}) + \sum_{j=1}^m {\bf \widetilde{N}}_j ( {\bf p} ) {\bf \widetilde{x}} (t;{\bf p}) u_j (t) + {\bf \widetilde{B}} ( {\bf p} ) {\bf u} (t), \\[1ex] {\bf \widetilde{y}} (t;{\bf p}) = {\bf \widetilde{C}} ( {\bf p} ) {\bf \widetilde{x}} (t;{\bf p}), \end{array} \right. \end{align} where ${\bf \widetilde{E}}({\bf p}),{\bf \widetilde{A}}({\bf p}), {\bf \widetilde{N}}_j({\bf p}) \in \mathbb{R}^{r\times r}$, for $j=1,\dots m$, ${\bf \widetilde{B}}({\bf p}) \in \mathbb{R}^{r\times m}$, and ${\bf \widetilde{C}}({\bf p}) \in \mathbb{R}^{\ell\times r}$ such that the reduced output ${\bf \widetilde{y}} (t;{\bf p})$ provides a good approximation to the original output ${\bf y}(t;{\bf p})$ for a variety of inputs ${\bf u}(t)$ and a range of parameters ${\bf p}$. \emph{Non-parametric} bilinear systems where the state-space matrices ${\bf E}$, ${\bf A}$, ${\bf N}_j$ for $j=1,\ldots,m$, ${\bf B}$ and ${\bf C}$ are constant, have been studied thoroughly, and input-independent/optimal model reduction techniques from the linear case (${\bf N}_j = 0$ for $j=1,\ldots,m$) have been successfully generalized to non-parametric bilinear systems. For example, \cite{phillips2003projection,bai2006projection,breiten2010krylov,benner2011generalised,ahmad2017krylov} have extended model reduction via rational interpolation \cite{antoulas2010interpolatory,beattie2017model} from linear to non-parametric bilinear systems. 
The optimal model reduction of linear dynamical systems in the $\mathcal{H}_2$ norm via the iterative rational Krylov algorithm (IRKA) \cite{gugercin2008h_2} has been generalized to bilinear systems via bilinear IRKA (B-IRKA) \cite{benner2012interpolation}. Later, \cite{flagg2015multipoint} showed that, as with IRKA and $\mathcal{H}_2$ model reduction in the linear case, the reduced model via B-IRKA also yields a Hermite interpolation in this case, but in the sense of Volterra series interpolation. Similarly, Gramians and balanced truncation (BT) for linear dynamical systems \cite{mullis1976synthesis,moore1981principal} have also been generalized to non-parametric bilinear systems \cite{al1993new,gray1998energy,hartmann2013balanced,benner2011lyapunov}. Moreover, \cite{antoulas2016model} has applied the Loewner framework \cite{mayo2007framework} to bilinear systems. A plethora of work exists on model reduction of parametrized linear dynamical systems, i.e., ${\bf N}_j = 0$ for $j=1,\ldots,m$ in \eqref{binon}; see, for example, \cite{baur2011interpolatory,benner2015survey,gunupudi2003ppt,Daniel2004,BenF14,Panzer_etal2010,AmsallemFarhat2011,Degroote2010,BuiThanh2008} and the references therein. In this paper, we are interested in input-independent (transfer function-based) model reduction of parametric bilinear systems, where only the state-space matrices enter the model reduction process and there is no need to choose a specific input ${\bf u}(t)$ nor to simulate the full model \eqref{binon}. More specifically, we focus on parametric model reduction that uses the concept of (parametric) rational interpolation. These methods, also referred to as interpolatory parametric model reduction, have been successfully applied to parametric \emph{linear} dynamical systems; see, e.g., \cite{baur2011interpolatory,Daniel2004,BenF14}.
However, unlike the extensions of interpolation theory and IRKA to \emph{non-parametric} bilinear systems, interpolatory methods have not yet been generalized to parametric bilinear systems. In this paper, we close this gap and provide a natural extension of interpolatory projections to parametric bilinear dynamical systems. Our framework yields a reduced parametric bilinear model whose subsystem transfer functions will (tangentially) interpolate the original subsystem transfer functions together with the parameter sensitivities and Hessians at the sampled frequencies and parameter values along chosen directions. Note that we are {\em not} focusing on the problem of selecting parameter samples, but rather on ensuring tangential interpolation of the full and reduced models at the chosen points and directions. One can then couple this interpolatory model reduction algorithm to a desired sampling strategy. The remainder of the paper is organized as follows: Section \ref{problem} introduces the problem description and presents the main theoretical results. Section \ref{examples} illustrates the theory using two numerical examples. This is followed by the conclusions and future directions in Section \ref{sec:conc}. \section{Problem Description} \label{problem} In this section, we introduce the ingredients of the model reduction problem for parametric bilinear systems such as projection, subsystem transfer function, and tangential interpolation. We then present the main results of the paper. \subsection{Projection-based model reduction of parametric bilinear systems via global basis} \label{sec:projintro} We construct the reduced parametric bilinear system \eqref{brom} via projection. We follow the \emph{global basis approach} (as opposed to using a local basis and performing extrapolation \cite{hay2009local,zimmermann2016local} or interpolation \cite{AmsallemFarhat2011,Degroote2010,zimmermann2014locally}). 
Thus, we construct two constant global model reduction bases, namely $ {\bf V} \in \mathbb{C}^{n \times r}$ and $ {\bf W} \in \mathbb{C}^{n \times r}$, that capture the parametric dependence of the underlying system using the information from various sampling points. We refer the reader to \cite{benner2015survey} for detailed explanations regarding global and local bases, and different sampling options. The bases ${\bf V}$ and ${\bf W}$ are computed to enforce specific interpolation conditions as discussed in Section \ref{sec:conditions}. Once the model reduction bases ${\bf V}$ and ${\bf W}$ are constructed, the reduced model quantities in \eqref{brom} are obtained via Petrov-Galerkin projection: \begin{equation} \label{eq:romss} \begin{array}{llll} {\bf \widetilde{E}}({\bf p}) = {\bf W}^\top {\bf E} ({\bf p}) {\bf V}, & {\bf \widetilde{A}}({\bf p}) = {\bf W}^\top {\bf A} ({\bf p}) {\bf V}, & {\bf \widetilde{B}}({\bf p}) = {\bf W}^\top {\bf B} ({\bf p}), \\ {\bf \widetilde{C}}({\bf p}) = {\bf C} ({\bf p}) {\bf V} , ~~~~~~\mbox{and}~~~ & {\bf \widetilde{N}}_j({\bf p}) = {\bf W}^\top {\bf N}_j ({\bf p}) {\bf V} ~~~~\mbox{for}~j=1,\ldots,m. \end{array} \end{equation} Now consider re-evaluating the reduced model quantities in \eqref{eq:romss} for a new parameter value ${\bf \widehat{p}} \in \mathbb{R}^\nu$. Consider the case of ${\bf \widetilde{E}}({\bf \widehat{p}})$: this requires re-evaluating the projection ${\bf \widetilde{E}}({\bf \widehat{p}}) = {\bf W}^\top {\bf E} ({\bf \widehat{p}}) {\bf V}$, whose cost scales with the original system dimension $n$. In practice, many problems exhibit an affine parametric structure, which makes the projection step numerically efficient. For simplicity, continue to consider the matrix ${\bf E}({\bf p})$ only.
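As a concrete illustration of the projection step \eqref{eq:romss}, the following NumPy sketch assembles the reduced matrices from given bases ${\bf V}$ and ${\bf W}$. The toy parameter dependence of the full-order matrices and the random orthonormal bases are placeholders invented for illustration only; they are not the interpolatory constructions developed later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: full order n, reduced order r, m inputs, ell outputs.
n, r, m, ell = 50, 4, 2, 3

def full_matrices(p):
    """Placeholder parametric matrices E(p), A(p), N_j(p), B(p), C(p);
    the dependence on the scalar p is invented purely for illustration."""
    E = np.eye(n) + 0.1 * p * np.diag(np.ones(n - 1), -1)
    A = -2.0 * np.eye(n) + 0.1 * p * np.diag(np.ones(n - 1), 1)
    N = [0.05 * p * np.eye(n) for _ in range(m)]   # N_j(p), j = 1, ..., m
    B = np.ones((n, m))
    C = p * np.ones((ell, n))
    return E, A, N, B, C

# Constant global bases V and W (random orthonormal placeholders; in the paper
# their columns are chosen to enforce the interpolation conditions).
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
W, _ = np.linalg.qr(rng.standard_normal((n, r)))

def project(p):
    """Petrov-Galerkin projection: reduced matrices at parameter value p."""
    E, A, N, B, C = full_matrices(p)
    return (W.T @ E @ V, W.T @ A @ V,
            [W.T @ Nj @ V for Nj in N], W.T @ B, C @ V)

Er, Ar, Nr, Br, Cr = project(0.7)
print(Er.shape, Ar.shape, Nr[0].shape, Br.shape, Cr.shape)
```

Note that the bases stay fixed while the projection is re-evaluated for each new parameter value; the affine structure discussed next removes the $n$-dependent cost of this re-evaluation.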
Assume that ${\bf E}({\bf p})$ has the following affine parametric form \begin{equation} \label{eq:affine} {\bf E}({\bf p})={\bf E}_0+ \sum_{i=1}^N f_i({\bf p}){\bf E}_i, \end{equation} where $f_i$ are scalar (nonlinear) functions reflecting the parametric dependency, and ${\bf E}_i \in \mathbb{R}^{n \times n}$ for $i=0,\ldots,N$ are constant matrices. Then, the reduced matrix ${\bf \widetilde{E}}({\bf p})$ in \eqref{eq:romss} is given by \begin{eqnarray} \label{eq:affine_rom} {\bf \widetilde{E}}({\bf p})= {\bf W}^\top{\bf E}_0 {\bf V}+ \sum_{i=1}^N f_i({\bf p}) {\bf W}^\top{\bf E}_i {\bf V}, \end{eqnarray} where the matrices ${\bf W}^\top{\bf E}_i {\bf V}$, for $i=0,\ldots,N$, need to be computed only once in an offline phase and can then be recombined for efficient computation of ${\bf \widetilde{E}}({\bf \widehat{p}})$ in any online phase. The same discussion applies to the other matrices in \eqref{eq:romss} as well. When ${\bf E}({\bf p})$ does not admit an affine parametrization as in \eqref{eq:affine}, one first constructs an affine approximation of ${\bf E}({\bf p})$, typically via a matrix version of the (Discrete) Empirical Interpolation Method \cite{Grepl07,Chaturantabut2010}; see \cite{benner2015survey} for details. We will revisit this issue in the second numerical example in Section \ref{ex:heat}. \subsection{Interpolatory projections for parametric linear systems} \label{sec:linear} A powerful framework in the case of linear dynamical systems \begin{align} \label{eq:lfom} {\bf E} ( {\bf p} ) \dot{ {\bf x} } (t;{\bf p}) = \displaystyle {\bf A} ({\bf p}) {\bf x} (t;{\bf p}) + {\bf B} ({\bf p}) {\bf u} (t), ~~~~{\bf y} (t;{\bf p}) = {\bf C} ({\bf p}) {\bf x} (t;{\bf p}), \end{align} is to transform the problem into the frequency domain via the Laplace transform. To do so, let ${{\bf Y}}(s;{\bf p})$ and ${{\bf U}}(s)$ denote the Laplace transforms of ${\bf y}(t;{\bf p})$ and ${\bf u}(t)$, respectively.
Then, applying the Laplace transform to \eqref{eq:lfom} leads to $$ {\bf Y}(s;{\bf p}) = {\bf H}(s;{\bf p}) {\bf U}(s),~~~\mbox{where}~~~{\bf H}(s;{\bf p}) = {\bf C}({\bf p})\left(s\,{\bf E}({\bf p})\, -\,{\bf A}({\bf p})\right)^{-1}{\bf B}({\bf p}) $$ is the transfer function of \eqref{eq:lfom}. Then, the goal is to construct a reduced parametric linear model \begin{align} \label{eq:lrom} {\bf \widetilde{E}} ( {\bf p} ) \dot{ {\bf \widetilde{x}} } (t;{\bf p}) = \displaystyle {\bf \widetilde{A}} ({\bf p}) {\bf \widetilde{x}} (t;{\bf p}) + {\bf \widetilde{B}} ({\bf p}) {\bf u} (t), ~~~~{\bf \widetilde{y}}(t;{\bf p}) = {\bf \widetilde{C}} ({\bf p}) {\bf \widetilde{x}} (t;{\bf p}), \end{align} whose reduced parametric transfer function ${\bf \widetilde{H}}(s;{\bf p}) = {\bf \widetilde{C}}({\bf p})\left(s\,{\bf \widetilde{E}}({\bf p})\, -\,{\bf \widetilde{A}}({\bf p})\right)^{-1}{\bf \widetilde{B}}({\bf p})$ approximates ${\bf H}(s;{\bf p})$ well, which would in turn imply ${\bf \widetilde{y}}(t;{\bf p}) \approx {\bf y}(t;{\bf p})$ since ${\bf Y}(s;{\bf p}) - {\bf \widetilde{Y}}(s;{\bf p}) = ({\bf H}(s;{\bf p}) -{\bf \widetilde{H}}(s;{\bf p})){\bf U}(s)$. 
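Numerically, a transfer function value ${\bf H}(\sigma;{\bf \widehat{p}})$ is obtained from a single (sparse) linear solve rather than an explicit inverse. The following minimal NumPy sketch uses an invented affine parameter dependence ${\bf A}({\bf p}) = {\bf A}_0 + p\,{\bf I}$ and constant ${\bf E}$, ${\bf B}$, ${\bf C}$, purely for illustration.

```python
import numpy as np

# Hypothetical small parametric linear system; the parameter dependence
# below is a placeholder, not a model from the paper.
n, m, ell = 30, 2, 2
rng = np.random.default_rng(1)
E = np.eye(n)
A0 = -np.diag(np.arange(1.0, n + 1.0))
B = rng.standard_normal((n, m))
C = rng.standard_normal((ell, n))

def H(s, p):
    """Evaluate H(s;p) = C(p) (s E(p) - A(p))^{-1} B(p) by solving the
    shifted linear system instead of forming the inverse explicitly."""
    A = A0 + p * np.eye(n)                  # assumed affine dependence on p
    X = np.linalg.solve(s * E - A, B)       # one n-by-n solve, m right-hand sides
    return C @ X                            # ell-by-m transfer function value

val = H(2.0j, 0.5)   # sample frequency s = 2i, parameter p = 0.5
print(val.shape)
```

Solves of exactly this form, at selected frequency and parameter samples, are the dominant cost of the interpolatory constructions that follow.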
One way to enforce ${\bf \widetilde{H}}(s;{\bf p})\approx {\bf H}(s;{\bf p})$ is via rational interpolation: Given the frequency interpolation points $\{\sigma_1,\ldots,\sigma_{q_s}\} \subset \mathbb{C} $, the right tangential directions $\{{\bf b}_1,\ldots,{\bf b}_{q_s}\} \subset \mathbb{C}^m $, the left tangential directions $\{{\bf c}_1,\ldots,{\bf c}_{q_s}\} \subset \mathbb{C}^\ell$, and the parameter interpolation samples $\{{\bf \widehat{p}}_1,\ldots,{\bf \widehat{p}}_{q_p}\} \subset \mathbb{R}^\nu$, find a reduced model \eqref{eq:lrom} such that ${\bf \widetilde{H}}(s;{\bf p})$ is a Hermite tangential interpolant to ${\bf H}(s;{\bf p})$ at the selected samples, i.e., $$ \begin{array}{cc} {\bf H}(\sigma_i;{\bf \widehat{p}}_j) {\bf b}_i ={\bf \widetilde{H}}(\sigma_i;{\bf \widehat{p}}_j) {\bf b}_i, & {\bf c}_i^\top{\bf H}(\sigma_i;{\bf \widehat{p}}_j) ={\bf c}_i^\top{\bf \widetilde{H}}(\sigma_i;{\bf \widehat{p}}_j), \\ \frac{\partial}{\partial s} \left( {\bf c}_i^\top {\bf H}(\sigma_i;{\bf \widehat{p}}_j) {\bf b}_i \right) = \frac{\partial}{\partial s} \left( {\bf c}_i^\top {\bf \widetilde{H}}(\sigma_i;{\bf \widehat{p}}_j) {\bf b}_i \right), & ~~~\mbox{and}~~~ \nabla_{\bf p} \left( {\bf c}_i^\top {\bf H} ( \sigma_{i}; {\bf \widehat{p}}_j ) {\bf b}_{i} \right) = \nabla_{\bf p} \left( {\bf c}_{i}^\top {\bf \widetilde{H}} ( \sigma_{i}; {\bf \widehat{p}}_j ) {\bf b}_i \right) , \end{array} $$ for $i =1,\ldots,q_s$ and $j= 1,\ldots,q_p$. In other words, the reduced model tangentially matches the transfer function values as well as their frequency and parameter derivatives at the sampled points. One can impose higher-order interpolation conditions at the frequency and parameter samples as well, such as the parameter Hessian; we omit these here for brevity. The following result from \cite{baur2011interpolatory} shows how to construct model reduction bases ${\bf V}$ and ${\bf W}$ that satisfy the desired interpolation conditions.
\begin{theorem} \label{thm:linear} Given ${\bf H}(s;{\bf p}) ={\bf C}({\bf p})\left(s\,{\bf E}({\bf p})\, -\,{\bf A}({\bf p})\right)^{-1}{\bf B}({\bf p}) $, let ${\bf \widetilde{H}}(s;{\bf p}) = {\bf \widetilde{C}}({\bf p})\left(s\,{\bf \widetilde{E}}({\bf p})\, -\,{\bf \widetilde{A}}({\bf p})\right)^{-1}{\bf \widetilde{B}}({\bf p})$ be obtained via Petrov-Galerkin projection using the bases ${\bf V}$ and ${\bf W}$. Let $ \sigma \in \mathbb{C} $, $ {\bf \widehat{p}} \in \mathbb{R}^\nu $, $ {\bf b} \in \mathbb{C}^m \setminus \{ \bf0 \} $, and $ {\bf c} \in \mathbb{C}^\ell \setminus \{ \bf0 \} $. Define \begin{equation} \label{eq:cA} \mathcal{A} ( s; {\bf p} ) = s {\bf E} ( {\bf p} ) - {\bf A} ( {\bf p} ) \quad \mbox{and}\quad \mathcal{\widetilde{A}} ( s; {\bf p} ) = s {\bf \widetilde{E}} ( {\bf p} ) - {\bf \widetilde{A}} ( {\bf p} ). \end{equation} \begin{enumerate}[(a)] \item If $ \mathcal{A} (\sigma; {\bf \widehat{p}})^{-1} {\bf B} ({\bf \widehat{p}}) {\bf b} \in \textup{Ran} ({\bf V}) $, then $$ {\bf H}(\sigma; {\bf \widehat{p}}) {\bf b} = {\bf \widetilde{H}} (\sigma; {\bf \widehat{p}}) {\bf b};$$ \item If $ \mathcal{A} (\sigma; {\bf \widehat{p}})^{-\top} {\bf C} ({\bf \widehat{p}})^\top {\bf c} \in \textup{Ran} ({\bf W}) $, then $$ {\bf c}^\top {\bf H} (\sigma; {\bf \widehat{p}}) = {\bf c}^\top {\bf \widetilde{H}} (\sigma; {\bf \widehat{p}});$$ \item If both (a) and (b) hold simultaneously, then $$ \frac{\partial}{\partial s} \left( {\bf c}^\top {\bf H} (\sigma; {\bf \widehat{p}}) {\bf b} \right) = \frac{\partial}{\partial s} \left( {\bf c}^\top {\bf \widetilde{H}} (\sigma; {\bf \widehat{p}}) {\bf b} \right) \quad\mbox{and}\quad \nabla_{\bf p} \left( {\bf c}^\top {\bf H} (\sigma; {\bf \widehat{p}}) {\bf b} \right) = \nabla_{\bf p} \left( {\bf c}^\top {\bf \widetilde{H}} (\sigma; {\bf \widehat{p}}) {\bf b} \right), $$ \end{enumerate} provided $ \mathcal{A} (\sigma; {\bf \widehat{p}}) $ and $ \mathcal{\widetilde{A}} (\sigma; {\bf \widehat{p}}) $ are invertible. 
\end{theorem} Theorem \ref{thm:linear} shows how to construct ${\bf V}$ and ${\bf W}$ to fulfill the required interpolation conditions. All one has to do is to compute the vectors, e.g., the vector $ \mathcal{A} (\sigma; {\bf \widehat{p}})^{-1} {\bf B} ({\bf \widehat{p}}) {\bf b}$, for the desired frequency interpolation points $\sigma$ and parameter interpolation point ${\bf \widehat{p}}$, and use these vectors as columns of ${\bf V}$. We refer the reader to the original source \cite{baur2011interpolatory} for more details. The goal of this paper is to extend this result to parametric bilinear systems. \subsection{Interpolatory parametric bilinear model reduction problem} \label{sec:intproblem} Reconsider the full-order parametric bilinear system in \eqref{binon}: \begin{align} \left\{ \begin{array}{l} \tag{\ref{binon}} {\bf E} ( {\bf p} ) \dot{ {\bf x} } (t;{\bf p}) = \displaystyle {\bf A} ({\bf p}) {\bf x} (t;{\bf p}) + \sum_{j=1}^m {\bf N}_j ({\bf p}) {\bf x} (t;{\bf p}) u_j (t) + {\bf B} ({\bf p}) {\bf u} (t), \\[1ex] {\bf y} (t;{\bf p}) = {\bf C} ({\bf p}) {\bf x} (t;{\bf p}). \end{array} \right. \end{align} Even though this system is nonlinear due to the terms involving ${\bf N}_j({\bf p})$, the concept of a transfer function can still be applied via a Volterra series representation \cite{rugh1981nonlinear}. Given the bilinear system \eqref{binon}, we first introduce some notation to make the presentation of the Volterra series representation more compact: \begin{align} \label{eq:not1} {\bf N} ({\bf p}) & = [ {\bf N}_1 ({\bf p}) \ {\bf N}_2 ({\bf p}) \ \cdots \ {\bf N}_m ({\bf p}) ] , & \overline{ {\bf N} } ({\bf p}) & = \left[ \begin{array}{c} {\bf N}_1 ({\bf p}) \\ {\bf N}_2 ({\bf p}) \\ \vdots \\ {\bf N}_m ({\bf p}) \end{array} \right], ~~\mbox{and}~~& {\bf I}_m^{\otimes^k} & = \underbrace{ {\bf I}_m \otimes \cdots \otimes {\bf I}_m }_{k \text{ times }}, \end{align} where $\otimes$ denotes the Kronecker product.
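This notation translates directly into code. The following NumPy sketch, with hypothetical dimensions and random ${\bf N}_j$ blocks at some fixed parameter value, forms the horizontal concatenation ${\bf N}({\bf p})$, the vertical concatenation $\overline{\bf N}({\bf p})$, and the Kronecker power ${\bf I}_m^{\otimes^k}$.

```python
import numpy as np
from functools import reduce

# Hypothetical dimensions and random N_j matrices (at some fixed parameter),
# purely to illustrate the notation introduced above.
n, m = 4, 3
rng = np.random.default_rng(2)
N_list = [rng.standard_normal((n, n)) for _ in range(m)]

N_row = np.hstack(N_list)   # N(p):    [N_1  N_2 ... N_m],   size n x (n*m)
N_bar = np.vstack(N_list)   # Nbar(p): N_j stacked vertically, size (n*m) x n

def I_m_kron(k):
    """k-fold Kronecker power I_m x ... x I_m; mathematically this is just
    the identity of size m^k, so in practice it is never formed explicitly."""
    return reduce(np.kron, [np.eye(m)] * k)

print(N_row.shape, N_bar.shape, I_m_kron(3).shape)
```

Since ${\bf I}_m^{\otimes^k}$ is simply the identity of dimension $m^k$, expressions such as ${\bf I}_m^{\otimes^{k-1}} \otimes {\bf b}$ are best applied implicitly through reshaping rather than by building the Kronecker factor.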
The output ${\bf y}(t;{\bf p})$ of \eqref{binon} can be represented as a Volterra series \begin{equation} \mathbf{y}(t;{\bf p})=\sum_{k=1}^{\infty}\int_0^{t}\int_0^{t_1}\cdots \int_0^{t_{k-1}}\mathbf{h}_k(t_1,t_2,\dots,t_k;{\bf p})\left(\mathbf{u}(t-\sum_{i=1}^k t_i)\otimes \cdots \otimes \mathbf{u}(t-t_k)\right) \mbox{d} t_k \cdots \mbox{d} t_1, \end{equation} where the $\mathbf{h}_k(t_1,t_2,\dots,t_k;{\bf p})$ are the regular Volterra kernels, also called subsystem kernels. Then, taking the multivariable Laplace transform of the degree-$k$ regular kernel $\mathbf{h}_k$ leads to the $k^{\rm th}$ subsystem transfer function: \begin{align} {\bf H}_k ( s_1, \dots, s_k; {\bf p} ) = ~ {\bf C} ({\bf p}) \mathcal{A} (s_k; {\bf p})^{-1} \times{\bf N} ({\bf p}) [ {\bf I}_m \otimes \mathcal{A} (s_{k-1}; {\bf p})^{-1} {\bf N} ({\bf p}) ] \cdots [ {\bf I}_m^{\otimes^{k-2}} \otimes &\mathcal{A} (s_2; {\bf p})^{-1} {\bf N} ({\bf p}) ] \label{eq:Hkfom} \\ & \times [ {\bf I}_m^{\otimes^{k-1}} \otimes \mathcal{A} (s_1; {\bf p})^{-1} {\bf B} ({\bf p}) ], \nonumber \end{align} where $\mathcal{A} (s; {\bf p})$ is as defined in \eqref{eq:cA}, and ${\bf N}({\bf p})$ and ${\bf I}_m^{\otimes^{k}}$ are as defined in \eqref{eq:not1}. For details of this analysis, we refer the reader to \cite{siu1991convergence,rugh1981nonlinear}. The Volterra series representation of bilinear systems has been successfully used for interpolation-based, input-independent, optimal model reduction of non-parametric bilinear systems; see, e.g., \cite{benner2012interpolation,flagg2015multipoint}. Similarly, for the reduced bilinear system \eqref{brom}, define \begin{align} \label{eq:not2} {\bf \widetilde{N}} ({\bf p}) & = [ {\bf \widetilde{N}}_1 ({\bf p}) \ {\bf \widetilde{N}}_2 ({\bf p}) \ \cdots \ {\bf \widetilde{N}}_m ({\bf p}) ].
\end{align} Then, the $k^{\rm th}$ subsystem transfer function of the reduced model \eqref{brom} is given by \begin{align} \label{eq:Hrfom} {\bf \widetilde{H}}_k ( s_1, \dots, s_k; {\bf p} ) = ~ {\bf \widetilde{C}} ({\bf p}) \mathcal{\widetilde{A}} (s_k; {\bf p})^{-1} \times{\bf \widetilde{N}} ({\bf p}) [ {\bf I}_m \otimes \mathcal{\widetilde{A}} (s_{k-1}; {\bf p})^{-1} {\bf \widetilde{N}} ({\bf p}) ] \cdots [ {\bf I}_m^{\otimes^{k-2}} \otimes & \mathcal{\widetilde{A}} (s_2; {\bf p})^{-1} {\bf \widetilde{N}} ({\bf p}) ] \\ & \times [ {\bf I}_m^{\otimes^{k-1}} \otimes \mathcal{\widetilde{A}} (s_1; {\bf p})^{-1} {\bf \widetilde{B}} ({\bf p}) ]. \nonumber \end{align} This allows us to formulate the parametric interpolatory model reduction problem in our setting: Given interpolation frequencies $ \{ \sigma_1, \dots, \sigma_q \} \subset \mathbb{C} $, a nontrivial right direction $ {\bf b} \in \mathbb{C}^m $, a nontrivial left direction $ {\bf c} \in \mathbb{C}^\ell $, and an interpolation parameter sample $ {\bf \widehat{p}} \in \mathbb{R}^\nu $, find $ {\bf V}, {\bf W} $ such that the reduced model \eqref{brom} constructed via projection as in \eqref{eq:romss} satisfies the following interpolation conditions for any $ k \in \{ 1, \dots, q \}$: \begin{align} {\bf H}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) & = {\bf \widetilde{H}}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ), \label{eq:rint} \\ {\bf c}^\top {\bf H}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) & = {\bf c}^\top {\bf \widetilde{H}}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ), \label{eq:lint} \\ \frac{\partial}{\partial s_i} \left( {\bf c}^\top {\bf H}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right) & = \frac{\partial}{\partial s_i} \left( {\bf c}^\top {\bf \widetilde{H}}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) (
{\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right) , & & i \in \{ 1, \dots, k \} \label{eq:hermite} \\ \mbox{\boldmath${\EuScript{J}}$} _{\bf p} \left( {\bf c}^\top {\bf H}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right) & = \mbox{\boldmath${\EuScript{J}}$} _{\bf p} \left( {\bf c}^\top {\bf \widetilde{H}}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right), \label{eq:gradient} \\ \mbox{\boldmath${\EuScript{H}}$} _{\bf p} \left( {\bf c}^\top {\bf H}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} )\right) & = \mbox{\boldmath${\EuScript{H}}$} _{\bf p} \left({\bf c}^\top {\bf \widetilde{H}}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right), \label{eq:hessian} \end{align} where $\mbox{\boldmath${\EuScript{J}}$} _{{\bf p}}(\cdot)$ denotes the matrix of sensitivities (Jacobian) and $\mbox{\boldmath${\EuScript{H}}$} _{{\bf p}}(\cdot)$ denotes the Hessian (tensor) with respect to ${\bf p}$. In other words, we would like to construct a reduced parametric bilinear system whose leading subsystems interpolate (both in frequency and parameter space) the corresponding leading subsystems of the full order parametric model. Note that we are not only enforcing Lagrange interpolation. We require the reduced model to match the parameter sensitivities and Hessians as well, which is especially important when reduced models are used in optimization. Moreover, these conditions can then be generalized to different reorderings of the frequencies, multiple tangential directions, and several parameter values. \subsection{Subspace conditions for parametric bilinear interpolation} \label{sec:conditions} In this section, we establish the subspace conditions to enforce the desired interpolation conditions \eqref{eq:rint}--\eqref{eq:hessian} for parametric bilinear systems.
Note that even for the parametric bilinear system \eqref{binon} we consider here, some of these interpolation conditions, e.g., \eqref{eq:rint}, do not involve parameter gradient and/or parameter Hessian interpolation and can thus be interpreted as regular tangential bilinear subsystem interpolation for a fixed parameter ${\bf p} = {\bf \widehat{p}}$ as considered in \cite{benner2011generalised}. However, even though our subspace conditions for \eqref{eq:rint}-\eqref{eq:lint} will look similar to those in \cite{benner2011generalised}, we include the corresponding theorem (Theorem \ref{thm:pbmor1sided} below) and its complete proof for the following reasons. Although \cite{benner2011generalised} considers tangential interpolation for non-parametric bilinear systems, the tangential interpolation conditions appear in a different form there. In our formulation, the tangential directions appear in Kronecker product form due to the structure of ${\bf H}_k (s_1,s_2,\ldots,s_k;{\bf p})$ as defined in \eqref{eq:Hkfom}, illustrating that ${\bf H}_k (s_1,s_2,\ldots,s_k;{\bf p})$ can be considered to have $m^k$ inputs. Our conditions result in regular tangential interpolation in the subblocks of ${\bf H}_k (s_1,s_2,\ldots,s_k;{\bf p})$; details will be given below. Moreover, we provide different proofs for \eqref{eq:rint} and \eqref{eq:lint}, which we later use in the proof of Theorem \ref{thm:pbmor2sided}. Finally, we include the ${\bf E}({\bf p})$ term in the full model. Clearly, the subspace conditions \eqref{eq:gradient} for matching the parameter gradient and \eqref{eq:hessian} for matching the parameter Hessian are new and will be fully discussed. \begin{theorem} \label{thm:pbmor1sided} Let $ q $ be the number of subsystems we wish to interpolate. Consider $ \{ \sigma_1, \dots, \sigma_q \} \subset \mathbb{C} $ and $ {\bf \widehat{p}} \in \mathbb{R}^\nu $ such that $ \mathcal{A} ( \sigma_i; {\bf \widehat{p}} ) $ is invertible for all $ i \in \{ 1, \dots, q \} $.
Consider also the nontrivial vectors $ {\bf b} \in \mathbb{C}^m $ and $ {\bf c} \in \mathbb{C}^\ell $. Define \begin{align} {\bf V}_1 & = \mathcal{A} ( \sigma_1; {\bf \widehat{p}} )^{-1} {\bf B} ({\bf \widehat{p}}) {\bf b} , & {\bf V}_k & = \mathcal{A} ( \sigma_k; {\bf \widehat{p}} )^{-1} {\bf N} ({\bf \widehat{p}}) ( {\bf I}_m \otimes {\bf V}_{k-1} ) , & \mbox{for}~~k = 2, \dots, q , \label{eq:defineV} \\ {\bf W}_1 & = \left( \mathcal{A} ( \sigma_q; {\bf \widehat{p}} ) \right)^{-\top} {\bf C} ({\bf \widehat{p}})^\top{\bf c},& {\bf W}_k & = \left( \mathcal{A} ( \sigma_{q+1-k}; {\bf \widehat{p}} ) \right)^{-\top} \overline{ {\bf N}}({\bf \widehat{p}})^\top ( {\bf I}_m \otimes {\bf W}_{k-1}), & \mbox{for}~~k = 2, \dots, q . \label{eq:defineW} \end{align} If \begin{equation} \label{eq:Vcond} \bigcup_{k=1}^q {\bf V}_k \subseteq \textup{Ran} ( {\bf V} ) , \end{equation} then, for $ k = 1, \dots, q$, \begin{equation} \label{eq:Vint} {\bf H}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) = {\bf \widetilde{H}}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) . \end{equation} If \begin{equation} \label{eq:Wcond} \bigcup_{k=1}^q {\bf W}_k \subseteq \textup{Ran} ( {\bf W} ) , \end{equation} then, for $ k =1, \dots, q$, \begin{equation} \label{eq:Wint} {\bf c}^\top {\bf H}_k ( \sigma_{q+1-k}, \dots, \sigma_q; {\bf \widehat{p}} ) = {\bf c}^\top {\bf \widetilde{H}}_k ( \sigma_{q+1-k}, \dots, \sigma_q; {\bf \widehat{p}} ) . 
\end{equation} \end{theorem} \begin{proof} Define \begin{align} \label{eq:pqfg} \mathcal{P} ( s; {\bf p} ) & = {\bf V} \mathcal{\widetilde{A}} ( s; {\bf p} )^{-1} {\bf W}^\top \mathcal{A} ( s; {\bf p} ) , \nonumber\\ \mathcal{Q} ( s; {\bf p} ) & = \mathcal{A} ( s; {\bf p} ) {\bf V} \mathcal{\widetilde{A}} ( s; {\bf p} )^{-1} {\bf W}^\top ,\nonumber\\ {\bf f}_k ( s_1, \dots, s_k; {\bf p} ) & = \mathcal{A} (s_k; {\bf p})^{-1} {\bf N} ({\bf p}) ( {\bf I}_m \otimes \mathcal{A} (s_{k-1}; {\bf p})^{-1} {\bf N} ({\bf p}) ) \cdots ( {\bf I}_m^{\otimes^{k-2} } \otimes \mathcal{A} (s_2; {\bf p})^{-1} {\bf N} ({\bf p}) ) \\ & \quad\quad~ \times( {\bf I}_m^{\otimes^{k-1} } \otimes \mathcal{A} (s_1; {\bf p})^{-1} {\bf B} ({\bf p}) {\bf b} ),~\mbox{and}\nonumber\\ {\bf g}_k^\top ( s_1, \dots, s_k; {\bf p} ) & = {\bf c}^\top {\bf C} ({\bf p}) \mathcal{A} (s_k; {\bf p})^{-1} {\bf N} ({\bf p}) ( {\bf I}_m \otimes \mathcal{A} (s_{k-1}; {\bf p})^{-1} {\bf N} ({\bf p}) ) \cdots ( {\bf I}_m^{\otimes^{k-2} } \otimes \mathcal{A} (s_2; {\bf p})^{-1} {\bf N} ({\bf p}) ) \nonumber\\ & \quad\quad\times ( {\bf I}_m^{\otimes^{k-1} } \otimes \mathcal{A} (s_1; {\bf p})^{-1} ) .\nonumber \end{align} Note that $ \mathcal{P} (s;{\bf p}) $ is a skew projector onto $ \textup{Ran} ({\bf V}) $ while $ \mathcal{Q} (s;{\bf p}) $ is a skew projector along $ \textup{Ker} ({\bf W}^\top) $. Also note that $ {\bf H}_k (s_1,\ldots,s_k;{\bf p}) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) = {\bf C} ({\bf p}){\bf f}_k ( s_1, \dots, s_k; {\bf p} ) $ and $ {\bf c}^\top {\bf H}_k (s_1,\ldots,s_k;{\bf p}) = {\bf g}_k^\top ( s_1, \dots, s_k; {\bf p} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf B} ({\bf p}) ) $. First, we prove \eqref{eq:Vint}. Suppose \eqref{eq:Vcond} holds. We know that \eqref{eq:Vint} is true for $ k = 1 $ by Theorem \ref{thm:linear}. Assume that the result is true for $ k -1 $; recall $2 \le k \le q$.
Then using the definitions of $\mathcal{P} ( s; {\bf p} )$ and ${\bf f}_k ( s_1, \dots, s_k; {\bf p} )$ from \eqref{eq:pqfg}, we obtain \begin{equation} \label{intHk} {\bf H}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) - {\bf \widetilde{H}}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) = {\bf C} ({\bf \widehat{p}}) ( {\bf I}_n - \mathcal{P} ( \sigma_k; {\bf \widehat{p}} ) ) {\bf f}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}}) , \end{equation} where we factor out the term $ {\bf f}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}}) $ using the right interpolation of $ {\bf H}_{k-1} (\sigma_1, \dots, \sigma_{k-1}; {\bf \widehat{p}}) $ due to the induction assumption. Then, what is left to show is that \eqref{intHk} is zero: By the construction of $ {\bf V} $ in \eqref{eq:Vcond}, we obtain $ {\bf f}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}}) \in \textup{Ran} ( {\bf V} ) $. Hence $ {\bf f}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}}) \in \textup{Ran} ( \mathcal{P} ( \sigma_k; {\bf \widehat{p}} ) )$, which implies \begin{equation} \label{intPf} ( {\bf I}_n - \mathcal{P} ( \sigma_k; {\bf \widehat{p}} ) ) {\bf f}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}}) = 0, \end{equation} since $ \mathcal{P} (s;{\bf p}) $ is a skew projector onto $ \textup{Ran} ({\bf V}) $. The proof of \eqref{eq:Wint} follows similarly. Suppose \eqref{eq:Wcond} holds. Once again, the result is true for $ k = 1 $. Assume that it holds for $ k-1$. Similar to \eqref{intHk}, we obtain \begin{equation} \label{LintHk} {\bf c}^\top {\bf H}_k ( \sigma_{q+1-k}, \dots, \sigma_q; {\bf \widehat{p}} ) - {\bf c}^\top {\bf \widetilde{H}}_k ( \sigma_{q+1-k}, \dots, \sigma_q; {\bf \widehat{p}} ) = {\bf g}_k^\top ( \sigma_{q+1-k}, \dots, \sigma_q; {\bf \widehat{p}}) [ {\bf I}_m^{\otimes^{k-1}} \otimes ( {\bf I}_n - \mathcal{Q} ( \sigma_q; {\bf \widehat{p}} ) ) {\bf B} ({\bf \widehat{p}}) ]. 
\end{equation} Again, we show that this expression is zero. Note that by the definition of $ {\bf g}_k$ and the construction of $ {\bf W} $ in \eqref{eq:Wcond}, we have $ {\bf g}_k ( \sigma_{q+1-k}, \dots, \sigma_q; {\bf \widehat{p}}) \perp \textup{Ker} ( {\bf I}_m^{\otimes^{k-1}} \otimes \mathcal{Q} ( \sigma_q; {\bf \widehat{p}} ) ) $; thus \begin{equation} \label{intQg} {\bf g}_k^\top ( \sigma_{q+1-k}, \dots, \sigma_q; {\bf \widehat{p}}) [ {\bf I}_m^{\otimes^{k-1}} \otimes ( {\bf I}_n - \mathcal{Q} ( \sigma_q; {\bf \widehat{p}} ) ) ] = 0 . \end{equation} \hfill $\square$ \end{proof} Theorem \ref{thm:pbmor1sided} provides tangential interpolation of ${\bf H}_k(s_1,\ldots,s_k;{\bf p})$ in a specific order of the frequencies, namely in the order $\{\sigma_1,\ldots,\sigma_k\}$. However, one might also consider enforcing interpolation at the frequency samples $\{\sigma_1,\ldots,\sigma_k\}$ in any order, including repetitions, as is considered in \cite{benner2011generalised}. Indeed, as we will show in Theorem \ref{thm:pbmor2sided}, interpolation of the transfer function sensitivities will require this. The result is a direct extension of Theorem \ref{thm:pbmor1sided}; thus, we skip the details. It simply requires the subspaces to contain all possible combinations: \begin{corollary} \label{remek:order} Let $ q $ be the number of subsystems we wish to interpolate. Consider $ \{ \sigma_1, \dots, \sigma_q \} \subset \mathbb{C} $ and $ {\bf \widehat{p}} \in \mathbb{R}^\nu $ such that $ \mathcal{A} ( \sigma_i; {\bf \widehat{p}} ) $ is invertible for all $ i \in \{ 1, \dots, q \} $. Consider also the nontrivial vectors $ {\bf b} \in \mathbb{C}^m $ and $ {\bf c} \in \mathbb{C}^\ell $.
Define \begin{align} \begin{aligned} \label{hyp} {\bf V}_1 & = [ \mathcal{A} ( \sigma_1; {\bf \widehat{p}} )^{-1} {\bf B} ({\bf \widehat{p}}) {\bf b}, \ \ \cdots, \ \ \mathcal{A} ( \sigma_q; {\bf \widehat{p}} )^{-1} {\bf B} ({\bf \widehat{p}}) {\bf b} ], \\ {\bf V}_k & = [ \mathcal{A} ( \sigma_1; {\bf \widehat{p}} )^{-1} {\bf N} ({\bf \widehat{p}}) ( {\bf I}_m \otimes {\bf V}_{k-1} ), \ \ \cdots, \ \ \mathcal{A} ( \sigma_q; {\bf \widehat{p}} )^{-1} {\bf N} ({\bf \widehat{p}}) ( {\bf I}_m \otimes {\bf V}_{k-1} ) ] , & k = 2, \dots, q , \\ {\bf W}_1 & = [ \left( \mathcal{A} ( \sigma_1; {\bf \widehat{p}} ) \right)^{-\top} {\bf C} ({\bf \widehat{p}})^\top{\bf c}, \ \ \cdots, \ \ \left( \mathcal{A} ( \sigma_q; {\bf \widehat{p}} ) \right)^{-\top} {\bf C} ({\bf \widehat{p}})^\top{\bf c} ] , \\ {\bf W}_k & = [ \left( \mathcal{A} ( \sigma_1; {\bf \widehat{p}} ) \right)^{-\top} \overline{ {\bf N}}({\bf \widehat{p}})^\top ( {\bf I}_m \otimes {\bf W}_{k-1}), \ \ \cdots, \ \ \left( \mathcal{A} ( \sigma_q; {\bf \widehat{p}} ) \right)^{-\top} \overline{ {\bf N}}({\bf \widehat{p}})^\top ( {\bf I}_m \otimes {\bf W}_{k-1}) ] , & k = 2, \dots, q . \end{aligned} \end{align} If \begin{equation} \label{eq:Vcondall} \bigcup_{k=1}^q {\bf V}_k \subseteq \textup{Ran} ( {\bf V} ) , \end{equation} then, for $ k = 1, \dots, q $, and for any $ i_1, \dots, i_k \in \{ 1, \dots, q \} $, \[ {\bf H}_k ( \sigma_{i_1}, \dots, \sigma_{i_k}; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) = {\bf \widetilde{H}}_k ( \sigma_{i_1}, \dots, \sigma_{i_k}; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ). \] If \begin{equation} \label{eq:Wcondall} \bigcup_{k=1}^q {\bf W}_k \subseteq \textup{Ran} ( {\bf W} ) , \end{equation} then, for $ k = 1, \dots, q $, and for any $ i_1, \dots, i_k \in \{ 1, \dots, q \} $, \[ {\bf c}^\top {\bf H}_k ( \sigma_{i_1}, \dots, \sigma_{i_k}; {\bf \widehat{p}} ) = {\bf c}^\top {\bf \widetilde{H}}_k ( \sigma_{i_1}, \dots, \sigma_{i_k}; {\bf \widehat{p}} ).
\] \end{corollary} So far, we have proved the interpolation conditions using either only ${\bf V}$ or only ${\bf W}$; i.e., we assumed interpolation information only in one of the subspaces, corresponding to one-sided projection. The next theorem shows that when both subspaces are used, one automatically matches the sensitivities (derivatives) with respect to the frequencies and the parameters, without ever computing the sensitivities that are matched. \begin{theorem} \label{thm:pbmor2sided} Assume the hypotheses of Corollary \ref{remek:order}. Let ${\bf V}_k$ and ${\bf W}_k$ be constructed as in \eqref{hyp} for $k=1,2,\ldots,q$. If both \[ \bigcup_{k=1}^q {\bf V}_k \subseteq \textup{Ran} ( {\bf V} ) \qquad \mbox{and} \qquad \bigcup_{k=1}^q {\bf W}_k \subseteq \textup{Ran} ( {\bf W} ) , \] then for $ k = 1, \dots, q $ and for $ i = 1, \dots, k $: \[ \frac{\partial}{\partial s_i} \left( {\bf c}^\top {\bf H}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right) = \frac{\partial}{\partial s_i} \left( {\bf c}^\top {\bf \widetilde{H}}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right) , \] and \[ \mbox{\boldmath${\EuScript{J}}$} _{{\bf p}} \left({\bf c}^\top {\bf H}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right) = \mbox{\boldmath${\EuScript{J}}$} _{{\bf p}} \left({\bf c}^\top {\bf \widetilde{H}}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right), \] where $\mbox{\boldmath${\EuScript{J}}$} _{{\bf p}}(\cdot)$ denotes the matrix of sensitivities (Jacobian) with respect to ${\bf p}$. \end{theorem} \begin{proof} Recall the definitions $ \mathcal{P} (s;{\bf p}) $, $ \mathcal{Q} (s;{\bf p}) $, $ {\bf f}_k (s_1,\dots,s_k;{\bf p}) $, and $ {\bf g}_k^\top (s_1,\dots,s_k;{\bf p}) $ from \eqref{eq:pqfg}.
Similarly, define \begin{align*} {\bf \widetilde{f}}_k ( s_1, \dots, s_k; {\bf p} ) & = \mathcal{\widetilde{A}} (s_k; {\bf p})^{-1} {\bf \widetilde{N}} ({\bf p}) ( {\bf I}_m \otimes \mathcal{\widetilde{A}} (s_{k-1}; {\bf p})^{-1} {\bf \widetilde{N}} ({\bf p}) ) \cdots ( {\bf I}_m^{\otimes^{k-2} } \otimes \mathcal{\widetilde{A}} (s_2; {\bf p})^{-1} {\bf \widetilde{N}} ({\bf p}) ) \\ & \quad\quad~ \times( {\bf I}_m^{\otimes^{k-1} } \otimes \mathcal{\widetilde{A}} (s_1; {\bf p})^{-1} {\bf \widetilde{B}} ({\bf p}) {\bf b} ) ,~\mbox{and}\\ {\bf \widetilde{g}}_k^\top ( s_1, \dots, s_k; {\bf p} ) & = {\bf c}^\top {\bf \widetilde{C}} ({\bf p}) \mathcal{\widetilde{A}} (s_k; {\bf p})^{-1} {\bf \widetilde{N}} ({\bf p}) ( {\bf I}_m \otimes \mathcal{\widetilde{A}} (s_{k-1}; {\bf p})^{-1} {\bf \widetilde{N}} ({\bf p}) ) \cdots ( {\bf I}_m^{\otimes^{k-2} } \otimes \mathcal{\widetilde{A}} (s_2; {\bf p})^{-1} {\bf \widetilde{N}} ({\bf p}) ) \\ & \quad\quad~\times ( {\bf I}_m^{\otimes^{k-1} } \otimes \mathcal{\widetilde{A}} (s_1; {\bf p})^{-1} ) . \end{align*} Let $ k \in \{ 1, \dots, q \} $ and $ i \in \{ 1, \dots, k \} $. Under the assumptions of the theorem, we know \eqref{intPf} and \eqref{intQg} are satisfied for any choice of frequencies due to Corollary \ref{remek:order}; in particular, \begin{align*} ( {\bf I}_n - \mathcal{P} ( \sigma_\kappa; {\bf \widehat{p}} ) ) {\bf f}_\kappa ( \sigma_1, \dots, \sigma_\kappa; {\bf \widehat{p}}) & = 0 , ~~~\mbox{and}~~~ & {\bf g}_{\kappa-\iota+1}^\top ( \sigma_\iota, \dots, \sigma_\kappa; {\bf \widehat{p}}) [ {\bf I}_m^{\otimes^{\kappa-\iota}} \otimes ( {\bf I}_n - \mathcal{Q} ( \sigma_\kappa; {\bf \widehat{p}} ) ) ] & = 0 , \end{align*} for any $ \kappa \in \{ 1, \dots, q \} $ and $ \iota < \kappa $. 
Recalling the definitions of the full and reduced transfer functions \eqref{eq:Hkfom} and \eqref{eq:Hrfom}, together with \eqref{intHk} and \eqref{LintHk}, and our definitions in this proof, we can rewrite these two terms as \begin{align*} {\bf f}_\kappa ( \sigma_1, \dots, \sigma_\kappa; {\bf \widehat{p}}) - {\bf V} {\bf \widetilde{f}}_\kappa ( \sigma_1, \dots, \sigma_\kappa; {\bf \widehat{p}}) & = 0 , & {\bf g}_{\kappa-\iota+1}^\top ( \sigma_\iota, \dots, \sigma_\kappa; {\bf \widehat{p}}) - {\bf \widetilde{g}}_{\kappa-\iota+1}^\top ( \sigma_\iota, \dots, \sigma_\kappa; {\bf \widehat{p}}) ( {\color{black}{\bf I}_m^{\otimes^{\kappa-\iota}} \otimes }{\bf W}^\top ) & = 0, \end{align*} or equivalently \begin{equation} \label{VWint} {\bf f}_\kappa ( \sigma_1, \dots, \sigma_\kappa; {\bf \widehat{p}}) = {\bf V} {\bf \widetilde{f}}_\kappa ( \sigma_1, \dots, \sigma_\kappa; {\bf \widehat{p}}) ~~\mbox{and}~~ {\bf g}_{\kappa-\iota+1}^\top ( \sigma_\iota, \dots, \sigma_\kappa; {\bf \widehat{p}}) = {\bf \widetilde{g}}_{\kappa-\iota+1}^\top ( \sigma_\iota, \dots, \sigma_\kappa; {\bf \widehat{p}}) ( {\bf I}_m^{\otimes^{\kappa-\iota}} \otimes {\bf W}^\top ) . \end{equation} Now fix $ k \in \{ 1, \dots, q \} $ and $ i \in \{ 1, \dots, k \} $, and recall that $ {\bf \widetilde{E}} ({\bf p}) = {\bf W}^\top {\bf E} ({\bf p}) {\bf V} $. 
If $ i \ne 1 $, then \begin{align*} \frac{\partial}{\partial s_i} \left( {\bf c}^\top {\bf \widetilde{H}}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right) & = {\bf \widetilde{g}}^\top_{k-i+1} (\sigma_i, \dots, \sigma_k;{\bf \widehat{p}}) ( {\bf I}_m^{\otimes^{k-i}} \otimes {\bf W}^\top ) {\bf E} ({\bf \widehat{p}}) {\bf V} {\bf \widetilde{f}}_i (\sigma_1,\dots,\sigma_i;{\bf \widehat{p}}) \\ & = {\bf g}^\top_{k-i+1} (\sigma_i, \dots, \sigma_k;{\bf \widehat{p}}) {\bf E} ({\bf \widehat{p}}) {\bf f}_i (\sigma_1,\dots,\sigma_i;{\bf \widehat{p}}) & (\textup{by } \eqref{VWint})\\ & = \frac{\partial}{\partial s_i} \left( {\bf c}^\top {\bf H}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right) . \end{align*} If $ i = 1 $, then \begin{align*} \frac{\partial}{\partial s_i} {\bf c}^\top {\bf \widetilde{H}}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) & = {\bf \widetilde{g}}^\top_k (\sigma_1, \dots, \sigma_k;{\bf \widehat{p}}) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf W}^\top ) {\bf E} ({\bf \widehat{p}}) {\bf V} {\bf \widetilde{f}}_1 (\sigma_1;{\bf \widehat{p}}) \\ & = {\bf g}^\top_k (\sigma_1, \dots, \sigma_k;{\bf \widehat{p}}) {\bf E} ({\bf \widehat{p}}) {\bf f}_1 (\sigma_1;{\bf \widehat{p}}) & (\textup{by } \eqref{VWint}) \\ & = \frac{\partial}{\partial s_i} {\bf c}^\top {\bf H}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ). \end{align*} Similarly, we can justify interpolation of the parameter gradients. Since the expression of the parameter gradient for a general subsystem transfer function $ {\bf H}_k(s_1,\ldots,s_k;{\bf p})$ is too involved to present compactly, we provide the proof for the second subsystem (the result for the first subsystem follows from Theorem \ref{thm:linear}) and sketch the proof for a general subsystem.
Let $ p_j $ refer to any entry of the parameter vector $ {\bf p} \in \mathbb{R}^\nu$. Consider \begin{equation} \label{eq:hh} \frac{\partial}{\partial p_j} \left( {\bf c}^\top {\bf H}_2 (\sigma_1,\sigma_2;{\bf \widehat{p}}) ( {\bf I}_m \otimes {\bf b} ) - {\bf c}^\top {\bf \widetilde{H}}_2 (\sigma_1,\sigma_2;{\bf \widehat{p}}) ( {\bf I}_m \otimes {\bf b} ) \right). \end{equation} Let $\mathbf{M}_{p_j}({\bf p})$ denote the partial derivative of $\mathbf{M}({\bf p})$ with respect to $p_j$. Then, by taking the partial derivatives in \eqref{eq:hh}, using interpolation of the first subsystem, rearranging terms, and using $ {\bf \widetilde{C}}_{p_j} ({\bf p}) = {\bf C}_{p_j} ({\bf p}) {\bf V} $, $ \mathcal{\widetilde{A}}_{p_j} (s;{\bf p}) = {\bf W}^\top \mathcal{A}_{p_j} (s;{\bf p}) {\bf V} $, $ {\bf \widetilde{N}}_{p_j} ({\bf p}) = {\bf W}^\top {\bf N}_{p_j} ({\bf p}) ( {\bf I}_m \otimes {\bf V} ) $, and $ {\bf \widetilde{B}}_{p_j} ({\bf p}) = {\bf W}^\top {\bf B}_{p_j} ({\bf p}) $, one can write \begin{align*} & \frac{\partial}{\partial p_j} \left( {\bf c}^\top {\bf H}_2 (\sigma_1,\sigma_2;{\bf \widehat{p}}) ( {\bf I}_m \otimes {\bf b} ) - {\bf c}^\top {\bf \widetilde{H}}_2 (\sigma_1,\sigma_2;{\bf \widehat{p}}) ( {\bf I}_m \otimes {\bf b} ) \right) \\ & \qquad = ( {\bf c}^\top {\bf C}_{p_j} ({\bf \widehat{p}}) - {\bf g}_1^\top ( \sigma_2; {\bf \widehat{p}}) \mathcal{A}_{p_j} ( \sigma_2; {\bf \widehat{p}} ) ) ( {\bf I} - \mathcal{P} ( \sigma_2; {\bf \widehat{p}} ) ) {\bf f}_2 ( \sigma_1, \sigma_2; {\bf \widehat{p}}) \\ & \qquad\qquad + {\bf g}_2^\top ( \sigma_1, \sigma_2; {\bf \widehat{p}} ) \left( {\bf I}_m \otimes ( {\bf I} - \mathcal{Q} ( \sigma_1; {\bf \widehat{p}}) ) ( {\bf B}_{p_j} ({\bf \widehat{p}}) {\bf b} - \mathcal{A}_{p_j} ( \sigma_1; {\bf \widehat{p}} ) {\bf f}_1 ( \sigma_1; {\bf \widehat{p}} ) ) \right) , \end{align*} which can be justified by multiplying out the right-hand side and re-grouping.
The second line of this expression vanishes since $( {\bf I} - \mathcal{P} ( \sigma_2; {\bf \widehat{p}} ) ) {\bf f}_2 ( \sigma_1, \sigma_2; {\bf \widehat{p}}) =0 $, and the third line vanishes since ${\bf g}_2^\top ( \sigma_1, \sigma_2; {\bf \widehat{p}} ) \left( {\bf I}_m \otimes ( {\bf I} - \mathcal{Q} ( \sigma_1; {\bf \widehat{p}}) ) \right) = 0 $, using $ k = 2 $ in the proof of Theorem \ref{thm:pbmor1sided}. Thus, we have $$ \frac{\partial}{\partial p_j} \left( {\bf c}^\top {\bf H}_2 (\sigma_1,\sigma_2;{\bf \widehat{p}}) ( {\bf I}_m \otimes {\bf b} )\right) =\frac{\partial}{\partial p_j} \left( {\bf c}^\top {\bf \widetilde{H}}_2 (\sigma_1,\sigma_2;{\bf \widehat{p}}) ( {\bf I}_m \otimes {\bf b} ) \right). $$ Since $p_j$ was an arbitrary entry of ${\bf p}$, this yields $\mbox{\boldmath${\EuScript{J}}$} _{{\bf p}} \left({\bf c}^\top {\bf H}_2 ( \sigma_1, \sigma_2; {\bf \widehat{p}} ) ( {\bf I}_m \otimes {\bf b} ) \right) = \mbox{\boldmath${\EuScript{J}}$} _{{\bf p}} \left( {\bf c}^\top {\bf \widetilde{H}}_2 ( \sigma_1, \sigma_2; {\bf \widehat{p}} ) ( {\bf I}_m \otimes {\bf b} ) \right)$ as desired.
Now, for the general case, consider \begin{align} \label{eq:l1} \frac{\partial}{\partial p_j}\left( {\bf c}^\top {\bf \widetilde{H}}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right) & = {\bf c}^\top {\bf C}_{p_j} ({\bf \widehat{p}}) {\bf V} {\bf \widetilde{f}}_k (\sigma_1, \dots, \sigma_k;{\bf \widehat{p}}) \\ & \qquad + {\bf \widetilde{g}}^\top_1 (\sigma_k;{\bf \widehat{p}}) {\bf W}^\top \mathcal{A}_{p_j} (\sigma_k;{\bf \widehat{p}}) {\bf V} {\bf \widetilde{f}}_k (\sigma_1, \dots, \sigma_k;{\bf \widehat{p}})\label{eq:l2} \\ & \qquad + {\bf \widetilde{g}}^\top_1 (\sigma_k;{\bf \widehat{p}}) {\bf W}^\top {\bf N}_{p_j} ({\bf \widehat{p}}) ( {\bf I}_m \otimes {\bf V} ) {\bf \widetilde{f}}_{k-1} (\sigma_1, \dots, \sigma_{k-1};{\bf \widehat{p}}) \label{eq:l3}\\ & \qquad + \dots + {\bf \widetilde{g}}^\top_k (\sigma_1, \dots, \sigma_k; {\bf \widehat{p}}) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf W}^\top {\bf B}_{p_j} ({\bf \widehat{p}}) {\bf b} ). \label{eq:l4} \end{align} Consider the right-hand side of \eqref{eq:l1}. Using \eqref{VWint}, one can replace ${\bf V} {\bf \widetilde{f}}_k (\sigma_1, \dots, \sigma_k;{\bf \widehat{p}})$ with ${\bf f}_k (\sigma_1, \dots, \sigma_k;{\bf \widehat{p}})$. Similarly, in \eqref{eq:l2}, once again using \eqref{VWint}, one replaces $ {\bf \widetilde{g}}^\top_1 (\sigma_k;{\bf \widehat{p}}) {\bf W}^\top$ with ${\bf g}^\top_1 (\sigma_k;{\bf \widehat{p}})$. 
Continuing in this fashion, we obtain \begin{align*} \frac{\partial}{\partial p_j}\left( {\bf c}^\top {\bf \widetilde{H}}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right) & = {\bf c}^\top {\bf C}_{p_j} ({\bf \widehat{p}}) {\bf f}_k (\sigma_1, \dots, \sigma_k;{\bf \widehat{p}}) \\ & \qquad + {\bf g}^\top_1 (\sigma_k;{\bf \widehat{p}}) \mathcal{A}_{p_j} (\sigma_k;{\bf \widehat{p}}) {\bf f}_k (\sigma_1, \dots, \sigma_k;{\bf \widehat{p}}) \\ & \qquad + {\bf g}^\top_1 (\sigma_k;{\bf \widehat{p}}) {\bf N}_{p_j} ({\bf \widehat{p}}) {\bf f}_{k-1} (\sigma_1, \dots, \sigma_{k-1};{\bf \widehat{p}}) \\ & \qquad + \dots + {\bf g}^\top_k (\sigma_1, \dots, \sigma_k; {\bf \widehat{p}}) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf B}_{p_j} ({\bf \widehat{p}}) {\bf b} ) \\ & = \frac{\partial}{\partial p_j}\left( {\bf c}^\top {\bf H}_k ( \sigma_1, \dots, \sigma_k; {\bf \widehat{p}} ) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right) \end{align*} as desired. \hfill $\square$ \end{proof} We now present the final theoretical result, showing the interpolation of the parameter Hessian. As the expressions become too involved for a general subsystem transfer function, we state and prove the conditions for the first and second subsystems only; the results can be generalized similarly. \begin{theorem} \label{thm:hess} Assume the hypotheses of Corollary \ref{remek:order} for $ q = 2 $.
Define \begin{align*} {\bf V}_1 ({\bf p}) & = \left[ \mathcal{A} ( \sigma_1; {\bf p} )^{-1} {\bf B} ({\bf p}) {\bf b}, ~~~ \mathcal{A} ( \sigma_2; {\bf p} )^{-1} {\bf B} ({\bf p}) {\bf b} \right], \\ {\bf V}_2 ({\bf p}) & = \left[ \mathcal{A} ( \sigma_1; {\bf p} )^{-1} {\bf N} ({\bf p}) ( {\bf I}_m \otimes {\bf V}_1 ({\bf p}) ), ~~~ \mathcal{A} ( \sigma_2; {\bf p} )^{-1} {\bf N} ({\bf p}) ( {\bf I}_m \otimes {\bf V}_1 ({\bf p}) ) \right] , \\ {\bf W}_1 ({\bf p}) & = \left[ \left( \mathcal{A} ( \sigma_1; {\bf p} ) \right)^{-\top} {\bf C} ({\bf p})^\top {\bf c}, ~~~ \left( \mathcal{A} ( \sigma_2; {\bf p} ) \right)^{-\top} {\bf C} ({\bf p})^\top {\bf c} \right], \\ {\bf W}_2 ({\bf p}) & = \left[ \left( \mathcal{A} ( \sigma_1; {\bf p} ) \right)^{-\top} \overline{ {\bf N}} ({\bf p})^\top ( {\bf I}_m \otimes {\bf W}_1 ({\bf p}) ), ~~~ \left( \mathcal{A} ( \sigma_2; {\bf p} ) \right)^{-\top} \overline{ {\bf N}} ({\bf p})^\top ( {\bf I}_m \otimes {\bf W}_1 ({\bf p}) ) \right] . \end{align*} Assume \begin{equation} \label{Vh1} \bigcup_{k=1}^2 {\bf V}_k ({\bf \widehat{p}}) \subseteq \textup{Ran} ( {\bf V} ) , \quad \mbox{and} \quad \bigcup_{k=1}^2 {\bf W}_k ({\bf \widehat{p}}) \subseteq \textup{Ran} ( {\bf W} ) . \end{equation} If either \begin{equation}\label{Vh2} \bigcup_{k=1}^2 \bigcup_{j=1}^\nu \frac{\partial}{\partial p_j} {\bf V}_k ({\bf \widehat{p}}) \subseteq \textup{Ran} ( {\bf V} ) \quad \mbox{or} \quad \bigcup_{k=1}^2 \bigcup_{j=1}^\nu \frac{\partial}{\partial p_j} {\bf W}_k ({\bf \widehat{p}}) \subseteq \textup{Ran} ( {\bf W} ) , \end{equation} then \[ \mbox{\boldmath${\EuScript{H}}$} _{{\bf p}} \left( {\bf c}^\top {\bf H}_2 ( \sigma_1, \sigma_2; {\bf \widehat{p}} ) ( {\bf I}_m \otimes {\bf b} ) \right) = \mbox{\boldmath${\EuScript{H}}$} _{{\bf p}} \left( {\bf c}^\top {\bf \widetilde{H}}_2 ( \sigma_1, \sigma_2; {\bf \widehat{p}} ) ( {\bf I}_m \otimes {\bf b} ) \right),
\] where $\mbox{\boldmath${\EuScript{H}}$} _{{\bf p}}(\cdot)$ denotes the Hessian with respect to ${\bf p}$. \end{theorem} \begin{proof} Assume we have the extra conditions on $ {\bf V} $. First note that we have interpolation of $ \mbox{\boldmath${\EuScript{H}}$} _{\bf p} \left( {\bf c}^\top {\bf H}_1 (\sigma_i;{\bf \widehat{p}}) {\bf b} \right) $ for $ i \in \{ 1, 2 \} $ since this is the linear case (see \cite{baur2011interpolatory} for details). Let $ p_i $ and $ p_j $ refer to any entries in the parameter vector $ {\bf p} $. Recall the definitions of the second subsystem transfer functions of the full and reduced models, respectively, \begin{align*} {\bf H}_2 (s_1,s_2;{\bf p}) & = {\bf C} ({\bf p}) \mathcal{A} (s_2;{\bf p})^{-1} {\bf N} ({\bf p}) ( {\bf I}_m \otimes \mathcal{A} (s_1;{\bf p})^{-1} {\bf B} ({\bf p}) ) , \\ {\bf \widetilde{H}}_2 (s_1,s_2;{\bf p}) & = {\bf \widetilde{C}} ({\bf p}) \mathcal{\widetilde{A}} (s_2;{\bf p})^{-1} {\bf \widetilde{N}} ({\bf p}) ( {\bf I}_m \otimes \mathcal{\widetilde{A}} (s_1;{\bf p})^{-1} {\bf \widetilde{B}} ({\bf p}) ) .
\end{align*} We take the second partial derivative of ${\bf \widetilde{H}}_2 (s_1,s_2;{\bf p})$ with respect to $ p_j $ and $ p_i $, apply the definition of the reduced order matrices, rearrange the terms and use the notation in previous proofs to obtain \begin{align} \frac{\partial^2}{\partial p_j \partial p_i}&\left( {\bf c}^\top {\bf \widetilde{H}}_2 ( \sigma_1, \sigma_2; {\bf \widehat{p}} ) ( {\bf I}_m \otimes {\bf b} ) \right) \nonumber\\ & = {\bf c}^\top {\bf C}_{p_jp_i} ({\bf \widehat{p}}) {\bf V} {\bf \widetilde{f}}_2 (\sigma_1,\sigma_2;{\bf \widehat{p}})\label{m1} \\ & \qquad + {\bf \widetilde{g}}_2^\top (\sigma_1,\sigma_2;{\bf \widehat{p}}) {\bf W}^\top {\bf B}_{p_jp_i} ({\bf \widehat{p}}) {\bf b} \\ & \qquad + {\bf \widetilde{g}}_1^\top (\sigma_2;{\bf \widehat{p}}) {\bf W}^\top {\bf N}_{p_jp_i} ({\bf \widehat{p}}) ( {\bf I}_m \otimes {\bf V} {\bf \widetilde{f}}_1 (\sigma_1;{\bf \widehat{p}}) ) \\ & \qquad + {\bf \widetilde{g}}_1^\top (\sigma_2;{\bf \widehat{p}}) {\bf W}^\top \mathcal{A}_{p_jp_i} (\sigma_2;{\bf \widehat{p}}) {\bf V} {\bf \widetilde{f}}_2 (\sigma_1,\sigma_2;{\bf \widehat{p}}) \\ & \qquad + {\bf \widetilde{g}}_2^\top (\sigma_1,\sigma_2;{\bf \widehat{p}}) ( {\bf I}_m \otimes {\bf W}^\top \mathcal{A}_{p_jp_i} (\sigma_1;{\bf \widehat{p}}) {\bf V} {\bf \widetilde{f}}_1 (\sigma_1;{\bf \widehat{p}}) ) \label{m2}\\ \label{m3} & \qquad + \{ {\bf \widetilde{g}}_1^\top (\sigma_2;{\bf \widehat{p}}) {\bf W}^\top {\bf N}_{p_j} ({\bf \widehat{p}}) + {\bf \widetilde{g}}_2^\top (\sigma_1,\sigma_2;{\bf \widehat{p}}) {\bf W}^\top \mathcal{A}_{p_j} (\sigma_1;{\bf \widehat{p}}) \} ( {\bf I}_m \otimes {\bf V} [ {\bf \widetilde{f}}_1 ]_{p_i} (\sigma_1;{\bf \widehat{p}}) ) \\ & \qquad + \{ {\bf \widetilde{g}}_1^\top (\sigma_2;{\bf \widehat{p}}) {\bf W}^\top {\bf N}_{p_i} ({\bf \widehat{p}}) + {\bf \widetilde{g}}_2^\top (\sigma_1,\sigma_2;{\bf \widehat{p}}) {\bf W}^\top \mathcal{A}_{p_i} (\sigma_1;{\bf \widehat{p}}) \} ( {\bf I}_m \otimes {\bf V} [ {\bf 
\widetilde{f}}_1 ]_{p_j} (\sigma_1;{\bf \widehat{p}}) ) \\ & \qquad + \{ {\bf c}^\top {\bf C}_{p_j} ({\bf \widehat{p}}) + {\bf \widetilde{g}}_1^\top (\sigma_2;{\bf \widehat{p}}) {\bf W}^\top \mathcal{A}_{p_j} (\sigma_2;{\bf \widehat{p}}) \} {\bf V} [ {\bf \widetilde{f}}_2 ]_{p_i} (\sigma_1,\sigma_2;{\bf \widehat{p}}) \\ & \qquad + \{ {\bf c}^\top {\bf C}_{p_i} ({\bf \widehat{p}}) + {\bf \widetilde{g}}_1^\top (\sigma_2;{\bf \widehat{p}}) {\bf W}^\top \mathcal{A}_{p_i} (\sigma_2;{\bf \widehat{p}}) \} {\bf V} [ {\bf \widetilde{f}}_2 ]_{p_j} (\sigma_1,\sigma_2;{\bf \widehat{p}}),\label{m4} \end{align} where $\mathbf{M}_{p_jp_i}({\bf p})$ denotes the second partial derivative of $\mathbf{M}({\bf p})$ with respect to $ p_j $ and $ p_i $, and $[ {\bf f}_k]_{p_i}$ denotes the partial derivative of ${\bf f}_k$ with respect to $p_i$. We then follow manipulations similar to those used in the proof of Theorem \ref{thm:pbmor2sided} for \eqref{eq:l1}-\eqref{eq:l4}: Equations \eqref{m1}-\eqref{m2} contain the same types of terms, so the same reasoning applies. Even though \eqref{m3}-\eqref{m4} contain the new terms $ [ {\bf f}_1 ]_{p_j} (\sigma_1;{\bf \widehat{p}}) $ and $ [ {\bf f}_2 ]_{p_j} (\sigma_1,\sigma_2;{\bf \widehat{p}}) $, the same manipulations still apply: due to the construction of ${\bf V}$ in \eqref{Vh1} and \eqref{Vh2}, $ [ {\bf f}_1 ]_{p_j} (\sigma_1;{\bf \widehat{p}}) $ and $ [ {\bf f}_2 ]_{p_j} (\sigma_1,\sigma_2;{\bf \widehat{p}}) $ are now also contained in $ \textup{Ran} ({\bf V}) $ for any $ p_j $. Therefore, we obtain $$ \frac{\partial^2}{\partial p_j \partial p_i}\left( {\bf c}^\top {\bf \widetilde{H}}_2 ( \sigma_1, \sigma_2; {\bf \widehat{p}} ) ( {\bf I}_m \otimes {\bf b} ) \right) = \frac{\partial^2}{\partial p_j \partial p_i}\left( {\bf c}^\top {\bf H}_2 ( \sigma_1, \sigma_2; {\bf \widehat{p}} ) ( {\bf I}_m \otimes {\bf b} ) \right).
$$ Since $p_j$ and $p_i$ were arbitrary, we obtain the Hessian matching as desired. The proof would be analogous if we assumed the extra conditions on $ {\bf W} $ instead. Only the rearrangement of the terms would change so that the expression depends on $ [ {\bf g}_1 ]_{p_j} (\sigma_2;{\bf \widehat{p}}) $ and $ [ {\bf g}_2 ]_{p_j} (\sigma_1,\sigma_2;{\bf \widehat{p}}) $ instead. \hfill $\square$ \end{proof} \begin{remark} As stated above, one can write down the conditions for matching the parameter Hessian of the higher-index subsystems. Let $ q $ be the number of subsystems we wish to interpolate. To obtain the parameter Hessian matching for the general $k^{\rm th}$ order subsystem, i.e., to satisfy \[ \mbox{\boldmath${\EuScript{H}}$} _{\bf p} \left( {\bf c}^\top {\bf H}_k (\sigma_1, \dots, \sigma_k; {\bf \widehat{p}}) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right) = \mbox{\boldmath${\EuScript{H}}$} _{\bf p} \left({\bf c}^\top {\bf \widetilde{H}}_k (\sigma_1, \dots, \sigma_k; {\bf \widehat{p}}) ( {\bf I}_m^{\otimes^{k-1}} \otimes {\bf b} ) \right), \qquad k = 1, \dots, q \] we would need \begin{align*} {\bf V}_1 ({\bf p}) & = \left[ \mathcal{A} ( \sigma_1; {\bf p} )^{-1} {\bf B} ({\bf p}) {\bf b}, ~~~ \cdots, ~~~ \mathcal{A} ( \sigma_q; {\bf p} )^{-1} {\bf B} ({\bf p}) {\bf b} \right], \\ {\bf V}_k ({\bf p}) & = \left[ \mathcal{A} ( \sigma_1; {\bf p} )^{-1} {\bf N} ({\bf p}) ( {\bf I}_m \otimes {\bf V}_{k-1} ({\bf p}) ), ~~~\cdots,~~~ \mathcal{A} ( \sigma_q; {\bf p} )^{-1} {\bf N} ({\bf p}) ( {\bf I}_m \otimes {\bf V}_{k-1} ({\bf p}) ) \right] , & k = 2, \dots, q , \\ {\bf W}_1 ({\bf p}) & = \left[ \left( \mathcal{A} ( \sigma_1; {\bf p} ) \right)^{-\top} {\bf C} ({\bf p})^\top {\bf c}, ~~~ \cdots, ~~~ \left( \mathcal{A} ( \sigma_q; {\bf p} ) \right)^{-\top} {\bf C} ({\bf p})^\top {\bf c} \right], \\ {\bf W}_k ({\bf p}) & = \left[ \left( \mathcal{A} ( \sigma_1; {\bf p} ) \right)^{-\top} \overline{ {\bf N}} ({\bf p})^\top ( {\bf I}_m \otimes {\bf W}_{k-1} ({\bf p}) ),
~~~\cdots, ~~~ \left( \mathcal{A} ( \sigma_q; {\bf p} ) \right)^{-\top} \overline{ {\bf N}} ({\bf p})^\top ( {\bf I}_m \otimes {\bf W}_{k-1} ({\bf p}) ) \right] , & k = 2, \dots, q . \end{align*} (evaluated at $ {\bf \widehat{p}} $) to be contained in the ranges of the bases $ {\bf V} $ and $ {\bf W} $, respectively, \emph{together with} either the partial derivatives of the $ {\bf V}_k $'s or the partial derivatives of the $ {\bf W}_k $'s with respect to the parameter entries (evaluated at $ {\bf \widehat{p}} $). \end{remark} \section{Numerical Examples} \label{examples} In this section, we illustrate the theoretical discussion from Section \ref{problem} using two examples: a nonlinear RC circuit in Section \ref{ex:rc} and an advection-diffusion equation in Section \ref{ex:heat}. Throughout this section, ${\bf \widehat{p}}^{(i)}$ (or $\hat{p}^{(i)}$ when the parameter is a scalar) denotes the parameter sampling points used in constructing the model reduction bases ${\bf V}$ and ${\bf W}$, and ${\bf p}^{(i)}$ (or ${p}^{(i)}$) denotes the (unsampled) parameter points at which we evaluate both the reduced and full models to investigate the accuracy of the reduced model. \subsection{A nonlinear RC circuit} \label{ex:rc} We begin with a modified version of a standard benchmark problem for bilinear systems, namely a nonlinear RC circuit \cite{bai2006projection,phillips2003projection}. The original benchmark problem leads to a non-parametric bilinear system; we have revised the problem to add parametric dependence. To clearly motivate this parametric dependence, we include details of the model derivation. Consider the following SISO parametric nonlinear system \begin{align} \label{RC} \left\{ \begin{array}{l} \dot{{\bf v}} (t;p) = {\bf f} ( {\bf v} (t;p) ; p ) + {\bf b} u (t) \\ y (t;p) = {\bf c}^\top {\bf v} (t;p), \end{array} \right.
\end{align} where $ {\bf v} (t;p) \in \mathbb{R}^N $, $ {\bf b} = {\bf c} = \left[ 1 \ 0 \ \cdots \ 0 \right]^\top \in \mathbb{R}^N$, \begin{align} {\bf f} ( {\bf v} ; p ) = \left[ \begin{array}{c} - g ( v_1 ; p ) - g ( v_1-v_2 ; p ) \\ g ( v_1-v_2 ; p ) - g ( v_2-v_3 ; p ) \\ \vdots \\ g ( v_{k-1}-v_k ; p ) - g ( v_k-v_{k+1} ; p ) \\ \vdots \\ g ( v_{N-1}-v_N ; p ) \end{array} \right] , \end{align} and \begin{align} g ( v ; p ) = e^{ p v } + v - 1. \end{align} System \eqref{RC} models a nonlinear RC circuit with $ N $ resistors where the state variable $ {\bf v} (t;p) $ is the voltage at each node, $ u (t) $ is the input signal to the current source, the output $ y (t;p) $ is the voltage between node 1 and ground, and $ g (v;p) $ gives the current-voltage dependency at each resistor. We have introduced a parameter dependency $ p \in \mathbb{R} $ in the exponential term of this current-voltage dependency, which models the influence of the operating temperature on the current. Following \cite{bai2006projection,phillips2003projection}, we apply the second-order approximation $ g ( v ; p ) \approx (p+1)v + \frac{1}{2} p^2 v^2 $ and Carleman bilinearization, $ {\bf f} ( {\bf v} ; p ) \approx {\bf A}_1 (p) {\bf v} + {\bf A}_2 (p) ( {\bf v} \otimes {\bf v} ) $, leading to an approximation of the nonlinear dynamics \eqref{RC} by the following parametric bilinear system: \begin{align} \label{RCb} \left\{ \begin{array}{l} {\bf E} \dot{{\bf x}} (t;p) = {\bf A} (p) {\bf x} (t;p) + {\bf N} {\bf x} (t;p) u (t) + {\bf b} u (t) \\ y_b (t;p) = {\bf c}^\top {\bf x} (t;p), \end{array} \right.
\end{align} where \begin{align} {\bf x} (t;p) &= \left[ \begin{array}{c} {\bf v} (t;p) \\ {\bf v} (t;p) \otimes {\bf v} (t;p) \end{array} \right],\quad\quad\quad\quad{\bf E} = {\bf I}_n, & {\bf c} & = {\bf b} = \left[ \begin{array}{c} 1 \\ \bf 0 \end{array} \right], \\ {\bf A} (p) & = \left[ \begin{array}{cc} {\bf A}_1 (p) & {\bf A}_2 (p) \\ \bf 0 & {\bf A}_1 (p) \otimes {\bf I} + {\bf I} \otimes {\bf A}_1 (p) \end{array} \right], & {\bf N} & = \left[ \begin{array}{cc} \bf 0 & ~~ \bf 0 \\ {\bf b} \otimes {\bf I} + {\bf I} \otimes {\bf b} & ~~ \bf 0 \end{array} \right], \end{align} and we use $y_b(t;p)$ to denote the output of the full-order parametric bilinear system. Note that the dimension of the bilinear system is $ n = N + N^2 $ and the matrices $ {\bf A}_1 (p) \in \mathbb{R}^{N\times N} $ and $ {\bf A}_2 (p) \in \mathbb{R}^{N\times N^2} $ are given by \begin{align*} {\bf A}_1 (p) & = (1 + p ) \left[ \begin{array}{ccccc} -2 & 1 & \\ 1 & -2 & 1 \\ & \ddots & \ddots & \ddots \\ & & 1 & -2 & 1 \\ & & & 1 & - 1 \end{array} \right], \end{align*} and, for $ k = 2, \dots, N - 1 $, \begin{align*} [{\bf A}_2(p)]_{(1,1)} & = -p^2 , \\ [{\bf A}_2(p)]_{(1,2)} & = [{\bf A}_2(p)]_{(1,N+1)} = [{\bf A}_2(p)]_{(k,(k-2)N + k - 1)} = [{\bf A}_2(p)]_{(k,(k-1)N + k + 1)} = \\ & = [{\bf A}_2(p)]_{(k,kN + k)} = [{\bf A}_2(p)]_{(N,(N-2)N + N - 1)} = [{\bf A}_2(p)]_{(N,(N-1)N+ N)} = \frac{p^2}{2} , \\ [{\bf A}_2(p)]_{(1,N+2)} & = [{\bf A}_2(p)]_{(k,(k-2)N + k)} = [{\bf A}_2(p)]_{(k,(k-1)N + k - 1)} = [{\bf A}_2(p)]_{(k,kN + k + 1)} = \\ & = [{\bf A}_2(p)]_{(N,(N-2)N + N )} = [{\bf A}_2(p)]_{(N,(N-1)N + N-1)} = - \frac{p^2}{2}, \end{align*} where $[{\bf A}_2(p)]_{(i,j)}$ denotes the $(i,j)^{\rm th}$ entry of ${\bf A}_2(p)$. Note that both ${\bf A}_1(p)$ and ${\bf A}_2(p)$ have the desired affine structure \eqref{eq:affine} with the scalar parametric coefficient functions $1+p$, $-p^2$, $\frac{p^2}{2}$, and $-\frac{p^2}{2}$.
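To make the affine structure concrete, the following short Python sketch (illustrative only; the helper names are ours and $N$ is kept small) assembles ${\bf A}_1(p)$ and compares the resistor law $g(v;p)$ with its second-order approximation:

```python
import numpy as np

def A1(p, N):
    # Tridiagonal matrix A_1(p) = (1 + p) * T, where T is the second-difference
    # matrix whose last diagonal entry is -1 instead of -2.
    T = (np.diag(-2.0 * np.ones(N))
         + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1))
    T[-1, -1] = -1.0
    return (1.0 + p) * T

def g(v, p):
    # Resistor current-voltage law g(v;p) = exp(p v) + v - 1.
    return np.exp(p * v) + v - 1.0

def g_quad(v, p):
    # Second-order Taylor approximation (p + 1) v + p^2 v^2 / 2.
    return (p + 1.0) * v + 0.5 * p**2 * v**2
```

The affine dependence on the scalar coefficient $1+p$ is then immediate: ${\bf A}_1(p)$ is a fixed matrix scaled by $1+p$.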
As in the original benchmark problem, we choose $ N = 200 $, and thus obtain a parametric bilinear system of dimension $ n= 40,200$. \textcolor{black}{We are interested in the parameter range $p \in [0,70]$}, and choose two parameter sampling points, $\hat{p}^{(1)} = 1$ and $\hat{p}^{(2)} = 50$. For each sampling point, we focus on the leading $q=2$ subsystems. We choose $ \{ \sigma_1, \sigma_2 \}$ by running IRKA on the linearized model (obtained by setting ${\bf N}=0$); i.e., $\{ \sigma_1, \sigma_2 \} $ correspond to optimal sampling points for the linear model. With these frequencies, we construct the bases $ {\bf V}_1 $ and $ {\bf W}_1$ (using Theorem {\color{black}\ref{thm:hess}}) that guarantee interpolation of $ {\bf H}_1 (s;p) $, $ {\bf H}_2 (s_1,s_2;p) $, and their sensitivities at $ p=\hat{p}^{(1)} = 1$ and at $ \{ \sigma_1, \sigma_2 \}$. Similarly, we construct $ {\bf V}_2 $ and $ {\bf W}_2 $ for $ \hat{p}^{(2)} = 50 $. We then construct the global bases $ {\bf V} = [ {\bf V}_1 \ {\bf V}_2 ] $ and $ {\bf W} = [ {\bf W}_1 \ {\bf W}_2 ]$ and obtain a reduced parametric bilinear model of dimension $ r = 12 $ using the projection described in \eqref{eq:romss}; thus we are approximating a parametric bilinear system of dimension $n=40,200$ by a reduced parametric bilinear model of dimension $r=12$. To test the accuracy of the parametric reduced model, we simulate and compare the outputs of the original nonlinear model \eqref{RC}, the full bilinear model \eqref{RCb}, and the reduced bilinear model for two different inputs, $u(t) = e^{-t} $ and $u(t) = \frac{1}{2} ( \cos ( 5 \pi t) + 1 ) $, and three different parameter values, $p^{(1)} = 18$, $p^{(2)} = 40$, and $p^{(3)}=62$. Note that these parameter values are not the sampled values; indeed, $p^{(3)}=62$ is even outside the sampling range $[1,50]$.
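The basis construction just described can be sketched generically as follows. This is a toy stand-in, not the actual RC-circuit data: the matrices are random (with a stabilizing shift), the shifts $\{0.5, 2.0\}$ and sampled parameters are made up, and, for brevity, each second-subsystem direction propagates only its own first-level vector rather than all of ${\bf V}_1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
E = np.eye(n)
A0 = rng.standard_normal((n, n)) / n - 3.0 * np.eye(n)  # stand-in system matrix
N_mat = rng.standard_normal((n, n)) / n                  # stand-in bilinear term
b = np.zeros(n); b[0] = 1.0
c = b.copy()

def calA(s, p):
    # s E - A(p), with a simple illustrative affine dependence A(p) = (1 + p) A0.
    return s * E - (1.0 + p) * A0

cols_V, cols_W = [], []
for p_hat in (1.0, 50.0):          # sampled parameter points
    for s in (0.5, 2.0):           # stand-in frequency shifts
        v1 = np.linalg.solve(calA(s, p_hat), b)           # first-subsystem direction
        w1 = np.linalg.solve(calA(s, p_hat).T, c)
        v2 = np.linalg.solve(calA(s, p_hat), N_mat @ v1)  # second-subsystem direction
        w2 = np.linalg.solve(calA(s, p_hat).T, N_mat.T @ w1)
        cols_V += [v1, v2]; cols_W += [w1, w2]

V, _ = np.linalg.qr(np.column_stack(cols_V))   # global basis V (n x r)
W, _ = np.linalg.qr(np.column_stack(cols_W))   # global basis W (n x r)

# Petrov-Galerkin projection of the bilinear system matrices
Et, At0, Nt = W.T @ E @ V, W.T @ A0 @ V, W.T @ N_mat @ V
bt, ct = W.T @ b, V.T @ c
```

In the actual example, the dense solves above are replaced by sparse solves with the $n = 40{,}200$ matrices, and the columns of all sampled directions are concatenated before orthogonalization, yielding $r = 12$.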
Moreover, note that the inputs $u(t) = e^{-t} $ and $u(t) = \frac{1}{2} ( \cos ( 5 \pi t) + 1 ) $ were not used in the model reduction step, i.e., the reduced model is not informed by these choices of excitation. As Figure \ref{figRC} shows, the parametric reduced bilinear system provides a very accurate approximation to the full bilinear model; their responses are almost indistinguishable. Relative $L_2$ errors in the outputs for our three parameter values $ p^{(1)} $, $ p^{(2)} $, $ p^{(3)} $ are listed in Table \ref{tab:rc}, showing a relative error on the order of $10^{-3}$. We also emphasize that the only deviations visible in Figure \ref{figRC} are deviations from the original nonlinear system, due to Carleman bilinearization, and not due to the model reduction step. We also note that even though the responses might look similar for different parameter values, the scales of the outputs are different. \begin{table} \centering \begin{tabular}{|c||c|c|c|} \hline\noalign{\smallskip} \mbox{Input} & $ p=p^{(1)} $ & $ p=p^{(2)} $ & $ p=p^{(3)} $ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $u(t) = e^{-t}$ & $ 2.54 \times 10^{-3} $ & $ 2.91 \times 10^{-3} $ & $ 1.53 \times 10^{-3} $ \\ \hline \noalign{\smallskip} $u(t) = \frac{1}{2} ( \cos ( 5 \pi t) + 1 ) $ & $ 2.54 \times 10^{-3} $ & $ 4.33 \times 10^{-3} $ & $ 4.57 \times 10^{-3} $ \\ \noalign{\smallskip}\hline \end{tabular} \caption{Relative $L_2$ output errors.} \label{tab:rc} \end{table} To make the numerical investigation more detailed, we performed a parameter sweep using $10^3$ \textcolor{black}{uniformly} sampled points in the interval \textcolor{black}{$[0,70]$}, and measured the performance of the reduced model in terms of the relative $L_2$ error in the output $y_b(t;p)$ for each of the inputs. We found that for the first input $u(t) = e^{-t}$, in the \emph{worst}-case scenario, the reduced model led to a relative $L_2$ output error of {\color{black}$6.31\times10^{-3}$}.
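For reference, the relative $L_2$ output errors reported above can be computed from sampled trajectories as in the following sketch (generic helper code, not the simulation driver used to produce the tables):

```python
import numpy as np

def rel_l2_error(t, y_full, y_red):
    # Relative L2(0,T) output error, computed with the trapezoid rule
    # on the (possibly nonuniform) time grid t.
    def integrate(g):
        return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))
    return np.sqrt(integrate((y_full - y_red) ** 2) / integrate(y_full ** 2))

# Synthetic illustration: a perturbation of size ~1e-3 yields an error ~1e-3.
t = np.linspace(0.0, 1.0, 1001)
err = rel_l2_error(t, np.sin(t), np.sin(t) + 1e-3 * np.cos(t))
```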
For the second input $u(t) = \frac{1}{2} ( \cos ( 5 \pi t) + 1 )$, the \emph{worst} performance yielded a relative $L_2$ output error of {\color{black}$5.96\times10^{-3}$}. These numbers further illustrate the ability of the reduced model to accurately approximate the full order parametric bilinear system. \begin{figure} \includegraphics*[width=0.45\textwidth]{RC_p1_1_p2_50_par_18_u_1} \ \includegraphics*[width=0.45\textwidth]{RC_p1_1_p2_50_par_18_u_2} \includegraphics*[width=0.45\textwidth]{RC_p1_1_p2_50_par_40_u_1} \ \includegraphics*[width=0.45\textwidth]{RC_p1_1_p2_50_par_40_u_2} \includegraphics*[width=0.45\textwidth]{RC_p1_1_p2_50_par_62_u_1} \ \includegraphics*[width=0.45\textwidth]{RC_p1_1_p2_50_par_62_u_2} \caption{Solution to \eqref{RC} (denoted by ``original''), \eqref{RCb} (denoted by ``full''), and reduced order model (denoted by ``reduced'') for different inputs and parameter values. Left column: $ u (t) = e^{-t} $. Right column: $ u (t) = \frac{1}{2} ( \cos ( 5 \pi t) + 1 ) $.} \label{figRC} \end{figure} \subsection{Advection-diffusion equation} \label{ex:heat} For our second example, we consider a model of the transport and diffusion of a passive scalar field $T$ (representing a chemical concentration, temperature, etc.) on the domain $ \Omega = [-1,1] \times [-1,1] $. The transport of $T$ is controlled using a background velocity field described by two inputs $u_1$ and $u_2$ and two velocity fields ${\bf v}_1$ and ${\bf v}_2$. Thus the background velocity field is $ {\bf v} (x,y) = u_1 (t) {\bf v}_1 (x,y) + u_2 (t) {\bf v}_2 (x,y) $. The value of the passive scalar on the boundary of $\Omega$ ($\partial\Omega$) is controlled by an input $u_3$. We model the diffusion using the viscosity parameter $ p_1 $, and include a source term centered at $ (p_2, p_3) \in \Omega $ with an area of effect described by $p_4$, given by \[ f (x,y;p_2,p_3,p_4) = \exp \left( - \frac{ ( x - p_2 )^2 + ( y - p_3 )^2 }{ p_4 } \right).
\] The strength of the source term is controlled by an input $u_4$. Our passive scalar field $ T $ then satisfies \begin{align} \label{eq:advectionDiffusion} \dot{T}(x,y,t) & = p_1 \Delta T(x,y,t) - {\bf v} \cdot \nabla T(x,y,t) + u_4(t) f(x,y;p_2,p_3,p_4), & (x,y,t) & \in \Omega \times (0,\infty) , \\ T (x,y, 0) & = T_0 (x,y), & (x,y) & \in \Omega, \\ T (x,y,t) & = u_3(t) , & (x,y,t) & \in \partial \Omega \times (0,\infty). \end{align} Thus our model depends on the parameter vector \[ {\bf p} = \left[ \begin{array}{c} p_1 \\ p_2 \\ p_3 \\ p_4 \end{array} \right] . \] We consider the following parameter range: \[ -3 \le \ln p_1 \le 1, \qquad (p_2, p_3) \in \Omega, \qquad 1 \le p_4 \le 10. \] We approximate solutions to \eqref{eq:advectionDiffusion} using a finite element discretization $ T_N (x,y,t) = \sum_{j=1}^N x_j (t) \varphi_j (x,y) $, where the $\{\varphi_j\}_{j=1}^N$ arise from quadratic (P2) triangular elements. For convenience, we split the summation above into two disjoint parts, one with indices corresponding to boundary nodes (${\cal B}$) and the remainder corresponding to interior nodes (${\cal I}$). Thus $\{ 1, 2, \ldots, N \} = {\cal B} \cup {\cal I}$. Upon substituting this into the weak form of \eqref{eq:advectionDiffusion} and suppressing function arguments, we arrive at \begin{displaymath} \left( \dot{ \left[ \sum_{j=1}^N x_j \varphi_j \right] } , \varphi_i \right) = -\left( p_1 \nabla \left[ \sum_{j=1}^N x_j \varphi_j \right] , \nabla\varphi_i \right) - \left( {\bf v} \cdot \nabla \left[ \sum_{j=1}^N x_j \varphi_j \right] , \varphi_i \right) + \left( u_4 f , \varphi_i \right), \qquad \forall i\in {\cal I}, \end{displaymath} where the boundary integrals vanish since $\varphi_i$ are zero on the boundary when $i\in {\cal I}$.
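Since the interpolation results of the previous sections also match parameter gradients and Hessians, it is convenient that the source term $f$ above has simple closed-form derivatives in $(p_2, p_3, p_4)$. The following self-contained check (our own helper names, not part of the actual simulation code) verifies the analytic gradient against central finite differences:

```python
import numpy as np

def f_src(x, y, p2, p3, p4):
    # Gaussian source f(x,y;p2,p3,p4) = exp(-((x-p2)^2 + (y-p3)^2)/p4).
    return np.exp(-((x - p2) ** 2 + (y - p3) ** 2) / p4)

def f_src_grad(x, y, p2, p3, p4):
    # Analytic gradient of f with respect to (p2, p3, p4).
    r2 = (x - p2) ** 2 + (y - p3) ** 2
    val = np.exp(-r2 / p4)
    return np.array([2.0 * (x - p2) / p4 * val,   # d f / d p2
                     2.0 * (y - p3) / p4 * val,   # d f / d p3
                     r2 / p4 ** 2 * val])         # d f / d p4
```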
Interchanging integration in the $L_2$-inner products with the summation leads to \begin{align*} \sum_{j\in{\cal I}} \left( \varphi_j , \varphi_i \right) \dot{ x }_j & = - p_1 \sum_{j\in{\cal I}} \left( \nabla \varphi_j , \nabla \varphi_i \right) x_j - p_1 u_3 \sum_{j\in{\cal B}} \left( \nabla \varphi_j , \nabla \varphi_i \right) \\ & \quad - u_1 \sum_{j\in{\cal I}} \left( {\bf v}_1 \cdot \nabla \varphi_j , \varphi_i \right) x_j - u_2 \sum_{j\in{\cal I}} \left( {\bf v}_2 \cdot \nabla \varphi_j , \varphi_i \right) x_j + u_4 \left( f , \varphi_i \right), \end{align*} for each $i\in{\cal I}$. Letting $ {\bf x} (t) = [x_1(t) \ x_2(t) \ \dots \ x_{|{\cal I}|}(t)]^\top $ and \begin{align*} [ {\bf E} ]_{ij} & = \left( \varphi_i , \varphi_j \right) , & [ {\bf N}_1 ]_{ij} & = - \left( {\bf v}_1 \cdot \nabla \varphi_j , \varphi_i \right) , & [ {\bf b}_3 ]_i & = - p_1 \sum_{k\in{\cal B}} \left( \nabla \varphi_i , \nabla \varphi_k \right), \\ [ {\bf A} ]_{ij} & = -p_1 \left( \nabla \varphi_i , \nabla \varphi_j \right) , & [ {\bf N}_2 ]_{ij} & = - \left( {\bf v}_2 \cdot \nabla \varphi_j , \varphi_i \right) , & [ {\bf b}_4 ]_i & = \left( f(\cdot;p_2,p_3,p_4) , \varphi_i(\cdot) \right), \end{align*} for $i,j\in{\cal I}$. We can write our discrete problem as \begin{align*} {\bf E} \dot{{\bf x}} (t;p) = {\bf A} ({\bf p}) {\bf x} (t;p) + {\bf N}_1 {\bf x} (t;p) u_1 (t) + {\bf N}_2 {\bf x} (t;p) u_2 (t) + {\bf b}_3 ({\bf p}) u_3 (t) + {\bf b}_4 ({\bf p}) u_4 (t) , \end{align*} or \begin{align*} {\bf E} \dot{{\bf x}} (t;p) = {\bf A} ({\bf p}) {\bf x} (t;p) + \sum_{i=1}^4 {\bf N}_i {\bf x} (t;p) u_i (t) + {\bf B} ({\bf p}) {\bf u} (t), \end{align*} where \begin{align*} {\bf N} & = [ {\bf N}_1 \ {\bf N}_2 \ {\bf N}_3 \ {\bf N}_4 ] = [ {\bf N}_1 \ {\bf N}_2 \ \textbf{0} \ \textbf{0} ] , \\ {\bf B} ({\bf p}) & = [ \textbf{0} \ \textbf{0} \ {\bf b}_3 ({\bf p}) \ {\bf b}_4 ({\bf p}) ] , ~~\mbox{and}\\ {\bf u} (t) & = [ u_1 (t) \ u_2 (t) \ u_3 (t) \ u_4 (t) ]^\top . 
\end{align*} We also include an output $ y (t;p) = {\bf c}^\top {\bf x} (t;p) $ that represents the average of our scalar field over $ [0.5,1] \times [0.5,1] $. In summary, we have a bilinear parametric multi-input/single-output system \begin{align} \label{Heatb} \left\{ \begin{array}{l} {\bf E} \dot{{\bf x}} (t;p) = {\bf A} ({\bf p}) {\bf x} (t;p) + \sum_{i=1}^4 {\bf N}_i {\bf x} (t;p) u_i (t) + {\bf B} ({\bf p}) {\bf u} (t) \\ y (t;p) = {\bf c}^\top {\bf x} (t;p), \end{array} \right. \end{align} which can be reduced using the strategy presented in the previous example. For our simulations, we chose a 21-by-21 FEM mesh (which results in a FOM of dimension $ n = |{\cal I}| =361 $) and velocity with $ {\bf v}_1 (x,y) = [ -y, \ x ]^\top $ and $ {\bf v}_2 (x,y) = \frac{1}{2} ( \cos ( \pi ( x - y ) ) + 1 ) [ 1, \ 1 ]^\top $. Note that the full order matrices $ {\bf E}, {\bf N}_1, {\bf N}_2, {\bf c} $ are constant, while $ {\bf A} ({\bf p}) $ and $ {\bf b}_3 ({\bf p}) $ depend linearly on $p_1$; with a slight abuse of notation we write $ {\bf A} ({\bf p}) = -p_1 {\bf A} $ and $ {\bf b}_3 ({\bf p}) = -p_1 {\bf b}_3 $, where now $ [{\bf A}]_{ij} = \left( \nabla \varphi_i , \nabla \varphi_j \right) $ and $ [{\bf b}_3]_i = \sum_{k\in{\cal B}} \left( \nabla \varphi_i , \nabla \varphi_k \right) $. Hence the ROM is given by \begin{align} \label{Heatbr} \left\{ \begin{array}{l} {\bf \widetilde{E}} \dot{{\bf \widetilde{x}}} (t;p) = {\bf \widetilde{A}} ({\bf p}) {\bf \widetilde{x}} (t;p) + {\bf \widetilde{N}}_1 {\bf \widetilde{x}} (t;p) u_1 (t) + {\bf \widetilde{N}}_2 {\bf \widetilde{x}} (t;p) u_2 (t) + {\bf \widetilde{b}}_3 ({\bf p}) u_3 (t) + {\bf \widetilde{b}}_4 ({\bf p}) u_4 (t) \\ \widetilde{y} (t;p) = {\bf \widetilde{c}}^\top {\bf \widetilde{x}} (t;p) \end{array} \right. \end{align} where \begin{align} {\bf \widetilde{E}} & = {\bf W}^\top {\bf E} {\bf V} & {\bf \widetilde{A}} ({\bf p}) & = - p_1 {\bf W}^\top {\bf A} {\bf V} \\ {\bf \widetilde{N}}_j & = {\bf W}^\top {\bf N}_j {\bf V} & {\bf \widetilde{b}}_3 ({\bf p}) & = - p_1 {\bf W}^\top {\bf b}_3 \\ {\bf \widetilde{c}}^\top & = {\bf c}^\top {\bf V} & {\bf \widetilde{b}}_4 ({\bf p}) & = {\bf W}^\top {\bf b}_4 ({\bf p}).
\end{align} Even though the dimension in \eqref{Heatbr} is lower, the reduction of the vector $ {\bf b}_4 ({\bf p}) $ cannot be done offline (unlike the rest of the system matrices), hence we aim to reduce the cost of computing $ {\bf b}_4 ({\bf p}) $ by means of DEIM approximation as we discussed in Section \ref{sec:projintro}. In this case, since $ {\bf b}_4 ({\bf p}) $ is a vector, there is no need for a matrix version and the original DEIM formulation suffices. Applying DEIM, we want to find a basis $ {\bf U} \in \mathbb{R}^{n\times M} $ where $ M \ll n $ and a row selector $ \mathbb{S} $ so that \[ {\bf b}_4 ({\bf p}) \approx {\bf U} ( \mathbb{S}^\top {\bf U} )^{-1} \mathbb{S}^\top {\bf b}_4 ({\bf p}) \quad \mbox{and} \quad {\bf \widetilde{b}}_4 ({\bf p}) \approx {\bf W}^\top {\bf U} ( \mathbb{S}^\top {\bf U} )^{-1} \mathbb{S}^\top {\bf b}_4 ({\bf p}) \] are good approximations. This way $ {\bf W}^\top{\bf U} ( \mathbb{S}^\top {\bf U} )^{-1} $ can be precomputed offline, while the online computation of $ \mathbb{S}^\top {\bf b}_4 ({\bf p}) $ will now only require us to compute the entries in $ {\bf b}_4 ({\bf p}) $ indicated by $ \mathbb{S} $. Clearly the accuracy of this approximation depends on $ {\bf U} $ and $ \mathbb{S} $. We first find $ {\bf U} $ by Proper Orthogonal Decomposition (POD) \cite{lumley,berkooz}. That is, we generate a matrix of snapshots of the vector $ {\bf b}_4 ({\bf p}) $ and select the leading $ M $ left singular vectors to be the columns of $ {\bf U} $. By taking \emph{enough} snapshots and singular vectors, we expect the range of our basis $ {\bf U} $ to represent the values of $ {\bf b}_4 ({\bf p}) $ over the parameter domain. We chose a tolerance of $ 10^{-5} $ to truncate the singular values in the POD basis, resulting in a DEIM approximation of order $M=33$.
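The POD step just described (collect snapshots, take an SVD, truncate by a singular-value tolerance) can be sketched as follows; the snapshot matrix here is synthetic and the names are illustrative, not from the paper:

```python
import numpy as np

def pod_basis(snapshots, tol=1e-5):
    """POD basis: left singular vectors whose singular values exceed
    tol relative to the largest singular value."""
    Uf, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    M = max(1, int(np.sum(s > tol * s[0])))
    return Uf[:, :M]

# Synthetic snapshot matrix with rapidly decaying singular values.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 40)) @ np.diag(10.0 ** -np.arange(40)) \
    @ rng.standard_normal((40, 40))
U = pod_basis(A)
print(U.shape)  # far fewer columns than the 40 snapshots
```

By construction the retained basis reproduces the snapshot set to roughly the truncation tolerance, which is the property exploited when approximating ${\bf b}_4({\bf p})$ over the parameter domain.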
To choose the interpolation indices (row selector) in $\mathbb{S}$, we use the Q-DEIM algorithm \cite{drmac2016new} which determines $ \mathbb{S} $ using a pivoted QR factorization of ${\bf U}^\top$. In Figure \ref{ADRerrorDEIM} we show the relative error of the Q-DEIM approximation of ${\bf b}_4({\bf p})$ over $ 10^4 $ random parameter values in the entire parameter domain. Note that the maximum relative error is on the order of $ 10^{-4} $, showing the accuracy of the DEIM approximation. Thus, we can confidently use ${\bf \widetilde{b}}_4 ({\bf p}) \approx {\bf W}^\top {\bf U} ( \mathbb{S}^\top {\bf U} )^{-1} \mathbb{S}^\top {\bf b}_4 ({\bf p})$ in our reduced model. \begin{center} \begin{figure} [h!] \centering \includegraphics[width=0.45\textwidth]{adr_error_deim_p2p3p4.pdf} \caption{Relative error in the DEIM approximation of $ {\bf b}_4 ({\bf p}) $.} \label{ADRerrorDEIM} \end{figure} \end{center} To construct our ROM, we sample at four parameter values $ {\bf \widehat{p}}^{(i)}$ for $i=1,2,3,4$ (see the leading four rows in Table \ref{tab1}). We calculate the corresponding projection matrices $ {\bf V}_1, {\bf V}_2, {\bf V}_3, {\bf V}_4, {\bf W}_1, {\bf W}_2, {\bf W}_3, $ and ${\bf W}_4 $ using Theorem \ref{thm:hess} that guarantee interpolation and sensitivity matching at the frequency interpolation points (generated via IRKA once again) and tangential directions corresponding to each parameter value sampled. To maintain symmetry in ${\bf \widetilde{E}}$ and ${\bf \widetilde{A}}$, we concatenate all of the projection matrices and consider a one-sided projection, i.e., $ {\bf V} = [ {\bf V}_1 \ {\bf V}_2 \ {\bf V}_3 \ {\bf V}_4 \ {\bf W}_1 \ {\bf W}_2 \ {\bf W}_3 \ {\bf W}_4] $ and $ {\bf W} = {\bf V} $. We truncate the basis (using SVD) and obtain a ROM with dimension $ r = 20 $.
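The Q-DEIM index selection reduces to a pivoted QR factorization of ${\bf U}^\top$, as in \cite{drmac2016new}. A minimal sketch on a synthetic orthonormal basis (helper names are ours); a vector lying exactly in the range of ${\bf U}$ should be recovered to machine precision from the selected entries alone:

```python
import numpy as np
from scipy.linalg import qr

def qdeim_indices(U):
    """Q-DEIM: interpolation rows from a pivoted QR factorization of U^T."""
    _, _, piv = qr(U.T, pivoting=True)
    return np.sort(piv[:U.shape[1]])

def deim_approx(U, idx, b):
    """DEIM approximation U (S^T U)^{-1} S^T b, using only the entries b[idx]."""
    return U @ np.linalg.solve(U[idx, :], b[idx])

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((100, 8)))  # synthetic basis, n=100, M=8
idx = qdeim_indices(U)
b = U @ rng.standard_normal(8)                      # vector in range(U)
err = np.linalg.norm(b - deim_approx(U, idx, b)) / np.linalg.norm(b)
print(err)  # ~ machine precision
```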
\begin{table} \centering \begin{tabular}{|l||ccc|} \hline\noalign{\smallskip} & viscosity & source center & source reach \\ \noalign{\smallskip} \hline \noalign{\smallskip} $ {\bf \widehat{p}}^{(1)} $ & 0.1 & $ (0.25,0.8) $ & 1 \\ $ {\bf \widehat{p}}^{(2)} $ & 1 & $ (0,0) $ & 9 \\ $ {\bf \widehat{p}}^{(3)} $ & $ e^{-3} $ & $ (1,1) $ & 4 \\ $ {\bf \widehat{p}}^{(4)} $ & $ e^{-3} $ & $ (-0.5,-1) $ & 1 \\ \hline $ {\bf p}^{(1)} $ & 0.0529 & $ (0.975,0.9275) $ & 1.6636 \\ $ {\bf p}^{(2)} $ & 0.2392 & $ (0.6914,0.3149) $ & 3.6730 \\ $ {\bf p}^{(3)} $ & 0.1261 & $ (-0.7224,-0.7623) $ & 5.1100 \\ $ {\bf p}^{(4)} $ & 0.0754 & $ (-0.3214,0.4988) $ & 2.816 \\ \noalign{\smallskip}\hline \end{tabular} \caption{Advection-diffusion model. Parameter values.} \label{tab1} \end{table} To illustrate the accuracy of the reduced model, we test it for two different input sets (see Table \ref{tab2}) and for four different parameter samples $ {\bf p}^{(i)}$ for $i=1,2,3,4$ (see Table \ref{tab1}, rows 5--8) that were not part of the sampling set. We show the full-order and reduced-order outputs in Figures \ref{ADRpar_i2} and \ref{ADRpar_i1}. Both figures show that for each input selection (neither of which entered into our transfer function-based model reduction process), the parametric reduced bilinear model provides a high-quality approximation, only showing slight variations at the parameter values that were not sampled. \begin{table} \centering \begin{tabular}{|c||cccc|} \hline\noalign{\smallskip} & $ u_1 $ & $ u_2 $ & $ u_3 $ & $ u_4 $ \\ \noalign{\smallskip} \hline \noalign{\smallskip} Input 1 & $ \sin t $ & $ \cos t $ & -1 & 0.5 \\ Input 2 & 0.5 & 0.25 & 1 & -1 \\ \noalign{\smallskip}\hline \end{tabular} \caption{Advection-diffusion model. Input values.} \label{tab2} \end{table} \begin{figure} [h!]
\includegraphics[width=0.45\textwidth]{adr9_r20_i2} \ \includegraphics[width=0.45\textwidth]{adr6_r20_i2} \\ \includegraphics[width=0.45\textwidth]{adr7_r20_i2} \ \includegraphics[width=0.45\textwidth]{adr8_r20_i2} \ \caption{Solution of the full (FOM) and reduced order model (ROM) corresponding to non-sampled parameter values (see values in Table \ref{tab1}) for Input 1 with entries $ u_1 (t) = \sin t $, $ u_2 (t) = \cos t $, $ u_3 (t) = -1 $, $ u_4 (t) = 0.5 $.} \label{ADRpar_i2} \end{figure} \begin{figure} [h!] \includegraphics[width=0.45\textwidth]{adr9_r20_i1} \ \includegraphics[width=0.45\textwidth]{adr6_r20_i1} \\ \includegraphics[width=0.45\textwidth]{adr7_r20_i1} \ \includegraphics[width=0.45\textwidth]{adr8_r20_i1} \ \caption{Solution of the full (FOM) and reduced order model (ROM) corresponding to non-sampled parameter values (see values in Table \ref{tab1}) for Input 2 with entries $ u_1 (t) = 0.5 $, $ u_2 (t) = 0.25 $, $ u_3 (t) = 1 $, $ u_4 (t) = -1 $.} \label{ADRpar_i1} \end{figure} As in the previous example, to ensure a fair comparison, we performed an exhaustive search via $10^4$ uniform random samples in our full parameter domain (except that we fixed the fourth parameter entry at $ p_4 = 5 $ so that we can present the results with a $3$-dimensional plot) for both input selections from Table~\ref{tab2}. Out of these $10^4$ parameter selections, Figure~\ref{ADRworst_i2} displays the relative error at every sampling point (left) and the outputs for the \emph{worst} performance of the reduced model for Input $1$ (right). Note that even for the worst parameter sample, the parametric reduced model still provides an accurate approximation with a relative $L_2$ error of \textcolor{black}{$4.79 \times 10^{-2}$}. We repeat the procedure for Input 2 in Figure~\ref{ADRworst_i1} and obtain similar results. \begin{center} \begin{figure} [h!]
\includegraphics[width=0.45\textwidth]{adr_i2_relE_p4fix5_p1p2p3.pdf} \ \includegraphics[width=0.45\textwidth]{adr10_r20_i2} \caption{For Input 1: on the left, the relative $ L_2 $ output error; on the right, solution of the full order model and the reduced order model corresponding to the highest relative $ L_2 $ error.} \label{ADRworst_i2} \end{figure} \end{center} \begin{center} \begin{figure} [h!] \includegraphics[width=0.45\textwidth]{adr_i1_relE_p4fix5_p1p2p3.pdf} \ \includegraphics[width=0.45\textwidth]{adr10_r20_i1} \caption{For Input 2: on the left, the relative $ L_2 $ output error; on the right, solution of the full order model and the reduced order model corresponding to the highest relative $ L_2 $ error.} \label{ADRworst_i1} \end{figure} \end{center} \section{Conclusions and Future Work} \label{sec:conc} In this paper, we presented conditions that ensure Hermite interpolation for parametric bilinear systems. These conditions also ensure that the parametric directional derivatives of the reduced-order transfer functions match those of the full-order transfer function at the given interpolation points and directions. We demonstrated the quality of our model reduction using two examples, one a well-known benchmark and the other an interesting advection-diffusion equation. The performance was very good, and we emphasize that no effort was made to select the sample points in parameter space. In fact, this approach is agnostic to the parameter choices and can be easily embedded in well-known parameter selection schemes. The next natural steps are to test this algorithm with different schemes and more challenging problems. We also intend to extend this approach to parametric quadratic nonlinear systems. \bibliographystyle{plain}
\section{} \centerline{\Large\bf Abstract} \vspace{.25cm} \noindent The inversion potential for the ammonia molecule is approximated by $V(x)=k(x^2-r^2)^2/8r^2$. The Hamiltonian thereby contains only even powers of $p$ and $x$ and a representation in terms of ladder operators $a$ and $a^\dagger$ is suggested. The frequency variable $\omega$ occurring in the operators is introduced as a free parameter, with its value to be determined such as to optimize agreement with experimental results. Using known structural parameters of the ammonia molecule, the eigenvalues for a $10\times 10$ truncation of the Hamiltonian matrix are computed. The splitting between the two lowest eigenvalues corresponds to the ammonia maser frequency, 24.87 GHz, this value being reproduced by the appropriate choice of $\omega$. \\ \\ The inversion of the ammonia molecule NH$_3$, shown in Fig. \ref{am}, can be described by a simplified Schr\"odinger equation \begin{equation}\label{H} \frac{p^2}{2\mu}\psi(x)+V(x)\psi(x)=\varepsilon\psi(x). \end{equation} Only the linear motion of the nitrogen atom is considered, with neglect of the other vibrational modes of the molecule. Here $\mu$ is the reduced mass for the motion of the nitrogen atom relative to the plane of the hydrogen atoms, given by \begin{equation}\label{mu} \mu=\frac{3m_N m_H}{m_N+3m_H}. \end{equation} A number of analytic representations of the inversion potential were discussed by Swalen and Ibers.\footnote{J. D. Swalen and J. A. Ibers, ``Potential Function for the Inversion of Ammonia,'' {\it J. Chem. Phys.} {\bf 36}(7) (1962), pp. 1914-1918.} We consider a potential of the form (Fig. \ref{Vx}): \begin{equation}\label{V} V(x)=\frac{k(x^2-r^2)^2}{8r^2} = \frac{k r^2}{8}-\frac{k x^2}{4}+\frac{k x^4}{8r^2}, \end{equation} with minima at $x=\pm r$ and a barrier of height $k r^2/8$ at $x=0$. This general form was utilized by Damburg and Propin.\footnote{R. J. Damburg and R. Kh. Propin, ``Model Potential for Inversion in Ammonia,'' {\it Chemical Physics Letters} {\bf 14}(1) (1972), pp.
82-84; the form of the potential was first introduced by Certain, Hirschfelder, Kolos, and Wolniewicz, in a study of exchange interactions in the H$_2^+$ molecule.} \begin{figure} \begin{center} \includegraphics[height=5cm]{ammonia.png} \caption{Ammonia molecule in its two metastable pyramidal states.} \label{am} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[height=5cm]{Vx.png} \caption{Ammonia inversion potential.} \label{Vx} \end{center} \end{figure} Since the Hamiltonian contains only even powers of $p$ and $x$, a representation based on the ladder operators $a$ and $a^\dagger$ suggests itself, a generalization of the canonical operator formulation for the harmonic oscillator. Accordingly, we define \begin{equation} a = \sqrt{\frac{\mu\omega}{2}} x + i\sqrt{\frac{1}{2 \mu \omega}} p, \qquad a^\dagger = \sqrt{\frac{\mu\omega}{2}} x - i \sqrt{\frac{1}{2 \mu \omega}} p. \end{equation} The parameter $\omega$ is introduced, with its value to be determined such as to optimize agreement with experimental results. The actions of the ladder operators on a basis ket are given by \begin{equation} a|n\rangle = \sqrt{n}|n-1\rangle, \qquad a^\dagger |n\rangle = \sqrt{n+1}|n+1\rangle. \end{equation} The Hamiltonian in Eq. (\ref{H}), can be expanded to give \begin{equation} H= \frac{p^2}{2\mu}+\frac{k r^2}{8}-\frac{k}{4} x^2+\frac{k}{8r^2}x^4. \end{equation} In terms of the ladder operators, we have \begin{equation} x = \sqrt{\frac{1}{2 \mu \omega}} (a + a^\dagger), \qquad p = -i \sqrt{\frac{\mu \omega}{2}} (a - a^\dagger), \end{equation} so that \begin{equation} x |n\rangle = \sqrt{\frac{1}{2 \mu \omega}} \Big(\sqrt{n}|n-1\rangle + \sqrt{n+1}|n+1\rangle\Big) \end{equation} and \begin{equation} p |n\rangle= -i \sqrt{\frac{\mu \omega}{2}} \Big(\sqrt{n}|n-1\rangle - \sqrt{n+1}|n+1\rangle\Big). 
\end{equation} By successive application of these operators, it follows that \begin{equation} x^2 |n\rangle = \frac{1}{2 \mu \omega} \Big(\sqrt{n(n-1)} |n-2\rangle +(2n+1) |n\rangle +\sqrt{(n+1)(n+2)} |n+2\rangle \Big) \end{equation} and \begin{equation} p^2 |n\rangle = -\frac{\mu \omega}{2} \Big(\sqrt{n(n-1)} |n-2\rangle -(2n+1) |n\rangle +\sqrt{(n+1)(n+2)} |n+2\rangle \Big). \end{equation} Note, incidentally, that \begin{equation} \Big(\frac{p^2}{2\mu}+\frac{1}{2}\mu\omega^2 x^2 \Big) |n\rangle =\left(n+\frac{1}{2}\right)\omega |n\rangle, \end{equation} which agrees with the result for a harmonic oscillator. Finally, we need \begin{eqnarray} x^4 |n\rangle = \frac{1}{4 \mu^2 \omega^2} \Big(\sqrt{n(n-1)(n-2)(n-3)} |n-4\rangle+ 2\sqrt{n(n-1)}(2n-1) |n-2\rangle+ \hspace{1cm} \nonumber \\ (6n^2+6n+3) |n\rangle+ 2\sqrt{(n+1)(n+2)} (2n+3) |n+2\rangle+ \hspace{1cm} \nonumber \\ \sqrt{(n+1)(n+2)(n+3)(n+4)} |n+4\rangle \Big). \hspace{1cm} \end{eqnarray} The nonzero matrix elements of the Hamiltonian are given by \begin{equation} H_{n,n}= \frac {1} {32} \Big (\frac {(6 n^2 + 6 n + 3) k} {r^2 \mu^2 \omega^2} + 4 k r^2 - \frac {(8 n + 4) k} {\mu\omega} + (16 n + 8)\omega \Big), \end{equation} \begin{equation} H_{n+2,n}= \frac {\sqrt {(n + 1) (n + 2)}\Big (-4 r^2 \mu^2 \omega^3 + (2 n + 3 - 2 r^2\mu\omega)k \Big)} {16 r^2 \mu^2\omega^2}, \end{equation} \begin{equation} H_{n+4,n}= \frac{\sqrt{(n+1)(n+2)(n+3)(n+4)}k}{32r^2 \mu^2\omega^2}. \end{equation} The matrix is symmetrical, so that $H_{m,n}=H_{n,m}$. For explicit computation, we need numerical values for the parameters $\mu$, $r$ and $\omega$. We use Hartree atomic units with $\hbar = m_e =e =1$. The mass of a hydrogen atom is given by $m_H =1837.153$, and that of a nitrogen atom by $m_N=25530.80$. Thus the reduced mass of the nitrogen atom, as it participates in inversion (Eq. \ref{mu}), is equal to $\mu= 4532.92$.
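The matrix elements are straightforward to assemble and diagonalize numerically. The following sketch is ours (not the paper's {\it Mathematica} computation); it builds the $10\times 10$ truncation from the parameter values quoted in the text, with $k$ fixed by the barrier height $V_0 = k r^2/8$. Note that dimensional consistency requires an $\omega^2$ (not $\omega^3$) factor in the denominator of $H_{n+2,n}$, which the sketch uses.

```python
import numpy as np

# Parameter values quoted in the text (Hartree atomic units).
mu, r, omega, V0 = 4532.92, 0.7211, 0.00206226, 0.009243
k = 8 * V0 / r**2          # barrier height V0 = k r^2 / 8
Nb = 10                    # truncation size

H = np.zeros((Nb, Nb))
for n in range(Nb):
    H[n, n] = ((6*n**2 + 6*n + 3) * k / (r**2 * mu**2 * omega**2)
               + 4*k*r**2 - (8*n + 4) * k / (mu * omega)
               + (16*n + 8) * omega) / 32
    if n + 2 < Nb:
        H[n+2, n] = (np.sqrt((n+1)*(n+2))
                     * (-4*r**2*mu**2*omega**3 + (2*n + 3 - 2*r**2*mu*omega)*k)
                     / (16 * r**2 * mu**2 * omega**2))
        H[n, n+2] = H[n+2, n]
    if n + 4 < Nb:
        H[n+4, n] = (np.sqrt((n+1)*(n+2)*(n+3)*(n+4)) * k
                     / (32 * r**2 * mu**2 * omega**2))
        H[n, n+4] = H[n+4, n]

evals = np.linalg.eigvalsh(H)
print(evals[:4])  # the lowest levels come in nearly degenerate pairs
```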
The experimentally-determined equilibrium displacement of the nitrogen atom from the plane of the three hydrogen atoms gives $r$= 0.3816 \AA= 0.7211 bohr.\footnote{National Institute of Standards and Technology, tabulation at https://cccbdb.nist.gov/expdata.asp.} The accepted value for the inversion barrier for ammonia is $V_0$=24.2 kJ/mol = 0.009243 hartrees.\footnote{A. Rauk and L. C. Allen,``Electronic Structure and Inversion Barrier of Ammonia,'' {\it J. Chem. Phys.} {\bf 52}(8) (1970) pp. 4133-4144.} We assign the value $\omega = 0.00206226$. The 10$\times$10 truncated matrix of $H$ is shown below: \begin{figure}[h] \begin{center} \includegraphics[height=3cm]{mx.png} \caption{Matrix $H_{mn}$.} \label{mx} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[height=5cm]{evals.png} \caption{Computed eigenvalues plotted with inversion potential. Values in atomic units.} \label{evals} \end{center} \end{figure} \noindent The eigenvalues are calculated using the {\bf Eigenvalues} function in {\it Mathematica}\textsuperscript{TM}. The lowest eight eigenvalues are given by \begin{eqnarray} \varepsilon_0=0.00272086,\ \varepsilon_1=0.00272464,\ \varepsilon_2= 0.00736816,\ \varepsilon_3=0.00776894, \nonumber \\ \varepsilon_4=0.0106695, \varepsilon_5= 0.0126591,\ \varepsilon_6=0.0167717,\ \varepsilon_7= 0.0194514. \end{eqnarray} These occur in closely-spaced pairs, representing nearly-degenerate symmetric and antisymmetric states. The eigenvalues are plotted in Fig. \ref{evals}, with the symmetric and antisymmetric states shown in black and red, respectively. The computed splitting of the ground vibrational state is given by $\varepsilon_1-\varepsilon_0 = 3.7941 \times 10^{-6}$ hartrees = 24.87 GHz, the frequency of the ammonia maser. The value of $\omega$ has been adjusted to produce this agreement. \vspace{1cm} \leftline{\Large\bf References} \theendnotes \end{document}
\section{Why Mass Transfer to Compact Binaries Is Important} The discovery of gravitational radiation from the mergers of black holes (BHs) and neutron stars (NSs) has made it important to understand how two stellar remnants can come to be in a close-enough orbit that they will merge within a Hubble time \citep{2016PhRvL.116f1102A, 2017PhRvL.119p1101A, 2017PhRvL.119n1101A, 2016PhRvX...6d1015A}. Here we consider the effects of mass transfer from a third star in a wider orbit. Mass gained by components of the inner binary can change white dwarfs (WDs) into NSs, or NSs into BHs. Whether or not the natures of its components are altered, modifications of the total mass and angular momentum of the inner binary change the time to gravitational merger. For wide ranges of physically reasonable parameters, the times to merger are decreased, although times to merger can also increase. This has implications for the rates of formation and the rates of mergers of binaries producing gravitational radiation, as well as for the rates of Type Ia supernovae (SNe~Ia) and the accretion-induced collapse (AIC) of WDs to NSs and NSs to BHs. Mass transfer from a companion in a wider orbit also produces directly detectable signatures. These systems can be bright, with X-ray luminosities on the order of the Eddington luminosity. Modulations of the X-ray luminosity on times governed by the orbital period of the compact binary, and possibly even binary self-lensing, can lead to definitive identifications. The components of the inner binary have already evolved, yet they are in a close orbit. They therefore have likely experienced at least two epochs of prior interaction involving the transfer of mass and/or episodes during which the binary was engulfed by a common envelope. The wide-orbit companion star has not yet become a stellar remnant.
As it evolves, it will begin to transfer mass to the inner binary during an epoch that may be as short as $10^5$~years, or may be longer than $\sim 10^8$~years. In \S 2 we develop the model and sketch key elements of the basic science. In \S 3 we present a set of examples. We find that the masses of the compact objects are increased and the times to merger are generally decreased under a broad range of physically-motivated input assumptions. In \S 4 we focus on the transformations that are possible (e.g., through AIC or a SN~Ia) when mass is added to a WD or NS. A broader range of possibilities for the underlying physical processes is discussed in \S 5. In \S 6 we focus on the implications for gravitational mergers, collapsing WDs and NSs, and for exploding WDs. We find that hierarchical triples may contribute to the rates of NS-NS, NS-BH, and BH-BH mergers, as well as to the rate of SNe~Ia. \section{The Model} \subsection{Overview} Our model starts with a hierarchical triple. The triple contains a binary composed of two stellar remnants in a close orbit, and a third, unevolved star in a wider orbit. Although we are interested in gravitational mergers, the two stellar remnants need not have an orbital separation small enough to allow them to merge within a Hubble time. This is because interaction with mass from the third star can decrease the time to merger. If one or both of the compact objects is a NS, the eventual merger could nevertheless be a BH-BH merger, since a NS may gain enough mass to collapse. Similarly, WDs may be transformed into NSs or, with more significant mass increase, even to BHs. We therefore consider inner binaries with the full range of compact-object combinations. The inner binary has evolved into two compact objects.
We concentrate on the epoch during which the third star is able to lose mass which comes under the gravitational influence of the inner binary.\footnote{There may have been an earlier epoch during which three-body dynamics played a role.} Note that, if star~3 is massive, it could start transferring mass during its giant phase very soon after the formation of the inner binary. If, on the other hand, it is a solar-mass star, the wait time could be billions of years. Mass transfer takes place over a time $t_{mt}$. The value of $t_{mt}$ also depends on the mass of star~3. It can range from $10^5$~years for massive donors to tens of millions of years for donors of lower mass. During the epoch of mass transfer, the characteristics of the inner binary can change. Specifically, the masses and orbital separation of the components can be altered, thereby changing the time to merger. As matter accretes onto the components of the inner binary, X-rays will be emitted. When the accretion rate is large, the system can be highly luminous. This means that these systems can be detected as X-ray sources even in external galaxies. Furthermore, if the count rate is high enough, subtle effects, such as modulation of the X-ray flux at harmonics of the inner orbit, may be detectable. \subsection{Orbital dimensions} We consider a binary composed of two compact objects with masses $M_1$ and $M_2$, orbiting each other with semimajor axis $a_{in}$. If the orbit is circular, the time to merger is \begin{equation} \tau_{merge} = \Bigg(\frac{1.5\times 10^8 {\rm yr}}{M_1 M_2 (M_1+M_2)}\Bigg) \, \Bigg(a_{in}^4 - a_{min}^4\Bigg), \end{equation} where $a_{min}$ is the separation at the time of merger, and masses and distances are expressed in solar units. In the cases we consider, $a_{min}$ is small enough relative to $a_{in}$ that it can be neglected in the calculations described below. A third body, a non-degenerate star with mass $M_3$, is in orbit with the binary.
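The steep dependence of Equation 1 on the orbital separation is easy to illustrate with a short script (a sketch; masses and distances in the solar units of Equation 1, with $a_{min}$ neglected as in the text):

```python
def t_merge_yr(m1, m2, a_in, a_min=0.0):
    """Gravitational-wave merger time (yr) from Equation 1,
    masses in solar masses, separations in solar units."""
    return 1.5e8 / (m1 * m2 * (m1 + m2)) * (a_in**4 - a_min**4)

# A 1.4 + 1.4 Msun binary: halving the separation shortens the
# merger time by a factor of 2^4 = 16.
t1 = t_merge_yr(1.4, 1.4, 4.0)
t2 = t_merge_yr(1.4, 1.4, 2.0)
print(t1 / t2)  # 16.0
```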
Dynamical stability requires that the closest approach between $M_3$ and either component of the binary be much larger than $a_{in}$. We define $q=M_2/M_1,$ with $M_2 < M_1$, and $q_{out}= (M_1 + M_2)/M_3$. \citet{1995ApJ...455..640E} have derived an expression for the minimum dynamically stable radius, $a_{out}^{dyn}$, of the outer orbit. \begin{equation} a_{out}^{dyn} = a_{in} \times \Bigg[ \frac{3.7}{q_{out}^\frac{1}{3}} + \frac{2.2}{1 + q_{out}^\frac{1}{3}} + \Bigg(\frac{1.4}{q_{out}^\frac{1}{3}}\Bigg)\, \Bigg(\frac{q_{out}^\frac{1}{3}-1}{q_{out}^\frac{1}{3}+1}\Bigg) \Bigg] \end{equation} In principle, $a_{out}^{dyn}$ could correspond to the periapse of a wide elliptical outer orbit. Here, for the sake of simplicity and also because many mass transfer systems have been tidally circularized, we consider circular orbits. The finite size of the star in the outer orbit places additional restrictions on the outer orbit. As we discuss in the appendix, the Roche-lobe picture of channeled mass transfer can be applied when mass is transferred to a compact inner binary from a donor in a much wider orbit. If we assume that the triple system is in dynamical equilibrium sometime before mass transfer from the outer star begins, then the outer star must have fit inside its Roche lobe. Thus, \begin{equation} a_{out} > \frac{R_3}{f(1/q_{out})}, \end{equation} where $f(x)=0.49\, x^{0.67}/(0.6\, x^{0.67}+\ln(1+x^{0.33}))$. The true minimum separation between star~3 and the center of mass of the inner binary is therefore \begin{equation} a_{out}^{min} = \max\Bigg[a_{out}^{dyn} , \frac{R_3}{f(1/q_{out})}\Bigg]. \end{equation} On the giant branch, the radius, $R$, and luminosity, $L$, of the star are strong functions of the instantaneous value of the core mass, $C(t)$.
\begin{equation} R = 0.85\, M(0)^{0.85} + \frac{3700\, C(t)^4}{1 + C(t)^3 + 1.75\, C(t)^4} \end{equation} \begin{equation} L = M(0)^3 + \frac{10^{5.3} C(t)^6}{1 + 10^{0.4} C(t)^4 + 10^{0.5} C(t)^5} \end{equation} The expressions for $R$ and $L$ each consist of a first term meant to correspond to the value of the radius and luminosity, respectively, of a main sequence star. Depending on the specific value of the initial mass, expressions which differ from those above may be more appropriate. For giants, however, these terms are dwarfed by the second terms, which depend only on the core mass. Thus, the radius and luminosity of giants depend only weakly on the initial mass. \subsection{The Flow of Mass} Mass flows from star 3. A fraction, $\gamma,$ of $\dot M_3$ falls toward the inner binary and the rest, $(1-\gamma)\, \dot M_3$, exits the system. The paths taken by mass falling toward the binary may be complex. The upshot is simply that a fraction of the incoming mass is retained by one star and another fraction is retained by the second star. Let $\beta_1$ and $\beta_2$ be the fractions of the mass retained by stars 1 and 2, respectively. Then $\dot M_1 = \beta_1\, \gamma\, \dot M_3$, $\dot M_2 = \beta_2\, \gamma\, \dot M_3$, $\dot M_{in} = (\beta_1 + \beta_2)\, \gamma\, \dot M_3$. The remainder of the mass exits the system. \subsection{Retention of Mass} Many factors, including the geometry and dynamics of the mass flow, and the action of magnetic fields determine how much infalling matter can be retained by a compact object. For each type of compact object we select a formula with which to compute $\dot M_{min}$, the minimum accretion rate for which all mass is retained ($\beta=1$), and $\dot M_{max}$, the maximum accretion rate for which $\beta=1$. For rates lower than $\dot M_{min}$, we set $\beta=0$.
For rates larger than $\dot M_{max}$ we use $\beta = \dot M_{Edd}/\dot M_{in}$. For NSs and BHs we chose: $\dot M_{min}= 0.1 \times \dot M_{Eddington}$, and $\dot M_{max}= 10 \times \dot M_{Eddington}$. Although mass can likely be retained for even smaller values of $\dot M_{min}$, our prescription leads to a conservative estimate of mass gain and also focuses on intervals when mass transfer is most likely to produce high luminosities, and therefore to be detectable. The upper limit reflects the fact that super-Eddington accretion has been observed in X-ray binaries: e.g., \citet{2017Sci...355..817I}; \citet{2014Natur.514..202B}; \citet{2016ApJ...831L..14F}. If we assume that the accretion luminosity is $L_{acc} = 0.1 \times \dot M_{acc} c^2,$ then the infall rate onto a NS or BH for Eddington-limited accretion is $\approx M \times 2.4 \times 10^{-8} M_\odot$~yr$^{-1}$, where we use $L_{Edd}\approx 1.3\times 10^{38}$~erg~s$^{-1}$ and where $M$ is the mass of the accretor in solar units. For WDs, there is a narrow range of infall rates for which mass can undergo nuclear burning as it accretes, and can therefore be retained. [See, e.g., \citet{Iben.1982}; \citet{Nomoto.1982}; \citet{Shen.2007}.] The upper and lower bounds of this range depend on the value of the WD mass, but at high masses, $\dot M^{\rm burn}_{min}$ may be a few times $10^{-7} M_\odot$~yr$^{-1}$, and $\dot M^{\rm burn}_{max}$ is $\sim 10^{-6} M_\odot$~yr$^{-1}$. At very low rates of accretion, classical novae occur over intervals that can be as long as ${\cal O}(10^5)$~yrs, ejecting much of the matter accreted between explosions. At rates just under $\dot M^{\rm burn}_{min}$, however, much of the accreted mass can be retained, because nuclear burning occurs during recurrent novae. These repeat on intervals that can range from months to decades, and are less energetic than novae, allowing processed material to be retained. We therefore take $\dot M_{min}$ to be the minimum rate of accretion consistent with recurrent novae.
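The retention prescription above amounts to a simple piecewise function. A sketch with illustrative variable names:

```python
def retention_fraction(mdot_in, mdot_min, mdot_max, mdot_edd):
    """Fraction of infalling mass retained by the accretor:
    0 below mdot_min, 1 between mdot_min and mdot_max,
    and Eddington-limited (mdot_edd / mdot_in) above mdot_max."""
    if mdot_in < mdot_min:
        return 0.0
    if mdot_in <= mdot_max:
        return 1.0
    return mdot_edd / mdot_in

# NS example with the bounds quoted in the text:
# mdot_min = 0.1 mdot_edd and mdot_max = 10 mdot_edd,
# with mdot_edd ~ M x 2.4e-8 Msun/yr for an accretor of mass M (solar units).
medd = 1.4 * 2.4e-8
for mdot in (0.01 * medd, medd, 100 * medd):
    print(retention_fraction(mdot, 0.1 * medd, 10 * medd, medd))
```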
To provide points of reference we note that, for many of the systems we consider, infall rates of $10^{-8}\, M_\odot$~yr$^{-1}$ to $10^{-5}\, M_\odot$~yr$^{-1}$ are associated with mass retention. Since not all of the mass lost by the donor falls toward the inner binary, the rate of accretion is generally only a fraction of the donor's rate of mass loss. Donors must therefore have the high mass-loss rates associated with either giants or massive main-sequence stars, if the rate of mass infall is to be adequate to lead to genuine mass gain by the compact objects comprising the inner binary. Other stars can, however, influence the orbital angular momentum and time-to-merger of the inner binary. \subsection{The Effects of Mass Retention} Mass retention plays three important roles. {\bf (1)} Mass retention decreases the time to merger. Equation~1 shows that the time for the components of the inner binary to merge is inversely proportional to the product $M_1 M_2 (M_1+M_2)$. Thus, increases in the masses of star~1 and/or star~2 decrease the time to merger. The effect is actually more pronounced, because the addition of mass to the inner binary can also significantly alter the orbital separation, which (for fixed orbital angular momentum) is proportional to $(M_1+M_2)/(M_1^2\, M_2^2)$. This means that increases in the masses of the components decrease the value of their orbital separation, even when the orbital angular momentum is constant. Equation~1 indicates that this plays an even more significant role in decreasing the time to merger. {\bf (2)} Mass retention can change the physical nature of the accretor. A carbon-oxygen (CO) WD, typically with mass below $\sim 1.15\, M_\odot$, will explode as a Type Ia supernova if it achieves a critical mass, $M_c$. This critical mass may be the Chandrasekhar mass, $M_{Ch}$, with a value of $\sim 1.38\, M_\odot.$ The exact value of $M_{c}$ depends, however, on detailed composition and also on the WD's spin. 
A slightly more massive WD, an oxygen-neon-magnesium (O-Ne-Mg) WD, will collapse to become a NS after achieving its critical mass. Such a WD may need to accrete less than $0.2\, M_\odot$ to collapse. Similarly, a NS may collapse to become a BH if its mass exceeds a certain critical value. That value is not yet well determined. To be specific, we will take the upper limit of the NS mass to be $2.2\, M_\odot$, which will also be the lower limit of the BH mass in our simulations \citep{2017ApJ...850L..19M, 2011ApJ...741..103F}. {\bf (3)} Even if the nature of the accretor is unaltered, mass retention changes the mass of the binary's components from the values they would have achieved through single-star or binary evolution. BHs can undergo the largest mass increases possible among accretors that retain the same physical natures. This can happen when star~3 is itself a very massive star that could donate a significant fraction of its mass to the close binary. Whatever the nature(s) of the accretors, mass added by star~3 alters the mass we measure at the time of merger. Thus, if mass transfer is common, the masses of the merging compact stars have a good chance of having been significantly changed between the time the close binary was formed and the time its components merged. \subsection{Flow of Angular Momentum} Although three-body motion can be complex, we will focus on intervals during which the inner and outer orbits are each well defined. The two compact objects occupy the inner orbit, and the much larger outer orbit is defined by star~3 in orbit with the binary's center of mass. The orbital angular momenta are $L_{in}=M_1 M_2 \sqrt{a_{in}/M_T}$, where $M_T=M_1+M_2$, and $L_{out} = M_3 M_T \sqrt{a_{out}/M_{tot}}$, with $M_{tot} = M_3+M_T.$ The total orbital angular momentum is $\vec L=\vec L_{in}+\vec L_{out}$. Gravitational radiation drains angular momentum from each orbit. Mass flowing from the system also carries angular momentum. 
Thus the net flow of angular momentum from the 3-body system is negative, with the ejection of mass potentially removing more angular momentum per unit time than does gravitational radiation. If some of the exiting angular momentum is drawn from the inner orbit, then the time to merger decreases. The flow of angular momentum within the system depends on a variety of factors. In many respects the mass flow configurations should be similar to those observed in systems in which a single compact object accretes matter from a companion. The rotating dipole component of the gravitational potential should play a significant role only when the incoming mass approaches the inner binary. It is therefore likely that, in many cases, an accretion disk will be formed and that, as is found for supermassive BH-BH binaries, the inner binary clears a region just around it. In this case, the less massive compact object will come closer to the edge of the circumbinary disk; a minidisk may then form around it. This mode of accretion could eventually drive the components of the inner binary toward nearly equal masses, even if they had started with very different masses. A circumbinary disk can play an active role in angular momentum transport and loss. In the calculations performed for this paper, we did not incorporate direct effects produced by such a disk. We note, however, that both tidal interactions between the disk and binary, and the release of even small amounts of mass from the outer disk, may tend to shrink the inner binary. In addition to situations in which there is a circumbinary disk, it is likely that there are cases in which mass falls almost radially inward toward the inner binary's center of mass. In this case, the more massive component may be more likely to be the first to capture incoming mass. 
Alternatively, the mass flow may be well modeled by considering accretion from a dense medium within which the inner binary moves as it orbits its distant companion. Furthermore, if one or both accretors are NSs or WDs, magnetic effects may play important roles in channeling mass toward them; or else they could produce a propeller effect, expelling mass from the system. In addition, the compact objects can act as sinks of angular momentum when accreted mass spins them up, or else as sources of angular momentum, if ejected mass spins them down. In our evolutionary calculations we will not consider spin. The considerations above indicate that the flow of angular momentum may be complicated and that different processes may play dominant roles in different systems. Nevertheless, there are important commonalities that we try to capture. The first, of course, is that angular momentum is carried by ejected mass. The second is that a good portion of the angular momentum carried away is drawn from the orbits. Since the orbital angular momentum of the outer orbit is generally much larger than that of the inner orbit, most of the angular momentum carried away will be at the expense of the outer orbit. Nevertheless, even a small decrease in the angular momentum of the inner orbit can decrease its time to merger. In our calculations, mass ejected from the vicinity of one of the three stellar components carries away an amount of angular momentum that is proportional to the specific angular momentum of that star. \subsection{Mechanisms for Mass Transfer and Internal Loss of Angular Momentum} \subsubsection{Winds} To model $\dot M_{wind}$, the rate of mass loss due to winds, we implement an approach well suited to donors which leave WD remnants, using a version of the Reimers wind which we have modified so that the envelope of the star is exhausted when the core of star~3 has reached its final mass (i.e., when star~3 has evolved to become a WD). 
To compute the final mass of a WD-producing star, we use an observationally established initial-mass/final-mass relationship. This formalism allows the instantaneous value of the stellar mass, $M_\ast(t)$, to be expressed in terms of the instantaneous value of the star's core mass, $C(t)$, the initial value $M_\ast(0)$ of the stellar mass, the final core mass $C_{max}$, and a parameter $C_0,$ which is set to $0.2\, M_\odot$: \begin{equation} M_\ast(t) = \Bigg[M_\ast(0)^2 + \frac{\Big(C_{max}^2-M_\ast(0)^2\Big)\, \Big(C^5-C_0^5\Big)} {\Big(C_{max}^5-C_0^5\Big)}\Bigg]^\frac{1}{2} \end{equation} Wind mass loss in this model increases dramatically toward the end of the giant's life, consistent with observations. The stellar wind, $\dot M_\ast(t)$, is just the time derivative of the mass; this includes terms involving $\dot C(t)$, which is proportional to the stellar luminosity. Because the donor is a giant, releasing mass in many directions, only a fraction $\gamma$ of the ejected mass can be captured by the inner binary. We use the following formula: \begin{equation} \gamma = \kappa\, \Bigg(\frac{R_3}{R_L}\Bigg)^{\Big(2-\frac{R_3}{R_L}\Big)}, \end{equation} where $R_3$ and $R_L$ are the instantaneous values of the donor's physical radius and Roche-lobe radius, respectively, and $\kappa$ is a constant, whose value we have taken to be $0.5$ in the calculations described here. The functional form above ensures that the rate of wind capture is roughly equal to $\kappa$ when the donor fills or nearly fills its Roche lobe. Because $R_L$ is proportional to $a_{out}$, the capture fraction falls off roughly as $1/a_{out}$ for systems close to Roche-lobe filling. On the other hand, gravitational focusing plays a smaller role at larger separations, so the capture fraction should fall off as roughly $1/a_{out}^2$. The form of the exponent ensures that this is the case at larger distances. 
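The capture fraction is straightforward to evaluate; a minimal sketch (the function name is ours) that recovers $\gamma \approx \kappa$ at Roche-lobe filling and the steeper fall-off at wide separations:

```python
def capture_fraction(r3, r_lobe, kappa=0.5):
    """Fraction gamma of the donor's wind captured by the inner binary:
    gamma = kappa * (R3/RL)**(2 - R3/RL).
    kappa = 0.5 is the value used in the paper's calculations."""
    x = r3 / r_lobe          # degree of Roche-lobe filling
    return kappa * x ** (2.0 - x)
```

At $R_3 = R_L$ the exponent is 1 and $\gamma = \kappa$; for $R_3 \ll R_L$ the exponent approaches 2, giving the $\gamma \propto (R_3/R_L)^2 \propto 1/a_{out}^2$ behavior described above.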
The rate of mass infall to the binary is $\gamma\, \dot M_3.$ As we will show in \S 3, winds alone can produce important changes in the inner binary, decreasing its time to merger and sometimes transforming the nature of its components. \subsubsection{Roche-lobe-filling} The donor star expands with age so that, if the initial size of the outer orbit, $a_{out}(0)$, is small enough, the donor may come to fill its Roche lobe. Once the Roche lobe is filled, there will either be an instability that leads to a common envelope (see below) or else there can be a relatively long epoch during which mass loss from the donor increases, with a fraction of its mass channeled through the region around the L1 point. When the donor star is a subgiant or giant, this epoch ends when the stellar envelope is exhausted by the combination of mass loss and the growth of the stellar core. \subsubsection{Common envelope} When a star in a binary fills its Roche lobe, the transfer of mass to its companion will change the dimensions of the Roche lobe. At the same time, the loss of mass from the donor may alter its radius. If the Roche lobe shrinks at a rate faster than the star can adjust, mass transfer will proceed on a dynamical time scale and a common envelope will encompass both the core of the donor and the accretor \citep{1977ApJ...215..851W, 1976IAUS...73...75P}. Generally, the envelope will be ejected over an interval of $10^4-10^5$ years. Unless super-Eddington accretion occurs during this short-lived phase (a process that is invoked to form double NSs, for example), little or no mass may be gained by the donor's companion. The common envelope can, however, have a pronounced effect on the orbital angular momentum of a binary \citep{2005MNRAS.356..753N}. By imparting angular momentum to mass in the envelope, the components of the binary spiral closer to each other while ejecting the envelope. 
Angular momentum considerations can be used to express the final orbital separation in terms of the initial system parameters. When the accretor is a compact binary, it too can impart angular momentum to the common envelope. This helps the envelope to escape, while at the same time bringing the binary's components closer to each other. The question of how much angular momentum is lost by the inner binary is difficult to answer, if only because the analogous question has proved challenging even for the simpler two-body systems. A formalism well suited to computing the effects of the common envelope on the separation of the inner binary focuses on the role of angular momentum: \begin{equation} \frac{\Delta\, L}{L} = g\, \frac{\Delta\, M}{M}. \end{equation} The value of $g$ is uncertain. \citet{2005MNRAS.356..753N} estimated that its value is $\approx 1.6$ for the common envelope phase that produced a set of double WDs. \subsection{Detectability} Mass transfer is potentially detectable. During the epoch in which the accretors are able to retain mass, their X-ray luminosities, in our model (\S 2.4), are within a factor of 10 of the Eddington luminosity. Many of these systems would have X-ray luminosities above $10^{38}$~erg~s$^{-1}$ or $10^{39}$~erg~s$^{-1}$ during this interval and would be detectable even in external galaxies. The slower accretion that would take place over longer times prior to the high-accretion phase would be dimmer, but nevertheless detectable in the Milky Way, Magellanic Clouds and, during some intervals, even in M31. The X-ray emission could also show signs of a short-period component, due to the motion of the inner binary. The inner orbital period decreases as mass transfer proceeds. For nearly edge-on orientations, emission from the active accretor(s) could be lensed, producing a distinctive periodic signature. In most cases the donor star would be a giant, and the system would be identified as a symbiotic binary. 
Any short-period signature associated with the motion of the inner binary would be the tip-off that the accreting system is a binary. The duration of the interval during which the X-ray emission is detectable depends on the flux (hence the distance to the source), and on the lifetime of the high-wind phase of the donor, which is longer for less massive donors. \subsection{Calculation of the Evolution} Our calculations start at the time when the core mass of the donor, star~3, is $0.2\, M_\odot$, and continue until the donor's envelope is exhausted. We increment the core mass of star~3 by $dc$, compute the time $dt$ it would take for the core to have grown by this amount, and determine the star's mass, radius, and rate of wind mass loss at the new time. The donor ejects mass from the system at the rate $(1-\gamma)\, \dot M_{winds}$. The ejected mass carries away $v_3$ times the specific orbital angular momentum of star~3; $v_3$ is one of the model's adjustable parameters. The ejected angular momentum comes entirely at the expense of $L_{out},$ the angular momentum of the outer orbit. The remainder of the mass, $\gamma\, \dot M_{winds}$, flows toward the less massive star, star~2. We use the considerations described in \S 2.3 to compute $\beta_2,$ and consider that the rest of the mass, $(1-\beta_2)\,\gamma \, \dot M_{winds}$, is incident on star~1. We compute $\beta_1$ and assume that any mass that cannot be retained by star~1 is ejected from the system, carrying $v_1$ times the specific angular momentum of star~1; $v_1$ is another adjustable parameter. Because star~1 is part of both the inner and outer binary, we have to subtract angular momentum from both $L_{in}$ and $L_{out}$. To do this we subtract from $L_{in}$ ($L_{out}$) the (rate of mass loss from the inner binary) multiplied by $v_1$ times the (specific angular momentum associated with the inner [outer] orbit of star~1). 
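The per-timestep mass bookkeeping described above can be sketched as follows. This is an illustrative fragment, not the production code: the stellar-model inputs ($\dot M_{winds}$, $\gamma$, $\beta_1$, $\beta_2$) are passed in as plain numbers, and all names are our own.

```python
def mass_flow_step(mdot_wind, gamma, beta2, beta1, dt):
    """Split the donor's wind over one timestep dt into: mass gained by
    star 2, mass gained by star 1, mass escaping near the donor (star 3),
    and mass ejected near star 1. Mass is conserved by construction."""
    m_lost = mdot_wind * dt                     # total mass shed by star 3
    m_in = gamma * m_lost                       # captured by the inner binary
    m_escaping_donor = (1.0 - gamma) * m_lost   # leaves the system near star 3
    dm2 = beta2 * m_in                          # star 2 is tried first
    to_star1 = (1.0 - beta2) * m_in             # remainder incident on star 1
    dm1 = beta1 * to_star1
    m_ejected_inner = (1.0 - beta1) * to_star1  # ejected near star 1
    return dm1, dm2, m_escaping_donor, m_ejected_inner
```

In the angular momentum bookkeeping of the text, \texttt{m\_escaping\_donor} removes $v_3$ times the specific angular momentum of star~3 from $L_{out}$, while \texttt{m\_ejected\_inner} removes $v_1$ times the appropriate specific angular momentum from both $L_{in}$ and $L_{out}$.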
Independently draining angular momentum from the inner and outer orbits is well suited to cases in which the orbits are orthogonal. It is also appropriate for any system without a direct link between the orbital angular momentum of the inner and outer orbits: for example, when accretion onto the compact objects is approximately spherical or else when angular momentum is dissipated within an accretion disk. We discuss cases in which there is a link between the inner and outer orbits in \S 4. Note that the above approach paints the evolution with a broad brush. It includes the key processes determining the fates of these hierarchical triples, but does not attempt to track these processes in detail. In fact, the true physical processes are complex and not yet well understood. For example, the rate of mass loss due to winds in evolving stars is not likely to be steady, and both first-principles calculations and inferences from observations are challenging, especially for massive stars and for stars in the end stages of stellar evolution. The focusing of winds is suggested by observations, but exactly how this depends on the mass loss rate, the speed of exiting mass, and irradiation from the accretors still needs to be understood. There are also significant uncertainties about the infall of mass to the inner binary, the ability of this mass to reach the components of this binary, and the ability of these components to retain matter. Finally, an important question is: how much angular momentum is carried by matter exiting the system? Any calculations that attempt to model all of these processes would have to include the above-mentioned significant and difficult-to-quantify uncertainties. Our approach captures the important features of the evolution, and the extent of the associated uncertainties can be gauged by conducting a range of simulations with different values of a small number of input parameters. 
As we will see, the character of the results depends on only a few key assumptions. \section{Results} \subsection{Individual Systems} \begin{figure} \includegraphics[width=\columnwidth]{evolution_d.pdf} \caption{{\bf Sample evolutions:} In both cases $M_1 = 6\, M_\odot$ and $M_3=4\, M_\odot$. Left: $M_2 = 0.9\, M_\odot.$ Right: $M_2 = 1.8\, M_\odot.$ The top panels show $\dot M_{wind}$ (red), and $\dot M_{in},$ the rate of mass infall to the binary (set of dark blue curves). The next panel down shows the values of $M_2$ (blue, lower), and $M_1$ (red, higher). The straight gold line on the left marks $M_{Ch}$ and the gold line on the right corresponds to a maximum NS mass of $2.2\, M_\odot.$ Proceeding downward, the next panel shows the orbital period of the inner binary. The final panel shows the ratio of the final value of $\tau_{merge}$ to $|t-t_{merge,initial}|$ versus $M_3$ (left) and $a_{out}(0)$ (right). } \label{fig:evolution_d} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{evolution_88_95.pdf} \caption{The same as Figure 1, except that the donor masses on both the left and right are $14\, M_\odot.$ All other system parameters are identical. } \label{fig:evolution_88_95} \end{figure} In Figures 1 and 2 we show the results of evolving 16 individual systems. Figure 1 shows, side by side, two sets of evolutions, each starting with a single value of $M_1$ ($6\, M_\odot$; corresponding to a stellar-mass BH) and $M_3$ ($4\, M_\odot$; corresponding to a donor star of relatively modest mass). On the left, star~2 is a $0.9\, M_\odot$ CO-WD, and on the right it is a $1.6\, M_\odot$ NS. For both the WD and NS, we show the evolutions of $4$ systems which differ from each other in the initial radius of the outer orbit, which ranges from $10$~AU to $23$~AU. For all of the initial orbital separations, the WD (left) achieves the Chandrasekhar mass, exploding as an SN~Ia. 
Similarly, for all the initial separations considered, the NS (right) reaches the critical mass and undergoes accretion-induced collapse (AIC) to become a BH. In all cases the time-to-merger decreases from above the Hubble time to a value on the order of a few billion years. The orbital period of the inner orbit also significantly decreases. In all cases the time interval during which the most significant changes occur lasts for a few times $10^6$ years. Only for the two smallest initial values of $a_{out}$ does star~3 fill its Roche lobe. Roche-lobe filling leads to a slightly larger accretor final mass (a few tenths of a solar mass). The results for wider separations, where the donor never fills its Roche lobe, show that significant changes can be effected in the inner binary through the agency of winds alone. From the perspective of the processes at work, the key item of note is that, even though the donor star is a subgiant or giant throughout the time shown, there are essentially no changes in the properties of the inner orbits until the rate of wind mass loss is high. This is because, in our models, the accretors gain mass only for large infall rates. The infall rate depends on the donor's mass loss rate, and also on the size of the outer orbit relative to the donor's radius. The donor's winds and radius increase with time. In addition to winds, Roche-lobe filling provides an effective way for the inner binary to gain mass. Even for Roche-lobe-filling systems, winds are important because an epoch of heavy winds precedes Roche-lobe filling. Thus, there is not a dramatic change at the time of Roche-lobe filling unless the donor is so massive that a common envelope forms. Figure 2 differs from Figure 1 only in the mass of the outer star, which was taken to be $14\, M_\odot$ for both the WD (left) and NS (right) cases. The same set of $4$ orbital separations was chosen. In this case, two of the evolutions terminate at relatively early times. 
These correspond to systems in which star~3 fills its Roche lobe when it is more massive than the inner binary, so that a common envelope forms. During the very short duration of the common envelope, angular momentum continues to be lost by the inner orbit, but (in our model) no mass is gained by the compact objects in the inner orbit. The most obvious difference between the cases with $M_3=14\, M_\odot$ (instead of $4\, M_\odot$, as considered in Figure 1) is the availability of more mass\footnote{Note, however, that in the evolutions shown, the availability of additional mass did not lead to significantly larger increases in the masses of the accretors. This is because of the interplay in our model between the rate of mass infall and the ability of the accretors to accept and retain mass.}. There is, however, another difference that plays an important role: the more massive star evolves on a shorter time scale, so that the transitions take place over shorter times. \subsection{Large Numbers of Systems} \begin{figure} \includegraphics[width=\columnwidth]{tau_format.pdf} \caption{Results of calculations for 100,000 hierarchical triples; the model parameters are given in the figure's top label. {\sl Top panel}: the logarithm to the base ten of the final time to merger ($\tau_f$, the time to merger as measured after mass transfer is finished) is plotted against the logarithm to the base ten of $\tau_{f,expected}$, the time to merger that would have been expected, had no mass transfer occurred. {\sl Bottom panel}: The logarithm to the base ten of the ratio $\tau_f/\tau_{f,expected}$ is plotted versus the ratio of the final total mass of the inner binary to its initial total mass. 
Points in cyan (lightest points) are systems experiencing only wind mass transfer; larger and somewhat darker (blue) points are systems that have experienced wind mass transfer and then stable mass transfer while the donor fills its Roche lobe; darkest points (red) experienced wind mass transfer and then a common envelope after Roche-lobe filling. } \label{fig:tau_format} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{tau_format_v.pdf} \caption{ Same set-up as for Figure 3, with simulation parameters given in the top label. Color (gray scale) coding of points is described in \S 3.2.1 and also in the caption to Figure 3.} \label{fig:tau_format_v} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{characteristics_a_format.pdf} \caption{Results of the same calculation shown in Figure~3. The change in $M_2$ is shown along the vertical axes in the bottom panels. {\sl Left:}~$M_3$ is plotted along the horizontal axis; {\sl Right:}~$a_{out}(0)$ (in AU) is plotted along the horizontal axis. In the top panels the quantity $\tau_f/|\tau_0-t|$ is plotted along the vertical axis. Color (gray scale) coding of points is described in \S 3.2.1 and also in the caption to Figure 3.} \label{fig:characteristics_a} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{characteristics_a_format_v.jpg} \caption{ Same as for Figure 5, with simulation parameters shown in the top label. 
Color (gray scale) coding of points is described in \S 3.2.1 and also in the caption to Figure 3.} \label{fig:example_figure} \end{figure} \begin{table*} \centering \caption{ Numbers of events per 33,333 triples} \label{tab:example_table1} \begin{tabular}{lccccccccccc} \hline \hline & & & & & X-WD & X-WD & X-WD & X-NS & NS-NS & WD-WD & WD-WD\\ $v_1$ & $v_3$ & $\kappa$ & $N_{< 0.5\, \tau(0)}$ & $N_{< 0.1\, \tau(0)}$ & $\Downarrow$ & $\Downarrow$ & $\Downarrow$ & $\Downarrow$ & $\Downarrow$ & $\Downarrow$ & $\Downarrow$ \\ & & & & & X-Ia & X-NS & X-BH & X-BH & BH-BH & NS-NS & Ia-Ia\\ \hline \hline 0.00 & 0.00 & 0.50 & 5282 & 2790 & 618 & 361 & 93 & 889 & 98 & 18 & 36 \\ 0.50 & 0.50 & 0.50 & 5911 & 3090 & 721 & 400 & 82 & 1000 & 102 & 21 & 36 \\ 1.00 & 1.00 & 0.35 & 6321 & 2742 & 709 & 475 & 52 & 1033 & 82 & 22 & 27 \\ 1.00 & 1.00 & 0.50 & 6853 & 3538 & 801 & 498 & 101 & 1217 & 109 & 16 & 37 \\ 1.00 & 1.00 & 0.75 & 7408 & 4368 & 929 & 490 & 108 & 1287 & 137 & 25 & 43 \\ 2.00 & 2.00 & 0.50 & 9024 & 4469 & 912 & 718 & 68 & 1419 & 110 & 30 & 20 \\ 2.00 & 0.00 & 0.50 & 5369 & 2798 & 662 & 389 & 77 & 930 & 82 & 22 & 28 \\ 0.00 & 2.00 & 0.50 & 8923 & 4344 & 873 & 693 & 91 & 1430 & 123 & 35 & 35 \\ \hline \hline \end{tabular} \end{table*} To identify the sets of initial parameters (donor masses, orbital separations) that lead to significant increases in the mass of the inner binary and decreases in the time to merger, we conducted a set of simulations, each starting with tens of thousands of hierarchical triples. To generate each hierarchical triple, we started by generating the value of $M_1$, selecting uniformly between $0.6\, M_\odot$ (corresponding to a CO WD) and $7\, M_\odot$ [corresponding to a stellar-mass BH with mass typical of those discovered in nearby X-ray binaries; \citet{2016A&A...587A..61C}; \citet{2006ARA&A..44...49R}]. We allowed $M_2$ to have any mass smaller than $M_1$ but larger than $0.6\, M_\odot$. 
The value of $M_3$ was chosen to be in the range from $0.8\, M_\odot$ (roughly corresponding to the minimum mass of a star expected to evolve within a Hubble time) to $20\, M_\odot.$ The next step was to choose the time-to-merger for the inner binary: $\tau_{merge}$ was selected to lie between ($0.1\times$ the main-sequence lifetime of star~3) and ($10^{12}$ years), with the exponent chosen from a uniform distribution. We then used Equation~1 to compute the radius of the inner orbit. Equation~4 defines the minimum radius of the outer orbit. We selected the maximum orbital radius to be as large as $10^4$ times the maximum radius of star~3, selecting the exponent from a uniform distribution\footnote{With this formulation, many outer stars are in orbits so wide that they cannot send a significant amount of mass to the neighborhood of the inner binary. Our goal, however, is to use the calculations to explore the outer limits, beyond which mass transfer does not significantly change the inner binary.}. We then computed the evolution of each individual system, ending at the time when the envelope of the donor was exhausted. Each simulation was defined by the values of $\kappa,$ $v_1, v_2,$ and $v_3.$ \subsubsection{Times to Merger and Mass Increases} Figures 3 and 4 illustrate and quantify (1)~decreases in the time to merger and (2)~mass gains by the inner binary. Points in cyan (lightest color) correspond to systems in which the donor never filled its Roche lobe; all mass transfer proceeded through winds. Points in red (same size as cyan points, but darker) correspond to systems in which mass transfer occurred through winds, but the donor filled its Roche lobe at a time when it was more massive than the binary. During the ensuing common envelope phase, no additional mass was gained by the binary, but the inner binary did lose orbital angular momentum as it helped to eject the common envelope. 
Points in blue (dark and larger than the others) are systems in which the donor filled its Roche lobe and was able to continue giving mass to the inner binary in a stable manner. In the top panel of Figure~3 we consider the time-to-merger as measured at the end of mass transfer, $\tau_f$. The logarithm to the base 10 of $\tau_f$ is plotted along the vertical axis. Along the horizontal axis is the logarithm of the ``expected'' time-to-merger, $\tau_{f,expected}$: the original value of the time to merger (i.e., as calculated prior to mass transfer), minus the duration of mass transfer. Thus, $\tau_{f,expected}$ would have been the remaining time to merger had no mass transfer occurred. The diagonal green line corresponds to the case in which mass transfer has no effect on the time to merger. The factor by which the time-to-merger decreases is often as small as $0.01$, with some systems exhibiting even more dramatic decreases. The bottom panel explores the relationship between the decrease in the time to merger and the increase in the masses of the components of the inner binary. Along the horizontal axis is plotted the ratio of the total mass of the inner binary after mass transfer to its value prior to mass transfer. Along the vertical axis is the logarithm to the base 10 of $\tau_f/\tau_{f,expected}$. The panel shows that the time to merger can decrease significantly, even if the inner components of the binary gain very little mass. As expected, common envelope evolution can lead to significant shortening of the time to merger while the mass of the inner binary experiences only a marginal increase. On the other hand, Roche-lobe filling, and even winds alone or winds followed by a common-envelope phase, can lead to both significant mass increases and to significant decreases in the time to merger. 
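The strong sensitivity of the merger time to even modest mass gains follows from the scalings of \S 2.2. A minimal numerical check, assuming the standard gravitational-wave merger-time scaling $\tau \propto a^4/[M_1 M_2 (M_1+M_2)]$ together with $a \propto (M_1+M_2)/(M_1^2 M_2^2)$ at fixed orbital angular momentum (the function name and example masses are ours):

```python
def merger_time_ratio(m1, m2, dm1=0.0, dm2=0.0):
    """Factor by which the GW merger time changes when the inner binary
    gains (dm1, dm2) at fixed orbital angular momentum."""
    def a_of(ma, mb):               # orbital separation, up to a constant
        return (ma + mb) / (ma**2 * mb**2)
    def tau_of(ma, mb):             # merger time, up to a constant
        return a_of(ma, mb)**4 / (ma * mb * (ma + mb))
    return tau_of(m1 + dm1, m2 + dm2) / tau_of(m1, m2)
```

For example, adding $0.4\, M_\odot$ to the lighter component of a $6\, M_\odot + 1\, M_\odot$ inner binary at fixed angular momentum shortens the merger time by a factor of roughly $17$, consistent with the large decreases seen in the bottom panel even for small fractional mass gains.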
The same quantities are plotted in Figure~4, but the parameters that control angular momentum loss are larger ($v_1=v_2=v_3=2$), indicating that more angular momentum is carried away by mass leaving the system. In particular, the outer orbit is more likely to shrink, so that larger numbers of donors will fill their Roche lobes. A larger fraction of common-envelope and Roche-lobe-filling systems inhabits the panels of Figure~4. As a consequence, typical times to merger decrease even more than in Figure~3, and the fractional change in mass of the inner binaries is larger. Two messages emerge from these simulations. {\bf (1) There are significant effects even with low angular momentum loss. (2) Increasing the amount of angular momentum lost increases the magnitude of the effects and the numbers of systems experiencing them. } \subsubsection{What characteristics of the outer binary produce the most pronounced changes?} Figures~5 and 6 relate the changes in the inner binary to the initial characteristics of the outer binary. In the right-hand panels we consider the effect of varying $a_{out}(0)$, the initial radius of the outer orbit. By this we mean its radius just prior to the epoch of mass transfer from star~3. In the left-hand panels we explore the influence of the value of $M_3(0)$, the initial mass of the donor. The bottom panels plot $(M_2(f)-M_2(0))$, the change in the mass of the star that was initially the least massive stellar remnant in the inner binary. The top panels plot the quantity $\tau_f/|\tau_0-t|$\footnote{This functional form yields values almost equal to the ratio $\tau_f/\tau_0$ when the evolutionary time is shorter than the initial time to merger, but gives large values when the merger time is already so short that the binary may merge even before star~3 is fully evolved. 
Such systems are interesting because there would be mass in the vicinity of the binary as it merges, potentially producing detectable electromagnetic signatures to accompany the emission of gravity waves. From the perspective of altering the time to merger, however, the effects are likely to be slight and we therefore designed the functional form $\tau_f/|\tau_0-t|$ so that small values would allow us to easily identify the systems in which the time-to-merger was most altered by mass transfer.}. Figures 5 and 6 share several common features. First, with regard to the influence of the donor mass: while more massive donors can increase the secondary's mass by the most (something just over $3\, M_\odot$ for $M_3(0) = 20\, M_\odot$), even low-mass donors can be responsible for significant increases. A star of $2\, M_\odot$ ($5\, M_\odot$) can increase the secondary mass by nearly $1\, M_\odot$ ($2\, M_\odot$). Furthermore, these large changes can be made through the action of winds. (This is more obvious in the left-hand panels, which include fewer of the somewhat larger points associated with Roche-lobe filling.) Another commonality is that the initial orbital separation makes a big difference to both the inner-binary mass increase and the decrease in the time to merger. Furthermore, there is a range of values of $a_{out}(0)$ over which the changes in mass and merger times are sharply peaked. For $v_3=1,$ the peak lies in the range between about $10$~AU and $20$~AU. This peak moves out to larger values of $a_{out}(0)$ when there is more angular momentum loss. This is because the loss of angular momentum from the outer orbit decreases the value of $a_{out};$ thus, the initial value of $a_{out}$ is much larger than the values at which most of the mass transfer occurs. \subsubsection{Comparing results across simulations} Table 1 shows the results for a set of 8 simulations. The first two columns show the values used for $v_1$ and $v_3$. 
These are the constants of proportionality between the angular momentum per unit time carried away from the outer orbit by mass exiting from star~1 and star~3, respectively, and the specific angular momentum of these stars. Matter incident on the binary first travels to $M_2$. Any mass that cannot be accreted by $M_2$ then travels to $M_1,$ and mass that cannot be accreted by $M_1$ exits the system. Since matter does not exit directly from $M_2$, the value of $v_2$ does not directly influence the evolution. The model parameter $\kappa$ appears in the third column. The fourth column shows the number of systems, $N_{< 0.5\, \tau(0)}$, for which the times to merger were reduced by more than a factor of $2$. $N_{< 0.5\, \tau(0)}$ is a rough estimate of the numbers of systems in each simulation that are significantly influenced by mass flowing from the outer star toward the inner binary; its value ranges from $\sim 5300$ to $\sim 9000$, representing between $5\%$ and almost $10\%$ of the initial set of triples. Because most members of the initial set did not start with values of $a_{out}$ small enough to allow significant mass flow to the inner binary, the value of $N_{< 0.5\, \tau(0)}$ provides a guide to the numbers of sets of initial conditions that lead to significant changes. There are some clear trends in the values of $N_{< 0.5\, \tau(0)}$. First, larger changes are effected for larger values of $\kappa$. The third, fourth, and fifth rows are for systems with $v_1=v_3=1$, but for $\kappa$ equal to $0.35$, $0.50$, and $0.75$, respectively. The increase in the effectiveness of mass transfer with increasing $\kappa$ is expected, unless incoming mass is processed much less efficiently by the inner binary when the rate of mass infall is large. Second, more angular momentum loss, particularly from the outer orbit, increases the numbers of systems experiencing significant effects. This can be seen by comparing the first, fourth, sixth, and eighth rows.
The more angular momentum lost from the outer orbit, the larger the numbers of outer stars that can be pulled in close enough to the inner binary to donate significant amounts of mass. The fifth column shows the number of systems, $N_{< 0.1\, \tau(0)}$, for which the times to merger were reduced by more than a factor of $10$. We find that, across simulations, $N_{< 0.1\, \tau(0)} \approx 0.5\, N_{< 0.5\, \tau(0)}$. This count includes binaries whose initial times to merger were larger than $\tau_H.$ The sum of the sixth and seventh columns provides the numbers of WD-containing close binaries in which a WD made a transition to an SN~Ia or to an NS, while the nature of the companion stayed the same. The numbers of transitions to NSs are smaller than the numbers of SNe~Ia in these simulations because the initial WD masses needed to transition to an NS span only the narrow range between $1.15\, M_\odot$ and $1.38\, M_\odot,$ while SNe~Ia occur in our simulations for WD masses from $0.6\, M_\odot$ to $1.15\, M_\odot$. Although the ratio of the lengths of the starting ranges is only $0.41$, the ratio of AICs to SNe~Ia is larger because less mass is generally needed to effect an AIC. The sum of the sixth and seventh columns is roughly equal to $0.3\, N_{< 0.1\, \tau(0)}$. The number of NSs that make transitions to BHs while their companions do not transition is also roughly equal to $0.3\, N_{< 0.1\, \tau(0)}$. The number of binaries that experience double transitions (e.g., NS/NS to BH/BH) is roughly $10\%$ of the number of single transitions (e.g., NS/X to BH/X). The upshot of these comparisons is that, while the amount of mass channeled to the inner binary and the loss of orbital angular momentum from the outer binary each play significant roles, changes in the time-to-merger and transitions of compact objects should both be common. \subsubsection{X-Ray Hierarchical Triples} Mass approaching close to or accreting onto one of the compact objects is expected to emit X-rays.
For each particle accreted, the energy released is typically a significant fraction of its rest-mass energy. Compact binaries receiving mass emitted by winds or through Roche-lobe filling can therefore be very bright. With intrinsic and/or lensed luminosities that may be near or above $10^{40}$ erg~s$^{-1}$, they would be detectable in external galaxies. An interesting feature of these systems is that, although they would appear to be X-ray binaries, they are actually X-ray hierarchical triples. They could exhibit periodic or quasiperiodic signatures related to the orbital period of the inner binary. At the same time, they would exhibit features characteristic of symbiotic binaries. If hierarchical triples are common, a significant fraction of bright variable X-ray sources are likely to be {\sl accreting binaries} with wide-orbit companions. Archival studies searching for variability among bright X-ray sources, and also identifying counterparts across wavebands, could identify these systems. The population would consist of pre-merger binaries, but also of many others which cannot merge in a Hubble time. In addition, some short-orbital-period binaries of many types might be accreting from low-mass companions. An example of a system in which one of the components of the inner binary may not be a stellar remnant is a cataclysmic variable, in which the companion may be a very low-mass star, perhaps itself degenerate. The orbital period would be on the order of a few hours. It is important to conduct archival searches for X-ray triples.
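The luminosities quoted above follow from the standard radiative-efficiency estimate $L \simeq \eta\, \dot{M} c^2$. A minimal order-of-magnitude sketch follows; the efficiency $\eta \approx 0.1$ and the accretion rates used are fiducial assumed values, not parameters of our simulations.

```python
# Order-of-magnitude accretion luminosity, L = eta * Mdot * c^2.
# The radiative efficiency eta ~ 0.1 for a neutron star or black hole
# is an assumed fiducial value, not a number taken from the simulations.

MSUN_G = 1.989e33   # solar mass in grams
YR_S = 3.156e7      # one year in seconds
C_CMS = 2.998e10    # speed of light in cm/s

def accretion_luminosity(mdot_msun_per_yr, eta=0.1):
    """X-ray luminosity in erg/s for a given accretion rate."""
    mdot_cgs = mdot_msun_per_yr * MSUN_G / YR_S  # convert to g/s
    return eta * mdot_cgs * C_CMS**2

# A wind-fed rate of ~1e-6 Msun/yr yields a luminosity near the
# 1e40 erg/s quoted in the text for the brightest systems.
print(f"{accretion_luminosity(1e-6):.2e} erg/s")
```

The linearity in $\dot{M}$ makes clear why even a small fraction of a giant donor's wind, captured by the inner binary, suffices for detectability in external galaxies.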
\begin{figure} \includegraphics[width=\columnwidth]{detect2_format.pdf} \caption{ {\bf Average X-ray luminosity of the least massive accretor versus the ratio of the inner binary's final mass to its initial mass (upper panel) and final orbital period (lower panel).} As in the previous figures, systems shown in {\color{Cyan}cyan} transferred mass entirely through winds; systems in {\color{red}red} transferred mass entirely through winds and then produced a common envelope when the donor filled its Roche lobe; systems in {\color{Blue}blue} transferred mass through winds and then through the L1 point as well, during Roche-lobe filling. The luminosity of $M_2$ is computed at each time step, and its average value over the time during which it exceeds $10^{32}$~erg~s$^{-1}$ is what is plotted. } \label{fig:xray_lum} \end{figure} \begin{figure*} \hspace{-1.6in} \vspace{-3in} \includegraphics[scale=1]{picture.pdf} \caption{ Each diamond represents a close-orbit binary, and is divided into two parts, each representing one of the binary's components. The links among these diamonds represent evolutionary pathways in which one of the binary's components gains enough mass to make a transition that changes its nature. Because the addition of only a relatively small amount of mass can spark a transition from a WD to a NS or from a NS to a BH, all of these transitions are possible.} \label{fig:transition_diagram} \end{figure*} \begin{figure*} \includegraphics[width=2\columnwidth]{wd_trans.pdf} \caption{{\bf Initial masses (in orchid) and final masses (in blue) of binaries} in which a WD undergoes an AIC (top panels) or an SN~Ia (bottom panels). Each point represents a pair of masses, with the initial masses on the left (orchid points) and the final masses on the right (blue points). The WD mass, $M_2$, is plotted along the horizontal axis, and its companion's mass is plotted along the vertical axis.
} \label{fig:wd_trans} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{ns_trans.pdf} \caption{{\bf Initial masses (in orchid) and final masses (in blue) of binaries} in which NSs undergo AICs to become BHs. In the top panel, the NS starts with a BH companion. In the bottom panel, the NS starts with an NS companion and both NSs collapse. Each point represents a pair of masses, with the initial masses on the left (orchid points) and the final masses on the right (blue points). An NS mass, $M_2$, is plotted along the horizontal axis, and its companion's mass is plotted along the vertical axis. } \label{fig:ns_trans} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{main_lim.pdf} \caption{ {\bf Top panel: The logarithm to the base ten of the time to merger versus the donor mass} of a main-sequence Roche-lobe-filling donor. Each curve corresponds to a given value of the inner binary's total mass, as shown in the legend. The value of the orbital separation of the inner binary is the maximum consistent with the orbital stability of the hierarchical triple, ${a_{in}}_{max}$. {\bf Bottom panel: ${a_{in}}_{max}$ (in solar radii) is plotted versus the donor's mass.} } \label{fig:main_lim} \end{figure} \section{Transformations of the Natures of the Accretor(s)} Figure~8 illustrates the full set of transformations made possible through the accretion of mass by one or both compact objects comprising a binary. Each diamond in the figure represents a compact-object binary. The binary's components are labeled: {\textcolor {red} {WD}}; {\textcolor {blue} {NS}}; {\textcolor {Violet}{BH}}. The least massive combination, {\color{red}{WD-WD}}, appears at the top of the structure, and the most massive, {\color{Violet}{BH-BH}}, appears at the bottom. This structure provides a simple visualization of the possible transformative effects of mass infall. Imagine mass accreting onto the {\color{red}{WD-WD}} pair.
It may influence the orbit, shrinking the size of the diamond (which represents the orbit) or, in some cases, expanding it. If mass stops falling before either WD reaches the critical mass, then no transitions are made\footnote{We note, however, that the eventual merger of the WDs would produce an SN~Ia via the double-degenerate channel.}. If, however, one of the WDs is an O-Ne WD that achieves the critical mass, that WD becomes a NS. A red line connects the {\color{red}{WD-WD}} diamond to the diamond below and to the left of it, which represents an {\color {blue}{NS}}-{\color{red}{WD}} pair. Note that there are also two red lines connecting the {\color {red}{WD-WD}} binary to a double-NS binary. Each line represents the AIC of a WD; two such connections side-by-side correspond to two transitions that occur during a time interval short compared with the time scale of mass transfer. While we do not expect two AICs to occur at exactly the same moment, the near equality of the mass-gaining WDs in our accretion scenario suggests that two AICs could occur very close in time to each other. Thus, the aftereffects of the first event may still be detectable at the time of the second event. Every red line connecting a diamond on an upper level to a diamond on the level below represents the AIC of a WD. Such transitions are expected if O-Ne WDs are in close binaries accreting from a hierarchical companion. This is because the amount of mass that is needed to effect the transition is at most $\sim 0.25\, M_\odot.$ Thus, if O-Ne WDs are in close binaries, and also have donor stars in wide orbits, the only condition that must be satisfied in order to produce an AIC is that, for an interval of a few times $10^5$ years, mass from the donor be incident on the WD at a high enough rate ($\sim 10^{-6} M_\odot$~yr$^{-1}$) for nuclear burning to occur, or else at the slightly lower rates that would produce recurrent novae.
This is achievable for giant donors at orbital separations ${\cal O}(\mathrm{AU})$. Mass changes that occur when a compact object makes a transition are illustrated in Figures~9 and 10. The points in Figure~9 and in Figure~10 were generated by the full set of binaries evolved to produce Table~1. In orchid (blue) are the masses of the WD [$M_2$] and its companion [$M_1$] prior to (after) mass transfer. Figure~9 considers transitions of WDs. The top left-hand panel of Figure~9 shows cases in which a WD collapses into a NS when its initial companion was either a NS or a BH. In the systems shown, the companion did not transition to a compact object of another type. One can see that, in general, both compact objects gain mass. The small gaps that appear around $M_1 = 2\, M_\odot$ correspond to systems in which the WD's NS companion transitioned to a BH. On the right are binaries which started with two WDs and ended with two NSs. Note that the masses of the WDs track each other, staying almost equal. This feature is related to our mass-transfer scenario, in which mass flows first to the least massive component. The two bottom panels show transitions in which one or both WDs undergo SNe~Ia. Each of these panels is analogous to the panel just above it. The gaps in the ``before'' and ``after'' masses of $M_2$ reflect the fact that the upper mass limit for C-O WDs is below the Chandrasekhar mass. Otherwise the characteristics are very similar to those illustrated above in the WD to NS transitions. Note that we did not terminate the evolutions at the time a WD achieves $M_{Ch}.$ In principle, a Type Ia supernova could occur at this point. In practice, there will be a simmering phase prior to explosion that could last $\sim 1000$ years \citep{2011ApJ...738L...5P}. An even longer delay is possible if the WD has spun up and needs to spin down to explode \citep{rd.2011}.
Because we did not assume that a WD explodes immediately upon reaching the Chandrasekhar mass, the masses of the WDs in the final state exhibit a range of values, extending to above $2\, M_\odot.$ If the WDs are spun up by accretion, super-Chandrasekhar explosions are expected \citep{rd.2011}. In the right-hand panel, cases in which both WDs explode are shown. Such a scenario is possible if accretion can continue even after one WD has achieved $M_{Ch}$ (i.e., if that WD does not explode on a short time scale), or else if the orbital parameters are such that, even after a WD undergoes a supernova, its companion WD can stay bound to the donor and can continue to accrete. In Figure~10, the top panel shows mass changes during transitions in which a BH-NS pair becomes a BH-BH pair when the NS collapses. The bottom panel shows systems in which two NSs collapse to become two BHs. Again the near-equality of the final BH-BH masses is exhibited. Note that there are clear deviations from exact equality. This is because the mode of mass transfer in our simulations allows mass that could not be retained by the lower-mass accretor to flow to its companion, which may then retain some of the incident mass. {\bf Observable Signatures:} The question arises: how can we know if some of the compact objects we detect are the result of transformations from less massive compact objects? Of particular relevance: how can we know that the mass that led to collapse was donated by a third star in a hierarchical triple? The first answer is to study the distribution of masses of the compact objects we detect through the gravitational radiation they emit during merger. Analysis of the gravitational wave signatures allows us to measure the masses of the compact objects that merged. While many mergers are likely to occur in systems that did not undergo accretion from a third star, those that have followed the evolutionary pathways we outline here could help to shape the form of the mass distribution.
For example, the accretion-induced collapses of NSs produce BHs that, at least at the time of collapse, have masses lower than those of the BHs we have been studying within X-ray binaries. Furthermore, if mass is channeled toward the least massive component, then the masses of the compact objects that merge should tend to be similar. Thus, our model is likely to produce pairs of merging BHs, with the two members of the pair having similar masses, and those masses may each be less than $\sim 4-5\, M_\odot,$ with some hovering right near the maximum NS mass. In addition, we may detect NS-BH mergers in which the masses of the NS and BH are nearly the same. WD-NS mergers will, with present-generation instruments, be detected primarily through electromagnetic signals. If, however, the mass of the WD can be established, our model predicts that some of the WDs should have masses very close to the critical mass. AICs of WDs to NSs and of NSs to BHs should also be electromagnetically luminous. In our model, roughly half of them should take place within an accreting hierarchical triple. In addition, pulsar searches are discovering many more systems in which an NS is in orbit with another compact companion. \begin{figure*} \includegraphics[scale=0.5]{flow_chart_p.pdf} \vspace{-.5in} \caption{ Flow chart in which all inner binaries start on the left-hand side and evolve toward the right. The green rectangle of length $Y(b)$ corresponds to binaries that would merge within $\tau_H$ in binary-only scenarios. In binary-only scenarios, all other systems end at the red oval. In the model in which there is a mass-donating star in an outer orbit, all evolutions continue to the right. Note that some systems that would have merged in the binary-only scenarios may now fail to merge within $\tau_H$. The flattened red oval represents such failures.
The upper green oval on the right, of length $Y(b,t)$, corresponds to mergers that could have happened even in binary-only scenarios, and the lower green oval, of length $Y(t,t)$, corresponds to mergers that are added through the effects of mass from the third star. The sum of the lengths of these two ovals is $Y(b,t)+Y(t,t)$ and is expected to be larger than $Y(b)$.} \label{fig:flow_chart} \end{figure*} \section{Wider Range of Characteristics and Possibilities} Mass transfer from a star in a wide orbit can influence the masses and orbits of compact objects in a close binary. We have illustrated this with examples of mass transfer from an evolved star, because this process can be followed in a simple way, closely connected to an analytic formalism. We have focused on cases in which angular momentum from the outer orbit is carried away from the triple. Here we consider elements of our model that extend the results beyond those derived in \S 3 and \S 4. \subsection{Mass Transfer from a Main-Sequence Star} Main-sequence stars can serve as donors in hierarchical triples. There are several important differences between systems with main-sequence and giant donors. Main-sequence stars have wind mass-loss rates typically much smaller than those of evolved stars. If a main-sequence star is not filling its Roche lobe, the mass infall rate to the inner binary is likely to be low. We therefore consider only cases in which the main-sequence donor fills its Roche lobe. The mass, $M_3$, of the main-sequence star determines its equilibrium radius, and therefore the radius, $R_L$, of its Roche lobe. The value of the orbital separation at which the donor fills its Roche lobe, $a_{out}$, is determined by the combination of $R_L,$ $M_3$, and the total mass, $M_1+M_2$, of the inner binary.\footnote{At the time the donor first comes to fill its Roche lobe, its radius is likely to be the same as the equilibrium radius. As the star loses mass, however, it can either shrink or expand.
The difference from the equilibrium radius is not typically large, so here we use it as a guide.} Once we know the value of $a_{out}$, we can employ the condition for orbital stability to determine ${a_{in}}_{max}$, the maximum possible value of $a_{in}$ for which the triple is dynamically stable. The ratio $a_{out}/a_{in}$ provides a basic guideline to whether the orbits are stable. If the effects of mass flow drive the ratio to values that are too low, the hierarchical triple will no longer be dynamically stable, and mergers or ejections may occur. If, on the other hand, the triple is dynamically stable, then a large fraction of the mass lost by the donor will come under the gravitational influence of the inner binary. The bottom panel of Figure~11 plots ${a_{in}}_{max}$, the maximum value of $a_{in}$ consistent with orbital stability, for three values of the total mass, $M_1+M_2$, of the inner binary, and for a range of donor masses extending to $20\, M_\odot$. The upper panel plots values of the logarithm to the base 10 of the time to merger when the separation is ${a_{in}}_{max}$. Figure~11 demonstrates that the times to merger in all of these cases are short. For example, if the total mass of the inner binary is $15\, M_\odot$, the time to merger ranges from tens of thousands of years for an M-dwarf donor star to just under $10^8$~years for a donor of $20\, M_\odot$.\footnote{We used a mass-radius relationship appropriate for stars below roughly $9\, M_\odot$. The results would not be qualitatively different with a mass-dependent formulation.} We can also consider cases in which the inner orbits are smaller than ${a_{in}}_{max}$, since these will also be stable with respect to the dynamical evolution. The times to merger would then be even shorter: in most cases shorter than the main-sequence lifetime of the donor. Main-sequence donors are therefore likely to be present at the time of merger.
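The merger times quoted here follow from the standard gravitational-wave inspiral timescale for a circular binary (Peters 1964), $\tau = (5/256)\, c^5 a^4 / [G^3 m_1 m_2 (m_1+m_2)]$. A minimal numerical sketch illustrates the steep $a^4$ dependence that underlies these short times; the masses and separation in the example are illustrative, not values drawn from Figure~11.

```python
# Gravitational-wave merger time for a circular binary (Peters 1964):
#   tau = (5/256) * c^5 * a^4 / (G^3 * m1 * m2 * (m1 + m2))
# All constants in cgs; the example masses/separation are illustrative.

G = 6.674e-8     # gravitational constant, cm^3 g^-1 s^-2
C = 2.998e10     # speed of light, cm/s
MSUN = 1.989e33  # solar mass, g
RSUN = 6.957e10  # solar radius, cm
YR = 3.156e7     # year, s

def merger_time_yr(m1_msun, m2_msun, a_rsun):
    """Circular-orbit inspiral time in years."""
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    a = a_rsun * RSUN
    tau_s = (5.0 / 256.0) * C**5 * a**4 / (G**3 * m1 * m2 * (m1 + m2))
    return tau_s / YR

# Two 1.4 Msun neutron stars separated by one solar radius merge within
# a few tens of Myr; the a^4 scaling explains why even modest orbital
# shrinkage shortens tau so dramatically.
print(f"{merger_time_yr(1.4, 1.4, 1.0):.2e} yr")
```

Doubling the separation lengthens the merger time by a factor of $2^4 = 16$, which is why modest angular momentum drains from the inner orbit translate into large reductions in the time to merger.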
Mass from the donor can provide a luminous electromagnetic signature before, during, and after merger. The prior signal would be strong at X-ray wavelengths if one or both of the compact objects in the inner binary accretes. The orbital period of the inner binary may be measurable through variations in the X-ray flux. Furthermore, the signal from one accretor can be enhanced through gravitational lensing by its compact companion \citep{2018MNRAS.474.2975D,DoDi:2018inprep}. If the merger product and donor star are able to remain in orbit, the long-term result after merger will be an X-ray binary. \subsection{Angular Momentum: General Considerations} If the angular momentum flow is more complex than in the simple examples we have considered, there can be interesting consequences. \smallskip {\bf Possible increases in the angular momentum of the inner orbit:} The angular momentum of the inner orbit can be significantly smaller than the angular momentum associated with the outer orbit, yielding small values of the following ratio: \begin{equation} \frac{L_{in}}{L_{out}} = \Bigg(\frac{M_1\, M_2}{M_3\, (M_1+M_2)}\Bigg)\, \Bigg(\frac{M_1+M_2+M_3}{M_1+M_2}\Bigg)^{\frac{1}{2}} \Bigg(\frac{a_{in}}{a_{out}}\Bigg)^\frac{1}{2} \end{equation} Thus, if even a small fraction of the angular momentum of the outer orbit is transferred to the inner orbit, the results can be dramatic. Consider, for example, the case of perfectly aligned orbits in which mass from the outer star is accreted by one or both of the inner stars without any mediation by, e.g., a disk. The binary can be ``spun up''. If the triple remains stable, the time to merger will increase. If, however, $a_{in}$ increases and $a_{out}$ either decreases or increases at a rate slower than $a_{in}$, the triple may become unstable, calling its final fate into question. The ejection of one of the stars is a possibility, and so is a prompt merger.
In the latter case, the merger would take place in a mass-containing region, and the donor star would also be present at the time of merger, potentially producing an electromagnetic signature. {\sl It is interesting to note that, if the outer orbit is not too big, expansion of the inner orbit can trigger a dynamical instability, producing a prompt merger.} \smallskip {\bf Three-dimensional rotation:} The inner and outer orbits may not be in exact alignment. In such cases, mass incident on the inner binary from star~3 can induce rotation in the orbital plane of the inner binary. Were the inner binary isolated, the force of gravity acting on its two components would define a two-dimensional plane. In our case, the inner orbit defines one plane, with an angular momentum vector $\vec L_{in}$ perpendicular to that plane. The same is true of the outer orbit. The two angular momenta, $\vec L_{in}$ and $\vec L_{out}$, may point in different directions. Mass flow from the outer star influences not only the motion of the compact objects within the plane of the inner orbit, but can also serve to rotate the orbital plane, setting it ``spinning'', as the two compact masses continue to orbit each other. {\sl The complex dynamics has much in common with situations in which mass is not flowing through the system, but instead the dynamics is dominated by three-body interactions (second appendix).} \subsection{Massive Donors} Massive donors introduce additional features. When such donors have close stellar companions, binary interactions can strip them of their hydrogen envelopes \citep{1986ApJ...310L..35U}. Energy provided by the remaining nuclear-burning core can produce pre-supernova outbursts, consistent with observational evidence for precursor events \citep[][and references therein]{2018MNRAS.476.1853F}. Heavy winds and precursor events may inject mass into the orbit of the inner binary.
While such mass injections may or may not increase the mass of the inner binary, they are likely to decrease the time to merger. Furthermore, any alterations made close to the time of supernova would have been preceded by an epoch of sustained winds, which are more likely to produce genuine mass increases and to decrease the orbital angular momentum. Thus, even prior to the supernova, the close binary may be more massive and closer to merger than it would have been had the companion not been in orbit with it. The supernova ejects matter at high speed; this ejecta flows past the binary, interacting with it and possibly torquing it. Depending on the initial configuration of the triple, its evolution, and the response of the binary to the supernova, the binary may merge near the time of the supernova or at least while the supernova remnant is still detectable. We would then detect a gravitational wave (GW) source within a supernova remnant. The supernova would not be associated with either of the merging compact objects, however. It is therefore important to search for tell-tale signatures, such as the location of the gravity-wave emitter away from the center of the supernova remnant. Depending on the total change in the mass of the triple-star system, the remnant of star~3 could become unbound from the compact-object binary. \subsection{Uncertain or New Physics} {\bf Common Envelope:} Many elements of the common envelope are not yet well understood \citep[][and references therein]{1993PASP..105.1373I, 2018arXiv180303261M}. These include the initiation of the common envelope; its evolution, and whether there is mass gain by the stars engulfed by it; the evolution of the stellar orbit; and the end state left behind. In our calculations we have assumed that the compact objects in the inner binary gain no mass during the common-envelope phase. Some evolutionary calculations allow hypercritical accretion.
(See \citet{2010CQGra..27q3001A} for an overview of the binary evolution of systems leading to compact binary coalescences.) Mass gain during such a phase would likely be minimal. Nevertheless, it could be enough to change the nature of one or both components of the compact binary. In our conservative approach, the common envelope does not produce prompt mergers, and no mass is gained during the common-envelope phase. It is likely, however, that in some real systems the consequences of the formation of a common envelope are more dramatic than those allowed in our calculations. This will have the effect of driving more close binaries to an earlier time of merger. Also, in combination with mass gained by the components of the close binary prior to the formation of the common envelope, any mass gain during the common envelope could change the natures of some accretors. {\bf Circumbinary Disks:} Circumbinary disks have been studied in the context of supermassive black hole binaries. The least massive component of the inner binary passes closest to the accretion disk, and simulations show that a {\sl minidisk} can therefore form around it, allowing this least-massive compact object to gain mass \citep{2018arXiv180102266T}. Should its mass become equal to that of its companion, then both stars may gain mass, with one of them accreting, and then the other. In this scenario, the two masses stay nearly equal to each other, so that mass from star~3 tends to equalize the masses of the inner binary. There are other possibilities, both for supermassive and stellar-mass BHs \citep{2018arXiv180106179M, 2018arXiv180102624K}. These possibilities include the accelerated growth of the more massive component. It is also worth noting that the compact objects in the inner binary are close enough to each other that, if incoming matter forms a structure, such as a corona, some of the mass forming that structure could be transferred to the other star.
This reasoning produced the flow of mass we incorporated into the calculations of \S 3, which gave the least massive component of the inner binary a chance to accrete incoming mass; mass that could not be retained by the least massive component was then passed on to the more massive component. Studies of circumbinary disks show that they can play active roles in, for example, extracting angular momentum from the binary. Because the possibilities are not yet well understood, we have not explicitly incorporated disks into our calculations. Generally, a disk promotes dissipative effects. These tend to erase the effects of the detailed mass-flow history. Our approach of considering only the local flow of matter in the vicinity of each accretor is therefore likely to be valid in many cases. {\bf Gravitational Lensing Within the Compact Binary:} Consider inner binaries whose components are NSs and/or BHs. When mass is incident on one of these compact objects, it emits electromagnetic radiation, primarily at X-ray wavelengths. Roughly $10\%$ of the inner binaries have orientations favorable for the detection of gravitational lensing. That is, the projected distance between the luminous accretor and its companion (during a time interval in which the companion passes in front of the accretor) is small enough that radiation from the accretor is significantly lensed \citep{DoDi:2018inprep, 2018MNRAS.474.2975D}. When this happens, the X-ray count rate is temporarily increased. This would occur once, or (if both compact objects are accreting) twice per orbital period. Furthermore, the resultant X-ray flux, lensed from a baseline luminosity that can be $10^{38}-10^{40}$~erg~s$^{-1}$, can be high. If lensing occurs, the number of photons we receive may be large enough to permit detection in external galaxies. Even binaries that will require more than $\tau_H$ to merge may be detectable through such lensing.
In addition to this unique effect, which can allow the masses of the compact objects to be determined, there are other reasons that the periodicity of the inner orbit may be imprinted on the X-ray signal. A definitive identification of lensing would, however, firmly establish the presence of an accreting inner binary composed of NSs and/or BHs. The reason we have explicitly discussed NSs and BHs in this subsection is that the relatively larger size of WDs means that finite-lens-size effects decrease the probability of binary self-lensing and also diminish the magnitude of any effects that do occur. \section{Summary and Implications} \subsection{Summary} We have considered triple-star systems consisting of two compact objects in a close orbit and an unevolved star in a wider orbit. We have shown that, for a large range of plausible initial conditions, mass from the outer star can influence the evolution of the inner binary. This is possible simply through the agency of winds if a small fraction of the donor's mass comes close enough to the inner binary to interact with it. Roche-lobe filling can also play a role. If the subsequent mass transfer is stable, it contributes to the mass of the components of the inner binary as well as possibly altering the orbital angular momentum. If it is unstable, then a common envelope is likely to drain angular momentum from the inner orbit. The primary effects are the following. \begin{itemize} \item {\bf Increases in the mass of one or both components of the inner binary.} It is of interest that even donor stars with masses similar to or a few times larger than that of the Sun can provide significant mass to the components of an inner compact binary. \item {\bf Mass increase can transform one type of compact object into another,} with O-Mg-Ne WDs becoming NSs, and NSs becoming BHs. Such transformations substantially increase the pool of NS-NS, NS-BH, and BH-BH binaries that merge within a Hubble time.
Only a modest amount of added mass is required to effect these transformations. \item {\bf Changes in the time to merger.} These can occur if the inner binary's components gain mass, even if its orbital angular momentum is unchanged. Decreases in time to merger can be more dramatic when mass from the donor drains angular momentum from the inner orbit. \item {\bf A new model of Type Ia supernovae.} When WDs that would not have merged in a Hubble time can do so because of mass provided by a companion in a hierarchical orbit, the rates of SNe~Ia generated through the double-degenerate channel can increase. In addition, mass gain by C-O WDs during mass transfer from the third star can produce SNe~Ia through an analog of the single-degenerate model. The first-formed WD would already have accreted matter during the evolution of its original stellar companion that eventually produced the second WD. Mass from the third star provides another chance for it (and a first chance for its WD companion) to increase its mass to the Chandrasekhar mass. Thus, the rate of SNe~Ia through the accretion channel can also be increased because of mass provided by the outer star. \smallskip \end{itemize} \noindent Our calculations have been conservative. It therefore seems likely that gains in mass and losses of orbital angular momentum much larger than those we have considered are possible. For example, {\bf (a)}~the third star could be more massive than the stars we considered, {\bf (b)}~the ejection of the common envelope almost certainly drains more angular momentum than we have assumed, {\bf (c)}~there may be more than one outer-orbit star that can come to donate mass. \subsection{Implications} Our model has several points of connection to observations. Key elements of it can therefore be tested. \subsubsection{Gravitational Mergers} Triples in which the outer star sends mass toward the inner binary can increase the rate of gravitational mergers. 
\footnote{In an appendix we briefly discuss the role of three-body dynamics, even when mass is not transferred from the outer star.} Consider, for example, BH-BH mergers. Binary interactions provide a baseline prediction. Binary-only calculations apply to true binaries and to hierarchical triples in which the third star is too distant to influence the fate of the inner binary. We have shown that interactions with mass provided by the outer star can change the time-to-merger from values larger than $\tau_H$ to values smaller than $\tau_H$. This adds to the total reservoir of binaries that can merge within a Hubble time. Note that the number of binaries added to this reservoir is almost certainly much larger than the numbers that leave it by being pushed to longer merger times by the effects of mass infall from an outer star. In addition, the conversion of NSs to BHs, and even of WDs to NSs and then to BHs provides a brand new reservoir of binaries that will experience BH-BH mergers. If triple systems are not rare, such transformations have a good chance of contributing significantly because binaries containing the less massive stars that produce NSs and WDs are more common than the binary systems producing only BHs. There are caveats, including the effects of supernovae. Nevertheless it is worth noting that binaries undergoing transformations should be considered as potentially important sources of BH-BH mergers. A parallel set of arguments applies to mergers involving NSs. In addition to influencing the merger rates, mass from a third star can change the distribution of merger properties. In particular, the merging components are likely to have nearly equal mass, at least if the principles we have used to trace the path of incoming mass are correct. The values of the masses depend on how much mass is available from the third star and on the efficiency of accretion. 
If the total mass added to the inner binary is small, then we expect there to be merging NSs with masses near the Chandrasekhar mass, and low-mass merging BHs, with masses less than about $5\,M_\odot$. If the outer star is more massive, and/or if there is a sequence of outer stars, mergers of nearly-equal high-mass BHs are expected. \subsubsection{X-ray emission} Continuing accretion is also a signature expected for a subset of our post-merger systems. If the compact object formed through the merger continues to accrete material provided by the donor star, it will emit X-rays. Whether we would be able to detect the X-rays depends on the accretion luminosity and the distance from us. Consider, for example, an X-ray source with X-ray luminosity $L_X=10^{40}$~erg~s$^{-1}$, and a power-law spectrum with $\Gamma=1.7$ and intervening gas with $N_H = 1 \times 10^{21}$~cm$^{-2}$. If this X-ray source (XRS) were $50$~Mpc away, {\sl Chandra's} ACIS-S detector would record roughly $2$ counts per ks, so that the source would be detectable. The proposed {\sl Lynx} mission could record $\sim 50$ times as many counts, potentially allowing short-time-scale variations to be traced. Thus, post-mergers at intermediate distances may be detected as X-ray binaries. {\bf Hierarchical X-Ray Triples:} Finally, many of the types of systems we consider should be detectable prior to merger as X-ray hierarchical triples. In our model of mass impinging on an inner binary, the compact accretors may be detected as powerful X-ray emitters. X-ray emission therefore provides a way to identify systems within which a compact-object binary is accreting. The signature that identifies hierarchical triples is periodic or quasiperiodic modulation of the X-rays, where the repetition times are harmonics of the inner orbital period.
There are many possible reasons for X-rays from binaries to exhibit a range of periodic and/or quasiperiodic signatures: for example, complex accretion flows and warped disks can introduce periodicity. It is therefore important to model the signatures in order to test whether they are consistent with accretion onto an inner binary, or whether another explanation is equally good or better. Typical galaxies house dozens of bright X-ray binaries. If accretion from a wide-orbit star is common, then a fraction of X-ray sources that we have assumed to be binaries may actually be hierarchical triples. Even if there are only a handful of X-ray triples among the hundreds of bright X-ray sources in nearby galaxies, it may be possible to identify them in archived data. We note that only a fraction of the triples may include close binaries that will merge in a Hubble time. Thus, the numbers of hierarchical triple accretors could be relatively large compared with the lifetime-scaled merger rate. The discovery of such systems would be a powerful indication that our model of mass transfer to compact inner binaries may be important. \subsection{Conclusions} Close binary systems form one of the most important classes of astrophysical objects. Their importance is highlighted by the fact that aLIGO has begun to discover the gravitational-wave signatures of their mergers. This makes them the first observed multimessenger emitters. We have introduced triple-star pathways that have the potential to contribute substantially to the rates of gravitational mergers. This reduces the pressure on other channels to produce all events. The contributions of triples, however, have yet to be quantified relative to the contributions of isolated binaries. This will be an important next step. The ubiquity of triples involving massive stars tells us that it is an important channel to explore.
It remains to quantify the relative contributions of hierarchical triples to several important processes: Type Ia supernovae, merging NS-NSs, NS-BHs, and BH-BHs. Do triples contribute only a small fraction? Or do they make significant and measurable differences to the event rates? These questions can be answered in part through continuing observations of the events themselves. For example, are merging low-mass BHs more common than expected based on binary interactions alone? Other observations, for example improved measurements of the properties of primordial triples, will also be important. Theoretical work is also needed to determine the role that 3-body dynamical interactions have in helping to determine the initial characteristics of inner binaries: what are the initial conditions needed as input to calculations designed to realistically model mass transfer from a third star? Other questions have to do with developing more detailed evolutions of mass transfer from a third star. For example, how are winds focused? What is the role of irradiation? What is the geometry of accretion? We have shown that the conditions needed for components of an inner binary to interact with mass from a star in a hierarchical orbit should be common in that they extend over a wide range of orbital separations, donor masses, and characteristics of the inner binary. Accretion from an outer star is, therefore, a process that is certain to occur. Fortunately there are many links between our model and a range of observational signatures that may be detected pre-merger, during merger, or even in the epoch after merger. In addition there are connections to Type Ia supernovae. \section{Appendix: Roche-lobe approximation} Mass from star 3 can be channeled directly to the inner binary in a process analogous to what happens when a donor star in an isolated binary system fills its Roche lobe. 
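As a sketch of the relevant potential (notation ours, not taken from the text): at the donor's location $\mathbf{r}$, measured from the center of mass of the inner binary, the gravitational potential of the inner pair is
\begin{equation*}
\Phi(\mathbf{r},t) \;=\; -\frac{GM_1}{|\mathbf{r}-\mathbf{r}_1(t)|}-\frac{GM_2}{|\mathbf{r}-\mathbf{r}_2(t)|}\;\approx\; -\frac{GM_T}{|\mathbf{r}|}+\delta\Phi(\mathbf{r},t),\qquad M_T=M_1+M_2,
\end{equation*}
where $\mathbf{r}_{1,2}(t)$ are the (time-dependent) positions of the inner components and $\delta\Phi$ is a small time-dependent correction that falls off rapidly once $|\mathbf{r}|$ is large compared with the inner separation. The dominant, time-independent term is what permits the Roche-lobe formalism to be carried over to the triple.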
The definition of the Roche lobe of a star in a binary is based on the existence of an equipotential surface that takes into account the gravitational attraction of both the donor and its companion, and the rotation of the donor (whose spin period is equal to the orbital period). This equipotential surface is smooth and constant in a frame that rotates with the outer binary. Our hierarchical triple, however, introduces a crucial new feature: a time-dependent gravitational potential, $\Phi(t)$, whose instantaneous value at any point in space depends on the positions of the individual components of the inner binary. Fortunately, the fact that the outer orbit is significantly wider than the inner orbit means that a good approximation to the gravitational potential experienced by the donor can be written as a sum of (a)~a time-independent monopole term, corresponding to concentrating the total mass of the inner binary at its center of mass; and (b)~a time-dependent term associated with the orbital motion of the inner binary (formally the leading correction is a quadrupole term, since the dipole moment vanishes when the expansion is taken about the center of mass). The dominant term is the monopole term, which is the same as it would be for a single star with mass $M_T = M_1+M_2$. In addition, the donor star in the hierarchical triple would have a rotational period roughly equal to its orbital period, although this too is an approximation, since the motion of the individual stars in the close binary can introduce time-dependent tidal interactions. Thus, the Roche-lobe formalism can be applied to the case of mass transfer from a wide-orbit star onto a close-orbit binary, although there may be observable signatures associated with the short-orbital-period binary. \section{Appendix: Dynamical three-body interactions} The presence of the third body may have played a role in the earlier evolution of the hierarchical triple.
For example, the Lidov-Kozai mechanism creates an interplay between the eccentricity of the inner orbit and its orientation relative to the outer orbit \citep{1962AJ.....67..591K, 1962P&SS....9..719L, 2016ARA&A..54..441N}. A change in eccentricity can promote mass transfer, influencing the evolution of the inner binary. Thus, the initial conditions for mass transfer from the outer star may have been influenced by prior 3-body interactions. These dynamical interactions could have been active before the components of the inner binary interacted; during the binary interactions that produced the close-orbit compact-object binary\footnote{Dynamical interactions with the third body are not likely to play a dominant role during the intervals of most active mass transfer. They could, however, serve to enhance mass transfer. In addition, there are intervals, e.g., after the more massive of the two stars has evolved and before its close companion evolves, when dynamical interactions can be relatively important.}; and/or after the close binary was formed. It will be important to consider the full range of prior interactions in order to develop the profile of realistic configurations for the start of mass transfer from the third star. Furthermore, if the inner binary has not yet merged by the time mass transfer has ceased, 3-body dynamical interactions continue to influence the characteristics of the inner orbit. These interactions have the potential to influence the time at which the eventual merger will occur, and should be carefully considered. \section*{Acknowledgements} I would like to thank Daniel D'Orazio for discussions, and him, Morgan McCleod, and Amber Medina for their careful reading and comments on the manuscript. \bibliographystyle{mnras}
\section{Introduction} \label{sec_intro} Direct imaging of exoplanets has been a rapidly developing field over the past few years, with detections of several planets \citep[e.g.][]{lagrange2010,marois2010} and low-mass brown dwarfs \citep[e.g.][]{chauvin2005,thalmann2009} that have been enabled by the advent of high-quality adaptive optics correction and sophisticated PSF subtraction techniques. This has allowed for various kinds of characterization of such systems \citep[e.g.][]{janson2010,bowler2010,currie2011,janson2011}, and opened up new domains for the types of planets that can be studied; for instance, observations within the disk gap of the transitional disk LkCa15 \citep[e.g.][]{espaillat2007,thalmann2010} using sparse aperture masking has recently revealed what could potentially be a planet in the process of forming \citep{kraus2011}. Among these discoveries, one claimed planet detection that stands out as peculiar in many ways is that of Fomalhaut~b \citep[hereafter K08]{kalas2008}. The presence of a planet around Fomalhaut has been predicted on the basis of the geometry of the debris disk in the system \citep{kalas2005}, which has a sharp inner edge and a center that is offset from that of the star. This implies that it must be eccentric, which could in turn be indicative of the presence of a planetary companion (or several) exerting a gravitational influence on the disk \citep[e.g.][]{quillen2006,chiang2009}, although alternative interpretations do exist \citep{jalali2011}. Hence, when a point-source was discovered in two epochs within the disk gap (K08), with a direction of motion largely parallel to the disk edge, it was assumed that this was an image of the predicted disk-perturbing planet. However, the observed properties of the point-source are hard to consolidate with such an interpretation.
Unlike the other detections mentioned above, which were made at near-infrared wavelengths where young substellar objects radiate the bulk of their energy, the Fomalhaut~b candidate is detected only at visible wavelengths, where the expected emission is near zero. No corresponding near-infrared radiation has so far been detected, despite several attempts \citep{kalas2008,marengo2009}. Several alternative interpretations of the observed properties have been made, which will be discussed in Sect. \ref{sec_discuss}. Regardless of interpretation, it remains the case that the best way to increase our understanding of the system and test whether a planet is associated with the observed point-source is to better constrain its near-infrared properties. Motivated by this, we have performed a study with the \textit{Spitzer Space Telescope} in order to improve the detection limits at 4.5~$\mu$m, which is the wavelength range where Fomalhaut~b is expected to emit its peak flux. We describe our observational methods and data reduction in Sect. \ref{sec_obs}, our various data analysis approaches and results in Sect. \ref{sec_result}, and discuss the implications for the Fomalhaut system in Sect. \ref{sec_discuss}. \section{Observations and Data Reduction} \label{sec_obs} Our observations were taken with the Infrared Array Camera \citep[IRAC;][]{fazio2004} of the \textit{Spitzer Space Telescope} as part of program 70009, and consist of eight individual runs spread over cycle 7, from August 2010 through July 2011 (see Table \ref{t:obslog}). Each run consists of 48 exposures, structured as a cycle of a 12-point Reuleaux dither pattern with four exposures per position, which enables efficient spatial oversampling and bad pixel removal. The individual exposures have integration times of 10.4~s each with an execution time of 12~s, leaving the primary saturated in individual frames. All observations were taken in the 4.5~$\mu$m band where the peak flux of Fomalhaut~b is expected. 
The observing strategy was optimized for angular differential imaging (ADI) purposes, with a large spread in telescope roll angles. Since active rotation around the optical axis of the telescope is not possible, we exploited the fact that nominal rotation occurs naturally over the course of the year and distributed the observations as uniformly over the observing cycle as the scheduling allowed. Given the observing windows for Fomalhaut, this led to observations being acquired in August and December of 2010, and January and July of 2011, with a range of position angles as summarized in Table \ref{t:obslog}. \begin{table*}[p] \caption{Log of Fomalhaut observations in \textit{Spitzer} program 70009.} \label{t:obslog} \begin{tabular}{lccccc} \hline \hline AOR ID & Chan. & Exp. & Frames & MJD & PA \\ \hline 40250112 & 4.5~$\mu$m & 10.4~s & 48 & 55416.265 & 254.5$^{\rm o}$ \\ 40249856 & 4.5~$\mu$m & 10.4~s & 48 & 55423.457 & 257.6$^{\rm o}$ \\ 40249600 & 4.5~$\mu$m & 10.4~s & 48 & 55547.666 & 52.6$^{\rm o}$ \\ 40249344 & 4.5~$\mu$m & 10.4~s & 48 & 55561.778 & 58.4$^{\rm o}$ \\ 40249088 & 4.5~$\mu$m & 10.4~s & 48 & 55571.925 & 62.2$^{\rm o}$ \\ 40248832 & 4.5~$\mu$m & 10.4~s & 48 & 55579.832 & 65.0$^{\rm o}$ \\ 40248576 & 4.5~$\mu$m & 10.4~s & 48 & 55762.480 & 244.5$^{\rm o}$ \\ 40250368 & 4.5~$\mu$m & 10.4~s & 48 & 55765.605 & 245.7$^{\rm o}$ \\ \hline \end{tabular} \end{table*} Basic data reduction for all the observations was performed with the \textit{Spitzer} Science Center (SSC) IRAC Pipeline (version S18.18.0), which produced Basic Calibrated Data (BCD) frames and data quality masks for each individual exposure. We also used the post-BCD IRACproc package \citep[version 4.3;][]{schuster2006} for the sole purpose of removing outliers (cosmic-rays) for each frame. Subsequent steps were performed with custom procedures in IDL. An extra bad pixel removal step was introduced in order to identify and remove residual bad pixels that occurred only in single frames. 
This was done by identifying outliers from the median of each quadruplet of frames that were taken contiguously for a given dither position. The absolute center of the PSF in each frame was determined by cross-correlating the spider pattern with itself after a rotation by 180 degrees. All frames were then shifted to a common center and oversampled to a pixel scale of 300 mas/pixel, after which the ADI-based PSF subtraction could commence. This procedure is based on the Locally Optimized Combination of Images (LOCI) procedure \citep{lafreniere2007,lafreniere2009}, with adaptations to suit the new type of observational scheme and the science goals. The main aspects of the data set that distinguish it from most types of situations in which LOCI is applied are the extremely high PSF stability for space-based observations, the very large number of frames ($48 \times 8 = 384$), and the relatively small separation of the region of primary interest (compared to the PSF size, FWHM of 1.72\arcsec). The latter two factors in particular make the data prone to substantial self-subtraction in a regular LOCI context. Since one of our main objectives in this study is to set a firm upper flux limit in case no detection is made, we adapted the procedure in such a way as to avoid self-subtraction to a very high degree, while still maintaining a strong contrast performance. This is a conservative approach, and we note that it is entirely possible that the contrast could be even further enhanced with a more aggressive LOCI implementation. This will be a subject of future studies. Our adapted LOCI implementation proceeds as follows: First, each frame is sequentially subjected to an individual optimized reference PSF construction and subtraction. The basic optimization area is an annulus with an inner radius of 15 pixels and an outer radius of 60 pixels, centered on the star. This optimization area is split up into pieces of approximately 9-by-9 pixels ($\sim$1.5~FWHM).
In practice, this is done in such a way that the basic annulus is split up into four annuli, each 9 pixels in width, and each annulus is split up azimuthally in such a way that an integer number of segments of equal angular width are created, and such that the length of the inner edge of the segment is as close to 9 pixels as possible. We will refer to these segments as `exclusion areas' henceforth. For each area, an individual LOCI optimization is executed where the optimization area consists of the full 15--60 pixel annulus, but with the exclusion area removed. Based on the resulting LOCI coefficients, a subtraction is then made only in the exclusion area. The resulting small area is saved to a frame which is put together piece by piece from the subtractions corresponding to the respective exclusion areas. Hence, in this procedure, the subtraction area is equal to the exclusion area, and is completely non-overlapping with the optimization area. The advantage of this approach is that it becomes impossible for the algorithm to systematically fit for any companion in the data, and thus the self-subtraction will be approximately zero. The possible cost comes from the fact that we exclude the stellar PSF regions that are physically the closest to the companion, and which may therefore correlate best with the actual PSF noise at the exact position of the companion. As we will see in the testing described in Sect. \ref{sec_result}, the procedure indeed works extremely well for avoiding any self-subtraction. For the procedure described above, the reference frames are chosen from the available library of frames based on how far separated the position angles are between the subtraction frame and a given reference frame. Since we wish to ensure a very low degree of self-subtraction of any real companion, we conservatively choose that the separation must be at least 1.72\arcsec (1~FWHM) at the inner edge of the exclusion area. 
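The core fit-and-subtract step described above can be sketched as follows. This is a minimal illustration in Python (our own notation, not the actual IDL pipeline): for a given exclusion area, the least-squares coefficients are computed over the optimization annulus with that area masked out, and the fitted reference combination is then subtracted only inside the exclusion area, so the fit can never absorb a companion located there.

```python
import numpy as np

def loci_subtract_segment(target, refs, opt_mask, excl_mask):
    """Fit a linear combination of reference frames to the target over the
    optimization region (the annulus with the exclusion area removed),
    then subtract the fitted combination only inside the exclusion area."""
    # Design matrix: one column per reference frame, sampled on the
    # optimization pixels only (the exclusion area is excluded from the fit).
    A = np.stack([r[opt_mask] for r in refs], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, target[opt_mask], rcond=None)
    # Reconstruct the stellar PSF model on the exclusion pixels and subtract
    # there; all other pixels are left untouched by this call.
    model = sum(c * r[excl_mask] for c, r in zip(coeffs, refs))
    out = target.copy()
    out[excl_mask] -= model
    return out
```

A full reduction would loop this over every exclusion area (with its own reference-frame selection) and stitch the subtracted segments together, as described in the text.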
This corresponds to different position angles at different separations from the star, hence the subtraction areas at larger separation typically have access to a larger number of reference frames. However, since the position angles are spread over a large range (see Table \ref{t:obslog}), every examined position has access to a sufficient number of reference frames for a very high quality PSF subtraction. Once all frames have been subjected to the PSF subtraction, they are de-rotated to a common sky orientation where North is up and East is to the left, and collapsed into a final frame using the median of the individual frames. Another step is then performed, in order to take boundary effects into account. If a real companion happens to be positioned exactly on the boundary between two or more exclusion zones, it is partially vulnerable to residual oversubtraction even in the above procedure. To overcome this effect, we perform three additional LOCI procedures in exactly the same way as described above, with only one difference: In the first additional procedure, the exclusion zones are shifted one half step in the azimuthal direction, in the second one, they are shifted one half step in the radial direction, and in the final one, they are shifted half a step in each direction. In this way, four different reduced frames are available to check whether a companion becomes fainter due to boundary effects in some of the frames; this additionally allows us to check for spurious features that could occur in some reductions but not in others. In our case, there is a box-like artefact in two of the images, but otherwise they all show consistent patterns; hence, for the further analysis we use the mean of the frames from the four reductions. In order to evaluate the achieved contrast as a function of separation, we use the same procedure as described in \citet{marengo2009}, evaluated from the standard deviation in consecutive 1-pixel annuli.
An important difference is that we always use a 5$\sigma$ criterion here for our measurements, rather than 3$\sigma$ as in \citet{marengo2009}. This is more stringent in the presence of speckle noise, although note also that there is a fairly close equivalence between a 5$\sigma$ single-signature criterion and a 3$\sigma$ double-signature criterion \citep{janson2008}, the latter of which is relevant for the \citet{marengo2009} data. \section{Results and Analysis} \label{sec_result} We show the final reduced image in Fig. \ref{f:fomaltile} and the corresponding sensitivity limits in Fig. \ref{f:contrast}, where we also plot the expected brightnesses for planets of a few different masses at ages of 200 and 400~Myr. A very substantial improvement in contrast performance is achieved with our LOCI implementation, with an order of magnitude improvement in flux detectability compared to the conventional ADI reduction in \citet{marengo2009}, and with a more stringent detection criterion. No signature is found at the position where Fomalhaut~b would be expected. Hence, we estimate an upper limit (5$\sigma$) at the relevant position based on the standard deviation in a 7-by-7 pixel box ($\sim$1.2 FWHM on the side). This gives an upper limit of 16.7~mag, which corresponds to 38.8~$\mu$Jy, again more than an order of magnitude improvement over previous data. \begin{figure*}[p] \centering \includegraphics[width=16cm]{f1.eps} \caption{Final reduced image for the real data (left) and for the data with an artificial companion introduced at the expected position of Fomalhaut~b (right). Arrow 1 points out the expected position of the companion based on earlier detections in the visible-light images. There is no corresponding point source seen in the real data. The artificial companion in the right-hand image passes through the data reduction without any flux loss, verifying that the non-detection is real, such that a stringent upper flux limit can be set. 
Arrow 2 points toward the brightest possible point source in the field. Its position is consistent with a ring-nested orbit, but the significance is too low to make any assessment of whether or not it is a real object. North is up and East is to the left in the images.} \label{f:fomaltile} \end{figure*} \begin{figure*}[p] \centering \includegraphics[width=12cm]{f2.eps} \caption{Sensitivity limit as a function of separation from Fomalhaut. The solid line is the azimuthally averaged sensitivity profile. There is a bump around 12\arcsec due to concentrated noise at some ranges of position angles. The expected position of Fomalhaut~b from the visible wavelength range detections is in a cleaner part of the image space, and the local sensitivity at this position is shown by the black asterisk. Also plotted as dashed lines are the expected brightnesses for planets with a few different combinations of mass and age, according to models from \citet{spiegel2011}.} \label{f:contrast} \end{figure*} In order to test that the non-detection is real and not an effect of any unexpected oversubtraction in the LOCI procedure, we make a full reduction following the exact same procedure as described in the previous section, except we also introduce a faint artificial companion (a Gaussian with 1.72\arcsec FWHM) in all the pre-LOCI frames, at the expected position of Fomalhaut~b, with a flux of 57~$\mu$Jy (this corresponds to an effective temperature of $\sim$250~K, or equivalently 0.5--1~$M_{\rm jup}$ at 200~Myr and 1--2~$M_{\rm jup}$ at 400~Myr). The artificial companion passes through the LOCI reduction entirely unaffected, with a measured flux in the final frame that matches the input flux within the error bars, and is clearly visible at 7$\sigma$ confidence in the final frame (see Fig. \ref{f:fomaltile}).
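The synthetic companion used in such an injection test can be generated in a few lines. A sketch (our own notation; the FWHM of 1.72\arcsec is expressed in the 300~mas oversampled pixels, and the array is normalized so that it integrates to the desired total flux):

```python
import numpy as np

def gaussian_companion(shape, y0, x0, fwhm_pix, total_flux):
    """Synthetic 2-D Gaussian point source with a given FWHM (pixels)
    and total integrated flux (arbitrary units, e.g. micro-Jy)."""
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    g = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))
    return total_flux * g / g.sum()  # normalize: image sums to total_flux

# FWHM of 1.72 arcsec on the 300 mas/pixel oversampled grid:
fwhm_pix = 1.72 / 0.30
companion = gaussian_companion((101, 101), 50, 50, fwhm_pix, 57.0)  # 57 uJy
```

Adding such an array to every pre-reduction frame (at the appropriate, rotated position) and measuring the recovered flux after the reduction is the essence of the validation described above.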
Note that the introduction of a new feature in the data affects the LOCI reduction itself -- since the artificial companion exists in most reference frames as an additional feature that has to be fit for by the algorithm, the fit quality will in general be slightly worse on average. Since the extent of the companion is small with respect to the optimization area and it is faint with respect to the stellar PSF, the effect is small but noticeable as a marginally higher general noise level in the final reduced frame. In summary, the procedure described here validates that the non-detection is real, and thus the upper flux limit is relevant. We show our upper limit in Fig. \ref{f:400k}, along with other upper limits from the literature at various wavelengths, as well as the detection values in the visual wavelength range. We also compare these values with various theoretical models. One model spectrum is from \citet{burrows2003} (henceforth BSL03), corresponding to a 2--3~$M_{\rm jup}$ planet (interpolated between 2~$M_{\rm jup}$ and 5~$M_{\rm jup}$ to match the flux value at F814W\footnote{F606W and F814W are filters for the Advanced Camera for Surveys on the \textit{Hubble Space Telescope}, centered on wavelengths of $\sim$600 and $\sim$800~nm, respectively}) at $\sim$200~Myr. As noted in K08 and \citet{marengo2009}, this model is similar to e.g. the \citet{fortney2008} model except in H-band where the BSL03 flux is higher. It can be seen from the comparison to the data that in the context of this model, a thermal flux interpretation is entirely inconsistent with both the non-detection in H-band and our non-detection in the IRAC 4.5~$\mu$m channel. The H-band flux is strongly model-dependent as it is sensitive to uncertainties in opacity and the treatment of clouds. However, by contrast, the 4.5~$\mu$m flux is very insensitive to these effects, and varies only very marginally across different models. 
To show this, we use newer models from \citet{spiegel2011} based on \citet{burrows2011} (henceforth BHN11), which have improved opacities and cloud treatment, and also include a set of entirely cloud-free models for comparison. \begin{figure*}[p] \centering \includegraphics[width=16cm]{f3.eps} \caption{Model comparisons to the observational data for a $\sim$400~K atmosphere ($\sim$2--3~$M_{\rm jup}$ at $\sim$200~Myr for BSL03, 4~$M_{\rm jup}$ at 200~Myr for BHN11). The solid line in each panel is the model spectrum, and the black lines are the corresponding fluxes in the respective photometric bands. Crosses mark detections and triangles mark upper limits. HST data points from K08 are in green and Keck/Gemini data points from K08 are in brown. Note that the error bars at F606W and F814W are smaller than the sizes of the symbols. Red symbols are \textit{Spitzer} upper limits from \citet{marengo2009}. Our new upper limit from \textit{Spitzer} is the magenta triangle. Upper left: A BSL03 model. Upper right: A BHN11 model with patchy clouds and Solar composition. Lower left: A BHN11 model with patchy clouds and increased metallicity. Lower right: A BHN11 model with Solar composition and a cloud-free atmosphere. Models in this effective temperature range are required to produce an adequate amount of flux in F814W, but they are typically inconsistent with the upper flux limits in H-band and/or L'-band, and always fully inconsistent with the upper limit at 4.5~$\mu$m.} \label{f:400k} \end{figure*} By using these new models and including clouds, it is possible to suppress the H-band flux to a significant extent (though typically still not quite to a sufficient extent for a non-detection to be consistent with thermal flux in F814W). However, this is not the case at 4.5~$\mu$m.
The flux remains very stable for a constant effective temperature (typically $\sim$400~K for these model comparisons), regardless of how the clouds are treated including the cloud-free case, independently of opacity treatment and also of specific metallicity (the BHN11 models provide both Solar and super-Solar metallicity cases). We conclude that the effective temperatures required to get any substantial contribution of thermal flux to the observed F814W data point are simply inconsistent with the non-detection at 4.5~$\mu$m. In order to comply with the upper limit at 4.5~$\mu$m, we need effective temperatures in the range of $\sim$200~K or lower, corresponding to, e.g., 1~$M_{\rm jup}$ at 400~Myr (see Fig. \ref{f:200k}). As a side point, this latter age would correspond to a new but so far unpublished estimate (E. Mamajek, priv. comm.) which is a bit older than the mean estimate of 200~Myr used in K08 and \citet{marengo2009}. Here we do not make any assessment of the relative credibility of these two estimates, but simply remark that it has no real relevance for the spectral comparisons we are performing here. The main factor (beyond cloud treatment and opacity) that affects the spectral energy distribution is the effective temperature. The mass to which this temperature corresponds depends on the age and vice versa, but this has little impact on the spectrum, especially for the small discrepancy in age that we are concerned with here. Hence, as long as we do not actually try to determine the mass, we do not need to assess which of 200~Myr or 400~Myr is the better estimate. While on this note, it is worth pointing out that hot/warm/cold-start models are of no significant relevance to this discussion, since convergence will have occurred at these ages and masses, regardless of initial entropy \citep{spiegel2011}. 
\begin{figure*}[p] \centering \includegraphics[width=16cm]{f4.eps} \caption{Model comparisons to the observational data for a $\sim$200~K atmosphere (1~$M_{\rm jup}$, 400~Myr). The symbols have the same meaning as in Fig. \ref{f:400k}. Upper left: A BSL03 model. Upper right: A BHN11 model with patchy clouds and Solar composition. Lower left: A BHN11 model with patchy clouds and increased metallicity. Lower right: A BHN11 model with Solar composition and a cloud-free atmosphere. These models are marginally consistent with the upper flux limit at 4.5~$\mu$m, but predict one to several orders of magnitude too little flux to be consistent with thermal radiation in F814W.} \label{f:200k} \end{figure*} For the colder models required to match the 4.5~$\mu$m upper flux limit, virtually all flux at shorter wavelengths is lost, and there is no way to match the F814W point with any thermal flux. The closest case is that of a completely cloud-free atmosphere and solar metallicity, but even in this case the flux is more than an order of magnitude too small (see Fig. \ref{f:200k}). Increasing metallicity has the effect of decreasing flux at 800~nm compared to 4.5~$\mu$m, hence decreasing metallicity could have the opposite effect. However, Fomalhaut~A has a super-solar metallicity with a mean measured value of 0.3~dex in a compilation of literature values in \citet{soubiran2010}. Fomalhaut~b should have an equal or larger metallicity -- thus, adjusting the metallicity is not a feasible route toward reaching consistency with the observational data. Based on the above results and considering the further aspects of the collected body of observations of Fomalhaut as will be discussed in detail in Sect. \ref{sec_discuss}, it is highly unlikely that the observed flux at visible wavelengths has any direct connection to the suspected giant planet that might shepherd the debris disk of Fomalhaut and force it into an eccentric state \citep{kalas2005}.
In this context, and considering our very strong detection limits, it is interesting to assess whether this shepherding planet (what might be referred to as the `real' Fomalhaut b) can be seen in our images. Since the shepherding planet can be as low in mass as 0.5~$M_{\rm jup}$ \citep{chiang2009}, it is fully plausible that it could remain undetected in our images if the age is as old as 400~Myr, and since its orbit covers a range of projected separations, it can also hide in some parts of the orbit even if the mass is slightly higher. However, we do cover a very large fraction of its possible parameter space, and it is noteworthy that the brightest candidate point-source in the field, with a significance of 4.3$\sigma$, is in fact located at a position that would be consistent with a ring-nested orbit (arrow 2 in Fig. \ref{f:fomaltile}). The fact that this possible point source is not at a 5$\sigma$ confidence level means that more data or a further improved PSF reduction would be necessary in order to test its validity. \section{Discussion} \label{sec_discuss} In this section, we discuss in detail how our non-detection at 4.5~$\mu$m affects the interpretation of Fomalhaut~b, and what can be deduced about the system from the full body of existing data. The data points that exist for the detected point-source in K08 are two detections in the F606W filter from 2004 and 2006, and one detection in the F814W filter from 2006. These data points are shown in Figs. \ref{f:400k} and \ref{f:200k}. Note that the object is variable between 2004 and 2006. This is not a small effect; in fact, it corresponds to a change in brightness (dimming) of a factor 2, at a confidence of 8$\sigma$. This is the same level of confidence as, e.g., the second-epoch F606W detection of the point-source altogether.
Hence, to the extent that we can trust the data at all, we must consider this variability as a real effect, and it needs to be accounted for in a comprehensive interpretation of the object. In addition to these data, a third-epoch HST observation has also been acquired but has not yet been published at the time of writing (P. Kalas, priv. comm.). There are also upper flux limits from non-detections in a range of bands in \citet{kalas2008}: F435W, H, CH4S, CH4L, and L', and upper limits at 3.6, 4.5, 5.8 and 8.0~$\mu$m from \textit{Spitzer} \citep{marengo2009}, the second of which we improve on in this article. There are two main lines of interpretation of the point source in K08, only one of which actually includes any flux from a planet. In that scenario, the flux at F814W originates from the thermal emission of a planet, which has to be close to a mass of $\sim$3~$M_{\rm jup}$ in order to fit the data point assuming an age of 200~Myr \citep[e.g.][]{burrows2003,fortney2008}. This is, however, poorly consistent with the rest of the available data. Most obviously, it is a factor 20--40 brighter in F606W than expected in such a scenario. In order to explain the F606W data in terms of brightness and variability, K08 invoke a hypothesis of H$\alpha$ accretion onto the planet. Given that this would require gas accretion at the same rate as in few-Myr-old T~Tauri stars like GQ~Lup (accretion rate calculated in the supplemental material of K08), whereas Fomalhaut is 200--400~Myr old and has no other known signs of gas anywhere in the system, we consider this hypothesis highly unlikely. Aside from this, there is the issue that a $\sim$3~$M_{\rm jup}$ companion should also be detectable in the near-infrared.
This was an issue already in K08, where the H-band flux predicted by theoretical models was significantly higher than the upper limit from observations, and became an even larger issue given the \textit{Spitzer} data published by \citet{marengo2009}, where the additional limits at longer wavelengths provided very little opportunity for any flux to remain undetected from such a companion. As was shown in the previous section, our new \textit{Spitzer} data now provide even tighter constraints on the thermal emission hypothesis, and provide the opportunity to conclusively address this issue. We find that any thermally emitting companion responsible for the F814W flux (effective temperatures of $\sim$400~K, e.g. 2--3~$M_{\rm jup}$ at 200~Myr) would have emitted more than an order of magnitude more flux at 4.5~$\mu$m than our 5$\sigma$ upper limit, regardless of the choice of theoretical models. Conversely, any companion that thermally emits radiation at 4.5~$\mu$m at levels comparable to our upper limit (effective temperatures of $\sim$200~K, e.g. 1~$M_{\rm jup}$ at 400~Myr) would emit at least an order of magnitude too little flux at F814W to explain the observations, and for most realistic model parameters (e.g. any inclusion of clouds) the flux would be even smaller. Hence, we can firmly exclude the hypothesis that any of the observed flux in K08 actually originates from a giant planet. Given that the SED of the K08 point source can be interpreted as having a reasonable match to the stellar SED, it seems more likely that what is seen in the K08 images is some form of reflected or scattered radiation from the star. Given the large effective area that is required for this, the only plausible origin for reflected/scattered radiation is dust. Hence, it is likely that we are seeing a concentration of dust, which may or may not be associated with a planet.
We will discuss some dust-related interpretations in the following: firstly, we will consider the hypothesis of an optically thick disk around a giant planet. This is the second preferred scenario in K08, and requires a disk radius of at least $\sim$20~$R_{\rm jup}$ (for the case of high albedo; larger for the case of low albedo). It may not be unreasonable that circumplanetary rings start out with such properties. However, it might also be argued that the age of the Fomalhaut system of 200--400~Myr should have provided adequate time for moons to form and excavate the disk, leaving only rings of a much smaller effective size. Regardless of whether the optically thick disk scenario is fundamentally realistic or not, there are several reasons why this scenario is inconsistent with the existing data. One very important constraint is that such a disk cannot account for the factor 2 variability observed in F606W. Furthermore, the spin of the star has recently been measured with interferometry \citep{lebouquin2009}. If the spin of the star is aligned with the plane of the disk, this means that the fainter Western side is closer to us than the brighter Eastern side. Although this is opposite to what would be expected for purely forward-scattering dust, it is shown in \citet{min2010} to be consistent with the scattering behaviour of large dust grains. If it is indeed the case that the Western side of the disk is closer to us, and the K08 point source orbits within the disk, then it follows that the object is located between the parent star and Earth, in the radial direction. It would be very difficult for an optically thick disk to reflect large amounts of light to Earth under such circumstances. In addition to these points, another important argument against the involvement of a giant planet (which also applies to the thermal emission case discussed above) is the orbit of the object.
The third epoch astrometric measurement implies a ring-crossing orbit for the point source (P. Kalas, priv. comm.). Although this is not yet published, it would not be surprising given the previously published data, because, in fact, already the first two epochs are inconsistent with an orbit that traces the edge of the ring, at a $\sim$2$\sigma$ level (the direction is largely consistent with such an orbit, but the speed is not). This is pointed out by \citet{chiang2009}, who do not put a large emphasis on this fact, as they argue that the error bars are probably underestimated. However, we note that this is diametrically opposite to the interpretation of the astrometric errors in K08, where it is concluded that they are an upper limit to the real error (K08, supplemental material). A ring-crossing orbit would be inconsistent with an association of the point source to a giant planet, as it would strongly affect the geometry of the disk, hence any planet associated with the point source would need to be low in mass. In the context of a low-mass planet, we note that one way to suppress the 4.5~$\mu$m flux with respect to shorter wavelengths is to consider hotter temperatures and smaller surfaces. Effective temperatures higher than the $\sim$200--400~K that we have considered thus far are not reasonable for isolated objects at the system age, particularly for smaller (and thus lower-mass) objects than Jupiter-class planets. However, following intense bombardment of planetesimals, rocky protoplanets may acquire molten surfaces and reach effective temperatures of $\sim$1000--3000~K over brief periods of time \citep{kempton2009}. Hence, the possibility that the observed light-source could be a hot collisional afterglow of a $\leq$10~$M_{\rm Earth}$ object should not be dismissed out of hand. 
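The blackbody considerations invoked below can be reproduced numerically. The following is a minimal sketch (not the authors' code): the distance of $\sim$7.7~pc to Fomalhaut is an assumed value that is not quoted in the text, while the radius (1.8~$R_{\rm Earth}$) and the flux points (0.30~$\mu$Jy at F606W, 0.71~$\mu$Jy in H-band, 38.8~$\mu$Jy at 4.5~$\mu$m) are those given in the discussion.

```python
import math

# Physical constants (SI units)
H_PLANCK = 6.626e-34   # Planck constant [J s]
C_LIGHT = 2.998e8      # speed of light [m/s]
K_BOLTZ = 1.381e-23    # Boltzmann constant [J/K]
R_EARTH = 6.371e6      # Earth radius [m]
PARSEC = 3.086e16      # parsec [m]

def blackbody_flux_uJy(T, wavelength, radius, distance):
    """Flux density (in micro-Jansky) at Earth of a spherical blackbody."""
    nu = C_LIGHT / wavelength
    # Planck function B_nu in W m^-2 Hz^-1 sr^-1
    b_nu = (2.0 * H_PLANCK * nu**3 / C_LIGHT**2) / math.expm1(H_PLANCK * nu / (K_BOLTZ * T))
    f_nu = math.pi * b_nu * (radius / distance) ** 2   # W m^-2 Hz^-1
    return f_nu / 1e-26 * 1e6                          # 1 Jy = 1e-26 W m^-2 Hz^-1

R = 1.8 * R_EARTH   # molten-surface protoplanet radius (from the text)
d = 7.7 * PARSEC    # assumed distance to Fomalhaut

# A 1500 K surface still falls short of the faintest F606W detection (0.30 uJy),
# so a brightness temperature above 1500 K is required at 606 nm ...
print(blackbody_flux_uJy(1500.0, 606e-9, R, d))
# ... while a ~700 K surface stays below both the H-band (0.71 uJy)
# and the 4.5 um (38.8 uJy) upper limits.
print(blackbody_flux_uJy(700.0, 1.65e-6, R, d))
print(blackbody_flux_uJy(700.0, 4.5e-6, R, d))
```

Under these assumptions the sketch reproduces the qualitative tension stated in the text: the F606W detection demands a much hotter surface than the infrared non-detections permit.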
Still, it would probably be difficult to reproduce all the observed data points in such an interpretation, particularly the detection in F606W, and the simultaneous non-detections in H-band and at 4.5~$\mu$m. The \citet{kempton2009} models do not cover the full relevant wavelength range, but if we consider, for example, a planet of 10~$M_{\rm Earth}$ and 1.8~$R_{\rm Earth}$ and work from pure blackbody considerations, we can establish that the brightness temperature required to reproduce even the lowest of the F606W data points (0.30~$\mu$Jy) is more than 1500~K, whereas the upper limits in H-band (0.71~$\mu$Jy) and at 4.5~$\mu$m (38.8~$\mu$Jy) both require brightness temperatures of $\sim$700~K or less. We consider that future modelling efforts would be worthwhile to examine whether these conditions and the rest of the flux limits can all be simultaneously fulfilled, but for the purpose of our discussion here, we simply treat it as an option that cannot be categorically excluded. On balance, it should be noted that an observation of this type of scenario is probably rather unlikely, for several reasons, including the fact that it is expected to last over timescales of $\sim$10$^4$~yr for the case of a thin atmosphere, very short compared to the age of Fomalhaut. The timescale can be extended if the atmosphere is thickened and clouds are included, but the observable brightness temperature decreases accordingly. As a side note, if clouds were involved, they could possibly account for the variability in F606W in this scenario. Given the SED of the point source and its variability, along with the considerations above, the perhaps most plausible way to consistently explain the observed properties of the K08 point source is through a cloud of dust, which is either transient or has a transient component. There are two possible scenarios associated with such an interpretation that have been suggested in the literature. 
In one scenario, the observed point source is a residual (gradually dispersing) dust cloud from a recent planetesimal collision. We certainly know that such collisions should occur frequently in the Fomalhaut system, given that they are the very origin of dust in debris disks. This scenario is mentioned by K08, who argue against it based on the fact that such collisions should be much more common within the actual ring feature than just outside of it where the point source is observed, hence the relative probability to observe it where it is observed should be low. This is certainly true, but we note that there is a clear selection effect involved -- due to the high visual brightness of the ring and the speckle-like nature of the noise, any number of equivalent events that hypothetically do happen within the ring feature would be likely to pass unnoticed. One might also hypothetically imagine that the cloud is in the present position for some specific dynamical reason, for instance if the material is trapped in resonance with a giant planet situated elsewhere. The second scenario is essentially the same as the first, but involves a central rocky/icy object with a mass less than $\sim$10~$M_{\rm Earth}$, to which a swarm of planetesimals is gravitationally bound \citep{kennedy2011}. Collisions between these planetesimals produce the observed dust. We consider both of these scenarios to be reasonable within the constraints set by the data, and simply conclude that the K08 point source is well consistent with a transient or semi-transient dust cloud, which may or may not be gravitationally bound to a central object of planetary mass. 
With regards to the fact that the point source has been frequently referred to as a directly imaged planet in the literature, we note that this is incompatible with the observational evidence, for two independent reasons: (1) Although it cannot be formally excluded, there is insufficient evidence to support that there is any compact object of planetary mass associated with the point source altogether. (2) Even if such an object is present, in several of our considered scenarios we do not observe any photons from this object itself, hence it cannot be established that it has been directly imaged. \section{Conclusions} In this paper, we have presented observations performed with \textit{Spitzer}/IRAC in the 4.5~$\mu$m band for the purpose of trying to detect thermal emission from Fomalhaut~b. A new LOCI-based PSF subtraction scheme was implemented to achieve high contrast with minimal companion flux loss, which enabled an order of magnitude improvement in contrast-limited sensitivity with respect to previous efforts. The non-detection of any flux at the expected position can therefore be used to provide strong constraints on the underlying physics of the point-source seen at visible wavelengths. In particular, we find that there is almost certainly no direct flux from a planet contributing to the visible-light signature. This, in combination with the existing body of data for the Fomalhaut system, strongly implies that the dynamically inferred giant planet companion and the visible-light point source are physically unrelated. This in turn implies that the `real' Fomalhaut~b still hides in the system. Although we do find a tentative point source in our images that could in principle correspond to this object, its significance is too low to distinguish whether it is real or not at this point. 
Concerning the visible-light point source, its underlying physics is unclear, but the only hypothesis that can be shown to reasonably fit all existing data is an optically thin dust cloud, which is transient or has a transient component. If this interpretation is valid, the cloud may or may not be physically bound to a central object in the super-Earth mass regime. \acknowledgements The Fomalhaut system is a rich topic of conversation, and we thank Adam Burrows, Carsten Dominik, James Graham, Ray Jayawardhana, Paul Kalas, Michiel Min, and many others for interesting discussions. This work is based on observations made with the \textit{Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. M.J. is funded by the Hubble fellowship. J.C.C., J.R.B., and P.W. were supported by grant AST-1009203 from the National Science Foundation. D.S.S. gratefully acknowledges support from NSF grant AST-0807444 and the Keck Fellowship. {\it Facilities:} \facility{Spitzer}.
\section{Notations} \label{intro} Throughout this paper, we consider a {\sl multiplex network} $\mathcal{G}$, made of $m\in\mathbb{N}$ {\sl layers} $G_1,\cdots,G_m$, such that each layer is a (directed or undirected) unweighted network $G_k=(X,E_k)$, with $X=\{e_1,\cdots,e_n\}$ (i.e. all layers have the same $n\in\mathbb{N}$ nodes). The {\bf transpose} of the adjacency matrix of each layer $G_k$ is denoted by $A_k=(a_{ij}^k)\in \mathbb{R}^{n\times n}$, where \[ a_{ij}^k =\left\{ \begin{tabular}{ll} 1 & \text{if $(e_j,e_i)\in E_k,$} \\ 0 & \text{otherwise,} \end{tabular} \right. \] for $1\le i,j\le n$ and $1\le k\le m$. The {\sl projection network} associated with $\mathcal{G}$ is the graph $\overline{G}=(X,E)$, where \[ E=\bigcup_{k=1}^m E_k. \] The {\bf transpose} of the adjacency matrix of $\overline{G}$ will be denoted by $\overline{A}=(\overline{a}_{ij})\in \mathbb{R}^{n\times n}$. Note that for every $1\le i,j\le n$ \[ \overline{a}_{ij} =\left\{ \begin{tabular}{ll} 1 & \text{if $a_{ij}^k\ne 0$ for some $1\le k\le m,$} \\ 0 & \text{otherwise.} \end{tabular} \right. \] The paper is structured as follows. In the next section, we will introduce different heuristic arguments suggesting proper ways of measuring centrality in multiplex networks. Section III is devoted to establishing, under reasonable conditions, the existence and consistency of the proposed measures of centrality. In Section IV, we report some computer experiments and simulations showing how the introduced measures provide substantially different results when applied to the same multiplex networks. These results are discussed in the concluding section. \section{Mathematical models for eigenvector centrality in connected multiplex networks} \label{heuristics} In the case of a multiplex network, the central question to be addressed is the following: How can one take into account all the interactions between the different subnetworks (channels, communities, layers...)
bearing in mind that not all of them have the same importance? It is essential, indeed, to remark that in order to get the centrality of a node it is necessary to take into account how the centrality (importance, influence,...) of a node is propagated within the whole network through different channels (layers) that are not necessarily additive. For instance, worldwide social networks (such as {\sf Facebook} or {\sf Twitter}) are characterized by very heterogeneous interactions, which are also typical of interactions among units in fields as diverse as climate systems \cite{DSMZK}, game theory \cite{GRCVS, GRAF}, interacting infrastructures \cite{Leicht, Buldyrev10} and many others (\cite{CFGGR}, \cite{Brummitt12}). With reference to the network $\mathcal{G}$, for each layer one can consider the classical eigenvector centrality of $G_k$, given by the principal eigenvector of $A_k$ (if it exists). Specifically, the eigenvector centrality of a node $e_i$ within the layer $G_k$ would be the $i^{th}$ entry of the positive and normalized vector $c_k \in\mathbb{R}^n$ corresponding to the largest eigenvalue of the matrix $A_k$. In a similar way, the eigenvector centrality of the projection network $\overline{G}$ will be the principal eigenvector of $\overline{A}$. The existence and uniqueness of these vectors are guaranteed by the Perron-Frobenius theorem for any irreducible nonnegative matrix. Interestingly, the Perron-Frobenius theorem can conveniently be extended to multiplex networks, leading to even deeper concepts of nodes' centrality. We remark that other extensions of the Perron-Frobenius theorem have been proposed for hypergraphs and nonnegative tensors (\cite{MichoelNacthergaele, YangYang}).
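As a minimal numerical sketch of these definitions (assuming NumPy; the two layers below are illustrative toy matrices, not taken from the paper), the layer centralities $c_k$ and the centrality of the projection network can be computed as:

```python
import numpy as np

def eigenvector_centrality(A):
    """Positive, 1-norm-normalized principal eigenvector of a nonnegative matrix."""
    vals, vecs = np.linalg.eig(A)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

# Two illustrative layers on the same three nodes; each A_k stores the
# *transpose* of the adjacency matrix, as in the text.
A1 = np.array([[0., 1., 1.],
               [1., 0., 1.],
               [1., 1., 0.]])   # layer 1: a triangle
A2 = np.array([[0., 1., 0.],
               [1., 0., 1.],
               [0., 1., 0.]])   # layer 2: a path 1-2-3

# Independent-layer centrality: one column c_k per layer
C = np.column_stack([eigenvector_centrality(A) for A in (A1, A2)])

# Projection network: union of the edge sets of all layers
A_bar = ((A1 + A2) > 0).astype(float)
c_bar = eigenvector_centrality(A_bar)
```

Each column of `C` is positive and sums to one, i.e. `C` is column stochastic.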
Once all the eigenvector centralities are computed, one can consider the {\sl independent layer} eigenvector-like centrality of $\mathcal{G}$ (abbreviated as the independent layer centrality of $\mathcal{G}$) as the matrix \[ C= \left( \begin{array}{c|c|c|c} c_1 & c_2 & \dots & c_m \end{array} \right) \in \mathbb{R}^{n\times m}. \] Notice that $C$ is column stochastic, since $c_k>0$ and $\|c_k\|_1=1$ for every $1\le k\le m$. Bearing in mind that the centrality (importance) of a node must be proportional to the centrality of its neighbors (lying on all layers), and considering that all layers have the same importance, one has that $$ \forall i,j\in X,\,\,\, c(i) \varpropto c(j)\,\,\, \mbox{ if }(j\to i)\in G_\ell,\,\,\, \ell\in\{1,\dots,m\}. $$ This allows defining the {\sl uniform eigenvector-like centrality} (abbrev. the uniform centrality) as the positive and normalized eigenvector $\widetilde c\in \mathbb{R}^n$ (if it exists) of the matrix $\widetilde A$ given by \[ \widetilde A=\sum_{k=1}^m A_k. \] This situation occurs, for instance, in social networks, where different individuals may have different relationships with other people, while one is generically interested in measuring the centrality of the network of acquaintances. Going a step further, one may consider that different layers are associated with different levels of importance (or influence), and include this sort of information in a matrix accounting for the mutual influence between layers. Thus, in order to calculate the importance (or influence) of a node within a specific layer, one must take into account also all other layers, as some of them may be relevant for that calculation. Consider, for instance, the case of a boss going to the same gym as one of his employees: the relationship between the two fellows within the gym layer has a totally different nature from that occurring inside the office layer, but the role of the boss (i.e.
his centrality) in this case can be even bigger than if he were the only person from the office frequenting that gym. In other words, one needs to consider the situation where the influence amongst layers is heterogeneous. To this purpose, one can introduce an {\sl influence matrix} $W=(w_{ij})\in \mathbb{R}^{m\times m}$ as a non-negative matrix $W\ge 0$ such that $w_{ij}$ measures the influence {\it of} the layer $G_j$ {\it on} the layer $G_i$. Once $\mathcal{G}$ and $W=(w_{ij})$ have been fixed, one then defines the {\sl local heterogeneous eigenvector-like centrality} of $\mathcal{G}$ (abbrev. the local heterogeneous centrality of $\mathcal{G}$) on each layer $G_k$ ($1\le k\le m$) as a positive and normalized eigenvector $c^{\star}_k\in \mathbb{R}^n$ (if it exists) of the matrix \[ A^{\star}_k=\sum_{j=1}^m w_{kj}A_j. \] Once again, the {\sl local heterogeneous eigenvector-like centrality} (abbreviated as local heterogeneous centrality) matrix of the multiplex network $\mathcal{G}$ is defined as \[ C^{\star}= \left( \begin{array}{c|c|c|c} c^{\star}_1 & c^{\star}_2 & \dots & c^{\star}_m \end{array} \right) \in \mathbb{R}^{n\times m}. \] Another important aspect to be elucidated is that, in general, the centrality of a node $e_i$ within a specific layer $k$ may depend not only on the neighbors that are linked to $e_i$ within the layer $k$, but also on all other neighbors of $e_i$ that belong to the other layers. That is the case of scientific citations in different areas of knowledge; indeed, imagine two scientists (a chemist and a physicist), one of whom has been awarded the Nobel Prize: the importance of the other scientist will significantly increase, even though the Nobel Prize laureate had few citations within the other researcher's area.
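The local heterogeneous centrality can be sketched numerically as follows (a toy example assuming NumPy; the layers and the influence matrix $W$ are illustrative, not from the paper):

```python
import numpy as np

def eigenvector_centrality(A):
    """Positive, 1-norm-normalized principal eigenvector of a nonnegative matrix."""
    vals, vecs = np.linalg.eig(A)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

# Transposed adjacency matrices of two illustrative layers
A1 = np.array([[0., 1., 1.],
               [1., 0., 1.],
               [1., 1., 0.]])
A2 = np.array([[0., 1., 0.],
               [1., 0., 1.],
               [0., 1., 0.]])
layers = [A1, A2]

# W[k, j] = influence of layer G_j on layer G_k (illustrative values)
W = np.array([[1.0, 0.5],
              [0.2, 1.0]])

# Column k of C_star is the principal eigenvector of A*_k = sum_j w_{kj} A_j
C_star = np.column_stack([
    eigenvector_centrality(sum(W[k, j] * layers[j] for j in range(len(layers))))
    for k in range(len(layers))
])
```

Note that if $W$ were the all-ones matrix, every $A^{\star}_k$ would reduce to $\widetilde A$ and all columns of `C_star` would coincide with the uniform centrality.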
This heuristic argument leads to the introduction of another concept of centrality: Given a multiplex network $\mathcal{G}$ and an influence matrix $W=(w_{ij})$, the {\sl global heterogeneous eigenvector-like centrality} of $\mathcal{G}$ (abbrev. global centrality of $\mathcal{G}$) is defined as a positive and normalized eigenvector $c^{\otimes}\in \mathbb{R}^{nm}$ (if it exists) of the matrix \[ A^{\otimes}= \left( \begin{array}{c|c|c|c} w_{11}A_1 & w_{12}A_2 & \cdots & w_{1m}A_m \\ \hline w_{21}A_1 & w_{22}A_2 & \cdots & w_{2m}A_m \\ \hline \vdots & \vdots & \ddots & \vdots \\ \hline w_{m1}A_1 & w_{m2}A_2 & \cdots & w_{mm}A_m \end{array} \right) \in \mathbb{R}^{(nm)\times (nm)}. \] Note that $A^{\otimes}$ is the Khatri-Rao product of the matrices \[ W= \left( \begin{array}{c|c|c} w_{11} & \cdots & w_{1m} \\ \hline \vdots & \ddots & \vdots \\ \hline w_{m1} & \cdots & w_{mm} \end{array} \right) \text{ and } \left( \begin{array}{c|c|c|c} A_1 & A_2 & \cdots & A_m \end{array} \right). \] In analogy with what has been done before, if one introduces the notation \[ c^{\otimes}= \left( \begin{array}{c} c^{\otimes}_1 \\ \hline c^{\otimes}_2 \\ \hline \vdots \\ \hline c^{\otimes}_m \end{array} \right), \] with $c^{\otimes}_1,\cdots, c^{\otimes}_m\in\mathbb{R}^n$, then one can define the {\sl global heterogeneous eigenvector-like centrality matrix} of $\mathcal{G}$ as the matrix given by \[ C^{\otimes}= \left( \begin{array}{c|c|c|c} c^{\otimes}_1 & c^{\otimes}_2 & \dots & c^{\otimes}_m \end{array} \right) \in \mathbb{R}^{n\times m}. \] Note that, in general, $C^{\otimes}$ is neither column stochastic nor row stochastic, but the sum of all the entries of $C^{\otimes}$ is 1. Note also that the matrix $A^\otimes$ may be interpreted as a linear operator from the tensor product $\mathbb{R}^n\otimes\mathbb{R}^m$ to itself, of which $c^\otimes$ is the normalized principal eigenvector.
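A sketch of the corresponding computation (assuming NumPy; the layers and influence matrix below are illustrative toy data, not from the paper) assembles $A^{\otimes}$ directly from its block definition:

```python
import numpy as np

def eigenvector_centrality(A):
    """Positive, 1-norm-normalized principal eigenvector of a nonnegative matrix."""
    vals, vecs = np.linalg.eig(A)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

# Transposed adjacency matrices of two illustrative layers, and an influence matrix
A1 = np.array([[0., 1., 1.],
               [1., 0., 1.],
               [1., 1., 0.]])
A2 = np.array([[0., 1., 0.],
               [1., 0., 1.],
               [0., 1., 0.]])
layers = [A1, A2]
W = np.array([[1.0, 0.5],
              [0.2, 1.0]])
n, m = 3, 2

# Block (k, j) of A_otimes is w_{kj} A_j (the Khatri-Rao structure above)
A_otimes = np.block([[W[k, j] * layers[j] for j in range(m)] for k in range(m)])

c_otimes = eigenvector_centrality(A_otimes)   # length n*m, entries sum to 1

# Stack the m blocks of length n as the columns of the n x m matrix C_otimes
C_otimes = c_otimes.reshape(m, n).T
```

As noted above, `C_otimes` need not be column or row stochastic, but the sum of all its entries is 1.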
Using a tensor algebra approach to represent networks with different types of interactions is not new. For example, a multilinear version of the Perron-Frobenius Theorem may be used to define the centrality of uniform hypergraphs (see, for instance, \cite{PearsonZhang}); furthermore, a Perron-Frobenius-type Theorem for general (not necessarily uniform), irreducible hypergraphs has been proved by \cite{MichoelNacthergaele}. \section{Existence and consistency} \label{existenz} Let us now move to discussing the conditions that guarantee the existence and uniqueness of the centrality measures introduced in the previous section. The natural question here is whether the strong connectedness of the projected graph $\overline{G}$ or, equivalently, the irreducibility of the nonnegative matrix $\overline{A}$, is a sufficient condition for the existence and uniqueness of our centrality measures. One can make use of the Perron-Frobenius theorem, as well as of standard results on irreducible matrices and strongly connected graphs, for which we refer the interested reader to Ref. \cite{Meyer}. In fact, recalling that the graph determined by $\widetilde{A}=\sum_{k}A_k$ coincides with the projected graph of the network, in the case of the Uniform Centrality we immediately get the following \begin{theorem}\label{thm:UCexist} If the projected graph $\overline{G}$ of a multiplex network $\mathcal{G}$ is strongly connected, then the Uniform Centrality $\widetilde{C}$ of $\mathcal{G}$ exists and is unique. \end{theorem} The case of the Local Heterogeneous Centrality is similar, as every column $c^\star_\ell$ of the matrix $C^\star$ is the principal normalized eigenvector of a linear combination $A^\star_\ell=\sum_k w_{\ell k}A_k$.
In particular, if $W$ is positive, the graph associated with every $A^\star_\ell$ is the projected graph of the multiplex network, hence one also gets the following \begin{theorem}\label{thm:LHCexist} If the projected graph $\overline{G}$ of a multiplex network $\mathcal{G}$ is strongly connected and $W>0$, then the Local Heterogeneous Centrality $C^\star$ of $\mathcal{G}$ exists and is unique. \end{theorem} A more delicate case is that of the Global Heterogeneous Centrality, which is constructed upon the principal normalized eigenvector of the matrix $$ A^\otimes=\left(\begin{array}{c|c|c|c} w_{11}A_1&w_{12}A_2&\cdots&w_{1m}A_m\\\hline w_{21}A_1&w_{22}A_2&\cdots&w_{2m}A_m\\\hline \vdots&\vdots&\ddots&\vdots\\\hline w_{m1}A_1&w_{m2}A_2&\cdots&w_{mm}A_m \end{array} \right). $$ Such a matrix is the transpose of the adjacency matrix of a graph with $nm$ nodes that we denote by $G^\otimes=(X^\otimes,E^\otimes)$, where $X^\otimes=\left\{e_{ik},i=1,\dots,n,\,\,k=1,\dots,m\right\}$ and $(e_{j\ell}, e_{ik})\in E^\otimes$ iff $w_{k\ell}a^\ell_{ij}\neq 0$. Unfortunately, even if the projected graph of a multiplex network $\mathcal{G}$ is strongly connected and $W$ is positive, the graph $G^\otimes$ is not, in general, strongly connected. In fact one can easily check that this is already the case for the example in which $\mathcal{G}$ consists of two nodes and two layers, with matrices: $$ A_1=\left(\begin{array}{cc}0&0\\0&1\end{array}\right),\quad A_2=\left(\begin{array}{cc}0&1\\0&0\end{array}\right). $$ Nevertheless, it is still possible to infer the existence and uniqueness of $C^\otimes$ from the strong connectedness of $\overline{G}$ and the positivity of $W$. Indeed, one has first to notice that, if $\overline{G}$ is strongly connected and $W$ is positive, then $G^\otimes$ satisfies: $$ (e_{j\ell}, e_{ik})\in E^\otimes\iff a^\ell_{ij}\neq 0\iff (e_j, e_i)\in E_\ell.
$$ Now, we say that a node $e_{j\ell}$ of $G^\otimes$ is a $\otimes$-{\it sink} when $a_{ij}^\ell=0$ for all $i$, so that the corresponding column of $A^\otimes$ is identically zero. If a node $e_{j\ell}$ is not a $\otimes$-sink, we claim that, given any other node $e_{ik}$, there exists a path in $G^\otimes$ going from $e_{j\ell}$ to $e_{ik}$. Indeed, since $e_{j\ell}$ is not a $\otimes$-sink, there exists an index $i_2$ with $a^{\ell}_{i_2 i_1}\neq 0$, where we set $i_1=j$ and $\ell_1=\ell$; since $\overline{G}$ is strongly connected, we may then continue with indices $i_3,\dots,i_r=i$ such that, for every $s\in\{2,\dots,r-1\}$, there exists an index $\ell_s\in\{1,\dots,m\}$ for which $a^{\ell_s}_{i_{s+1} i_{s}}\neq 0$. Thus, setting $\ell_r=k$, we get by construction $(e_{i_s\ell_s},e_{i_{s+1}\ell_{s+1}})\in E^\otimes$ for all $s\in\{1,\dots,r-1\}$, and this proves the claim. From these arguments, one may easily deduce that the normal form of the matrix $A^{\otimes}$ (cf. \cite[p. 46]{Varga}) is written as $$ N=P\cdot A^\otimes\cdot P^t=\left(\begin{array}{ccc|ccc} 0&\cdots&0&\star&\cdots&\star\\ \vdots&\ddots&\vdots&\vdots&&\vdots\\ 0&\cdots&0&\star&\cdots&\star\\\hline 0&\cdots&0&&&\\ \vdots&&\vdots&&\Huge{B}&\\ 0&\cdots&0&&& \end{array}\right), $$ where $P$ is a permutation matrix and $B$ is an irreducible nonnegative matrix, to which the Perron-Frobenius Theorem can be applied. It follows that the spectrum of $A^\otimes$ is the union of the spectrum of $B$ and $\{0\}$, and that $A^\otimes$ has a unique normalized eigenvector associated to $\rho(A^\otimes)=\rho(B)$. Summing up, we get the following \begin{theorem}\label{thm:GHCexist} If the projected graph $\overline{G}$ of a multiplex network $\mathcal{G}$ is strongly connected, and $W>0$, then the Global Heterogeneous Centrality $C^\otimes$ of $\mathcal{G}$ exists and is unique. \end{theorem} We now discuss the consistency of our definitions in a variety of special cases. \noindent{\bf Monoplex networks.} It is straightforward to check that on a monoplex network (i.e.
a multiplex network consisting of only one layer) our three concepts of multiplex centrality coincide with the usual eigenvector centrality of the layer. \noindent{\bf Identical layers.} Let $\mathcal{G}$ be a multiplex network for which $A_k=A_\ell$ for every $1\le k,\ell\le m$, and note that $A_k=\overline{A}$ for every $k$, so that the Uniform Centrality of $\mathcal{G}$ coincides with the Eigenvector Centrality of every layer $G_k$. Assuming that every row of $W$ is nonzero (in particular, if $W>0$), it is also clear that every column of the Local Heterogeneous Centrality $C^\star$ coincides with the Uniform Centrality $\overline{C}$ of $\mathcal{G}$. The case of the Global Heterogeneous Centrality is slightly different. If all the layers are identical, the matrix $A^\otimes$ coincides with the so-called {\it Kronecker product} of the matrices $W$ and $\overline{A}$. It is well known (see for instance \cite[Ch.~2]{Steeb}) that the spectral radius of $A^\otimes$ is then equal to $\rho(W)\rho(\overline{A})$ and that its normalized principal eigenvector is the Kronecker product of the normalized principal eigenvectors $C_W$ of $W$ and $\overline{C}$ of $\overline{A}$. In terms of matrices, this is equivalent to saying that $C^\otimes=\overline{C}\cdot C_W^t$. In particular, the normalization of every column of $C^\otimes$ equals $\overline{C}$. \noindent{\bf Starred layers.} We finally consider the case in which the multiplex network $\mathcal{G}$ contains exactly $m=n$ layers, such that layer $G_k$ consists of a set of edges coming out of the node $e_k$. In other words, $a^k_{ij}=0$ if $j\neq k$.
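The Kronecker-product identity for identical layers is easy to verify numerically. In the sketch below (helper names are ours, for illustration only; assumes NumPy), \texttt{global\_matrix} builds the block matrix $A^\otimes$, which for identical layers coincides with \texttt{np.kron(W, A)}.

```python
import numpy as np

def global_matrix(layers, W):
    """Block matrix A^otimes: block (l, k) equals w_{lk} * A_k."""
    m = len(layers)
    return np.block([[W[l, k] * layers[k] for k in range(m)]
                     for l in range(m)])

def spectral_radius(M):
    """Largest eigenvalue modulus of a square matrix."""
    return max(abs(np.linalg.eigvals(M)))
```

For identical layers $A_k=A$ one can check that \texttt{global\_matrix([A, A], W)} equals \texttt{np.kron(W, A)} and that its spectral radius equals $\rho(W)\rho(A)$, since the eigenvalues of a Kronecker product are the pairwise products of the eigenvalues of the factors.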
In this case there exists a permutation matrix $P$ such that: $$ P\cdot A^\otimes\cdot P^t=\left(\begin{array}{ccc|ccc} 0&\cdots&0&\star&\cdots&\star\\ \vdots&\ddots&\vdots&\vdots&&\vdots\\ 0&\cdots&0&\star&\cdots&\star\\\hline 0&\cdots&0&&&\\ \vdots&&\vdots&&\hspace{-0.3cm}{W\circ \overline{A}}\hspace{-0.3cm}&\\ 0&\cdots&0&&& \end{array}\right), $$ where $W\circ \overline{A}$ is the Hadamard product (see, for example, \cite{HJ}) of $W$ and $\overline{A}$ (i.e. $(W\circ \overline{A})_{ij}=w_{ij}\overline{a}_{ij}$). In particular, the Global Heterogeneous Centrality of $\mathcal{G}$ is the diagonal $n\times n$ matrix whose diagonal is the eigenvector centrality of $W\circ \overline{A}$. Note that $W\circ \overline{A}$ can be interpreted as the transpose of the matrix of the graph $\overline{G}$ in which the edge going from $e_j$ to $e_i$ has been assigned a weight equal to $w_{ij}$. In this sense the eigenvector centrality of a weighted graph can be seen as a particular case of the Global Heterogeneous Centrality. \section{Comparing centralities of a multiplex network} \label{comparing} In this section and the next we compute and compare the different centrality measures defined above on some examples, constructed upon both real and synthetic data. We start by describing two ways of comparing centrality measures, and then we apply them to a real example of a social multiplex network. If we take a network of $n$ nodes $\{v_1,\cdots,v_n\}$ and consider two centrality measures $c,c'\in\mathbb{R}^n$ such that the $i$-th coordinates of $c$ and $c'$ measure the centrality of node $v_i$ for every $1\le i\le n$, one way of comparing $c$ and $c'$ is by computing $\|c-c'\|$ for some norm $\|\cdot\|$. However, while $\|c-c'\|$ measures the discrepancy between $c$ and $c'$, its value does not convey the information about the relation between $c$ and $c'$ that matters most in practice.
Note, indeed, that one of the main features of centrality measures is that they produce {\sl rankings}: in many cases the crucial information obtained from a centrality measure is that a node $v_i$ is more relevant than another node $v_j$, and this ordering is more important than the actual difference between the corresponding centralities of $v_i$ and $v_j$. Hence, if we want to analyze the correlations among a set of centrality measures, we should study in detail the correlations between the associated rankings. The literature suggests various alternative ways to study the correlations between two rankings $r$ and $r'$, two standard ones being the Spearman's rank correlation coefficient $\rho(r,r')$ and the Kendall's rank correlation coefficient $\tau(r,r')$. If we consider two centrality measures $c,c'\in\mathbb{R}^n$ of a network with nodes $\{v_1,\cdots,v_n\}$, then the centrality measures $c$ and $c'$ produce rankings of the nodes that will be denoted by $r$ and $r'$ respectively. The Spearman's rank correlation coefficient \cite{Spearman} between two centrality measures $c$ and $c'$ is defined as \[ \rho(c,c')=\rho(r,r') =\frac{\sum_{i=1}^n(r(v_i)-\overline{r})(r'(v_i)-\overline{r'})}{\sqrt{\sum_{i=1}^n(r(v_i)-\overline{r})^2}\sqrt{\sum_{i=1}^n(r'(v_i)-\overline{r'})^2}}, \] where $r(v_i)$ and $r'(v_i)$ are the rankings of node $v_i$ with respect to the centrality measures $c$ and $c'$ respectively, $\overline{r}=\frac 1n\sum_ir(v_i)$ and $\overline{r'}=\frac1n\sum_ir'(v_i)$. Similarly, the Kendall's rank correlation coefficient \cite{Kendall} between two centrality measures $c$ and $c'$ is defined as \[ \tau(c,c')=\tau(r,r')=\frac {\tilde K(r,r')- K(r,r')}{\binom n 2}, \] where $ \tilde K(r,r')$ is the number of pairs of nodes $\{v_i,v_j\}$ that appear in the same order in $r$ and $r'$, and $K(r,r')$ is the number of pairs of nodes $\{v_i,v_j\}$ that appear in different order in the rankings $r$ and $r'$.
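Both coefficients, in the tie-free form used here, can be coded directly from the definitions. A minimal NumPy sketch (function and variable names are ours, chosen for illustration):

```python
import numpy as np

def ranks(c):
    """Rank of each node under centrality c (1 = most central).
    No tie handling, matching the tie-free definitions above."""
    c = np.asarray(c)
    order = np.argsort(-c)
    r = np.empty(len(c))
    r[order] = np.arange(1, len(c) + 1)
    return r

def spearman(c, cp):
    """Spearman's rho between the rankings induced by c and cp."""
    r, rp = ranks(c), ranks(cp)
    dr, drp = r - r.mean(), rp - rp.mean()
    return (dr * drp).sum() / np.sqrt((dr ** 2).sum() * (drp ** 2).sum())

def kendall(c, cp):
    """Kendall's tau: (concordant - discordant pairs) / (n choose 2)."""
    r, rp = ranks(c), ranks(cp)
    n = len(r)
    s = sum(np.sign((r[i] - r[j]) * (rp[i] - rp[j]))
            for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)
```

Two identical centrality vectors give coefficient $1$, and two vectors inducing reversed rankings give $-1$, in agreement with the range $[-1,1]$ discussed next.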
Note that both $\rho(c,c')$ and $\tau(c,c')$ take values in $[-1,1]$. The closer $\rho(c,c')$ is to 1, the more correlated $c$ and $c'$ are, while the closer $\rho(c,c')$ is to 0, the more independent $c$ and $c'$ are (and similarly for $\tau(c,c')$). In addition, if $\rho(c,c')$ (or $\tau(c,c')$) is close to $-1$, then $c$ and $c'$ are anti-correlated. A further remark comes from the fact that the centrality measures introduced so far are of very different natures, and therefore one has to describe carefully how to compare them. Indeed, on the one hand, some {\sl scalar} measures introduced in section~\ref{heuristics} associate a single number (the centrality of the node in the network) to each node of the network, while on the other hand, other {\sl vectorial} measures assign a vector to each node $v_i$, with each coordinate of the vector measuring the centrality of the node $v_i$ as an actor of a different layer of the multiplex network. Specifically, for a multiplex network $\mathcal{G}$ of $n$ nodes, two scalar centralities (the eigenvector centrality $\overline{c}\in\mathbb{R}^n$ of the projection graph, and the uniform eigenvector-like centrality $\widetilde c\in\mathbb{R}^n$) and three vectorial centralities (the independent layer centrality $C\in\mathbb{R}^{n\times m}$, the local heterogeneous centrality $C^{\star}\in\mathbb{R}^{n\times m}$, and the global heterogeneous centrality $C^{\otimes}\in\mathbb{R}^{n\times m}$) have been proposed. To compare these different measures, the information contained in each vectorial centrality must be aggregated so as to associate a single number to each node. There are several alternative methods for aggregating information; here we use convex combinations as our main criterion.
For a multiplex network $\mathcal{G}$ of $n$ nodes and $m$ layers, we can fix some $\lambda_1,\cdots,\lambda_m\in[0,1]$ such that $\lambda_1+\cdots+\lambda_m=1$ and compute the aggregated scalar centralities \[ \begin{split} c=&c(\lambda_1,\cdots,\lambda_m)=\sum_{j=1}^m\lambda_jc_j,\\ c^{\star}=&c^{\star}(\lambda_1,\cdots,\lambda_m)=\sum_{j=1}^m\lambda_jc^{\star}_j, \end{split} \] where $c_j$ is the $j$th column of the independent layer centrality $C$ and $c^{\star}_j$ is the $j$th column of the local heterogeneous centrality $C^{\star}$. Note that the value of each $\lambda_j$ can be understood as the {\sl relative influence} of the layer $G_j$ in the aggregated scalar centrality of the multiplex network. In our numerics we have chosen the specific value $\lambda_1=\cdots=\lambda_m=\frac 1m$, since we suppose that no extra information about the relative relevance of each layer is available, and therefore the influence of each layer is considered equivalent. Note that $c$ and $c^{\star}$ are normalized, since $C$ and $C^{\star}$ are column-stochastic. The case of the global heterogeneous centrality is different, since $C^{\otimes}$ is not column-stochastic. In this case, since the sum of all the entries of $C^{\otimes}$ is 1, it is enough to take \[ c^{\otimes}=\sum_{j=1}^mc^{\otimes}_j, \] where $c^{\otimes}_j$ is the $j$th column of the global heterogeneous centrality $C^{\otimes}$. Consequently, the {\sl relative influence} of each layer $G_j$ can be defined as $\|c^{\otimes}_j\|_1$ (i.e. the sum of all the coordinates of $c^{\otimes}_j$). Once all the vectorial measures have been aggregated (and the setting thus unified), we can discuss the ranking comparisons. In addition to the actual correlation among the centrality measures, we analyze the influence of the matrix $W$ (called the {\sl influence matrix} in section~\ref{heuristics}) used in the definition of the local heterogeneous centrality $C^{\star}$ and of the global heterogeneous centrality $C^{\otimes}$.
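The aggregation step can be summarised in a few lines. The sketch below follows the notation above (helper names are ours); \texttt{lam} defaults to the uniform weights $\lambda_j=1/m$ used in our numerics.

```python
import numpy as np

def aggregate(C, lam=None):
    """Convex-combination aggregation of a column-stochastic n x m
    centrality matrix C into a scalar centrality vector."""
    C = np.asarray(C, dtype=float)
    m = C.shape[1]
    lam = np.full(m, 1.0 / m) if lam is None else np.asarray(lam, dtype=float)
    return C @ lam

def aggregate_global(C_otimes):
    """Aggregation for C^otimes (all entries summing to 1): sum the columns.
    The 1-norm of column j gives the relative influence of layer G_j."""
    C = np.asarray(C_otimes, dtype=float)
    return C.sum(axis=1), C.sum(axis=0)
```

In both cases the aggregated vector is normalized: a convex combination of column-stochastic columns sums to 1, and summing the columns of a matrix whose entries sum to 1 does too.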
Since this matrix $W\in\mathbb{R}^{m\times m}$ is nonnegative, we consider two families of matrices $\{W_1(q)\}$ and $\{W_2(q)\}$ given for every $0\le q\le 1$ by \[ W_1(q)= \left( \begin{array}{cccc} 1 & q & \cdots & q \\ q & 1 & \cdots & q \\ \vdots & \vdots & \ddots & \vdots\\ q & q & \cdots & 1 \end{array} \right),\qquad W_2(q)= \left( \begin{array}{cccc} 1 & q & \cdots & q\\ q^2 & 1 & \cdots & q\\ \vdots & \vdots & \ddots & \vdots\\ q^2 & q^2 & \cdots & 1 \end{array} \right). \] Note that while each $W_1(q)$ corresponds to a symmetric influence among the layers, each $W_2(q)$ models an asymmetric influence among the layers of the multiplex network. We now apply our methods of comparison of the different centralities to a classic example: the social network of the Renaissance Florentine families in $1282-1500$. The dataset of the network (available in \cite{ucinet}) collects information about marriage and business ties among sixteen Renaissance Florentine families. This social system can be modelled as a multiplex network with two layers: one related to the business ties (specifically, recorded financial ties such as loans, credits and joint partnerships) and another showing the marriage ties among the same sixteen families (see \cite{BrePa, Padgett}). These two layers are represented in Figure~\ref{Florence01}. \begin{figure*}[t] $\,$\hfill \includegraphics[width=0.4\textwidth]{Padgett_Business.eps}\hfill \includegraphics[width=0.4\textwidth]{Padgett_Marriage.eps}\hfill$\,$\hfill \caption{\label{Florence01} The business layer (on the left) and the marriage layer (on the right) of the social multiplex network of the Renaissance Florentine families.} \end{figure*} The comparisons among the different centrality measures for the social multiplex network of the Renaissance Florentine families are presented in Figure~\ref{Florence_centrality}.
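The two families of influence matrices are straightforward to generate; a short NumPy sketch (function names are ours, for illustration):

```python
import numpy as np

def W1(q, m):
    """Symmetric influence matrix: 1 on the diagonal, q elsewhere."""
    return np.full((m, m), float(q)) + (1.0 - q) * np.eye(m)

def W2(q, m):
    """Asymmetric influence matrix: 1 on the diagonal,
    q above the diagonal and q^2 below it."""
    W = np.full((m, m), float(q))
    W[np.tril_indices(m, -1)] = q ** 2
    np.fill_diagonal(W, 1.0)
    return W
```

For $q=1$ both families reduce to matrices of ones (maximal mutual influence), while for $q=0$, $W_1(0)=W_2(0)$ is the identity, i.e. the layers do not influence each other.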
More precisely, we represent the $q$-dependent Spearman (in red) and Kendall (in black) correlation coefficients among the eigenvector centrality of the projection graph, the uniform centrality, the local heterogeneous centrality and the global heterogeneous centrality for this particular example. \begin{figure*}[t] \includegraphics[width=\textwidth]{Florence_centrality.eps} \caption{\label{Florence_centrality} Ranking comparisons for the eigenvector centrality measures for the social multiplex network of the Renaissance Florentine families with the family of {\it symmetric} influence matrices of type $W_1(q)$ (panels (a) to (e)) and with the family of {\it non-symmetric} influence matrices of type $W_2(q)$ (panels (f) to (j)). Panels in the first and second columns show the ($q$-dependent) correlations of the eigenvector centrality of the projection and of the uniform centrality vs. the local heterogeneous centrality, respectively. Panels in the third and fourth columns show the ($q$-dependent) correlations of the eigenvector centrality of the projection and of the uniform centrality vs. the global heterogeneous centrality, respectively. Finally, the fifth column shows the correlation between the local and the global heterogeneous centralities. In all panels, the Spearman and Kendall coefficients are depicted in red and black, respectively.} \end{figure*} \section{Numerical tests} \label{numerical} In this section we illustrate the different behaviour of the introduced centrality measures by testing them on a class of randomly generated multiplex networks. Instead of considering particular tailor-made examples, we consider random networks from a class of scale-free, assortative-inspired synthetic graphs (cf. \cite{CFGGR}). We first briefly describe the method used to construct the synthetic multiplex networks employed in the numerical tests, which corresponds to {\sl model II} of Ref. \cite{CFGGR}.
The model is inspired by the Barab\'asi-Albert preferential attachment model \cite{Albert1}, as well as by several bipartite network models, such as the collaboration network model proposed by J.J.\,Ramasco {\em et al.} \cite{ramasco} or the sublinear preferential attachment bipartite model introduced by M.\,Peltom\"{a}ki and M.\,Alava \cite{peltomaki}. It consists of a growing random model determined by the following rules: \begin{itemize} \item[{\it (i)}] {\em Model parameters}. The model has three main parameters: $n$, $m$ and $p_{\textrm{new}}$. We set $n\in \mathbb{N}$ as the minimal number of nodes in the multiplex network and $2\le m\le n$ as the number of {\sl active nodes} in each layer ({\em i.e.} nodes that will produce links in each layer). Note that if we take $m=2$, we recover the Barab\'asi-Albert model \cite{Albert1}. In this model $m$ will be fixed, but the results are similar if $m$ is taken to be a nonnegative integer-valued random variable. Finally, we set $p_{\textrm{new}}\in(0,1]$ as the probability of joining a new node to the growing multiplex network during its construction. \item[{\it (ii)}] {\em Initial conditions}. We start with a seed multiplex network made of a single layer $G_0$ of $m$ nodes that are linked all to all ({\em i.e.} $G_0$ is the complete graph $K_m$). We can replace the {\sl all-to-all} structure by any other structure (such as a scale-free or an Erd\H{o}s-R\'enyi network), and the results obtained are similar. This initial layer $G_0$ will be removed from the final multiplex network $\mathcal{G}$, since the {\sl all-to-all} structure would bias the eigenvector centrality of the projection graph towards the constant (bisector) vector. \item[{\it (iii)}] {\em Layer composition}. At each time step $t$, a new layer $G_t$ of $m$ nodes is added to the multiplex network. We start by randomly choosing an existing node of the multiplex network with a probability proportional to its degree ({\em preferential selection}), which we call the {\sl coordinator node}.
More precisely, if at step $t-1$ the set of nodes of the multiplex network is $\{v_1,\ldots,v_n\}$, and $k_i$ denotes the degree of node $v_i$ at time $t-1$ in the projection network, then we choose the node $v_i$ randomly and independently with probability \[ p_i=\frac {k_i}{\sum_{j=1}^{n} k_j}. \] Once the coordinator node has been chosen, each of the remaining $m-1$ active nodes of $G_t$ will be a new node with probability $p_{\textrm{new}}$ and an existing node with probability $1-p_{\textrm{new}}$. Already existing nodes are added by choosing them uniformly and independently. Note that we could replace the uniform random selection by other random procedures (such as preferential selection), but the random tests performed suggest that the multiplex networks obtained have statistically the same structural properties when $n$ is large enough (see \cite{CFGGR}). At this step, we have chosen $m$ nodes $\tilde v_1, \ldots, \tilde v_m$ that will be the {\sl active nodes} of the new layer $G_t$ (i.e. nodes that will produce links in this layer). \item[{\it (iv)}] {\em Layer inner-structure}. After fixing the active nodes $\tilde v_1, \ldots, \tilde v_m$ of the new layer $G_t$, we have to specify its links. First, we link all the active nodes to the coordinator in order to ensure that all the eigenvector-like centralities are well defined. We then set new links between each pair of active nodes $\tilde v_i$ and $\tilde v_j$ (with $1<i \ne j\leq m$) by using a {\sl random assortative linking strategy} (this corresponds to {\sl model II} in \cite{CFGGR}). For every $2\le i\ne j\le m$, we randomly add the link $\{\tilde v_i,\tilde v_j\}$ with probability proportional to the number of common layers that hold simultaneously $\tilde v_i$ and $\tilde v_j$.
Hence, if we denote by $Q_{ij}$ the number of layers that hold simultaneously $\tilde v_i$ and $\tilde v_j$ at time step $t$ (including $G_t$) and by $q_i$ the number of layers that hold $\tilde v_i$ at time step $t$ (also including $G_t$), then the probability of linking node $\tilde v_i$ with node $\tilde v_j$ is given by \[ p_{ij}=\frac {2Q_{ij}}{q_i+q_j}, \] for every $2\le i\ne j\le m$. The heuristic behind this model comes from social networks, since the relationships in a new social group are correlated with the previous relationships between the actors in other social groups \cite{Wasserman}. Hence, if two actors that belong to the new social group coincide in many (previous) groups, then the probability of their being connected in this new group is large. The model also reflects the fact that if two new actors join their first group, the probability of establishing a relationship between them is high. At the end of this step, the new layer $G_t$ is completely defined. \item[{\it (v)}] Finally, we repeat steps {\it (iii)} and {\it (iv)} until the number of nodes of the multiplex network is at least $n$. \end{itemize} After fixing all the settings of the numerical tests, we perform the comparison for three multiplex networks $\mathcal{G}_1$, $\mathcal{G}_2$ and $\mathcal{G}_3$ (constructed as above), where: \begin{itemize} \item[{\it(i)}] $\mathcal{G}_1$ is a network of 102 nodes (computed with $n=100$ as initial parameter) and 13 layers of 10 nodes each ($m=10$ as initial parameter). The probability of adding new active nodes to each layer is $p_{\textrm{new}}=0.8$. This is an example of a network with a relatively small number of active nodes in each layer, in which each node is active in only a few layers (since $p_{\textrm{new}}=0.8$). \item[{\it(ii)}] $\mathcal{G}_2$ is a network of 108 nodes (computed with $n=100$ as initial parameter) and 4 layers of 40 nodes each ($m=40$ as initial parameter).
The probability of adding new active nodes to each layer is $p_{\textrm{new}}=0.5$. This is a network with a relatively large number of active nodes in each layer and a balanced number of newcomers and experienced nodes as active nodes in each layer ($p_{\textrm{new}}=0.5$). \item[{\it(iii)}] $\mathcal{G}_3$ is a network of 102 nodes (computed with $n=100$ as initial parameter) and 6 layers of 60 nodes each ($m=60$ as initial parameter). The probability of adding new active nodes to each layer is $p_{\textrm{new}}=0.1$. This is a network with a large number of active nodes in each layer and a very low number of newcomers in each layer ($p_{\textrm{new}}=0.1$). \end{itemize} For each of these networks we compute the correlation between the eigenvector centrality of the projection graph and the uniform centrality vs. the local and global heterogeneous centralities. Figures~\ref{fig1} and \ref{fig2} plot the dependence of these correlations on the influence strength $q\in[0,1]$ for the family of \textbf{symmetric} influence matrices $W_1(q)$ (Figure~\ref{fig1}) and for the family of \textbf{non-symmetric} influence matrices $W_2(q)$ (Figure~\ref{fig2}), exhibiting a similar pattern. Note that this phenomenon does not occur in the example considered in section \ref{comparing}, where there were marked differences between the symmetric case and the non-symmetric one (see Figure \ref{Florence_centrality}). Similar results for the correlations between the heterogeneous centralities and the independent layer centrality are displayed in Figure~\ref{fig3}. Finally, we also compare the local heterogeneous centrality vs. the global heterogeneous centrality under the action of the two families of influence matrices $W_1(q)$ and $W_2(q)$ (see Figure~\ref{fig4}).
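The growth model of steps {\it (i)}--{\it (v)} above can be sketched in code as follows. This is a simplified rendering under our own conventions (undirected edges, projected degrees tracked incrementally, and distinct active nodes per layer); it is not the original implementation of \cite{CFGGR}.

```python
import random

def grow_multiplex(n, m, p_new, seed=0):
    """Sketch of model II: grow a multiplex network until it has at
    least n nodes, adding one layer with m active nodes per step.
    Returns (nodes, layers); each layer is a set of undirected edges.
    The all-to-all seed layer G_0 is used for bookkeeping only."""
    rng = random.Random(seed)
    nodes = list(range(m))                                      # seed nodes
    proj = {(i, j) for i in range(m) for j in range(i + 1, m)}  # G_0 edges
    deg = {v: m - 1 for v in nodes}                             # projected degrees
    q = {v: 1 for v in nodes}                                   # q_i: layers holding v_i
    Q = {e: 1 for e in proj}                                    # Q_ij: layers holding both
    layers = []
    while len(nodes) < n:
        # (iii) coordinator: preferential selection by projected degree
        coord = rng.choices(nodes, weights=[deg[v] for v in nodes])[0]
        active = [coord]
        for _ in range(m - 1):
            if rng.random() < p_new:                 # newcomer
                v = len(nodes)
                nodes.append(v)
                deg[v], q[v] = 0, 0
            else:                                    # old node, chosen uniformly
                v = rng.choice([u for u in nodes if u not in active])
            active.append(v)
        for v in active:                             # update q_i and Q_ij
            q[v] += 1
        for i in range(m):
            for j in range(i + 1, m):
                e = tuple(sorted((active[i], active[j])))
                Q[e] = Q.get(e, 0) + 1
        # (iv) every active node links to the coordinator; remaining
        # pairs are linked with probability p_ij = 2 Q_ij / (q_i + q_j)
        edges = {tuple(sorted((coord, v))) for v in active[1:]}
        for i in range(1, m):
            for j in range(i + 1, m):
                a, b = active[i], active[j]
                e = tuple(sorted((a, b)))
                if rng.random() < 2 * Q[e] / (q[a] + q[b]):
                    edges.add(e)
        for (a, b) in edges:                         # update projected degrees
            if (a, b) not in proj:
                proj.add((a, b))
                deg[a] += 1
                deg[b] += 1
        layers.append(edges)
    return nodes, layers
```

Note that $p_{ij}=2Q_{ij}/(q_i+q_j)\le 1$ always holds, since $Q_{ij}\le\min(q_i,q_j)$, so the linking rule is a genuine probability.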
\begin{figure*}[t] \includegraphics[width=\textwidth]{correlation_symmetric_influence.eps} \caption{\label{fig1} Ranking comparison for the eigenvector centrality measures for three multiplex networks with the family of {\it symmetric} influence matrices of type $W_1(q)$. Panels (a,b,c,d), (e,f,g,h), and (i,j,k,l) respectively correspond to networks $\mathcal{G}_1$, $\mathcal{G}_2$ and $\mathcal{G}_3$ (see text for details on the network construction). Panels (a) and (b) ((c) and (d)) show the ($q$-dependent) correlations between the eigenvector centrality of the projection graph and the uniform centrality vs. the local (global) heterogeneous centrality of $\mathcal{G}_1$, respectively. Similarly, panels (e) to (h) give the same information for $\mathcal{G}_2$ and panels (i) to (l) correspond to $\mathcal{G}_3$. In all panels, the Spearman and Kendall coefficients are depicted in red and black, respectively.} \end{figure*} \begin{figure*}[t] \includegraphics[width=\textwidth]{correlation_non_symmetric_influence.eps} \caption{\label{fig2} Ranking comparison for the eigenvector centrality measures for three multiplex networks with the family of {\it non-symmetric} influence matrices of type $W_2(q)$. Panels (a,b,c,d), (e,f,g,h), and (i,j,k,l) respectively correspond to networks $\mathcal{G}_1$, $\mathcal{G}_2$ and $\mathcal{G}_3$. Panels (a) and (b) ((c) and (d)) show the ($q$-dependent) correlations between the eigenvector centrality of the projection graph and the uniform centrality vs. the local (global) heterogeneous centrality of $\mathcal{G}_1$, respectively; panels (e) to (h) give the same information for $\mathcal{G}_2$ and panels (i) to (l) correspond to $\mathcal{G}_3$. Same stipulations as in the caption of Figure~\ref{fig1}.} \end{figure*} \begin{figure*}[t] \includegraphics[width=0.9\textwidth]{indep_layers.eps} \caption{\label{fig3} Ranking comparison for the independent layer centrality vs. the local and global heterogeneous centralities.
Panels (a,b,c,d), (e,f,g,h), and (i,j,k,l) respectively correspond to networks $\mathcal{G}_1$, $\mathcal{G}_2$ and $\mathcal{G}_3$. The two columns of panels on the left correspond to the symmetric family of influence matrices $W_1(q)$, while the two on the right are for the asymmetric family of influence matrices $W_2(q)$ ($0\le q\le 1$). The first and third columns of panels show the correlations between the independent layer centrality and the local heterogeneous centrality, while the second and fourth columns show the correlations between the independent layer centrality and the global heterogeneous centrality. The Spearman coefficient is in red, and the Kendall coefficient is in black.} \end{figure*} \begin{figure*}[t] \includegraphics[width=0.9\textwidth]{local_vs_global.eps} \caption{\label{fig4} Ranking comparison between the local heterogeneous centrality and the global heterogeneous centrality for $\mathcal{G}_1$ (panels (a) and (d)), for $\mathcal{G}_2$ (panels (b) and (e)) and for $\mathcal{G}_3$ (panels (c) and (f)). The computation has been done with the family of \textit{symmetric} influence matrices of type $W_1(q)$ (top panels) and with the family of \textit{non-symmetric} influence matrices of type $W_2(q)$ (bottom panels). Once again, the Spearman coefficient is in red, and the Kendall coefficient is in black.} \end{figure*} \section{Discussion and Conclusions} \label{conclusions} Introducing a layer structure on a complex network or, equivalently, distinguishing different types of interactions between its nodes, may significantly change the behaviour of the network (cf. \cite{Leicht, Buldyrev10,Brummitt12}). The main goal of this paper has been to analyse the influence of the layer structure on some eigenvector-like centralities of multiplex networks.
To that end, we have introduced several eigenvector centralities that take into account the layer structure by means of a directed graph of influences among the layers. The examples presented in the paper show that the centrality measures introduced are qualitatively different and, in particular, different from the eigenvector centrality of the projected network. In order to measure these differences conveniently, we have introduced an algorithm that produces randomly generated multiplex networks and measured the pairwise correlations of the different centralities studied, under different types of influence between layers, according to a parameter $q\in[0,1]$ and two distinct types of influence matrix. We have selected three representative examples from the family of synthetic networks analysed, and presented them here, since all the numerical simulations we have performed show similar behaviour. For the multiplex examples considered, this behaviour may be described as follows: \begin{itemize} \item The rankings given by the different eigenvector centrality measures introduced in the paper are qualitatively different, and hence the corresponding centrality measures are also different. \item The correlations between these new eigenvector centrality measures strongly depend on the structure of the multiplex network, including the number of layers and the number of nodes per layer. \item The results obtained with Spearman's and Kendall's coefficients are qualitatively equivalent in all the examples considered, although Spearman's coefficient is always slightly higher. \item The differences between the heterogeneous (global, local) and the flat centralities (centrality of the projected network, uniform centrality) are significantly broader for lower values of $q$. In fact, there is a non-linear relationship between the centrality measures and the strength $q$ of the influence between layers.
On the other hand, for high values of $q$, the behaviour of these particular multiplex networks is similar to that of the corresponding monoplex projected networks. In other words, $q$, thought of as a measure of the multiplexity of the network, is detected by the heterogeneous centrality measures. \item In the synthetic examples considered, the total variation with respect to $q$ of the correlation between a heterogeneous and a flat measure grows with the ratio between the number of layers and the number of nodes per layer. \item The symmetry of the influence between layers does not play a critical role in the correlations among centrality measures in the randomly generated networks considered. However, in the example of the Florentine families (in which the numbers of nodes and layers are small), the differences between the symmetric and non-symmetric cases are significant. \end{itemize} In summary, we have introduced several definitions of centrality measures for multiplex networks and proved that, under reasonable conditions, these centrality measures exist and are unique (Theorems \ref{thm:UCexist}, \ref{thm:LHCexist}, and \ref{thm:GHCexist}). Computer experiments and simulations performed using the model introduced in \cite{CFGGR} show that our measures provide substantially different results when applied to the same multiplex networks. This is in agreement with the fact that each of these measures arises from a different heuristic. In this sense, the concept of multiplex network may be used to model complex networks of different kinds, and the most appropriate kind of centrality measure should be carefully determined in each case. \section*{Acknowledgements} The authors would like to thank an anonymous referee for useful suggestions, which have helped us to improve the final version of this paper. We also thank David Papo for his valuable comments. This work has been partially supported by the Spanish DGICYT under projects MTM2009-13848 and MTM2012-32670.
\section{Introduction}\label{sect:intro} Following a celebrated paper \cite{GSS1} by Grillakis, Shatah and Strauss, we consider abstract Hamiltonian systems of the form \begin{equation}\label{eq:1.1} \frac{du}{dt}(t)=\tilde JE'(u(t)), \end{equation} where $E$ is the energy functional on a real Hilbert space $X$, $J$ is a skew-symmetric operator on $X$, and $\tilde J$ is a natural extension of $J$ to the dual space $X^*$. We assume that \eqref{eq:1.1} is invariant under a one-parameter group $\{\Trans (s)\}_{s\in \R}$ of unitary operators on $X$, and study the instability of bound states $\Trans (\omega t)\phi_{\omega}$, where $\omega\in \R$ and $\phi_{\omega}$ is a solution of the corresponding stationary problem. Precise formulation of the problem will be set up in Section \ref{sect:form} based on \cite{GSS1}. We also borrow some notation from \cite{GSS2}, Comech and Pelinovsky \cite{CP} and Stuart \cite{stu}. Although it is desirable to work on the same general framework as in \cite{GSS1}, we need stronger assumptions for our purpose which will be explained below. We will formulate our assumptions in order to apply our theorems to nonlinear Schr\"odinger equations. In particular, we assume that the group $\{\Trans (s)\}_{s\in \R}$ is generated by the skew-symmetric operator $J$, that $J$ is bijective from $X$ to itself, and that the charge functional $Q$ is positive definite. These assumptions exclude nonlinear Klein-Gordon equations and KdV type equations from our framework. Moreover, we introduce an intermediate space $H$ between the energy space $X$ and the dual space $X^*$, which is a symmetry-constrained $L^2$ space in application to nonlinear Schr\"odinger equations. Such space as $H$ does not appear in \cite{GSS1}, but it will make the description of the theory simpler. In Section \ref{sect:results}, we state two main Theorems and four Corollaries. 
In Theorem \ref{thm1} we give a general sufficient condition for instability of bound states in the non-degenerate case. We clarify that the conditions (A1), (A2a) and (A3) are essential in the proof of the instability theorem of \cite{GSS1}. We note that Theorem \ref{thm1} is inspired by a recent paper \cite{mae2} of Maeda. In fact, the condition (A3) appears explicitly in \cite{mae2} but not in \cite{GSS1}. It is worth noting that Theorem \ref{thm1} unifies two different known results, Corollaries \ref{cor3} and \ref{cor4}. Here, Corollary \ref{cor3} is a classical result due to \cite{GSS1,SS1}, while Corollary \ref{cor4} is originally due to \cite{mae2} with modifications. Although the key Lemma \ref{lem3} for the proof of Theorem \ref{thm1} is the same as Lemma 4.4 of \cite{GSS1}, some improvements are made in the proof of Lemma \ref{lem3}. For example, the function $\Lambda (\cdot)$ in Lemma \ref{lem3} is directly given by \eqref{eq:4.2} in the present paper, while in \cite{GSS1} it is determined by solving a differential equation and by the implicit function theorem (see (4.6) and Lemma 4.3 of \cite{GSS1}). It should also be mentioned that the proof of Lemma \ref{lem3} relies only on some simple Taylor expansions, as in the proof of the stability theorem (see Theorem 3.4 of \cite{GSS1} and \cite{wei2}). On the other hand, in Theorem \ref{thm2}, we study the instability of bound states in a degenerate or critical case. We give two corollaries of Theorem \ref{thm2}. Corollary \ref{cor1} is a special case of Theorem \ref{thm2}, but it is a new result and will be useful to study the instability of bound states at a bifurcation point. Meanwhile, Corollary \ref{cor2} is originally due to Comech and Pelinovsky \cite{CP}. We notice that our proof is completely different from that of \cite{CP}. 
In fact, the proof of \cite{CP} is based on a careful analysis of the linearized system, while Theorem \ref{thm2} is based on the Lyapunov functional method, as is Theorem \ref{thm1}. Our proof may be simpler, or at least shorter, than that of \cite{CP}. Another advantage of our approach is that Corollary \ref{cor2} requires the minimal regularity $E\in C^3(X,\R)$, while a higher regularity of $E$ is needed in \cite{CP} in application to nonlinear Schr\"odinger equations, especially in the higher dimensional case (see Assumption 2.10, Remark 2.11 and Appendix B of \cite{CP}). As stated above, our abstract theorems are not applicable to nonlinear Klein-Gordon equations. For an instability result on NLKG in a critical case, see Theorem 4 of \cite{OT}. In Section \ref{sect:pre}, we recall some basic lemmas proved in \cite{GSS1}, and the proofs of Theorems \ref{thm1} and \ref{thm2} are given in Sections \ref{sect:proofthm1} and \ref{sect:proofthm2}, respectively. The representation formula \eqref{eq:4.5} of the functional $P$ plays an important role, especially in the proof of Theorem \ref{thm2}. Corollaries \ref{cor2}--\ref{cor4} are proved in Section \ref{sect:proofcor}. In Section \ref{sect:examples}, we give three examples. In Subsection \ref{ss:1}, we consider a simple example to explain the role of the assumption (A3) in Theorem \ref{thm1}. In Subsection \ref{ss:delta}, we apply Corollaries \ref{cor2} and \ref{cor4} to a nonlinear Schr\"odinger equation with a delta function potential, and give some remarks to complement the previous results in \cite{FJ,FOO,LFF}. In Subsection \ref{ss:system}, we apply Theorem \ref{thm1} to a system of nonlinear Schr\"odinger equations, and also mention the applicability of Theorem \ref{thm2} and Corollary \ref{cor1} to the problem at the bifurcation point. 
\section{Formulation}\label{sect:form} Let $X$ and $H$ be two real Hilbert spaces with dual spaces $X^*$ and $H^*$ such that $$X\hookrightarrow H\cong H^*\hookrightarrow X^*$$ with continuous and dense embeddings. We denote the inner product and the norm of $X$ by $(\cdot,\cdot)_X$ and $\|\cdot\|_X$, and those of $H$ by $(\cdot,\cdot)_H$ and $\|\cdot\|_H$. We identify $H$ with $H^*$ by the Riesz isomorphism $I:H\to H^*$ defined by $\dual{Iu}{v}=(u,v)_H$ for $u,v\in H$. Here and hereafter, $\dual{\cdot}{\cdot}$ denotes the pairing between a Banach space and its dual space. Let $R:X\to X^*$ be the Riesz isomorphism between $X$ and $X^*$ defined by $$\dual{Ru}{v}=(u,v)_X, \quad u,v\in X.$$ Let $J\in \BLO(X)$ be bijective and skew-symmetric in the sense that \begin{equation}\label{eq:2.1} (Ju,v)_X=-(u,Jv)_X, \quad (Ju,v)_H=-(u,Jv)_H, \quad u,v\in X. \end{equation} The operator $J$ is naturally extended to $\tilde J:X^*\to X^*$ defined by $$\dual{\tilde Jf}{u}=-\dual{f}{Ju}, \quad u\in X,~ f\in X^*.$$ Let $\{\Trans (s)\}_{s\in \R}$ be the one-parameter group of unitary operators on $X$ generated by $J$. By \eqref{eq:2.1}, we have $$\|\Trans (s)u\|_X=\|u\|_X, \quad \|\Trans (s)u\|_H=\|u\|_H, \quad s\in \R,~ u\in X.$$ We assume that $\Trans$ is $2\pi$-periodic, that is, $\Trans (s+2\pi)=\Trans (s)$ for $s\in \R$. The operator $\Trans (s)$ is naturally extended to $\tilde \Trans (s):X^*\to X^*$ defined by \begin{equation}\label{eq:2.2} \dual{\tilde \Trans (s)f}{u}=\dual{f}{\Trans (-s)u}, \quad u\in X,~ f\in X^*. \end{equation} Then, $\{\tilde \Trans (s)\}_{s\in \R}$ is the one-parameter group of unitary operators on $X^*$ generated by $\tilde J$. Let $E\in C^2(X,\R)$, and we consider the equation \begin{equation}\label{eq:2.3} \frac{du}{dt}(t)=\tilde JE'(u(t)). 
\end{equation} We say that $u(t)$ is a solution of \eqref{eq:2.3} in an interval $\mathcal{I}$ of $\R$ if $u\in C(\mathcal{I},X)\cap C^1(\mathcal{I},X^*)$ and satisfies \eqref{eq:2.3} in $X^*$ for all $t\in \mathcal{I}$. We assume that $E$ is invariant under $\Trans$, that is, $E(\Trans (s)u)=E(u)$ for $s\in \R$ and $u\in X$. Then \begin{equation}\label{eq:2.4} E'(\Trans (s)u)=\tilde \Trans (s)E'(u), \quad s\in \R,~ u\in X. \end{equation} We define $Q:X\to \R$ by $$Q(u)=\frac{1}{2}\|u\|_H^2, \quad u\in X.$$ Then, $Q'(u)=Iu$ for $u\in X$, and \begin{equation}\label{eq:2.5} Q(\Trans (s)u)=Q(u), \quad Q'(\Trans (s)u)=\tilde \Trans (s)Q'(u), \quad s\in \R, ~ u\in X. \end{equation} We assume that the Cauchy problem for \eqref{eq:2.3} is locally well-posed in $X$ in the following sense. \vspace{2mm} \noindent{\bf Assumption.} For each $u_0\in X$ there exists $t_0>0$ depending only on $k$, where $\|u_0\|_X\le k$, and there exists a unique solution $u(t)$ of \eqref{eq:2.3} in the interval $[0,t_0)$ such that $u(0)=u_0$ and $E(u(t))=E(u_0)$, $Q(u(t))=Q(u_0)$ for all $t\in [0,t_0)$. \vspace{2mm} By a {\it bound state} we mean a solution of \eqref{eq:2.3} of the form $u(t)=\Trans (\omega t)\phi$, where $\omega \in \R$ and $\phi\in X$ satisfies $E'(\phi)=\omega Q'(\phi)$. \vspace{2mm} \noindent{\bf Definition.} We say that a bound state $\Trans (\omega t)\phi$ of \eqref{eq:2.3} is {\it stable} if for all $\varepsilon>0$ there exists $\delta>0$ with the following property. If $\|u_0-\phi\|_X<\delta$ and $u(t)$ is the solution of \eqref{eq:2.3} with $u(0)=u_0$, then $u(t)$ exists for all $t\ge 0$ and $u(t)\in \mathcal{N}_{\varepsilon}(\phi)$ for all $t\ge 0$, where $$\mathcal{N}_{\varepsilon}(\phi) =\{u\in X: \inf_{s\in \R}\|u-\Trans (s)\phi\|_X<\varepsilon\}.$$ Otherwise $\Trans (\omega t)\phi$ is called {\it unstable}. \section{Main Results}\label{sect:results} In Sections \ref{sect:results}--\ref{sect:proofcor}, we assume all the requirements in Section \ref{sect:form}. 
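Before stating the results, we record an elementary observation (a sketch, using only the definitions of Section \ref{sect:form}): $u(t)=\Trans (\omega t)\phi$ is a solution of \eqref{eq:2.3} if and only if $E'(\phi)=\omega Q'(\phi)$. Indeed, since $J\in \BLO(X)$, the map $t\mapsto \Trans (\omega t)\phi$ is differentiable in $X$, and, identifying $X$ with a subspace of $X^*$ through $I$, we have
$$\frac{d}{dt}I\Trans (\omega t)\phi
=\omega IJ\Trans (\omega t)\phi
=\omega \tilde JI\Trans (\omega t)\phi
=\omega \tilde J\tilde \Trans (\omega t)I\phi,$$
while by \eqref{eq:2.4},
$$\tilde JE'(\Trans (\omega t)\phi)=\tilde J\tilde \Trans (\omega t)E'(\phi).$$
Here we used the relations $IJ=\tilde JI$ and $I\Trans (s)=\tilde \Trans (s)I$, which follow from \eqref{eq:2.1} and the definitions of $I$, $\tilde J$ and $\tilde \Trans (s)$. Since $J$ is bijective, $\tilde J\tilde \Trans (\omega t)$ is injective on $X^*$, so the two right hand sides coincide if and only if $E'(\phi)=\omega I\phi=\omega Q'(\phi)$.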
For $\omega\in \R$ we define $S_{\omega}:X\to \R$ by $S_{\omega}(u)=E(u)-\omega Q(u)$ for $u\in X$. To state our main results, we impose the following conditions. \vspace{2mm} \noindent {\bf (A1).} There exist $\omega\in \R$ and $\phi_{\omega}\in X$ such that $S_{\omega}'(\phi_{\omega})=0$, $\phi_{\omega}\ne 0$ and $R\phi_{\omega} \in I(X)$. \vspace{2mm} \noindent {\bf (A2a).} There exists $\psi\in X$ such that $\|\psi\|_H=1$, $(\phi_{\omega},\psi)_H=0$, $(J\phi_{\omega},\psi)_H=0$ and $\dual{S_{\omega}''(\phi_{\omega})\psi}{\psi}<0$. \vspace{2mm} \noindent {\bf (A2b).} $E\in C^3(X,\R)$. There exist $\psi\in X$ and $\mu\in \R$ such that $\|\psi\|_H=1$, $(\phi_{\omega},\psi)_H=0$, $(J\phi_{\omega},\psi)_H=(J\phi_{\omega},\psi)_X=0$ and \begin{equation}\label{eq:3.1} S_{\omega}''(\phi_{\omega})\psi=\mu Q'(\phi_{\omega}), \quad \dual{S_{\omega}'''(\phi_{\omega})(\psi,\psi)}{\psi}\ne 3\mu. \end{equation} \vspace{2mm} \noindent {\bf (A3).} There exists a constant $k_0>0$ such that \begin{equation}\label{eq:3.2} \dual{S_{\omega}''(\phi_{\omega})w}{w}\ge k_0 \|w\|_X^2 \end{equation} for all $w\in X$ satisfying $(\phi_{\omega},w)_H=(J\phi_{\omega},w)_H=(\psi,w)_H=0$. \begin{remark}\label{rem1} By \eqref{eq:2.4} and \eqref{eq:2.5}, we see that $S_{\omega}'(\Trans (s)\phi_{\omega})=0$ for all $s\in \R$, and that $S_{\omega}''(\phi_{\omega})(J\phi_{\omega})=0$. The condition $(J\phi_{\omega},\psi)_X=0$ is assumed in (A2b) but not in (A2a). \end{remark} \begin{remark}\label{rem2} By (A2b), we have $\dual{S_{\omega}''(\phi_{\omega})\psi}{\psi}=\mu (\phi_{\omega},\psi)_H=0$. Moreover, $\dual{S_{\omega}''(\phi_{\omega})\psi}{w}=0$ for all $w\in X$ satisfying $(w,\phi_{\omega})_H=0$. \end{remark} The main results of this paper are the following. \begin{theorem}\label{thm1} Assume $({\rm A1})$, $({\rm A2a})$ and $({\rm A3})$. Then the bound state $\Trans (\omega t)\phi_{\omega}$ is unstable. 
\end{theorem} \begin{theorem}\label{thm2} Assume $({\rm A1})$, $({\rm A2b})$ and $({\rm A3})$. Then the bound state $\Trans (\omega t)\phi_{\omega}$ is unstable. \end{theorem} The following Corollary \ref{cor1} is a special case of Theorem \ref{thm2} such that $\mu=0$ in $({\rm A2b})$. When $\mu=0$ in $({\rm A2b})$, the kernel of $S_{\omega}''(\phi_{\omega})$ contains a nontrivial element $\psi$ other than $J\phi_{\omega}$ which comes from the symmetry (see Remark \ref{rem1}). This is a typical situation at a bifurcation point (see Case (ii) of Example D in Section 6 of \cite{GSS1} and \cite{KKSW}), and Corollary \ref{cor1} will be useful to study the instability of bound states at the bifurcation point (see Subsection \ref{ss:system}). \begin{corollary}\label{cor1} Assume $({\rm A1})$ and $E\in C^3(X,\R)$. Assume further that there exists $\psi\in X\setminus\{0\}$ such that $(\phi_{\omega},\psi)_H=0$, $(J\phi_{\omega},\psi)_H=(J\phi_{\omega},\psi)_X=0$, and that the kernel of $S_{\omega}''(\phi_{\omega})$ is spanned by $J\phi_{\omega}$ and $\psi$. If $\dual{S_{\omega}'''(\phi_{\omega})(\psi,\psi)}{\psi}\ne 0$ and $({\rm A3})$ holds, then the bound state $\Trans (\omega t)\phi_{\omega}$ is unstable. \end{corollary} Next, we show that some known results are obtained as corollaries of Theorems \ref{thm1} and \ref{thm2}. For this purpose, we impose the following conditions. \vspace{2mm} \noindent {\bf (B1).} There exist an open interval $\Omega$ of $\R$ and a mapping $\omega \mapsto \phi_{\omega}$ from $\Omega$ to $X$ which is $C^1$ such that for each $\omega \in \Omega$, $S_{\omega}'(\phi_{\omega})=0$, $\phi_{\omega}\ne 0$, $R\phi_{\omega} \in I(X)$ and $(J\phi_{\omega},\phi_{\omega}')_H=(J\phi_{\omega},\phi_{\omega}')_X=0$, where $\phi_{\omega}'=d\phi_{\omega}/d\omega$. 
\vspace{2mm} \noindent {\bf (B2a).} There exist a negative constant $\lambda_{\omega}<0$ and a vector $\chi_{\omega}\in X$ such that $S_{\omega}''(\phi_{\omega})\chi_{\omega}=\lambda_{\omega}I\chi_{\omega}$, $\|\chi_{\omega}\|_H=1$, and $\dual{S_{\omega}''(\phi_{\omega})p}{p}>0$ for all $p\in X$ satisfying $(\chi_{\omega},p)_H=(J\phi_{\omega},p)_H=0$ and $p\ne 0$. \vspace{2mm} \noindent {\bf (B2b).} There exist two negative constants $\lambda_{0,\omega}$, $\lambda_{1,\omega}<0$ and vectors $\chi_{0,\omega}$, $\chi_{1,\omega}\in X$ such that $(\chi_{0,\omega},\chi_{1,\omega})_H=(\chi_{1,\omega},\phi_{\omega})_H=0$, $$S_{\omega}''(\phi_{\omega})\chi_{j,\omega}=\lambda_{j,\omega}I\chi_{j,\omega}, \quad \|\chi_{j,\omega}\|_H=1 \qquad (j=0,1),$$ and $\dual{S_{\omega}''(\phi_{\omega})p}{p}>0$ for all $p\in X$ satisfying $(\chi_{0,\omega},p)_H=(\chi_{1,\omega},p)_H=(J\phi_{\omega},p)_H=0$ and $p\ne 0$. \vspace{2mm} \noindent {\bf (B3).} The functional $u\mapsto \dual{S_{\omega}''(\phi_{\omega})u}{u}$ is weakly lower semi-continuous on $X$, and there exist positive constants $C_1$ and $C_2$ such that \begin{equation}\label{eq:3.3} C_1\|u\|_X^2\le \dual{S_{\omega}''(\phi_{\omega})u}{u}+C_2\|u\|_H^2 \end{equation} for all $u\in X$. Moreover, if a sequence $(u_n)$ of $X$ satisfies $\|u_n\|_X=1$ for all $n\in \N$ and $u_n\rightharpoonup 0$ weakly in $X$, then $\liminf_{n\to \infty}\dual{S_{\omega}''(\phi_{\omega})u_n}{u_n}>0$. \vspace{2mm} We define $d(\omega)=S_{\omega}(\phi_{\omega})$ for $\omega\in \Omega$. As a corollary of Theorem \ref{thm2}, we have the following result which was proved in \cite{CP} assuming a higher regularity of the energy functional $E$. \begin{corollary}\label{cor2} Assume $({\rm B1})$ and that for each $\omega\in \Omega$, $({\rm B2a})$ and $({\rm B3})$ hold. Assume further that $E\in C^3(X,\R)$ and that $\omega \mapsto \phi_{\omega}$ is $C^2$ from $\Omega$ to $X$. 
If $\omega_0\in \Omega$ satisfies $d''(\omega_0)=0$ and $d'''(\omega_0)\ne 0$, then the bound state $\Trans (\omega_0 t)\phi_{\omega_0}$ is unstable. \end{corollary} On the other hand, as corollaries of Theorem \ref{thm1}, we have the following results. Corollary \ref{cor3} is a classical result due to \cite{GSS1,SS1}, while Corollary \ref{cor4} is an abstract generalization of the result in \cite{mae2}. \begin{corollary}\label{cor3} Assume $({\rm B1})$ and that for each $\omega\in \Omega$, $({\rm B2a})$ and $({\rm B3})$ hold. If $\omega_0\in \Omega$ satisfies $d''(\omega_0)<0$, then the bound state $\Trans (\omega_0 t)\phi_{\omega_0}$ is unstable. \end{corollary} \begin{corollary}\label{cor4} Assume $({\rm B1})$ and that for each $\omega\in \Omega$, $({\rm B2b})$ and $({\rm B3})$ hold. If $\omega_0\in \Omega$ satisfies $d''(\omega_0)>0$, then the bound state $\Trans (\omega_0 t)\phi_{\omega_0}$ is unstable. \end{corollary} \begin{remark}\label{rem3} Under the assumptions (B1), (B2a) and (B3), it is proved that if $\omega_0\in \Omega$ satisfies $d''(\omega_0)>0$, then the bound state $\Trans (\omega_0 t)\phi_{\omega_0}$ is stable (see Section 3 of \cite{GSS1}). \end{remark} \begin{remark}\label{rem4} When $S_{\omega}''(\phi_{\omega})$ has two or more negative eigenvalues, linear instability of $\Trans (\omega t)\phi_{\omega}$ is studied by many authors (see, e.g., \cite{ES,gri,GSS2,jon,KKSW}). However, it is a non-trivial problem whether linear instability implies (nonlinear) instability. For a recent development in this direction, see \cite{GO}. Corollary \ref{cor4} gives a sufficient condition for instability of bound states without using the argument through linear instability (see also Subsection \ref{ss:delta}). This was the main assertion in \cite{mae2}. \end{remark} \section{Preliminaries}\label{sect:pre} In this section we assume (A1). Recall that $\Trans$ is $2\pi$-periodic. 
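Throughout the proofs we repeatedly use the identity $S_{\omega}''(\phi_{\omega})(J\phi_{\omega})=0$ from Remark \ref{rem1}; for the reader's convenience, here is a sketch of its derivation. By \eqref{eq:2.4} and \eqref{eq:2.5}, $S_{\omega}'(\Trans (s)\phi_{\omega})=\tilde \Trans (s)S_{\omega}'(\phi_{\omega})=0$ for all $s\in \R$, and differentiating at $s=0$ gives
$$0=\frac{d}{ds}S_{\omega}'(\Trans (s)\phi_{\omega})\Big|_{s=0}
=S_{\omega}''(\phi_{\omega})\,\frac{d}{ds}\Trans (s)\phi_{\omega}\Big|_{s=0}
=S_{\omega}''(\phi_{\omega})(J\phi_{\omega}).$$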
We often use the relations $R\Trans (s)=\tilde \Trans (s)R$, $RJ=\tilde JR$, $I\Trans (s)=\tilde \Trans (s)I$, $IJ=\tilde JI$, which follow from the definitions of $R$, $I$, $\tilde \Trans(s)$ and $\tilde J$ in Section \ref{sect:form}. \begin{lemma}\label{lem1} There exist $\varepsilon>0$ and a $C^2$ map $\theta:\mathcal{N}_{\varepsilon}(\phi_{\omega})\to \R/2\pi \Z$ such that for all $u\in \mathcal{N}_{\varepsilon}(\phi_{\omega})$ and all $s \in \R/2\pi \Z$, \begin{align} &\|\Trans (\theta (u)) u-\phi_{\omega}\|_X\le \|\Trans (s)u-\phi_{\omega}\|_X, \nonumber \\ &(\Trans (\theta (u)) u, J\phi_{\omega})_X=0, \quad \theta (\Trans (s)u)=\theta (u)-s, \nonumber \\ &\theta'(u)=\frac{R\Trans (-\theta(u))J\phi_{\omega}} {(J^2\phi_{\omega},\Trans (\theta (u))u)_X}\in I(X). \label{eq:4.1} \end{align} \end{lemma} \begin{proof} See Lemma 3.2 of \cite{GSS1}. We remark that $\theta'(u)\in I(X)$ follows from the assumption $R\phi_{\omega}\in I(X)$ in (A1). \end{proof} For $u\in \mathcal{N}_{\varepsilon}(\phi_{\omega})$, we define $M(u)=\Trans (\theta (u))u$, and \begin{equation}\label{eq:4.2} A(u)=(M(u),J^{-1}\psi)_H, \quad \Lambda (u)=(M(u),\psi)_H. \end{equation} Then we have $$\dual{A'(u)}{v}=(\Trans (\theta(u))v,J^{-1}\psi)_H-\Lambda (u)\dual{\theta'(u)}{v}$$ for $v\in X$. By Lemma \ref{lem1}, we see that $A'(u)\in I(X)$ and \begin{equation}\label{eq:4.3} JI^{-1}A'(u)=\Trans (-\theta (u))\psi-\Lambda (u)JI^{-1}\theta'(u) \end{equation} for $u\in \mathcal{N}_{\varepsilon}(\phi_{\omega})$. Moreover, since $A$ is invariant under $\Trans$, we have \begin{equation}\label{eq:4.4} 0=\frac{d}{ds}A(\Trans(s)u)|_{s=0} =\dual{A'(u)}{Ju}=-\dual{Q'(u)}{JI^{-1}A'(u)}. \end{equation} We define $P$ by $$P(u)=\dual{E'(u)}{JI^{-1}A'(u)}$$ for $u\in \mathcal{N}_{\varepsilon}(\phi_{\omega})$. By \eqref{eq:4.4}, we have $P(u)=\dual{S_{\omega}'(u)}{JI^{-1}A'(u)}$. 
Moreover, by \eqref{eq:4.1}, \eqref{eq:4.3} and by \eqref{eq:2.2}, \eqref{eq:2.4}, \eqref{eq:2.5}, we see that \begin{equation}\label{eq:4.5} P(u)=\dual{S_{\omega}'(M(u))}{\psi}-\Lambda (u) \frac{\dual{S_{\omega}'(M(u))}{JI^{-1}RJ\phi_{\omega}}}{(M(u),J^2\phi_{\omega})_{X}}. \end{equation} \begin{lemma}\label{lem2} Let $\mathcal{I}$ be an interval of $\R$. Let $u\in C(\mathcal{I},X)\cap C^1(\mathcal{I},X^*)$ be a solution of \eqref{eq:2.3}, and assume that $u(t)\in \mathcal{N}_{\varepsilon}(\phi_{\omega})$ for all $t\in \mathcal{I}$. Then $$\frac{d}{dt}A(u(t))=-P(u(t))$$ for all $t\in \mathcal{I}$. \end{lemma} \begin{proof} By Lemma 4.6 of \cite{GSS1}, we see that $t\mapsto A(u(t))$ is a $C^1$ function on $\mathcal{I}$, and $$\frac{d}{dt}A(u(t)) =\dual{\partial_t u(t)}{I^{-1}A'(u(t))}$$ for all $t\in \mathcal{I}$. Since $u(t)$ is a solution of \eqref{eq:2.3}, we have \begin{align*} &\dual{\partial_t u(t)}{I^{-1}A'(u(t))}=\dual{\tilde J E'(u(t))}{I^{-1}A'(u(t))} \\ &=-\dual{E'(u(t))}{JI^{-1}A'(u(t))}=-P(u(t)) \end{align*} for $t\in \mathcal{I}$. This completes the proof. \end{proof} \section{Proof of Theorem \ref{thm1}} \label{sect:proofthm1} In this section we make the same assumptions as in Theorem \ref{thm1}. We define \begin{equation}\label{eq:5.1} W=\{w\in X: (\phi_{\omega},w)_H=(J\phi_{\omega},w)_H=(\psi,w)_H=0\}. \end{equation} \begin{lemma}\label{lem3} There exists $\varepsilon_0>0$ such that $$E(u)\ge E(\phi_{\omega})+\Lambda (u) P(u)$$ for all $u\in \mathcal{N}_{\varepsilon_0}(\phi_{\omega})$ satisfying $Q(u)=Q(\phi_{\omega})$. \end{lemma} \begin{proof} We put $v=M(u)-\phi_{\omega}$, and decompose $v$ as $$v=a\phi_{\omega}+bJ\phi_{\omega}+c\psi+w,$$ where $a$, $b$, $c\in \R$ and $w\in W$. Note that $\|v\|_{X}<\varepsilon_0$. Since $$Q(\phi_{\omega})=Q(u)=Q(M(u))=Q(\phi_{\omega})+(\phi_{\omega},v)_{H}+Q(v),$$ we have $(\phi_{\omega},v)_{H}=a\|\phi_{\omega}\|_{H}^2=-Q(v)$. In particular, $a=O(\|v\|_X^2)$. 
Moreover, by \eqref{eq:2.1} and Lemma \ref{lem1}, we have $(\phi_{\omega},J\phi_{\omega})_X=(M(u),J\phi_{\omega})_X=0$. Thus, $$0=(v,J\phi_{\omega})_X=b\|J\phi_{\omega}\|_X^2+(c\psi+w,J\phi_{\omega})_X,$$ $\|bJ\phi_{\omega}\|_X\le \|c\psi\|_X+\|w\|_X$, and \begin{equation}\label{eq:5.2} 2|c|\|\psi\|_X+2\|w\|_X\ge \|v\|_X-O(\|v\|_{X}^2). \end{equation} Since $S_{\omega}'(\phi_{\omega})=0$ and $Q(u)=Q(\phi_{\omega})$, by the Taylor expansion, we have \begin{equation}\label{eq:5.3} E(u)-E(\phi_{\omega})=S_{\omega}(M(u))-S_{\omega}(\phi_{\omega}) =\frac{1}{2}\dual{S_{\omega}''(\phi_{\omega})v}{v}+o(\|v\|_{X}^2). \end{equation} Here, since $a=O(\|v\|_X^2)$ and $S_{\omega}''(\phi_{\omega})(J\phi_{\omega})=0$, we have \begin{align} &\dual{S_{\omega}''(\phi_{\omega})v}{v} =\dual{S_{\omega}''(\phi_{\omega})(c\psi+w)}{c\psi+w} +o(\|v\|_{X}^2) \nonumber \\ &=c^2\dual{S_{\omega}''(\phi_{\omega})\psi}{\psi} +2c \dual{S_{\omega}''(\phi_{\omega})\psi}{w} +\dual{S_{\omega}''(\phi_{\omega})w}{w}+o(\|v\|_X^2). \label{eq:5.4} \end{align} On the other hand, we have $c=(v,\psi)_H=\Lambda (u)=O(\|v\|_X)$ and \begin{align*} &S_{\omega}'(\phi_{\omega}+v) =S_{\omega}'(\phi_{\omega})+S_{\omega}''(\phi_{\omega})v+o(\|v\|_X) =S_{\omega}''(\phi_{\omega})v+o(\|v\|_X), \\ &(M(u),J^2\phi_{\omega})_X=(\phi_{\omega}+v,J^2\phi_{\omega})_X =-\|J\phi_{\omega}\|_X^2+O(\|v\|_X). \end{align*} Thus, by \eqref{eq:4.5}, we have \begin{align} \Lambda (u) P(u) &=c\dual{S_{\omega}''(\phi_{\omega})v}{\psi}+o(\|v\|_X^2) \nonumber \\ &=c^2\dual{S_{\omega}''(\phi_{\omega})\psi}{\psi} +c\dual{S_{\omega}''(\phi_{\omega})\psi}{w}+o(\|v\|_X^2). \label{eq:5.5} \end{align} By \eqref{eq:5.3}, \eqref{eq:5.4} and \eqref{eq:5.5}, we have \begin{align} &E(u)-E(\phi_{\omega})-\Lambda (u) P(u) \nonumber \\ &=-\frac{c^2}{2}\dual{S_{\omega}''(\phi_{\omega})\psi}{\psi} +\frac{1}{2}\dual{S_{\omega}''(\phi_{\omega})w}{w} +o(\|v\|_X^2). 
\label{eq:5.6} \end{align} Here, by the assumptions (A2a) and (A3), there exists a positive constant $k>0$ such that $$-\frac{c^2}{2}\dual{S_{\omega}''(\phi_{\omega})\psi}{\psi} +\frac{1}{2}\dual{S_{\omega}''(\phi_{\omega})w}{w} \ge k(c^2+\|w\|_X^2).$$ Moreover, since $\|v\|_X=\|M(u)-\phi_{\omega}\|_X<\varepsilon_0$, it follows from \eqref{eq:5.2} that the right hand side of \eqref{eq:5.6} is non-negative, if $\varepsilon_0$ is sufficiently small. This completes the proof. \end{proof} \begin{lemma}\label{lem4} There exist $\lambda_1>0$ and a smooth mapping $\lambda\mapsto \varphi_{\lambda}$ from $(-\lambda_1,\lambda_1)$ to $X$ such that $\varphi_{0}=\phi_{\omega}$ and $$E(\varphi_{\lambda})<E(\phi_{\omega}), \quad Q(\varphi_{\lambda})=Q(\phi_{\omega}), \quad \lambda P(\varphi_{\lambda})<0 \quad \mbox{for} \quad 0<|\lambda|<\lambda_1.$$ \end{lemma} \begin{proof} For $\lambda$ close to $0$, we define $$\varphi_{\lambda}=\phi_{\omega}+\lambda \psi+\sigma(\lambda)\phi_{\omega}, \quad \sigma (\lambda)=\left(1-\frac{Q(\psi)}{Q(\phi_{\omega})}\lambda^2\right)^{1/2}-1.$$ Then, we have $Q(\varphi_{\lambda})=Q(\phi_{\omega})$, $\sigma (\lambda)=O(\lambda^2)$, and \begin{align*} &S_{\omega}(\varphi_{\lambda})=S_{\omega}(\phi_{\omega}) +\frac{\lambda^2}{2}\dual{S_{\omega}''(\phi_{\omega})\psi}{\psi}+o(\lambda^2), \\ &S_{\omega}'(\varphi_{\lambda}) =\lambda S_{\omega}''(\phi_{\omega})\psi+o(\lambda), \quad P(\varphi_{\lambda})=\lambda \dual{S_{\omega}''(\phi_{\omega})\psi}{\psi}+o(\lambda) \end{align*} as $\lambda\to 0$. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm1}] Suppose that $\Trans (\omega t)\phi_{\omega}$ is stable. For $\lambda$ close to $0$, let $\varphi_{\lambda}\in X$ be the vector given in Lemma \ref{lem4}, and let $u_{\lambda}(t)$ be the solution of \eqref{eq:2.3} with $u_{\lambda}(0)=\varphi_{\lambda}$. 
Then, there exists $\lambda_0>0$ such that if $|\lambda|<\lambda_0$, then $u_{\lambda}(t)\in \mathcal{N}_{\varepsilon_0}(\phi_{\omega})$ for all $t\ge 0$, where $\varepsilon_0$ is the positive constant given in Lemma \ref{lem3}. Moreover, by the definition \eqref{eq:4.2} of $A$ and $\Lambda$, there exist positive constants $C_1$ and $C_2$ such that $|A(v)|\le C_1$ and $|\Lambda(v)|\le C_2$ for all $v\in \mathcal{N}_{\varepsilon_0}(\phi_{\omega})$. Let $\lambda \in (0,\lambda_0)$ and put $\delta_{\lambda}=E(\phi_{\omega})-E(\varphi_{\lambda})>0$. Since $P(\varphi_{\lambda})<0$ and $t\mapsto P(u_{\lambda}(t))$ is continuous, by Lemma \ref{lem3} and conservation of $E$ and $Q$, we see that $P(u_{\lambda}(t))<0$ for all $t\ge 0$ and that $$\delta_{\lambda}=E(\phi_{\omega})-E(u_{\lambda}(t)) \le -\Lambda (u_{\lambda}(t))P(u_{\lambda}(t)) \le -C_2 P(u_{\lambda}(t))$$ for all $t\ge 0$. Moreover, by Lemma \ref{lem2}, we have $$\frac{d}{dt}A(u_{\lambda}(t))=-P(u_{\lambda}(t))\ge \delta_{\lambda}/C_2$$ for all $t\ge 0$, which implies that $A(u_{\lambda}(t))\to \infty$ as $t\to \infty$. This contradicts the fact that $|A(u_{\lambda}(t))|\le C_1$ for all $t\ge 0$. Hence, $\Trans (\omega t)\phi_{\omega}$ is unstable. \end{proof} \section{Proof of Theorem \ref{thm2}}\label{sect:proofthm2} In this section we make the same assumptions as in Theorem \ref{thm2}. We modify the argument in the previous section to prove Theorem \ref{thm2}. We put \begin{equation}\label{eq:6.1} \nu:=3\mu-\dual{S_{\omega}'''(\phi_{\omega})(\psi,\psi)}{\psi}. \end{equation} By the assumption \eqref{eq:3.1}, $\nu\ne 0$. \begin{lemma}\label{lem5} There exist positive constants $\varepsilon_0$ and $k^*$ such that $$E(u)\ge E(\phi_{\omega})+\frac{\nu}{|\nu|}k^* P(u)$$ for all $u\in \mathcal{N}_{\varepsilon_0}(\phi_{\omega})$ satisfying $Q(u)=Q(\phi_{\omega})$. 
\end{lemma} \begin{proof} We put $v=M(u)-\phi_{\omega}$, and decompose $v$ as $$v=a\phi_{\omega}+bJ\phi_{\omega}+c\psi+w,$$ where $a$, $b$, $c\in \R$, $w\in W$, and $W$ is the set defined by \eqref{eq:5.1}. Then we have $(\phi_{\omega},v)_{H}=a\|\phi_{\omega}\|_{H}^2=-Q(v)$. Moreover, by \eqref{eq:2.1}, (A2b) and Lemma \ref{lem1}, we have $(\phi_{\omega},J\phi_{\omega})_X=(\psi,J\phi_{\omega})_X=(M(u),J\phi_{\omega})_X=0$. Thus, $0=(v,J\phi_{\omega})_X=b\|J\phi_{\omega}\|_X^2+(w,J\phi_{\omega})_X$, and \begin{equation}\label{eq:6.2} \|bJ\phi_{\omega}\|_X\le \|w\|_X, \quad |c|\|\psi\|_X+2\|w\|_X\ge \|v\|_X-O(\|v\|_{X}^2). \end{equation} We also have \eqref{eq:5.3}. Here, by Remark \ref{rem2}, we have \begin{align} \dual{S_{\omega}''(\phi_{\omega})v}{v} &=c^2\dual{S_{\omega}''(\phi_{\omega})\psi}{\psi} +2c \dual{S_{\omega}''(\phi_{\omega})\psi}{w}+\dual{S_{\omega}''(\phi_{\omega})w}{w} +o(\|v\|_{X}^2) \nonumber \\ &=\dual{S_{\omega}''(\phi_{\omega})w}{w}+o(\|v\|_{X}^2). \label{eq:6.3} \end{align} By \eqref{eq:5.3}, \eqref{eq:6.3} and (A3), we have \begin{equation}\label{eq:6.4} E(u)-E(\phi_{\omega}) =\frac{1}{2}\dual{S_{\omega}''(\phi_{\omega})w}{w}+o(\|v\|_{X}^2) \ge \frac{k_0}{2}\|w\|_X^2-o(\|v\|_{X}^2). \end{equation} On the other hand, we have $c=(v,\psi)_H=\Lambda (u)=O(\|v\|_X)$ and \begin{align*} &S_{\omega}'(\phi_{\omega}+v) =S_{\omega}''(\phi_{\omega})v+\frac{1}{2}S_{\omega}'''(\phi_{\omega})(v,v) +o(\|v\|_{X}^2), \\ &(M(u),J^2\phi_{\omega})_X =(\phi_{\omega}+v,J^2\phi_{\omega})_X=-\|J\phi_{\omega}\|_X^2+O(\|v\|_X). \end{align*} Thus, by \eqref{eq:4.5}, we have \begin{align*} P(u)=\dual{S_{\omega}''(\phi_{\omega})\psi}{v} &+\frac{1}{2}\dual{S_{\omega}'''(\phi_{\omega})(v,v)}{\psi} \\ &+\frac{c}{\|J\phi_{\omega}\|_{X}^2} \dual{S_{\omega}''(\phi_{\omega})v}{JI^{-1}RJ\phi_{\omega}}+o(\|v\|_{X}^2). 
\end{align*} Here, by \eqref{eq:3.1} and \eqref{eq:6.2}, we have \begin{align*} \dual{S_{\omega}''(\phi_{\omega})\psi}{v} &=\mu (\phi_{\omega},v)_{H}=-\mu Q(v)=-\frac{\mu}{2}\|v\|_H^2 \\ &=-\frac{\mu}{2}\left\{a^2\|\phi_{\omega}\|_{H}^2 +b^2\|J\phi_{\omega}\|_{H}^2+c^2\|\psi\|_{H}^2+\|w\|_{H}^2\right\} \\ &=-\frac{c^2\mu}{2}+O(\|w\|_X^2)+o(\|v\|_X^2), \end{align*} \begin{align*} \dual{S_{\omega}'''(\phi_{\omega})(v,v)}{\psi} =c^2 & \dual{S_{\omega}'''(\phi_{\omega})(\psi,\psi)}{\psi} +2c\dual{S_{\omega}'''(\phi_{\omega})(\psi,bJ\phi_{\omega}+w)}{\psi} \\ &+O(\|w\|_X^2)+o(\|v\|_X^2), \end{align*} \begin{align*} &c\dual{S_{\omega}''(\phi_{\omega})v}{JI^{-1}RJ\phi_{\omega}} =c\dual{S_{\omega}''(\phi_{\omega})(c\psi+w)}{JI^{-1}RJ\phi_{\omega}}+o(\|v\|_X^2) \\ &=-c^2\mu \|J\phi_{\omega}\|_X^2 +c\dual{S_{\omega}''(\phi_{\omega})w}{JI^{-1}RJ\phi_{\omega}}+o(\|v\|_X^2). \end{align*} Therefore, there exists a constant $k>0$ such that $$\left|P(u)+\frac{\nu}{2}c^2\right| \le k \left(|c|\|w\|_{X}+\|w\|_{X}^2\right)+o(\|v\|_{X}^2),$$ where $\nu$ is the constant defined by \eqref{eq:6.1}. Thus, there exists a constant $k_1>0$ such that \begin{equation}\label{eq:6.5} -\frac{\nu}{|\nu|}P(u)\ge \frac{|\nu|}{4}c^2-k_1 \|w\|_{X}^2-o(\|v\|_{X}^2). \end{equation} By \eqref{eq:6.4} and \eqref{eq:6.5}, we have \begin{equation}\label{eq:6.6} E(u)-E(\phi_{\omega})-\frac{\nu}{|\nu|}k^*P(u) \ge k_2 c^2+k_3\|w\|_{X}^2-o(\|v\|_{X}^2), \end{equation} where $k^*=k_0/(4k_1)$, $k_2=k^*|\nu|/4$ and $k_3=k_0/4$. Finally, since $\|v\|_X=\|M(u)-\phi_{\omega}\|_X<\varepsilon_0$, it follows from \eqref{eq:6.2} that the right hand side of \eqref{eq:6.6} is non-negative if $\varepsilon_0$ is sufficiently small. This completes the proof. 
\end{proof} \begin{lemma}\label{lem6} There exist $\lambda_1>0$ and a smooth mapping $\lambda\mapsto \varphi_{\lambda}$ from $(-\lambda_1,\lambda_1)$ to $X$ such that $\varphi_{0}=\phi_{\omega}$ and $$E(\varphi_{\lambda})<E(\phi_{\omega}), \quad Q(\varphi_{\lambda})=Q(\phi_{\omega}) \quad \mbox{for} \quad 0<\frac{\nu}{|\nu|}\lambda<\lambda_1.$$ \end{lemma} \begin{proof} For $\lambda$ close to $0$, we define $$\varphi_{\lambda}=\phi_{\omega}+\lambda \psi+\sigma(\lambda)\phi_{\omega}, \quad \sigma (\lambda)=\left(1-\frac{Q(\psi)}{Q(\phi_{\omega})}\lambda^2\right)^{1/2}-1.$$ Then, we have $Q(\varphi_{\lambda})=Q(\phi_{\omega})$ and \begin{align*} &\sigma (\lambda)=-\frac{1}{2\|\phi_{\omega}\|_H^2}\lambda^2+O(\lambda^4), \\ &S_{\omega}(\varphi_{\lambda})=S_{\omega}(\phi_{\omega})+\frac{\lambda^2}{2} \dual{S_{\omega}''(\phi_{\omega})\psi}{\psi}+\lambda \sigma (\lambda) \dual{S_{\omega}''(\phi_{\omega})\psi}{\phi_{\omega}} \\ &\hspace{20mm} +\frac{\lambda^3}{6}\dual{S_{\omega}'''(\phi_{\omega})(\psi,\psi)}{\psi}+o(\lambda^3). \end{align*} Here, by \eqref{eq:3.1} we have $\dual{S_{\omega}''(\phi_{\omega})\psi}{\psi}=\mu (\phi_{\omega},\psi)_H=0$ and $\dual{S_{\omega}''(\phi_{\omega})\psi}{\phi_{\omega}} =\mu \|\phi_{\omega}\|_H^2$. Thus, $$S_{\omega}(\varphi_{\lambda}) =S_{\omega}(\phi_{\omega})-\frac{\nu}{6}\lambda^3+o(\lambda^3).$$ This completes the proof. \end{proof} By Lemmas \ref{lem5} and \ref{lem6}, we can prove Theorem \ref{thm2} in the same way as in the proof of Theorem \ref{thm1}. We omit the details. \section{Proofs of Corollaries}\label{sect:proofcor} In this section we prove Corollaries \ref{cor2}, \ref{cor3} and \ref{cor4}. We first give a sufficient condition for (A3). \begin{lemma}\label{lem7} Assume $({\rm B2a})$ and $({\rm B3})$. 
Assume further that there exist $\psi\in X$ and constants $\lambda\le 0$ and $\mu\in \R$ such that $\|\psi\|_H=1$, $(\phi_{\omega},\psi)_H=(J\phi_{\omega},\psi)_H=0$ and $S_{\omega}''(\phi_{\omega})\psi=\lambda I\psi+\mu Q'(\phi_{\omega})$. Then $({\rm A3})$ holds. \end{lemma} \begin{proof} First we claim that $\dual{S_{\omega}''(\phi_{\omega})w}{w}>0$ for all $w\in X$ satisfying $w\ne 0$ and $(\phi_{\omega},w)_H=(J\phi_{\omega},w)_H=(\psi,w)_H=0$. We prove this by contradiction. Suppose that there exists $w_0\in X$ such that $\dual{S_{\omega}''(\phi_{\omega})w_0}{w_0}\le 0$, $w_0\ne 0$ and $(\phi_{\omega},w_0)_H=(J\phi_{\omega},w_0)_H=(\psi,w_0)_H=0$. Then there exists $(\alpha,\beta)\in \R^2$ such that $(\alpha,\beta)\ne (0,0)$ and $(\alpha \psi+\beta w_0,\chi_{\omega})_H=0$. We put $p=\alpha \psi+\beta w_0$. Then $p\in X$ satisfies $(\chi_{\omega},p)_H=(J\phi_{\omega},p)_H=0$ and $p\ne 0$. Thus, by (B2a), we have $\dual{S_{\omega}''(\phi_{\omega})p}{p}>0$. On the other hand, we have \begin{align*} &\dual{S_{\omega}''(\phi_{\omega})\psi}{w_0} =\lambda (\psi,w_0)_H+\mu (\phi_{\omega},w_0)_H=0, \\ &\dual{S_{\omega}''(\phi_{\omega})p}{p} =\alpha^2\lambda \|\psi\|_H^2 +2\alpha \beta \dual{S_{\omega}''(\phi_{\omega})\psi}{w_0} +\beta^2\dual{S_{\omega}''(\phi_{\omega})w_0}{w_0}\le 0. \end{align*} This contradiction proves our first claim. Next we prove (A3) by contradiction. Suppose that (A3) does not hold. Then there exists a sequence $(w_n)$ in $X$ such that $\dual{S_{\omega}''(\phi_{\omega})w_n}{w_n}\to 0$, $\|w_n\|_X=1$ and $(\phi_{\omega},w_n)_H=(J\phi_{\omega},w_n)_H=(\psi,w_n)_H=0$. There exist a subsequence $(w_{n'})$ of $(w_n)$ and $w\in X$ such that $w_{n'}\rightharpoonup w$ weakly in $X$. By (B3), we see that $w\ne 0$, $(\phi_{\omega},w)_H=(J\phi_{\omega},w)_H=(\psi,w)_H=0$ and $$\dual{S_{\omega}''(\phi_{\omega})w}{w}\le \liminf_{n'\to \infty} \dual{S_{\omega}''(\phi_{\omega})w_{n'}}{w_{n'}}=0.$$ However, this contradicts the first claim. 
This completes the proof. \end{proof} \begin{proof}[Proof of Corollary \ref{cor2}] We verify that $\phi_{\omega_0}$ satisfies the assumptions (A1), (A2b) and (A3) of Theorem \ref{thm2}. First, (A1) follows from (B1). Next, by (B1), $E'(\phi_{\omega})=\omega Q'(\phi_{\omega})$ for all $\omega\in \Omega$. Differentiating this with respect to $\omega$, we have \begin{equation}\label{eq:7.1} S_{\omega}''(\phi_{\omega})\phi_{\omega}'=Q'(\phi_{\omega}), \quad S_{\omega}'''(\phi_{\omega})(\phi_{\omega}',\phi_{\omega}') +S_{\omega}''(\phi_{\omega})\phi_{\omega}''=2Q''(\phi_{\omega})\phi_{\omega}', \end{equation} where $\phi_{\omega}'=d\phi_{\omega}/d\omega$ and $\phi_{\omega}''=d^2\phi_{\omega}/d\omega^2$. Meanwhile, differentiating $d(\omega)=E(\phi_{\omega})-\omega Q(\phi_{\omega})$, we have \begin{align} d'(\omega)&=\dual{E'(\phi_{\omega})}{\phi_{\omega}'} -\omega \dual{Q'(\phi_{\omega})}{\phi_{\omega}'} -Q(\phi_{\omega})=-Q(\phi_{\omega}), \nonumber \\ d''(\omega)&=-\dual{Q'(\phi_{\omega})}{\phi_{\omega}'} =-(\phi_{\omega},\phi_{\omega}')_H =-\dual{S_{\omega}''(\phi_{\omega})\phi_{\omega}'}{\phi_{\omega}'}. \label{eq:7.2} \end{align} Moreover, by \eqref{eq:7.1} and \eqref{eq:7.2}, we have \begin{align} d'''(\omega) &=-\dual{Q''(\phi_{\omega})\phi_{\omega}'}{\phi_{\omega}'} -\dual{Q'(\phi_{\omega})}{\phi_{\omega}''} \nonumber \\ &=\dual{S_{\omega}'''(\phi_{\omega})(\phi_{\omega}',\phi_{\omega}')}{\phi_{\omega}'} -3\dual{Q''(\phi_{\omega})\phi_{\omega}'}{\phi_{\omega}'} \nonumber \\ &=\dual{S_{\omega}'''(\phi_{\omega})(\phi_{\omega}',\phi_{\omega}')} {\phi_{\omega}'}-3\|\phi_{\omega}'\|_H^2. \label{eq:7.3} \end{align} Here we take $$\mu=\frac{1}{\|\phi_{\omega_0}'\|_H}, \quad \psi=\mu \phi_{\omega_0}'.$$ Then, $\|\psi\|_H=1$ and $S_{\omega_0}''(\phi_{\omega_0})\psi=\mu Q'(\phi_{\omega_0})$. By (B1), we have $(J\phi_{\omega_0},\psi)_H=(J\phi_{\omega_0},\psi)_X=0$. 
Moreover, since $d''(\omega_0)=0$ and $d'''(\omega_0)\ne 0$, by \eqref{eq:7.2} and \eqref{eq:7.3}, we have $(\phi_{\omega_0},\psi)_H=0$ and $$\dual{S_{\omega_0}'''(\phi_{\omega_0})(\psi,\psi)}{\psi} =\mu^3\dual{S_{\omega_0}'''(\phi_{\omega_0}) (\phi_{\omega_0}',\phi_{\omega_0}')}{\phi_{\omega_0}'}\ne 3\mu.$$ Thus, (A2b) is verified. Finally, (A3) follows from (A2b) and Lemma \ref{lem7}. \end{proof} The following lemma is used in the proof of Corollary \ref{cor3}. \begin{lemma}\label{lem8} Assume $({\rm B1})$ and that for each $\omega\in \Omega$, $({\rm B2a})$ and $({\rm B3})$ hold. If $\omega_0\in \Omega$ satisfies $d''(\omega_0)<0$, then there exist $\psi\in X$ and constants $\lambda<0$ and $\mu\in \R$ such that $\|\psi\|_H=1$, $(\phi_{\omega_0},\psi)_H=(J\phi_{\omega_0},\psi)_H=0$ and $S_{\omega_0}''(\phi_{\omega_0})\psi=\lambda I\psi+\mu Q'(\phi_{\omega_0})$. \end{lemma} \begin{proof} We define \begin{equation}\label{eq:7.4} \lambda=\inf\{\dual{S_{\omega_0}''(\phi_{\omega_0})w}{w}: w\in X,~ \|w\|_H=1,~ (\phi_{\omega_0},w)_H=0\}. \end{equation} By Theorem 4.1 of \cite{GSS1} and by \eqref{eq:3.3} in (B3), we see that $-\infty<\lambda<0$. Moreover, by the standard variational argument with (B3) (see, e.g., Chapter 11 of \cite{LL}), we see that \eqref{eq:7.4} is attained at some $\psi$, that is, there exists $\psi\in X$ such that $\dual{S_{\omega_0}''(\phi_{\omega_0})\psi}{\psi}=\lambda$, $\|\psi\|_H=1$ and $(\phi_{\omega_0},\psi)_H=0$. Then there exists a Lagrange multiplier $\mu\in \R$ such that $S_{\omega_0}''(\phi_{\omega_0})\psi=\lambda I\psi+\mu Q'(\phi_{\omega_0})$. Finally, by this equation, we have $$\lambda (\psi,J\phi_{\omega_0})_H =\dual{S_{\omega_0}''(\phi_{\omega_0})(J\phi_{\omega_0})}{\psi} -\mu (\phi_{\omega_0},J\phi_{\omega_0})_H=0.$$ Since $\lambda\ne 0$, we have $(J\phi_{\omega_0},\psi)_H=0$. This completes the proof. 
\end{proof} \begin{proof}[Proof of Corollary \ref{cor3}] We verify that $\phi_{\omega_0}$ satisfies the assumptions (A1), (A2a) and (A3) of Theorem \ref{thm1}. (A1) follows from (B1), and (A2a) follows from Lemma \ref{lem8}. Finally, (A3) follows from Lemmas \ref{lem7} and \ref{lem8}. \end{proof} The following lemma is based on Theorem 2 of \cite{mae2} (see also Theorem 3.3 of \cite{GSS1}), and is used in the proof of Corollary \ref{cor4}. \begin{lemma}\label{lem9} Assume $({\rm B1})$ and that for each $\omega\in \Omega$, $({\rm B2b})$ and $({\rm B3})$ hold. If $\omega_0\in \Omega$ satisfies $d''(\omega_0)>0$, then there exists a constant $k_0>0$ such that $$\dual{S_{\omega_0}''(\phi_{\omega_0})w}{w}\ge k_0 \|w\|_X^2$$ for all $w\in X$ satisfying $(\phi_{\omega_0},w)_H=(\chi_{1,\omega_0},w)_H=(J\phi_{\omega_0},w)_H=0$. \end{lemma} \begin{proof} As in the proof of Lemma \ref{lem7}, it suffices to prove that $\dual{S_{\omega_0}''(\phi_{\omega_0})w}{w}>0$ for all $w\in X$ satisfying $w\ne 0$ and $(\phi_{\omega_0},w)_H=(\chi_{1,\omega_0},w)_H=(J\phi_{\omega_0},w)_H=0$. We define $$P_{\omega}=\{p\in X: (\chi_{0,\omega},p)_H=(\chi_{1,\omega},p)_H=(J\phi_{\omega},p)_H=0\}.$$ Let $w\in X$ satisfy $w\ne 0$ and $(\phi_{\omega_0},w)_H=(\chi_{1,\omega_0},w)_H=(J\phi_{\omega_0},w)_H=0$. We decompose $w$ and $\phi_{\omega_0}'$ as \begin{align*} w&=a_0\chi_{0,\omega_0}+a_1\chi_{1,\omega_0}+a_2J\phi_{\omega_0}+p, \\ \phi_{\omega_0}'&=b_0\chi_{0,\omega_0}+b_1\chi_{1,\omega_0}+b_2J\phi_{\omega_0}+q, \end{align*} where $a_j,b_j\in \R$ and $p,q\in P_{\omega_0}$. Since $(\chi_{1,\omega_0},w)_H=(J\phi_{\omega_0},w)_H=0$, we have $a_1=a_2=0$. Moreover, by the first equation of \eqref{eq:7.1}, $$\lambda_{1,\omega_0}b_1 =(\lambda_{1,\omega_0}\chi_{1,\omega_0},\phi_{\omega_0}')_H =\dual{S_{\omega_0}''(\phi_{\omega_0})\chi_{1,\omega_0}}{\phi_{\omega_0}'} =(\chi_{1,\omega_0},\phi_{\omega_0})_H=0.$$ Thus, $b_1=0$. 
By \eqref{eq:7.2}, we have $$0>-d''(\omega_0) =\dual{S_{\omega_0}''(\phi_{\omega_0})\phi_{\omega_0}'}{\phi_{\omega_0}'} =b_0^2\lambda_{0,\omega_0}+\dual{S_{\omega_0}''(\phi_{\omega_0})q}{q}.$$ In particular, $b_0\ne 0$. On the other hand, by the first equation of \eqref{eq:7.1}, $$0=(\phi_{\omega_0},w)_H =\dual{S_{\omega_0}''(\phi_{\omega_0})\phi_{\omega_0}'}{w} =a_0b_0\lambda_{0,\omega_0}+\dual{S_{\omega_0}''(\phi_{\omega_0})q}{p}.$$ In particular, $p\ne 0$. By the Cauchy-Schwarz inequality, we have \begin{align*} &b_0^2|\lambda_{0,\omega_0}|\dual{S_{\omega_0}''(\phi_{\omega_0})w}{w} =b_0^2|\lambda_{0,\omega_0}|\{a_0^2\lambda_{0,\omega_0} +\dual{S_{\omega_0}''(\phi_{\omega_0})p}{p}\} \\ &>-a_0^2b_0^2\lambda_{0,\omega_0}^2+\dual{S_{\omega_0}''(\phi_{\omega_0})p}{p} \dual{S_{\omega_0}''(\phi_{\omega_0})q}{q} \\ &\ge -a_0^2b_0^2\lambda_{0,\omega_0}^2 +\dual{S_{\omega_0}''(\phi_{\omega_0})q}{p}^2=0. \end{align*} Therefore, $\dual{S_{\omega_0}''(\phi_{\omega_0})w}{w}>0$. This completes the proof. \end{proof} \begin{proof}[Proof of Corollary \ref{cor4}] We verify that $\phi_{\omega_0}$ satisfies the assumptions (A1), (A2a) and (A3) of Theorem \ref{thm1}. (A1) follows from (B1). Let $\psi=\chi_{1,\omega_0}$. Then, (A2a) follows from (B2b). Finally, (A3) follows from Lemma \ref{lem9}. \end{proof} \section{Examples}\label{sect:examples} \subsection{Linear Schr\"odinger equation on a bounded interval} \label{ss:1} We begin with a simple \lq\lq counter-example'' to emphasize the role of (A3) in Theorem \ref{thm1}. We consider the linear Schr\"odinger equation on the interval $(0,\pi)$ with zero-Dirichlet boundary conditions \begin{equation}\label{eq:8.1} \left\{\begin{array}{ll} i\partial_tu-\partial_x^2u=0, &\quad t\in \R,~ x\in (0,\pi), \\ u(t,0)=u(t,\pi)=0, &\quad t\in \R. \end{array}\right.
\end{equation} Let $H=L^2(0,\pi)$ and $X=H^1_0(0,\pi)$ be real Hilbert spaces with inner products $$(u,v)_H=\Re \int_{0}^{\pi}u(x)\overline{v(x)}\,dx, \quad (u,v)_X=(\partial_xu,\partial_xv)_H.$$ We define $E(u)=(1/2)\|\partial_xu\|_H^2$ and $Ju=iu$ for $u\in X$. $\Trans$ is given by $\Trans (s)u=e^{is}u$ for $u\in X$ and $s\in \R$. For $u_0\in X$, the solution $u(t)$ of \eqref{eq:8.1} with $u(0)=u_0$ is expressed as $$u(t)=\sum_{n=1}^{\infty}a_n\Trans(n^2t)\varphi_n, \quad \varphi_n(x)=\sqrt{\frac{2}{\pi}}\,\sin nx, ~ a_n=\int_{0}^{\pi}u_0(x)\varphi_n(x)\,dx.$$ For each $n\in \N$, the bound state $\Trans(n^2t)\varphi_n$ is stable in the sense of the definition in Section \ref{sect:form}. In particular, we consider the case $n=2$, and put $\omega=n^2=4$, $\phi_{\omega}=\varphi_2$ and $\psi=\varphi_1$. Then, (A1) and (A2a) are satisfied. On the other hand, the inequality \eqref{eq:3.2} holds for $w\in X$ satisfying $(\phi_{\omega},w)_H=(J\phi_{\omega},w)_H=(\psi,w)_H=0$ and $(J\psi,w)_H=0$, but (A3) does not hold. This simple example shows the optimality of (A3) in Theorem \ref{thm1}. \subsection{NLS with a delta function potential}\label{ss:delta} We consider a nonlinear Schr\"odinger equation with a delta function potential \begin{equation}\label{eq:8.2} i\partial_tu-\partial_x^2u+\gamma \delta(x)u=|u|^{p-1}u, \quad (t,x)\in \R\times \R, \end{equation} where $1<p<\infty$, $\gamma\in \R$ and $\delta(x)$ is the delta measure at the origin. Although the stability problem of bound states for \eqref{eq:8.2} has been studied by many authors (see \cite{FJ,FOO,GHW,LFF}), we give some remarks to complement their results. For simplicity, we consider the repulsive potential case $\gamma>0$ only. As real Hilbert spaces $H$ and $X$, we take $H=L^2(\R)$ and $X=H^1(\R)$ or $H=L^2_{\even}(\R)$ and $X=H^1_{\even}(\R)$.
We define the inner products of $H$ and $X$ by \begin{align*} &(u,v)_H=\Re \int_{\R}u(x)\overline{v(x)}\,dx, \\ &(u,v)_X=(\partial_xu,\partial_xv)_H+(u,v)_H+\gamma \Re [u(0)\overline{v(0)}]. \end{align*} Note that by the embedding $H^1(\R)\hookrightarrow C_b(\R)$, the norm $\|\cdot\|_X$ is equivalent to the usual norm in $H^1(\R)$. We define $E:X\to \R$ and $J:X\to X$ by $$E(u)=\frac{1}{2}\|\partial_xu\|_{L^2}^2+\frac{\gamma}{2}|u(0)|^2 -\frac{1}{p+1}\|u\|_{L^{p+1}}^{p+1}, \quad Ju=iu$$ for $u\in X$. Then, $E\in C^2(X,\R)$ for $1<p<\infty$, $E\in C^3(X,\R)$ if $p>2$, and $\Trans$ is given by $\Trans (s)u=e^{is}u$ for $u\in X$ and $s\in \R$. Moreover, \eqref{eq:8.2} is written in the form \eqref{eq:2.3}, and all the requirements in Section \ref{sect:form} are satisfied. For $\omega\in \Omega:=(-\infty,-\gamma^2/4)$, \eqref{eq:8.2} has a bound state $e^{i\omega t}\phi_{\omega}(x)$, where $\phi_{\omega}\in H^1(\R)$ is a positive solution of \begin{equation}\label{eq:8.3} -\partial_x^2\phi+\gamma \delta(x)\phi-\omega \phi-|\phi|^{p-1}\phi=0, \quad x\in \R. \end{equation} The positive solution $\phi_{\omega}$ of \eqref{eq:8.3} is given by \begin{equation}\label{eq:8.4} \phi_{\omega}(x)=\left\{\begin{array}{ll} \varphi_{\omega}(x-b_{\omega}), &\quad x\ge 0, \\ \varphi_{\omega}(x+b_{\omega}), &\quad x<0, \end{array}\right. \end{equation} where $b_{\omega}=2\tanh^{-1}({\gamma}/{2\sqrt{-\omega}})/[(p-1)\sqrt{-\omega}\,]$, and $$\varphi_{\omega}(x)=\left(\frac{-(p+1)\omega}{2}\right)^{1/(p-1)} \left\{\cosh \left(\frac{(p-1)\sqrt{-\omega}}{2}x \right)\right\}^{-2/(p-1)}$$ is a positive and even solution of \begin{equation}\label{eq:8.5} -\partial_x^2\varphi-\omega \varphi-|\varphi|^{p-1}\varphi=0,\quad x\in \R. 
\end{equation} Then we see that $\omega\mapsto \phi_{\omega}$ is a $C^2$ mapping from $\Omega$ to $X$, and that $$R\phi_{\omega} =-\partial_x^2\phi_{\omega}+\phi_{\omega}+\gamma \delta (x)\phi_{\omega} =(1+\omega) \phi_{\omega}+|\phi_{\omega}|^{p-1}\phi_{\omega}\in H^1_{\even}(\R).$$ Thus (B1) is satisfied. The linearized operator $S_{\omega}''(\phi_{\omega}):X\to X^*$ is given by $$\dual{S_{\omega}''(\phi_{\omega})u}{v} =\dual{L_{\omega}\Re u}{\Re v}+\dual{M_{\omega}\Im u}{\Im v}$$ for $u,v\in X$, where \begin{align*} &\dual{L_{\omega}w}{z}=\int_{\R}(\partial_xw\partial_xz-\omega wz -p\phi_{\omega}(x)^{p-1}wz)\,dx+\gamma w(0)z(0), \\ &\dual{M_{\omega}w}{z}=\int_{\R}(\partial_xw\partial_xz-\omega wz -\phi_{\omega}(x)^{p-1}wz)\,dx+\gamma w(0)z(0). \end{align*} The assumption (B3) is easily verified. It is proved in Lemmas 28 and 29 of \cite{FJ} that (B2a) holds for the case $X=H^1_{\even}(\R)$, while it is proved in Section 4 of \cite{LFF} that (B2b) holds for the case $X=H^1(\R)$. Here, we give a simple proof for the latter fact. \begin{lemma}\label{lem10} $\inf\{\dual{L_{\omega}v}{v}:v\in H^1_{\odd}(\R,\R),~ \|v\|_{L^2}=1\}<0$. \end{lemma} \begin{proof} Let $s\in (-b_{\omega},\infty)$, and we define $$\psi_s(x)=\left\{\begin{array}{ll} \varphi_{\omega}'(x-b_{\omega}-s), &\quad x>b_{\omega}+s, \\ \varphi_{\omega}'(x+b_{\omega}+s), &\quad x<-b_{\omega}-s, \\ 0, &\quad -b_{\omega}-s\le x\le b_{\omega}+s. \end{array}\right.$$ Then, $\psi_s\in H^1_{\odd}(\R,\R)$ and \begin{align*} f(s):=&\dual{L_{\omega}\psi_s}{\psi_s} \\ =&2\int_{b_{\omega}+s}^{\infty}\{|\varphi_{\omega}''(x-b_{\omega}-s)|^2 -\omega |\varphi_{\omega}'(x-b_{\omega}-s)|^2 \\ &\hspace{20mm} -p\varphi_{\omega}(x-b_{\omega})^{p-1} |\varphi_{\omega}'(x-b_{\omega}-s)|^2\}\,dx \\ =&\int_{0}^{\infty}\{|\varphi_{\omega}''(y)|^2-\omega |\varphi_{\omega}'(y)|^2 -p\varphi_{\omega}(y+s)^{p-1}|\varphi_{\omega}'(y)|^2\}\,dy. 
\end{align*} Since $\varphi_{\omega}$ is an even solution of \eqref{eq:8.5}, we see that $f(0)=0$. Moreover, since $$f'(s)=-p(p-1) \int_{0}^{\infty}\varphi_{\omega}(y+s)^{p-2} \varphi_{\omega}'(y+s)|\varphi_{\omega}'(y)|^2\,dy,$$ we have $f'(0)>0$. Thus, we see that $f(s)<0$ for $s<0$ close to $0$, which concludes the lemma. \end{proof} \begin{lemma}\label{lem11} For each $\omega\in \Omega$, $({\rm B2b})$ holds for $X=H^1(\R)$. \end{lemma} \begin{proof} By Lemma 31 of \cite{FJ}, the kernel of $S_{\omega}''(\phi_{\omega})$ is spanned by $J\phi_{\omega}$, while by Lemma 32 of \cite{FJ}, the number of negative eigenvalues of $S_{\omega}''(\phi_{\omega})$ is at most two. Moreover, we know that the first eigenvalue $\lambda_{0,\omega}$ is negative, and the corresponding eigenfunction $\chi_{0,\omega}\in H^1_{\even}(\R,\R)$. By Lemma \ref{lem10}, we have the second eigenvalue $\lambda_{1,\omega}<0$ and the corresponding eigenfunction $\chi_{1,\omega}\in H^1_{\odd}(\R,\R)$. Since $\phi_{\omega}\in H^1_{\even}(\R,\R)$, we see that $(\chi_{0,\omega},\chi_{1,\omega})_H=(\chi_{1,\omega},\phi_{\omega})_H=0$. This completes the proof. \end{proof} By the explicit formula \eqref{eq:8.4}, we can compute the derivatives of the function $d(\omega)=S_{\omega}(\phi_{\omega})$. The following is proved in \cite{FJ}. If $1<p\le 3$, then $d''(\omega)>0$ for all $\omega\in \Omega$. If $3<p<5$, then there exists $\omega_*\in \Omega$ such that $d''(\omega)<0$ for $\omega\in (\omega_*,-\gamma^2/4)$, $d''(\omega)>0$ for $\omega\in (-\infty,\omega_*)$, $d''(\omega_*)=0$ and $d'''(\omega_*)<0$. If $p\ge 5$, then $d''(\omega)<0$ for all $\omega\in \Omega$. In particular, for the case where $1<p\le 3$ and $\omega\in \Omega$ and for the case where $3<p<5$ and $\omega\in (-\infty,\omega_*)$, it follows from Corollary \ref{cor4} that $e^{i\omega t}\phi_{\omega}$ is unstable in $X=H^1(\R)$. This result is originally due to Theorem 4 of \cite{LFF}. 
However, it seems that the proof in \cite{LFF} is not complete. In fact, in Section 4 of \cite{LFF}, linear instability of $e^{i\omega t}\phi_{\omega}$ is proved by applying the abstract theory of \cite{GSS2}, but there is no proof for the assertion that linear instability implies (nonlinear) instability (see Remark \ref{rem4} in Section \ref{sect:results}). Note that, because of the singularity of the delta function potential, it seems difficult to apply the results available in the literature to this problem directly (see \cite{GO} and the references therein), and it might be easier to apply Corollary \ref{cor4}. Meanwhile, for the case where $3<p<5$ and $\omega=\omega_*$, it follows from Corollary \ref{cor2} that $e^{i\omega t}\phi_{\omega}$ is unstable in $X=H^1_{\even}(\R)$, which was left open in Remark 7 of \cite{LFF}. There are not many examples for which the derivatives of the function $d(\omega)$ can be computed explicitly. In \cite{mae1}, one can find other examples to which Corollary \ref{cor2} is applicable. \subsection{A system of NLS}\label{ss:system} We consider a system of nonlinear Schr\"odinger equations of the form \begin{equation}\label{eq:8.6} \left\{\begin{array}{l} i\partial_tu_1-\Delta u_1=|u_1|u_1+\gamma \overline{u_1}u_2, \quad (t,x)\in \R\times \R^N, \\ i\partial_tu_2-2\Delta u_2=2|u_2|u_2+\gamma u_1^2, \quad (t,x)\in \R\times \R^N, \end{array}\right. \end{equation} where $N\le 3$ and $\gamma>0$. This is a reduced system of a three-component system studied in \cite{CCO1,CCO2}. In what follows, we use the vectorial notation $\vec u=(u_1,u_2)$, regarded as a column vector.
We define the inner products of $H=L^2_{\rad}(\R^N)\times L^2_{\rad}(\R^N)$ and $X=H^1_{\rad}(\R^N)\times H^1_{\rad}(\R^N)$ by \begin{align*} &(\vec u,\vec v)_H=\Re \int_{\R^N}u_1(x)\overline{v_1(x)}\,dx +\Re \int_{\R^N}u_2(x)\overline{v_2(x)}\,dx, \\ &(\vec u, \vec v)_X=(\nabla \vec u,\nabla \vec v)_H+(\vec u,\vec v)_H \end{align*} for $\vec u=(u_1,u_2)$ and $\vec v=(v_1,v_2)$. We define $J\vec u=(iu_1,2iu_2)$ and \begin{align*} E(\vec u)=\frac{1}{2}\|\nabla u_1\|_{L^2}^2 +\frac{1}{2}\|\nabla u_2\|_{L^2}^2 -\frac{1}{3}\|u_1\|_{L^3}^3-\frac{1}{3}\|u_2\|_{L^3}^3 -\frac{\gamma}{2} \Re \int_{\R^N}u_1^2\overline{u_2}\,dx, \end{align*} for $\vec u\in X$. Then, \eqref{eq:8.6} is written in the form \eqref{eq:2.3}, $\Trans$ is given by $\Trans (s)\vec u=(e^{is}u_1,e^{2is}u_2)$ for $\vec u\in X$ and $s\in \R$, and all the requirements in Section \ref{sect:form} are satisfied. Let $\omega<0$ and let $\varphi_{\omega}\in H^1_{\rad}(\R^N)$ be the unique positive radial solution of \begin{equation}\label{eq:8.7} -\Delta \varphi-\omega \varphi-\varphi^2=0, \quad x\in \R^N. \end{equation} In the same way as in \cite{CCO1,CCO2}, it is proved that the semi-trivial solution $(0,e^{2i\omega t}\varphi_{\omega})$ of \eqref{eq:8.6} is stable if $0<\gamma<1$, and unstable if $\gamma>1$. Here, we consider the instability of bound states bifurcating from the semi-trivial solution at $\gamma=1$. For $0<\gamma<1$, we put $\vec \phi_{\omega}=(\alpha \varphi_{\omega},\beta \varphi_{\omega})$, where $$\alpha=\frac{2-\gamma-\gamma \sqrt{1+2\gamma (\gamma-1)}}{2+\gamma^3}, \quad \beta=\frac{1+\gamma^2+\sqrt{1+2\gamma (\gamma-1)}}{2+\gamma^3}.$$ Then, $S_{\omega}'(\vec \phi_{\omega})=0$, and (A1) is satisfied. Note that $\alpha$ and $\beta$ are positive constants, and satisfy $|\alpha|+\gamma \beta=1$, $\gamma \alpha^2+2|\beta|\beta=2\beta$, and $(\alpha,\beta)\to (0,1)$ as $\gamma\to 1$.
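The identities $|\alpha|+\gamma\beta=1$ and $\gamma\alpha^2+2|\beta|\beta=2\beta$ stated above, as well as the limit $(\alpha,\beta)\to(0,1)$, can be confirmed numerically. The following short check (not part of the original argument) evaluates the closed-form expressions for a few values of $\gamma$:

```python
import math

def alpha_beta(gamma):
    """Closed-form coefficients of the bifurcating bound state
    (alpha, beta) for 0 < gamma <= 1."""
    r = math.sqrt(1.0 + 2.0 * gamma * (gamma - 1.0))
    denom = 2.0 + gamma ** 3
    alpha = (2.0 - gamma - gamma * r) / denom
    beta = (1.0 + gamma ** 2 + r) / denom
    return alpha, beta

for gamma in (0.1, 0.5, 0.9):
    a, b = alpha_beta(gamma)
    # alpha, beta > 0 and |alpha| + gamma*beta = 1
    assert a > 0 and b > 0
    assert abs(a + gamma * b - 1.0) < 1e-12
    # gamma*alpha^2 + 2|beta|*beta = 2*beta
    assert abs(gamma * a ** 2 + 2.0 * b ** 2 - 2.0 * b) < 1e-12
```

At $\gamma=1$ the formulas give $(\alpha,\beta)=(0,1)$ exactly, recovering the semi-trivial solution.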
By applying Theorem \ref{thm1}, we show that the bound state $\Trans(\omega t)\vec \phi_{\omega}$ is unstable for any $0<\gamma<1$. First, the linearized operator $S_{\omega}''(\vec \phi_{\omega})$ is given by \begin{equation}\label{eq:8.8} \dual{S_{\omega}''(\vec \phi_{\omega})\vec u}{\vec u} =\dual{\mathcal{L}_R \Re \vec u}{\Re \vec u} +\dual{\mathcal{L}_I \Im \vec u}{\Im \vec u} \end{equation} for $\vec u=(u_1,u_2)\in X$, where $\Re \vec u=(\Re u_1,\Re u_2)$, $\Im \vec u=(\Im u_1,\Im u_2)$, and \begin{align*} &\mathcal{L}_R=\left[\begin{array}{cc} -\Delta-\omega & 0 \\ 0 & -\Delta-\omega \end{array}\right] -\left[\begin{array}{cc} (2\alpha+\gamma \beta) \varphi_{\omega} & \gamma \alpha \varphi_{\omega} \\ \gamma \alpha \varphi_{\omega} & 2\beta \varphi_{\omega} \end{array}\right], \\ &\mathcal{L}_I=\left[\begin{array}{cc} -\Delta-\omega & 0 \\ 0 & -\Delta-\omega \end{array}\right] -\left[\begin{array}{cc} (\alpha-\gamma \beta) \varphi_{\omega} & \gamma \alpha \varphi_{\omega} \\ \gamma \alpha \varphi_{\omega} & \beta \varphi_{\omega} \end{array}\right]. \end{align*} For $a\in \R$, we define $L_av=-\Delta v-\omega v-a\varphi_{\omega} v$ for $v\in H^1_{\rad}(\R^N,\R)$. Then, by orthogonal matrices $$A=\frac{1}{\sqrt{\alpha^2+\beta^2}} \left[\begin{array}{cc} \alpha & \beta \\ -\beta & \alpha \end{array}\right], \quad B=\frac{1}{\sqrt{\alpha^2+4\beta^2}} \left[\begin{array}{cc} \alpha & 2\beta \\ -2\beta & \alpha \end{array}\right],$$ $\mathcal{L}_R$ and $\mathcal{L}_I$ are diagonalized as follows: \begin{equation}\label{eq:8.9} \mathcal{L}_R=A^* \left[\begin{array}{cc} L_2 & 0 \\ 0 & L_{(2-\gamma)\beta} \end{array}\right]A, \quad \mathcal{L}_I=B^* \left[\begin{array}{cc} L_1 & 0 \\ 0 & L_{(1-2\gamma)\beta} \end{array}\right]B. \end{equation} Moreover, by elementary computations, we see that $1<(2-\gamma)\beta<2$ and $(1-2\gamma)\beta<1$ for $0<\gamma<1$. Here, we recall some known results on the operator $L_a$ defined on $H^1_{\rad}(\R^N,\R)$. 
\begin{lemma}\label{lem12} Let $N\le 3$ and let $\varphi_{\omega}$ be the positive radial solution of \eqref{eq:8.7}. \par \noindent $({\rm i})$ \hspace{1mm} $L_2$ has one negative eigenvalue, $\ker L_2=\{0\}$, and there exists a constant $c_1>0$ such that $\dual{L_2v}{v}\ge c_1 \|v\|_{H^1}^2$ for all $v\in H^1_{\rad}(\R^N,\R)$ satisfying $(\varphi_{\omega},v)_{L^2}=0$. \par \noindent $({\rm ii})$ \hspace{1mm} $L_1$ is non-negative, $\ker L_1$ is spanned by $\varphi_{\omega}$, and there exists $c_2>0$ such that $\dual{L_1v}{v} \ge c_2 \|v\|_{H^1}^2$ for all $v\in H^1_{\rad}(\R^N,\R)$ satisfying $(\varphi_{\omega},v)_{L^2}=0$. \par \noindent $({\rm iii})$ \hspace{1mm} If $a<1$, then there exists $c_3>0$ such that $\dual{L_a v}{v}\ge c_3 \|v\|_{H^1}^2$ for all $v\in H^1_{\rad}(\R^N,\R)$. \par \noindent $({\rm iv})$ \hspace{1mm} If $1<a<2$, then $\dual{L_a \varphi_{\omega}}{\varphi_{\omega}}<0$, and there exists $c_4>0$ such that $\dual{L_a v}{v}\ge c_4 \|v\|_{H^1}^2$ for all $v\in H^1_{\rad}(\R^N,\R)$ satisfying $(\varphi_{\omega},v)_{L^2}=0$. \end{lemma} \begin{proof} The parts (i) and (ii) are well-known (see \cite{wei1}). Note that the quadratic nonlinearity in \eqref{eq:8.7} is $L^2$-subcritical if and only if $N\le 3$, and that the assumption $N\le 3$ is essential for (i). The parts (iii) and (iv) follow from (i) and (ii) immediately. \end{proof} We put $\vec \xi=(-\beta \varphi_{\omega},\alpha \varphi_{\omega})$ and $\vec \psi=\vec \xi/\|\vec \xi\|_H$. Then, $A\vec \psi=(0,\varphi_{\omega})/\|\varphi_{\omega}\|_{L^2}$. By Lemma \ref{lem12} (iv), we have $$\dual{S_{\omega}''(\vec \phi_{\omega})\vec \psi}{\vec \psi} =\dual{\mathcal{L}_R \vec \psi}{\vec \psi} =\dual{L_{(2-\gamma)\beta} \varphi_{\omega}}{\varphi_{\omega}} /\|\varphi_{\omega}\|_{L^2}^2<0,$$ and (A2a) is satisfied. Next, we show two lemmas to prove (A3). 
\begin{lemma}\label{lem13} There exists a constant $k_1>0$ such that $\dual{\mathcal{L}_R \vec v}{\vec v}\ge k_1 \|\vec v\|_{X}^2$ for all $\vec v\in H^1_{\rad}(\R^N,\R)^2$ satisfying $(\vec \phi_{\omega},\vec v)_H=0$ and $(\vec \xi,\vec v)_H=0$. \end{lemma} \begin{proof} By \eqref{eq:8.9}, we have $\dual{\mathcal{L}_R \vec v}{\vec v} =\dual{L_2w_1}{w_1}+\dual{L_{(2-\gamma)\beta}w_2}{w_2}$, where $\vec w=A\vec v$. Since $(\varphi_{\omega},w_1)_{L^2} =(\vec \phi_{\omega},\vec v)_H/\sqrt{\alpha^2+\beta^2}=0$, Lemma \ref{lem12} (i) implies $\dual{L_2w_1}{w_1}\ge c_1 \|w_1\|_{H^1}^2$. Moreover, since $(\varphi_{\omega},w_2)_{L^2}=(\vec \xi,\vec v)_H/\sqrt{\alpha^2+\beta^2}=0$ and $1<(2-\gamma)\beta<2$, Lemma \ref{lem12} (iv) implies $\dual{L_{(2-\gamma)\beta}w_2}{w_2}\ge c_4 \|w_2\|_{H^1}^2$. Since $\|\vec w\|_X=\|\vec v\|_X$, this completes the proof. \end{proof} \begin{lemma}\label{lem14} There exists a constant $k_2>0$ such that $\dual{\mathcal{L}_I \vec v}{\vec v}\ge k_2 \|\vec v\|_{X}^2$ for all $\vec v\in H^1_{\rad}(\R^N,\R)^2$ satisfying $(\vec \eta,\vec v)_H=0$, where $\vec \eta=(\alpha \varphi_{\omega},2\beta \varphi_{\omega})$. \end{lemma} \begin{proof} By \eqref{eq:8.9}, we have $\dual{\mathcal{L}_I \vec v}{\vec v} =\dual{L_1w_1}{w_1}+\dual{L_{(1-2\gamma) \beta}w_2}{w_2}$, where $\vec w=B\vec v$. Since $(\varphi_{\omega},w_1)_{L^2}=(\vec \eta,\vec v)_H/\sqrt{\alpha^2+4\beta^2}=0$, Lemma \ref{lem12} (ii) implies $\dual{L_1w_1}{w_1}\ge c_2 \|w_1\|_{H^1}^2$. Moreover, since $(1-2\gamma) \beta<1$, Lemma \ref{lem12} (iii) implies $\dual{L_{(1-2\gamma) \beta}w_2}{w_2}\ge c_3 \|w_2\|_{H^1}^2$. This completes the proof. \end{proof} We verify (A3). Let $\vec w\in X$ satisfy $(\vec \phi_{\omega},\vec w)_H =(J\vec \phi_{\omega},\vec w)_H=(\vec \psi,\vec w)_H=0$.
Since $(\vec \phi_{\omega},\Re \vec w)_H=(\vec \phi_{\omega},\vec w)_H=0$ and $(\vec \xi,\Re \vec w)_H=\|\vec \xi\|_H(\vec \psi,\vec w)_H=0$, it follows from Lemma \ref{lem13} that $\dual{\mathcal{L}_R \Re \vec w}{\Re \vec w}\ge k_1\|\Re \vec w\|_{X}^2$. Meanwhile, since $(\vec \eta,\Im \vec w)_H=(J\vec \phi_{\omega},\vec w)_H=0$, Lemma \ref{lem14} implies $\dual{\mathcal{L}_I \Im \vec w}{\Im \vec w}\ge k_2\|\Im \vec w\|_{X}^2$. Thus, by \eqref{eq:8.8}, we see that (A3) is satisfied. In conclusion, it follows from Theorem \ref{thm1} that the bound state $\Trans(\omega t)\vec \phi_{\omega}$ is unstable for any $0<\gamma<1$. Finally, we consider the instability of the semi-trivial solution $\Trans(\omega t)(0,\varphi_{\omega})$ at the bifurcation point $\gamma=1$. In this case, we have $\mathcal{L}_R\vec v=(L_1v_1,L_2v_2)$ and $\mathcal{L}_I\vec v=(L_{-1}v_1,L_1v_2)$ for $\vec v=(v_1,v_2)\in H^1_{\rad}(\R^N,\R)^2$, the kernel of $S_{\omega}''(0,\varphi_{\omega})$ is spanned by $J(0,\varphi_{\omega})$ and $(\varphi_{\omega},0)$, and (A3) holds with $\psi=(\varphi_{\omega},0)/\|\varphi_{\omega}\|_{L^2}$. Since $E\notin C^3(X,\R)$, Corollary \ref{cor1} is not applicable to this problem directly. However, by modifying the proof of Theorem \ref{thm2}, it is proved that $\Trans(\omega t)(0,\varphi_{\omega})$ is unstable for the case $\gamma=1$. The details will be discussed in a forthcoming paper \cite{CO}. \vspace{3mm}\noindent \textbf{Acknowledgements.} This work was supported by JSPS Excellent Young Researchers Overseas Visit Program and by JSPS KAKENHI (21540163). The author is grateful to Mathieu Colin and Masaya Maeda for useful discussions.
\section{Introduction} Deep learning approaches have been successfully applied to expressive text-to-speech (TTS). Expressive styles of speech can be modeled using explicit labels for style attributes~\cite{zhu2019controlling,lei2021fine} or by extracting high-level latent features from input speech~\cite{wang2018style,skerry2018towards,hsu2019hierarchical,zhang2019learning}. However, achieving competitive performance in low-resource scenarios remains a challenge. In previous studies on low-resource TTS, researchers used transfer learning~\cite{jia2018transfer,tits2019exploring,chung2019semi} or multi-speaker modeling~\cite{valle2020mellotron,Char2017DeepV2,byambadorj2021multi}. Most recently, data augmentation techniques have been successfully applied in low-resource scenarios~\cite{hwang2021tts,huybrechts2021low,shah2021non}. In particular, a cross-speaker style transfer method via voice conversion (VC) enables expressive TTS systems to be built where expressive data is only available for some existing speakers (i.e., the source speaker)~\cite{ribeiro2022cross}. In this method, a pair of neutral speech databases of source and target speakers is used to learn a VC model. Then, the learned VC model is used to transfer the source speaker's expressive style (e.g., conversation) to the target speaker. Finally, a TTS acoustic model is trained using the VC-augmented speech together with the recorded neutral speech. However, although a high-quality VC model is crucial for data augmentation approaches, it is challenging to learn a stable VC model when (1) the amount of data is limited under low-resource conditions or (2) highly expressive speech has large acoustic variety. Under such conditions, a lack of accurate prosody conversion is often observed because VC models tend to focus on spectral (e.g., Mel-spectrogram) conversion~\cite{kaneko2019CycleGAN-VC2}.
Although some VC models use a mean-variance normalization method for fundamental frequency ($F_o$) conversion~\cite{liu2007high}, this is not sufficient to stably generate the highly emotional voice of the target speaker. To address the aforementioned problem, we propose a novel data augmentation method that combines pitch-shift (PS) augmentation and non-parallel VC-based augmentation. Our method differs from existing methods~\cite{ribeiro2022cross} in that the proposed system focuses on improving VC performance to make it suitable for converting emotional attributes, even though the target speaker's data only consist of neutral recordings. Specifically, we first apply PS-based augmentation to both the source and target speakers' neutral recordings. As this enables the VC model to cover a variety of pitch dynamics, it substantially improves the stability of the training process. Additionally, we propose incorporating a short-time Fourier transform (STFT)-based $F_o$ regularization loss into the optimization criteria of the VC training process. This also stabilizes the target speaker's $F_o$ trajectory, which is crucial for converting highly emotional speech segments. As a result, the VC model learned by the proposed method stably transfers the source speaker's speaking style to the target speaker, and even makes it possible to build the target speaker's emotional TTS system. We investigated the effectiveness of the proposed data augmentation approach by performing subjective evaluation tasks. Note that PS-based augmentation and STFT $F_o$ regularization loss can be extended to any neural VC model; however, our focus is the Scyclone model~\cite{kanagaki2020scyclone} based on a cycle-consistent adversarial network (CycleGAN)~\cite{zhu2017unpaired}.
The experimental results demonstrated that our VC-augmented TTS system achieved better naturalness and emotional similarity than conventional methods when only 1,000 utterances of the target speaker's neutral data were available. \section{Method} \begin{figure}[!t] \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=0.9\linewidth]{overview_a.pdf} \centerline{(a)} \medskip \vspace{-3mm} \end{minipage} \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=0.9\linewidth]{overview_b.pdf} \centerline{(b)} \medskip \vspace{-3mm} \end{minipage} \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=0.9\linewidth]{overview_c.pdf} \centerline{(c)} \medskip \vspace{-3mm} \end{minipage} \vspace{-8mm} \caption{ Overview of the proposed method for building emotional TTS. We used data augmentation for training both the VC and TTS model: (a) PS-based data augmentation, (b) VC-based data augmentation, and (c) TTS model. } \label{fig:overview} \vspace{-2mm} \end{figure} Figure~\ref{fig:overview} shows an overview of our proposed method. In this study, we investigate three speaking styles: \textit{neutral}, \textit{happiness}, and \textit{sadness}. The proposed method consists of PS-based data augmentation, VC-based data augmentation, and an emotional TTS system. In the following, we describe the details of each component. \subsection{PS-based data augmentation} Figure~\ref{fig:pitch-shift} shows an overview of our PS-based data augmentation. Unlike traditional PS methods such as pitch-synchronous overlap-add~\cite{charpentier1986diphone} and vocoders~\cite{Morise2016WORLDAV}, our method does not require $F_o$ estimation. Specifically, the proposed method applies a stretching technique to the spectral fine structure to convert the pitch of the input signal.
In the separation step as shown in Figure~\ref{fig:pitch-shift}a, we first compute a speech spectrogram using STFT and then separate it into spectral envelopes and fine structures based on the lag-window method~\cite{tohkura1978spectral}. Next, by applying a linear interpolation method, we stretch the spectral fine structure along the frequency axis. Let $S_{t,k}$ denote the spectral fine structure for the $t$-th time index and $k$-th frequency bin. Then, we obtain the stretched spectrum as follows~\cite{morise2018onsei}: \begin{align} \hat{S}_{t,\alpha k} &= {S}_{t,k}, \label{eq:stretch} \\ \alpha &= 2^{p/12}, \label{eq:alpha} \end{align}% where $\alpha$ denotes the stretching ratio determined by the semitone unit $p$. In the generation step as shown in Figure \ref{fig:pitch-shift}b, we obtain the pitch-shifted spectrogram by multiplying the original spectral envelope and corresponding stretched spectral fine structure. As shown in Figure \ref{fig:overview}a, we apply the proposed PS method to augment both the source and target speakers' neutral data. In detail, we vary the semitone unit $p$ in the range [-3, 12], which results in generating data 15 times larger than the original recordings. All the augmented datasets are used to train the VC model, which we explain further in the following section. 
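As a rough illustration of the separation, stretch, and generation steps above, the following NumPy sketch pitch-shifts a magnitude spectrogram. This is not the authors' implementation: the lag-window envelope estimation is replaced here by simple cepstral liftering, and the lifter order `n_lifter` is an assumed parameter.

```python
import numpy as np

def pitch_shift_spectrogram(mag, p, n_lifter=32):
    """Shift a magnitude spectrogram |STFT| (frames x bins) by p semitones.

    Steps: split the log spectrum into envelope and fine structure,
    stretch the fine structure by alpha = 2**(p/12) along frequency
    (Eqs. (1)-(2)), then recombine with the original envelope.
    """
    alpha = 2.0 ** (p / 12.0)                          # Eq. (2)
    log_mag = np.log(np.maximum(mag, 1e-10))
    n_bins = log_mag.shape[1]
    # spectral envelope: low-quefrency part of the real cepstrum
    # (a stand-in for the lag-window method of the paper)
    cep = np.fft.irfft(log_mag, axis=1)
    lifter = np.zeros(cep.shape[1])
    lifter[:n_lifter] = 1.0
    lifter[-(n_lifter - 1):] = 1.0                     # mirrored quefrencies
    env = np.fft.rfft(cep * lifter, axis=1).real
    fine = log_mag - env                               # spectral fine structure
    # S_hat[t, k] = S[t, k/alpha]: linear interpolation along frequency, Eq. (1)
    src_bins = np.arange(n_bins) / alpha
    fine_shift = np.stack(
        [np.interp(src_bins, np.arange(n_bins), fine[t]) for t in range(len(fine))]
    )
    return np.exp(env + fine_shift)
```

With `p = 0` the routine is an identity up to rounding, which is a convenient sanity check; bins stretched beyond the Nyquist edge are clamped by `np.interp`.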
\begin{figure}[!t] \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=0.9\linewidth]{pitchshift_a.pdf} \centerline{(a)} \medskip \vspace{-4mm} \end{minipage} \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=0.9\linewidth]{pitchshift_b.pdf} \centerline{(b)} \medskip \vspace{-4mm} \end{minipage} \vspace{-8mm} \caption{Pitch-shift data augmentation: (a) spectrogram separation and (b) spectrogram generation.} \label{fig:pitch-shift} \vspace{-2mm} \end{figure} \subsection{Non-parallel voice conversion} \label{ssec:vc} \subsubsection{Model} Among the many state-of-the-art VC models, we adopt the non-parallel Scyclone model~\cite{kanagaki2020scyclone} because of its stable generation and competitive quality. This method uses two separate modules: a CycleGAN-based spectrogram conversion model~\cite{zhu2017unpaired} and a single-Gaussian WaveRNN-based vocoder~\cite{okamoto2019real}. However, we only use the spectrogram conversion model, because the role of VC here is to augment acoustic features for training TTS models. Note that we use the log-Mel spectrogram as the target acoustic features together with continuous log $F_o$~\cite{yu2010continuous} and voiced/unvoiced flags (V/UV). Predicting these additional features using the VC model is essential to create emotional TTS models that include $F_o$-dependent high-fidelity neural vocoders~\cite{hwang2021high}.
\begin{figure}[tb] \centering \includegraphics[width=\linewidth]{f0compare3.pdf} \vspace{-8mm} \caption{Comparison of the generated $F_o$ with and without the proposed regularization and source $F_o$.} \label{fig:f0compare} \vspace{-4mm} \end{figure} \begin{table*}[t] \caption{Systems for comparison and the number of utterances for training the VC and TTS models.} \label{tbl:mos_model} \begin{center} \vspace{-15pt} \scalebox{0.90}{ \begin{tabular}{l|l|cc|ccc|cc|cc} \hline \multicolumn{1}{c|}{\multirow{3}{*}{Model}} & \multicolumn{1}{c|}{\multirow{3}{*}{Type}} & \multicolumn{2}{c|}{VC training} & \multicolumn{7}{c}{TTS training} \\ \cline{3-11} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{Neutral} & \multicolumn{3}{c|}{Neutral} & \multicolumn{2}{c|}{Happiness} & \multicolumn{2}{c}{Sadness} \\ \cline{3-11} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{REC} & \multicolumn{1}{c|}{PS-DA} & \multicolumn{1}{c}{Source} & \multicolumn{1}{c}{Target} & \multicolumn{1}{c|}{VC-DA} & \multicolumn{1}{c}{Source} & \multicolumn{1}{c|}{VC-DA} & \multicolumn{1}{c}{Source} & \multicolumn{1}{c}{VC-DA} \\ \hline Source & Recorded audio & - & - & - & - & - & - & - & - & - \\ SRC-TTS & Source speaker TTS & - & - & 5.0 K & - & - & 2.5 K & - & 2.5 K & - \\ TGT-NEU-TTS & Target speaker TTS & - & - & - & 2.5 K & - & - & - & - & - \\ MS-TTS & Multi-speaker TTS & - & - & 5.0 K & 2.5 K & - & 2.5 K & - & 2.5 K & - \\ VC-TTS & VC-TTS w/o PS & 2.5 K & - & - & 2.5 K & 5.0 K & - & 2.5 K & - & 2.5 K \\ VC-TTS-PS & VC-TTS w/ PS & 2.5 K & 37.5 K & - & 2.5 K & 5.0 K & - & 2.5 K & - & 2.5 K \\ VC-TTS-PS-1K & VC-TTS w/ PS & 1.0 K & 15.0 K & - & 1.0 K & 5.0 K & - & 2.5 K & - & 2.5 K \\ \hline \end{tabular}} \scalebox{0.90}{ \begin{threeparttable} \begin{tablenotes} \item REC: Recorded data; PS-DA: Data augmented by pitch-shifting; VC-DA: Data augmented by voice conversion \end{tablenotes} \end{threeparttable}} \vspace{-10pt} \end{center} \end{table*} 
\subsubsection{STFT $F_o$ regularization loss function} To avoid unnatural conversion of the prosody features, we propose an STFT-based $F_o$ regularization loss function. Following a previous study on a spectrogram-domain $F_o$ loss function~\cite{ren2020fastspeech}, we also define the regularization loss function on the spectrogram domain. Let $X_{n,k}$ and $\hat{X}_{n,k}$ be the STFT magnitudes of the extracted and predicted $F_o$ sequences for the $n$-th frame index and $k$-th frequency bin, respectively. The regularization loss is defined as follows: \begin{equation} L_\mathrm{F_o} = \frac{1}{M} \sum_{n=1}^{N}\sum_{k=\beta}^{K} \left| \log X_{n,k} - \log \hat{X}_{n,k}\right|, \label{f0stftloss} \end{equation} where $N, K,$ and $M$ represent the number of frames, the number of frequency bins, and the number of elements in the magnitude, respectively; $\beta$ denotes a hyperparameter that controls the regularization strength. To regularize only the fine structure component of $F_o$ (i.e., the high-frequency components of the STFT magnitude), which carries little information about speaking style in read speech, we set $\beta=3$ based on our preliminary experiments. Furthermore, we extend the loss function to multiple resolutions, inspired by previous studies on multi-level $F_o$ modeling~\cite{ming2015fundamental} and the multi-resolution STFT loss~\cite{yamamoto2020parallel}. Consequently, we optimize the VC model using the proposed regularization loss along with the adversarial, cycle consistency, and identity mapping loss functions, as described in Scyclone~\cite{kanagaki2020scyclone}. As shown in Figure~\ref{fig:f0compare}, the $F_o$ trajectory produced without the regularization method oscillates unstably. By contrast, with regularization, the stability of the $F_o$ trajectory improves because the VC model can focus on converting the essential aspects of prosody variations. 
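Equation~\eqref{f0stftloss} and its multi-resolution extension can be sketched as follows in NumPy; the framing scheme, Hanning window, and the small `eps` floor are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def f0_stft_loss(f0_true, f0_pred, fft_size=64, hop=16, beta=3, eps=1e-7):
    """Mean absolute log-STFT-magnitude error over bins k >= beta (Eq. 3)."""
    def stft_mag(x):
        n_frames = 1 + (len(x) - fft_size) // hop
        frames = np.stack([x[i * hop:i * hop + fft_size]
                           for i in range(n_frames)])
        return np.abs(np.fft.rfft(frames * np.hanning(fft_size), axis=-1))
    X, X_hat = stft_mag(f0_true), stft_mag(f0_pred)
    # Only bins k >= beta are penalized, leaving the slow (style-bearing)
    # F_o movements unconstrained.
    return np.mean(np.abs(np.log(X[:, beta:] + eps)
                          - np.log(X_hat[:, beta:] + eps)))

def multires_f0_loss(f0_true, f0_pred,
                     fft_sizes=(32, 64, 128), hops=(8, 16, 32)):
    """Average the regularization term over several STFT resolutions."""
    return np.mean([f0_stft_loss(f0_true, f0_pred, n, h)
                    for n, h in zip(fft_sizes, hops)])
```

The default FFT and hop sizes mirror the (32, 64, 128)/(8, 16, 32) settings reported later in the experimental setup.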
\subsubsection{VC-based data augmentation} Following the design described above, we train the Scyclone model using the source and target speakers' speech databases. Note that the training database consists of the neutral recordings and PS-augmented data of each speaker. As illustrated in Figure \ref{fig:overview}b, we use the resulting VC model to convert all of the source speaker's emotional recordings into the target speaker's voice. Simultaneously, to stabilize the training process of the TTS model, we also convert the source speaker's neutral voice to the target speaker's voice. We use all the converted data, together with the target speaker's neutral recordings, to train the target speaker's emotional TTS system. \subsection{Text-to-speech} \label{ssec:tts} Our TTS model consists of two components: (1) an acoustic model that converts an input phoneme sequence into acoustic features and (2) a vocoder that converts the acoustic features into a waveform. For the acoustic model, we use FastSpeech~2~\cite{ren2020fastspeech} with a Conformer encoder~\cite{gulati2020conformer} because of its fast but high-quality TTS capability~\cite{guo2021recent}. To adapt FastSpeech~2 for emotional TTS, we condition the model using an external emotion code~\cite{choi2019multi}. For the vocoder, we use the high-fidelity harmonic-plus-noise Parallel WaveGAN (HN-PWG)~\cite{hwang2021high}. Figure \ref{fig:overview}~(c) shows the training process of TTS with the proposed data augmentation. We mix synthetic and recorded data for the target speaker and use them to train the acoustic model. At the inference stage, the TTS model generates emotional speech from input text and an emotion code. 
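The emotion-code conditioning can be illustrated with a toy sketch. The text does not spell out the injection point inside FastSpeech~2, so the shapes, names, and the add-to-every-frame scheme below are assumptions for illustration only.

```python
import numpy as np

def add_emotion_code(encoder_out, emotion_id, emotion_emb, proj):
    """Condition encoder outputs on a discrete emotion code by adding a
    projected emotion embedding to every frame (hypothetical shapes)."""
    code = emotion_emb[emotion_id] @ proj  # (d_model,) projected emotion code
    return encoder_out + code              # broadcast over the time axis
```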
Note that we do not use data augmentation for training the vocoder because (1) it has been reported that using a large amount of training data is not crucial for the vocoder~\cite{okamoto2020realtime}, and (2) our preliminary experiments confirmed only subtle improvements when the amount of the source speaker's data was sufficiently large. \section{Experiments} \subsection{Experimental setup} \subsubsection{Database and feature extraction settings} For the experiments, we used two phonetically and prosodically rich speech corpora recorded by two female Japanese professional speakers, which served as the data for the source and target speakers. We sampled speech signals at 24~kHz with 16~bit quantization. The source speaker data contained three speaking styles: \textit{neutral}, \textit{happiness}, and \textit{sadness}, whereas the target speaker data contained only the \textit{neutral} style. We concatenated the $80$-dimensional log-Mel spectrogram, continuous log $F_o$, and V/UV with 5~ms analysis intervals as $82$-dimensional features. We used them as the target acoustic features for both the VC and acoustic models. We calculated the log-Mel spectrogram with a 40 ms window length. We extracted $F_o$ and V/UV using the improved time-frequency trajectory excitation vocoder~\cite{song2017effective}. We obtained $F_o$ for the acoustic features generated by PS data augmentation by shifting the $F_o$ extracted from the original speech. We used the V/UV extracted from the original speech as the V/UV for the generated data. We normalized the acoustic features so that they had a zero mean and unit variance using the training data statistics. \begin{table*}[t] \caption{ Naturalness, speaker similarity, and emotional similarity MOS test results with 95\% confidence intervals. Results for the highest score in the VC-based TTS systems are shown in bold. 
} \label{tbl:mos} \begin{center} \vspace{-15pt} \scalebox{0.90}{ \begin{tabular}{l|ccc|ccc|cc}\hline \multicolumn{1}{c|}{\multirow{2}{*}{Model}} & \multicolumn{3}{c|}{Naturalness} & \multicolumn{3}{c|}{Speaker similarity} & \multicolumn{2}{c}{Emotional similarity} \\\cline{2-9} & {Neutral} & {Happiness} & {Sadness} & {Neutral} & {Happiness} & {Sadness} & {Happiness} & {Sadness} \\\hline {Source} & 4.88 ± 0.05 & 4.84 ± 0.05 & 4.69 ± 0.07 & - & - & - & - & - \\%\hline {SRC-TTS} & 4.56 ± 0.06 & 4.46 ± 0.08 & 4.40 ± 0.08 & 1.15 ± 0.05 & 1.28 ± 0.08 & 1.29 ± 0.08 & 3.48 ± 0.08 & 3.71 ± 0.07 \\ {TGT-NEU-TTS} & - & - & - & - & - & - & 1.69 ± 0.09 & 1.29 ± 0.07 \\ {MS-TTS} & 3.91 ± 0.09 & 2.85 ± 0.10 & 2.74 ± 0.11 & 2.85 ± 0.13 & 2.76 ± 0.11 & 2.51 ± 0.12 & 2.48 ± 0.10 & 2.72 ± 0.11 \\%\hline {VC-TTS} & 4.06 ± 0.08 & 3.88 ± 0.10 & \textbf{4.00 ± 0.10} & 2.98 ± 0.12 & 2.89 ± 0.11 & 3.26 ± 0.11 & 3.33 ± 0.08 & 3.63 ± 0.07 \\ {VC-TTS-PS} & 4.03 ± 0.08 & \textbf{4.20 ± 0.09} & \textbf{4.00 ± 0.09} & 2.98 ± 0.12 & 2.87 ± 0.12 & \textbf{3.35 ± 0.10} & 3.82 ± 0.05 & \textbf{3.67 ± 0.07} \\ {VC-TTS-PS-1K} & \textbf{4.20 ± 0.08} & 4.08 ± 0.09 & 3.96 ± 0.11 & \textbf{3.19 ± 0.11} & \textbf{2.90 ± 0.12} & 3.31 ± 0.09 & \textbf{3.87 ± 0.04} & 3.63 ± 0.07 \\\hline \end{tabular}} \vspace{-20pt} \end{center} \end{table*} \subsubsection{Model details} For the CycleGAN-based VC model, the generator and discriminator were composed of four and three residual blocks, respectively, and each block consisted of two convolutional layers with leaky ReLU activation. We set the kernel size to three for all the convolutional layers. We trained the VC model for 400 K steps using an Adam optimizer~\cite{kingma2014adam}. We set the learning rate to 0.0002, and reduced this by a factor of ten every 100 K steps. We set the minibatch size to 64. 
We set the weight of the proposed regularization loss described in Section \ref{ssec:vc} to 0.1, and used the FFT sizes (32, 64, 128), window sizes (32, 64, 128), and hop sizes (8, 16, 32) for the multi-resolution STFT loss. We used the identity mapping loss only for the first 10 K steps~\cite{kaneko2019CycleGAN-VC2}. For the TTS acoustic model, we used four Conformer and Transformer blocks for the encoder and decoder, respectively. For each block, we set the hidden sizes of the self-attention and feedforward layers to 384 and 1024, respectively. To achieve natural prosody for Japanese, we used accent information as an external input to the model~\cite{yasuda2019investigation}. For emotional TTS, we added an emotion embedding, followed by a projection layer, together with the 256-dimensional phoneme and accent embeddings. To improve duration stability, we used manually annotated phoneme durations. At the training stage, we used a dynamic batch size with an average of 23 samples per minibatch~\cite{hayashi2020espnet}, and trained the models for 200 K steps using the RAdam optimizer~\cite{liu2019radam}. Table~\ref{tbl:mos_model} summarizes the systems used in our experiments. We trained the following TTS systems: \begin{description} \item[SRC-TTS:] Baseline TTS model trained with the source speaker's recordings. \item[TGT-NEU-TTS:] Baseline TTS model trained with the target speaker's recordings (\textit{neutral} style alone). \item[MS-TTS:] Baseline multi-speaker TTS model trained with the source and target speakers' recordings. \item[VC-TTS:] Baseline TTS model trained with the target speaker's recordings and VC-augmented data. \item[VC-TTS-PS:] Proposed TTS model trained with the target speaker's recordings and PS-VC-augmented data. \item[VC-TTS-PS-1K:] Proposed TTS model configured identically to the \textbf{VC-TTS-PS} system, but trained with a limited amount of recordings. 
\end{description} As the vocoder, we trained HN-PWG~\cite{hwang2021high} for 400 K steps with the RAdam optimizer~\cite{liu2019radam}. For training, we used 5,000 utterances of the \textit{neutral} style, 2,500 utterances of the \textit{happiness} style, and 2,500 utterances of the \textit{sadness} style from the source speaker, and 1,000 utterances of the \textit{neutral} style from the target speaker. We used the same vocoder for all the aforementioned TTS systems. \subsection{Evaluation} To evaluate the effectiveness of our proposed method, we conducted subjective listening tests: a 5-point naturalness mean opinion score (MOS), a 4-point speaker similarity MOS, and a 4-point emotional similarity MOS\footnote{ Following the method used in the VC challenge~\cite{zhao2020voice}, we used the 5-point responses 1 = Bad; 2 = Poor; 3 = Fair; 4 = Good; and 5 = Excellent; and the 4-point responses 1 = Different, absolutely sure; 2 = Different, not sure; 3 = Same, not sure; and 4 = Same, absolutely sure. }. We asked native Japanese raters to make quality judgments. The numbers of subjects for the three evaluations were 14, 12, and 12, respectively. For all the tests, we randomly selected 20 utterances from the evaluation set for each system. In the naturalness evaluation, we evaluated the recorded speech of the source speaker and the synthetic speech of five TTS systems, for a total of 360 utterances. For the speaker similarity evaluation, we presented the recorded speech of the target speaker as a reference, and evaluated a total of 300 utterances for the five TTS systems. Note that the reference contained samples of \textit{neutral}, \textit{happiness}, and \textit{sadness}, which amounted to 25 seconds in total. We based the emotional similarity evaluation on 240 pairs of utterances, where each pair consisted of the recorded emotional speech of the source speaker and the synthetic speech from six TTS systems. 
Note that we used the \textit{neutral} TTS system of the target speaker (i.e., \textbf{TGT-NEU-TTS}) as an anchor system only in the emotional similarity evaluation. \subsection{Results} The results of the MOS evaluations are shown in Table~\ref{tbl:mos}. The findings can be summarized as follows: (1) VC data augmentation was effective for improving naturalness and speaker/emotional similarities over the multi-speaker TTS baseline (\textbf{VC-TTS} vs. \textbf{MS-TTS}), particularly for the emotional styles; (2) the proposed PS data augmentation further improved performance; in particular, naturalness and emotional similarity significantly improved for \textit{happiness} (\textbf{VC-TTS} vs. \textbf{VC-TTS-PS}), achieving emotion reproducibility of the source speaker nearly the same as, or even better than, that of \textbf{SRC-TTS}; and (3) our proposed method achieved competitive performance even with a limited amount of training data (\textbf{VC-TTS-PS} vs. \textbf{VC-TTS-PS-1K}). We observed that \textbf{VC-TTS-PS-1K} achieved better naturalness and speaker similarity than \textbf{VC-TTS-PS} for the \textit{neutral} style. For naturalness, this could be explained by the source speaker's database having a more natural speaking style than the target speaker's; the style of the source speaker was transferred to the target speaker's TTS when the relative amount of VC-augmented data was high (i.e., \textbf{VC-TTS-PS-1K}). For speaker similarity, we hypothesized that it was caused by the difference in the $F_o$ statistics of the training data. To verify this, we examined the $F_o$ statistics of the pseudo \textit{neutral} data used for training \textbf{VC-TTS-PS} and \textbf{VC-TTS-PS-1K}, and found that the latter contained $F_o$ values that were 4.04~Hz higher on average. Because the $F_o$ of the target speaker was higher than that of the source speaker in our experiments, the higher-pitched samples of \textbf{VC-TTS-PS-1K} tended to have higher speaker similarity for the \textit{neutral} style. 
We encourage readers to listen to the samples provided on our demo page\footnote{\url{https://ryojerky.github.io/demo_vc-tts-ps/}}. To further verify the effectiveness of the proposed method, we analyzed the $F_o$ distributions of the original data and the pseudo data generated by the VC model. As illustrated in Figure~\ref{fig:f0-distribution}, the distribution of $F_o$ with PS data augmentation was closer in shape to that of the original data for \textit{happiness}. The results confirmed that the VC model trained on the proposed PS-augmented data generated pitch variations that were richer and closer to those of natural recordings than the VC model trained without PS data augmentation did. By contrast, we observed similar distributions with and without the proposed PS augmentation for \textit{sadness}. This can be explained as follows: \textit{sadness} is less dynamic and has fewer pitch variations than \textit{happiness}. The results suggest that our proposed method was particularly suited for emotionally expressive and dynamic styles, such as \textit{happiness}. \begin{figure}[!t] \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=0.9\linewidth]{f0hist_a.pdf} \centerline{(a)} \medskip \vspace{-5mm} \end{minipage} \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=0.9\linewidth]{f0hist_b.pdf} \centerline{(b)} \medskip \vspace{-5mm} \end{minipage} \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=0.9\linewidth]{f0hist_c.pdf} \centerline{(c)} \medskip \vspace{-2mm} \end{minipage} \vspace{-10mm} \caption{$F_o$ distributions obtained from each emotion: (a) the source speaker's recorded data, (b) the target speaker's VC-augmented data, and (c) the target speaker's VC-augmented data with the proposed PS method.} \label{fig:f0-distribution} \vspace{-4mm} \end{figure} \section{Conclusion} We proposed a cross-speaker emotion style transfer method for a low-resource expressive TTS system, where expressive data is not available for the target speaker. 
Our proposed method combines PS-based and VC-based augmentation methods to stabilize training for both VC and TTS acoustic models. Subjective test results showed that the FastSpeech~2-based emotional TTS system learned by the proposed method improved naturalness and emotional similarity compared with conventional methods. In the future, we aim to apply the proposed method to more distinctive, expressive, and dynamic styles of speech. \section{Acknowledgments} This work was supported by Clova Voice, NAVER Corp., Seongnam, Korea. \section{References} { \setstretch{1.0} \printbibliography } \end{document}
\section*{Introduction} Let $A$ be a semisimple Hopf algebra and $D(A)$ be its Drinfeld double. It is well known that the category $\mtr{Rep}(D(A))$ of representations of $D(A)$ is a modular category \cite{BaKi} and is braided equivalent to the center $\mtc{Z}(\mtr{Rep}(A))$ of the tensor category $\mtr{Rep}(A)$. The Drinfeld doubles $D(A)$ play a very important role in the classification of semisimple Hopf algebras. In this paper we will give a description of the kernels of the representations of $D(A)$. This will allow us to obtain a description of the normal Hopf subalgebras of $D(A)$ in terms of normal Hopf subalgebras of $A$ and $A^*$. Recently the author introduced in \cite{Bker} the notion of the kernel of a representation of a semisimple Hopf algebra. It was proven that if the character of the representation is central in the dual Hopf algebra then the kernel is a normal Hopf subalgebra. It is not known whether the kernel is in general a normal Hopf subalgebra. In this paper we prove that in the case of $D(G)$, the Drinfeld double of a finite group $G$, all representation kernels are normal Hopf subalgebras of $D(G)$. Fusion subcategories of the category $\mtr{Rep}(D(G))$ of a finite group $G$ were recently studied in \cite{NNW}. They are parameterized in terms of a pair of commuting subgroups of $G$ and a $G$-invariant bicharacter defined on their product. The main ingredient used in \cite{NNW} is the notion of the centralizer of a fusion subcategory introduced in \cite{M}. In this paper we will identify from the above-mentioned parametrization all fusion subcategories of $\mtr{Rep}(D(G))$ that are of the form $\mtr{Rep}(D(G)//L)$ where $L$ is a normal Hopf subalgebra of $D(G)$. They correspond to those bicharacters from \cite{NNW} satisfying a stronger condition than that of $G$-invariance. The paper is organized as follows. The first section recalls a few basic results on Hopf algebras and the kernels of their characters that are needed in the paper. 
The second section is concerned with fusion subcategories of $\mtr{Rep}(A)$ for a semisimple Hopf algebra $A$. Theorem \ref{main} gives a necessary and sufficient condition for such a category to be normal (i.e., of the form $\mtr{Rep}(A//L)$ for some normal Hopf subalgebra $L$ of $A$). A description of the commutator subcategory $\mtr{Rep}(A//L)^{co}$ is given in Proposition \ref{cocomm}. The next section studies kernels of representations of Drinfeld doubles $D(A)$. The key idea is to use a lemma from \cite{dg} concerning fusion subcategories of a product of two fusion categories. This lemma can be regarded as a quantum analogue of Goursat's lemma for groups. A description of the normal Hopf subalgebras of $D(A)$ is also given in terms of normal Hopf subalgebras of $A$ and $A^*$. Two general examples are also considered in this section. The last two sections are concerned with the special case of $D(G)$, the Drinfeld double of a finite group. Some basic results from group representations that are needed are recalled in the first of these sections. A basis for the central characters in $D(G)$ is also given here. In the last section the parametrization from \cite{NNW} of fusion subcategories of $\mtr{Rep}(D(G))$ is recalled. This section gives a description of all normal Hopf subalgebras of $D(G)$ and shows that the kernel of any character of $D(G)$ is normal. We work over the algebraically closed field $\mathbb{C}$. For a vector space $V$, we denote by $|V|$ its dimension $\mtr{dim}_{\mathbb{C}}V$. We use Sweedler's notation $\D(x)=\sum x_1\ot x_2$ for comultiplication. All the other Hopf notations are those used in \cite{Montg}. \section{Preliminaries} \subsection{Notations} Let $A$ be a finite dimensional semisimple Hopf algebra over $\mathbb{C}$. Then $A$ is also cosemisimple \cite{Lard}. The character ring $C(A)$ of $A$ is a semisimple subalgebra of $A^*$ \cite{Z} and it has a vector space basis given by the set $\mtr{Irr}(A)$ of irreducible characters of $A$. 
Moreover, $C(A)=\mtr{Cocom}(A^*)$, the space of cocommutative elements of $A^*$. By duality, the character ring of $A^*$ is a semisimple subalgebra of $A$ and $C(A^*)=\mtr{Cocom}(A)$. If $M$ is an $A$-module with character $\chi$ then $M^*$ is also an $A$-module with character $\chi^*=\chi \circ S$. This induces an involution $``\;^*\;":C(A)\ra C(A)$ on $C(A)$. Let $m_{ _A}(\ch,\;\mu)$ be the multiplicity form on $C(A)$. For $d \in \mtr{Irr}(A^*)$ denote by $C_d$ the simple subcoalgebra of $A$ whose character as an $A^*$-module equals $d$ \cite{Lar}. Denote by $t_{ _A}$ the integral in $A^*$ with $t_{ _A}(1)=|A|$. It is known that $t_{ _A}$ is also the regular character of $A$ \cite{Montg}. \subsection{Kernels of characters for semisimple Hopf algebras} Let $M$ be a representation of $A$ which affords the character $\chi$. Define $\mtr{ker}_{ _A}(\ch)$ as the set of all irreducible characters $d \in \mtr{Irr}(A^*)$ which act as the scalar $\eps(d)$ on $M$. Then Proposition 1.2 of \cite{Bker} implies that $$\mtr{ker}_{ _A} (\chi)=\{d \in \mtr{Irr}(A^*) |\; \chi(d)=\eps(d)\chi(1)\}.$$ Similarly let $\mtr{z}_{ _A}(\ch)$ be the set of all irreducible characters $d \in \mtr{Irr}(A^*)$ which act as a scalar $\alpha\eps(d)$ on $M$, where $\alpha$ is a root of unity. Then from the same proposition it follows that $$\mtr{z}_{ _A}(\ch)=\{d \in \mtr{Irr}(A^*) |\;\;|\chi(d)|=\eps(d)\chi(1)\}.$$ Clearly $ \mtr{ker}_{ _A}(\chi) \subset \mtr{z}_{ _A}(\chi)$. Since the sets $\mtr{ker}_{ _A}(\chi)$ and $\mtr{z}_{ _A}(\chi)$ are closed under multiplication and $``\;^*\;"$ they generate Hopf subalgebras of $A$ denoted by $A_{_{\chi}}$ and $\mtr{Z}_{ _A}(\chi)$, respectively (see \cite{Bker}). \bn{rem}\label{restric}\end{rem} Suppose that $K$ is a Hopf subalgebra of a semisimple Hopf algebra $A$ via $i :K \hookrightarrow A$. The restriction functor from $A$-modules to $K$-modules induces a map $\mtr{res}:C(A)\ra C(K)$. 
It is easy to see that $\mtr{res}=i^*|_{C(A)}$, the restriction of the dual map $i^*:A^* \ra K^*$ to the subalgebra of characters $C(A)\subset A^*$. \section{Fusion subcategories of $\mtr{Rep}(A)$} \subsection{Fusion categories and their universal gradings} In this subsection we recall a few facts on fusion categories from \cite{GN} and \cite{dg}. Let $\mtc{C}$ be a fusion category. Let $\mtc{O}(\mtc{C})$ be its set of simple objects considered up to isomorphism. Recall that $\mtc{C}_{ _{ad}}$ is defined as the smallest fusion subcategory of $\mtc{C}$ containing $X\ot X^*$ for each simple object $X$ of $\mtc{C}$. A grading of a fusion category $\mtc{C}$ by a group $G$ is a map $\mtr{deg} : \mtc{O}(\mtc{C}) \ra G$ with the following property: for any simple objects $X, Y, Z \in \mtc{C}$ such that $X \otimes Y$ contains $Z$ one has $\mtr{deg}(Z) = \mtr{deg}(X) \mtr{deg}(Y)$. This corresponds to a decomposition $\mtc{C} = \oplus_{ g\in G} \mtc{C}_g$, where $\mtc{C}_g\subset \mtc{C}$ is the full additive subcategory generated by simple objects of degree $g$. The subcategory $\mtc{C}_1$ corresponding to $g = 1$ is a fusion subcategory; it is called the trivial component of the grading. A grading is said to be trivial if $\mtc{C}_1 = \mtc{C}$. It is said to be faithful if the map $\mtr{deg} : \mtc{O}(\mtc{C}) \ra G$ is surjective. For any fusion category $\mtc{C}$, as explained in Sect. 3.2 of \cite{GN}, there is a notion of universal grading whose group is called the universal grading group and is denoted by $U_{ _{\mtc{C}}}$. Its trivial component is the fusion subcategory $\mtc{C}_{ _{ad}}$. The following lemma appears in \cite{dg}. \bn{lemma} \label{subcad} Let $\mtc{C}$ be a fusion category and $$ \mtc{C}=\oplus_{g\in U_{ _{\mtc{C}}}} \mtc{C}_g $$ be its universal grading. 
There is a one-to-one correspondence between fusion subcategories $\mtc{D} \subset \mtc{C}$ containing $\mtc{C}_{ _{ad}}$ and subgroups $G \subset U_{\mtc{C}}$, namely $$ \mtc{D} \mapsto G_{ _{\mtc{D}}} := \{g \in U_{\mtc{C}}| \mtc{D} \cap \mtc{C}_g \neq 0 \} $$ and $$ G \mapsto \mtc{D}_G :=\oplus _{g\in G}\; \mtc{C}_g.$$ \end{lemma} Let $A$ be a semisimple Hopf algebra. It is known that $\mtr{Rep}(A)$ is a fusion category. Moreover there is a maximal central Hopf subalgebra $K(A)$ of $A$ such that $\mtr{Rep}(A)_{ _{ad}}=\mtr{Rep}(A//K(A))$, see \cite{GN}. Since $K(A)$ is commutative it follows that $K(A)=k[U_{ _A}]^*$ where $U_{ _A}$ is the universal grading group of $\mtr{Rep}(A)$. For example if $A=kG$ then $K(A)=k\mtc{Z}(G)$ and $U_{ _A}=\hat{\mtc{Z}(G)}$, the linear dual group of the center $\mtc{Z}(G)$ of $G$. Let $\mtc{D}$ be a fusion subcategory of $\mtr{Rep}(A)$ and $\mtc{O}(\mtc{D})$ be its set of objects. Then $I_{ _{\mtc{D}}}:=\cap_{V \in \mtc{O}(\mtc{D})}\mtr{Ann}_A(V)$ is a Hopf ideal in $A$ \cite{PQ} and $\mtc{D}=\mtr{Rep}(A/I_{ _{\mtc{D}}})$. For a fusion subcategory $\mtc{D} \subset \mtr{Rep}(A)$ define its regular character as $r_{ _{\mtc{D}}}:=\sum_{X \in \mtr{Irr}(\mtc{D})}\dim_{\C}(X)\ch_X$ where $\mtr{Irr}(\mtc{D})$ is the set of irreducible objects of $\mtc{D}$ and $\ch_X$ is the character of $X$ as an $A$-module. Thus $r_{ _{\mtc{D}}}\in C(A)$. \bn{rem} \label{tb} Let $B$ be a normal Hopf subalgebra of a semisimple Hopf algebra $A$ and $\ch$ a character of $A$ affording the representation $M_{ _{\ch}}$. Then $M_{ _{\ch}}$ is a representation of $A//B$ if and only if $A_{ _{\ch}} \supset B$. \end{rem} \begin{thm}\label{main} Let $A$ be a finite dimensional semisimple Hopf algebra and $\mtc{D}$ be a fusion subcategory of $\mtr{Rep}(A)$. Then $\mtc{D}=\mtr{Rep}(A//L)$ for some normal Hopf subalgebra $L$ of $A$ if and only if the regular character $r_{ _{\mtc{D}}}$ of $\mtc{D}$ is central in $A^*$. In this case $L=A_{ _{r_{ _{\mtc{D}}}}}$. 
\end{thm} \bn{proof} If $\mtc{D}=\mtr{Rep}(A//L)$ then by Theorem 2.4 of \cite{Bker} it follows that $r_{ _{\mtc{D}}}$ is an integral of $(A//L)^*$. Since this is a normal Hopf subalgebra of $A^*$, it follows from Lemma 1 of \cite{Masnr} that $r_{ _{\mtc{D}}}$ is central in $A^*$. Conversely, if $r_{ _{\mtc{D}}}$ is central in $A^*$ then by Proposition 3.3 of \cite{Bker} it follows that $A_{ _{r_{ _{\mtc{D}}}}}$ is a normal Hopf subalgebra of $A$. Since $ r^2_{ _{\mtc{D}}}=|\mtc{D}|r_{ _{\mtc{D}}}$, the same proposition implies that $\mtc{D}=\mtr{Rep}(A//A_{ _{r_{ _{\mtc{D}}}}})$. \end{proof} \subsection{The commutator subcategory} Recall the notion of the commutator subcategory from \cite{GN}. If $\mtc{D}$ is a fusion subcategory of $\mtc{C}$ then $\mtc{D}^{co}$ is the full abelian subcategory of $\mtc{C}$ generated by those objects $X$ such that $X\otimes X^* \in \mtc{D}$. In this subsection we will describe the category $\mtr{Rep}(A//L)^{co}$ for $L$ a normal Hopf subalgebra of $A$. For $a, b \in A$ define $[a, b]=ab-ba$, the usual commutator. Then for $L$ a Hopf subalgebra of $A$ let $[A,\;L]$ be the ideal generated by $[a,l]$ with $a \in A$ and $l \in L$. \bn{prop}\label{cocomm} Let $L$ be a normal Hopf subalgebra of $A$. Then $[A,L]$ is a Hopf ideal of $A$ and \begin{equation}\label{co} \mtr{Rep}(A//L)^{co}=\mtr{Rep}(A/[A,L]) \end{equation} \end{prop} \bn{proof} To see that $[A, L]$ is a Hopf ideal note that \begin{eqnarray*} \D([a,\;l])&=& \sum a_1l_1\ot a_2l_2- \sum l_1a_1\ot l_2a_2 \\ &=& \sum (a_1l_1-l_1a_1)\ot a_2l_2 + \sum l_1a_1 \ot (a_2l_2-l_2a_2) \end{eqnarray*} Now consider an irreducible $A/[A,L]$-module $M$ affording the character $\ch \in C(A)$. Since $(la).m=(al).m$ for all $a \in A$, $l \in L$ and $m \in M$ it follows that left multiplication by $l$ on $M$ is a morphism of $A$-modules. Schur's Lemma implies that each $l \in L$ acts by a scalar on $M$. Thus $\ch\dw_{ _L}^{ ^A}=\ch(1)\psi$ for some linear character $\psi$ of $L$. 
Then $(\ch\ch^*)\dw_{ _L}^{ ^A}=\ch(1)^2\psi\psi^{-1}=\ch(1)^2\eps_{ _L}$. Conversely suppose that $M\ot M^*$ is a trivial $L$-module for an irreducible $A$-module $M$. Let $A=I_M\oplus \mtr{Ann}_{ _A}(M)$ be the decomposition of $A$ into two-sided ideals where $I_{ _M}$ is the minimal ideal in $A$ corresponding to $M$. It is well known (see \cite{somm} for example) that the minimal ideal $I_{ _M}$ in $A$ corresponding to $M$ satisfies $I_M \cong M\ot M^*$ where $I_{ _M}$ is regarded as an $A$-module by the adjoint action. Therefore $l_1xS(l_2)=\eps(l)x$ for all $x \in I_{ _M}$ and $l \in L$. Then $lx=(l_1xS(l_2))l_3=xl$ for all $x \in I_{ _M}$ and $l \in L$. Then $(la-al)m=0$ for all $a \in A$. Indeed if $a \in \mtr{Ann}_{ _A}(M)$ this is clear since $(la).m=(al).m=0$. On the other hand if $a \in I_{ _M}$ then $al-la=0$ and thus $(al-la).m=0$. This shows that $M \in \mtr{Rep}(A/[A,\;L])$. \end{proof} \bn{defn} Let $L$ be a normal Hopf subalgebra of $A$. An irreducible character $\al$ of $L$ is called $A$-stable if there is a character $\ch$ of $A$ such that $\ch\dw^{ ^A}_{ _L}=\frac{\ch(1)}{\al(1)}\al$. Such a character $\ch$ is said to sit over $\al$. \end{defn} Denote by $G_{ _{st}}(L)$ the set of all $A$-stable linear characters of $L$. \bn{lemma}\label{power} Let $M$ be an irreducible module of $A$ affording a character $\ch$. Then $M \in \mtc{O}(\mtr{Rep}(A//L)^{co})$ if and only if $L$ acts trivially on some tensor power $M^{\ot\;n}$ of $M$. Under these conditions $\ch\dw^{ ^A}_{ _L}=\ch(1)\psi$ for some $A$-stable linear character $\psi$ of $L$. \end{lemma} \bn{proof} Suppose that $L$ acts trivially on some tensor power $M^{\ot\;n}$ of $M$. This means that $\ch^n\dw^{ ^A}_{ _L}=\ch(1)^n\eps_{ _L}$. Let $$ \ch\dw^{ ^A}_{ _L}=\sum_{\al \in \mtr{Irr}(L)}m_{\al}\al $$ for some nonnegative integers $m_{\al}$ and $$ \ch^{n-1}\dw^{ ^A}_{ _L}=\sum_{\al \in \mtr{Irr}(L)}n_{\al}\al $$ for some nonnegative integers $n_{\al}$. 
Recall that $m_{ _L}(\eps_{ _L},\;\al\beta)>0$ if and only if $\al=\beta^*$. Since $\ch^n\dw_{ _L}^{ ^A}=\ch(1)^n\eps_{ _L}$ this implies that $\ch\dw^{ ^A}_{ _L}=\frac{\ch(1)}{\al(1)}\al$ and $\ch^{n-1}\dw^{ ^A}_{ _L}=\frac{\ch(1)^{n-1}}{\al(1)}\al^*$ for a fixed character $\al$. It follows by counting the multiplicity of $\eps_{ _L}$ in $\ch^n\dw^{ ^A}_{ _L}$ that $\al(1)=1$. Thus $\al$ is an $A$-stable linear character of $L$. Conversely suppose that $M \in \mtc{O}(\mtr{Rep}(A//L)^{co})$ and let \\ $ \ch\dw^{ ^A}_{ _L}=\sum_{\al \in \mtr{Irr}(L)}m_{\al}\al$. Since $m_{ _L}(\eps_{ _L},\; \al\beta^*)=\delta_{\al,\;\beta}$, counting the multiplicity of $\eps_{ _L}$ in $\ch\dw^{ ^A}_{ _L}\ch^*\dw^{ ^A}_{ _L}$ implies that $\ch\dw^{ ^A}_{ _L}=\ch(1)\al$ for a linear character $\al$ of $L$. If $n$ is the order of $\al$ in $G(L^*)$ then clearly $\ch^n\dw_{ _L}^{ ^A}=\ch(1)^n\eps_{ _L}$, which shows that $L$ acts trivially on $M^{\ot\;n}$. \end{proof} By duality it follows that the subcategory $\mtr{Rep}(L^*)^{co}$ of $\mtr{Rep}(A^*)$ has a description similar to that in Proposition \ref{cocomm}. \section{Kernels of characters of representations of Drinfeld doubles} The Drinfeld double $D(A)$ of $A$ is defined as follows: $D(A) \cong A^{*cop} \otimes A$ as coalgebras and the multiplication is given by $$(g \bowtie h)(f \bowtie l)=\sum g(h_1\rightharpoonup f \leftharpoonup S^{-1}h_3)\bowtie h_2l.$$ Its antipode is given by $S(f \bowtie h)=S^{-1}(h)S(f)$. It is known that $\mtr{Rep}(D(A)^*)=\mtr{Rep}(A^*)^{\mtr{rev}}\boxtimes \mtr{Rep}(A)$ since $D(A)$ is a cocycle twist of $A^{*\mtr{cop}}\otimes A$. It is also known that $D(A)$ is a semisimple Hopf algebra if and only if $A$ is a semisimple Hopf algebra \cite{Montg}. \subsection{Fusion subcategories of direct products of categories} Let $\mtc{C}^1$ and $\mtc{C}^2$ be two fusion categories. 
Identify them with the corresponding fusion subcategories of $\mtc{C}^1\boxtimes\mtc{C}^2$ given by $\mtc{C}^1\boxtimes 1$ and $1 \boxtimes \mtc{C}^2$ respectively. Then every simple object of $\mtc{C}^1\boxtimes\mtc{C}^2$ is of the form $X_1\boxtimes X_2$ where $X_i$ is a simple object of $\mtc{C}^i$. Let $\mtc{D} \subset \mtc{C}^1\boxtimes\mtc{C}^2$ be a fusion subcategory. Define $\mtc{L}^i(\mtc{D}):=\mtc{D}\cap \mtc{C}^i$, $i=1,2$. Let also $\mtc{K}^1(\mtc{D})$ be the fusion subcategory generated by all simple objects $X_1$ of $\mtc{C}^1$ such that $X_1 \boxtimes X_2 \in \mtc{D}$ for some simple object $X_2$ of $\mtc{C}^2$. Similarly define the fusion subcategory $\mtc{K}^2(\mtc{D})$. Clearly $\mtc{L}^1(\mtc{D})\boxtimes\mtc{L}^2(\mtc{D})\subset \mtc{D} \subset\mtc{K}^1(\mtc{D})\boxtimes\mtc{K}^2(\mtc{D})$. The following Lemma is taken from \cite{dg}. \bn{lemma}\label{dg} Let $\mtc{D} \subset \mtc{C}^1\boxtimes\mtc{C}^2$ be a fusion subcategory. Then there is a group $\mtc{X}$ and faithful $\mtc{X}$-gradings $\mtc{K}^i(\mtc{D})=\oplus_{x \in \mtc{X}}\mtc{K}^i(\mtc{D})_x$ with trivial components $\mtc{L}^i(\mtc{D})$, $i=1, 2$, such that \begin{equation}\label{81} \mtc{D}=\oplus_{x \in \mtc{X}}\mtc{K}^1(\mtc{D})_x\boxtimes\mtc{K}^2(\mtc{D})_x. \end{equation} \end{lemma} \bn{proof} First, let us show that \begin{equation}\label{82} \mtc{D} \supset \mtc{K}^1(\mtc{D})_{ _{ad}} \boxtimes \mtc{K}^2(\mtc{D})_{ _{ad}} = (\mtc{K}^1(\mtc{D}) \boxtimes \mtc{K}^2(\mtc{D}))_{ _{ad}} , \end{equation} where $\mtc{K}^i(\mtc{D})_{ _{ad}}$ denotes the adjoint subcategory of $\mtc{K}^i(\mtc{D})$. To prove this, note that if $X_1 \boxtimes X_2$ is an object of $\mtc{C}^1 \boxtimes \mtc{C}^2$ that is also in $\mtc{O}(\mtc{D})$ then $(X_1 \otimes X^*_1 ) \boxtimes (X_2 \otimes X^*_2) \in \mtc{D}$. Since $X_i\otimes X^*_i$ contains the unit object it follows that $(X_1 \otimes X_1^*)\boxtimes 1 \in \mtc{D}$ and $1\boxtimes (X_2\otimes X^*_2) \in \mtc{D}$.
Therefore \begin{equation}\label{83} \mtc{K}^i(\mtc{D})_{ _{ad}} \subset \mtc{L}^i(\mtc{D})\;\;\text{for}\;i=1,2. \end{equation} But $\mtc{L}^1(\mtc{D}) \boxtimes \mtc{L}^2(\mtc{D}) \subset \mtc{D}$, so Equation \ref{83} implies Equation \ref{82}. Now let $U_{ _{\mtc{K}^i(\mtc{D})}}$ be the universal grading group of $\mtc{K}^i(\mtc{D})$. By Equation \ref{82} and Lemma \ref{subcad}, $$ \mtc{D} =\oplus_{\gamma \in \Gamma}(\mtc{K}^1(\mtc{D}) \boxtimes \mtc{K}^2(\mtc{D}))_{\gamma} $$ for some subgroup $\Gamma \subset U_{ _{\mtc{K}^1(\mtc{D})\boxtimes \mtc{K}^2(\mtc{D})}}= U_{ _{\mtc{K}^1(\mtc{D})}} \times U_{ _{\mtc{K}^2(\mtc{D})}}$. By the definition of $\mtc{K}^i(\mtc{D})$, the subgroup $\Gamma \subset U_{ _{\mtc{K}^1(\mtc{D})}} \times U_{ _{\mtc{K}^2(\mtc{D})}}$ has the following property: the projection maps $\Gamma \ra U_{ _{\mtc{K}^1(\mtc{D})}}$ and $\Gamma \ra U_{ _{\mtc{K}^2(\mtc{D})}}$ are surjective. Goursat's lemma for groups (see \cite{Gours}) implies that $\Gamma$ equals the fiber product $U_{ _{\mtc{K}^1(\mtc{D})}}\times_{ _{\mtc{X}}} U_{ _{\mtc{K}^2(\mtc{D})}}$ for some group $\mtc{X}$ equipped with epimorphisms $U_{ _{\mtc{K}^i(\mtc{D})}} \ra \mtc{X}$, $i = 1, 2$. These epimorphisms define faithful $\mtc{X}$-gradings of $\mtc{K}^i(\mtc{D})$ such that Equation \ref{81} holds. Equation \ref{81} implies that $\mtc{L}^i(\mtc{D}) := \mtc{D} \cap \mtc{C}^i$ equals the trivial component of the $\mtc{X}$-grading of $\mtc{K}^i(\mtc{D})$. \end{proof} The following remark is straightforward. \bn{rem}\label{costs} Let $\mtc{C}$ be a fusion category and $\mtc{C}=\oplus_{g \in G}\;\mtc{C}_g$ be a faithful grading of $\mtc{C}$. For any object $X_g \in \mtc{C}_g$ one has $X^{\ot n}_g \in \mtc{C}_1$ where $n\geq 1$ is the order of $g \in G$. Indeed $\mtc{C}_{g}^{\ot \;n}\subset \mtc{C}_{g^n}= \mtc{C}_{1}$.
\end{rem} \bn{rem}\label{ind-restr} \end{rem} If $\al$ is an irreducible $A$-stable character of $L$ then the formulae from \cite{Bcoset} show that in this situation $$ \al\uw^{ ^A}_{ _L}=\frac{\al\uw^{ ^A}_{ _L}(1)}{\sum_{\ch \in \mtc{A}_{ _{\al}}}\ch(1)^2}\sum_{\ch \in \mtc{A}_{ _{\al}}}\ch(1)\ch $$ where $\mtc{A}_{ _{\al}}$ is the set of all irreducible characters $\ch$ of $A$ that lie over $\al$. Also if $t_{ _{A//L}} \in (A//L)^*$ is the integral on $A//L$ with $t_{ _{A//L}}(1)=\frac{|A|}{|L|}$ it follows from Theorem 4.3 of \cite{Bcoset} that \begin{equation}\label{int-ind} \frac{\al\uw^{ ^A}_{ _L}}{\al(1)}=\frac{\ch t_{ _{A//L}}}{\ch(1)} \end{equation} for any $\ch \in \mtc{A}_{ _{\al}}$. Let $\mtc{T} \subset \mtr{Irr}(A)$ be a set of irreducible characters of $A$ and $\ch$ another character of $A$. Denote by $\mtc{T}\ch$ the set of all irreducible constituents of $\mu\ch$ where $\mu \in \mtc{T}$. Also let $\mtr{Irr}(\ch)$ be the set of all irreducible constituents of $\ch$. \bn{lemma}\label{dual-costs} Suppose $A$ is a semisimple Hopf algebra and $L$ is a normal Hopf subalgebra. Suppose that there is a finite group $G$ and a faithful grading $\mtr{Rep}(A)=\oplus_{g \in G}\mtr{Rep}(A)_g$ with trivial component $\mtr{Rep}(A)_1=\mtr{Rep}(A//L)$. Then for any $\ch_g \in \mtr{Rep}(A)_g$ it follows that $\ch_g\dw^{ ^A}_{ _L}=\ch_g(1)\psi_g$ where $\psi_g$ is an $A$-stable linear character of $L$. Moreover $$ \mtr{Rep}(A)_g=\mtr{Rep}(A//L)\ch_g=\mtr{Irr}(\psi_g\uw_{ _L}^{ ^A}) $$ \end{lemma} \bn{proof} By Remark \ref{costs} it follows that $\ch_g^n \in \mtr{Rep}(A)_1=\mtr{Rep}(A//L)$. Then Lemma \ref{power} implies that $\ch_g\dw^{ ^A}_{ _L}=\ch_g(1)\psi_g$ where $\psi_g$ is an $A$-stable linear character of $L$. The last equalities follow from Remark \ref{ind-restr}.
\end{proof} \subsection{Hopf subalgebras of Drinfeld doubles $D(A)$}\label{hsalg} If $H$ is a Hopf subalgebra of $D(A)$ then $$ \mtr{Rep}(H^*)\subset \mtr{Rep}(D(A)^*)=\mtr{Rep}(A)\boxtimes \mtr{Rep}(A^*)^{\mtr{rev}} $$ Then Lemma \ref{dg} implies that any Hopf subalgebra $H$ of $D(A)$ is completely determined by four fusion subcategories satisfying the following properties: \begin{equation}\label{first} (\mtc{K}^1)_{ _{ad}}\subset \mtc{L}^1\subset \mtc{K}^1 \subset \mtr{Rep}(A) \end{equation} \begin{equation}\label{second} (\mtc{K}^2)_{ _{ad}}\subset \mtc{L}^2\subset \mtc{K}^2 \subset \mtr{Rep}(A^*) \end{equation} with faithful gradings \begin{equation}\label{gradings} \mtc{K}^i=\oplus_{x \in \mtc{X}}(\mtc{K}^i)_x \end{equation} by a given group $\mtc{X}$ such that $(\mtc{K}^i)_1=\mtc{L}^i$ for $i=1,2$. In this situation \begin{equation}\label{formulahopf} \mtr{Rep}(H^*)=\oplus_{x \in \mtc{X}}(\mtc{K}^1)_x \boxtimes (\mtc{K}^2)_x . \end{equation} \subsection{Kernels of representations of Drinfeld doubles $D(A)$} \bn{thm}\label{genkern} Let $M$ be a $D(A)$-module with character $\ch$. Then there is a finite group $\mtc{X}$ and irreducible characters $\eta_x $ and $d_x $ of $A$ and $A^*$ respectively with $\eta_1=\eps_{ _A}$ and $d_1=1$ such that $$ \mtr{ker}_{ _{D(A)}}(\ch)=\bigsqcup_{x \in \mtc{X}}(\mtr{ker}_{ _{A^*}}(\ch\dw_{ _{A^*}})\eta_x\times \mtr{ker}_{ _A}\;(\ch\dw_{ _A})d_x). $$ Moreover $\eta^n_x \in \mtr{ker}_{ _{A^*}}(\ch\dw_{ _{A^*}})$ and $d_x^n \in \mtr{ker}_{ _A}\;(\ch\dw_{ _A})$ for some $n \geq 1$ and all $x \in \mtc{X}$. \end{thm} \bn{proof} Since $\mtr{ker}_{ _{D(A)}}(\ch)$ is the set of simple objects of a fusion subcategory of $\mtr{Rep}(A^*)^{\mtr{rev}}\boxtimes \mtr{Rep}(A)$ one can apply Lemma \ref{dg}.
Note that $\mtc{L}^1(\mtr{ker}_{ _{D(A)}}(\ch))$ has as set of simple objects the set $\mtr{ker}_{ _{A^*}}\;({\ch\dw^{^{D(A)}}_{ _{A^*}}})$ and $\mtc{L}^2(\mtr{ker}_{ _{D(A)}}(\ch))$ has as set of simple objects the set $\mtr{ker}_{ _A}\;({\ch\dw^{^{D(A)}}_{ _A}})$. Let $L_{ _{A^*}}:=A^{*}_{ _{(\ch\dw^{ ^{D(A)}}_{ _{A^*}})}}\subset A^*$ and $L_{ _A}:=A_{ _{(\ch\dw^{ ^{D(A)}}_{A})}}\subset A$ be the Hopf subalgebras determined by $\mtr{ker}_{ _{A^*}}\;({\ch\dw^{ ^{D(A)}}_{ _{A^*}}})$ and $\mtr{ker}_{ _A}\;({\ch\dw^{ ^{D(A)}}_{ _A}})$ respectively. Similarly let $K_{ _{A^*}} $ be the Hopf subalgebra of $A^*$ determined by $\mtc{K}^1(\mtr{ker}_{ _{D(A)}}\;\ch)$ and $K_{ _A} \subset A$ the Hopf subalgebra of $A$ determined by $\mtc{K}^2(\mtr{ker}_{ _{D(A)}}\;\ch)$. Then by Lemma \ref{dg} one has that $(K_{ _A})_{ _{ad}} \subset L_{ _A}$ and $(K_{ _{A^*}})_{ _{ad}} \subset L_{ _{A^*}}$. Applying Lemma \ref{dual-costs} it follows that there are irreducible characters $\eta_x$ and $d_x$ of $A$ and $A^*$ respectively such that $$ \mtr{ker}_{ _{D(A)}}(\ch)=\bigsqcup_{x \in \mtc{X}}(\mtr{ker}_{ _{A^*}}(\ch\dw^{^{D(A)}}_{ _{A^*}})\eta_x\times \mtr{ker}_{ _A}\;(\ch\dw^{^{D(A)}}_{ _A})d_x). $$ Remark \ref{costs} implies that $\eta^n_x \in \mtr{ker}_{ _{A^*}}(\ch\dw_{ _{A^*}})$ and $d_x^n \in \mtr{ker}_{ _A}\;(\ch\dw_{ _A})$ for some $n \geq 1$. \end{proof} \bn{rem}\label{remarked}\end{rem} If $\eta\bowtie d \in \mtr{ker}_{ _{D(A)}}(\ch)$ one can prove something a little stronger than $\eta^n \in \mtr{ker}_{ _{A^*}}(\ch\dw_{ _{A^*}})$ and $d^n \in \mtr{ker}_{ _A}\;(\ch\dw_{ _A})$. Clearly $C_{\eta}\bowtie C_d \subset D(A)_{ _{\ch}}$. Then from Condition \ref{second} above it follows that $C_{d^*}C_d \subset L_{ _A}$. Thus for all $m \in M$ and $x \in C_d$ one has $d^*xm=\eps(d)\eps(x)m$. Multiplying this equality by $\eta^*$ and noticing that $\eta^*d^* \in \mtr{ker}_{ _{D(A)}}\;\ch$ one obtains: $\eps(d)\eps(x)(\eta^*m)=(\eta^*d^*)xm=\eta(1)\eps(d)xm$.
A basis of the simple coalgebra $C_d$ is given by the comatrix entries $x^d_{ij}$ with $1\leq i, j\leq \eps(d)$. Therefore if $x=x^d_{ij}$ with $i\neq j$ then $x^d_{ij}m=0$ for all $m\in M$. If $i=j$ then the above formula implies that $x^d_{ii}m=\frac{1}{\eta(1)}\eta^*m$. This shows that $x^d_{ii}m$ does not depend on $i$. Recall that the exponent of $A$ is the smallest positive integer $n$ such that $h^{[n]}=\eps(h)1$ for all $h \in A$. The generalized power $h^{[n]}$ is defined by $h^{[n]}=\sum_{(h)}h_1h_2\cdots h_n$. The exponent of a finite dimensional semisimple Hopf algebra is always finite and divides the third power of the dimension of $A$ \cite{EG'}. If $h=d$ one has $$d^{[n]}=\sum_{i, j_1, j_2, \cdots j_{n-1}}x_{i\;j_1}x_{j_1\;j_2}\cdots x_{j_{n-1}i}$$ and therefore $d^{[n]}m=\sum_{i=1}^{\eps(d)}x_{ii}^n.m=\eps(d)x_{11}^nm$. If $n$ is divisible by the exponent of $A$ then $(x^d_{ii})^nm=m$ for all $m \in M$. Thus one obtains that $(x^d_{ii})^n \in \mtr{ker}_{ _A}\;(\ch\dw_{ _A}^{ ^{D(A)}})$. \subsection{Normal Hopf subalgebras of Drinfeld doubles $D(A)$} In this subsection we give a description of normal Hopf subalgebras of Drinfeld doubles $D(A)$. \bn{rem}\label{dualover}\end{rem} Suppose that $K$ is a normal Hopf subalgebra of $A$ and let $H=A//K$ with the natural projection $\pi_{ _K}: A \ra H$. Then $H^* \subset A^*$ via $\pi_{ _K}^*$. Thus Remark \ref{restric} shows that $\pi_{ _K}|_{C(A^*)}$ is the restriction map of $A^*$-characters to $H^*$. This implies that an irreducible character $d$ of $A^*$ lies over an irreducible character $x$ of $H^*$ if and only if $m_{ _{H^*}}(\pi_{ _K}(d),\;x)>0$. Suppose now that $x$ is an $A^*$-stable character of $H^*$. In this situation we denote by $C_{ _{{\pi_{ _K}}^{-1}(x)}}$ the subcoalgebra of $A$ generated by the set $\mtc{A}_{ _x}$ of all characters $d \in \mtr{Irr}(A^*)$ that lie over $x$, i.e.\ with the property $\pi_{ _K}(d)=\eps(d)x$.
Also Remark \ref{ind-restr} shows that the set $\mtc{A}_{ _x}$ is precisely the set of irreducible constituents of $\Lam_{ _K}d$ for any $A^*$-character $d$ lying over $x$. Here $\Lam_{ _K}\in K$ is the idempotent integral of $K$. \bn{thm}\label{gennormal} Any normal Hopf subalgebra $H$ of $D(A)$ is of the following form: $$ H=\oplus_{x \in \mtc{X}}C_{x\uw^{ ^{A}}_{ _{L_1}}}\bowtie C_{ _{{\pi_{ _{L_2}}}^{-1}(\psi(x))}} $$ where $L_1$ and $L_2$ are normal Hopf subalgebras of $A$, $\mtc{X} \subset G_{ _{st}}(L_1)$ is a subgroup of $A$-stable linear characters of $L_1$ and $\psi:\mtc{X} \ra G_{ _{st}}((A//L_2)^*)$ is a group monomorphism into the group of $A^*$-stable linear characters of $(A//L_2)^*$. \end{thm} \bn{proof} By Theorem 2.4 of \cite{Bker} any normal Hopf subalgebra $H$ of $D(A)$ is the kernel of a character $\ch$ of $D(A)$, central in $D(A)^*$. It follows that its restrictions to $A$ and $A^*$ are also central and therefore $A_{ _{\ch}}$ and ${A^*}_{ _{\ch}}$ are normal Hopf subalgebras of $A$ respectively $A^*$. Put $L_2=A_{ _{(\ch\dw^{ ^{D(A)}}_{ _A})}}$ and let $L_1$ be such that $A^*_{ _{(\ch\dw^{ ^{D(A)}}_{ _{A^*}})}}=(A//L_1)^*$. With the notations from the previous Theorem suppose that $\eta \bowtie d \in (\mtc{K}^1)_x \boxtimes (\mtc{K}^2)_x \subset \mtr{ker}_{ _{D(A)}}(\ch)$ for a given $x \in \mtc{X}$. Since $\eta^n \in \mtr{Rep}(A//L_1)$, Lemma \ref{power} applied to $\eta$ implies that $\eta \dw^{ ^A}_{ _{L_1}}=\eta(1)f_x$ for some $A$-stable linear character $f_x$ of $L_1$. This shows that $\mtc{X}$ can be regarded as a subgroup of $G_{ _{st}}(L_1)$. By duality, the same argument applied to $(\mtc{K}^2)_x$ gives that $d \dw^{ ^{A^*}}_{ _{(A//L_2)^*}}=\eps(d)g_x$ for some $A^*$-stable linear character $g_x$ of $(A//L_2)^*$. Therefore $\mtc{X}$ can also be identified with a subgroup of $G_{ _{st}}((A//L_2)^*)$. Using these identifications one can now define the map $\psi$ by $\psi(f_x)=g_x$ for all $x \in \mtc{X}$.
By Equation \ref{gradings} this is a group monomorphism from $\mtc{X}$ to $G_{ _{st}}((A//L_2)^*)$ and the proof is finished. \end{proof} \bn{cor}\label{cocor} With the notations from Theorem \ref{gennormal} it follows that $$ \mtr{Rep}(H^*)\subset \mtr{Rep}(A//L_1)^{co}\boxtimes \mtr{Rep}(L_2^*)^{co} $$ \end{cor} \bn{rem}\end{rem} Given a datum $L_1$, $L_2$, $\mtc{X}$ and $\psi$ as in the above Theorem it does not necessarily follow that $H$ is a normal Hopf subalgebra of $D(A)$. Compatibility conditions between $L_1$, $L_2$ and $\psi$ should be imposed in order to get a normal Hopf subalgebra. We will see in Theorem \ref{commutwise} that such a necessary condition is that $L_1$ and $L_2$ commute elementwise. In the case of the Drinfeld double of a group $G$ we will give in Theorem \ref{normald(g)} the necessary and sufficient conditions that have to be satisfied by this datum in order to get a normal Hopf subalgebra of $D(G)$. \bn{rem} Let $K$ be a Hopf subalgebra of $A$ and $\Lam_K$ be its idempotent integral. Then it is well known (see \cite{Bker} for example) that the induced module $A\ot_Kk$ is isomorphic to $A\Lam_K$ via the map $a\ot_K1\mapsto a\Lam_K$. \end{rem} \subsection{ Commutativity between $L_1$ and $L_2$} \bn{thm}\label{commutwise} Suppose that $K:=(A//L_1)^*\bowtie L_2$ is a normal Hopf subalgebra of $D(A)$. Then $L_1$ and $L_2$ commute elementwise. \end{thm} \bn{proof} One has by \cite{Bker} that $K$ is a normal Hopf subalgebra of $D(A)$ if and only if $K = \mtr{ker}_{ _{D(A)}}(\eps_{ _K}\uw^{D(A)}_K)$. This is equivalent to $L_2=K \cap A \subset \mtr{ker}_{ _{D(A)}}(\eps_{ _K}\uw^{D(A)}_K)$ and $(A//L_1)^*=K \cap A^* \subset \mtr{ker}_{ _{D(A)}}(\eps_{ _K}\uw^{D(A)}_K)$. Note that $k\uw^{D(A)}_K=D(A)\ot_{(A//L_1)^*\bowtie L_2}k$. Thus if $K$ is a normal Hopf subalgebra of $D(A)$ then $m((f \bowtie b)\ot_K1)=\eps(m)((f \bowtie b)\ot_K1)$ for all $m \in L_2$ and any $f \in A^*$, $b \in A$.
But if $m \in L_2$ then one has \begin{eqnarray*} m((f \bowtie b)\ot_K1) &=& ((f_1(Sm_3)f_3(m_1)f_2 \bowtie m_2b)\ot_K1) \\ & = & ((f_1(Sm_3)f_3(m_1)f_2 \bowtie b_1(Sb_2m_2b_3))\ot_K1)\\ &=&((f_1(Sm_3)f_3(m_1)f_2 \bowtie b_1)\ot_K(Sb_2m_2b_3)1) \\ & = & ((f_1(Sm_2)f_3(m_1)f_2 \bowtie b)\ot_K1) \end{eqnarray*} This implies that $((f_1(Sm_2)f_3(m_1)f_2 \bowtie b)\ot_K1) =\eps(m)((f \bowtie b)\ot_K1)$ and the previous Remark gives that \begin{equation}\label{cond} (f_1(Sm_2)f_3(m_1)f_2 \bowtie b)(t_{A//L_1}\bowtie \Lam_{L_2}) =\eps(m)(f \bowtie b)(t_{A//L_1}\bowtie \Lam_{L_2}) \end{equation} for all $f \in A^*$ and $b \in A$. Put $b=1$ and apply $\mtr{id}\otimes \eps$ in the previous equality. Then one obtains that $f_1(Sm_2)f_3(m_1)f_2t_{A//L_1}=\eps(m)ft_{A//L_1}$. Evaluating both sides at any $l \in L_1$ it follows that $S(m_2)lm_1=\eps(l)m$ which shows that $lm=ml$. Thus $L_1$ and $L_2$ commute elementwise. \end{proof} \bn{rem} \end{rem} The proof of the previous Theorem shows that Equation \ref{cond} is necessary in order for $K$ to be a normal Hopf subalgebra of $D(A)$. Writing a similar condition for $(A//L_1)^*=K \cap A^* \subset \mtr{ker}_{ _{D(A)}}(\eps_{ _K}\uw^{D(A)}_K)$ it follows that both conditions together are necessary and sufficient for $K$ to be a normal Hopf subalgebra of $D(A)$. At the end of this section we give two general examples of normal Hopf subalgebras of $D(A)$ of the type described in the previous Theorem. \bn{prop}\label{ka} Let $A$ be a semisimple Hopf algebra and $K(A)$ its maximal central Hopf subalgebra. Then $K(A)$ is a normal Hopf subalgebra of $D(A)$. \end{prop} \bn{proof} If $x \in K(A)$ then $a_1xS(a_2)=\eps(a)x$ for all $a \in A$ since $x$ is central in $A$.
On the other hand for all $h \in A$ and $f \in A^*$ one has \begin{eqnarray*} (<,\; h>\ot \id)(f_2xS(f_1)) &=& (<,\; h>\ot \id)(f_2 (x_1\rh Sf_1 \lh Sx_3)\bowtie x_2 )\\ &=& [f_2 (x_1\rh Sf_1 \lh Sx_3)](h)x_2\\ &=& f_2(h_1)f_1(S(x_1)S(h_2)x_3)x_2 \\ &=& f(S(x_1)S(h_2)x_3h_1)x_2 \\ &=& \eps(h)f(S(x_1)x_3)x_2 \end{eqnarray*} since $x \in K(A)$. This shows that \bn{equation}\label{conj} f_2xS(f_1)=\eps \bowtie f(S(x_1)x_3)x_2 \in K(A). \end{equation} Thus $K(A)$ is closed under the adjoint action of $D(A)$. \end{proof} Note that this Hopf subalgebra corresponds to $L_1=A$, $L_2=K(A)$ with $\mtc{X}$ and $\psi$ trivial. \bn{prop} Let $A$ be a semisimple Hopf algebra. Then \\$(A//K(A))^*\bowtie A$ is a normal Hopf subalgebra of $D(A)$. \end{prop} \bn{proof} Note that $$ (A//K(A))^*=\{f \in A^* \;|\;f(ax)=\eps(x)f(a)\;\text{for all}\;a \in A\;\text{and}\;x \in K(A)\}. $$ Since $K(A) \subset \mtc{Z}(A)$ it follows that $(b \rh f \lh c) \in (A//K(A))^*$ for all $f \in (A//K(A))^*$ and all $b, c \in A$. Indeed $(b \rh f \lh c)(ax)=f(caxb)=f(cabx)=\eps(x)f(cab)=(b \rh f \lh c)(a)\eps(x)$. Thus $a. (f \bowtie b)=a_1(f\bowtie b)Sa_2=a_1\rh f\lh Sa_3\bowtie a_2bSa_4 \in (A//K)^*\bowtie A$ if $f \bowtie b \in (A//K)^*\bowtie A$. This shows that $ (A//K)^*\bowtie A$ is closed under the adjoint action of $D(A)$ by elements of $A$. In order to show that $(A//K)^*\bowtie A$ is also closed under the adjoint action of $D(A)$ by elements of $A^*$ note that $$ g.(f \bowtie b)=g_2(f \bowtie b)Sg_1=g_2f(Sb_1 \rh S(g_1)\lh b_3)\bowtie b_2 . $$ Then it is enough to check that if $f \in (A//K)^*$ then $g_2f(b \rh S(g_1)\lh c) \in (A//K)^*$ for all $g \in A^*$ and all $b, c \in A$.
Indeed, for all $a \in A$ and $x \in K(A)$ one has \begin{eqnarray*} [g_2f(b \rh Sg_1\lh c)](ax)&=& g_2(a_1x_1)f(a_2x_2)Sg_1(ca_3x_3b) \\&=& g_2(a_1x_1)f(a_2)Sg_1(ca_3x_2b) \\ &=& g(SbSx_2Sa_3Sca_1x_1)f(a_2)\\ &=& g(SbSa_3Sca_1)f(a_2)\eps(x) \\ &=&[g_2f(b \rh Sg_1\lh c)](a)\eps(x) \end{eqnarray*} \end{proof} Note that this Hopf subalgebra corresponds to $L_1=K(A)$, $L_2=A$ with $\mtc{X}$ and $\psi$ trivial. \section{Representations of $D(G)$ and central characters} \subsection{The Drinfeld double $D(G)$.} Let $D(G)$ be the Drinfeld double of $G$. As a coalgebra $D(G)=kG^{*\;\mtr{cop}}\ot kG $ and the multiplication is given by $$(p_x\bowtie g)(p_y\bowtie h)=p_xp_{gyg^{-1}}\bowtie gh=\delta_{x,gyg^{-1}}p_x\bowtie gh. $$ The antipode is given by the formula $S(p_x \bowtie g)=(\eps\bowtie g^{-1})(p_{x^{-1}}\bowtie 1)=p_{g^{-1}x^{-1}g}\bowtie g^{-1}$. A vector space basis for $D(G)$ is given by $\{p_x\bowtie y\}_{x,\;y \in G}$ where $\{p_x\}_{x \in G}$ is the dual basis of the basis of $kG$ given by the group elements. \subsection{Irreducible representations of $D(G)$} Let $\mtc{R}$ be a set of representatives of the conjugacy classes of $G$. The irreducible representations of $D(G)$ are parameterized by pairs $(a , \gamma)$ where $a \in \mtc{R}$ and $\gamma \in \mtr{Irr}(C_G(a))$ is an irreducible character of the centralizer $C_G(a)$ of $a$ in $G$. The character of the representation corresponding to $(a , \gamma)$ is denoted by $\widehat{(a , \gamma)}$. \subsection{A few results on group representations} In this subsection we give some results on group representations that are needed in the next sections. Let $H$ be a subgroup of $G$. Define by $\mtr{core}_{ _G}(H)$ the largest normal subgroup of $G$ contained in $H$. Then $\mtr{core}_{ _G}(H)=\cap_{g \in G}gHg^{-1}$. Let $N$ be a normal subgroup of $G$. Then $G$ acts on the irreducible characters of $N$. Let $\al$ be an irreducible character of $N$. The set of irreducible characters of $G$ lying over $\al$ is denoted by $\mtr{Irr}(G)|_{\al}$.
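The following elementary example, included here only as an illustration, may help fix this notion. \bn{rem} Let $G=S_3$ and $N=A_3$. If $\al$ is one of the two nontrivial linear characters of $A_3$ then $\mtr{Irr}(G)|_{\al}$ consists precisely of the two-dimensional irreducible character of $S_3$: its restriction to $A_3$ is the sum of the two nontrivial linear characters of $A_3$, while the trivial and the sign characters of $S_3$ restrict to the trivial character of $A_3$. \end{rem}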
\bn{lemma}\label{groupzeros} Let $N$ be a normal subgroup of $G$ and $\al \in \mtr{Irr}(N)$. Then the induced character $\al\uw^{ ^G}_{ _N}$ vanishes outside $N$. If $\al$ is a $G$-stable character of $N$ then $\al\uw^{ ^G}_{ _N}(g)=\frac{|G|}{|N|}\al(g)$ for all $g \in N$. \end{lemma} \bn{proof} Denote by $M_{ _{\al}}$ the representation afforded by $\al$. Let $G=\cup _{i=1}^s x_iN$ be a coset decomposition of $G$. Then $g(x_i\ot_Nm)=x_j\ot_N(x_j^{-1}gx_i).m$ where $j$ is chosen such that $gx_iN=x_jN$. Thus if $g \notin N$ then $i \neq j$ and $\al\uw^{ ^G}_{ _N}(g)=0$. On the other hand if $g \in N$ then $\al\uw^{ ^G}_{ _N}(g)=\sum_{i=1}^s\al(x_i^{-1}gx_i)$. But if $\al$ is $G$-stable then $\al(x_i^{-1}gx_i)=\al(g)$ for all $i$ and the formula follows. \end{proof} Let $a \in G$ and denote $\bar{C}_G(a):=\mtr{core}_{ _G}(C_G(a))$ the core of the centralizer $C_G(a)$. For any $a \in G$ let $N(a)$ be the smallest normal subgroup of $G$ containing $a$. It is easy to see that $N(a)$ is generated by the conjugacy class of $a$. \bn{lemma}\label{compairs} If $a \in G$ then $N(a)$ and $\bar{C}_G(a)$ form a pair of commuting groups. \end{lemma} \bn{proof} One has $\bar{C}_G(a)=\cap_{g \in G}gC_G(a)g^{-1}=\cap_{g \in G}C_G(gag^{-1})$. Since $N(a)$ is generated by the conjugacy class of $a$ it is enough to verify that any element of this conjugacy class commutes with each element of $\bar{C}_G(a)$. Let $z \in \bar{C}_G(a)$ and $g \in G$ be fixed. Then there is $z' \in C_G(a)$ such that $z=gz'g^{-1}$ and therefore $z(gag^{-1})=(gz'g^{-1})(gag^{-1})=gz'ag^{-1}=gaz'g^{-1}=(gag^{-1})z$. \end{proof} \subsection{Kernels in $kG^*$} In this subsection we apply the previous constructions of $\mtr{ker}_{ _A}\;\ch$ and $Z_{ _{A}}\ch$ for $A=kG^*$. Note that in this case $\mtr{Irr}(A)\cong G$. Let $\ch \in \mtr{Irr}(G)$. It follows that $h \in \mtr{ker}_{ _{kG}}\;\ch$ if and only if $N(h) \subset \mtr{ker}_{ _{kG}}\;\ch$.
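The following simple example, included only as an illustration, shows how $N(h)$ controls these kernels. \bn{rem} Let $G=S_3$. The conjugacy class of a transposition generates the whole group, so $N((12))=S_3$, while the conjugacy class of a $3$-cycle generates $A_3$, so $N((123))=A_3$. Accordingly a transposition lies in $\mtr{ker}_{ _{kG}}\;\ch$ only for the trivial character $\ch$, whereas the sign character satisfies $N((123))=A_3=\mtr{ker}_{ _{kG}}\;\mtr{sgn}$. \end{rem}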
\bn{lemma} \label{charnh} For any $h \in G$ one has $$kG^*_h=k[G/N(h)]^*.$$ \end{lemma} \bn{proof} By its definition ${kG^*}_h$ is determined by all the characters $\ch \in \mtr{Irr}(G)$ such that $\ch(h)=\ch(1)$. These are precisely the irreducible characters of $G/N(h)$. \end{proof} \bn{lemma}\label{z} For any $h \in G$ one has $$Z_{ _{kG^*}}h=k[G/[G,\;N(h)]]^*.$$ \end{lemma} \bn{proof} By its definition $Z_{ _{kG^*}}h$ is generated by the set of irreducible characters $\ch$ of $G$ with the property that $\ch(h)=\omega\ch(1)$ for some root of unity $\omega$. Since $N(h)$ is generated by the conjugacy class of $h$ it follows that every element of $N(h)$ acts as a scalar on the representation $M_{ _{\ch}}$ afforded by $\ch$. Then if $n \in N(h)$ and $g\in G$ it follows that $gng^{-1}n^{-1}$ acts as the identity on $M_{ _{\ch}}$. Thus $\ch \in \mtr{Irr}(G/[G,\;N(h)])$. Conversely for any $\ch \in \mtr{Irr}(G/[G,\;N(h)])$ one has that left multiplication by any $n \in N(h)$ is a morphism of $kG$-modules and Schur's lemma implies the conclusion. \end{proof} \bn{lemma} \label{valus} Let $h \in G$ and $\ch \in Z_{ _{kG^*}}h$ with $\ch(h)=\omega \ch(1)$ where $\omega \in k^*$. Then all irreducible characters of $G$ which satisfy $\mu(h)=\omega \mu(1)$ are constituents of $\ch\eps_{ _{N(h)}}\uw^{ ^G}_{ _{N(h)}}$ where $\eps_{ _{N(h)}}$ is the trivial character of $N(h)$. \end{lemma} \bn{proof} If $\mu(h)=\omega \mu(1)$ then $h \in \mtr{ker}\;(\mu\ch^*)$. Since $N(h)$ is a normal subgroup it easily follows (see for example Theorem 4.3 of \cite{Bcoset}) that $\mu\ch^*$ has all its constituents inside $\eps_{ _{N(h)}}\uw^{ ^G}_{ _{N(h)}}$. Thus $m_G(\mu,\;\ch\eps_{ _{N(h)}}\uw^{ ^G}_{ _{N(h)}})=m_G(\mu\ch^*,\;\eps_{ _{N(h)}}\uw^{ ^G}_{ _{N(h)}})>0$. \end{proof} \subsection{Central characters in $D(G)$} \bn{lemma}\label{centrality} Let $c=\sum_{h \in G}\ch_h \bowtie h$ be a character of $D(G)^*$ with $\ch_h \in C(G)$. Then $c$ is central in $D(G)$ if and only if $\ch_{ghg^{-1}}=\ch_h$ for all $g, h \in G$ and each $\ch_h$ vanishes on $G\backslash C_G(h)$.
\end{lemma} \bn{proof} The character $c$ is central if and only if it is invariant under the adjoint action of $D(G)$ on itself. Thus $c$ is central if and only if $gcg^{-1}=c$ for all $g \in G$ and $p_x.c=\delta_{x, 1}c$ for all $x \in G$. The first condition is equivalent to $\ch_{ghg^{-1}}=\ch_h$ for all $g \in G$. For the second condition one has \bn{eqnarray*} p_x.c& \! = \! & \sum_{uv=x}p_vcp_{u^{-1}} \\& = & \sum_{h \in G}\sum_{uv=x}p_v(\ch_h \bowtie h)p_{u^{-1}} \\& = &\sum_{h \in G}\sum_{uv=x}(p_v\ch_hp_{hu^{-1}h^{-1}} \bowtie h) \\& = &\sum_{h \in G}\sum_{\{u|\;uhu^{-1}h^{-1}=x\}}p_{hu^{-1}h^{-1}}\ch_h\bowtie h \end{eqnarray*} Suppose that $c$ is central. Then $p_x.c=\delta_{x,1}c$ if and only if $$\sum_{\{u|\;uhu^{-1}h^{-1}=x\}}p_{hu^{-1}h^{-1}}\ch_h=\delta_{x,1}\ch_h$$ for all $h \in G$. If $x=1$ this means precisely that $\ch_h$ is zero outside $C_G(h)$. The converse is immediate. \end{proof} For a conjugacy class $\mtc{C}$ of $G$ let $p_{ _{\mtc{C}}}=\sum_{x \in \mtc{C}}p_x$ and $z_{ _{\mtc{C}}}=\sum_{x \in \mtc{C} }x$. The elements $p_{ _{\mtc{C}}}$ form a basis for the character ring $C(G)$ of $G$ and the elements $z_{ _{\mtc{C}}}$ form a basis for the center $\mtc{Z}(kG)$. For a character $\ch$ of a group $G$ and a conjugacy class $\mtc{C}$ of $G$ denote by $\ch_{ _{\mtc{C}}}$ the value of $\ch$ on the conjugacy class $\mtc{C}$. Thus $\ch=\sum_{\mtc{C}}\ch_{ _{\mtc{C}}}p_{ _{\mtc{C}}}$. \bn{rem}\label{cre} Let $H$ be a subgroup of $G$. A character $\ch \in C(G)$ vanishes outside $H$ if and only if it vanishes outside $\mtr{core}_{ _G}(H)$. \end{rem} \bn{thm}\label{ctrl} A basis for $\mtc{Z}(D(G))\cap C(D(G)^*)$ is given by the elements $p_{ _{\mtc{D}}}\bowtie z_{ _{\mtc{C}}}$ where $\mtc{D}$ and $\mtc{C}$ run through all conjugacy classes of $G$ that centralize each other elementwise. \end{thm} \bn{proof} Let $a \in \mtc{C}$. Then $\bar{C}_G(a)=\cap_{g \in G}gC_G(a)g^{-1}=\cap_{x \in \mtc{C}}C_G(x)$.
Thus a conjugacy class $\mtc{D}$ centralizes each element of another conjugacy class $\mtc{C}$ if and only if $\mtc{D}$ is contained in $\bar{C}_G(a)$. In this case the above remark implies that $p_{ _{\mtc{D}}}$ vanishes outside the centralizer of each element of $\mtc{C}$. The previous lemma implies that $p_{ _{\mtc{D}}}\bowtie z_{ _{\mtc{C}}}$ is central in $D(G)$. The same lemma also implies that any central character is a linear combination of such characters. \end{proof} \section{Normal Hopf subalgebras of Drinfeld doubles $D(G)$} For a subgroup $N$ of $G$ define $C_G(N):=\cap_{n \in N}C_G(n)$ to be the subgroup of the elements of $G$ that commute with each element of $N$. If $N$ is a normal subgroup of $G$ let $G_{ _{st}}(N)$ be the group of linear characters of $N$ that are stable under the action of $G$ induced by the conjugation on $N$. It is not difficult to check the following result: \bn{lemma} \label{multip} Let $N$ be a normal subgroup of $G$ and $x ,x' \in G_{ _{st}}(N)$. Then $$x\uw^{ ^G}_{ _N}x'\uw^{ ^G}_{ _N}=\frac{|G|}{|N|}(xx')\uw^{ ^G}_{ _N}.$$ \end{lemma} If $x$ is a character of $N$ denote by $C_{x\uw^{ ^G}_{ _N}}$ the subcoalgebra of $kG^*$ generated by $x\uw^{ ^G}_{ _N}$. Also if $N$ is a normal subgroup of $G$ with $\pi_{ _N}:G\ra G/N$ the natural group projection and $g \in G$ denote by $C_{ _{{\pi_{ _N}}^{-1}(\bar{g})}}$ the vector space with a basis given by the elements $gn$ with $n \in N$. \subsection{Hopf subalgebras of $D(G)$} Let $N$ and $M$ be normal subgroups of $G$, $\mtc{X} \subset G_{ _{st}}(N)$ and $\psi: \mtc{X} \rightarrow G/M$ a monomorphism of groups. Let $D(N, M, \mtc{X}, \psi)$ be the subcoalgebra of $D(G)$ given by: $$ D(N, M,\mtc{X},\psi):=\oplus_{x \in \mtc{X}}C_{x\uw^{ ^G}_{ _N}}\bowtie C_{ _{ {\pi_{ _M}}^{-1}(\psi(x))}}. $$ \bn{thm}\label{genhopfdg} Any Hopf subalgebra of $D(G)$ is of the type $D(N, M, \mtc{X}, \psi)$ for a given datum as above.
\end{thm} \bn{proof} Since $x$ is $G$-stable, by Frobenius reciprocity it follows that $x\uw^{ ^G}_{ _N}=\sum_{\ch \in \mtc{A}_{ _x}}\ch(1)\ch$ and therefore $\mtr{dim}_k\; C_{x\uw^{ ^G}_{ _N}}=\frac{|G|}{|N|}$. Then $\mtr{dim}_kD(N, M,\mtc{X},\psi)=\frac{|\mtc{X}||G||M|}{|N|}$. Let $\psi_0(x)\in G$ be such that $\psi(x)=\psi_0(x)M$ for all $x\in \mtc{X}$. Since $\psi$ is a group morphism it follows that $(\psi_0(x)\Lam_M)(\psi_0(x')\Lam_M)=\psi_0(xx')\Lam_M$ for all $x,x' \in \mtc{X}$. Here $\Lam_M=\frac{1}{|M|}\sum_{m \in M}m$. Let $$ \Lam_{D(N, M,\mtc{X},\psi)}=\frac{|N|}{|\mtc{X}||G|}\sum_{x \in \mtc{X}\;}x\uw^{ ^G}_{ _N}\bowtie\psi_0(x)\Lam_M. $$ One has $$ \Lam^2_{D(N, M,\mtc{X},\psi)}=(\frac{|N|}{|\mtc{X}||G|})^2\sum_{x, x' \in \mtc{X}\;}x\uw^{ ^G}_{ _N}x'\uw^{ ^G}_{ _N}\bowtie\psi_0(x)\Lam_M\psi_0(x')\Lam_M $$ $$ =\frac{|N|}{|\mtc{X}|^2|G|}\sum_{x, x' \in \mtc{X}\;}(xx')\uw^{ ^G}_{ _N}\bowtie \psi_0(xx')\Lam_M=\Lam_{D(N, M,\mtc{X},\psi)} $$ by the above lemma. This shows that $D(N, M,\mtc{X},\psi)$ is a Hopf subalgebra of $D(G)$. In order to see that any Hopf subalgebra $H$ of $D(G)$ is of this type one has to apply the results from Subsection \ref{hsalg} to the case $A=kG$. Note that in this case one can write $\mtc{L}^1=\mtr{Rep}(G/L_1)$ and $\mtc{K}^1=\mtr{Rep}(G/K_1)$ for some normal subgroups $L_1$ and $K_1$ of $G$. Condition \ref{first} translates to $K_1 \subset L_1 \subset \mtc{Z}_G(K_1)$ where $\mtc{Z}_G(K_1)$ is defined by $\mtc{Z}_G(K_1)/K_1=\mtc{Z}(G/K_1)$. On the other hand one can write $\mtc{L}^2=\mtr{Rep}(kL_2^*)$ and $\mtc{K}^2=\mtr{Rep}(kK_2^*)$ for some subgroups $L_2$ and $K_2$ of $G$. Then Condition \ref{second} is satisfied if $L_2$ is a normal subgroup of $K_2$. Indeed, the second grading ($i=2$) of \ref{gradings} implies that $L_2$ is a normal subgroup of $K_2$ and $\mtc{X}\cong K_2/L_2$. In this case the grading becomes $K_2=\cup_{x \in K_2/L_2}xL_2$.
Applying Lemma \ref{dual-costs} to the first grading of \ref{gradings} ($i=1$) it follows that $(\mtc{K}^1)_x$ is given by all the characters of $G$ that lie over a $G$-stable linear character $\psi_x$ of $L_1$. Thus $\mtc{X}$ can also be identified with a subgroup of $G_{ _{st}}(L_1)$ by $x \mapsto \psi_x$. Thus formula \ref{formulahopf} implies that $H=D(L_1, L_2, \mtc{X}, \psi)$ where $\psi$ sends $\psi_x$ to the class of $x$ modulo $L_2$, based on the two identifications of $\mtc{X}$ above. \end{proof} \subsection{Normal Hopf subalgebras of $D(G)$} \bn{thm}\label{normald(g)} A Hopf subalgebra $D(N, M, \mtc{X}, \psi)$ of $D(G)$ is normal if and only if $\psi(\mtc{X})\subset (C_G(N)/M)\cap \mtc{Z}(G/M)$ and $[N,M]=1$. \end{thm} \bn{proof} In order to see when $D(N, M, \mtc{X}, \psi)$ is a normal Hopf subalgebra of $D(G)$ it is enough \cite{Masnr} to see when its integral $\Lam_{D(N, M,\mtc{X},\psi)}$ is central in $D(G)$. In order to do this one needs to verify the conditions from Lemma \ref{centrality}. First note that $\psi_0(x)m\neq \psi_0(y)m'$ for $x \neq y$ and for any $m ,m' \in M$ since $\psi$ is a monomorphism. The equality of characters $\ch_{ghg^{-1}}=\ch_h$ from Lemma \ref{centrality} is satisfied if and only if $g\psi_0(x)Mg^{-1}=\psi_0(x)M$ for all $g \in G$. This is equivalent to the fact that $\psi(x) \in \mtc{Z}(G/M)$. On the other hand, for the second condition note that by Lemma \ref{groupzeros} the character $x\uw^{ ^G}_{ _N}$ vanishes outside $N$ and does not vanish on any element of $N$. Thus the second condition of Lemma \ref{centrality} is equivalent to $N \subset C_G(\psi_0(x)m)$ for all $m \in M$, that is, $\psi(x) \in C_G(N)/M$. \end{proof} \subsection{Kernels of irreducible characters of $D(G)$} Since $D(G)=kG^{*\;\mtr{cop}}\bowtie kG$ the dual Hopf algebra $D(G)^* $ can be identified with $kG^*\ot kG^{\;op}$ via $<f\ot l,\; p_x \bowtie g>=<f,\;g><p_x,\;l>$ for any $g, x, l\in G$ and $f \in kG^*$.
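The following elementary example, included only for illustration, describes the extreme abelian case. \bn{rem} If $G$ is abelian then the conjugation action is trivial, so $D(G)=kG^*\ot kG$ as a Hopf algebra and $C_G(a)=G$ for every $a \in G$. In this case the irreducible representations $(a,\gamma)$ with $a \in G$ and $\gamma \in \mtr{Irr}(G)$ are all one-dimensional and the identification above becomes $D(G)^*\cong kG^*\ot kG$. \end{rem}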
For a subgroup $H$ of $G$ denote by $(G/H)_l$ a set of representatives for the left cosets of $H$ in $G$. \bn{lemma}\label{formula} For a representation $\widehat{(a , \gamma)}$ of $D(G)$ one has $$\widehat{(a , \gamma)}=\sum_{g \in (G/C_G(a))_l}\;\sum_{z \in C_G(a)}\gamma(z)p_{gag^{-1}}\ot gzg^{-1}.$$ \end{lemma} \bn{proof} It is enough to show that \bn{equation}\label{eq} \widehat{(a , \gamma)}(p_x \bowtie l)=\gamma(g^{-1}lg) \end{equation} if there is $g \in G$ such that $x=gag^{-1}$ and $g^{-1}lg \in C_G(a)$, and zero otherwise. The representation corresponding to $(a, \gamma)$ is given by $kG\ot_{kC_G(a)}M_{\gamma}$ where $M_{\gamma}$ is the module affording the character $\gamma$. The action of $kG^*$ is given by $p_x(g \ot_{kC_G(a)}m)=\delta_{x, gag^{-1}}(g \ot_{kC_G(a)}m)$ and the action of $kG$ is that of the induced module. Using Lemma \ref{groupzeros}, a straightforward computation implies formula \ref{eq}. \end{proof} \bn{prop}\label{new} Let $M$ be a subgroup of $G$, $\gamma$ a character of $M$ and $a \in G$. Then the set $\mtc{S}$ of pairs $(\ch, l)\in Z_{ _{kG^*}}a \times \mtr{core}_{ _G}M$ such that $\frac{\ch(a)}{\ch(1)}=(\frac{\gamma(glg^{-1})}{\gamma(1)})^{-1}$ for all $g \in G$ is of the form $$ \mtc{S}= \bigsqcup_{i=0}^s(\mtr{Irr}(G)|_{f_0^i}\times l_0^i\mtr{core}_{ _G}(\mtr{ker}_{ _M}\;(\gamma))) $$ for some $G$-stable character $f_0$ of $N(a)$ of order $s$ and some $l_0 \in Z_{ _{C_G(a)}}\gamma$ such that $l_0^s \in \mtr{core}_{ _G}(\mtr{ker}_{ _M}\;(\gamma))$. \end{prop} \bn{proof} Let \begin{equation*} H:=\{ \omega \in k^* \;|\; \omega=\frac{\ch(a)}{\ch(1)}\; \text{ for some } \;(\ch, l)\in \mtc{S} \} \end{equation*} It can be checked that $H$ is a subgroup of $k^*$. Since $k$ is algebraically closed and $H$ is finite it follows that $H$ is a cyclic group. Therefore $H=\{1, \omega, \cdots, \omega^{s-1}\}$ for some root of unity $\omega$ of order $s$. 
Let $\ch_0$ and $l_0$ be such that $(\ch_0, l_0)\in \mtc{S}$ and $\frac{\ch_0(a)}{\ch_0(1)}=\omega=\left(\frac{\gamma(gl_0g^{-1})}{\gamma(1)}\right)^{-1}$ for all $g \in G$. Then $\ch_0\dw^{ ^G}_{ _{N(a)}}=\ch_0(1)f_0$ for some $f_0 \in G_{ _{st}}(N(a))$ of order $s$. Also note that $l_0^s \in \mtr{core}_{ _G}(\mtr{ker}_{ _{M}}\;\gamma)$. Lemma \ref{valus} implies the conclusion of the Proposition. \end{proof} \bn{thm}\label{kerndescr} Let $a \in \mtc{R}$ and $\gamma \in \mtr{Irr}(C_G(a))$. Then the Hopf subalgebra $D_{(a, \;\gamma)}:=D(G)_{\widehat{(a, \gamma)}}$ is a normal Hopf subalgebra of $D(G)$. Moreover, with the above notations one has $$ D_{(a, \gamma)}=D(N(a),\mtr{core}_{ _G}(\mtr{ker}_{ _{C_G(a)}}(\gamma)), <f_0>, \psi) $$ for some $G$-stable linear character $f_0$ of $N(a)$ and some monomorphism\\ $\psi: <f_0>\ra \mtc{Z}(G/\mtr{core}_{ _G}(\mtr{ker}_{ _{C_G(a)}}\;\gamma ))\cap C_G(N(a))/\mtr{core}_{ _G}(\mtr{ker}_{ _{C_G(a)}}(\gamma))$. \end{thm} \bn{proof} First we will describe $\mtr{ker}\;\widehat{(a, \gamma)}$. An irreducible character of $D(G)^*$ is given by $\ch \bowtie l$ with $\ch \in \mtr{Irr}(G)$ and $l \in G$. It follows that $\ch \bowtie l \in \mtr{ker}_{ _{D(G)}}\;\widehat{(a, \gamma)}$ if and only if $$\widehat{(a , \gamma)}(\ch \bowtie l)=\ch(1)\gamma(1)\frac{|G|}{|C_G(a)|}.$$ It follows from Lemma \ref{formula} that $$ \widehat{(a , \gamma)}(\ch \bowtie l)=\ch(a)\gamma (\sum_{\{g\in (G/C_G(a))_l\;|\;l \in gC_G(a)g^{-1}\}}g^{-1}lg). $$ Since the above sum has $\frac{|G|}{|C_G(a)|}$ terms and one has $|\ch(h)|\leq \ch(1)$ and $|\gamma(g^{-1}lg)|\leq \gamma(1)$, we deduce that the above equality is satisfied if and only if there is a root of unity $\omega \in k^*$ such that $\ch(a)=\omega \ch(1)$ and $l \in \mtr{core}_{ _G}(Z_{ _{C_G(a)}}\gamma)$ with the property that $\gamma(g^{-1}lg)=\omega^{-1}\gamma(1)$ for all $g \in G$. 
By Proposition \ref{new} it follows that there is $f_0$ a $G$-stable linear character of $N(a)$ and $l_0 \in G$ such that $$ \mtr{ker}_{ _{D(G)}}\;\widehat{(a, \gamma)}=\bigsqcup_{i=0}^s (\mtr{Irr}(G)|_{f_0^i}\times l_0^i\mtr{core}_{ _G}(\mtr{ker}_{ _{C_G(a)}}\gamma)) $$ Thus one can take $\mtc{X}$ to be the group generated by $f_0$ and define $\psi$ by sending $f_0^i$ to the class of $l_0^i$ modulo $\mtr{core}_{ _G}(\mtr{ker}_{ _{C_G(a)}}\;\gamma)$, for all $1\leq i\leq s-1$. Note that $[N(a),\; C_G(a)]=1$ by Lemma \ref{compairs}. It can be easily checked that the map $\psi$ satisfies the additional hypothesis from Theorem \ref{normald(g)}. \end{proof} The description of the kernels from the previous theorem implies the following corollary: \bn{cor} With the above notations one has $$D_{(1,\gamma)}=kG^* \bowtie \mtr{ker}_{ _G}\;\gamma$$ \end{cor} \bn{prop} If $(a,\;\gamma)$ is a representation of $D(G)$ then $$Z_{ _{D(G)}}\widehat{(a, \gamma)}=k[G/[G, N(a)]]^*\bowtie \mtr{core}_{ _G}(Z_{ _{C_G(a)}}\gamma)$$ \end{prop} \bn{proof} The proof is similar to the proof of Theorem \ref{kerndescr}. Lemma \ref{z} is needed in order to compute $Z_{ _{kG^*}}a$. \end{proof} \subsection{Fusion subcategories of $\mtr{Rep}(D(G))$.} Let $\mtc{D}$ be a fusion subcategory of $\mtr{Rep}(D(G))$. Following \cite{s}, $\mtc{D}$ is completely determined by two canonical normal subgroups $K_{ _{\mtc{D}}}$ and $H_{ _{\mtc{D}}}$ of ${G}$ and a $G$-invariant bicharacter $B_{ _{\mtc{D}}}:K_{ _{\mtc{D}}} \times H_{ _{\mtc{D}}} \ra k^*$. The subgroups $K_{ _{\mtc{D}}}$ and $H_{ _{\mtc{D}}}$ are defined as follows: $$ K_{ _{\mtc{D}}} := \{gag^{-1}\; | g \in G\;\; \text{and} \;\;(a, \gamma) \in \mtc{D} \;\;\text{for some} \;\;\gamma\}$$ and $H_{ _{\mtc{D}}}$ is the normal subgroup of $G$ such that $\mtc{D}\cap \mtr{Rep}(G)=\mtr{Rep}(G/H_{ _{\mtc{D}}})$. Note that $K_{ _{\mtc{D}}}$ determines the fusion subcategory of $kG^*$ obtained by restricting all simple objects of $\mtc{D}$ to $kG^*$. 
The bicharacter $$B_{ _{\mtc{D}}} : K_{ _{\mtc{D}}} \times H_{ _{\mtc{D}}} \ra k^*$$ is defined by $$ B_{ _{\mtc{D}}}(g^{-1}ag, h) :=\frac{\gamma(ghg^{-1})}{\gamma(1)}$$ if $(a, \gamma) \in \mtc{D}$. This is well defined and does not depend on $\gamma$ by \cite{NNW}. Recall that a bicharacter is called $G$-invariant if and only if \\$B(xkx^{-1}, h)=B(k, h)$ and $B(k,xhx^{-1})=B(k,h)$ for all $x \in G$, $k \in K$ and $h \in H$. Conversely, any two normal subgroups $K$ and $H$ of $G$ that centralize each other element-wise, together with a $G$-invariant bicharacter $B: K\times H \ra k^*$, give rise to a fusion subcategory denoted by $S(K, H, B)$ in \cite{NNW}. It is defined as the full abelian subcategory of $\mtr{Rep}(D(G))$ generated by the objects $(a ,\gamma)$ with $a \in K \cap \mtc{R}$ and $\gamma \in \mtr{Irr}(C_G(a))$ such that $\gamma(h)=B(a, h)\gamma(1)$ for all $h \in H$. \subsection{Normal fusion subcategories of $\mtr{Rep}(D(G))$.} In this section we will identify all the normal fusion subcategories $S(K,H,B)$ of $\mtr{Rep}(D(G))$. \bn{rem} Let $B$ be a normal Hopf subalgebra of a semisimple Hopf algebra $A$ and $a \in A$. Then $a(A//B)\neq 0$ if and only if $a\Lam_{ _B}\neq 0$. \end{rem} \bn{thm}\label{mainn} Using the notations from Theorem \ref{normald(g)}, the fusion subcategory $\mtr{Rep}(D(G)//D(N, M, \mtc{X}, \psi))$ can be identified with $S(K, H, B)$ where $K:=N$, $H:= <\psi_0(x), M\;|x \in \mtc{X}>$ and $B: K\times H \ra k^*$ is given by $B(n, \psi_0(x)m)=x(n)^{-1}$. \end{thm} \bn{proof} Let $\mtc{D}:=\mtr{Rep}(D(G)//D(N, M,\mtc{X}, \psi))$. For the subgroup $H_{ _{\mtc{D}}}$ one has to look at $\mtr{Rep}(D(G)//D(N, M, \mtc{X}, \psi))\cap \mtr{Rep}(kG)$. Thus \begin{equation*} H_{ _{\mtc{D}}}=\cap_{\{\ch\in \mtr{Irr}(G)\;|(1,\ch)\in \mtc{D}\}}\mtr{ker}_{ _G}\;\ch . 
\end{equation*} Since $D_{(1, \ch)}=kG^*\bowtie \mtr{ker}_{ _G}\;\ch$ it follows that $D_{(1, \ch)}\supset D(N, M, \mtc{X}, \psi)$ if and only if $\mtr{ker}_{ _G}\;\ch \supset <\psi(x), m\;|x \in \mtc{X}, \;m \in M>$. Thus $H_{ _{\mtc{D}}}= <\psi(x), m\;|x \in \mtc{X}, \;m \in M>$. The subgroup $K_{ _{\mtc{D}}}$ is generated by all $x \in G$ such that $p_x(D(G)//D(N, M, \mtc{X}, \psi))\neq 0$. The above remark implies that $K_{ _{\mtc{D}}}$ is given by those $x \in G$ such that $p_x \Lam_{D(N, M, \mtc{X}, \psi)}\neq 0$. The formula for $\Lam_{D(N, M, \mtc{X}, \psi)}$ from Theorem \ref{genhopfdg} shows that $x\in K_{ _{\mtc{D}}}$ if and only if $(f^i\uw^{ ^G}_{ _N})(x)\neq 0$ for some $i$. Lemma \ref{groupzeros} shows that $K_{ _{\mtc{D}}} \subset N$. Since $f$ is linear it follows that $K_{ _{\mtc{D}}}=N$. In order to describe the bicharacter $B$ suppose that\\ $(a ,\gamma) \in \mtr{Rep}(D(G)//D(N, M, \mtc{X}, \psi))$. Then $D_{(a,\gamma)} \supset D(N, M, \mtc{X}, \psi) $. Thus, using again Lemma \ref{groupzeros} and Remark \ref{ind-restr}, one has $B(gag^{-1},h)=\frac{\gamma(g^{-1}hg)}{\gamma(1)}=\left(\frac{x\uw^{ ^G}_{ _N}(a)}{\frac{|G|}{|N|}}\right)^{-1}=x(a)^{-1}$. \end{proof} \bn{thm}\label{main2} A category $S(K,H,B)$ is normal if and only if $$ B(gag^{-1}, h)=B(a, h)=B(a, yhy^{-1}) $$ for all $a \in K$, $h \in H$ and $g, y \in G$. In these conditions $$ S(K,H,B)=\mtr{Rep}(D(G)//D(K, K^{\perp}, \mtc{X}, \psi)) $$ where $\mtc{X}=\{ B(-,\;h)\;|h \in H\}$ and $\psi:\mtc{X} \ra \mtc{Z}(G/K^{\perp})\cap C_G(K)/K^{\perp}$ is given by $\psi(x)=\bar{h}$ for any $h \in H$ with $x=B(-,\;h)$. \end{thm} \bn{proof} It can easily be checked that $\mtc{X}$ is a group of $G$-stable linear characters of $K$ and that $\psi$ takes values in $\mtc{Z}(G/K^{\perp})\cap C_G(K)/K^{\perp}$. The rest of the theorem follows from Theorem \ref{mainn}. 
\end{proof} \section{Acknowledgements} This work was supported by the strategic grant POSDRU/89/1.5/S/58852, Project ``Postdoctoral programme for training scientific researchers'', cofinanced by the European Social Fund within the Sectoral Operational Programme Human Resources Development 2007--2013. \bibliographystyle{amsplain}
\section{Introduction} The optional arguments of {\tt $\backslash$documentclass$\{$eptcs$\}$} are \begin{itemize} \item at most one of {\tt adraft}, {\tt submission}, {\tt preliminary} or {\tt replacement}, \item at most one of {\tt publicdomain} or {\tt copyright}, \item and optionally {\tt creativecommons}, \begin{itemize} \item possibly augmented with \begin{itemize} \item {\tt noderivs} \item or {\tt sharealike}, \end{itemize} \item and possibly augmented with {\tt noncommercial}. \end{itemize} \end{itemize} We use {\tt adraft} rather than {\tt draft} so as not to confuse hyperref. The style-file option {\tt submission} is for papers that are submitted to {\tt $\backslash$event}, where the value of the latter is to be filled in in line 2 of the tex-file. Use {\tt preliminary} only for papers that are accepted but not yet published. The final version of your paper that is to be uploaded at the EPTCS website should have none of these style-file options. By means of the style-file option \href{http://creativecommons.org/about/license/}{creativecommons} authors equip their paper with a Creative Commons license that allows everyone to copy, distribute, display, and perform their copyrighted work and derivative works based upon it, but only if they give credit the way you request. By invoking the additional style-file option {\tt noderivs} you let others copy, distribute, display, and perform only verbatim copies of your work, but not derivative works based upon it. Alternatively, the {\tt sharealike} option allows others to distribute derivative works only under a license identical to the license that governs your work. Finally, you can invoke the option {\tt noncommercial} that lets others copy, distribute, display, and perform your work and derivative works based upon it, but for noncommercial purposes only. 
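For example, a paper submitted to an event under a Creative Commons license restricted to noncommercial use would combine these options as follows (an illustrative combination; adapt it to your paper's actual status and license):

```latex
\documentclass[submission,creativecommons,noncommercial]{eptcs}
```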
The correct values of {\tt $\backslash$event}, {\tt $\backslash$volume}, {\tt $\backslash$anno}, {\tt $\backslash$firstpage} and {\tt $\backslash$eid} will be communicated to you upon acceptance and should then be filled in in lines 2--6 of the tex-file. Note that {\tt $\backslash$event} may contain an explicit newline command $\backslash\backslash$. Authors' (multiple) affiliations and emails use the commands {\tt $\backslash$institute} and {\tt $\backslash$email}. Both are optional. Authors should also supply {\tt $\backslash$titlerunning} and {\tt $\backslash$authorrunning}, using {\tt $\backslash$def}. As illustrated above, heuristic solutions may be called for to share affiliations.\hfill1\\ Authors may apply their own creativity here. Exactly 46 lines fit on a page. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\hfill6\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\hfill11\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details. Here starts a new paragraph. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\hfill16\\ The rest is like any normal {\LaTeX} article. 
We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\hfill21\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\hfill26\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\hfill31\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\hfill36\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. 
We will spare you the details.\hfill41\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\\ The rest is like any normal {\LaTeX} article. We will spare you the details.\hfill46\\ The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. The rest is like any normal {\LaTeX} article. We will spare you the details. \section{Bibliography} We request that you use \href{http://www.cse.unsw.edu.au/~rvg/EPTCS/eptcs.bst} {\tt $\backslash$bibliographystyle$\{$eptcs$\}$}. 
This style uses the bibtex fields {\tt ee} or {\tt url}, which are treated as synonyms, to make live links from your references to the response pages\footnote{Nowadays, papers that are published electronically tend to have a \emph{response page} that lists the title, authors and abstract of the paper, and links to the actual manifestations of the paper (e.g.\ as {\tt dvi}- or {\tt pdf}-file). Sometimes publishers charge money to access the paper itself, but the response page is always freely available.} of the cited papers. We recommend only using this feature when you have archival-quality links: ones that promise to be valid for centuries. You can find archival-quality URL's for most recently published papers in DBLP---they are in the bibtex-field {\tt ee}. In fact, it is often useful to check your references against DBLP records anyway, or just find them there in the first place. When using {\LaTeX} rather than {\tt pdflatex} to typeset your paper, by default no linebreaking within long URLs is allowed. This often leads to very ugly output, which moreover is different from the output generated when using {\tt pdflatex}. This problem is repaired when invoking \href{http://www.cse.unsw.edu.au/~rvg/EPTCS/breakurl.sty} {\tt $\backslash$usepackage$\{$breakurl$\}$}: it allows linebreaking within links and yields the same output as obtained by default with {\tt pdflatex}. When invoking {\tt pdflatex}, the package {\tt breakurl} is ignored. \bibliographystyle{eptcs} \section{Introduction} NASA's Mars Science Laboratory Mission (MSL), now scheduled to launch in 2011 \cite{msl}, relies on a number of different mechanisms used to command the spacecraft from Earth and to understand the behavior of the spacecraft and rover. 
The primary elements of this communication system are commands sent from the ground, visible events emitted by flight software (essentially formalized {\tt printf}s in the code) \cite{exploit}, snapshots of the spacecraft state, and data products --- files downlinked to earth (e.g., images of Mars or science instrument data). All of these elements may be thought of as spacecraft \emph{events}, with a canonical timestamp. Testing the flight software (beyond the unit testing done by module developers) usually relies on observing these indicators of spacecraft behavior. As expected, test engineers cannot ``eyeball'' the hundreds of thousands of events generated in even short tests of such a complex system. Previously, test automation has relied on ad hoc methods --- hand-coded Python \cite{python} scripts using a framework to query the ground communications system for various kinds of events, as a test proceeds. This was the state-of-the-art at the time members of our group, the Laboratory for Reliable Software, joined the MSL team, as developers and test infrastructure experts. Lengthy collaboration with test engineers convinced us that something better was required: the test script approach required large amounts of effort, much of it duplicated, and often failed due to changes in event timing and limitations of the query framework. Moreover, the scripts, combining test input activity, telemetry gathering, and test evaluation, proved difficult to read --- for other test engineers and for developers and systems engineers trying to extract and understand (and perhaps fix) a specification. Runtime verification using formal specifications offered a solution, and the MSL ground communications system suggested that we exploit an under-used approach to runtime verification: offline examination of event logs already produced by system execution. 
In the remainder of this paper we report on two aspects of this project --- in Section \ref{reqs} we discuss the general idea of event sequences as requirements, and our specification methodology; Section \ref{framework} gives a brief introduction to our application of these general ideas to MSL and our new specification language. \section{Event Sequences as Requirements} \label{reqs} Systems verification consists of proving that an artifact (hardware and/or software) satisfies a specification. In mathematical terms we have a model $M \in \mdl{ML}$ (for example the complete system) in some model language $\mdl{ML}$, and a specification $S \in \mdl{SL}$ in some specification language $\mdl{SL}$, and we want to show that the pair $(M,S)$ is member of the satisfaction relation $\models\; \subseteq\; \mdl{ML} \times \mdl{SL}$, also typically written: $M \models S$. The general problem of demonstrating correctness of a combined hardware/software system is very hard, as is well-known. Advanced techniques such as theorem proving or model checking tend not to scale for practical purposes. Extracting abstract models of the system, and proving these correct, has been shown to sometimes be useful, but also faces scalability problems when dealing with real systems or complex properties. The problem is inherently difficult because the models are complex -- because the behavior of systems is complex. The problems of full verification have long limited the adoption of formal specification. In \emph{runtime verification} a specification is used to analyze a single execution of a system, not the complete behavior. In this case a model is a single execution trace $\sigma$, and the verification problem consists of demonstrating that $\sigma \models S$, a much simpler problem with scalable solutions. The original model $M$ (the full system) can be considered as denoting the set of all possible runs $\sigma$, of which we now only verify a subset. 
This approach of course is less ambitious, but seems to be a practical and useful deployment of formal specification. Though these observations are well known, they are less often taken into practice. It is still rare to observe the application of formal specification --- even for runtime verification. There may be several reasons for this, one of which is the problem of program instrumentation. A large body of research considers the problem: how do we produce traces to verify? There is, however, a much simpler approach, namely to use logging information that is already generated by almost any computer system when it is tested. Our {\em first recommendation} is therefore that runtime verification should be used to formally check logged data. Our {\em second recommendation} is that logging and requirements engineering should be connected in the sense that requirements should be testable through runtime verification of logs. The common element is the {\em event}: requirements should be expressed as predicates on sequences of events, and logging should produce such sequences of events, thereby making the requirements testable. This means that logs preferably should consist of events with a formal template, connected to the original requirements. Note, however, that it is possible to extract formalized events from chaotic logging information produced by the usual ad hoc methods --- using, e.g., regular expressions. The obvious scientific and methodological question is: what kind of information should a log contain, in order to help verify requirements? We shall adopt the simple view that a log is a sequence of events, where an event is a mapping from names to values of various types (integers, strings, etc.): \[ \begin{array}{rcl} Log & = & Event^* \\ Event & = & Name \rightarrow Value \\ \end{array} \] \noindent This definition is quite general. An event can carry information about various aspects of the observed system. 
Events can be classified into various kinds by letting a designated name, i.e. {\em kind}, map to the kind. In the context of MSL, five forms of events are used, see Figure \ref{fig:observable-events}. We claim that these five event kinds are generally applicable to any system being monitored. The five forms of events are: (1) commands (input) issued to the monitored system, (2) products (output) delivered by the system, (3) periodic samplings of the state of the system, such as the value of continuous-valued sensors (like position coordinates), (4) changes to the state of the system, those changes that are observable, and (5) transitions performed by the system (also referred to as EVent Reports - EVRs), for example when {\tt printf} statements would normally be used to record an important event. The two forms of observation of the state (3, 4) could potentially be regarded as one kind of observation: that of the state of the system at any point in time. The state observations (3, 4) and the transitions (5) are internal events, while the commands (1) and products (2) are external events. \begin{figure}[htb] \begin{center} \includegraphics[width=0.70\textwidth]{graphics/events-as-requirements} \end{center} \caption{observable events of a system} \label{fig:observable-events} \end{figure} We have developed a specification language, and corresponding monitoring system, to be presented in the next section, which have been applied by testing engineers within the MSL project. The specification language consists of a mixture of automata and temporal logic with elements of regular expressions. The logic can specifically refer to the data in events. This mixture seems to be attractive for the engineers. 
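Concretely, the event model above (a log as a sequence of events, each event a mapping from names to values, with a designated kind) can be sketched in Python, the implementation language of our framework. The field names, kind labels and values below are purely illustrative, not the actual MSL telemetry schema:

```python
# A log is a list of events; an event maps field names to values.
# The "kind" field classifies each event (labels here are illustrative).
log = [
    {"kind": "COMMAND", "Stem": "PWR_ON", "Number": 7, "time": 100},
    {"kind": "EVR", "Dispatch": "PWR_ON", "Number": 7, "time": 101},
    {"kind": "EVR", "Success": "PWR_ON", "Number": 7, "time": 105},
    {"kind": "PRODUCT", "name": "image_001.dat", "time": 200},
]

def events_of_kind(log, kind):
    """Select the sub-sequence of events of the given kind,
    preserving their order in the log."""
    return [e for e in log if e["kind"] == kind]

evrs = events_of_kind(log, "EVR")
```

Predicates over such sequences are then ordinary functions over lists of dictionaries, which is the representation our framework operates on.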
In the longer term, we argue that such a specification language and monitoring system should be used in combination with a systematic logging discipline, such that \emph{requirements are formulated in terms of events}, and, minimally, \emph{these events} should be produced as part of logging. These observations may not appear to be ground-breaking, and to some extent have the flavor of {\em ``yes of course''} --- but as it turns out, considering current practices in software projects, they may be rather ground-breaking. A successful formal verification story always relies on finding the proper mixture of formality and common practice, as well as the right specification language for the task. We believe that the work described in this short paper has shown the potential for being such a success story. \section{Framework} \label{framework} MSL's ground software stores all events in a SQL database, which we interpret as a chronologically ordered sequence of events --- a log. Our Python framework, called {\sc LogScope}{}, allows us to check logs for conformance to a specification and to ``learn'' patterns from logs. The architecture of {\sc LogScope}{} divides functionality into a {\sc LogMaker}{} tool, specific to MSL, and a core {\sc LogScope}{} module for checking logs and learning specifications, which may be applied to any ordered event sequence. \subsection{LogMaker} {\sc LogMaker}{} communicates with MSL's SQL-based ground software to generate a list of events, where each {\em event} is a record mapping field names to their values. A special field indicates the type of the event: command, transition (EVR), state sampling, state change, or data product. Note how the MSL events map to the generalized idea of a monitored system shown in Figure \ref{fig:observable-events}. 
The log extractor sorts events according to spacecraft event times, since the order in which events are received by ground communications software does not correspond to the order in which events are generated on-board (due to varying communication priorities). Further analysis annotates the log with meta-events for ease of use in specification, and uses spacecraft telemetry to assign a spacecraft time to ground events. We hope to extend and exploit previous work on monitoring distributed systems with multiple clocks \cite{Sen06} to influence flight software's use of telemetry to ensure that effective event ordering is always possible. \subsection{Monitoring} \begin{figure} {\small \begin{verbatim} pattern CommandSuccess: COMMAND{Type : "FlightSoftwareCommand", Stem : x, Number : y} => { EVR{Dispatch : x, Number : y}, [ EVR{Success : x, Number : y}, not EVR{Success : x, Number : y} ], not EVR{DispatchFailure : x, Number : y}, not EVR{Failure : x, Number : y} } \end{verbatim} } \caption{A generic specification for flight software commands.} \label{command} \end{figure} The monitoring system of {\sc LogScope}{} takes two arguments: (1) a log generated by {\em logmaker}, and (2) a specification. Our specification language supplies an expressive rule-based language, which includes support for state machines, and a higher-level (but less expressive) {\em pattern language}, which is translated into the more expressive rule-based language before monitoring. Specifications in the pattern language are easy for test engineers and software developers to read and write. Figure \ref{command} illustrates a pattern. 
The {\tt CommandSuccess} pattern requires that following every command event (meaning a command is issued to the flight software), where the {\tt Type} field has the value {\tt "FlightSoftwareCommand"}, the {\tt Stem} field (the name of the command) has a value {\tt x} ({\tt x} will be \emph{bound} to that value), and the {\tt Number} field has a value {\tt y} (also a binding variable), we must see ({\tt =>}) --- in any order, as indicated by set brackets \verb+{...}+ --- (1) a dispatch of command {\tt x} with the number {\tt y}; (2) a success of {\tt x}/{\tt y}, and {\em after that} no more successes --- the square brackets \verb+[...]+ indicate an ordering of the event constraints. Furthermore, (3) we do not want to see any dispatch failures for the command; and finally (4) we do not want to see any failures for the command. Interesting features of the language include its mixture (and nesting) of ordered and unordered event sequences of event constraints, including negations, and its support for testing and capturing data values embedded in events. The pattern language is translated into our rule-based language derived from the {\sc Ruler}{} specification language \cite{barringer-ruler-07,barringer-ruler-journal-08,BarringerFM09}. A subset of this language defines state machines with parameterized events and states, where a transition may enter many target states --- essentially alternating automata with data. The language is also inspired by earlier state-machine oriented specification/monitoring languages, such as {\sc Rcat}{} \cite{smith-havelund-rcat-08} and {\sc Rmor}{} \cite{havelund-rmor-08}. In addition to exact requirements on field values, our language supports user-defined predicates (written in Python) that may take field values and bound variables as arguments, providing very high expressive power. 
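As an illustration of the semantics just described (a simplified sketch, not {\sc LogScope}'s actual rule-based implementation, which is richer), a check of the {\tt CommandSuccess} pattern over a log of event dictionaries could be written as:

```python
def check_command_success(log):
    """Simplified semantics of the CommandSuccess pattern: after each
    flight software command there must be a matching dispatch EVR and
    exactly one success EVR, and no dispatch-failure or failure EVRs.
    Returns the (Stem, Number) pairs of violating commands."""
    violations = []
    for i, event in enumerate(log):
        if (event.get("kind") == "COMMAND"
                and event.get("Type") == "FlightSoftwareCommand"):
            x, y = event["Stem"], event["Number"]
            # EVRs after the command that refer to the same command number
            rest = [e for e in log[i + 1:]
                    if e.get("kind") == "EVR" and e.get("Number") == y]
            dispatched = any(e.get("Dispatch") == x for e in rest)
            successes = sum(1 for e in rest if e.get("Success") == x)
            failed = any(e.get("DispatchFailure") == x
                         or e.get("Failure") == x for e in rest)
            if not dispatched or successes != 1 or failed:
                violations.append((x, y))
    return violations
```

A conforming log yields an empty violation list; the real monitor additionally reports which event constraint failed and where in the log.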
Specifications are visualized with Graphviz \cite{graphviz}, and extensive error trace reporting with references to the log files ensures easy interpretation of detected specification violations. \subsection{Learning} {\sc LogScope}{} was well-received by test engineers, and was integrated into MSL flight software testing for two modules shortly after its release. One important result of early use was to alert us to the burden of writing patterns more specific than the kind of generic rule shown above. In order to ease this burden we introduced a facility for \emph{learning} specifications from runs. Consider a test engineer or developer who runs a flight software test one or more times. If these runs have been ``{\em good}'' runs he/she can ``endorse'' (perhaps after making manual modifications) the specification, and it can then be used to monitor subsequent executions. Learning requires a notion of event equality, and users can define which fields should be compared for testing event equality (e.g., exact timing is usually expected to change with new releases and perhaps even new test executions). We have implemented and applied a {\em concrete learner} which learns the set of all execution sequences seen so far (essentially a ``diff'' tool for logs). We also expect to learn mappings from commands to events expected in all execution contexts --- a pattern based approach, like that of Perracotta \cite{Perracotta}. More ambitiously, we hope to incorporate classic automata-learning results \cite{angluin-87} in order to generalize specifications. \section{Conclusions and Future Work} The MSL ground control and observation software demonstrates an important concept: many critical systems already implement very powerful logging systems that can be used as a basis for automated evaluation of log files against requirements. Such log files can be analyzed with scripts (programs) written using a scripting (programming) language. 
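For instance, a minimal script-level analysis in the spirit of the concrete learner, a log ``diff'' modulo a user-chosen set of fields for event equality, might look as follows (field names are again illustrative, and this sketch omits ordering concerns):

```python
def project(event, fields):
    """Restrict an event to the fields chosen for equality testing
    (e.g. omitting timestamps, which vary between runs)."""
    return tuple((f, event.get(f)) for f in fields)

def log_diff(endorsed, observed, fields):
    """Report events of the endorsed run that do not occur in the
    observed run, comparing events only on the chosen fields."""
    seen = {project(e, fields) for e in observed}
    return [e for e in endorsed if project(e, fields) not in seen]
```

With {\tt fields} excluding the time field, two runs that differ only in timing produce an empty diff, matching the intuition that exact timing should not invalidate an endorsed specification.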
However, there seems to be advantages to using a formal specification language, as demonstrated with this work. A systematic study could be needed, investigating to what extent a domain specific language really is required to achieve the added benefit, or whether a well designed Python API (or API in any other programming language) would yield the same benefits. \subsubsection*{Acknowledgements} Part of the research described in this publication was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. \vspace{0.5cm} \noindent Thanks are due to many members of the Mars Science Laboratory Flight Software team, including Chris Delp, Gerard Holzmann, Rajeev Joshi, Cin-Young Lee, Alex Moncada, Cindy Oda, Glenn Reeves, Margaret Smith, Lisa Tatge, Hui Ying Wen, Jesse Wright, and Hyejung Yun. \bibliographystyle{eptcs} \section{Introduction} NASA's Mars Science Laboratory Mission (MSL), now scheduled to launch in 2011 \cite{msl}, relies on a number of different mechanisms used to command the spacecraft from Earth and to understand the behavior of the spacecraft and rover. The primary elements of this communication system are commands sent from the ground, visible events emitted by flight software (essentially formalized {\tt printf}s in the code) \cite{exploit}, snapshots of the spacecraft state, and data products --- files downlinked to earth (e.g., images of Mars or science instrument data). All of these elements may be thought of as spacecraft \emph{events}, with a canonical timestamp. Testing the flight software (beyond the unit testing done by module developers) usually relies on observing these indicators of spacecraft behavior. As expected, test engineers cannot ``eyeball'' the hundreds of thousands of events generated in even short tests of such a complex system. 
Previously, test automation has relied on ad hoc methods --- hand-coded Python \cite{python} scripts using a framework to query the ground communications system for various kinds of events, as a test proceeds. This was the state of the art at the time members of our group, the Laboratory for Reliable Software, joined the MSL team as developers and test infrastructure experts. Lengthy collaboration with test engineers convinced us that something better was required: the test script approach required large amounts of effort, much of it duplicated, and often failed due to changes in event timing and limitations of the query framework. Moreover, the scripts, combining test input activity, telemetry gathering, and test evaluation, proved difficult to read --- for other test engineers and for developers and systems engineers trying to extract and understand (and perhaps fix) a specification. Runtime verification using formal specifications offered a solution, and the MSL ground communications system suggested that we exploit an under-used approach to runtime verification: offline examination of event logs already produced by system execution. In the remainder of this paper we report on two aspects of this project --- in Section \ref{reqs} we discuss the general idea of event sequences as requirements, and our specification methodology; Section \ref{framework} gives a brief introduction to our application of these general ideas to MSL and our new specification language. \section{Event Sequences as Requirements} \label{reqs} Systems verification consists of proving that an artifact (hardware and/or software) satisfies a specification. 
In mathematical terms we have a model $M \in \mdl{ML}$ (for example the complete system) in some model language $\mdl{ML}$, and a specification $S \in \mdl{SL}$ in some specification language $\mdl{SL}$, and we want to show that the pair $(M,S)$ is a member of the satisfaction relation $\models\; \subseteq\; \mdl{ML} \times \mdl{SL}$, also typically written: $M \models S$. The general problem of demonstrating correctness of a combined hardware/software system is very hard, as is well known. Advanced techniques such as theorem proving or model checking tend not to scale for practical purposes. Extracting abstract models of the system, and proving these correct, has been shown to sometimes be useful, but also faces scalability problems when dealing with real systems or complex properties. The problem is inherently difficult because the models are complex --- because the behavior of systems is complex. The problems of full verification have long limited the adoption of formal specification. In \emph{runtime verification} a specification is used to analyze a single execution of a system, not the complete behavior. In this case a model is a single execution trace $\sigma$, and the verification problem consists of demonstrating that $\sigma \models S$, a much simpler problem with scalable solutions. The original model $M$ (the full system) can be considered as denoting the set of all possible runs $\sigma$, of which we now only verify a subset. This approach of course is less ambitious, but seems to be a practical and useful deployment of formal specification. Though these observations are well known, they are less often put into practice. It is still rare to observe the application of formal specification --- even for runtime verification. There may be several reasons for this, one of which is the problem of program instrumentation. A large body of research considers the problem: how do we produce traces to verify? 
There is, however, a much simpler approach, namely to use logging information that is already generated by almost any computer system when it is tested. Our {\em first recommendation} is therefore that runtime verification should be used to formally check logged data. Our {\em second recommendation} is that logging and requirements engineering should be connected in the sense that requirements should be testable through runtime verification of logs. The common element is the {\em event}: requirements should be expressed as predicates on sequences of events, and logging should produce such sequences of events, thereby making the requirements testable. This means that logs preferably should consist of events with a formal template, connected to the original requirements. Note, however, that it is possible to extract formalized events from chaotic logging information produced by the usual ad hoc methods --- using, e.g., regular expressions. The obvious scientific and methodological question is: what kind of information should a log contain, in order to help verify requirements? We shall adopt the simple view that a log is a sequence of events, where an event is a mapping from names to values of various types (integers, strings, etc.): \[ \begin{array}{rcl} Log & = & Event^* \\ Event & = & Name \rightarrow Value \\ \end{array} \] \noindent This definition is quite general. An event can carry information about various aspects of the observed system. Events can be classified into various kinds by letting a designated name, i.e. {\em kind}, map to the event's kind. In the context of MSL, five forms of events are used; see Figure \ref{fig:observable-events}. We claim that these five event kinds are generally applicable to any system being monitored. 
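The log model above can be sketched directly in executable form (field names here are purely illustrative, not the actual MSL schema): a log is a sequence of events, an event is a mapping from names to values, and a designated field classifies each event into a kind.

```python
# Minimal sketch of the log model: a log is a sequence of events, and an
# event is a mapping from names to values.  A designated "kind" field
# classifies each event; all field names here are illustrative only.

def events_of_kind(log, kind):
    """Return the sub-sequence of events with the given kind."""
    return [event for event in log if event.get("kind") == kind]

log = [
    {"kind": "COMMAND", "Stem": "PICTURE", "Number": 231},
    {"kind": "EVR", "Dispatch": "PICTURE", "Number": 231},
    {"kind": "EVR", "Success": "PICTURE", "Number": 231},
]
```

Requirements can then be written as predicates over such sequences, which is exactly the view the rest of the paper builds on.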
The five forms of events are: (1) commands (input) issued to the monitored system, (2) products (output) delivered by the system, (3) periodic samplings of the state of the system, such as the value of continuous-valued sensors (like position coordinates), (4) observable changes to the state of the system, and (5) transitions performed by the system (also referred to as EVent Reports, or EVRs), for example when {\tt printf} statements would normally be used to record an important event. The two forms of observation of the state (3, 4) could potentially be regarded as one kind of observation: that of the state of the system at any point in time. The state observations (3, 4) and the transitions (5) are internal events, while the commands (1) and products (2) are external events. \begin{figure}[htb] \begin{center} \includegraphics[width=0.70\textwidth]{graphics/events-as-requirements} \end{center} \caption{Observable events of a system} \label{fig:observable-events} \end{figure} We have developed a specification language and a corresponding monitoring system, presented in the next section, which have been applied by test engineers within the MSL project. The specification language consists of a mixture of automata and temporal logic with elements of regular expressions. The logic can specifically refer to the data in events. This mixture seems to be attractive for the engineers. In the longer term, we argue that such a specification language and monitoring system should be used in combination with a systematic logging discipline, such that \emph{requirements are formulated in terms of events}, and, minimally, \emph{these events} should be produced as part of logging. These observations may not appear to be ground-breaking, and to some extent have the flavor of {\em ``yes of course''} --- but as it turns out, considering current practices in software projects, they may be rather ground-breaking. 
A successful formal verification story always relies on finding the proper mixture of formality and common practice, as well as the right specification language for the task. We believe that the work described in this short paper has shown the potential for being such a success story. \section{Framework} \label{framework} MSL's ground software stores all events in a SQL database, which we interpret as a chronologically ordered sequence of events --- a log. Our Python framework, called {\sc LogScope}{}, allows us to check logs for conformance to a specification and to ``learn'' patterns from logs. The architecture of {\sc LogScope}{} divides functionality into a {\sc LogMaker}{} tool, specific to MSL, and a core {\sc LogScope}{} module for checking logs and learning specifications, which may be applied to any ordered event sequence. \subsection{LogMaker} {\sc LogMaker}{} communicates with MSL's SQL-based ground software to generate a list of events, where each {\em event} is a record mapping field names to their values. A special field indicates the type of the event: command, transition (EVR), state sampling, state change, or data product. Note how the MSL events map to the generalized idea of a monitored system shown in Figure \ref{fig:observable-events}. The log extractor sorts events according to spacecraft event times, since the order in which events are received by ground communications software does not correspond to the order in which events are generated on-board (due to varying communication priorities). Further analysis annotates the log with meta-events for ease of use in specification, and uses spacecraft telemetry to assign a spacecraft time to ground events. We hope to extend and exploit previous work on monitoring distributed systems with multiple clocks \cite{Sen06} to influence flight software's use of telemetry to ensure that effective event ordering is always possible. 
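The re-ordering step performed by the log extractor can be sketched as follows. The spacecraft-time field name used below is an assumption for illustration, not the actual MSL schema: events arrive in ground-receipt order and are re-sorted by on-board event time before monitoring.

```python
# Sketch of the log-extraction step: events are received by ground
# software in an order that does not match on-board generation order,
# so we re-sort them by spacecraft event time before monitoring.
# The "sclk" time field name is an assumption for illustration.

def make_log(raw_events, time_field="sclk"):
    """Return events sorted chronologically by spacecraft time."""
    return sorted(raw_events, key=lambda event: event[time_field])

received = [
    {"kind": "EVR", "sclk": 1005},
    {"kind": "COMMAND", "sclk": 1000},
    {"kind": "PRODUCT", "sclk": 1003},
]
log = make_log(received)
```

Since Python's sort is stable, events sharing the same spacecraft time keep their receipt order, which is a reasonable tie-breaking choice here.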
\subsection{Monitoring} \begin{figure} {\small \begin{verbatim}
pattern CommandSuccess:
  COMMAND{Type : "FlightSoftwareCommand", Stem : x, Number : y} =>
  {
    EVR{Dispatch : x, Number : y},
    [
      EVR{Success : x, Number : y},
      not EVR{Success : x, Number : y}
    ],
    not EVR{DispatchFailure : x, Number : y},
    not EVR{Failure : x, Number : y}
  }
\end{verbatim} } \caption{A generic specification for flight software commands.} \label{command} \end{figure} The monitoring system of {\sc LogScope}{} takes two arguments: (1) a log generated by {\sc LogMaker}{}, and (2) a specification. Our specification framework supplies an expressive rule-based language, which includes support for state machines, and a higher-level (but less expressive) {\em pattern language}, which is translated into the more expressive rule-based language before monitoring. Specifications in the pattern language are easy for test engineers and software developers to read and write. Figure \ref{command} illustrates a pattern. The {\tt CommandSuccess} pattern requires that following every command event (meaning a command is issued to the flight software), where the {\tt Type} field has the value {\tt "FlightSoftwareCommand"}, the {\tt Stem} field (the name of the command) has a value {\tt x} ({\tt x} will be \emph{bound} to that value), and the {\tt Number} field has a value {\tt y} (also a binding variable), we must see ({\tt =>}) --- in any order, as indicated by set brackets \verb+{...}+ --- (1) a dispatch of command {\tt x} with the number {\tt y}; (2) a success of {\tt x}/{\tt y}, and {\em after that} no more successes --- the square brackets \verb+[...]+ indicate an ordering of the event constraints. Furthermore, (3) we do not want to see any dispatch failures for the command; and finally (4) we do not want to see any failures for the command. 
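An executable reading of what the pattern requires might look like the following hand-coded sketch (this is not the {\sc LogScope}{} translation, merely an illustration of the pattern's obligations on the log suffix following one command event; field names follow the figure).

```python
# Hand-coded sketch of the CommandSuccess obligations for one command:
# given the log suffix after a COMMAND with stem x and number y, require
# (1) a dispatch, (2) exactly one success (no success after the first),
# (3) no dispatch failures, and (4) no failures.  Illustration only.

def command_success_holds(suffix, stem, number):
    matching = [e for e in suffix if e.get("Number") == number]
    dispatched = any(e.get("Dispatch") == stem for e in matching)
    successes = sum(1 for e in matching if e.get("Success") == stem)
    dispatch_failed = any(e.get("DispatchFailure") == stem for e in matching)
    failed = any(e.get("Failure") == stem for e in matching)
    return dispatched and successes == 1 and not dispatch_failed and not failed

suffix = [
    {"Dispatch": "PICTURE", "Number": 231},
    {"Success": "PICTURE", "Number": 231},
]
```

Hand-coding even this one pattern makes the appeal of a declarative pattern language clear: the quantification over all commands, the binding of {\tt x} and {\tt y}, and the ordering constraints all have to be reimplemented per property.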
Interesting features of the language include its mixture (and nesting) of ordered and unordered event sequences of event constraints, including negations, and its support for testing and capturing data values embedded in events. The pattern language is translated into our rule-based language derived from the {\sc Ruler}{} specification language \cite{barringer-ruler-07,barringer-ruler-journal-08,BarringerFM09}. A subset of this language defines state machines with parameterized events and states, where a transition may enter many target states --- essentially alternating automata with data. The language is also inspired by earlier state-machine oriented specification/monitoring languages, such as {\sc Rcat}{} \cite{smith-havelund-rcat-08} and {\sc Rmor}{} \cite{havelund-rmor-08}. In addition to exact requirements on field values, our language supports user-defined predicates (written in Python) that may take field values and bound variables as arguments, providing very high expressive power. Specifications are visualized with Graphviz \cite{graphviz}, and extensive error trace reporting with references to the log files ensures easy interpretation of detected specification violations. \subsection{Learning} {\sc LogScope}{} was well-received by test engineers, and was integrated into MSL flight software testing for two modules shortly after its release. One important result of early use was to alert us to the burden of writing patterns more specific than the kind of generic rule shown above. In order to ease this burden we introduced a facility for \emph{learning} specifications from runs. Consider a test engineer or developer who runs a flight software test one or more times. If these runs have been ``{\em good}'' runs he/she can ``endorse'' (perhaps after making manual modifications) the specification, and it can then be used to monitor subsequent executions. 
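The endorsement workflow just described can be sketched as follows (the class and method names are ours, not the {\sc LogScope}{} API): event equality compares only user-selected fields, so volatile fields such as exact timing can be ignored, and the learner simply remembers the projected event sequences of endorsed runs.

```python
# Sketch of learning specifications from runs: event equality compares
# only user-selected fields, and the "concrete learner" keeps the set of
# projected event sequences seen so far (essentially a diff tool for
# logs).  All names here are ours, for illustration.

def project(log, fields):
    """Reduce each event to the fields relevant for event equality."""
    return tuple(tuple((f, e.get(f)) for f in fields) for e in log)

class ConcreteLearner:
    def __init__(self, fields):
        self.fields = fields
        self.seen = set()

    def endorse(self, log):
        """Record a 'good' run as part of the learned specification."""
        self.seen.add(project(log, self.fields))

    def conforms(self, log):
        """Check a new run against the endorsed runs."""
        return project(log, self.fields) in self.seen
```

Because timing fields are excluded from the projection, a re-run with different timestamps but the same event structure still conforms.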
Learning requires a notion of event equality, and users can define which fields should be compared for testing event equality (e.g., exact timing is usually expected to change with new releases and perhaps even new test executions). We have implemented and applied a {\em concrete learner} which learns the set of all execution sequences seen so far (essentially a ``diff'' tool for logs). We also expect to learn mappings from commands to events expected in all execution contexts --- a pattern-based approach, like that of Perracotta \cite{Perracotta}. More ambitiously, we hope to incorporate classic automata-learning results \cite{angluin-87} in order to generalize specifications. \section{Conclusions and Future Work} The MSL ground control and observation software demonstrates an important concept: many critical systems already implement very powerful logging systems that can be used as a basis for automated evaluation of log files against requirements. Such log files can be analyzed with scripts (programs) written using a scripting (programming) language. However, there seem to be advantages to using a formal specification language, as demonstrated by this work. A systematic study would be needed to investigate to what extent a domain-specific language really is required to achieve the added benefit, or whether a well-designed Python API (or an API in any other programming language) would yield the same benefits. \subsubsection*{Acknowledgements} Part of the research described in this publication was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. \vspace{0.5cm} \noindent Thanks are due to many members of the Mars Science Laboratory Flight Software team, including Chris Delp, Gerard Holzmann, Rajeev Joshi, Cin-Young Lee, Alex Moncada, Cindy Oda, Glenn Reeves, Margaret Smith, Lisa Tatge, Hui Ying Wen, Jesse Wright, and Hyejung Yun. \bibliographystyle{eptcs}
\section{Introduction} The problem of simulation parameter inference received a considerable amount of attention in the broad scientific community: \cite{cranmer2020frontier} provides a recent survey. The `real-to-sim' term is commonly used to refer to parameter inference for large-scale models and general-purpose simulators, recent examples include~\cite{prakash2021self, chang2020sim2real2sim}. More broadly, `real-to-sim' can refer to any problem formulations and methods that help identify and bridge the gap between reality and simulation, e.g.~\cite{zhang2019vr, liu2021real}. In robotics, this problem has been addressed extensively for the case of rigid objects~\cite{chebotar2019closing, ramos2019bayessim, barcelos2020disco, mehta2020active, muratore2021neural}. Many of these methods rely on using canonical representations of the state and assume reliable state estimation for obtaining low-dimensional representation of objects in the scene. Deformable objects present unique challenges for defining or learning low-dimensional representations. Focusing on the relevant parts of their state requires interpreting high-dimensional data, such as images or point clouds. However, computer vision methods that succeed in learning robust low-dimensional representations of rigid objects can fail to learn on the more complex data patterns that the highly deformable objects present. Even when an acceptable representation is available, the trajectories obtained when manipulating deformable objects still contain complex patterns reflecting the dynamics of the deformables. Methods that succeed using trajectories containing only rigid objects can fail to interpret the more challenging patterns in the data with deformables, even when the state representation is low-dimensional (our experiments show such examples). Given these challenges, we advocate the Bayesian treatment and cast the `real-to-sim' problem as simulation-based inference. 
Bayesian methods can leverage simulation as a source of prior knowledge in a principled way, and can conduct inference from real observations in a data-efficient manner. In this work, we consider likelihood-free inference techniques introduced by~\cite{ramos2019bayessim} that can infer flexible multimodal posteriors, and hence are well-equipped to cope with the challenge of interpreting complex and noisy data patterns. We investigate how these inference methods behave when the observed deformable object state is noisy and approximate. Keypoint-based representations are promising for deformable objects because of their representational flexibility, ability to quickly obtain a low-dimensional state of an object, and the potential to train on a small set of observations. For example, we are able to train an unsupervised approach from~\cite{kulkarni2019unsupervised} with only 1 minute of hardware data and 100 simulated trajectories. However, keypoints are not guaranteed to be consistent across frames and tend to appear on different parts of an object. Supervised keypoint extraction techniques can learn to track a certain set of locations reliably, as we show in our experiments with the model from~\cite{sundaresan2020untangling}. However, the keypoints can still permute between these locations even in adjacent frames. Under these conditions, we show that existing inference methods have significant difficulties in real-to-sim experiments, where the data is obtained from real images of a robot manipulating deformable objects. Our contribution is a formulation of the problem of real-to-sim for deformable object manipulation as simulation-based inference over a distributional representation of the state. Specifically, we interpret keypoints on objects extracted from images as samples from an underlying state distribution. 
We then embed this distribution in a {\em reproducing kernel Hilbert space\/} (RKHS) via the kernel mean embedding operator~\cite{song2013kernel}. This yields a representation that is permutation-invariant, addressing the case when the keypoints are permuted. Furthermore, this distributional interpretation allows us to avoid the need for keypoints to consistently track exactly the same locations on the object. More generally, this also opens possibilities for using any probabilistic output of vision-based modules, including nonparametric particle-based methods, since RKHS mean embedding can be easily applied in these cases. We call the resulting method \textit{BayesSim-RKHS}. To show that our approach can successfully handle parameter inference for deformable objects we include experiments on 3 different scenarios on hardware with real deformables: (i) wiping a table surface with cloth; (ii) winding a rope onto a spool; (iii) spreading a highly deformable piece of cloth by flinging it in the air, then dragging it over the table surface. In all these scenarios, \textit{BayesSim-RKHS} significantly outperforms existing BayesSim variants. We demonstrate that this advantage is not tied to a specific keypoint extraction method. We show results that use a recent supervised keypoint method developed specifically for deformables~\cite{sundaresan2020untangling}, and a general unsupervised method that was developed with rigid objects as the primary use case~\cite{kulkarni2019unsupervised}. \section{Background} \label{sec:background} \subsection{Simulating, Representing and Manipulating Deformables} There are promising results for learning to perceive and manipulate deformable objects, as surveyed in~\cite{herguedas2019survey, arriola2020modeling, yin2021modeling}. However, works in this field usually construct a small self-contained scenario, and do not offer a way to align general-purpose simulators with reality. 
This is due to the significant challenges in simulation, perception and control of highly deformable objects even when considering only one task or scenario. Typical examples include: manipulating an elastic loop~\cite{yoshida2015simulation}; hanging a piece of cloth~\cite{matas2018sim}; assisting in putting on a hat~\cite{klee2015personalized}, a shirt~\cite{clegg2018learning}, a gown~\cite{kapusta2019personalized}, and a sleeveless jacket~\cite{shen2021provably}. Setting up these scenarios on hardware requires significant effort. Hence, simulation with support for various types of deformable objects is a much-needed aid that can help to speed up experimentation with novel types of tasks, perception and manipulation algorithms. Interest in this direction has been indicated by several recent workshops for modeling and manipulation of deformable objects, held at the leading robotics conferences~\cite{rss2021workshop, icra2021workshop, iros2020workshop}. A major obstacle is that the more advanced simulators, which support a broader range of objects and types of deformation, are difficult to tune manually. Hence, the community indicated the need for automated ways to find parameter estimates that make simulation behave stably, and make the behavior of the deformables resemble that of the real-world objects. A number of existing methods for simulation parameter inference consider the case of rigid objects~\cite{chebotar2019closing, ramos2019bayessim, barcelos2020disco, mehta2020active, muratore2021neural, hwasser2020variational} and assume access to low-dimensional state, such as object poses. One could argue that methods developed for rigid objects can be applicable to the case of deformable objects. For example, the recently popular keypoint extraction methods can generalize to the case of objects that are somewhat deformable, but still mostly maintain their shape (e.g. plush toys~\cite{florence2018dense}, flexible shoes~\cite{manuelli2019kpam}). 
However, most of these algorithms would not be applicable to the case of highly deformable fabrics, ropes and cables, where the object does not ever return to a canonical shape during manipulation. Such cases could benefit from new techniques that emerge in the machine learning community, e.g.~\cite{kulkarni2019unsupervised, li2020causal}. However, in this work we show that existing parameter inference methods need to be extended to work effectively with such learned state representations of deformables. \subsection{Probabilistic Parameter Inference} A recent survey~\cite{arriola2020modeling} includes an overview of methods for parameter estimation for simulators and models of deformable objects. These include direct error minimization and parameter estimation techniques that assume access to a direct way to compare the desirable and achieved deformation. To be applicable to a real-to-sim problem, this would require careful measurement of the deformation of the real deformable object, which can be intractable in many real-world scenarios. The alternatives that relax this assumption include exhaustive/random search and genetic algorithms, which require a large amount of compute resources. The techniques based on neural networks could improve data efficiency, but most lack the ability to capture uncertainty and only aid in producing a point estimate. This is problematic, because many currently available simulators for deformable objects are unable to produce behavior that exactly matches reality. Hence the need to combine such simulators with domain randomization techniques~\cite{matas2018sim, seita2019deep, andrychowicz2020learning}. This can ensure that control policies learned with the aid of simulation can handle a range of possible behaviors, instead of being narrowly focused on a mean estimate of the behavior. 
BayesSim~\cite{ramos2019bayessim} is a likelihood-free method that has been applied to a variety of robotics problems~\cite{possas2020online, barcelos2020disco, matl2020stressd, matl2020inferring, mehta2020calibrating}. It offers a principled way of obtaining posteriors over simulation parameters, and does not place restrictions on simulator type or properties, i.e., it can work with non-differentiable black-box models. BayesSim can infer multimodal posteriors with a {\em mixture density neural network\/} (MDNN)~\cite{bishop1994mixture, bishop2006pattern}, obtaining full covariance Gaussian components. Since Gaussian mixtures are universal approximators for densities~\cite{kostantinos2000gaussian, goodfellow2016deep}, given enough mixture components BayesSim posteriors can ensure sufficient representational capacity. Combining techniques based on neural networks and Bayesian inference allows BayesSim to be scalable and flexible in terms of its modeling capabilities. BayesSim performs probabilistic inference by considering a prior $p(\pmb{\theta})$ over a vector of $D$ simulation parameters $\pmb{\theta} = [\theta_1, ..., \theta_D]$ and a derivative-free simulator used for obtaining trajectories of a dynamical system. Each trajectory ${\pmb{x}}^s$ consists of simulated observations for states $\pmb{S}=\{\pmb{s}\}_{t=1}^T$ of a dynamical system and the actions $\pmb{A}=\{\pmb{a}\}_{t=1}^T$ that were applied to the system. BayesSim then collects a few observations from the real world, e.g., a single trajectory ${\pmb{x}}^r$, and uses it to compute the posterior $p\Big(\pmb{\theta} \Big| \big\{{\pmb{x}}_{(i)}^s\big\}_{i=1}^N, {\pmb{x}}^r\Big)$. Instead of assuming a particular form for the likelihood and estimating $p(\pmb{x} | \pmb{\theta})$, BayesSim approximates the posterior by learning a conditional density $q_{\phi}(\pmb{\theta} | \pmb{x})$, represented by an MDNN with weights $\phi$. 
The posterior is then: \begin{align} \label{eq:bsiminfer} \hat{p}(\pmb{\theta} | \pmb{x} \!=\! {\pmb{x}}^r) \propto \frac{p(\pmb{\theta})}{\tilde{p}(\pmb{\theta})} \, q_{\phi}(\pmb{\theta} | \pmb{x} \!=\! {\pmb{x}}^r), \end{align} with an option for a proposal prior $\tilde{p}(\pmb{\theta})$ used to collect simulated observations to train the conditional density. In previous works, BayesSim first summarized simulated and real trajectories by extracting the sufficient statistics: \begin{align} \varphi(\pmb{S},\pmb{A}) = \big( \{\langle \pmb{\tau}_i, \pmb{a}_j \rangle\}_{i,j=1}^{D_s,D_a}, \mathbb{E}[\pmb{\tau}], \mathbb{V}ar[\pmb{\tau}] \big), \end{align} where $\pmb{\tau} = \{\pmb{s}_t - {\pmb{s}}_{t-1}\}_{t=1}^T$ contains the trajectory state differences, $D_s,D_a$ are the state and action dimensionalities, and $\langle \cdot , \cdot \rangle$ denotes a dot product between the state and action feature vectors. Works that applied BayesSim to low-dimensional states (e.g. robot joint angles, object poses and velocities) used these states directly. \cite{matl2020inferring} applied BayesSim to scenarios with granular media and developed domain-specific summary statistics of depth images of granular formations, such as dispersion and statistical dependence of the grain locations (mean, standard deviation, interquartile range, kurtosis, and distance correlation). In this work, we present a novel methodology to perform inference using state trajectories directly from images, without the need to first extract sufficient statistics from trajectories. 
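These summary statistics can be sketched as follows, under the assumption that states and actions are stored as arrays of shape $(T, D_s)$ and $(T, D_a)$; the alignment of actions with state differences is one reasonable convention, not necessarily the original implementation's.

```python
import numpy as np

# Sketch of the trajectory summary statistics above: dot products between
# each state-difference dimension and each action dimension, plus the
# per-dimension mean and variance of the state differences.

def summary_statistics(states, actions):
    """states: (T, D_s); actions: (T, D_a).  Returns a flat feature vector."""
    tau = np.diff(states, axis=0)     # state differences, shape (T-1, D_s)
    acts = actions[1:]                # align actions with the differences
    cross = tau.T @ acts              # <tau_i, a_j> for all i, j
    return np.concatenate([cross.ravel(), tau.mean(axis=0), tau.var(axis=0)])
```

The resulting feature vector has $D_s D_a + 2 D_s$ entries, independent of the trajectory length $T$, which is what makes it usable as a fixed-size input to the conditional density network.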
For example, for the wiping scene, this could correspond to executing a simple horizontal motion, where the robot drags a cloth on a table surface. We record robot poses and RGB images of the scene. Our goal is to infer a simulation parameter posterior, such that samples from it yield simulations that resemble reality in terms of deformable object motion. We obtain simulated trajectories and RGB images from simulations of the scene with a deformable object. Simulation parameters are sampled from a uniform distribution on the first iteration. We use these initial real and simulated images to train keypoint extraction models. We extract the keypoints from simulated images and include them in the state observations: \begin{align} \pmb{S} = \{\pmb{s}\}_{t=1}^T: \pmb{s}_t = \big[ gripper\!\_pose, \!\ \pmb{k}_1,...,\pmb{k}_K \big], \end{align} where $gripper\!\_pose$ is the Cartesian pose of the gripper, and $\pmb{k}_1, ... , \pmb{k}_K$ are the extracted keypoints (2D in pixel coordinates or 3D in world frame if the camera-to-robot transform is given). Following BayesSim, we obtain simulation training trajectories $\pmb{x}^{s}\!=\!\{\pmb{s}_t,\pmb{a}_t\}_{t=1}^T$, where $\pmb{a}_t$ are the robot actions, e.g. target Cartesian gripper poses, joint angles or velocities (depending on the desired control mode). We use a set of simulated trajectories to learn a conditional density $q_{\phi}(\pmb{\theta} | \pmb{x})$ represented by an MDNN with weights $\phi$. We then obtain the posterior $\hat{p}_1(\pmb{\theta}|\pmb{x}=\pmb{x}^r)$ using the real trajectory $\pmb{x}^r$ and Equation~\ref{eq:bsiminfer}. The above constitutes one iteration. We obtain a new set of simulation trajectories by sampling simulation parameters from the approximate posterior $\hat{p}_1$, then repeat the above steps to obtain $\hat{p}_2$ as the posterior for the second iteration, and so on. 
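The iterative procedure above can be summarized structurally as follows; the simulator, density fitting, and posterior object are abstracted behind callables, and all names here are ours rather than an actual BayesSim API.

```python
# Structural sketch of the iterative loop: sample parameters (from the
# prior on the first iteration, then from the current posterior),
# simulate trajectories, fit the conditional density, and repeat.
# The callables stand in for the simulator and MDNN training.

def bayessim_iterations(sample_prior, simulate, fit_posterior, real_traj,
                        n_iters=3, n_sims=100):
    sample = sample_prior
    posterior = None
    for _ in range(n_iters):
        thetas = [sample() for _ in range(n_sims)]
        trajs = [simulate(theta) for theta in thetas]
        posterior = fit_posterior(thetas, trajs, real_traj)
        sample = posterior.sample  # next iteration draws from the posterior
    return posterior
```

Each iteration refocuses the simulation budget on parameter regions the current posterior considers plausible, which is what makes the scheme sample-efficient compared to a single round of uniform sampling.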
To guarantee that our state representation has favorable properties, both in terms of theory and practice, we propose to transform the part of the state that contains keypoints $\pmb{k}_1,...,\pmb{k}_K$. Our insight is that the keypoints can be viewed as noisy samples from a probability distribution that has support on the surface of the deformable object in the scene. The full state of the deformable is unobservable due to occlusions by other objects as well as self-occlusions. Moreover, keypoint extraction methods do not guarantee ordering, and do not guarantee placement of the keypoints on consistent parts of the object. Nonetheless, our insight of treating them as samples from the distribution that captures the state of the deformable allows us to overcome these shortcomings. Furthermore, this distributional treatment yields a method that is robust to noise by construction, and can benefit from principled theoretical tools for analysis and interpretation. \subsection{An Intuitive Explanation of Kernel Mean Embeddings} Kernel mean embeddings make it possible to map distributions into infinite-dimensional feature spaces~\cite{song2013kernel}. Conceptually, they are able to represent probability distributions without loss of information in a feature space that is more amenable to mathematical operations. A desirable property that a kernel mean embedding satisfies is that it can recover expectations of all the functions in a given reproducing kernel Hilbert space $\mathcal{F}$. This property of the mean embedding map $\mu_X \in \mathcal{F}$ is technically stated as: \begin{align} \mathbb{E}_X[f(X)] = \langle \mu_X , f \rangle, \ \forall f \in \mathcal{F}. \end{align} To get an intuition for why this is useful, consider the following example: if an RKHS $\mathcal{F}$ is sufficiently large, e.g. includes all monomials $\{X, X^2, X^3, ...\}$, then we can obtain all the moments of the distribution of $X$ by simply taking dot products with $f \in \mathcal{F}$.
In the above, $X$ is a random variable with domain $\Omega$ and distribution $P(X)$, $\mathcal{F}$ is an RKHS on $\Omega$ with a kernel $k (x, x')$. This means that $\mathcal{F}$ is a Hilbert space of functions \mbox{$f: \Omega \rightarrow \mathbb{R}$} with the inner product $\langle \cdot, \cdot \rangle_{\mathcal{F}}$. Kernel slices $k (x, \cdot)$ are functions in $\mathcal{F}$ that satisfy the \textit{reproducing property}: $\big\langle \!\ f(\cdot) \ , \ k(x, \cdot) \!\ \big\rangle_{\mathcal{F}} = f(x)$. A kernel slice $k (x, \cdot)$ can be viewed as an implicit feature map $\phi(x)$ with $k(x, x') = \big\langle \ \phi(x) \ , \ \phi(x') \ \big\rangle_{\mathcal{F}} $. \cite{fukumizu2007kernel} shows that commonly used kernels are indeed sufficiently large. These \textit{characteristic kernels} ensure that the mapping from $P(X)$ to $\mu_X$ is injective, hence embedding $P(X)$ as $\mu_X$ does not lose information about $P(X)$. The RBF kernel $k(x, x') = \exp(-\sigma ||x-x'||_2^2)$ is the most widely used kernel. The fact that it can be formed by taking an infinite sum over polynomial kernels connects to our `moments of distributions' example above. The Riesz representation theorem states that $\mu_X$ in Equation~4 exists and is unique. Using the reproducing property and linearity of integration, an explicit formula for $\mu_X$ can be obtained: $\mu_X = \mathbb{E}[\phi(X)] = \int_{\Omega} \phi(x) \ dP(x)$ (after simplifying). This suggests defining the empirical kernel embedding using i.i.d. samples $x^{(1)}, ..., x^{(N)}$ from $P(X)$ as: \begin{align} \hat{\mu}_X := \tfrac{1}{N} \sum_{n=1}^N \phi\big(x^{(n)}\big). \end{align} \cite{smola2007hilbert} justifies the above choice by showing that $\hat{\mu}_X$ converges to $\mu_X$ as $O(1/\sqrt{N})$, independent of the dimensionality of $X$, which avoids the curse of dimensionality.
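As a quick, self-contained numerical check of this convergence (illustrative only, not part of our pipeline), note that by the reproducing property, evaluating $\hat{\mu}_X$ against $f = k(y, \cdot)$ reduces to an average of kernel evaluations: $\langle \hat{\mu}_X, k(y, \cdot) \rangle = \tfrac{1}{N} \sum_n k\big(x^{(n)}, y\big)$. For $X \sim \mathcal{N}(0,1)$, the RBF kernel with $\sigma = 1$, and $y = 0$, the exact value is $\mathbb{E}[\exp(-X^2)] = 1/\sqrt{3}$:

```python
import math
import random

rng = random.Random(1)

def k(x, y, sigma=1.0):
    # RBF kernel k(x, x') = exp(-sigma * ||x - x'||^2)
    return math.exp(-sigma * (x - y) ** 2)

def embed_eval(samples, y):
    # Evaluating the empirical mean embedding at y via the reproducing
    # property: <mu_hat, k(y, .)> = (1/N) sum_n k(x_n, y).
    return sum(k(x, y) for x in samples) / len(samples)

# Samples from P(X) = N(0, 1); for this P the exact value of
# E[k(X, 0)] = E[exp(-X^2)] is 1/sqrt(3).
exact = 1.0 / math.sqrt(3.0)
errors = []
for n in (10, 100, 10000):
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    errors.append(abs(embed_eval(xs, 0.0) - exact))
print(errors)
```

The absolute error shrinks towards zero as the sample size grows, consistent with the $O(1/\sqrt{N})$ rate.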
Instead of dealing with infinite-dimensional implicit maps $\phi(x)$, applying the \textit{kernel trick} allows us to operate with the finite-dimensional Gram matrix $K: K_{ij} = k\big(x^{(i)}, x^{(j)}\big)$, $i,j=1...N$. \subsection{RKHS-Net Layer for Distributional Embeddings} \label{sec:rkhsnet} \begin{figure}[t] \centering \vspace{6px} \includegraphics[width=1.0\linewidth]{img/overview.pdf} \vspace{-17px} \caption{An overview of our \textit{BayesSim-RKHS} method, with a focus on the proposed RKHS-net layer, shown within the blue rectangle. The RKHS-net layer can take samples from any distribution as inputs, and in this work we compute the distributional embedding for the keypoints $\pmb{k}_1,...,\pmb{k}_K$.} \label{fig:meanembed} \vspace{-10px} \end{figure} For scalability reasons, we can avoid the computation of the Gram matrix by approximating the kernel function by its inner product: \begin{align} k(x,x') = \big \langle \ \phi(x) \ , \ \phi(x') \ \big \rangle_{\mathcal{F}} \! \approx \!\ \hat{\phi}(x)^T \hat{\phi}(x'), \end{align} where $\hat{\phi}(x)$ is a finite-dimensional approximation of $\phi(x)$, known as \textit{random Fourier features}~\cite{rahimi2007random, rahimi2008weighted}. Following the derivation in~\cite{rahimi2007random}, we first employ \textit{Bochner's theorem}, which states that any continuous shift-invariant kernel $k(x,x') := k(x-x')$ can be represented in terms of its Fourier transform: \begin{align*} k(x-x') = \int p(\omega) \!\ \exp\big(i \omega^T (x-x')\big) d\omega, \end{align*} where $p(\omega)$ is the spectral density corresponding to the kernel $k(x-x')$. For a real-valued kernel $k(\cdot, \cdot)$, the right-hand side can be written without the imaginary part as $\mathbb{E}_\omega \big[\cos\big(\omega^T (x-x')\big)\big]$.
This expectation can be approximated with a Monte Carlo estimate yielding: \begin{align*} &k(x-x') \approx \frac{1}{M} \sum_{m=1}^{M} \cos \big(\omega_{m}^T x - \omega_{m}^T x'\big) = \hat{\phi}(x)^T \hat{\phi}(x'), \\ &\hat{\phi}(x)^T \!\!=\!\! \tfrac{1}{\sqrt{M}} \Big[ \cos(\omega_1^T x), \sin(\omega_1^T x), ..., \cos(\omega_M^T x), \sin(\omega_M^T x) \Big] \end{align*} When $\omega \!\sim\! \mathcal{N}(\pmb{0}, I)$, the above approximates an RBF kernel with $\sigma\!\!=\!\!1$. More generally, when $\omega \!\sim\! \mathcal{N}(\pmb{0}, \sigma I)$ this approximates an RBF kernel with a hyper-parameter $\sigma$. The \textit{frequencies} $\omega$ are usually sampled randomly, yielding components of the form $\cos(\sigma^{-1} \!\circ\! \omega_M^T x)$, $\sin(\sigma^{-1} \!\circ\! \omega_M^T x)$. \cite{rahimi2007random} provides approximation bounds and further analysis of the random Fourier features (RFF) approximation. We propose to use the RFF approximation to construct the mean embedding for the part of the state that benefits from the distributional representation. In the current work this includes the keypoints $\pmb{k}_1,...,\pmb{k}_K$. However, in general, the proposed approach is not limited to handling keypoint representations, and can embed any distributional part of the state. Furthermore, we propose to integrate this into the overall learning architecture in a fully differentiable manner. We accomplish this by constructing a neural network layer that obtains random samples for $\omega$, and then propagates the gradients through to adjust them during training. We also propagate the gradients through $\sigma$. Figure~\ref{fig:meanembed} illustrates this. \subsection{Keypoint Extraction Modules} \label{sec:keypointtrain} \begin{figure}[b!]
\vspace{-10px} \centering \includegraphics[width=1.0\linewidth]{img/keypoints_permute.jpg} \vspace{-15px} \caption{Left: keypoints from the supervised approach~\cite{sundaresan2020untangling} appear close to the desired corner regions, despite deformations. They tend to track consistent locations, but the method does not aim to guarantee this. Right: example results from our adaptation of the unsupervised method from~\cite{kulkarni2019unsupervised}.} \label{fig:keypoints} \end{figure} We now describe two keypoint extraction methods: one is a data-efficient supervised method, for which the user needs to annotate a small set of images to indicate the desired locations for the keypoints~\cite{sundaresan2020untangling}; the other is unsupervised, and only needs unlabeled RGB frames for training~\cite{kulkarni2019unsupervised}. \cite{sundaresan2020untangling} is a recent method designed for learning features to help refine coarse keypoint prediction to a precise consistent location on an object. This method has been shown to work well as part of a larger framework aimed at solving the perception, planning and manipulation challenges for the task of untangling knots. The aim is to learn semantic keypoints that roughly capture the state of the deformable objects. For scenarios with cloth we annotate the corners of the cloth to indicate the desired areas where the algorithm should learn to place the keypoints. For ropes it is less obvious what the `best' location for placing a keypoint should be. Hence, we make a simple choice of spacing the keypoints uniformly along the rope. Using these annotated images as RGB image observations, we learn a mapping $f: \mathbb{R}^{W \!\!\times\! H \!\times\! 3} \rightarrow \mathbb{R}^{W \!\!\times\! H \!\times\! 4}$, where each channel of the output represents a 2D heatmap for one keypoint. Given 250 images (125 simulated, 125 real), we annotate four task-relevant keypoints on each image.
Then, we apply affine, lighting, and color transformations to augment the dataset, obtaining an overall augmented dataset of 3000 images. A network with a ResNet-34 backbone is then trained to predict 2D Gaussian heatmaps centered at each keypoint. After training, the positions of the keypoints are predicted as the argmax over each channel's heatmap. We aim for our overall approach to effectively handle noisy outputs that unsupervised approaches can yield as well. For this, we adapt an unsupervised keypoint extraction approach based on the Transporter architecture~\cite{kulkarni2019unsupervised}. This method takes as input RGB images $x_{src}$ and $x_{tgt}$. A convolutional neural network (CNN) encodes the input into a spatial feature map $\Phi(x)$, and a keypoint detection network $\Psi(x)$ predicts keypoint locations. Then, a `transport' operation is performed to modify the feature map of the input image, such that source features at the location of the source image keypoints are subtracted out, while target features at the location of the target image keypoints are pasted in: $\hat{\Phi}(x_{src}, x_{tgt}) = (1 - \mathcal{H}_{\Psi(x_{src})}) \cdot (1 - \mathcal{H}_{\Psi(x_{tgt})}) \cdot \Phi(x_{src}) + \mathcal{H}_{\Psi(x_{tgt})} \cdot \Phi(x_{tgt})$. Here, $\mathcal{H}$ denotes the mapping from keypoint coordinates to Gaussian heatmaps. Then, a decoding CNN reconstructs the target image from the transported feature map. The training is guided only by the reconstruction loss. We extend this approach to ensure that the keypoint network $\Psi(x)$ allocates keypoints to the manipulated objects, as opposed to placing them on the robot. The pose of the robot and its geometry (mesh) are usually known to high precision. Hence, there is no benefit in tracking the motion of the robot from the RGB images.
We obtain the robot mask (the part of the image that has the robot in the foreground) by using depth filtering methods on the depth readings that we acquire from an RGBD camera mounted in the workspace. We mask out the areas with the robot from the reconstruction loss during training, and this helps the method focus on the regions with the deformable object. The right side of Figure~\ref{fig:keypoints} shows an example of the keypoints we obtain with this approach. \section{Experiments} \label{sec:experiments} In the following sub-sections, we first describe the scenarios we consider, then explain our hardware setup and evaluation strategy, then illustrate the results of real-to-sim experiments using two types of keypoint extraction methods. \begin{figure}[t] \vspace{5px} \centering \includegraphics[width=1.0\linewidth]{img/scenarios.jpg} \vspace{-20px} \caption{Left: scenarios we consider in our hardware experiments. Right: examples of simulation with various parameters; the initial parameter ranges are wide, which yields both realistic and unrealistic behavior, as expected.} \label{fig:scenarios} \vspace{-10px} \end{figure} \subsection{Description of Real and Simulated Scenarios.} For our experiments we consider three scenarios that involve deformable objects with various levels of deformation. The first is a wiping scenario, where a robot manipulates a thick cloth to wipe a table surface. This scenario aims to test the case where capturing the state of the real object is tractable, but finding an appropriate simulation posterior is challenging. The real wiping cloth is clearly visible in most frames, and undergoes only small deformations. In contrast, simulated cloth can be highly flexible, and easily crumples when medium-to-low bending and elastic stiffness simulation parameters for the cloth are sampled together with medium-to-high values for the friction parameter. These cases are frequent in the initial uniform simulation parameter samples.
In the second scenario the robot winds a highly flexible rope around a spool. This scenario presents a challenge for the keypoint extraction methods, since there are no obvious canonical locations for keypoints. Furthermore, parts of the rope are occluded by the spool. In the third scenario the robot flings the cloth up, then lowers it down and drags it on the table surface. This scenario is challenging for perception, both for real images and simulation, since the cloth is highly flexible and not fully visible at any point. With medium-to-high friction, the ends of the cloth spread out on the table, but the top corners of the cloth remain obscured due to self-occlusion. In all of these scenarios we infer a joint posterior for bending stiffness, elastic stiffness, friction, and the scale/size of the deformable object. The rest of the simulation parameters are left as defaults, and are the same across all the scenarios. We use the PyBullet simulator~\cite{coumans2019}, with the Finite Element Method (FEM) option for simulating deformable objects. While FEM can in general be precise but computationally expensive, the default settings we use in PyBullet prioritize speed over fidelity to obtain faster-than-realtime simulation. Figure~\ref{fig:scenarios} shows visual examples of our scenarios. In this work, we focus on inference of posteriors of physical simulation parameters, and do not explore the aspect of a large mismatch in camera perspective or visual appearance of the scene. Hence, we make simulation environments that approximately match the visual appearance, which is easy to achieve in the PyBullet simulator. We load a realistic background and texture for the deformable object, and approximately match the camera pose. Addressing a large visual gap for a simulator that cannot import custom visual elements can be done by either training keypoint methods with heavy visual domain randomization, or by exploring novel differentiable rendering techniques.
We leave these directions for future work. In this work we also do not focus on the aspect of simulating grasping of the deformable object with high fidelity. In our simulated environments we attach a simple grasp anchor to the cloth/rope objects, instead of simulating the interaction between the gripper's fingers and the thin-shell deformable object. SoftGym~\cite{lin2020softgym} is a recent example of a suite of simulation environments geared towards sim-to-real that adopts a similar approach for a subset of environments. \begin{figure}[t] \vspace{10px} \centering \includegraphics[width=1.0\linewidth]{img/hardware_setup_and_GT.jpg} \vspace{-20px} \caption{Left: our hardware setup. Right: visualization of our evaluation strategy to measure the alignment between the behavior of the real and simulated deformable objects. The `ground truth' masks are only used for evaluation, and are not given to any of the algorithms we compare.} \label{fig:hw_setup} \vspace{-10px} \end{figure} \subsection{Hardware Setup and Evaluation Methodology} Our hardware setup includes a Kinova Gen3 7DoF robot arm with a Robotiq 2F-85 gripper, and an Intel RealSense D435 camera. The camera provides RGB image data during experiments and is positioned to view the table surface. We use an image resolution of $320 \times 320$ pixels in all our experiments. To execute the desired robot trajectories we use velocity control in the Cartesian (end-effector) space, using the high-level control interface provided by Kinova. The left side of Figure~\ref{fig:hw_setup} illustrates our workspace. To evaluate the performance of our approach and baselines, we measure alignment between the motion of the real and simulated deformable object. To localize the deformable objects in the scene we construct masks based on color filters defined in HSV space. The mask extracted for the real object constitutes the `ground truth' region.
Note that this `ground truth' (GT) is only used for evaluation purposes and is not given to any of the algorithms we compare, which only use keypoints as input. We construct a mask for the simulated object as well. To compare the two masks, we use a bidirectional Chamfer distance, which is a common metric for measuring differences between unordered point clouds. To obtain trajectories for evaluation, we command the same trajectory on the real robot and in simulation, obtain the frames from the real and simulated cameras, then compute the Chamfer distance. The mean of this distance across all timesteps constitutes our evaluation metric that quantifies alignment between the real and simulated deformable object. On the $y$ axis in our plots, we refer to this metric as the ``distance to `ground truth' state''. \subsection{Hardware Results for Real-to-sim} \begin{figure}[t] \vspace{10pt} \centering \includegraphics[width=0.9\linewidth]{img/legend.jpg} \includegraphics[width=0.495\linewidth]{img/wiping_real_results_ready_tran.jpg} \includegraphics[width=0.475\linewidth]{img/wiping_real_results_ready.jpg} \vspace{-11px} \caption{Hardware results for the wiping task. Left: using unsupervised keypoints. Right: using supervised keypoints.} \label{fig:hw_results_wiping} \vspace{-12px} \end{figure} For evaluation we compare the following 6 methods: \textit{BayesSim-MDNN} is the original BayesSim method in~\cite{ramos2019bayessim}. \textit{BayesSim-MDRFF} extracts RFF features from trajectories before passing the data to the mixture density network for training. This variant has been used in experiments with inferring material properties of granular media~\cite{matl2020inferring} and showed strong performance on that challenging task. One key aspect to note is that~\cite{matl2020inferring} relies on domain-specific features, as we described in the background in Section~\ref{sec:background}.
We aim to avoid designing domain-specific features for manipulation with deformables, since it is implausible that a single feature extraction technique could perform well across various scenarios/tasks and various kinds of objects and deformation types. \textit{BayesSim-RKHS} is the method we propose in Section~\ref{sec:approach}. \textit{BayesZoom-RKHS} is a variation of our method, which we test to verify that our distributional embedding of keypoints can still work with a simpler variant of BayesSim. This variant uses a \mbox{3-layer} fully-connected neural network instead of the mixture density network. The network learns to predict the mean of a unimodal Gaussian posterior. We use the L1 error on the validation set as an estimate of the standard deviation. We refer to this method as \textit{BayesZoom}, since it retains the core idea of shifting the posterior to re-sample simulated trajectories from the more promising regions of the search space (i.e. it `zooms' in on the useful parts of the space). \textit{BayesZoom-RKHS} incorporates our idea of using distributional embedding for keypoints. All of the above algorithms are sequential: they shift the posterior after each iteration, then sample simulation data from the new posterior on the subsequent iteration. We collect 100 simulation trajectories on each iteration, and retain all the data from the previous iterations for training. Hence, after 15 iterations these algorithms would collect a total of 1.5K simulation trajectories overall. To compare this sequential training against batch/bulk training, we test the following two `bulk' algorithms that collect 1.5K trajectories sampled from the uniform prior, then train on this dataset. \textit{NN-bulk-1.5k} is a batch method that trains a fully connected neural network on 1.5K simulated trajectories. \textit{GP-bulk-1.5k} performs batch Gaussian process regression.
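For reference, the evaluation metric introduced earlier -- the bidirectional Chamfer distance between the mask point sets, averaged over timesteps -- can be sketched as follows (an illustrative implementation, not our exact evaluation code):

```python
def chamfer_bidirectional(points_a, points_b):
    # Bidirectional Chamfer distance between two unordered 2D point sets
    # (e.g. pixel coordinates of the real and simulated object masks).
    def one_way(src, dst):
        return sum(min((sx - dx) ** 2 + (sy - dy) ** 2
                       for dx, dy in dst) for sx, sy in src) / len(src)
    return one_way(points_a, points_b) + one_way(points_b, points_a)

def trajectory_distance(real_masks, sim_masks):
    # Mean Chamfer distance across all timesteps of a trajectory.
    dists = [chamfer_bidirectional(r, s)
             for r, s in zip(real_masks, sim_masks)]
    return sum(dists) / len(dists)

# Identical masks give distance 0; shifted masks give a positive value.
square = [(x, y) for x in range(3) for y in range(3)]
shifted = [(x + 5, y) for x, y in square]
print(trajectory_distance([square], [square]))   # → 0.0
print(trajectory_distance([square], [shifted]) > 0)  # → True
```

Lower values of this metric indicate that the simulated deformable object tracks the real one more closely over the trajectory.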
In all NN-based algorithms, we use 3-layer fully connected neural networks as the `backbone' with 1024 units in each layer. We use the Adam optimizer with a learning rate of \mbox{$1e\text{-}6$}. In our experience, the small learning rate helps NN-based approaches to learn from noisy data. For Gaussian process regression we use GPyTorch+BOTorch with automatic hyperparameter optimization. For keypoint extraction, we experimented with using 8 and 4 keypoints with the unsupervised method. For the supervised approach we used 4 keypoints to minimize the time spent on labeling the data. \begin{figure}[t] \vspace{10pt} \centering \includegraphics[width=0.9\linewidth]{img/legend.jpg} \includegraphics[width=0.505\linewidth]{img/winding_real_results_ready_tran.jpg} \includegraphics[width=0.480\linewidth]{img/winding_real_results_ready.jpg} \vspace{-18px} \caption{Hardware results for the winding task. Left: using unsupervised keypoints. Right: using supervised keypoints.} \label{fig:hw_results_winding} \vspace{-10px} \end{figure} \begin{figure}[b] \centering \includegraphics[width=1.0\linewidth]{img/hw_posteriors.png} \vspace{-15px} \caption{Examples of 2D slices of 4D posteriors found after 15 iterations of \textit{BayesSim-RKHS}. The mixture posterior consists of 4 full-covariance Gaussian components; blue crosses show their means. The high-likelihood regions are denoted in magenta (blue crosses outside high-likelihood regions indicate that the mixture component's weight is low). Scale and friction tend to be the easier parameters to estimate, with posteriors becoming peaked. Bending and elastic stiffness are more difficult to infer.
The middle plot shows an example where the posterior has shifted only slightly.} \label{fig:hw_posteriors} \end{figure} \begin{figure}[t] \vspace{10pt} \centering \includegraphics[width=0.9\linewidth]{img/legend.jpg} \includegraphics[width=0.50\linewidth]{img/fling_real_results_ready_tran.jpg} \includegraphics[width=0.485\linewidth]{img/fling_real_results_ready.jpg} \vspace{-20px} \caption{Hardware results for the fling task. Left: using unsupervised keypoints. Right: using supervised keypoints.} \label{fig:hw_results_fling} \vspace{-13px} \end{figure} Figures~\ref{fig:hw_results_wiping},~\ref{fig:hw_results_winding},~\ref{fig:hw_results_fling} show plots for results on all the scenarios. In these plots, the lines show mean distance to `ground truth' (mean over a set of 30 evaluation trajectories), shaded regions indicate one standard deviation. The top of each plot includes an example visualization of the keypoints. Results for the `bulk' baselines are shown on the right side in each figure. We let these methods use the supervised keypoints, since these are less noisy. \textit{BayesSim-RKHS} and \textit{BayesZoom-RKHS} outperform all other approaches, showing the largest improvement on the winding task. This result is intuitive, since the keypoints in this scenario move around the length of the rope without settling on consistent locations. Hence, the distributional embedding for keypoints is most useful. The unsupervised keypoints extracted in wiping and fling scenarios are sometimes placed on the robot. This can be resolved by modeling the robot in simulation instead of masking it out in reality; we will address this in future work. \section{Discussion and Conclusion} In this work, we introduced the concept of distributional embedding to represent deformable object state. We showed that this idea allows us to conduct Bayesian parameter inference of material properties on noisy real-world data from vision systems. 
Using the random Fourier feature approximation enabled this embedding to be computed efficiently. Previous works, including existing BayesSim variants, explored using RFF features to transform the inputs to the neural networks before training. Computer vision works, such as~\cite{tancik2020fourier}, also employed RFFs and showed favorable results in some cases, for example -- improving recovery of high-frequency signals in images. In contrast to previous results for direct application of RFFs, we show that simply using RFF features yields the \mbox{\textit{BayesSim-MDRFF}} method, which is unable to produce informative parameter posteriors for deformable objects. Our approach benefits from the favorable theoretical properties of RFFs, but applies them in a different context: to embed distributions representing the state of a deformable object in an RKHS. We specifically address the challenges of handling approximate and noisy representations that frequently arise when dealing with deformable objects. Furthermore, our approach aims to enable modularity and data efficiency when applying Bayesian parameter inference methods to the challenges of learning from data with deformables. We demonstrate how to successfully use existing state representation learning methods, despite the lack of consistency and challenges with non-identifiability in these representations. Instead of requiring large-scale data collection, we focus on utilizing the data efficiently. The keypoint extraction models are trained from $\approx\!\!1$ minute of real data, and hence output approximate and noisy results. Our method allows us to interpret the output of these models as samples from an underlying state distribution and create embeddings of these distributions with minimal loss of information. The RKHS-net layer we propose offers a fully automated way to construct such embeddings, without the need for hyperparameter tuning, since all variables and parameters are learned from data via gradient descent.
\section{INTRODUCTION} Over the past two decades, robotics and autonomous vehicle systems have \textcolor{red}{increasingly utilized} vision sensors, using them to provide critical capabilities including localization. This usage is due in part to the rapid increase in both \textcolor{red}{camera capabilities and computational processing power}. Cameras have benefits over other sensors such as radar, providing far more information about the environment including texture and colour. Furthermore, cameras have other advantages including being passive sensing modalities, \textcolor{red}{and the potential} to be relatively inexpensive, have small form factors and relatively low power consumption \cite{milford2014condition}. One of the critical system design considerations for camera-equipped autonomous platforms is the coverage of the cameras, which is affected by a range of factors including the altitude of the platform (for aerial contexts), mounting point (for ground-based vehicles), the camera field of view and the sensor resolution. The choices made with regard to these system properties can also affect other critical system considerations like compute -- if a subset of the entire field of view of a camera can be used for effective localization, significant reductions in compute can be achieved. We address this challenge by presenting a novel technique that automatically identifies the trade-off between visual sensor coverage and the performance of a visual localization algorithm. The technique enables automatic selection of the minimum visual sensor coverage required to obtain optimal performance -- specifically, optimal localization recall without expending unnecessary compute on processing a larger sensor coverage field than required.
We focus our research within the area of vision-based surface localization, such as that demonstrated by Kelly et al.~\cite{Kelly:2000,Kelly:2007} for warehouse localization, Conte and Doherty \cite{conte2009vision} in aerial environments and Hover et al.~\cite{Hover:2012} in ship hull inspection. We evaluate the proposed method \textcolor{red}{using two surface-based visual localization techniques}, on several challenging real-world aerial and ground-based surface datasets, showing that the technique can automatically select the optimal coverage by using calibration data from environments analogous to the deployment environment. The paper proceeds as follows. Section \ref{section:RelatedWork} summarizes related work, such as surface-based visual localization and procedures for parameter tuning. Sections \ref{section:Approach} and \ref{section:ExperimentalSetup} provide an overview of the calibration procedure and the experimental setup respectively. The performance of our algorithm and a discussion are presented in Sections \ref{section:Results} and \ref{section:Discussion} respectively. \begin{figure}[!t] \centering \includegraphics[scale=0.75]{OverviewFig.png} \vspace{-0.25cm} \caption{Given a reference map and a number of query samples, our overlap coefficient-based calibration process automatically determines the optimal sensor coverage for maximizing localization performance while minimizing computational overhead. The blue and red lines in the plots are the overlapping coefficient for various patch radii for the two datasets shown and the overlapping coefficient threshold respectively.} \label{figure:OverviewFig} \vspace{-0.2cm} \end{figure} \section{RELATED WORK} \label{section:RelatedWork} In this section we present research related to surface-based visual localization and calibration procedures for parameter tuning.
\textcolor{red}{The coverage here is of localization techniques themselves rather than coverage calibration approaches, since to the best of our knowledge there is no existing system that is directly comparable to the technique outlined in this paper.} \subsection{Surface-Based Visual Localization} In several mobile robotics applications the system moves relative to a surface, such as a drone across the ground, an autonomous vehicle over the road or a submarine relative to a ship's hull. As a result, several approaches have proposed using the surface that the robot moves relative to as a visual reference map for localization. For example, Kelly et al. thoroughly demonstrated that surface-based visual localization using pixel-based techniques for mobile ground platforms is feasible within warehouse environments with controlled lighting using a monocular camera \cite{Kelly:2000,Kelly:2007}. Mount et al. also demonstrated that this technique can be applied to autonomous vehicles and a road surface, even with day to night image data \cite{mount2017image}. \textcolor{red}{Additionally, \cite{kozak2016,zhang2019} demonstrate the use of local features for road surface-based visual localization.} Unmanned aerial vehicles (UAVs) regularly use geo-referenced aerial imagery to help alleviate errors caused by GPS outages \cite{conte2009vision, Sim2002IntegratedPE, caballero2009vision, madison2007vision}. For example, Conte et al. demonstrated that they could incorporate feature-based image registration to develop a drift-free state estimation technique for UAVs \cite{conte2009vision}. The research presented on underwater visual ship hull inspection and navigation further demonstrates that vision based surface localization is feasible even in challenging conditions \cite{Hover:2012,Kim:2013,Ozog:2014}. There has also been a variety of research into utilizing the surface as the input image stream for visual odometry \cite{Dille:2010,Nourani:2011,aqel2016adaptive}.
\textcolor{red}{All these systems either have a hard-coded, empirically tuned parameter defining the amount of the visual sensor to use, or simply use the entire field of view. Therefore, they could be performing unnecessary computations without any performance gains. In contrast, our system automatically selects the optimal visual sensor coverage for maximizing performance while minimizing unnecessary computation.} \subsection{Calibration Procedures for Visual Localization} The altering of configuration parameters in both deep learning and traditional computer vision algorithms can have a drastic effect on performance \cite{bergstra2011algorithms}, such as the size of images used within appearance-based techniques \cite{milford2012seqslam}. This can cause difficulties in successfully making the transition between research and application, as well as between domains \cite{jacobson2018semi, zeng2018i2, zeng2017enhancing}. Due to these difficulties, there have been several lines of research investigating the development of automatic calibration routines to improve the performance of visual localization algorithms. Lowry et al. demonstrated online training-free procedures that could determine the probabilistic model for evaluating whether a query image came from the same location as a reference image, even under significant appearance variation \cite{lowry2014towards, lowry2015building}. In \cite{jacobson2015online, jacobson2015autonomous} and \cite{jacobson2013autonomous} Jacobson et al. explored novel calibration methods to automatically optimize sensor threshold parameters for place recognition. Several bodies of work have also used the system's state estimate to reduce the search space in subsequent iterations, such as that in \cite{aqel2016adaptive, Nourani:2011}. In all of these works the authors demonstrated that parameter calibration outperformed their state-of-the-art counterparts.
\textcolor{red}{However, these techniques typically focused on optimizing a single metric, mainly recall/accuracy, and did not explicitly consider calibrating for both localization performance and computation load in parallel, which is the focus of the research described in this paper}. There has been considerable research into calibration routines to identify spatial and temporal transforms between pre-determined sensor configurations \cite{maddern2012lost, furgale2013unified, kelly2011visual, scaramuzza2007extrinsic, pandey2015automatic, weiss2012real}. Using visual sensors to overcome kinematic and control model errors in robotic platforms has also been a key area of research \cite{meng2007autonomous, vsvaco2014calibration, du2013online}. These approaches in general have addressed a different set of challenges to those addressed here, instead focusing on the relationship between sensors and robotic platforms or between sensors and other non-localization-based competencies. The automatic selection of hyper-parameters is also related, especially in the deep learning field \cite{bergstra2011algorithms, bergstra2012random, thornton2013auto, bardenet2013collaborative, gold2005bayesian}. \section{Approach} \label{section:Approach} This section provides an overview of the approach for automatic selection of the sensor coverage required for an optimal combination of visual surface-based localization performance and computational requirements. The primary aim and scope of the techniques presented here is to identify the amount of coverage with respect to the sensor field of view and the altitude of a downward-facing camera above the ground plane.
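The relationship between field of view, altitude and ground coverage referred to above can be made concrete with a small helper. This is an illustrative sketch of our own (not from the paper), assuming an idealized pinhole camera pointing straight down at a flat ground plane; the function names are ours:

```python
import math

def ground_footprint(altitude_m, fov_deg):
    # Width (in metres) of the ground strip seen by a nadir-pointing
    # pinhole camera at the given altitude and horizontal field of view.
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)

def patch_radius_to_metres(patch_radius_px, image_width_px, altitude_m, fov_deg):
    # Convert a patch radius in pixels to metres on the ground plane.
    metres_per_px = ground_footprint(altitude_m, fov_deg) / image_width_px
    return patch_radius_px * metres_per_px
```

For example, under these assumptions, at a 30 m altitude with a 60 degree field of view the footprint is roughly 34.6 m, so a 30-pixel patch radius in a 200-pixel-wide image covers about 5.2 m of ground.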
The technique requires a small number of aligned training image pairs from an environment analogous to the deployment environment; although we do not address that particular problem here, there are many techniques that could potentially be used to bootstrap this data online, such as SeqSLAM \cite{milford2012seqslam}. We outline the complete calibration procedure in Algorithm \ref{algo:CalibrationProcedure}. \begin{algorithm} \color{red} \For{all patch radii in $P_N$}{ \For{$x$ calibration samples}{ run localization on sample; \\ store ground truth and all other localization scores; \\ } fit distribution to ground truth scores; \\ fit distribution to all other scores; \\ calculate OVL between distributions; \\ store patch radius and OVL in matrix; } \eIf{any OVL value $\leq$ required OVL value}{ interpolate to find optimal patch radius; \\ }{ set optimal patch radius to $\max_N{(P_N)};$ \\ } \caption{Calibration Procedure} \label{algo:CalibrationProcedure} \end{algorithm} \subsection{Optimal Coverage Calibration Procedure} The calibration procedure works under the assumption that the similarity between the normal distributions of the ground-truth-only scores and of all scores diverges as sensor coverage, resolution and placement change. This divergence in distribution similarity is indicative of better single-frame matching performance (see Figure \ref{figure:OVLMetricExample} for an example). In this paper we use the Overlapping Coefficient (OVL), which is an appropriate measure of distribution similarity \cite{inman1989overlapping, reiser1999confidence}. There are various measures for OVL, including Morisita's \cite{morisita1959measuring}, Matusita's \cite{matusita1955decision} and Weitzman's \cite{weitzman1970measures}. We use Weitzman's measure, which is given by \begin{equation} O = \int_{k_0}^{k_1} \min(p(x), q(x))\, dx \end{equation} where $p(x)$ and $q(x)$ are two normal distributions and $O$ is the resulting OVL value.
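The distribution fitting, OVL computation and interpolation steps of Algorithm \ref{algo:CalibrationProcedure} can be sketched as follows. This is an illustrative implementation of our own (the helper names are not from the original code), with Weitzman's measure evaluated by simple midpoint integration:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def fit_normal(scores):
    # Fit a normal distribution (mean, standard deviation) to a list of scores.
    n = len(scores)
    mu = sum(scores) / n
    var = sum((s - mu) ** 2 for s in scores) / n
    return mu, math.sqrt(var)

def ovl(p_params, q_params, k0=-1.0, k1=1.0, steps=10000):
    # Weitzman's overlapping coefficient: integral of min(p, q) over
    # [k0, k1], the numerical limits of the score (e.g. [-1, 1] for NCC).
    dx = (k1 - k0) / steps
    total = 0.0
    for i in range(steps):
        x = k0 + (i + 0.5) * dx
        total += min(normal_pdf(x, *p_params), normal_pdf(x, *q_params)) * dx
    return total

def interpolate_radius(radii, ovls, o_r):
    # Linear interpolation of the optimal patch radius at the required OVL
    # threshold o_r; radii sorted ascending, OVL decreasing with radius.
    # Falls back to the largest radius tested if the threshold is never met.
    for i in range(len(radii) - 1):
        p_a, o_a = radii[i], ovls[i]
        p_b, o_b = radii[i + 1], ovls[i + 1]
        if o_a >= o_r >= o_b:
            return p_a + (p_b - p_a) * (o_r - o_a) / (o_b - o_a)
    return max(radii)
```

For instance, with OVL values of 0.5, 0.2 and 0.05 at radii of 10, 20 and 30 pixels and a threshold of 0.1, the interpolated optimum is roughly 26.7 pixels; identical score distributions give an OVL near 1, and well-separated ones give an OVL near 0.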
\textcolor{red}{The bounds of the integral, $k_0$ and $k_1$, are the numerical limits of the technique being utilized. For example, $k_0$ and $k_1$ would be $-1$ and $1$ respectively for NCC. The Overlapping Coefficient was used as the measure of distribution similarity over other methods, such as the Kullback-Leibler divergence, as it decays to zero as two distributions become more dissimilar and because it is symmetric.} Once the OVL value goes below a given threshold there is little to no gain in localization performance. At this point we consider the visual sensor coverage to be optimal. As the OVL threshold most likely lies between two of the tested calibration OVL values, as in Figure \ref{figure:OVLMetricExample}, we use linear interpolation to select the point of intersection. If no tested calibration points achieve less than the required OVL we simply take the largest coverage tested. The optimal operating value $P_O$ is hence selected as follows, \begin{equation} \color{red} P_O = \begin{cases} P_a+(P_b-P_a)\frac{O_r-O_a}{O_b-O_a} & \text{any}(O_N \leq O_r) \\ \max_N{(P_N)} & \text{otherwise} \end{cases} \end{equation} where $P_O$, $P_a$ and $P_b$ are the optimal operating value, and the tested values above and below the required OVL threshold, $O_r$, respectively. $O_a$ and $O_b$ are the corresponding OVL values for the tested calibration values $P_a$ and $P_b$. $P_N$ are all the values tested during calibration and $O_N$ are their corresponding OVL values. \begin{figure}[!t] \centering \includegraphics[scale=0.2]{OVLMetricExample.jpg} \vspace{-0.2cm} \caption{The effect of patch radius on the overlapping coefficient (OVL) between the normal distributions of all the correlation scores (solid red line) and the ground-truth-only scores (dashed green line). The red dotted line and solid black circle in the bottom plot represent the required OVL value $O_r$ and the selected interpolated patch radius respectively.
This example used NCC as the underlying localization technique.} \label{figure:OVLMetricExample} \vspace{-0.2cm} \end{figure} \textcolor{red}{Within this research our calibration procedure attempts to automatically select the optimal patch radius. We demonstrate the calibration algorithm using two surface-based visual localization techniques, Normalized Cross Correlation (NCC) and local features with sub-patch comparisons. NCC was selected as it has been shown to have relatively good performance within surface-based visual systems \cite{Kelly:2007, mount2017image, Nourani:2011, aqel2016adaptive}. The local features technique (LFT) is used to demonstrate that the calibration procedure is agnostic to the front-end employed. Figure \ref{figure:LocalFeatureTechnique} shows an example of the local feature with sub-patch comparisons technique. The sub-patch comparison makes the local feature matching more sensitive to translational shifts and is similar to the regional-MAC descriptor outlined in \cite{tolias2015particular} or the patch verification technique described in \cite{milford2014_Visual}.} \begin{figure}[!t] \centering \includegraphics[scale=0.085]{LocalFeatureTechnique.png} \vspace{-0.3cm} \caption{An example of the local feature with sub-patch comparisons. This technique compares two patches (entire red rectangle) via their corresponding smaller sub-patches. The final metric for a large patch-to-patch comparison is the average percentage of key point inliers across sub-patches. In this work the sub-patch diameter is set to 40 pixels, and we move the patch in increments of 20 pixels. We have used BRISK key points with SURF descriptors, and we only test patch sizes that are integer multiples of the sub-patch size.} \label{figure:LocalFeatureTechnique} \vspace{-0.4cm} \end{figure} \section{Experimental Setup}\label{section:ExperimentalSetup} This section describes the experimental setup, including the dataset acquisition and key parameter values.
All experiments were performed either on a standard desktop running 64-bit Ubuntu 16.04 and MATLAB-2018b or on Queensland University of Technology's High Performance Computing (HPC) cluster running MATLAB-2018b. \subsection{Image Datasets} Datasets were either acquired from aerial photography provided by Nearmap or from road surface imagery collected using a full-frame Sony A7s camera. The datasets are summarized in Table \ref{table:Datasets}. \subsubsection{Aerial Datasets} The aerial datasets were acquired by downloading high-resolution aerial photography provided by Nearmap \cite{nearmap}. To ensure suitable dataset variation for validation of our algorithm, we collected imagery from forest, field, rural and suburban areas at various altitudes as well as at different qualitative levels of appearance variation. Each Nearmap dataset consists of two pixel-aligned images, a reference and a query map. Patches from the query map are compared to the reference map. Figure \ref{figure:NearmapDatasetExamples} shows the reference and query maps for each Nearmap dataset. The Nearmap Datasets 7a to 7c are from the same location with differing altitudes. \textcolor{red}{Similarly, the Nearmap Datasets 8a to 8c are from the same location with the same reference image, but with query images exhibiting various levels of appearance variation (missing buildings and hue variations)}. Each Nearmap image was down-sampled to a fixed width while maintaining its aspect ratio. This down-sampling eases comparison between the different datasets. \begin{figure*}[!tbhp] \centering \includegraphics[scale=0.2]{NearmapDatasets_Small.jpg} \vspace{-0.15cm} \caption{The 12 Nearmap reference and query map pairs and 8 image pairs from the Road Surface datasets used in this research. The Nearmap environments vary significantly from grassy fields to urban environments, observed from a range of altitudes and under different appearance changes.
The two road surface datasets show the corresponding reference-query map pairs, with day-day and day-night transitions. The size difference in the images is caused by the manual pixel alignment and cropping procedure.} \label{figure:NearmapDatasetExamples} \vspace{-0.45cm} \end{figure*} \subsubsection{Road Surface Datasets} The road surface imagery datasets were acquired using a consumer-grade Sony A7s, with a standard lens, capturing video while mounted to the bonnet of a Hyundai iLoad van. Three traversals of the same stretch of road were made, two during the day and one at night. Corresponding day-day (Road Surface 1) and day-night (Road Surface 2) frames with significant overlap were then selected, and the corresponding frames manually pixel aligned. This resulted in two datasets, Road Surface 1 and 2, each containing four pixel-aligned image pairs: day-day pairs in dataset 1 and day-night pairs in dataset 2. As with the Nearmap datasets, the first image in each image pair is used as the reference map, while the second is used to generate query patches. Figure \ref{figure:NearmapDatasetExamples} shows the four reference and query maps for each Road Surface dataset. The road surface images were pre-processed, including down-sampling and local patch normalization, to remove the effects of lighting variation and motion blur. This has been shown to improve visual localization performance \cite{milford2012seqslam}.
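The local patch normalization step mentioned above can be sketched as follows. This is an illustrative implementation of our own (not the authors' code), where the radius argument plays the role of the patch normalization radius $N_X$ in Table \ref{table:KeyParameters}:

```python
import numpy as np

def patch_normalize(image, radius=2, eps=1e-6):
    # Local patch normalization: each pixel is standardized by the mean
    # and standard deviation of its (2*radius+1)^2 neighbourhood, which
    # suppresses local lighting variation before matching.
    img = image.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            out[y, x] = (img[y, x] - patch.mean()) / (patch.std() + eps)
    return out
```

A uniformly lit, constant region maps to zeros, while edges and texture are preserved regardless of the local brightness level.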
\begin{table}[!t] \caption{Datasets} \label{table:Datasets} \centering \begin{tabular}{|c||c||c|} \hline \multicolumn{3}{|c|}{\textbf{Dataset Name}} \\ \hline Nearmap 1 & Nearmap 2 & Nearmap 3 \\ \hline Nearmap 4 & Nearmap 5 & Nearmap 6 \\ \hline Nearmap 7a & Nearmap 7b & Nearmap 7c \\ \hline Nearmap 8a & Nearmap 8b & Nearmap 8c \\ \hline Road Surface 1a & Road Surface 1b & Road Surface 1c \\ \hline Road Surface 2a & Road Surface 2b & Road Surface 2c \\ \hline \end{tabular} \vspace{-0.4cm} \end{table} \subsection{Parameter Values} The key parameter values are given in Table \ref{table:KeyParameters}. \textcolor{red}{All parameters were empirically determined over a range of test datasets, and then applied to all experimental datasets. As shown by the results, the system was generally able to select a near-optimal patch radius across a range of environment appearances and domains (aerial versus ground-based), even with an almost identical set of parameter values.} \textcolor{red}{The selection of the required Overlapping Coefficient ($O_r$) is a trade-off between reducing computational overhead and risking reduced localization performance, and is dependent on the localization front-end. An initial OVL value can be computed by finding the patch radius that achieves high recall on several test datasets.
The remaining parameters, which are mostly dependent on the environment domain and sensor parameters, could also be tuned using exemplary data.} \begin{table}[!t] \color{red} \caption{Key Parameter List for Nearmap and Road Surface Datasets} \label{table:KeyParameters} \centering \begin{tabular}{|c|c|c|c|p{2.1cm}|} \hline \textbf{Parameter} & \multicolumn{2}{c|}{\textbf{Nearmap}} & \textbf{Road Surface} & \textbf{Description}\\ \hline & NCC & LFT & NCC & \\ \hline $I_{X}$ & 200 & 400 & 100 & Image Width \\ \hline $N_{X}$ & \multicolumn{2}{c|}{N/A} & 2 & Patch Normalization Radius \\ \hline $O_{r}$ & 0.005 & 0.0225 & 0.005 & Required OVL Threshold\\ \hline $t_{M}$ & \multicolumn{2}{c|}{10} & 5 & True Match Distance Threshold \\ \hline $N$ & 200 & 100 & 200 & Number of Calibration Samples \\ \hline $M$ & 1000 & 100 & 1000 & Number of Validation Samples \\ \hline \end{tabular} \vspace{-0.4cm} \end{table} \section{Experiments and Results}\label{section:Results} This section presents the results from the various experiments we conducted. To evaluate performance we calculate the recall, as well as a new performance metric which takes into account both recall and computational efficiency. We define recall as the number of true single-frame matches divided by the total number of samples. The second, new performance metric tests whether the calibration procedure indeed chooses the optimal operating point. Optimal performance is defined as maximizing recall with as little computational overhead as necessary. This new metric, which we call the max recall to computation efficiency, is given by \begin{equation} \color{red} M_{i} = 1 - \frac{\sqrt{(P_i - P_{g})^2}}{\max_N{\sqrt{(P_N - P_{g})^2}}} \end{equation} where $M_{i}$ is the max recall to computation efficiency at patch radius $P_i$. $P_g$ and $P_N$ are the optimal ground truth patch radius for the dataset and all patch radii used during validation, respectively. The $\max_N{\sqrt{(P_N - P_{g})^2}}$ term normalizes the distances to the range 0 to 1, while the $1-$ inverts the normalized distances so that a higher value means a higher recall to computation efficiency. The optimal ground truth patch radius, $P_g$, is defined as the patch radius which achieves 95\% of the maximum recall for that dataset. This distance metric naturally encodes the recall and computational efficiency into a single value, and it penalizes both unnecessary computational overhead and points that achieve poor relative recall. Patch radius is indicative of computational load, as demonstrated in Figure \ref{figure:AveCompTimeAndPatchOverlay}a, which shows that computation time is proportional to patch radius. \subsection{Automatic Coverage Selection Evaluation} The first experiment was to investigate the performance of the calibration procedure and test whether it indeed selects the optimal coverage required to maximize localization performance. To evaluate this we ran the calibration routine on a single calibration image that was the same size as, and representative of, each Nearmap reference map. We then verified the calibration procedure by testing several patch radii, including the selected patch radius from the calibration routine, on each Nearmap dataset. \textcolor{red}{It should be noted that no image pairs used for calibration are used during validation, and there is no physical overlap between the calibration and validation image pairs in any experiment (see Figure \ref{figure:NearmapDatasetExamples})}. To validate the calibration procedure we compute the percentage recall and performance metric for several patch radii on the validation image pairs. The results are shown in Figures \ref{figure:ValidationOfCalibrationProcedure} and \ref{figure:AltitudeAndApearanceVariation}. Figure \ref{figure:ValidationOfCalibrationProcedure} shows the results for Nearmap datasets 1-6.
Figure \ref{figure:AltitudeAndApearanceVariation} shows the results for datasets 7a-c and 8a-c, which represent various altitudes and appearance variation. \textcolor{red}{The Overlap Coefficient for Nearmap 6 does not decay to 0 because the calibration image has an extremely limited amount of unique data (i.e., patch localization is almost impossible). Additionally, the validation image does have some unique information, which is why 100\% recall can be achieved.} Figure \ref{figure:AveCompTimeAndPatchOverlay}a shows the average computation time is proportional to the patch radius. Additionally, it should be noted that the optimal coverage varies between datasets, as shown in Figure \ref{figure:AveCompTimeAndPatchOverlay}b. In Figure \ref{figure:Nearmap7Traversal} we provide a visual example of a traversal through the Nearmap 8b dataset using the optimal patch radius of 30 pixels, as well as a patch radius above and below. As can be seen, the optimal patch radius results in near-perfect recall with minimal computational overhead. \begin{figure}[!tbtp] \centering \includegraphics[scale=0.20]{ValidationCalibrationProcedure.jpg} \vspace{-0.15cm} \caption{Results of the calibration procedure on several Nearmap datasets, optimizing for NCC patch radius. The top plot shows the OVL using Weitzman's measure for the calibration patch radii tested, computed on a calibration image. The second and third plots show the percentage recall and max recall to computational efficiency curves for several patch radii, including the selected patch radius, $P_O$, indicated by a black circle, computed on the Nearmap dataset images.
As can be seen, the calibration procedure consistently selects the patch radius near the top of the max recall to computational efficiency curves, demonstrating its success.} \label{figure:ValidationOfCalibrationProcedure} \vspace{-0.3cm} \end{figure} \begin{figure}[!tbtp] \centering \includegraphics[scale=0.2]{VaryingAltitudeAndAppearance.jpg} \vspace{-0.15cm} \caption{Results of the calibration procedure on Nearmap datasets with altitude and appearance variations, datasets 7a-c and 8a-c respectively. As can be seen in the third plot, the calibration consistently picks the near-optimal patch size, as indicated by the black circles.} \label{figure:AltitudeAndApearanceVariation} \vspace{-0.25cm} \end{figure} \begin{figure}[!tbtp] \centering \includegraphics[scale=0.7]{CompTimeAndOverlay.png} \vspace{-0.3cm} \caption{(a) Computational profile: the average computation time, and hence computational load, is proportional to the patch radius. (b) The optimal visual coverage required is dependent on the data. The rectangles show the optimal patch radius. The optimal patch radii are 4, 30, 7 and 15 pixels for the Nearmap 7a, Nearmap 8b, Road Surface 1 and Road Surface 2 datasets respectively (note that the Nearmap 8b patch radius looks smaller than the Road Surface patch radii because the Nearmap 8b image is 4x larger).} \label{figure:AveCompTimeAndPatchOverlay} \vspace{-0.3cm} \end{figure} \begin{figure}[!tbtp] \centering \includegraphics[scale=0.2]{Nearmap_8b_Traversal.jpg} \vspace{-0.15cm} \caption{A visual indication of the performance of the calibration procedure on a traversal across the Nearmap 8b dataset. As can be seen, the optimal patch radius selected by the calibration procedure, 30 pixels, results in almost perfect recall with a much lower computation time per iteration compared to that of the traverse using a 60px patch radius.
Each green and red dot indicates the center of successful or unsuccessful localization of a query patch throughout the traverse respectively.} \label{figure:Nearmap7Traversal} \vspace{-0.2cm} \end{figure} \subsection{Automatic Coverage Selection on a Different Domain} The second experiment investigated how well the automatic selection of the optimal visual coverage worked on a different data domain. \textcolor{red}{For this experiment we used the two road surface datasets. For each dataset, image pair 1 was used for calibration while all four image pairs were used for validation. The results for Road Surface datasets 1 and 2 can be found in Figures \ref{figure:RoadSurfacePlots_DayDay_Single} and \ref{figure:RoadSurfacePlots_DayNight_Single} respectively. Please note we validated on all four images, even though image pair 1 is used for training, to allow us to compare results in the following experiment. We will only discuss the results of image pairs 2 to 4 here.} As can be seen, the calibration procedure successfully selects the near-optimal patch radius in both Road Surface datasets. \textcolor{red}{The slightly lower max recall to computational efficiency performance of the selected patch radius on the Road Surface 2 dataset is due to the fact that the training data in this case was less representative of the deployment data than in the other cases. The higher performance on validation image pairs 2 and 3 compared to validation image pair 4 is probably caused by the fact that the unique features in image pairs 2 and 3 (i.e. cracks, identifiable rocks/patterns) are more evenly distributed throughout the entire image. This means that smaller patches have a higher chance of successful localization in validation image pairs 2 and 3, despite any visual variations (i.e.
hue) to the calibration image pair.} However, these results still show that the calibration procedure can select an optimal coverage that generalizes to other data (assuming the calibration data is representative of the rest of the dataset). \vspace{-0.4cm} \begin{figure}[!tbtp] \centering \includegraphics[scale=0.2]{RoadSurfacePlots_DayDay_Single.jpg} \vspace{-0.15cm} \caption{The results of the calibration procedure on the Road Surface 1 dataset (day-day images), which demonstrates that the calibration procedure consistently selects the optimal patch radius within a different data domain.} \label{figure:RoadSurfacePlots_DayDay_Single} \vspace{-0.35cm} \end{figure} \begin{figure}[!tbtp] \centering \includegraphics[scale=0.2]{RoadSurfacePlots_DayNight_Single} \vspace{-0.15cm} \caption{The results of the calibration procedure on the Road Surface 2 dataset (day-night images). The patch radius selected by the calibration procedure, which was determined using the first image pair, results in near-optimal performance on the three remaining image pairs within the dataset.} \label{figure:RoadSurfacePlots_DayNight_Single} \vspace{-0.35cm} \end{figure} \textcolor{red}{\subsection{Automatic Coverage Selection using Multiple Training Images}} \textcolor{red}{The previous experiments on Road Surface 2 demonstrate what happens when the training data is not representative of the deployment environment. To mitigate this issue, multiple training image pairs can be used. For this experiment we calibrated on image pairs 1 and 2 of the Road Surface 2 dataset and averaged the two optimal patch radii, which were $15$ and $8$ respectively. The averaged optimal patch radius (rounded to $12$) was then validated on all four images. The results are shown in Figure \ref{figure:RoadSurfacePlots_DayNight_Multi}.} \textcolor{red}{The results show that training on multiple images both positively and negatively affects performance.
In the case of image pairs 2 and 3 we can see that the selected patch radius is closer to the peak of the max recall to computational efficiency curve. However, for image pairs 1 and 4 the selected patch radius has resulted in a decrease on this curve, meaning the overall recall is decreased (i.e. worse localization performance). In contrast, for image pairs 2 and 3, recall is still maximized but computational efficiency has increased. This suggests the averaging of multiple training image pairs does lead to a better overall performance, since there is only a slight decrease in recall performance for image pairs 1 and 4. However, a more sophisticated approach to selecting the optimal patch radius when using multiple image pairs for training may lead to further improvements; this is an avenue for future investigation.} \begin{figure}[!t] \centering \includegraphics[scale=0.18]{RoadSurfacePlots_DayNight_Multi} \vspace{-0.15cm} \caption{The results of the multiple training image experiment performed on Road Surface dataset 2. Compared to the results from the previous experiment, the use of multiple training images improves the overall performance with regard to the max recall to efficiency metric.} \label{figure:RoadSurfacePlots_DayNight_Multi} \vspace{-0.38cm} \end{figure} \textcolor{red}{\subsection{Automatic Coverage Selection Evaluation using a Feature-Based Localization Approach}} \textcolor{red}{To evaluate the generality of the automatic coverage selection process, we performed a second set of experiments with the local feature-based technique previously described as the localization front-end.
Due to the extremely challenging appearance change present in much of the Nearmap datasets, the feature-based approach only produced competitive performance on datasets 4, 7a and 7b, a result mirroring what has been observed in a range of other feature-based localization systems \cite{milford2014_Visual}. However, for these environments where the underlying front-end was functional, the calibration routine successfully selected the optimal patch radius in all cases, as can be seen in Figure \ref{figure:LocalFeatureTechnique_Results}. These results indicate that the coverage selection process can generalize across different localization front-ends.} \begin{figure}[!t] \centering \includegraphics[scale=0.18]{ValidationCalibrationProcedure_LFT} \vspace{-0.15cm} \caption{The results of using the calibration system with the local feature-based technique. As can be seen, the optimal patch radius is correctly selected, showing the proposed system generalizes to other localization front-ends.} \label{figure:LocalFeatureTechnique_Results} \vspace{-0.3cm} \end{figure} \section{Discussion and Future Work}\label{section:Discussion} The presented automatic calibration procedure takes a set of aligned imagery from an environment analogous to the deployment domain, and selects the minimum sensor coverage required to achieve optimal localization performance with minimal compute requirements. Experiments run across both aerial and ground-based surface imagery demonstrated that the approach is able to consistently find this optimal coverage amount, even when it varies widely across application domains and environments. There are a range of enhancements and extensions that can be pursued in future work.
The first is to investigate the potential use of appearance-invariant visual localization algorithms to generate the aligned training data ``on the fly'' at deployment time, removing the need to have training data beforehand \textcolor{red}{and allowing for continuous online calibration}. The second is to investigate other criteria for finding the optimal operating point beyond the implementation used in this research -- such as defining a ``plateau'' threshold in the overlap coefficient curve at which point performance gains diminish with increased sensor coverage. Thirdly, we have investigated sensor coverage of the environment here but not other properties like sensor resolution. Such properties could likely be optimized through a similar process to the one used here for coverage. \textcolor{red}{Fourthly, the technique has been demonstrated to be agnostic to surface-based visual localization techniques -- it will be interesting to investigate how it performs on other visual localization systems, for example forward-facing cameras.} Additionally, there may be absolute criteria that can be used to determine the optimal coverage for a given environment, again removing the requirement to have training data with aligned imagery. \textcolor{red}{Finally, while the required OVL value is dependent on the localization technique, the heuristically determined OVL thresholds selected appear to be robust across a range of very different datasets and domains, including various image sizes and pre-processing steps. However, a sensitivity analysis would be worth investigating. 
Additionally, further work into the automatic selection of parameter values as well as a probabilistic interpretation of how to select the OVL value could draw on existing methods, such as \cite{lowry2015building, jacobson2015online}.} Choosing the right camera configuration with respect to mounting and field of view, as well as the operating altitude of an unmanned aerial vehicle, is a critical process both during system design and during deployment operations. We hope that the research presented here will provide an additional tool with which to address these challenges. \section*{ACKNOWLEDGMENT} James Mount and Michael Milford are with the Australian Centre for Robotic Vision at the Queensland University of Technology. This work was supported by an ARC Centre of Excellence for Robotic Vision Grant CE140100016, an Australian Research Council Future Fellowship FT140101229 to Michael Milford and an Australian Postgraduate Award and a QUT Excellence Scholarship to James Mount. The authors also appreciate the support and computing resources provided by QUT’s High Performance Computing (HPC) facility. \bibliographystyle{IEEEtran}
\section{Friedmann equations in the $f(T)$ theory} In the framework of $f(T)$-gravity, the Friedmann equations in the spatially flat Friedmann-Robertson-Walker (FRW) universe are given by \cite{Karami1,Ferraro,bengochea,yerzhanov,linder,WufT,Bamba} \begin{equation} \frac{3}{k^2}H^2=\rho+\rho_T,\label{fT11} \end{equation} \begin{equation} \frac{1}{k^2}(2\dot{H}+3H^2)=-(p+p_T),\label{fT22} \end{equation} where \begin{equation} \rho_T=\frac{1}{2k^2}(2Tf_T-f-T),\label{roT} \end{equation} \begin{equation} p_T=-\frac{1}{2k^2}[-8\dot{H}Tf_{TT}+(2T-4\dot{H})f_T-f+4\dot{H}-T],\label{pT} \end{equation} with \begin{equation} T=-6H^2.\label{T} \end{equation} Here $k^2=8\pi G$, $H=\dot{a}/a$ is the Hubble parameter, and $\rho$ and $p$ are the total energy density and pressure of the matter inside the universe. Also $\rho_T$ and $p_T$ are the energy density and pressure due to the contribution of the torsion scalar $T$. Furthermore, in $f_T$ and $f_{TT}$ the subscript $T$ denotes a derivative with respect to $T$. Note that if $f(T)=T$ then Eqs. (\ref{fT11}) and (\ref{fT22}) reduce to the usual Friedmann equations in Einstein general relativity (GR). The energy conservation laws are \begin{equation} \dot{\rho}+3H(\rho+p)=0, \end{equation} \begin{equation} \dot{\rho}_T+3H(\rho_T+p_T)=0.\label{ecT} \end{equation} The equation of state (EoS) parameter due to the torsion contribution is defined as \cite{Karami1,yerzhanov} \begin{equation} \omega_T=\frac{p_T}{\rho_T}=-1+\frac{8\dot{H}Tf_{TT}+4\dot{H}f_T-4\dot{H}}{2Tf_T-f-T}.\label{omegaT} \end{equation} For a given $a=a(t)$, with the help of Eqs. (\ref{roT}) and (\ref{pT}) one can reconstruct the $f(T)$-gravity according to any dark energy (DE) model given by the EoS $p_T=p_T(\rho_T)$ or $\rho_T=\rho_T(a)$. Here we assume a pole-like phantom scale factor as \cite{Sadjadi} \begin{equation} a(t)=a_0(t_s-t)^{-h},~~~t\leq t_s,~~~h>0.\label{a} \end{equation} Using Eqs.
(\ref{T}) and (\ref{a}) one can obtain \begin{equation} \begin{array}{l} H=\frac{h}{t_s-t},\\ T=-\frac{6h^2}{(t_s-t)^2},\\ \dot{H}=-\frac{T}{6h}.\label{respect to r} \end{array} \end{equation} From Eqs. (\ref{a}) and (\ref{respect to r}) the scale factor $a$ can be rewritten in terms of $T$ as \begin{equation} a=a_0{\left(-\frac{T}{6h^2}\right)}^{\frac{h}{2}}.\label{aT} \end{equation} \section{Polytropic $f(T)$-gravity model} Here, as in \cite{Karami1}, we reconstruct the $f(T)$-gravity from the polytropic gas DE model. Following \cite{Karami2}, the EoS of the polytropic gas is given by \begin{equation} p_{\Lambda}=K\rho_{\Lambda}^{1+\frac{1}{n}},\label{pol1} \end{equation} where $K$ is a positive constant and $n$ is the polytropic index. Using Eq. (\ref{ecT}) the energy density evolves as \begin{equation} \rho_{\Lambda}=\left(Ba^\frac{3}{n}-K\right)^{-n},\label{pol2} \end{equation} where $B$ is a positive integration constant \cite{Karami2}. Substituting Eq. (\ref{aT}) into (\ref{pol2}) yields \begin{equation} \rho_{\Lambda}=\left(\alpha T^\frac{3h}{2n}-K\right)^{-n},\label{rhoPG} \end{equation} where \begin{equation} \alpha=Ba_{0}^{\frac{3}{n}}{\left(-6h^2\right)}^{\frac{-3h}{2n}}.\label{alphaP} \end{equation} Equating (\ref{roT}) with (\ref{rhoPG}), i.e. $\rho_T=\rho_{\Lambda}$, we obtain the following differential equation \begin{equation} 2Tf_T-f-T-2k^2\left(\alpha T^\frac{3h}{2n}-K\right)^{-n}=0.\label{dif eq1} \end{equation} Solving Eq. (\ref{dif eq1}) gives \begin{equation} f(T)=\beta~T^{1/2}+T+(-1)^{1+n}\frac{2k^2}{K^n} ~{_2}F_1\left(-\frac{n}{3h},n;1-\frac{n}{3h};\frac{\alpha}{K}T^\frac{3h}{2n}\right),\label{fPDE} \end{equation} where $_{2}F_1$ denotes the Gauss hypergeometric function. Substituting Eq. (\ref{fPDE}) into (\ref{omegaT}) one can obtain the EoS parameter of the torsion contribution as \begin{equation} \omega_{T}=-1-\frac{1}{\frac{K}{\alpha}T^{\frac{-3h}{2n}}-1}~,~~~h>0.\label{wPG} \end{equation} Using Eqs. 
(\ref{T}) and (\ref{alphaP}), the above relation can be rewritten as \begin{equation} \omega_{T}=-1-\frac{1}{\frac{K}{B}\left[a_0\left(\frac{H}{h}\right)^{h}\right]^{\frac{-3}{n}}-1}~,~~~h>0.\label{wPG2} \end{equation} We see that for $\frac{K}{B}\left[a_0\left(\frac{H}{h}\right)^{h}\right]^{\frac{-3}{n}}>1$, $\omega_T<-1$, which corresponds to a phantom accelerating universe. \section{Standard Chaplygin $f(T)$-gravity model} The EoS of the standard Chaplygin gas (SCG) DE is given by \cite{Kamenshchik} \begin{equation} p_{\Lambda}=-\frac{A}{\rho_{\Lambda}}, \end{equation} where $A$ is a positive constant. Inserting the above EoS into the energy conservation equation (\ref{ecT}) leads to a density evolving as \cite{Kamenshchik} \begin{equation} \rho_{\Lambda}=\sqrt{A+\frac{B}{a^6}},\label{CG} \end{equation} where $B$ is an integration constant. Inserting Eq. (\ref{aT}) into (\ref{CG}) one can get \begin{equation} \rho_{\Lambda}=\sqrt{A+\alpha T^{-3h}},\label{rhoCG} \end{equation} where \begin{equation} \alpha=B{a_0}^{-6}{(-6h^2)}^{3h}.\label{alphaCG} \end{equation} Equating (\ref{rhoCG}) with (\ref{roT}) one can obtain \begin{equation} 2Tf_T-f-T-2k^2\sqrt{A+\alpha T^{-3h}}=0.\label{dif eq2} \end{equation} Solving the differential equation (\ref{dif eq2}) yields \begin{eqnarray} f(T)=\beta T^{1/2}+T-2k^2A^{\frac{1}{2}}~ {_2}F_1\left(\frac{1}{6h},\frac{-1}{2};1+\frac{1}{6h};{-\frac{\alpha}{A}} T^{-3h}\right).\label{fCGDE} \end{eqnarray} Substituting Eq. (\ref{fCGDE}) into (\ref{omegaT}) one can get \begin{equation} \omega_{T}=-1+\frac{1}{\frac{A}{\alpha}T^{3h}+1}~,~~~h>0.\label{wSCG} \end{equation} Using Eqs. (\ref{T}) and (\ref{alphaCG}), the above relation can be rewritten as \begin{equation} \omega_{T}=-1+\frac{1}{\frac{A}{B}\left[a_0\left(\frac{H}{h}\right)^{h}\right]^{6}+1}~,~~~h>0,\label{wSCG2} \end{equation} which shows that for $B<0$ and $\frac{A}{|B|}\left[a_0\left(\frac{H}{h}\right)^{h}\right]^{6}>1$, $\omega_T$ can cross the phantom-divide line. 
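As an illustrative numerical cross-check (not part of the original derivation), note that substituting $f=\beta T^{1/2}+T+F(T)$ into (\ref{dif eq2}) reduces it to $2TF'(T)-F(T)=2k^2\sqrt{A+\alpha T^{-3h}}$, which the hypergeometric part of (\ref{fCGDE}) can be verified to satisfy with mpmath. All parameter values below are arbitrary test points, and $T>0$ is taken purely for numerical convenience, since the algebra is insensitive to the sign:

```python
# Illustrative check (not from the paper): the hypergeometric part of the
# SCG solution satisfies 2*T*F'(T) - F(T) = 2*k^2*sqrt(A + alpha*T^(-3h)).
# Parameter values are arbitrary test points; T > 0 is used purely for
# numerical convenience.
import mpmath as mp

mp.mp.dps = 30  # extra precision for the numerical derivative

def residual(T, h, A, alpha, k):
    a = 1/(6*h)
    F = lambda t: -2*k**2*mp.sqrt(A)*mp.hyp2f1(a, mp.mpf('-0.5'), 1 + a,
                                               -(alpha/A)*t**(-3*h))
    lhs = 2*T*mp.diff(F, T) - F(T)
    rhs = 2*k**2*mp.sqrt(A + alpha*T**(-3*h))
    return abs(lhs - rhs)

for T in ('0.5', '1.3', '4.0'):
    assert residual(mp.mpf(T), h=mp.mpf('0.4'), A=2, alpha=mp.mpf('1.5'), k=1) < 1e-12
```

The check relies on the standard contiguous relation $z\,g'(z)+a\,g(z)=a(1-z)^{-b}$ for $g={}_2F_1(a,b;a{+}1;z)$, which is exactly the structure produced by the operator $2T\,d/dT-1$ acting on (\ref{fCGDE}).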
\section{Generalized Chaplygin $f(T)$-gravity model} The EoS of the Generalized Chaplygin Gas (GCG) DE model is given by \cite{Bento} \begin{equation} p_{\Lambda}=-\frac{A}{\rho_{\Lambda}^\alpha}, \end{equation} where $\alpha$ is a constant in the range $0\leq\alpha\leq 1$ (the SCG corresponds to the case $\alpha=1$) and $A$ is a positive constant. Using Eq. (\ref{ecT}), the GCG energy density evolves as \cite{Bento} \begin{equation} \rho_{\Lambda}=\left({A+\frac{B}{a^{3(1+\alpha)}}}\right)^{\frac{1}{1+\alpha}},\label{GCG} \end{equation} where $B$ is an integration constant. Substituting Eq. (\ref{aT}) into (\ref{GCG}) one can get \begin{equation} \rho_{\Lambda}={\left(A+\gamma~ T^{\frac{-3}{2}h(1+\alpha)}\right)}^{\frac{1}{1+\alpha}},\label{rhoGCG} \end{equation} where \begin{equation} \gamma=B{a_0}^{-3(1+\alpha)}{\left(-6h^2\right)}^{\frac{3}{2}h(1+\alpha)}.\label{gammaGCG} \end{equation} Equating (\ref{rhoGCG}) with (\ref{roT}) gives \begin{equation} 2Tf_T-f-T-2k^2{\left({A+\gamma~ T^{\frac{-3}{2}h(1+\alpha)}}\right)}^{\frac{1}{1+\alpha}}=0.\label{dif eq3} \end{equation} Solving Eq. (\ref{dif eq3}) yields \begin{eqnarray} f(T)=\beta T^{1/2}+T-2k^2A^{\frac{1}{1+\alpha}}~ {_2}F_1\left(\frac{1}{3h(1+\alpha)},\frac{-1}{1+\alpha};1+\frac{1}{3h(1+\alpha)};{-\frac{\gamma}{A}} T^{\frac{-3}{2}h(1+\alpha)}\right).\label{fGCG} \end{eqnarray} Substituting Eq. (\ref{fGCG}) into (\ref{omegaT}) gives the EoS parameter as \begin{equation} \omega_{T}=-1+\frac{1}{\frac{A}{\gamma}T^{\frac{3}{2}h(1+\alpha)}+1}~,~~~h>0,~~~0\leq\alpha\leq1.\label{wGCG} \end{equation} Using Eqs. (\ref{T}) and (\ref{gammaGCG}), the above relation can be rewritten as \begin{equation} \omega_{T}=-1+\frac{1}{\frac{A}{B}\left[a_0\left(\frac{H}{h}\right)^{h}\right]^{3(1+\alpha)}+1},~~~h>0,~~~0\leq\alpha\leq1,\label{wGCG2} \end{equation} which shows that for $B<0$ and $\frac{A}{|B|}\left[a_0\left(\frac{H}{h}\right)^{h}\right]^{3(1+\alpha)}>1$, $\omega_T$ can cross the phantom-divide line. 
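The phantom-divide behaviour quoted above is easy to illustrate numerically. The sketch below evaluates the last expression for $\omega_{T}$ at arbitrarily chosen, purely illustrative parameter values with $B<0$:

```python
# Illustrative evaluation (hypothetical parameter values, not fits):
#   omega_T = -1 + 1/((A/B)*[a0*(H/h)^h]^(3*(1+alpha)) + 1)

def omega_T(H, A, B, a0, h, alpha):
    X = (a0*(H/h)**h)**(3*(1 + alpha))
    return -1 + 1/((A/B)*X + 1)

# B < 0 with (A/|B|)*X > 1: phantom side of the divide
w_phantom = omega_T(H=1.0, A=2.0, B=-1.0, a0=1.0, h=0.5, alpha=0.5)
# B < 0 with (A/|B|)*X < 1: quintessence side
w_quint = omega_T(H=1.0, A=0.1, B=-1.0, a0=1.0, h=0.5, alpha=0.5)
assert w_phantom < -1 < w_quint
```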
\section{Modified Chaplygin $f(T)$-gravity model} The EoS of the modified Chaplygin gas (MCG) DE model is given by \cite{Benaoum} \begin{equation} p_{\Lambda}=A\rho_\Lambda-\frac{B}{\rho_{\Lambda}^\alpha}, \end{equation} where $A$ and $B$ are positive constants and $0\leq\alpha\leq1$. Using Eq. (\ref{ecT}), the MCG energy density evolves as \cite{Benaoum} \begin{equation} \rho_{\Lambda}=\left({\frac{B}{1+A}+\frac{C}{a^{3(1+\alpha)(1+A)}}}\right)^{\frac{1}{1+\alpha}},\label{MCG} \end{equation} where $C$ is an integration constant. Substituting Eq. (\ref{aT}) into (\ref{MCG}) yields \begin{equation} \rho_{\Lambda}=\left({\frac{B}{1+A}+\gamma~ T^{\frac{-3}{2}h(1+\alpha)(1+A)}}\right)^{\frac{1}{1+\alpha}},\label{rhoMCG} \end{equation} where \begin{equation} \gamma=C{a_0}^{-3(1+\alpha)(1+A)}{\left(-6h^2\right)}^{\frac{3}{2}h(1+\alpha)(1+A)}.\label{gammaMCG} \end{equation} Equating (\ref{rhoMCG}) with (\ref{roT}) gives \begin{equation} 2Tf_T-f-T-2k^2\left({\frac{B}{1+A}+\gamma~ T^{\frac{-3}{2}h(1+\alpha)(1+A)}}\right)^{\frac{1}{1+\alpha}}=0.\label{dif eq4} \end{equation} Solving Eq. (\ref{dif eq4}) yields \begin{eqnarray} f(T)=\beta~T^{1/2}+T-2k^2\left(\frac{B}{1+A}\right)^{\frac{1}{1+\alpha}} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \nonumber\\\times{_2}F_1\left(\frac{1}{3h(1+\alpha)(1+A)},\frac{-1}{1+\alpha};1+\frac{1}{3h(1+\alpha)(1+A)};{\frac{-\gamma(1+A)}{B}} T^{\frac{-3}{2}h(1+\alpha)(1+A)}\right).\label{fMCG} \end{eqnarray} Substituting Eq. (\ref{fMCG}) into (\ref{omegaT}) one can obtain the EoS parameter of the torsion contribution as \begin{equation} \omega_{T}=-1+\frac{A+1}{\frac{B}{\gamma(1+A)}T^{\frac{3}{2}h(1+\alpha)(1+A)}+1}~,~~~h>0,~~~0\leq\alpha\leq1,\label{wMCG} \end{equation} and using Eqs. 
(\ref{T}) and (\ref{gammaMCG}), it can be rewritten as \begin{equation} \omega_{T}=-1+\frac{A+1}{\frac{B}{C(1+A)}\left[a_0\left(\frac{H}{h}\right)^{h}\right]^{3(1+\alpha)(1+A)}+1},~~~h>0,~~~0\leq\alpha\leq1,\label{wMCG2} \end{equation} which shows that for $C<0$ and $\frac{B}{|C|(1+A)}\left[a_0\left(\frac{H}{h}\right)^{h}\right]^{3(1+\alpha)(1+A)}>1$, $\omega_T$ can cross the phantom-divide line. \section{Conclusions} Here we considered the polytropic gas, the SCG, the GCG and the MCG models of DE. We reconstructed the different theories of modified gravity based on the $f(T)$ action in the spatially-flat FRW universe and according to the selected DE models. We also obtained the EoS parameter of the polytropic, standard Chaplygin, generalized Chaplygin and modified Chaplygin $f(T)$-gravity scenarios. We showed that crossing the phantom-divide line can occur when the constant parameters of the models are chosen properly. \\ \\ \noindent{{\bf Acknowledgements}}\\ The work of K. Karami has been supported financially by Research Institute for Astronomy $\&$ Astrophysics of Maragha (RIAAM), Maragha, Iran.
\newcommand{\resection}[1]{\setcounter{equation}{0} \section{#1}} \newcommand{\appsection}[1]{\addtocounter{section}{1} \setcounter{equation}{0} \section*{Appendix \Alph{section}~~#1}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \newcommand\al{\alpha} \newcommand\be{\beta} \newcommand\ga{\gamma} \newcommand\te{\theta} \newcommand\bzero{\boldsymbol{0}} \newcommand\bmu{\boldsymbol{\mu}} \newcommand\bnu{\boldsymbol{\nu}} \newcommand\brho{\boldsymbol{\rho}} \newcommand\bLambda{\boldsymbol{\Lambda}} \newcommand\blambda{\boldsymbol{\lambda}} \newcommand\rg{r_{\mathfrak{g}}} \newcommand\Rth{{\mathbb R}} \newcommand\ep{\varepsilon} \newcommand{\ivec}[1]{|\,#1\,\rangle\!\rangle} \newcommand{\tmmathbf}[1]{\ensuremath{\boldsymbol{#1}}} \newcommand{\tmop}[1]{\ensuremath{\operatorname{#1}}} \newenvironment{tab}{\linespread{1.0} \begin{table}}{\end{table}% \linespread{1.3}} \begin{document} \begin{titlepage} \vskip 0.5cm \begin{flushright} DCPT-06/41 \\ {\tt hep-th/0612298} \end{flushright} \vskip .7cm \begin{center} {\Large{\bf Pseudo-differential equations, and the Bethe Ansatz for the classical Lie algebras }} \end{center} \vskip 0.8cm \centerline{Patrick Dorey$^1$, Clare Dunning$^2$, Davide Masoero$^3$, Junji Suzuki$^4$ and Roberto Tateo$^5$} \vskip 0.9cm \centerline{${}^1$\sl\small Dept.\ of Mathematical Sciences, University of Durham,} \centerline{\sl\small Durham DH1 3LE, United Kingdom\,} \vskip 0.3cm \centerline{${}^{2}$\sl\small IMSAS, University of Kent, Canterbury, UK CT2 7NF, United Kingdom} \vskip 0.3cm \centerline{${}^{3}$\sl\small SISSA, via Beirut 2-4, 34014 Trieste, Italy} \vskip 0.3cm \centerline{${}^{4}$\sl\small Department of Physics, Shizuoka University, Ohya 836, SURUGA, Shizuoka, Japan.} \vskip 0.3cm \centerline{${}^{5}$\sl\small Dip.\ di Fisica Teorica and INFN, Universit\`a di Torino,} \centerline{\sl\small Via P.\ Giuria 1, 10125 Torino, Italy} \vskip 0.2cm \centerline{E-mails:} 
\centerline{p.e.dorey@durham.ac.uk, t.c.dunning@kent.ac.uk,} \centerline{ masoero@sissa.it, sjsuzuk@ipc.shizuoka.ac.jp, tateo@to.infn.it} \vskip 1.25cm \begin{abstract} \noindent The correspondence between ordinary differential equations and Bethe ansatz equations for integrable lattice models in their continuum limits is generalised to vertex models related to classical simple Lie algebras. New families of pseudo-differential equations are proposed, and a link between specific generalised eigenvalue problems for these equations and the Bethe ansatz is deduced. The pseudo-differential operators resemble in form the Miura-transformed Lax operators studied in work on generalised KdV equations, classical W-algebras and, more recently, in the context of the geometric Langlands correspondence. Negative-dimension and boundary-condition dualities are also observed. \end{abstract} {\small {\bf PACS:} 03.65.Ge, 11.15.Tk, 11.25.Hf, 11.55.Ds. {\bf Keywords:} conformal field theory, Bethe ansatz, pseudo-differential equations, spectral problems. } \end{titlepage} \setcounter{footnote}{0} \def\thefootnote{\fnsymbol{footnote}} \resection{Introduction} A recent observation \cite{Dorey:1998pt} has established an unexpected link between two-dimensional conformal field theory (CFT) and the theory of ordinary differential equations. This rests on a correspondence between the transfer matrix eigenvalues of certain integrable models (IMs), in their conformal limits~\cite{Bazhanov:1994ft,Bazhanov:1996dr}, and the spectral determinants \cite{Sha,Voros} of ordinary differential equations. The initial results~\cite{Dorey:1998pt, Bazhanov:1998wj,Suzuki:1999rj,Dorey:1999uk} connected conformal field theories with Virasoro central charge $c \le 1$ to Schr{\"o}dinger problems for one-dimensional anharmonic oscillators. 
These conformal field theories are naturally associated to the Lie algebra $A_1$, but a generalisation to models related to $A_{n-1}$, with additional extended W-algebra symmetries, was soon established \cite{Dorey:1999pv,Suzuki:1999hu,Dorey:2000ma,Bazhanov:2001xm}. However, contrary to initial expectations, a simple Lie-algebraic structure did not emerge immediately, and the extension of the correspondence to the theories associated with other simple Lie algebras $\mathfrak{g}$ has proved surprisingly elusive. The purpose of this paper is to begin to fill this gap, by establishing a link between CFTs related to the classical simple Lie algebras and spectral problems associated with a set of ordinary (pseudo-) differential equations. We shall also prove for $\mathfrak{g}=A_{n-1}$, and conjecture for the other simple Lie algebras, the existence of closed systems of functional equations ($\psi$-systems) among uniquely-defined solutions $\psi^{(1)},\psi^{(2)}, \dots, \psi^{(rank(\mathfrak{g}))}$ of a set of $rank(\mathfrak{g})$ pseudo-differential operators, with each pair $\psi^{(a)}$/(operator)$_{a}$ being naturally associated to a node of the Dynkin diagram. These $\psi$-systems are very similar to the systems of functional relations introduced by Mukhin and Varchenko in the framework of the Bethe ansatz (BA) method for $\mathfrak{g}$-XXX quantum spin chains~\cite{Mukhin:2002fp, Mukhin:2002fp1, Mukhin:2002fp2, Frenkel:2003op}, and in the context of the so-called Miura-opers related to the geometric Langlands correspondence (see, for example, \cite{Frenkel:2005fr,Chervov:2006xk}). This similarity is related to the fact that the homogeneous `differential' parts of the operators studied here resemble, in form, the Miura-transformed Lax operators introduced by Drinfel'd and Sokolov in their studies of generalised KdV equations and classical W-algebras~\cite{Drinfeld:1984qv}. The rest of the paper is organised as follows. 
\S\ref{BAe} gathers together some known, or easily deduced, properties of the Bethe equations for $\mathfrak{g}$-type quantum spin chains in their continuum limits. Our main results are summarised in \S\ref{main}, while extra details and numerical support for the specific $A,D,B$ and $C$ proposals are given in \S\ref{seca} to \S\ref{secc} respectively, and \S\ref{conclusions} contains our general conclusions. There are two appendices: appendix~\ref{appa} deals with the semiclassical analysis for $A_1$-related ODEs in the presence of string solutions, and appendix~\ref{appb} describes a simple algorithm useful for the numerical solutions of the differential equations. The algorithm is a generalisation of Cheng's method from Regge pole theory~\cite{cheng:1962}, and relies on an elegant dual formulation of the relevant boundary problems. \section{The Bethe Ansatz equations and their string solutions} \label{BAe} For any simple Lie algebra $\mathfrak{g}$ of type $A_{n-1}$ to $G_2$, a set of Bethe ansatz equations (BAEs), depending on a set of $rank(\mathfrak{g})$ twist parameters $ \gamma{=}\{ \gamma_a \}$, can be written in a universal form as \cite{Schulz:1983, Babelon:1982gp, Reshetikhin:1986vd, Reshetikhin:LOMI1984, Reshetikhin:1987nn, Reshetikhin:1987bz}\footnote{ For finite lattice models, the explicit diagonalisation of the $A_{n-1}$ cases has been performed through the algebraic Bethe ansatz by Schulz~\cite{Schulz:1983} and also by Babelon, de Vega and Viallet~\cite{Babelon:1982gp}. For $C_n$ and $D_n$ models, it has been done by Reshetikhin~\cite{Reshetikhin:1986vd, Reshetikhin:LOMI1984}. 
There is a shortcut to reach the same conclusions via the so-called analytic Bethe ansatz of Reshetikhin~\cite{Reshetikhin:1987nn}, and Wiegmann and Reshetikhin~\cite{Reshetikhin:1987bz}.} \eq \prod_{ b=1}^{rank(\mathfrak{g})} \Omega^{B_{ab}\gamma_b}_{\phantom a} \frac {Q^{(b)}_{B_{ab}}(E^{(a)}_{i},\gamma)} {Q^{(b)}_{-B_{ab}}(E^{(a)}_{i},\gamma)}= -1\,,\qquad i=0,1,2,\dots~ \label{dall0} \en where \eq Q^{(a)}_k(E,\gamma)=Q^{(a)}(\Omega^k E,\gamma)~, \en and the numbers $E^{(a)}_i$ are the -- in general complex -- zeros of the functions $Q^{(a)}$: \eq Q^{(a)}(E^{(a)}_{i},\gamma)=0~. \label{zq} \en In (\ref{dall0}) and (\ref{zq}) the indices $a$ and $b$ label the simple roots of the Lie algebra, the matrix $B_{ab}$ is defined by \eq B_{ab}= { (\alpha_a, \alpha_b) \over |\hbox{\rm long roots}|^2}~,~~~a,b=1,2,\dots,rank(\mathfrak{g}) \label{cab} \en and the $\alpha$'s are the simple roots of $\mathfrak{g}$. The constant $\Omega$ is a pure phase, parameterised in terms of a real number $\mu{>}0$ as \eq \Omega=\exp \left(i {2\pi \over h^{\vee} \mu} \right) \en where $h^{\vee}$ is the dual Coxeter number of $\mathfrak{g}$. Strictly speaking, the BAE (\ref{dall0}) arise from taking a suitable continuum or field theory limit of the lattice model BAE, in the fashion explained in, for example, \cite{Dorey:2004ta}. The functions $Q^{(a)}$ appearing in (\ref{dall0}) have a characteristic asymptotic behaviour at large values of $-E$ \eq \ln Q^{(a)}(-E,\gamma) = m_a {\sin(\frac{\pi}{h^{\vee}}) \over \sin (\frac{\pi}{h^{\vee}} B_{aa})}(-E)^{\mu}+ \dots~~~. \label{asQ} \en For the $A_{n-1}$, $B_n$, $C_n$ and $D_n$ models the sets $\{m_a \}$ are given in table~\ref{tb1}.\footnote{ The constants $\{ m_a \}$ are related to a particular matrix $K_{ab}$ emerging from the analysis of the Bethe ansatz. 
For simply-laced algebras, $K_{ab}$ is proportional to the Cartan matrix and $\vec{v}{=}(m_1,m_2,\dots, m_r)$ is its Perron-Frobenius eigenvector.} The only free parameter --the overall constant $m$ in table~\ref{tb1}-- depends on the way the conformal field theory limit is reached. \begin{table}[tb] \begin{center} \begin{tabular}{|c|c|c|} \hline \rule[-0.5cm]{0cm}{1.2cm} \small Model &\small $h^{\vee}$ &\small $m_a$ \\ \hline \hline \rule[-0.5cm]{0cm}{1.cm} $A_{n-1}$ & $n$ & $m_a= 2m\sin(a {\pi \over h^{\vee}} )$,~~$(a=1,\dots,n-1)$ \\ \hline \rule[-0.5cm]{0cm}{1.cm} $D_n$ & $2n-2$ & $m_{n-1}=m_{n}=m$,~$m_a=2m \sin(a {\pi \over h^{\vee}} )$,~~$(a=1,\dots,n-2)$ \\ \hline \rule[-0.5cm]{0cm}{1.cm} $B_n$ & $2n-1$ & $m_n=m$,~$m_a=2m \sin(a {\pi \over h^{\vee}} )$,~~$(a=1,\dots,n-1)$ \\ \hline \rule[-0.5cm]{0cm}{1.cm} $C_n$ & $n+1$ & $m_a= 2m\sin(a {\pi \over 2 h^{\vee}} )$,~~$(a=1,\dots,n)$ \\ \hline \end{tabular} \end{center} \caption{\footnotesize Dual Coxeter numbers and coefficients $\{ m_a \}$ for models based on classical simple Lie algebras.} \label{tb1} \end{table} The negative real $E$ axis is also the direction of {\em maximal}\/ growth for $\ln Q^{(a)}(E)$ as $|E|\rightarrow \infty$. {}From (\ref{asQ}), the Hadamard order of $Q^{(a)}$ is therefore $\mu$ and, in the so-called `semiclassical regime' $0<\mu<1$, $Q^{(a)}$ can be written as a convergent infinite product over its zeros as \eq Q^{(a)}(E,\gamma)=Q^{(a)}(0,\gamma) \prod_{i=0}^{\infty} \left( 1- {E \over E^{(a)}_i} \right)~. \label{fac} \en It turns out that the Bethe ansatz roots generally split into multiplets with approximately equal modulus $|E_i^{(a)}|$, and that the ground state of the quantum spin chain corresponds to a `pure' configuration of roots, containing only multiplets with a common dimension \eq d_a={K \over B_{aa}}~. 
\label{dd} \en The integer $K$ in (\ref{dd}) depends on the particular spin chain under discussion, and corresponds to the degree of fusion~\cite{Kulish:1981gi,Bazhanov:1989yk, Kuniba:1991bx}. For $\mathfrak{g}{=}A_1$, i.e.\ for the spin-$j$ $A_1$ quantum chains, $K{=}d_1{=}2j$. It is generally expected \cite{Takahashi1972} that for large values of $i$ the zeros asymptotically tend to the {\it perfect} string configurations: \eq \arg E_i^{(a)} \sim \left( d_a +1 - 2l \right){B_{aa} \pi \over h^{\vee} \mu}~,~~~l=1,2,\dots,d_a~. \label{arge}~ \en Appendix A contains some further discussion of the asymptotic behaviour of these string solutions. \section{Summary of the main results} \label{main} This paper is about the correspondence between the Bethe ansatz equations (\ref{dall0}) for $\mathfrak{g}{=}A_{n-1},B_n,C_n,D_n$ and spectral problems associated with solutions $\psi(x,E, {\bf g})$ of particular pseudo-differential equations with vanishing boundary conditions \eq \psi(x,E, {\bf g})= o\left( e^{-{ x^{M+1} \over M+1}} \right)\quad,\quad (M >K/(h^{\vee}{-}K) ) \label{van} \en imposed at large $x$ on the positive real axis. To specify these equations we introduce the $n^{\rm th}$-order differential operator \cite{Dorey:2000ma} \eq D_n({\bf g})=D(g_{n-1}-(n{-}1))\,D(g_{n-2}-(n{-}2))\,\dots\, D(g_1-1)\,D(g_0)~, \label{dfactdef} \en \eq D(g)=\left(\frac{d}{dx}-\frac{g}{x}\right)~, \en with \eq {\bf g} {=}\{g_{n-1}, \dots,g_1, g_0 \}~~~,~~{\bf g^{\dagger}} {=}\{ n-1-g_0, n-1-g_1, \dots, n-1 -g_{n-1} \}~. \label{conj} \en We also use the inverse differential operator $(d/dx)^{-1}$, generally defined through its formal action \eq \left( { d \over dx} \right)^{-1} x^{s}= {x^{s+1} \over s+1} \label{def00}~. 
\en The following properties hold \eq \left( { d \over dx} \right) \left( { d \over dx} \right)^{-1} x^{s}= x^{s}~,~~~\left( { d \over dx} \right)^{-1} \left( { d \over dx} \right) x^{s}= x^{s}~~~(s \ne 0), \en and the integration by parts property \eq \left( { d \over dx} \right)^{-1} \left[ f(x) {d \over dx} g(x) \right]= f(x)g(x)-\left( { d \over dx} \right)^{-1} \left[ g(x) {d \over dx}f(x) \right] \label{intbyparts} \en is satisfied, provided that in the $x$-expansion of $f(x)g(x)$ about the origin the constant term is absent. In the following we shall assume the validity of (\ref{intbyparts}) by working implicitly with non-integer values for the parameters $g_i$ introduced in (\ref{conj}), and by invoking continuity of the final results in these parameters. Finally, we define a basic `potential' \eq P_K(x,E)= ( x^{h^{\vee} M/K}-E)^K~. \label{pk} \en \\ With this notation in place, the following pseudo-differential equations are the main concern of this article: \\ {\bf $A_{n-1}$ (su(n))}: \eq \Bigl((-1)^{n}D_{n}({\bf g})-P_K(x,E) \Bigr)\psi(x,E,{\bf g} )=0~,~~ \label{sun0} \en with the constraint $\sum_{i=0}^{n-1}g_i{=}\frac{n(n{-}1)}{2}$.\\ \\ {\bf $D_n$ (so(2n))}: \eq \left( D_{n}({\bf g^{\dagger}}) \left( \frac{d}{dx} \right)^{-1} D_{n}({\bf g}) -\sqrt{P_K (x,E)} \left(\frac{d}{dx} \right) \sqrt{P_K (x,E)} \right)\psi(x,E,{\bf g})=0 \label{so2n0} \en\\ {\bf $B_n$ (so(2n+1))}: \eq \left( D_{n}({\bf g^{\dagger}}) D_{n}({\bf g}) + \sqrt{P_K (x,E)} \left(\frac{d}{dx} \right) \sqrt{P_K (x,E)} \right)\psi(x,E, {\bf g})=0 \label{so2n10} \en\\ {\bf $C_n$ (sp(2n))}: \eq \left( D_{n}({\bf g^{\dagger}}) \left(\frac{d}{dx} \right)D_{n}({\bf g}) -P_{K }(x,E) { \left(d \over dx \right)^{-1}}P_{K} (x,E) \right)\psi(x,E, {\bf g})=0 \label{sp2n0} \en\\ The correspondence we propose links the ground-state $Q^{(1)}$'s of (\ref{dall0}) to particular solutions $\psi$ (\ref{van}) of equations (\ref{sun0}--\ref{sp2n0}). 
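As an aside, the formal rules (\ref{def00}) and (\ref{intbyparts}) can be checked termwise on monomials with non-integer exponents; the sketch below (illustrative only, with arbitrarily chosen rational exponents) does so with sympy:

```python
# Termwise check of the formal rules for (d/dx)^{-1} on monomials x^s with
# non-integer s (illustrative; exponents chosen arbitrarily).
import sympy as sp

x = sp.symbols('x', positive=True)

def Dinv(expr):
    # formal (d/dx)^{-1}: antiderivative with no integration constant
    return sp.integrate(expr, x)

# (d/dx)(d/dx)^{-1} x^s = x^s, and the converse for s != 0
s = sp.Rational(2, 7)
assert sp.simplify(sp.diff(Dinv(x**s), x) - x**s) == 0
assert sp.simplify(Dinv(sp.diff(x**s, x)) - x**s) == 0

# integration by parts: Dinv(f*g') = f*g - Dinv(g*f'), valid here because
# f*g = x^(p+q) carries no constant term (p + q != 0)
p, q = sp.Rational(1, 3), sp.Rational(1, 5)
f, g = x**p, x**q
lhs = Dinv(f*sp.diff(g, x))
rhs = f*g - Dinv(g*sp.diff(f, x))
assert sp.simplify(lhs - rhs) == 0
```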
In order to clarify this statement we introduce an alternative basis of solutions $\{ \chi_i (x, E, {\bf g}) \}$ to (\ref{sun0}--\ref{sp2n0}), characterised by their behaviour near the origin \eq \chi_i(x, E, {\bf g}) \sim x^{\lambda_i}+ \dots\quad,\quad x\to 0~, \label{chib} \en where the $\lambda$'s are the ordered $(\lambda_0 <\lambda_1<\dots)$ roots of the appropriate indicial equation (see table~\ref{table2}). \begin{table}[tb] \begin{center} \begin{tabular}{|c|c|} \hline \rule[-0.5cm]{0cm}{1.2cm} \small Model & \small indicial equation \\ \hline \hline \rule[-0.5cm]{0cm}{1.cm} $A_{n-1}$ & $\prod_{i=0}^{n-1} (\lambda -g_i)=0$ \\ \hline \rule[-0.5cm]{0cm}{1.cm} $D_n$ & $(\lambda-h^{\vee}/2)^{-1} \prod_{i=0}^{n-1} (\lambda -g_i)(\lambda -h^{\vee}+g_i) =0$ \\ \hline \rule[-0.5cm]{0cm}{1.cm} $B_n$ & $\prod_{i=0}^{n-1} (\lambda -g_i)(\lambda -h^{\vee}+g_i) =0$ \\ \hline \rule[-0.5cm]{0cm}{1.cm} $C_n$ & $ (\lambda-n)\;\prod_{i=0}^{n-1}(\lambda-g_i)(\lambda-2n+g_i)=0$ \\ \hline \end{tabular} \end{center} \caption{\footnotesize Indicial equations. \label{table2}} \end{table} Writing $\psi$ as a linear combination of the $\chi$'s, we have in general \eq \psi(x,E,{\bf g}) = Q^{(1)}_{[0]}(E, {\bf g}) \, \chi_0 (x, E, {\bf g})+ Q^{(1)}_{[1]}(E, {\bf g}) \, \chi_1 (x, E, {\bf g})+\dots~. \label{psichi} \en If the zeros of $Q^{(1)}_{[0]}(E,{\bf g})$ are $ \{ E_i^{(1)} \}$, then for $E{\in} \{ E_i^{(1)} \}$ the function $x^{-\lambda_0} \psi(x,E,{\bf g})$ vanishes exceptionally both at $x{=}\infty$ and at $x{=}0$. This allows the coefficient $Q^{(1)}_{[0]}(E,{\bf g})$ to be identified with the spectral determinant for a boundary problem defined on the positive real axis (see, for example, \cite{Sha, Voros}). An alternative (dual) definition of the spectral functions $Q^{(1)}_{[0]}(E,{\bf g})$ in terms of the adjoint equations to~(\ref{sun0})--(\ref{sp2n0}) is briefly discussed in appendix~\ref{appb}. 
We claim that for classical Lie algebras with arbitrary degree of fusion $K$, the ground-state $Q^{(1)}(E,\gamma)$'s in (\ref{dall0}) and the functions $Q^{(1)}_{[0]}(E,{\bf g})$ in (\ref{psichi}) coincide up to a trivial normalisation, so that \eq { Q^{(1)}(E, {\bf \gamma}) \over Q^{(1)}(0, {\bf \gamma}) } = { Q^{(1)}_{[0]}( E, {\bf g}) \over Q^{(1)}_{[0]}(0, {\bf g})}~. \label{eqqq} \en {}Moreover, from the WKB approximation \eq \ln Q^{(1)}_{[0]}(-E,{\bf g}) = \kappa \; (-E)^{\hat{\mu}}+\dots~~~~(E\gg 0) \label{asqq} \en with \eq \hat{\mu}={ K(M+1) \over h^{\vee} M}~,~~~\kappa=\kappa\left({ h^{\vee} M \over K} , {h^{\vee} \over K} \right)~~ \en and \eq \kappa(a,b)= \int_{0}^{\infty} dx \left( (x^{a}+1)^{1/b} -x^{a/b} \right)= {\Gamma(1+1/a)\Gamma(1+1/b) \sin(\pi/b) \over \Gamma(1+1/a+1/b) \sin(\pi/b+\pi/a)}~. \en Therefore, in order to have a match between (\ref{asqq}) and (\ref{asQ}), we must set \eq \mu=\hat{\mu}~,~~~m_1=\kappa {\sin(\frac{\pi}{h^{\vee}}B_{11}) \over \sin (\frac{\pi}{h^{\vee}})}~,~~ \Omega=\exp \left(i {2\pi M\over K(M+1)} \right)~. \label{Omega} \en Given a particular ordering convention, the relationship between the twist parameters $\{ \gamma_a \}$ and the constants $\{ g_a\}$ is given in table~\ref{table3}. 
\begin{table}[tb] \begin{center} \begin{tabular}{|c|c|c|} \hline \rule[-0.5cm]{0cm}{1.2cm} \small Model & ordering~~$\forall~i<j$ & \small $\{ g_a \} \leftrightarrow \{\gamma_a \}$ \\ \hline \hline \rule[-0.5cm]{0cm}{1.cm} $A_{n-1}$ & $g_i<g_j$ &$\gamma_a= \alpha \left(\sum_{i=0}^{a-1} g_i - {a(h^{\vee}-1) \over 2} \right)~$ \\ \hline \rule[-0.5cm]{0cm}{1.cm} {} & {} &$\gamma_a= \alpha \left( \sum_{i=0}^{a-1} g_i - {a \over 2} h^{\vee} \right)~,~~(a=1,\dots, n-2)$ \\ $D_n$ & $g_i<g_j<h^{\vee}/2$ & $\gamma_{n-1}= {\alpha \over 2} \left( \sum_{i=0}^{n-1} g_i - {n \over 2} h^{\vee} \right)$ \\ {} & {} &$\gamma_{n}= {\alpha \over 2} \left( \sum_{i=0}^{n-2} g_i - g_{n-1} - {n-2 \over 2} h^{\vee} \right)$ \\ \hline \rule[-0.5cm]{0cm}{1.cm} $B_n$ & $g_i<g_j<h^{\vee}/2$ & $\gamma_a= \alpha \left( \sum_{i=0}^{a-1} g_i - {a \over 2} h^{\vee} \right)$ \\ \hline \rule[-0.5cm]{0cm}{1.cm} $C_n$ & $g_i<g_j<n$& $\gamma_a= \alpha \left(\sum_{i=0}^{a-1} g_i - a n \right)$ \\ {} & {}&$\gamma_n= { \alpha \over 2} \left( \sum_{i=0}^{n-1} g_i - n^2 \right)$ \\ \hline \end{tabular} \end{center} \caption{\label{table3} \footnotesize The relationship between the set of parameters $\{ g_a \} \leftrightarrow \{\gamma_a \}$ with $\alpha {=} 2K/Mh^\vee$.} \end{table} Various consistency checks, including the WKB approach and numerical work, support the correspondence both qualitatively and quantitatively. Finally, starting from equations (\ref{sun0}) to (\ref{sp2n0}), the Bethe ansatz equations and table~\ref{table3} were obtained with the help of a system of functional relations involving $\psi^{(1)}(x,E,{\bf g})=\psi(x,E,{\bf g})$ together with other auxiliary functions $\psi^{(a)}(x,E,{\bf g})$\,, $(a=2,\dots,rank(\mathfrak{g}))$ (see \S\ref{seca}--\S\ref{secc} and \cite{Dorey:1999pv, Dorey:2000ma}). We set\footnote{In the $A_{n-1}$ models $\psi_k^{(1)}(x,E, {\bf g})=\omega^{(n-1)k/2} y_{-k}(x,E, {\bf g})$, where $y_{k}$ is the function defined in \S3 of \cite{Dorey:2000ma}. 
} \eq \psi^{(a)}_k= \psi^{(a)}(\omega^{k}x, \Omega^{k}E, {\bf g}) \label{rotated} \en where \eq \Omega=\exp \left(i {2\pi M\over K(M+1)} \right) \en as in (\ref{Omega}), and \eq \omega=\Omega^{K/h^{\vee}}= \exp \left(i {2\pi \over h^{\vee} (M+1)} \right)~. \en For the simply-laced algebras the $\psi-$systems can then be written in the compact form \eq W[\psi_{- \frac{1}{ 2}}^{(a)}, \psi_{\frac{1}{2}}^{(a)}]= \prod_{b=1}^{rank(\mathfrak{g})} (\psi^{(b)})^{A_{ab}}~, \label{ssiade} \en where $A_{ab}{=} 2\delta_{ab}-2 B_{ab}$ is the incidence matrix of the corresponding Dynkin diagram and $W$ the Wronskian: \eq W[f,g]= f(x) \frac{d}{dx} g(x)- g(x) \frac{d}{dx}f(x)~. \label{2wr} \en Equation (\ref{ssiade}) is proven in \S\ref{anpsi} for $\mathfrak{g}{=}A_{n-1}$, and our numerical results indirectly support the validity of (\ref{ssiade}) for $\mathfrak{g}{=}D_n$. Currently, we have no analogous pseudo-differential equations for the exceptional Lie algebras but the similarity between (\ref{ssiade}), the relations proposed in ~\cite{Mukhin:2002fp, Mukhin:2002fp1, Mukhin:2002fp2} and the other functional equations (Y-systems and T-systems) discovered in the framework of integrable models \cite{Zamolodchikov:1991et, Kuniba:1992ev, Ravanini:1992fi, Kuniba:1993cn, Kuniba:1993nr,Kuniba:1994na} suggests the validity of (\ref{ssiade}) in its more general form. For the non simply-laced algebras our conjectures are \bea B_n&:&~~W[ \psi_{-\frac{1}{2}}^{(a)},\psi_{\frac{1}{2}}^{(a)}]=\psi^{(a-1)} \psi^{(a+1)};~~~ a=1,\dots,n-1,\nn \\ &~&~~ W[ \psi_{-\frac{1}{4}}^{(n)}, \psi_{\frac{1}{4}}^{(n)}]= \psi^{(n-1)}_{-\frac{1}{4}} \psi^{(n-1)}_{\frac{1}{4}}~. 
\eea (Our root convention is $(\alpha_i|\alpha_i){=}2$ for $i=1,2,\dots,n-1$ and $(\alpha_n|\alpha_n){=}1$.)\\[-5pt] \bea {}~C_n&:&~~W[\psi^{(a)}_{-\frac{1}{4}}, \psi^{(a)}_{\frac{1}{4}}] = \psi^{(a-1)} \psi^{(a+1)}~;~~~~ a=1,\dots, n-2, \nn \\ &~&~~W[\psi^{(n-1)}_{-\frac{1}{4}}, \psi^{(n-1)}_{\frac{1}{4}}] = \psi^{(n-2)}\; \psi^{(n)}_{-\frac{1}{4}} \psi^{(n)}_{\frac{1}{4}}~, \nn \\ &~&~~W[\psi^{(n)}_{-\frac{1}{2}}, \psi^{(n)}_{\frac{1}{2}}] = \psi^{(n-1)}~. \eea (Here, $(\alpha_i|\alpha_i){=}1$ for $i=1,2,\dots,n-1$ and $(\alpha_n|\alpha_n){=}2$.)\\[-5pt] \bea F_4&:&~~W[ \psi_{-\frac{1}{4}}^{(1)},\psi_{\frac{1}{4}}^{(1)}]=\psi^{(2)}~,\nn \\ &~&~~ W[ \psi_{-\frac{1}{4}}^{(2)}, \psi_{\frac{1}{4}}^{(2)}]= \psi^{(1)}\;\psi^{(3)}_{-\frac{1}{4}} \psi^{(3)}_{\frac{1}{4}}, \nn \\ &~&~~ W[ \psi_{-\frac{1}{2}}^{(3)}, \psi_{\frac{1}{2}}^{(3)}]= \psi^{(2)}\;\psi^{(4)}, \nn \\ &~&~~ W[ \psi_{-\frac{1}{2}}^{(4)}, \psi_{\frac{1}{2}}^{(4)}]= \psi^{(3)}. \eea \nobreak (Here, $(\alpha_1,\alpha_1){=}(\alpha_2,\alpha_2){=}1$ and $(\alpha_3,\alpha_3){=}(\alpha_4,\alpha_4){=}2$.)\\[-5pt] \bea G_2&:&~~W[ \psi_{-\frac{1}{2}}^{(1)},\psi_{\frac{1}{2}}^{(1)}]=\psi^{(2)}~,\nn \\ &~&~~ W[ \psi_{-\frac{1}{6}}^{(2)}, \psi_{\frac{1}{6}}^{(2)}]= \psi^{(1)}\;\psi^{(1)}_{-\frac{2}{6}} \psi^{(1)}_{\frac{2}{6}}. \eea (Here, $(\alpha_1|\alpha_1){=}3$ and $(\alpha_2|\alpha_2){=}1$.)\\ \goodbreak Again, these relations are not proven but we have indirect numerical evidence for $\mathfrak{g}{=}B_n$, $C_n$. Further details and numerical support for the above conjectures are provided in the following sections, which examine the $A$, $D$, $B$ and $C$ cases in turn. 
\resection{The $A_{n-1}$ models} \label{seca} The ODE for the $A_{n-1}$ models is \eq \Bigl((-1)^{n+1}D_n({\bf g})+P_K(x,E) \Bigr)\psi(x,E,{\bf g} )=0~,~~ \label{gnde} \en where the operator $D_n({\bf g})$ and the generalised potential $P_K(x,E)$ were defined in \S\ref{main}, and the additional constraint \eq \sum_{i=0}^{n-1}g_i=\frac{n(n{-}1)}{2} \label{gvan} \en ensures that the term $x^{-1}\frac{d^{n-1}}{dx^{n-1}}\,$ is absent. The function $\psi(x,E, {\bf g})$ is defined to be the most subdominant solution on the positive real axis, with asymptotic behaviour, for $M >K/(h^{\vee}{-}K)$, given by \eq \psi(x, E, {\bf g}) \sim {\cal N} \; x^{(1{-}n)M/2} \exp(-x^{M+1}/(M{+}1)) \label{asy} \en as $x \to\infty$. The coefficient ${\cal N}$ represents an $E$-- and ${\bf g}$--independent normalisation constant. The $K{=}1$ cases have been extensively discussed in \cite{Dorey:1999pv,Suzuki:1999hu,Dorey:2000ma,Bazhanov:2001xm}; they are related to the $WA_{n-1}$ conformal field theories with integer Virasoro central charge $c = n{-}1$. Alternatively, at particular values of the parameters ${\bf g}$ and $M$ they can also be put in correspondence with the minimal coset conformal field theories \eq {(\hat{A}_{n-1})_1 \times (\hat{A}_{n-1})_L \over (\hat{A}_{n-1})_{L+1}}~, \label{coset} \en with a simple relationship between $L$ in (\ref{coset}) and the parameter $M$ in~(\ref{pk}). The generalisation to integer $K{>}1$ comes from an observation by Sergei Lukyanov \cite{Luk-private} for the $A_1$ case, for which numerical and analytic support was later provided in \cite{Dorey:2003sv} and in \cite{Lukyanov:2006gv}. It is reasonable to conjecture that this generalisation works both for $A_{n-1}$ with $n{>}2$ and, up to minor modifications, for the other models to be discussed in this paper. 
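As a quick symbolic sanity check of the constraint (\ref{gvan}), the sketch below expands $D_3({\bf g})$ with sympy and confirms that the coefficient of the $x^{-1}\frac{d^{2}}{dx^{2}}$ term is $-(g_0{+}g_1{+}g_2{-}3)/x$, which vanishes exactly under (\ref{gvan}) for $n=3$ (illustrative only):

```python
# Symbolic check (n = 3): in the expansion of D_3(g) = D(g2-2) D(g1-1) D(g0),
# the coefficient of f''(x) is -(g0+g1+g2-3)/x, so the x^{-1} d^2/dx^2 term
# drops out exactly when g0+g1+g2 = 3 = n(n-1)/2.
import sympy as sp

x = sp.symbols('x', positive=True)
g0, g1, g2 = sp.symbols('g0 g1 g2')
f = sp.Function('f')(x)

def D(g, expr):
    # one first-order factor D(g) = d/dx - g/x
    return sp.diff(expr, x) - g/x*expr

expr = sp.expand(D(g2 - 2, D(g1 - 1, D(g0, f))))
coeff = sp.simplify(expr.coeff(sp.diff(f, x, 2)))

assert sp.simplify(coeff + (g0 + g1 + g2 - 3)/x) == 0
assert sp.simplify(coeff.subs(g2, 3 - g0 - g1)) == 0
```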
In analogy to (\ref{coset}), at particular values of ${\bf g}$ and $M$, the integer $K{>}1$ cases should correspond to the cosets \eq {\hat{\mathfrak{g}}_K \times \hat{\mathfrak{g}}_L \over \hat{\mathfrak{g}}_{K+L}}~, \label{cosetm} \en which describe conformal field theories with $\mathfrak{g}{=}A_{n-1},B_n,C_n, D_n$. In appendix~\ref{appa} we shall explain, in the simplest case, why potentials of such forms naturally lead to string patterns of roots of the sort mentioned in the introduction. \subsection{Negative-dimension dualities} \label{negative} It is interesting to note that there are formal duality relations among our pseudo-differential equations involving negative values of $n$ and $K$. Consider the $A_{n-1}$ ODEs with the twists ${\bf g}{=}\{0,1,\dots,n-1\}$: \eq \Bigl((-1)^{n+1} {d^{n} \over dx^n}+P_K(x,E) \Bigr)\psi(x,E)=0~. \label{gnde1} \en Setting $\tilde{\psi}(x,E)= P_K(x,E) \psi(x,E )$ and multiplying from the left by $({d \over dx})^{-n}$, the result is \eq \Bigl((-1)^{n+1} \left({d \over dx} \right)^{-n} + P_{-K}(x,E) \Bigr)\tilde{\psi}(x,E )=0~.~~ \label{gnde2} \en Comparing (\ref{gnde2}) with (\ref{gnde1}) and taking into account the boundary conditions, we see that there is a formal duality and a spectral equivalence between the initial $n^{\rm th}$-order ODE and the pseudo-differential equation (\ref{gnde2}): \eq \{ n, M, K \} \leftrightarrow \{-n,M,-K \}~. \label{dualan} \en Though the above manipulation might look purely formal, it strongly resembles previously-observed W-algebra dualities~\cite{Hornfeck:1994is}: \eq {(\hat{A}_{-n})_K \times (\hat{A}_{-n})_L \over (\hat{A}_{-n}) _{K+L}} \sim {(\hat{A}_n)_{-K} \times (\hat{A}_n)_{-L} \over (\hat{A}_n)_{-K -L}}~. \label{dualaa} \en A discussion of the precise relation between $L$ and the ODE parameters $\{ n, M, K, {\bf g} \} $ is not important for the current naive considerations and we shall postpone it to the future. 
Whilst the duality (\ref{dualan}) remains at the moment a purely formal observation, the second duality discussed in~\cite{Hornfeck:1994is} \eq {(\hat{D}_{-n})_K \times (\hat{D}_{-n})_L \over (\hat{D}_{-n})_{K+L}} \sim { (\hat{C}_{n})_{-K/2} \times (\hat{C}_{n})_{-L/2} \over (\hat{C}_{n})_{-K/2 -L/2}}\,, \label{dualdc} \en will lead us to the proposal (\ref{sp2n0}) for the $C_n$-related equations. (For the quantisation of the classical $D_n$ W-algebras in relation to the Miura opers see, for example, \cite{Lukyanov:1989gg}.) Negative-dimension dualities resembling those described here are also well known to group theorists; many more details can be found in \cite{CvitanovicWEB}. \subsection{Auxiliary functions and the $\psi$-system} \label{anpsi} The solution $\psi(x,E,{\bf g})$ of (\ref{gnde}) with the asymptotic behaviour (\ref{asy}) is not the only function associated to the $A_{n-1}$ Bethe ansatz equations. In order to derive the Bethe ansatz itself, a total of $n{-}1$ functions $\psi^{(1)},\dots,\psi^{(n-1)}$, one for each node of the $A_{n-1}$ Dynkin diagram, should be introduced. These auxiliary functions are solutions of generally more-complicated ordinary differential equations.
Following~\cite{Dorey:2000ma}, all the functions $\psi^{(a)}(x,E,{\bf g})$ decaying at large $x$ on the positive real axis can be constructed from $ \psi \equiv \psi^{(1)} $ as \eq \psi^{(a)} = W^{(a)}[\fract{1-a}{2},\fract{3-a}{2},\dots,\fract{a-1}{2} ] \equiv W^{(a)}[\psi_{{1-a \over 2}},\psi_{{3-a \over 2}},\dots,\psi_{a-1 \over 2} ]~, \label{psidef} \en where $a=1,2,\dots, n{-}1$, $W^{(a)}[f_{1},\dots,f_{a}]$ denotes the generalised Wronskian of the set of functions $\{f_{i}\}$ \eq W^{(a)}[f_{1},\dots,f_{a}] = {\bf det} \left[ \left(\vec{f},\frac{d}{dx} \vec{f},\dots,\frac{d^{a-1} }{dx^{a-1}}\vec{f} \right )\right] \en with $\vec{f}= ( f_1,f_2,\dots, f_{a} )$ (so that $W^{(2)}[f,g] \equiv W[f,g]$,~cf.\ eq.\,(\ref{2wr})\,), and $\psi_k$ denotes the `rotated' solution (\ref{rotated}). Finally, we fix the normalisation of $\psi^{(1)}$ by choosing \eq {\cal N}= { i^{(n-1)/2} \over \sqrt{n}} ~~\Longrightarrow~~ \psi^{(n)}(x,E,{\bf g})=1~. \en Since $\psi$ is a solution of an $n^{\rm th}$-order ODE, a naive counting of degrees of freedom shows that the order of the ODE satisfied by $\psi^{(a)}$ should be \eq { n! \over (n-a)! \;a!}~. \label{dime} \en The $(n-a+1)$ and the $(a)$ equations are related by a (${\bf g} \leftrightarrow {\bf g}^{\dagger}$)-conjugation \cite{Dorey:1999pv,Dorey:2000ma}, arising from the ${\mathbb Z}_2$ symmetry of the Dynkin diagram. Notice that the result (\ref{dime}) exactly matches the dimensions of the basic representations of $A_{n-1}$; these are again in one-to-one correspondence with the nodes of the Dynkin diagram.
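Definition (\ref{psidef}) and the counting (\ref{dime}) are easy to experiment with in computer algebra. The sketch below (ours, purely illustrative) builds the generalised Wronskian as a determinant of derivatives and checks the binomial counting for $n{=}5$:

```python
import math
import sympy as sp

x = sp.symbols('x')

def gen_wronskian(funcs):
    """Generalised Wronskian W^{(a)}[f_1,...,f_a]: the determinant whose
    k-th row holds the (k-1)-th derivatives of f_1,...,f_a."""
    a = len(funcs)
    return sp.det(sp.Matrix([[sp.diff(f, x, k) for f in funcs]
                             for k in range(a)]))

# W^{(2)}[f, g] reduces to the ordinary Wronskian f g' - f' g
f, g = sp.exp(x), sp.exp(2*x)
w2 = sp.simplify(gen_wronskian([f, g]) - (f*sp.diff(g, x) - sp.diff(f, x)*g))

# the naive counting n!/((n-a)! a!) gives the binomial coefficients, i.e. the
# dimensions of the basic representations of A_{n-1}; here n = 5
dims = [math.comb(5, a) for a in range(1, 5)]
print(w2, dims)
```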
Fortunately, in order to derive the Bethe ansatz equations an explicit knowledge of the remaining $n{-}2$ ODEs is unnecessary: the derivation of \cite{Dorey:2000ma} was instead based on the Stokes relation associated to (\ref{gnde}) \eq \sum_{k=0}^{n} (-1)^k C^{(k)}(E,{\bf g})\, y_k(x,E,{\bf g}) = 0 ~, \label{nstokes} \en where, according to~\cite{Dorey:2000ma}, \eq y_k(x,E,{\bf g})=\omega^{(n-1)k/2} \psi(\omega^{-k}x, \Omega^{-k} E,{\bf g} )~, \en $C^{(0)}(E,{\bf g})=1$ and the Stokes multipliers $C^{(k)}(E,{\bf g})$ with $k{>}0$ are analytic functions of $E$ and ${\bf g}$. Stokes relations for the $D_n, B_n$ and $C_n$ equations~(\ref{so2n0}), (\ref{so2n10}) and (\ref{sp2n0}) also exist, but we have encountered some subtle complications\footnote{In the $D_n$ case these complications are probably a consequence of the fact that the ODEs associated to the ${\mathbb Z}_2$-conjugate nodes in the Dynkin diagram are somehow more fundamental than eq.~(\ref{so2n0}). This latter equation is more naturally associated to the first node on the `tail' of the diagram.} in generalising the approach of \cite{Dorey:2000ma} to these cases. Instead, the strategy here is based on the conjectured validity of a simple `universal' system of functional equations among the $\psi^{(a)}$ functions, which leads immediately to the Bethe ansatz equations and bypasses the analysis of Stokes relations. We shall now prove the $A_{n-1}$ $\psi$-system~(\ref{ssiade}): \eq \psi^{(a-1)} \psi^{(a+1)} = W [\psi_{- \frac{1}{ 2}}^{(a)}, \psi_{\frac{1}{2} }^{(a)}]~, \quad \psi^{(0)}=\psi^{(n)}=1~. \label{starstar} \en The proof is based on the observation that determinants satisfy functional equations, in particular the so-called Jacobi identity \eq \Delta \; \Delta[p, q|p, q] = \Delta[p|p] \; \Delta[q|q] - \Delta[p|q] \; \Delta [q|p]~. 
\label{jacobi} \en Here, $\Delta$ is the determinant of an $(a+1) \times (a+1)$ matrix and $\Delta[p_1, p_2 |q_1, q_2]$ denotes the determinant of the same matrix with the $p_{1, 2}-{\rm th }$ rows and $q_{1, 2}-{\rm th}$ columns removed. In order to prove (\ref{starstar}) we have to consider three different cases: $a{=} 1$, $1 {<} a {<} n {-} 1$ and $a {=} n {-} 1$. The $a{=}1$ case follows from the definition of $\psi^{(2)}$ given in~(\ref{psidef}). Equation~(\ref{starstar}) for $1 {<} a {<} n {-} 1$ follows from the following chain of identities \bea & &\prod_b (\psi^{(b)})^{A_{ab}} = W^{(a+1)} [-\fract{a}{2}, -\fract{a-2}{2},\dots, \fract{a-2}{2}, \fract{a}{2}] \; W^{(a-1)} [- \fract{a-2}{2}, \dots, \fract{a-2}{2}]\nn \\ & &\phantom{ccc} = (- 1)^{(a - 1)} W^{(a + 1)} [-\fract{a-2}{2} , \dots, \fract{a-2}{2}, - \fract{a}{2}, \fract{a}{2}] \; W^{(a - 1)} [-\fract{a-2}{2}, \dots, \fract{a-2}{2} ] \nn \\ & &\phantom{ccc}= (- 1)^{(a-1)} \Delta \, \Delta [a, a + 1|a, a + 1] \eea where we have identified \eq \Delta \equiv W^{(a + 1)} [-\fract{a-2}{2}, \dots, \fract{a-2}{2}, - \fract{a}{2}, \fract{a}{2}] \en and \eq \Delta[a, a + 1|a,a+ 1]=W^{(a - 1)} [- \fract{a-2}{2} , \dots, \fract{a-2}{2} ]. \en This is nothing but the LHS of the Jacobi identity (\ref{jacobi}). 
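As a quick sanity check, the identity (\ref{jacobi}), a form of the Desnanot--Jacobi determinant identity, is easily verified numerically on a random matrix. The sketch below (ours, purely illustrative) uses raw minors, with no cofactor signs:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

def minor(M, rows, cols):
    """Determinant of M with the listed rows and columns removed."""
    r = [i for i in range(M.shape[0]) if i not in rows]
    c = [j for j in range(M.shape[1]) if j not in cols]
    return np.linalg.det(M[np.ix_(r, c)])

# Delta * Delta[p,q|p,q] = Delta[p|p] Delta[q|q] - Delta[p|q] Delta[q|p]
p, q = 1, 3
lhs = np.linalg.det(A) * minor(A, [p, q], [p, q])
rhs = minor(A, [p], [p]) * minor(A, [q], [q]) - minor(A, [p], [q]) * minor(A, [q], [p])
print(abs(lhs - rhs))  # zero up to rounding error
```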
Then an application of the Jacobi identity naturally proves (\ref{starstar}) in the following way: \bea & &\prod_b (\psi^{(b)})^{A_{ab}} = (- 1)^{(a-1)} (\Delta [a|a] \; \Delta [a+1|a+1] - \Delta [a|a+1]\; \Delta[a+1|a]) \nn \\ & &\phantom{ccc} = (- 1)^{(a-1)} {\big(}W^{' (a)} [- \fract{a-2}{2}, \dots, \fract{a-2}{2} , \fract{a}{2}]\; W^{(a)} [- \fract{a-2}{2} , \dots, \fract{a-2}{2}, - \fract{a}{2}] \nn \\ & &\phantom{cccccccccc} - W^{' (a)} [-\fract{a-2}{2}, \dots, \fract{a-2}{2}, -\fract{a}{2}]\; W^{(a)} [-\fract{a-2}{2}, \dots, \fract{a-2}{2}, \fract{a}{2}]{\big )} \nn \\ & & \phantom{ccc} = W^{' (a)} [-\fract{a-2}{2}, \dots, \fract{a-2}{2} , \fract{a}{2}]\; W^{(a)} [-\fract{a}{2}, - \fract{a-2}{2}, \dots,\fract{a-2}{2}] \nn \\ & & \phantom{cccccccccc} - W^{' (a)} [- \fract{a}{2}, - \fract{a-2}{2},\dots, \fract{a-2}{2} ]\; W^{(a)} [-\fract{a-2}{2} , \dots, \fract{a-2}{2}, \fract{a}{2}] \nn \\ & &\phantom{ccc} = \psi_{-\frac{1}{2}}^{(a)} \psi_{\frac{1}{2}}^{' (a)} - \psi_{- \frac{1}{2}}^{' (a)} \psi_{\frac{1}{2}}^{(a)} = W[\psi_{- \frac{1}{2}}^{(a)}, \psi_{\frac{1}{2}}^{(a)}]~. \eea Finally, for $a {=} n {-} 1$ the previous calculation shows that \eq \psi^{(n - 2)} \psi^{(n)} = W[\psi_{- \frac{1}{2}}^{(n - 1)}, \psi_{\frac{1}{2}}^{(n - 1)}]~. \en Choosing ${\cal N}={ i^{(n-1)/2} \over \sqrt{n}}$ gives $\psi^{(n)}{=}1$ and (\ref{starstar}) is proved. \subsection{The $A_{n-1}$ Bethe ansatz equations} \label{anBAe} In this section we shall show that the BAEs are a simple consequence of the $\psi$-system. First, recall the alternative $\chi$-basis of solutions~(\ref{chib}) and the formal ordering of table~\ref{table3} \eq g_i<g_j~~~~\forall \ i<j~. \label{gordering} \en These solutions are defined by their behaviour as $x \to 0$ \eq \chi_i(x, E, {\bf g}) \sim x^{\lambda_i}+ O (x^{\lambda_i + n})~~~,~~~~ \lambda_i=g_i~~. 
\en Next, expand $\psi(x, E, {\bf g})$ in the $\chi$-basis \eq \psi(x, E, {\bf g}) = \sum_{i = 0}^{n - 1} Q_{[i]}^{(1)} (E, {\bf g}) \chi_i (x, E, {\bf g})~, \en and use the property \eq \chi_i (\omega^{k} x, \Omega^{k} E, {\bf g}) = \omega^{k \lambda_i} \chi_i (x, E, {\bf g})~, \en to obtain \eq \psi_k (x, E, {\bf g}) = \sum_{i = 0}^{n - 1} Q_{[i]}^{(1)} (\Omega^{k} E, {\bf g} )\, \omega^{k \lambda_i} \chi_i (x, E, {\bf g})~. \en Expanding the Wronskian determinants in this new basis leads to \bea \psi^{(a)} (x, E, {\bf g}) & = & \sum_{{\bf i}} \left( \prod_j Q_{[i_j]}^{(1)} (\Omega^{j} E, {\bf g}) \, \omega^{j \lambda_{i_j} } \right) W^{(a)} [\chi_{i_\frac{1-a}{2}}, \dots, \chi_{i_\frac{a-1}{2}}] \nn \\ & = & {\sum_{\bf i}}' Q_{[i_{\frac{1-a}{2}},\dots, i_{\frac{a-1}{2}}]}^{(a)} (E, {\bf g}) W^{(a)}[\chi_{i_\frac{1-a}{2}}, \dots, \chi_{i_\frac{a-1}{2}}] \label{expanding} \eea where $j=\frac{1-a}{2}, \frac{3-a}{2}, \dots,\frac{a-1}{2}$, $\sum_{{\bf i}}$ denotes the sum from $0$ to $n{-}1$ over each index in the set $\{ i_j \}$, while in ${\sum_{{\bf i}}}'$ there is the additional constraint $ 0 \le i_{\frac{1-a}{2}} \le i_{\frac{3-a}{2}} \le \dots \le i_{\frac{a-1}{2}}$. A family of $x$-independent equations is obtained by matching terms with the same power of $x$ on the LHS and RHS of (\ref{starstar}). A priori, only the highest, second-highest, lowest and second-lowest orders can be identified unambiguously from each determinant. We shall extract the leading orders, though similar results can be obtained from the subdominant ones.
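For the simplest case $n{=}2$, $a{=}1$ this leading-order matching can be carried out explicitly with computer algebra. In the sketch below (ours; the symbols Qm, Qp, Qbm, Qbp stand for the expansion coefficients $Q^{(1)}_{\mp1/2}$ and $\bar{Q}^{(1)}_{\mp1/2}$, kept as free symbols) the Wronskian of the two leading-order expansions reproduces the combination $\omega^{\frac{1}{2}(\lambda_1-\lambda_0)}Q_{-\frac{1}{2}}\bar{Q}_{\frac{1}{2}}-\omega^{\frac{1}{2}(\lambda_0-\lambda_1)}Q_{\frac{1}{2}}\bar{Q}_{-\frac{1}{2}}$ exactly:

```python
import sympy as sp

x, l0, l1, w = sp.symbols('x lambda0 lambda1 omega')
Qm, Qp, Qbm, Qbp = sp.symbols('Qm Qp Qbm Qbp')  # Q_{-1/2}, Q_{1/2}, Qbar_{-1/2}, Qbar_{1/2}

# leading small-x behaviour of the rotated solutions psi_{-1/2} and psi_{+1/2}
psi_m = Qm*w**(-l0/2)*x**l0 + Qbm*w**(-l1/2)*x**l1
psi_p = Qp*w**( l0/2)*x**l0 + Qbp*w**( l1/2)*x**l1

W = sp.expand(psi_m*sp.diff(psi_p, x) - sp.diff(psi_m, x)*psi_p)
target = (l1 - l0)*x**(l0 + l1 - 1)*(
    w**((l1 - l0)/2)*Qm*Qbp - w**((l0 - l1)/2)*Qp*Qbm)

resid = sp.simplify(W - sp.expand(target))
print(resid)  # 0
```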
Setting \eq Q_k^{(a)} (E,{\bf g}) = Q^{(a)}_{[0, \dots, a - 1]} (\Omega^{k} E,{\bf g})~,~~~ \bar{Q}_k^{(a)} (E,{\bf g}) = Q_{[0, \dots, a - 2, a]}^{(a)} (\Omega^{k} E,{\bf g}) \en we have \bea \psi_k^{(a)} (x, E,{\bf g})& = &\omega^{k \alpha_a} Q_k^{(a)}(E,{\bf g}) W^{(a)} [\chi_0, \dots, \chi_{a- 1}] + \nn \\ & & \omega^{k (\alpha_{a + 1} -\lambda_{a - 1})} \bar{Q}_k^{(a)} (E,{\bf g}) W^{(a)} [\chi_0, \dots, \chi_{a - 2}, \chi_a] + \dots \nonumber~.\\ & & \label{psiexpansion} \eea The orders of the first and the second terms in (\ref{psiexpansion}) are $\alpha_a{-} a (a{-} 1) / 2$ and $\alpha_{a + 1} {-} \lambda_{a - 1} {-} a (a {-} 1) / 2$ respectively, where \eq \alpha_a = \sum_{i = 0}^{a - 1} \lambda_i~. \en Substituting (\ref{psiexpansion}) into (\ref{starstar}), comparing the leading terms of both sides for small $x$ and using the relation \begin{align*} W^{(a+1)} W^{(a-1)} &= W[W^{(a)}, \hat{W}^{(a)}]~, \\ W^{(a)}&:=W^{(a)} [\chi_0, \dots, \chi_{a- 1}]~,& \hat{W}^{(a)}&:=W^{(a)} [\chi_0, \dots, \chi_{a-2},\chi_a]& \end{align*} (also proved through the Jacobi identity), we find \eq Q^{(a + 1)} Q^{(a - 1)} = \omega^{\frac{1}{2} (\lambda_{a} - \lambda_{a-1})} Q_{- \frac{1}{2}}^{(a)} \bar{Q}_{\frac{1}{2}}^{(a)} - \omega^{\frac{1}{2} (\lambda_{a-1}-\lambda_{a} )} Q_{\frac{1}{2}}^{(a)} \bar{Q}_{- \frac{1}{2}}^{(a)}~,~~ \en with $Q^{(0)}=1$. Finally, let $E^{(a)}_i$ be a zero of $Q^{(a)}(E, {\bf g})$. Evaluating the above equation at $E = \Omega^{1 / 2} E^{(a)}_i$ and at $E = \Omega^{- 1/ 2} E^{(a)}_i$, and dividing the two equations thus obtained, we find the $A_{n-1}$ Bethe ansatz equations \eq \prod_{ b=1}^{n-1} \Omega^{B_{ab}\gamma_b}_{\phantom a} \frac {Q^{(b)}_{B_{ab}}(E^{(a)}_{i})} {Q^{(b)}_{-B_{ab}}(E^{(a)}_{i})}= -1\,,\qquad i=0,1,2,\dots~, \label{dall1} \en where \eq \gamma_a={ 2 K \over M h^{\vee}}\left(\sum_{i = 0}^{a - 1} g_i-a{(n-1) \over 2} \right)~.
\label{gaan} \en Notice that in writing relation (\ref{gaan}) we have used the identity \eq { K \over M h^{\vee}}(\lambda_a-\lambda_{a-1})= - \sum_{b=1}^{n-1} B_{ab} \; \gamma_b~, \en the ordering (\ref{gordering}) and imposed the constraint $\gamma_0{=}\gamma_n{=}0$. As shown in table~\ref{taba4}, for $K{=}1$ there is very good agreement between the IM results obtained from the solution of a suitable non-linear integral equation~(NLIE)~(see \S6 in \cite{Dorey:2000ma}) and the direct numerical solution of the ODE. \begin{tab}[ht] \begin{center} \begin{tabular}{|l | l | l|} \hline ~Level &~~~~~~~~$A_4$ NLIE &~~~ODE numerics \\ \hline ~~$E_0^{(1)}$~~&~~~ 14.0495626907~~~ &~~~14.0495626922~~~\\ ~~$E_1^{(1)}$~~&~~~ 47.7146839354~~~ &~~~47.7146839363~~~\\ ~~$E_2^{(1)}$~~&~~~ 95.1785845453~~~ &~~~95.1785845456~~~\\ ~~$E_3^{(1)}$~~&~~~ 154.202021469~~~ &~~~154.202021470~~~\\ ~~$E_4^{(1)}$~~&~~~ 223.483044292~~~ &~~~223.483044292~~~\\ \hline \end{tabular} \caption{\footnotesize Comparison of IM results ($A_4$ NLIE) with the direct numerical solution of the $A_4$ ODE with $K{=}1$, $M{=}10/21$ and ${\bf g}{=} ( 0.2,1.02,2.3,3.421)$. \label{taba4}} \end{center} \end{tab} Table~\ref{taba42} also shows the good agreement between the IM results obtained using the spin-$1$ NLIEs \cite{Suzspin,Dunning:2002tt} and the exact solution of the ODE with $K{=}2$ (see \S\ref{b1}\,). 
\begin{tab}[ht] \begin{center} \begin{tabular}{|l | l | l|} \hline Level &~~~~~~$A_1$ NLIE with $K{=}2$ &~~~~~~~~~~ODE numerics \\ \hline ~$E_0^{(1)}$~~& $1.49259741085 \pm 1.60304589242i$~ &$1.49259741085 \pm 1.60304589242 i$~~~\\ ~$E_1^{(1)}$~~& $2.31180377628 \pm 2.38537059826i$~ &$2.31180377628 \pm 2.38537059826i$~~~\\ ~$E_2^{(1)}$~~& $2.91183770898 \pm 2.97068128676i$~ &$2.91183770898 \pm 2.97068128676i$~~~\\ ~$E_3^{(1)}$~~& $3.40837129214 \pm 3.45880577384i$~ &$3.40837129216\pm 3.45880577388i$~~~\\ ~$E_4^{(1)}$~~& $3.84143464742 \pm 3.88626414305i$~ &$3.84143464640\pm3.88626414641i$~\\ \hline \end{tabular} \caption{\footnotesize Comparison of IM results ($A_1$ NLIE) with the exact solution of the $A_1$ ODE with $K{=}2$, $M{=}1$ and $g_0 = 0$. The set $\{2 E_i^{(1)}\}$ are the exact eigenvalues of the $B_1$ linear ODE of \S\ref{b1}. \label{taba42}} \end{center} \end{tab} \resection{The $D_n$ models} \label{secd} The $D_n$ pseudo-differential equation~(\ref{so2n0}) is \eq \left( D_{n}({\bf g^{\dagger}}) \left( \frac{d}{dx} \right)^{-1} D_{n}({\bf g}) - \sqrt{P_{K }(x,E)} \left(\frac{d}{dx} \right) \sqrt{P_{K }(x,E)} \right)\psi(x,E,{\bf g})=0~. \label{so2n2} \en Following the $A_{n-1}$ example, we start from the solution $\psi(x,E,{\bf g})$ of (\ref{so2n2}) with asymptotic behaviour \eq \psi(x,E,{\bf g}) \sim {\cal N} \; x^{-h^{\vee} M/2} \exp \left(- {x^{M+1} \over M+1} \right)\quad,\quad (M >K/(h^{\vee}{-}K) ) \en as $x \to\infty$ on the positive real axis, and introduce the alternative basis of solutions~(\ref{chib}) \eq \chi_i (x, E, {\bf g})~,~~~ i = 0,1, \dots, 2n - 1 \en characterised by their behaviour near the origin \eq \chi_i(x, E, {\bf g}) \sim x^{\lambda_i}+ O (x^{\lambda_i + 2n-1})~~~,~~~~ \left\{ \begin{array}{ll} \lambda_i=g_i~, & {\rm for}~ i \le n-1, \\ \lambda_i=h^{\vee}-g_{2n-1-i}~, & {\rm for}~ i>n-1. \end{array} \right. 
\label{lambdag} \en In (\ref{lambdag}) the parameters $\lambda_i$ represent the $2n$ solutions of the indicial equation (see table~\ref{table2}) with the ordering \eq ~~~~g_i< g_j\le h^{\vee}/2~,~~~ \lambda_i < \lambda_j~,~~~~\forall \ i<j~. \en \subsection{The $\psi$-system and the $D_n$ Bethe ansatz equations} \label{so2npsi} To extract the $D_n$ BAE, we start from \eq \psi^{(1)}_k= \psi_k(x,E, {\bf g})= \psi(\omega^{k} x,\Omega^{k} E,{\bf g} )~, \en where $\Omega$ and $\omega$ are as defined in~(\ref{Omega}) and (\ref{rotated}), and assume the validity, for a suitable value of the normalisation constant ${\cal N}$, of the $\psi$-system (\ref{ssiade}) \bea W[ \psi^{(a)}_{-\frac{1}{2}} ,\psi^{(a)}_{\frac{1}{2}} ] &=& \psi^{(a-1)} \psi^{(a+1)}~,~~a=1,\dots,n-3 \nn \\ W[\psi^{(n-2)}_{-\frac{1}{2}} ,\psi^{(n-2)}_{\frac{1}{2}} ] &=& \psi^{(n-3)} \psi^{(n-1)} \psi^{(n)}, \\ W[ \psi^{(n-1)}_{-\frac{1}{2}} ,\psi^{(n-1)}_{\frac{1}{2}} ]&=& W[ \psi^{(n)}_{-\frac{1}{2}} ,\psi^{(n)}_{\frac{1}{2}} ]= \psi^{(n-2)}. \nn \label{dnpsi} \eea Equations (\ref{dnpsi}) and the Jacobi identity (\ref{jacobi}) used in reverse imply the following relations linking the remaining functions $\psi^{(a)}(x,E,{\bf g})$ to $\psi^{(1)}(x,E,{\bf g})$: \eq \phi^{(a)} \equiv \psi^{(a)}= W^{(a)}[\psi_{\frac{1-a}{2}},\psi_{\frac{3-a}{2}},\dots, \psi_{\frac{a-1}{2}}]~,~~~a=1,\dots, n-2~, \label{df1} \en \eq \phi^{(n-1)} \equiv \psi^{(n-1)}\psi^{(n)}=W^{(n-1)}[\psi_{\frac{2-n}{2}},\psi_{\frac{4-n}{2}},\dots, \psi_{\frac{n-2}{2}}]~, \label{df2} \en and \eq \phi^{(n)} \equiv \psi^{(n-1)}_{-\frac{1}{2}} \psi^{(n-1)}_{\frac{1}{2}}+ \psi^{(n)}_{-\frac{1}{2}} \psi^{(n)}_{\frac{1}{2}} = W^{(n)}[\psi_{\frac{1-n}{2}},\psi_{\frac{3-n}{2}},\dots, \psi_{\frac{n-1}{2}}]~.
\label{df3} \en Now notice that the auxiliary functions $\phi^{(a)}(x,E,{\bf g})$ defined in (\ref{df1}), (\ref{df2}) and (\ref{df3}) satisfy an $A$-type $\psi$-system \eq \phi^{(a-1)} \phi^{(a+1)} =W[ \phi^{(a)}_{-\frac{1}{2}} ,\phi^{(a)}_{\frac{1}{2}} ]~,~~~a=1,\dots, n-1. \label{antype} \en Therefore, the arguments applied in \S\ref{anBAe} go through in the same way: \bea \!\!\!\phi_k^{(a)} (x, E,{\bf g})& = &\omega^{k \alpha_a} \hat{Q}_k^{(a)}(E,{\bf g}) W^{(a)} [\chi_0, \dots, \chi_{a- 1}] + \nn \\ & & \omega^{k (\alpha_{a + 1} -\lambda_{a - 1})} \bar{Q}_k^{(a)} (E,{\bf g}) W^{(a)} [\chi_0, \dots, \chi_{a - 2}, \chi_a] + \dots \label{above1} \eea ($a=1,2,\dots,n-1$). The orders of the first and the second terms in (\ref{above1}) are given by $\alpha_a{-} a (a {-} 1) / 2$ and $\alpha_{a + 1} {-} \lambda_{a - 1} {-} a (a {-} 1) / 2$ respectively, with $\alpha_a = \sum_{i = 0}^{a - 1} \lambda_i$ and \eq \hat{Q}_k^{(a)} (E,{\bf g}) = \hat{Q}^{(a)}(\Omega^{k} E,{\bf g})~, ~~~ \bar{Q}_k^{(a)} (E,{\bf g}) =\bar{Q}^{(a)} (\Omega^{k} E,{\bf g})~ \en are $\phi$-related spectral determinants. Using equation (\ref{antype}) we establish the following identity among $\phi$-related spectral determinants \eq \hat{Q}^{(a + 1)} (E) \hat{Q}^{(a - 1)} (E) = \omega^{\frac{1}{2} (\lambda_{a} - \lambda_{a-1})} \hat{Q}_{- \frac{1}{2}}^{(a)}(E) \bar{Q}_{\frac{1}{2}}^{(a)}(E) - \omega^{\frac{1}{2} (\lambda_{a-1}-\lambda_{a} )} \hat{Q}_{\frac{1}{2}}^{(a)} (E) \bar{Q}_{- \frac{1}{2}}^{(a)}(E); \en which leads to \eq {\hat{Q}^{(a- 1)}_{-\frac{1}{2}} (E^{(a)}_i) \over \hat{Q}^{(a - 1)}_{\frac{1}{2}} (E^{(a)}_i)} {\hat{Q}^{(a)}_1( E^{(a)}_i) \over \hat{Q}^{(a)}_{-1} ( E^{(a)}_i)} {\hat{Q}^{(a+ 1)}_{-\frac{1}{2}} (E^{(a)}_i) \over \hat{Q}^{(a + 1)}_{\frac{1}{2}}(E^{(a)}_i)} =- \Omega^{ {\alpha \over 2} (\lambda_a-\lambda_{a-1})} \label{fBA} \en with $a=1,2,\dots, n-1$ and $\alpha{=}{2 K \over M h^{\vee}}$. 
We then make the following identifications \bea \hat{Q}^{(0)}(E,{\bf g}) &=& Q^{(0)}(E,{\bf g})=1;\\ \hat{Q}^{(a)}(E,{\bf g}) &=& Q^{(a)}(E,{\bf g})~,~~~(a=1,\dots,n-2) \label{DQ1}; \\ \hat{Q}^{(n-1)}(E,{\bf g}) &=& Q^{(n-1)} (E,{\bf g})\; Q^{(n)} (E,{\bf g})~~ \label{DQ2}; \\ \hat{Q}^{(n)}(E,{\bf g}) &=& Q^{(n-1)}_{-\frac{1}{2}} (E,{\bf g})\; Q^{(n-1)}_{\frac{1}{2}} (E,{\bf g}), \label{DQ3} \eea which reflect the relations among the $\phi$'s and the $\psi$'s in the above. In (\ref{DQ3}) we have implicitly assumed \eq \psi^{(n)}(x,E,{\bf g})= o(\psi^{(n-1)}(x,E,{\bf g})) ~ \en as $x \rightarrow 0$ and also that (see the discussion in \S\ref{sd2}) \eq Q^{(n-1)} (E,\{g_0,\dots,g_{n-2},g_{n-1} \}) \equiv Q^{(n)}( E,\{g_0,\dots,g_{n-2},h^{\vee}-g_{n-1} \})~. \label{dndd1} \en Plugging relations (\ref{DQ1}), (\ref{DQ2}) and (\ref{DQ3}) into (\ref{fBA}) and using (\ref{dndd1}) it is easy to check that (\ref{fBA}) can be recast in the universal form (\ref{dall0}) \eq \prod_{ b=1}^{n} \Omega^{B_{ab}\gamma_b}_{\phantom a} \frac {Q^{(b)}_{B_{ab}}(E^{(a)}_{i},\gamma)} {Q^{(b)}_{-B_{ab}}(E^{(a)}_{i},\gamma)}= -1\,,\qquad i=0,1,2,\dots~ \label{dall2} \en where $B_{ab}$ is the $D_n$-related matrix defined according to (\ref{cab}) and we have imposed the extra condition \eq \gamma_{n}-\gamma_{n-1}=n-1-g_{n-1} \label{gammann} \en to fix the exact $\{ g_a \} \leftrightarrow \{ \gamma_a \}$ relation as given in table~\ref{table3}. Condition (\ref{gammann}) guarantees that when $g_{n-1}{-}n{+}1{=}0$ the operator $(d/dx)^{-1}$ in (\ref{so2n2}) acts directly on a $d/dx$ and the relevant equation reduces to a $(2n{-}1)^{\rm th}$-order ODE. When this occurs $\gamma_n{=}\gamma_{n-1}$ and so the pair of ${\mathbb Z}_2$-conjugate nodes of the Dynkin diagram are `twisted' in exactly the same way. Further, a change of sign in the RHS of (\ref{gammann}) swaps $\gamma_n$ and $\gamma_{n-1}$, a property that naturally reflects the presence of the ${\mathbb Z}_2$-symmetry in the $D_n$ Dynkin diagram.
All these properties are confirmed by the analysis of \S\ref{sd2} and \S\ref{a3d3} and by $12$ digits of numerical agreement at $K{=}1$ with ${\bf g}{=} \{ 0,1,\dots,n-1 \}$, \eq \gamma_a= \alpha \left({(a-1)a \over 2} - a {h^{\vee} \over 2} \right)~,~~~ \gamma_{n}=\gamma_{n-1}=- \frac{n h^{\vee}}{4}~, \en between NLIE and ODE results. Table~\ref{tabd4} shows the (still-excellent) agreement at $K{=}1$ away from the $\gamma_n=\gamma_{n-1}$ surface. Appropriate $K{>}1$ NLIEs are unknown, but numerical results qualitatively reproduce the expected IM scenario of \S\ref{BAe}. Further analytic support for the proposed ODE/IM correspondence for $D_n$ is given in \S\ref{sd2}, \S\ref{a3d3} and \S\ref{sgso2n} below. \noindent \begin{tab}[ht] \begin{center} \begin{tabular}{| l | l | l | } \hline ~Level &~~~~~~~~$D_4$ NLIE &~~~ODE numerics \\ \hline ~~$E_0^{(1)}$~~&~~~17.8625636061~~~ &~~~17.8625636061~~~\\ ~~$E_1^{(1)}$~~&~~~50.2942213433~~~ &~~~50.2942213430~~~\\ ~~$E_2^{(1)}$~~&~~~92.8267466445~~~ &~~~92.8267466442~~~\\ ~~$E_3^{(1)}$~~&~~~143.348705065~~~ &~~~143.348705065~~~\\ ~~$E_4^{(1)}$~~&~~~200.738324171~~~ &~~~200.738324172~~~\\ \hline \end{tabular} \caption{\footnotesize Comparison of IM results ($D_4$ NLIE) with the direct numerical solution of the $D_4$ pseudo-differential equation with $K{=}1$, $M{=}1/3$ and ${\bf g} =(0.2, 1.1,2.3,2.95 )$. \label{tabd4}} \end{center} \end{tab} \subsection{Example 1: $D_2 \sim A_1 \oplus A_1$} \label{sd2} The $D_2$ algebra can be decomposed into a pair of independent $A_1$ algebras, mirroring an analogous factorisation in the BAE. In this section we shall prove that the solution $\psi(x,E,{\bf g})$ to (\ref{so2n2}) with $n{=}2$ is the product of two solutions of $A_1$-related ODEs.
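The two differential identities on which this factorisation rests can be checked symbolically before going through the computation. The sketch below (ours, purely illustrative) assumes only $\chi_i''=P_i\,\chi_i$ with potentials differing by $(\rho-\sigma)/4x^2$, and verifies that $Z=\chi_1\chi_2$ and the Wronskian $W$ of $\chi_1,\chi_2$ satisfy $Z'''=2P'Z+4PZ'+{\rho - \sigma \over 4 x^2}W$, with $P$ evaluated at the mean parameter $(\rho+\sigma)/2$, together with $W'={\sigma - \rho \over 4 x^2}Z$:

```python
import sympy as sp

x, rho, sigma = sp.symbols('x rho sigma')
P0 = sp.Function('P0')(x)          # k-independent part of the potential
P = lambda k: P0 + k/(4*x**2)      # P(x,E,k) = P0(x,E) + k/(4 x^2)
c1 = sp.Function('chi1')(x)
c2 = sp.Function('chi2')(x)

Z = c1*c2
W = c1*sp.diff(c2, x) - sp.diff(c1, x)*c2

def reduce_derivs(expr):
    """Rewrite third and second derivatives of chi1, chi2 via chi'' = P chi."""
    expr = expr.subs({sp.Derivative(c1, (x, 3)): sp.diff(P(rho)*c1, x),
                      sp.Derivative(c2, (x, 3)): sp.diff(P(sigma)*c2, x)})
    expr = expr.subs({sp.Derivative(c1, (x, 2)): P(rho)*c1,
                      sp.Derivative(c2, (x, 2)): P(sigma)*c2})
    return sp.expand(expr)

Pm = P((rho + sigma)/2)            # potential at the mean parameter

id1 = reduce_derivs(sp.diff(Z, x, 3)) \
      - sp.expand(2*sp.diff(Pm, x)*Z + 4*Pm*sp.diff(Z, x)
                  + (rho - sigma)/(4*x**2)*W)
id2 = reduce_derivs(sp.diff(W, x)) - (sigma - rho)/(4*x**2)*Z

print(sp.simplify(id1), sp.simplify(id2))  # both vanish
```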
We start from the general $D_2$ equation: \bea \left ( \left( {d \over dx} + {g_0 \over x} \right)\left( {d \over dx} + {g_1-1 \over x} \right) \left( {d \over dx} \right)^{-1} \left( {d \over dx} - {g_1-1 \over x} \right)\left( {d \over dx} - {g_0 \over x} \right) \right. \nonumber \\ \left. - \sqrt{P_K(x,E)} \left({d \over dx}\right) \sqrt{P_K(x,E)} \right ) \psi(x,E,\mathbf{g})=0~. \eea Expanding and integrating by parts, we obtain an equivalent equation: \bea \label{eqn:d2expanded} & &\hspace{-1cm} \left ( -{d^3 \over dx^3}+ 4P(x,E,g' ){d \over dx} + 2{dP \over dx}(x,E, g') \right. \nonumber \\ & &\phantom{ccc} \left. + { (1-g_0)^2(1-g_1)^2 \over x^2} \left( {d \over dx} \right)^{-1}{1 \over x^2} \right) \psi(x,E, \mathbf{g})=0 \eea where for this subsection it is convenient to define $P(x,E, k)= {1 \over 4}[(x^{2M/K} - E)^K + {k \over x^2}]$\,, and $g'=g_0^2 - 2 g_0 +g_1^2 -2g_1 +1$. We now set $ Z(x) = \chi_1(x)\chi_2(x)$, a product of the solutions of two $A_1$ (spin-${1 \over 2}$) equations, which for general $\rho$ and $\sigma$ satisfy \bea {d^2 \over dx^2} \chi_1(x,E,\rho)&=& P(x,E, \rho)\,\chi_1(x,E,\rho)~, \label{ab} \\ {d^2 \over dx^2}\chi_2(x,E,\sigma) &=&P(x,E, \sigma)\,\chi_2(x,E,\sigma)~. \label{ab2} \eea To show that $Z(x)$ satisfies equation (\ref{eqn:d2expanded}), we differentiate and repeatedly use the $A_1$ equations to find \bea {d^3 Z \over dx^3} &=& 2{d P \over dx}(x,E, {\rho + \sigma \over 2})\,Z + 2P(x,E, {\rho + \sigma \over 2})\,{dZ \over dx} + 2P(x,E, \rho)\,\chi_1 {d\chi_2 \over dx}\nn\\ & & \qquad \qquad + 2P(x,E, \sigma)\, {d\chi_1 \over dx} \chi_2~. 
\label{eqn:d2third} \eea If we now define the Wronskian \eq W=\chi_1 {d\chi_2 \over dx} -{d \chi_1 \over dx} \chi_2 \en and use \eq \chi_1 {d\chi_2 \over dx} = {1 \over 2} \left({dZ \over dx} + W \right) \quad ,\quad {d \chi_1 \over dx} \chi_2={ 1 \over 2} \left({dZ \over dx} - W \right)~, \en then (\ref{eqn:d2third}) can be written as \eq {d^3 Z \over dx^3}=2{dP \over dx}(x,E, {\rho + \sigma \over 2 } )\,Z + 4P(x,E, {\rho + \sigma \over 2 } )\,{dZ \over dx} + {\rho - \sigma \over 4 x^2} \,W~. \en In order to express $W$ in terms of $Z$ we differentiate, apply (\ref{ab}) and (\ref{ab2}), and then integrate: \eq {dW \over dx}=\chi_1 {d^2\chi_2 \over dx^2}-{ d^2\chi_1 \over dx^2}\chi_2={\sigma - \rho \over 4 x^2}Z \quad \rightarrow \quad W=(\sigma - \rho) \left( {d \over dx} \right)^{-1} {Z \over 4 x^2}~. \en The resulting equation for $Z$, \begin{align} \label{eqn:a1a1} & \!\!\!\!\! \!\!\!\!\! \!\!\!\!\! \left( -{d^3 \over dx^3} + 4P(x,E, {\rho + \sigma \over 2}) {d \over dx}+ 2{dP \over dx}(x,E, {\rho + \sigma \over 2}) \right. \nonumber \\ &\phantom{cccccccccccc} \left. - {(\rho - \sigma)^{2} \over 16x^2}\left( {d \over dx} \right)^{-1} {1 \over x^2} \right) Z(x, E, \rho,\sigma)=0\,, \end{align} exactly matches equation (\ref{eqn:d2expanded}) provided the following relations between $g_0$ and $g_1$, and $\rho$ and $\sigma$ hold: \bea \rho + \sigma &=&2( g_0^2 -2g_0 + g_1^2 -2g_1 +1) \\ \frac{\rho -\sigma}{4}&=& (g_0-1)(g_1 -1)~. \eea If $\rho{=} \sigma$ then either $g_0$ or $g_1$ has to be equal to one and the integral operator in the $D_2$ equation acts on a total derivative. This observation agrees with the discussion in \S\ref{so2npsi} about the relation between $(d/dx)^{-1}$ and an asymmetric choice of the twists $\gamma_n$ and $\gamma_{n-1}$. \subsection{Example 2: $D_3 \sim A_3$} \label{a3d3} The BAE for $A_3$ and $D_3$ are the same under identification of Bethe roots.
It is therefore interesting to discuss the exact correspondence between the two models at the level of the pseudo-differential equations. Actually, it was the study of this case that led us to the general $D_n$-related equations. We start from the observation that the solution $\psi$ of the $A_3$-related ODE \eq \Bigl(D_{4}({\bf g})-P_K(x,E) \Bigr)\psi(x,E,{\bf g} )=0~,~~ \label{su40} \en is associated to the first node of the $A_3$ Dynkin diagram, while the solution of the $D_3$ equation \eq \left( D_{3}({\bf { \bar{g}}^{\dagger}}) \left( \frac{d}{dx} \right)^{-1} D_{3}({\bf \bar{g}}) - \tau \sqrt{P_K (x,E)} \left(\frac{d}{dx} \right) \sqrt{P_K (x,E)} \right)\phi(x,E,{\bf \bar{g}})=0 \label{so6n0} \en is more naturally associated to the central node of the $D_3{=}A_3$ Dynkin diagram. (A constant factor $\tau$ has been included in eq.~(\ref{so6n0}) to take into account the possibly-different normalisations for the $E$ parameters.) Therefore we are looking for a relationship between $\phi(x,E,{\bf \bar{g}})$ and \eq \psi^{(2)}(x,E,{\bf g})=W[ \psi^{(1)}_{-\frac{1}{2}}, \psi^{(1)}_{\frac{1}{2}}]\,. \en For simplicity, we perform the calculation at ${\bf g}{=}\{0,1,2\}$. We set \eq \psi^{(2)}(x,E)=[0,1] \en where we have introduced the shorthand notation \eq [i,j]= \left(\frac{d^i}{dx^i} \psi^{(1)}_{-\frac{1}{2}} \right) \left(\frac{d^j}{dx^j} \psi^{(1)}_{\frac{1}{2}} \right)-\left(\frac{d^j}{dx^j} \psi^{(1)}_{-\frac{1}{2}} \right) \left(\frac{d^i}{dx^i} \psi^{(1)}_{\frac{1}{2}} \right) \en so that \eq \frac{d}{dx} [i,j]= [i+1,j]+[i,j+1]\quad,\quad[i,i]=0~, \en and \eq { d^4 \over dx^4} \psi^{(1)}_{\pm \frac{1}{2}}(x,E) =- P_K(x,E) \psi^{(1)}_{\pm \frac{1}{2}}(x,E)~.
\en Taking derivatives five times and using the above equation we have \bea \psi^{(2)}&=&[0,1] ~,~~ \frac{d}{dx}\psi^{(2)}=[0,2] ~,~~ \frac{d^2}{dx^2}\psi^{(2)}=[1,2]+[0,3] ~, \nn \\ \frac{d^3}{dx^3}\psi^{(2)}&=&2[1,3]+[0,4]=2[1,3]- P_K\;[0,0]=2[1,3] ~, \nn \\ \frac{d^4}{dx^4}\psi^{(2)}&=&2[2,3]+2[1,4]= 2[2,3]+ 2P_K \psi^{(2)} ~, \nn \\ \frac{d^5}{dx^5}\psi^{(2)}&=&2[2,4]+ 2\frac{d}{dx}(P_K \psi^{(2)}) =2P_K \frac{d}{dx}\psi^{(2)} + 2\frac{d}{dx}(P_K \psi^{(2)}) ~, \eea and therefore obtain the desired ODE \eq \left( \frac{d^5}{dx^5} - 2 \sqrt{P_{K }(x,E)} \frac{d}{dx} \sqrt{P_{K }(x,E)} \right)\psi^{(2)}(x,E)=0~, \en which matches (\ref{so6n0}) at $\tau{=}2$. We have also checked that the solution $\psi^{(1)}(x,E,{\bf g})$ of the more general $A_3$-related differential equation~(\ref{su40}) leads to a function $\psi^{(2)}(x,E,{\bf g})= W[ \psi^{(1)}_{-\frac{1}{2}}, \psi^{(1)}_{\frac{1}{2}}]$, which is the solution of \eq \left( D_{3}({\bf {\bar{g}}^{\dagger}}) \left( \frac{d}{dx} \right)^{-1} D_{3}({\bf \bar{g}}) -2 \sqrt{P_K (x,E)} \left(\frac{d}{dx} \right) \sqrt{P_K (x,E)} \right)\psi^{(2)}(x,E,{\bf g})=0~. \label{so6n0a} \en As already seen in \S\ref{sd2}, in order to recast the resulting equation in the factorised form (\ref{so6n0a}) one has to perform a number of integrations by parts. The exact relation between the $A_3$ and $D_3$ sets of parameters is \bea 2 g_0 &=& 1+\bar{g}_0+\bar{g}_1-\bar{g}_2~,\nn \\ 2 g_1 &=& 1+\bar{g}_0-\bar{g}_1+\bar{g}_2~, \\ 2 g_2 &=& 1-\bar{g}_0+\bar{g}_1+\bar{g}_2~.\nn \eea \subsection{Relationship with the sine-Gordon model} \label{sgso2n} The reader may have noticed that the sets of numbers $\{ m_a\}$ for the $D_n$ and $B_n$ models summarised in table~\ref{tb1} match the mass spectra of the sine-Gordon model at particular values of the coupling constant. From the sine-Gordon point of view the $D_n$-related mass spectrum emerges at the reflectionless points where the scattering between the solitons becomes purely diagonal.
This link between the sine-Gordon model, affine Toda field theories and perturbed coset CFTs has been discussed in various places~\cite{Klassen:1989ui, Braden:1989bu, Tateo:1994pb,Dorey:1996ms}. In this section we would like to point out that there is a simple connection between equation~(\ref{so2n2}) taken at \eq K=1~,~~M=1/(n-1)~,~~{\bf g}=\{0,1,\dots,n-1\} \label{values} \en and the Schr{\"o}dinger problem associated, through the first instance of the ODE/IM correspondence~\cite{Dorey:1998pt,Bazhanov:1998wj}, to the CFT limit of the sine-Gordon model. We start from (\ref{so2n2}) with parameters (\ref{values}): \eq \left(-{d^{2n-1} \over dx^{2n-1}} +(x^2-E) {d \over dx} + x \right) \chi(x,E)=0 \label{ini} \en and require $\chi(x,E)$ to be absolutely integrable on the full real line; this restricts the possible values taken by $E$ to a discrete set. Fourier transforming (\ref{ini}) yields \eq \left( -{d^2 \over dk^2}-{ 1 \over k}{d \over dk} +( (-1)^n k^{2n-2}-E) \right) \tilde{\chi}(k,E)=0~, \en and replacing \eq k \rightarrow i k~,~~E \rightarrow -E~,~~ \tilde{\chi}(k,E) \rightarrow k^{-1/2} \tilde{\chi}(k,E) \en we finally find \eq \left( -{d^2 \over dk^2} + k^{2n-2}- { 1 \over 4 k^2} -E \right) \tilde{\chi}(k,E)=0~. \label{finalsh} \en Equation (\ref{finalsh}) exactly matches the ODE associated in~\cite{Dorey:1998pt,Bazhanov:1998wj} to the reflectionless points of the untwisted sine-Gordon model at its $c{=}c_{\rm eff}{=}1$ conformal point. This simple observation gives extra support to the correctness of the $D_n$ proposal (\ref{so2n2}), and it leads naturally to the $B_n$ proposals discussed in the next section. \resection{The $B_n$ models} \label{secb} The discussion in \S\ref{sgso2n} of the link between the sine-Gordon and $D_n$ scattering theories at specific values of the parameters can be extended to the $B_n$ models~\cite{Tateo:1994pb,Dorey:1996ms}. 
This and further considerations led us to the ODE~(\ref{so2n10}), which we repeat here: \eq \left( D_{n}({\bf g^{\dagger}}) D_{n}({\bf g}) + \sqrt{P_K (x,E)} \left(\frac{d}{dx} \right) \sqrt{P_K (x,E)} \right)\psi(x,E, {\bf g})=0~. \label{so2n101} \en The results of \S\ref{so2npsi} and~\S\ref{sd2} suggest a link between the presence of the integral operator $(d/dx)^{-1}$ and the possibility of breaking the symmetry between the $n$ and $n{-}1$ nodes of the $D_n$ Dynkin diagram by choosing $\gamma_n \ne \gamma_{n-1}$ in the BAE. Therefore, in writing (\ref{so2n101}) we have omitted the integral operator $(d/dx)^{-1}$ of (\ref{so2n2}) because, in contrast to the $D_n$ Dynkin diagrams, the $B_n$ diagrams have no ${\mathbb Z}_2$ symmetry. The relevant solution $\psi(x,E,{\bf g})$ to (\ref{so2n101}) has the asymptotic $x \to\infty$ behaviour \eq \psi(x,E,{\bf g}) \sim {\cal N} \; x^{-h^{\vee} M/2} \exp \left(- {x^{M+1} \over M+1} \right)\quad,\quad (M >K/(h^{\vee}{-}K) ) \en on the positive real axis. The solutions~(\ref{chib}) \eq \chi_i (x, E, {\bf g}) \quad ,\quad i = 0,1, \dots, 2n - 1 \en are instead characterised by the $x \to 0$ behaviour \eq \chi_i(x, E, {\bf g}) \sim x^{\lambda_i}+ O (x^{\lambda_i + 2n})~~~,~~~~ \left\{ \begin{array}{ll} \lambda_i=g_i~, & {\rm for}~ i \le n-1~, \\ \lambda_i=h^{\vee}-g_{2n-1-i}~, & {\rm for}~ i>n-1~. \end{array} \right. \label{lambdag1} \en In (\ref{lambdag1}) the $\lambda$'s represent the $2n$ solutions of the indicial equation in table~\ref{table2} with the ordering \eq ~~~~g_i< g_j<h^{\vee}/2~,~~~ \lambda_i < \lambda_j~,~~~~\forall \ i<j~. \en \subsection{The $\psi$-system and the $B_n$ Bethe ansatz equations} The $B_n$ $\psi$-system is \eq \psi^{(a-1)} \psi^{(a+1)} = W[ \psi_{-\frac{1}{2}}^{(a)}, \psi_{\frac{1}{2}}^{(a)}]~~,~~a=1,\dots,n-1~, \label{bnpsi1} \en \eq \psi^{(n-1)}_{-\frac{1}{4}} \psi^{(n-1)}_{\frac{1}{4}} = W[ \psi_{-\frac{1}{4}}^{(n)}, \psi_{\frac{1}{4}}^{(n)}]~~. 
\label{bnpsi2} \en Using the identity~(\ref{jacobi}) we can express $\psi^{(a)}$ with $a{>}1$ in terms of $\psi^{(1)} \equiv \psi$. The result is \eq \psi^{(a)} =W[\psi_{\frac{1-a}{2}},\dots, \psi_{\frac{a-1}{2}}]~,~~~a=1,2,\dots,n~. \en Following the derivation in \S\ref{anBAe}, define $\alpha_a = \sum_{i = 0}^{a - 1} \lambda_i$, \eq Q_k^{(a)} (E,{\bf g}) = Q^{(a)}_{[0, \dots, a - 1]} (\Omega^{k} E,{\bf g})~,~~~ \bar{Q}_k^{(a)} (E,{\bf g}) = Q_{[0, \dots, a - 2, a]}^{(a)} (\Omega^{k} E,{\bf g}) \en where $Q^{(a)}_{[0, \dots, a - 1]}$ and $Q_{[0, \dots, a - 2, a]}^{(a)}$ are defined as in (\ref{expanding}) and \bea \psi_k^{(a)} (x, E,{\bf g})& = &\omega^{k \alpha_a} Q_k^{(a)}(E,{\bf g}) W^a [\chi_0, \dots, \chi_{a- 1}] + \nn \\ & & \omega^{k (\alpha_{a + 1} -\lambda_{a - 1})} \bar{Q}_k^{(a)} (E,{\bf g}) W^a [\chi_0, \dots, \chi_{a - 2}, \chi_a] + \dots~. \nonumber\\ & & \label{psiexpansionbn} \eea Using (\ref{psiexpansionbn}) in (\ref{bnpsi1}) and identifying the leading contributions about $x{=}0$ gives \eq Q^{(a + 1)} (E) Q^{(a - 1)} (E) = \omega^{\frac{1}{2} (\lambda_{a} - \lambda_{a-1})} Q_{- \frac{1}{2}}^{(a)}(E) \bar{Q}_{\frac{1}{2}}^{(a)}(E) - \omega^{\frac{1}{2} (\lambda_{a-1}-\lambda_{a} )} Q_{\frac{1}{2}}^{(a)} (E) \bar{Q}_{- \frac{1}{2}}^{(a)}(E)~ \en with $Q^{(0)}(E)=1$ and \eq {Q^{(a- 1)}_{-\frac{1}{2}} (E^{(a)}_i) \over Q^{(a - 1)}_{\frac{1}{2}} (E^{(a)}_i)} {Q^{(a)}_{1}( E^{(a)}_i) \over Q^{(a)}_{-1} (E^{(a)}_i)}{Q^{(a+ 1)}_{-\frac{1}{2}} ( E^{(a)}_i) \over Q^{(a + 1)}_{\frac{1}{2}}(E^{(a)}_i)} =- \Omega^{-2\gamma_a+\gamma_{a-1}+\gamma_{a+1}}~. \label{BABn} \en In (\ref{BABn}), $a=1,\dots,n-1$ and \eq \gamma_a= \alpha \left (\sum_{i=0}^{a-1} \lambda_i+ a v \right)~,~~a=1,2,\dots,n~, \en where $\alpha={2K \over M h^{\vee}}$ and $v$ is still to be fixed. 
Plugging (\ref{psiexpansionbn}) into (\ref{bnpsi2}) leads to \eq Q^{(n-1)}_{-\frac{1}{4}} (E) Q^{(n - 1)}_{\frac{1}{4}} (E) = \omega^{\frac{1}{4} (\lambda_{n} - \lambda_{n-1})} Q_{- \frac{1}{4}}^{(n)}(E) \bar{Q}_{\frac{1}{4}}^{(n)}(E) - \omega^{\frac{1}{4} (\lambda_{n-1}-\lambda_{n} )} Q_{\frac{1}{4}}^{(n)} (E) \bar{Q}_{- \frac{1}{4}}^{(n)}(E) \en and \eq {Q^{(n- 1)}_{-\frac{1}{2}}(E^{(n)}_i) \over Q^{(n - 1)}_{\frac{1}{2}}( E^{(n)}_i)} {Q^{(n)}_{\frac{1}{2}}(E^{(n)}_i) \over Q^{(n)}_{-\frac{1}{2}} ( E^{(n)}_i)} =- \Omega^{-\gamma_n+{1 \over 2}(\gamma_{n-1}+\gamma_{n+1})} \label{BABn2}~. \en The boundary condition $\gamma_{n+1}{=}\gamma_{n-1}$ fixes $v=-h^{\vee}/2$, the $\{ g_a \} \leftrightarrow \{ \gamma_a \}$ relation in table~\ref{table3} and allows (\ref{BABn}) and (\ref{BABn2}) to be recast in the form~(\ref{dall0}). We have checked the consistency of the $n{=}2$ case both numerically and analytically. The relation with the sine-Gordon model briefly mentioned at the beginning of \S\ref{secb} and the analysis of \S\ref{b1} and \S\ref{b2=c2} lend extra analytic support to the proposal. \subsection{Example 3: $B_1$ } \label{b1} This is a singular limit for the analytic BAE study in \cite{Kuniba:1994na}. It however suggests the equivalence of the $K{=}1$ case of $B_1$ to the $K{=}2$ case of $A_1$ in the integrable system. Indeed, the differential equation is second order in the former case and it can be written in the more-standard form \eq \left( {d^2 \over dx^2} + P_K(x,E) {d \over dx}+ {1 \over 2} \left({ d \over dx} P_K(x,E) \right) - {g_0(g_0-1) \over x^2} \right) \psi(x,E,g_0)=0~. \label{so3} \en Performing a Liouville transformation \eq \psi(x,E,g_0) \to \psi(x,E,g_0) \exp \left(-{1 \over 2} \int^x P_K(\xi,E) \,d\xi \right) \en we find \eq \left( -{d^2 \over dx^2} + {1 \over 4} \left(x^{M/K}-E \right)^{2K} + {g_0(g_0-1) \over x^2} \right) \psi(x,E,g_0)=0~. 
\label{so31} \en Equation (\ref{so31}) coincides with the equations studied by Sergei Lukyanov~\cite{Luk-private} which are related to the $A_1$ lattice models with integer spin \eq j=K~,~~~~K=1,2,3,\dots~. \en The cases $g_0{=}0$ with $M{=}K$ can be solved in closed form. After a shift $x \rightarrow x+E$, the simplest case $K{=}1$ becomes \eq \left( {d^2 \over dx^2} + x{d \over dx}+\fract{1}{2} \right) \psi(x)=0 \label{lineraso3} \en which has general solution \eq y(x)=c_1\; e^{-{x^2 \over 2}} H_{-\frac{1}{2}} \left(\fract{x}{\sqrt{2}}\right) +c_2 \;e^{-{x^2 \over 4}} \sqrt{x} \; I_{-\frac{1}{4}}\left(\fract{x^2}{4} \right) \en where $H$ and $I$ are respectively the Hermite and the Bessel functions. Since \eq H_{\nu}(x) \sim 2^{\nu} x^{\nu}(1+O(1/x)) \en as ${\rm Re}(x)>0$, $|x| \to \infty$, while \eq I_{\nu}(x) \sim {e^{x} \over \sqrt{2 \pi x}} (1+ O(1/x))\,, \en the most subdominant solution at large $x$ on the positive real axis is \eq \psi(x)= e^{-{x^2 \over 2}} H_{-\frac{1}{2}} \left(\fract{x}{\sqrt{2}}\right)~. \en Figure~\ref{B1fig} shows the first five complex-conjugate pairs of zeros of $\psi(-e^{\te})$. This 2-string pattern is typical of $A_1$-related spin{-}1 integrable models. The exact eigenvalues are reported in table~\ref{taba42} above. There is also good agreement between the position of the first pairs of zeros shown in figure~\ref{B1fig} and the WKB asymptotic prediction of appendix~\ref{appa}. \begin{figure}[ht] \begin{center} \epsfxsize=.585\linewidth\epsfbox{plotzeros.eps} \end{center} \caption{ 2-strings in the $B_1$ model at $g_0{=}0, M{=}K{=}1$. Contour plot of $1/(1+ |H_{-\frac{1}{2}} (-\fract{ 1}{\sqrt{2}} e^{\te})|)$ in the complex $\te$-plane. } \label{B1fig} \end{figure} \resection{The $C_n$ models} \label{secc} The analytic and numerical results of \S\ref{secd} support the conjectured link between the $D_n$ BAE (\ref{dall0}) and equation~(\ref{so2n2}). 
At ${\bf g}{=}\{ 0,1,\dots,n-1 \}$, the $D_n$ ODE is \eq \left( { d^{2n-1} \over dx^{2n-1}} - \sqrt{P_{K}(x,E)} \left(\frac{d}{dx} \right) \sqrt{P_{K }(x,E)} \right)\psi(x,E)=0~. \label{db00} \en In this section, we start from (\ref{db00}) and consider a second duality relation \eq {(\hat{D}_{-n})_K \times (\hat{D}_{-n})_L \over (\hat{D}_{-n})_{K+L}} \sim { (\hat{C}_{n})_{-K/2} \times (\hat{C}_{n})_{-L/2} \over (\hat{C}_{n})_{-K/2 -L/2}} \label{dualdc1} \en discussed by Hornfeck \cite{Hornfeck:1994is}. Motivated by the results of \S\ref{negative} on the analogy between the $A_n \leftrightarrow A_{-n}$ spectral duality and the $A_n$-related duality in \cite{Hornfeck:1994is}, we change $n \rightarrow -n$ and $K \rightarrow -2 K$ in (\ref{db00}): \eq \left((-1)^{-2n-1} { d^{-2n-1} \over dx^{-2n-1}} +\sqrt{P_{-2K }(x,E)} \left(\frac{d}{dx} \right) \sqrt{P_{-2K} (x,E)} \right)\psi(x,E)=0\,, \en where \eq P_{-2K}(x,E)= \left(x^{\frac{M(2n+2)}{2K}}-E \right)^{-2K}. \en Replacing \eq \psi(x,E) \rightarrow \left (\left(P_{-2K}(x,E)\right)^{-\fract{1}{2}} { \left(d \over dx \right)^{-1}} \left(P_{-2K }(x,E)\right)^{-\fract{1}{2}} \right)\psi(x,E)\,, \en multiplying by $d^{2n+1} \over dx^{2n+1}$ and noticing that \eq \left(P_{-2K}(x,E)\right)^{-\fract{1}{2}} \equiv P_K(x,E)= (x^{h^{\vee} M/K} -E)^K\,, \en where $h^{\vee}{=}n+1$ is the dual Coxeter number of $C_n$, we can write the resulting equation as \eq \left({d^{2n+1} \over dx^{2n+1}} -P_{K}(x,E) { \left(d \over dx \right)^{-1}}P_{K} (x,E) \right)\psi(x,E)=0\,. \label{finalcn} \en Equation (\ref{finalcn}) is our $C_n$ candidate at ${\bf g}{=}{\bf g_0}{=}\{0,1,2,\dots \}$. 
Further, adapting the discussion of \S\ref{secd} that led to the full $B_n$ equation, and noting the similarity between the pseudo-differential operators (\ref{sun0}), (\ref{so2n0}) and (\ref{so2n10}), we replace \eq {d^{2n+1} \over dx^{2n+1}} \equiv D_{n}({\bf g^{\dagger}_0}) \left(\frac{d}{dx} \right) D_{n}({\bf g}_0) \Longrightarrow D_{n}({\bf g^{\dagger}}) \left(\frac{d}{dx} \right)D_{n}({\bf g})~, \en and the final $C_n$ proposal becomes (\ref{sp2n0}): \eq \left( D_{n}({\bf g^{\dagger}}) \left(\frac{d}{dx} \right)D_{n}({\bf g}) -P_{K }(x,E) { \left(d \over dx \right)^{-1}}P_{K} (x,E) \right)\psi(x,E, {\bf g})=0~. \label{sp2n1} \en\\ The relevant solution of (\ref{sp2n1}) has the asymptotic $x \to\infty$ behaviour \eq \psi(x,E,{\bf g}) \sim {\cal N} \; x^{-n M} \exp \left(- {x^{M+1} \over M+1} \right)\quad , \quad (M >K/(h^{\vee}{-}K) ) \en on the positive real axis. The solutions~(\ref{chib}) \eq \chi_i (x, E, {\bf g})~,~~~ i = 0,1, \dots, 2n + 1 \en are characterised by the $x \to 0$ behaviour \eq \chi_i(x, E, {\bf g}) \sim x^{\lambda_i}+ O (x^{\lambda_i + 2n+1})~~~,~~~~ \left\{ \begin{array}{ll} \lambda_i=g_i~, & {\rm for}~ i \le n-1, \\ \lambda_n=n~, &~~~~~~~~~~~~ \\ \lambda_i=2n-g_{2n-i}~, & {\rm for}~ i>n~. \end{array} \right. \label{lambdag3} \en In (\ref{lambdag3}) the $\lambda$'s represent the $2n+1$ roots of the indicial equation in table~\ref{table2} with the ordering \eq ~~~~g_i< g_j<n~,~~~ \lambda_i < \lambda_j~,~~~~\forall \ i<j~. \en \subsection{The $\psi$-system and the $C_n$ Bethe ansatz equations} We now deduce the $C_n$ Bethe ansatz equations from the proposed $C_n$ $\psi$-system: \bea W[\psi^{(a)}_{-\frac{1}{4}}, \psi^{(a)}_{\frac{1}{4}}] &=& \psi^{(a-1)} \psi^{(a+1)}~,~~~~ a=1,2,\dots,n-2~, \nn\\ W[\psi^{(n-1)}_{-\frac{1}{4}}, \psi^{(n-1)}_{\frac{1}{4}}] &=& \psi^{(n-2)} \psi^{(n)}_{-\frac{1}{4}} \psi^{(n)}_{\frac{1}{4}}~, \label{wronskianCnlastm1} \\ W[\psi^{(n)}_{-\frac{1}{2}}, \psi^{(n)}_{\frac{1}{2}}] &=& \psi^{(n-1)}~. 
\nn \eea Using the Jacobi identity we find \eq \phi^{(a)} \equiv \psi^{(a)} =W^{(a)}[\psi_{\frac{1-a}{4}},\psi_{\frac{3-a}{4}}\dots, \psi_{\frac{a-1}{4}}]~,~~~a=1,2,\dots,n-1~, \en \eq \phi^{(n)} \equiv \psi^{(n)}_{-\frac{1}{4}} \psi^{(n)}_{\frac{1}{4}} =W^{(n)}[\psi_{\frac{1-n}{4}},\psi_{\frac{3-n}{4}},\dots, \psi_{\frac{n-1}{4}}]~, \en where the functions $\phi^{(a)}(x,E,{\bf g})$ satisfy the following system of functional relations \eq W[ \phi^{(a)}_{-\frac{1}{4}} ,\phi^{(a)}_{\frac{1}{4}} ] =\phi^{(a-1)} \phi^{(a+1)}~,~~~a=1,\dots, n-1~, \label{antype1} \en and \eq W[ \phi^{(n)}_{-\frac{1}{4}} ,\phi^{(n)}_{\frac{1}{4}} ] =\phi^{(n-1)} (\psi^{(n)})^2~. \label{expsi} \en {}From (\ref{expsi}) we see that \eq \psi^{(n)}(x,E,{\bf g})= \sqrt{ W^{(n+1)}[\psi_{-\frac{n}{4}},\dots, \psi_{\frac{n}{4}}]}~. \en Now set \bea \phi_k^{(a)} (x, E,{\bf g})& = &\omega^{k \alpha_a} \hat{Q}_k^{(a)}(E,{\bf g}) W^{(a)} [\chi_0, \dots, \chi_{a- 1}] + \nn \\ & & \!\! \!\! \omega^{k (\alpha_{a + 1} -\lambda_{a - 1})} \bar{Q}_k^{(a)} (E,{\bf g}) W^{(a)} [\chi_0, \dots, \chi_{a - 2}, \chi_a] + \dots \label{above3} \eea with $a=1,2,\dots,n$. The orders of the first and the second term in (\ref{above3}) are given respectively by $\alpha_a - a (a - 1) / 2$ and $\alpha_{a + 1} - \lambda_{a - 1} - a (a - 1) / 2$, where $\alpha_a = \sum_{i = 0}^{a - 1} \lambda_i$. 
Assuming \eq \psi_k^{(n)} (x, E,{\bf g})=\omega^{k \beta} x^\beta Q_k^{(n)}(E,{\bf g})+\omega^{k \sigma} x^\sigma \Tilde{Q}_k^{(n)}(E,{\bf g})+\dots ~ \en where $\beta<\sigma$, we can make the following identifications \eq \beta= \alpha_n/2 -n(n-1)/4~~,~~\sigma=\beta+(\lambda_n-\lambda_{n-1}) \en and \bea \hat{Q}^{(0)}(E) &=& Q^{(0)}(E)=1~,\\ \hat{Q}^{(a)}(E) &=& Q^{(a)}(E)~,~~~a=1,2,\dots,n-1 \label{DDQ1}~, \\ \hat{Q}^{(n)}(E) &\propto& Q^{(n)}_{-\frac{1}{4}}(E)\; Q^{(n)}_{\frac{1}{4}}(E)~~ \label{DDQ2}~, \\ \bar{Q}^{(n)}(E) &\propto& \omega^{\frac{1}{4}(\lambda_n-\lambda_{n-1})}Q^{(n)}_{-\frac{1}{4}} (E)\; \tilde{Q}^{(n)}_{\frac{1}{4}} (E) + \omega^{\frac{1}{4}(\lambda_{n-1}-\lambda_{n})} Q^{(n)}_{\frac{1}{4}} (E)\; \tilde{Q}^{(n)}_{-\frac{1}{4}}(E).~ \nn\\ ~~\label{DDQ3} \eea Using (\ref{above3}) in (\ref{antype1}) and selecting the leading terms we find \eq \hat{Q}^{(a + 1)} (E) \hat{Q}^{(a - 1)} (E) = \omega^{\frac{1}{4} (\lambda_{a} - \lambda_{a-1})} \hat{Q}_{- \frac{1}{4}}^{(a)}(E) \bar{Q}_{\frac{1}{4}}^{(a)}(E) - \omega^{\frac{1}{4} (\lambda_{a-1}-\lambda_{a} )} \hat{Q}_{\frac{1}{4}}^{(a)} (E) \bar{Q}_{- \frac{1}{4}}^{(a)}(E)\,. \en With the identifications (\ref{DDQ1}) and (\ref{DDQ2}) this leads to \eq \prod_{ b=1}^{n} \frac {Q^{(b)}_{B_{ab}}(E^{(a)}_{i})} {Q^{(b)}_{-B_{ab}}(E^{(a)}_{i})}= - \Omega^{ {\alpha \over 4} (\lambda_a-\lambda_{a-1})}\,,~ \label{dall000} \en where $a=1,2,\dots, n-1$ and $\alpha={2 K \over M h^{\vee}}$\,. 
Using (\ref{above3}) with $a{=}n$ in (\ref{expsi}), (\ref{DDQ3}) and (\ref{DDQ2}) gives instead \eq Q^{(n -1)} (E)= \omega^{\frac{1}{2} (\lambda_{n } - \lambda_{n-1})} Q_{- \frac{1}{2}}^{(n)}(E) \tilde{Q}_{\frac{1}{2}}^{(n)}(E) - \omega^{\frac{1}{2} (\lambda_{n-1}-\lambda_{n} )} Q_{\frac{1}{2}}^{(n)} (E) \tilde{Q}_{- \frac{1}{2}}^{(n)}(E); \en which leads to \eq {Q^{(n- 1)}_{-\frac{1}{2}} ( E^{(n)}_i) \over Q^{(n - 1)}_{\frac{1}{2}} (E^{(n)}_i)} {Q^{(n)}_1 ( E^{(n)}_i) \over Q^{(n)}_{-1} (E^{(n)}_i)} =- \Omega^{ {\alpha \over 2} (\lambda_n-\lambda_{n-1} )}~. \label{fffBA1} \en Finally, with the identification (\ref{lambdag3}) and the choice in table \ref{table3} for the $\{ g_a \} \leftrightarrow \{ {\gamma_a} \}$ relation, equations (\ref{dall000}) and (\ref{fffBA1}) can be assembled into the universal form (\ref{dall0}). \subsubsection{Example 4: $C_1$} The $n{=}1$ case is again a singular limit of the analytic BAE, but it also suggests the similarity of this case to the $A_1$ models \cite{Kuniba:1994na}. The pseudo-differential equation is, however, not second order but instead third order \eq \left( {d^3 \over dx^3}-{{\cal L} \over x^2} {d \over dx} + {{\cal L} \over x^3} - P_K(x,E) \left(d \over dx \right)^{-1} P_K(x,E) \right) \psi(x,E,g_0)=0 \label{ps1} \en where ${\cal L}=g_0(g_0-2)$. It is nevertheless easy to check that (\ref{ps1}) is solved by the product of two functions satisfying second order ODEs: \eq \psi(x,E,g_0)=\chi_-(x,E,g_0) \chi_+(x,E,g_0) \en where $\chi_{\pm}$ originates from a single function $\chi$ as follows \eq \chi_{\pm}(x,E,g_0)= \chi( \omega^{\pm 1/4}x, \Omega^{\pm 1/4} E, g_0). 
\label{def4} \en Since we assume that $\chi$ satisfies the standard ODE associated with $A_1$\,, \eq \left ({d^2 \over dx^2} - {1 \over 2} P_K(x,E) - {{\cal L} \over 4 x^2} \right) \chi(x,E,g_0)=0\,, \en the functions $\chi_{\pm}$ satisfy the following: \eq { d^2 \over dx^2} \chi_{\pm}(x,E,g_0)= \left(\pm { i \over 2} P_K(x,E)+ { {\cal L}\over 4x^2} \right) \chi_{\pm}(x,E,g_0)\,. \en Starting from (\ref{def4}) and differentiating $\psi=\psi(x,E,g_0)$ three times we find \bea {d\psi \over dx}&=&{d \chi_+\over dx} \; \chi_-+\chi_+{d \chi_-\over dx}\,; {}~~~ {d^2 \psi\over dx^2} ~=~2 {d \chi_-\over dx} {d \chi_+\over dx} + {{\cal L} \over 2 x^2} \psi~; \nn \\ {d^3 \psi\over dx^3}&=&i P_K \; (\chi_+ {d \chi_-\over dx} - {d \chi_+\over dx} \chi_-) + {{\cal L} \over x^2} {d\psi \over dx} - {{\cal L} \over x^3} \psi\,. \label{fn1} \eea We notice that \eq {d \over dx}\left (\chi_+ {d \chi_- \over dx}- {d\chi_+ \over dx} \chi_-\right) = -i P_K \; \psi \en and therefore \eq \chi_+ {d \chi_-\over dx} - {d \chi_+\over dx} \chi_-= -i \left(d \over dx \right)^{-1} P_K \psi. \label{lat1} \en Inserting (\ref{lat1}) into (\ref{fn1}) we finally arrive at equation (\ref{ps1}). \subsubsection{Example 5: $C_2 \sim B_2$} \label{b2=c2} The ODEs for $B_2$ and for $C_2$ are both deduced from the $D_n$-type ODE, but through completely different routes. It is thus a good test to check the equivalence of these two cases. We start with the $B_2$-related ODE at ${\bf g}{=} \{0,1 \}$ \eq {d^{4}\psi \over dx^4} +P_K {d\psi \over dx} +{1 \over 2} {dP_K \over dx}\psi=0. \en The ODE associated with the second node of $B_2$, which is nothing but the first node of $C_2$, would be satisfied by the following function \eq \psi^{(2)}=W[ \psi_{-\frac{1}{2}}, \psi_{\frac{1}{2}}]=[0,1]~, \en where \eq {d^{4}\psi_{\pm \frac{1}{2}} \over dx^4}= P_K {d\psi_{\pm \frac{1}{2}} \over dx} +{1 \over 2} {dP_K \over dx}\psi_{\pm \frac{1}{2}}~. 
\en We then easily evaluate the derivatives of $\psi^{(2)}$: \bea {d\psi^{(2)} \over dx} &=&[0,2]~,~~{d^2\psi^{(2)} \over dx^2}=[1,2]+[0,3]~,~~ {d^3\psi^{(2)} \over dx^3}=2[1,3] + P_K \psi^{(2)} \nn\\ \nn \\ {d^4\psi^{(2)} \over dx^4}&=&2\; [2,3]+P_K {d\psi^{(2)} \over dx}~,~~{d^5\psi^{(2)} \over dx^5} =P_K{d^2\psi^{(2)} \over dx^2} - 2P_K \;[1,2]~. \eea Finally, noticing that \eq 2[1,2]= 2 \left({d \over dx}\right)^{-1}[1,3]={d^2\psi^{(2)} \over dx^2}- \left({d \over dx}\right)^{-1} P_K \psi^{(2)} \en we have obtained a closed-form ODE for $\psi^{(2)}$: \eq \left({d^5 \over dx^5} - P_K(x,E) \left({d \over dx}\right)^{-1} P_K(x,E) \right)\psi^{(2)}(x,E)=0~. \en This equation is exactly the ${\bf g}{=}\{0,1 \}$ equation associated to the first node of $C_2$. This verifies the consistency of our proposal. More generally, the correspondence between the $B_2$ parameters ${\bf g}{=}\{g_0, g_1 \}$ and the $C_2$ parameters ${\bf \bar{g}}{=}\{\bar{g}_0, \bar{g}_1 \}$ is established by \bea 2g_0 &=&\bar{g}_0+\bar{g}_1-1~, \nn \\ 2g_1 &=&\bar{g}_0-\bar{g}_1+3~. \eea Finally we have directly checked the $C_2{=}B_2$ $\psi$-system at ${\bf g}{=}\{0,1 \}$ by verifying that both sides of \eq W[ \psi^{(1)}_{-\frac{1}{4}} ,\psi^{(1)}_{\frac{1}{4}} ] =\psi^{(2)}_{-\frac{1}{4}} \psi^{(2)}_{\frac{1}{4}}~, \en satisfy the same 15th order ODE. Unfortunately we do not currently have NLIEs for the $B_n/C_n$ models and the numerical checks presented below are instead based on an approximate solution similar to that used by Voros in \cite{Voros}.
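The chain of Wronskian identities above can be checked symbolically. The following sketch (Python with sympy; tooling assumed here, not part of the original analysis) imposes the fourth-order $B_2$ ODE $\psi''''=P_K\psi'+\frac{1}{2}P_K'\psi$ on two generic solutions and verifies the intermediate identities together with the differentiated form of the closed-form equation, $\frac{d}{dx}\bigl(P_K^{-1}\frac{d^5}{dx^5}\psi^{(2)}\bigr)=P_K\,\psi^{(2)}$, which is equivalent to the pseudo-ODE up to the inverse derivative:

```python
import sympy as sp

x = sp.symbols('x')
P = sp.Function('P')(x)          # stands for P_K(x, E); kept fully generic
u = sp.Function('u')(x)          # psi_{-1/2}
v = sp.Function('v')(x)          # psi_{+1/2}

# B_2 ODE: f'''' = P f' + (1/2) P' f, imposed on u and v
ode4 = {sp.Derivative(u, (x, 4)): P*u.diff(x) + sp.Rational(1, 2)*P.diff(x)*u,
        sp.Derivative(v, (x, 4)): P*v.diff(x) + sp.Rational(1, 2)*P.diff(x)*v}

def d(expr):
    """One derivative, eliminating fourth derivatives of u, v via the ODE."""
    return sp.expand(expr.diff(x).subs(ode4))

br = lambda i, j: u.diff(x, i)*v.diff(x, j) - u.diff(x, j)*v.diff(x, i)  # [i,j]

psi2 = br(0, 1)                  # psi^(2) = W[psi_{-1/2}, psi_{1/2}]
D = [psi2]
for _ in range(5):
    D.append(d(D[-1]))

# intermediate identities quoted in the text
assert sp.simplify(D[3] - (2*br(1, 3) + P*psi2)) == 0
assert sp.simplify(D[4] - (2*br(2, 3) + P*D[1])) == 0

# differentiated closed-form equation: d/dx (P^{-1} d^5 psi2/dx^5) = P psi2
assert sp.simplify(d(D[5]/P) - P*psi2) == 0
```

The same strategy, with the obvious substitution rules, can be used to check the fifth-order $D_3$ reduction of \S\ref{sd2}.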
For the $B_2$ BAEs in the case ${\bf g}{=}\{0,1\}$, $K{=}1$ and $M{=}2/3$, we started from a perfect-string estimate for the first 1000 roots as follows: \begin{align} E_j &= \bigl ( 4\sqrt{\frac{\pi}{3}} \frac{\Gamma(\frac{11}{6})}{\Gamma(\frac{4}{3})} j \bigr)^{\frac{6}{5}} &\text{1st node} \\ E_j &= {\rm e}^{\pm i \frac{\pi}{5}} \bigl ( 4 \sqrt{\pi} \frac{\Gamma(\frac{11}{6})}{\Gamma(\frac{4}{3})} (j-\frac{1}{6}) \bigr)^{\frac{6}{5}} &\text{2nd node.} \label{twostr} \end{align} \begin{table}[ht] \begin{center} \begin{tabular}{|l l | l l|} \hline ~BA~numerics& &~ODE numerics & {} \\ 1st~node~~~~&2nd~node~~~~&1st~node~~~~&2nd~node~~~~ \\ \hline 6.28405 & $6.8368 \pm 5.8640i$ & 6.28390 &$6.8365 \pm 5.8637i$~ \\ 13.2379 & $18.216 \pm 14.265i$ & 13.2376 &$18.214\pm 14.264i$~ \\ 21.6307 & $30.996 \pm 23.645i$ & 21.6303 &$30.992 \pm 23.642i$~ \\ 30.5039 & $44.747 \pm 33.707i$ & 30.5034&$44.739\pm 33.700i$~ \\ 39.8617 & $59.254 \pm 44.304i$ & 39.8613&$59.240 \pm 44.292i$~ \\ \hline \end{tabular} \caption{\footnotesize $C_2{=}B_2$: comparison of Bethe ansatz results with the numerical solution of the $B_2$ and $C_2$ equations using the algorithm described in appendix~\ref{appb}. ($B_2$ node convention with ${\bf g}{=}\{0,1\}$, $K{=}1$ and $M{=}2/3$.)\label{tabc2b2}} \end{center} \end{table} We then solved the BAEs recursively using the Newton-Raphson method on the first 20 roots, keeping the remaining roots fixed. Table~\ref{tabc2b2} compares the lowest roots thus obtained with the results from the solution of the (pseudo-)differential equations. The relatively low accuracy in comparison with tables~\ref{taba4}, \ref{taba42} and \ref{tabd4} is most likely to be a consequence of the slow convergence rate of the algorithm used to solve the Bethe ansatz equations, and in particular the systematic errors introduced by fixing the higher levels ($E_j$, $j > 20$) to their perfect-string values.
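The first-node perfect-string estimate can be compared directly against the Bethe-ansatz values in table~\ref{tabc2b2}. A short numerical sketch (Python, for illustration only; the quoted numerics used the algorithm of appendix~\ref{appb}):

```python
import math

# first-node perfect-string estimate, B_2 at g={0,1}, K=1, M=2/3:
#   E_j = ( 4 sqrt(pi/3) Gamma(11/6)/Gamma(4/3) * j )^(6/5)
c = 4*math.sqrt(math.pi/3)*math.gamma(11/6)/math.gamma(4/3)
estimate = [(c*j)**1.2 for j in (1, 2, 3, 4, 5)]

# Bethe-ansatz values from the first column of the table
ba = [6.28405, 13.2379, 21.6307, 30.5039, 39.8617]

for e, b in zip(estimate, ba):
    print(f"estimate {e:8.4f}   BA {b:8.4f}   rel. err. {abs(e-b)/b:.3f}")
```

The estimate is roughly 8\% off for the lowest root and well below 1\% by $j{=}5$, consistent with its use as a seed for the Newton-Raphson iteration.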
\section{Conclusions} \label{conclusions} There are many aspects of the correspondence between integrable models and the spectral theory of ordinary differential equations that we would like to explore and understand at a deeper level. Given the applications of the Bethe ansatz to the study of QCD in its leading-logarithm approximation~\cite{Lipatov:1994xy} and to the study of anomalous dimensions of composite operators in Yang-Mills theories~\cite{Minahan:2002ve,Ferretti:2004ba}, the extension of the correspondence to lattice models is certainly desirable from a physical point of view. On the other hand the mathematical structures arising from the generalisation of the correspondence to other conformal field theories with extended symmetries have the potential to link areas of modern and classical mathematics in an elegant way. In this paper our results were obtained very much on a case-by-case basis, but we already saw the emergence of interesting mathematical objects: the $\psi$-systems, negative-dimension dualities, and the formal similarity with the Miura-opers studied both in the classical work~\cite{Drinfeld:1984qv} (see also \cite{Gelfand:1975rn, Balog:1990mu, DiFrancesco:1990qr}) and more recently in~\cite{Mukhin:2002fp, Mukhin:2002fp1, Mukhin:2002fp2, Frenkel:2005fr, Chervov:2006xk}. It would be very interesting to generalise equations (\ref{sun0}--\ref{sp2n0}) to encompass the excited states of the integrable models; to date this has been completed for the $K{=}1$ case of $A_1$~\cite{Bazhanov:2003ni, Fioravanti:2004cz}. More challenging, but also extremely interesting, would be to extend the correspondence to perturbed conformal field theories defined on a cylinder, both for the ground state~\cite{Zamolodchikov:1989cf} and for excited states~\cite{Bazhanov:1996aq, Dorey:1996re}.
Finally, even remaining inside the current setup, the ODE/IM correspondence has already had an impact on condensed matter physics: it has been applied to interacting Bose liquids~\cite{Gritsev:2006}, the single electron box~\cite{Lukyanov:2006cu} and quantum dots~\cite{Bazhanov:2003ua}. \medskip \noindent{\bf Acknowledgements --} We are very grateful to Herman Boos, Boris Dubrovin, Ludwig Faddeev, Frank Goehmann, Andreas Kl{\"u}mper, Sergei Lukyanov and Fedor Smirnov for useful conversations and kind encouragement. PED, TCD and JS thank Torino University for hospitality at the beginning of this project. JS also thanks the members of the Universities of Wuppertal and Bologna for hospitality and the Ministry of Education of Japan for a `Grant-in-aid for Scientific Research', grant number 17540354. This project was also partially supported by the European network EUCLID (HPRN-CT-2002-00325), INFN grant TO12, NATO grant number PST.CLG.980424 and The Nuffield Foundation grant number NAL/32601, and a grant from the Leverhulme Trust.
\section{Introduction} Nonlinear frequency conversion of far-infrared or microwave signals into the optical domain has been actively used for detection of such signals \cite{chiou72apl,abbas76ao,albota04ol,karstad05ole,temporao06ol,vandevender07josab,ding06,khan07,strekalov08THz1}. The relative ease of optical signal detection compared to, e.g., detection in the sub-THz range, in combination with the intrinsically noiseless character of nonlinear frequency conversion, explains the close attention this method has been receiving. Its main drawback, however, is its low conversion efficiency. The highest power conversion efficiency known to us is about 0.5\%, which has been only recently achieved for a 100 GHz signal using 16 mW of CW optical pump at 1560 nm \cite{strekalov08THz1}. This number corresponds to a photon-number conversion efficiency of approximately $2.6\cdot 10^{-6}$, as follows from the Manley-Rowe relation. Highly efficient and noiseless upconversion of microwave radiation into the optical domain would open up numerous possibilities in microwave imaging and communications. One example of such a possibility, which we have discussed earlier \cite{strekalov08THz1}, is microwave photon counting at room temperature. Reaching the photon-counting regime in the sub-THz or THz range would be an achievement important for quantum information processing and computing, sub-millimeter spectroscopy and astronomy, security, and for other areas where detection of microwave radiation with the ultimate sensitivity is desired. Unfortunately, even the most efficient up-converter to date is still seven orders of magnitude short of the photon-counting regime for 100 GHz photons at room temperature \cite{strekalov08THz1}. On the other hand, the theoretical analysis \cite{Matsko08THzTheory} shows that this regime can be achieved. The key to its realization is reaching unity conversion efficiency.
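The photon-number figure quoted above follows directly from the Manley-Rowe relation: the photon-number efficiency is the power efficiency rescaled by the ratio of the microwave photon energy to the optical photon energy. A one-line sanity check (illustrative Python, not part of the paper's analysis):

```python
# Manley-Rowe: photon-number efficiency = power efficiency * (f_microwave / f_optical)
c = 2.998e8                    # speed of light, m/s
f_opt = c / 1560e-9            # pump at 1560 nm -> ~192 THz
f_mw = 100e9                   # 100 GHz signal
eta_power = 0.005              # 0.5% power conversion efficiency
eta_photon = eta_power * f_mw / f_opt
print(f"{eta_photon:.2e}")     # ~2.6e-06, as quoted in the text
```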
An all-resonant whispering gallery mode (WGM) configuration of the frequency converter should achieve this goal. WGM resonators with optical nonlinearity have been successfully used in nonlinear optics in general and in microwave photonics in particular \cite{cohen01el-a,rabiei02jlt,ilchenko03mod,savchenkov09ssb,hosseinzadeh06mtt}. In a recent experiment \cite{strekalov08THz1} a lithium niobate WGM resonator with the optical free spectral range (FSR) $\Omega=2\pi\cdot 12.64$ GHz was irradiated by a microwave signal with the frequency near $\omega_M=8\cdot\Omega\approx 2\pi\cdot 101.12$ GHz. This device is analogous to a lower-frequency electro-optical modulator \cite{cohen01el-a,savchenkov09ssb,ilchenko03mod}, except that the microwave signal excites the sidebands not in the pair of optical WGMs adjacent to the carrier, but in those eight optical FSRs away. \section{General theoretical consideration} To describe the operation of such a modulator, we follow the steps of \cite{ilchenko03mod} and introduce the interaction Hamiltonian \begin{equation} \hat{H}_i=\hbar g(\hat{b}^\dagger_-\hat{c}^\dagger\hat{a}+\hat{b}_+\hat{c}^\dagger\hat{a}^\dagger)+c.c.,\label{H} \end{equation} which couples the optical pump and microwave signal WGMs (photon annihilation operators $\hat{a}$ and $\hat{c}$, respectively) with the Stokes and anti-Stokes optical upconverted signals ($\hat{b}_-$ and $\hat{b}_+$, respectively). In the case of single-sideband modulation \cite{savchenkov09ssb} only one term, either the Stokes or the anti-Stokes one, is present in Hamiltonian (\ref{H}). The coupling constant \begin{equation} g=\omega_0r_{ij}\frac{n_an_b}{n_c}\sqrt{\frac{\pi\hbar\omega_c}{2V_c}} \left(\frac{1}{V}\int_V\,dV\Psi_a^*\Psi_b\Psi_c\right) \label{g} \end{equation} includes the (bracketed) factor describing the overlap between the fields inside the resonator.
In Eq.~(\ref{g}), $\omega_a\approx\omega_b\equiv\omega_0$ is the optical frequency, $n_{a,b}$ are the refraction indices and $V_a\approx V_b\equiv V$ is the optical mode volume; $\omega_c$, $n_c$ and $V_c$ are similar microwave parameters. The effective electro-optical coefficient $r_{ij}$ is determined by the field configuration. Irradiating the resonator with the microwaves by placing the former near a waveguide opening is a very inefficient method of coupling the microwaves with the optical WGMs, as the main part of the microwave energy reflects back into the waveguide or scatters into space. This problem can be solved by using an all-resonant frequency up-converter, supporting microwave WGMs as well as optical ones. Experimental studies of microwave WGMs in crystalline disks and rings carried out in the 1980s showed remarkable quality factors of $Q\approx 10^{10}$ \cite{braginsky87}. For the all-resonant up-converter, all wave functions $\Psi$ in (\ref{g}) are optical or microwave eigenfunctions of the WGM resonator. The eigenfrequencies $\omega_a(L_a)$ of the main family of the optical WGMs are found \cite{ilchenko03mod} as \begin{equation} \omega_a=\frac{cL_a}{Rn_a},\label{eigen} \end{equation} where $L_a$ is the orbital momentum for this WGM, and $R$ is the resonator radius. Similar equations hold for the Stokes and anti-Stokes frequencies $\omega_{b_\pm}$. The Hamiltonian (\ref{H}) and Eq.
(\ref{g}) lead to the following phase-matching conditions, which are essentially the energy and angular momentum conservation equations for the anti-Stokes and Stokes processes: \begin{equation} \omega_{b_\pm}=\omega_a\pm\omega_c \quad{\rm and}\quad L_{b_\pm}=L_a\pm L_c.\label{phasematch} \end{equation} Subtracting Eq.~(\ref{eigen}) for $\omega_a$ from that for $\omega_{b_\pm}$ and making substitution (\ref{phasematch}), we find the phase-matching equation \begin{equation} \omega_c(L_c)=\Omega_b L_c\pm \omega_a(n_a-n_b)/n_b\label{phasematch1} \end{equation} for the anti-Stokes (plus) and Stokes (minus) conversion processes. In (\ref{phasematch1}) $\Omega_b= c/(Rn_b)$ is the optical FSR for the signal WGM. We have neglected its frequency dispersion replacing $n_{b_\pm}\equiv n_b$, but kept the distinction between $n_a$ and $n_b$ which is due to the birefringence. The phase matching condition (\ref{phasematch1}) needs to be solved jointly with the microwave dispersion equation, which depends on the resonator geometry. Previously \cite{strekalov09lasphys} we have observed lower-frequency microwave WGMs in lithium niobate disks in the 30 GHz range. One important conclusion of that study has been that the microwave WGMs in a disk resonator have poor spatial overlap with the optical modes that occupy just a few tens of microns near the disk rim. This greatly reduces the overlap factor in (\ref{g}). Therefore we decided to use ring resonators, made so that the optical axis of the crystal is parallel to the ring axis. When filled with a low-permittivity dielectric, such a ring tends to concentrate the microwave field inside, enforcing a better overlap with the optical WGMs.
Once the phase-matching conditions are fulfilled, the microwaves-to-optics conversion will be efficient as long as the nonlinear coupling rate exceeds the loss rate in the microwave WGM, that is, if the total loss rate of the microwave mode $\gamma=\gamma_{nl}+\gamma_{abs}$ is dominated by the rate of nonlinear frequency conversion $\gamma_{nl}$. Experimentally this means that the microwave WGM resonances will be considerably broadened when the optical pump is turned on. Then if the optical WGMs couple strongly to the input and output free-space beams (i.e. the resonator is optically over-coupled), unity-efficient conversion is theoretically possible \cite{Matsko08THzTheory}. Efficient in- and out-coupling of the optical WGMs requires the external rim of the ring to be shaped as a spheroid with the radii ratio equal to \cite{strekalov08THz1} \begin{equation} \frac{\rho}{R} = \frac{n_p^2-n^2}{n_p^2},\label{r2r} \end{equation} where $n_p$ is the refraction index of the prism used for the optical coupling. Let us assume for the rest of the paper that the optical pump has the wavelength $\lambda= 1.55$ $\mu$m ($\omega_0=1.2\cdot 10^{15}$ s$^{-1}$). Then for a diamond prism $n_p=2.384$, while e.g. for lithium niobate $n_{a,b} = n_e = 2.138$. Therefore according to (\ref{r2r}), $\rho/R=0.196$. \section{Type-I up-conversion} \begin{figure}[b] \vspace*{-0.2in} \centerline{ \input epsf \setlength{\epsfxsize}{3.3in} \epsffile{phasematch.eps}}\vspace*{-0.2in} \caption[]{\label{fig:phasematch} A phase-matching solution for lithium niobate, Type-I up-conversion, is achieved at $\omega_c= 2\pi\cdot 100$ GHz, $L_c=13$. Curve (A) is the phase-matching curve resulting from Eq.~(\ref{phasematch1}); curve (B) is the microwave dispersion curve calculated numerically. }\vspace*{-0.1in} \end{figure} Let us find the phase-matching solutions.
First, we assume that the microwave as well as \emph{both} optical fields are polarized along the optical axis, so that the largest nonlinearity coefficient of lithium niobate, $d_{33}$, is used. In this configuration, which we dub Type-I up-conversion, $n_a=n_b$ and the last term in Eq.~(\ref{phasematch1}) disappears. To find the phase-matching solutions, we plot Eq.~(\ref{phasematch1}) together with the microwave dispersion curve, see Fig.~\ref{fig:phasematch}. Intersection of the two curves at an integer value of $L_c$ indicates that the phase matching is achieved. The accuracy to which the two frequencies should match at this value of $L_c$ is determined by the smaller of the optical and microwave WGM linewidths. Notice that the microwave dispersion curve (B) in Fig.~\ref{fig:phasematch} depends on all four geometric parameters of the ring, while the phase-matching curve (A) depends only on the external radius $R$. This gives us enough freedom to achieve the phase matching at a desired microwave frequency. In the example shown in Fig.~\ref{fig:phasematch} the phase matching was found for $\omega_c= 2\pi\cdot 100.0$ GHz at $L_c=13$. The ring thickness was $h=292\,\mu$m, its inner and outer radii were $R_{in}=2.48$ mm and $R=2.9$ mm, and the rim curvature found from (\ref{r2r}) was $\rho=568\,\mu$m. The microwave dispersion curve is obtained by numerical simulation. These simulations have been carried out in a finite element solver, COMSOL \cite{comsol}, adapting the axisymmetric formulation of Oxborrow \cite{oxborrow}. Fused silica was selected as the post material because of its low microwave absorption and relatively small microwave refraction index ($n=1.9$ at 100 GHz), compared to the larger values $n_e=5.15$ and $n_o=6.72$ for lithium niobate \cite{palik}. 
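As a quick numerical sanity check of the geometry and the phase-matching point quoted above (our own sketch, using only the numbers already given in the text), one can verify both the rim shape from Eq.~(\ref{r2r}) and the phase-matched frequency at $L_c=13$:

```python
# Sanity check of the Type-I phase-matching point and rim geometry
# for the lithium niobate ring quoted in the text:
# R = 2.9 mm, n_b = n_e = 2.138, n_p = 2.384 (diamond prism).
import math

c = 2.9979e8              # speed of light, m/s
R = 2.9e-3                # outer ring radius, m
n_b = 2.138               # extraordinary index of LiNbO3 at 1.55 um
n_p = 2.384               # diamond prism index

# Spheroidal rim, Eq. (r2r): rho/R = (n_p^2 - n^2)/n_p^2.
rho = R * (n_p**2 - n_b**2) / n_p**2
print(f"rho = {rho*1e6:.0f} um")           # about 568 um

# Optical FSR: Omega_b = c/(R n_b) is an angular frequency,
# so the FSR in ordinary frequency units is c/(2 pi R n_b).
fsr = c / (2 * math.pi * R * n_b)
print(f"optical FSR = {fsr/1e9:.2f} GHz")  # about 7.7 GHz

# Phase-matching curve (A) at L_c = 13:
print(f"f_c = {13*fsr/1e9:.1f} GHz")       # about 100 GHz
```

Both numbers reproduce the values stated in the text ($\rho=568\,\mu$m and $\omega_c=2\pi\cdot 100$ GHz at $L_c=13$).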
In this configuration the microwave field is strongly concentrated inside of the resonator ring, see Fig.~\ref{fig:wfRing}, which provides both good coupling with the external waveguide and good overlap with the optical field. \begin{figure}[t] \begin{center} \includegraphics[clip,angle=0,height=5cm]{wfM13bfCurved.eps} \caption{Absolute value of the $E_z$-component of a microwave WGM in a lithium niobate resonator with $\omega_c= 2\pi\cdot 100$ GHz, $L_c=13$. The plot is on a log scale with the color map given on the right. A sketch of the geometry is shown in the inset, where the red box represents the calculation window.} \label{fig:wfRing} \end{center}\vspace*{-0.2in} \end{figure} We would like to point out that the purpose of the numeric simulation is not to achieve the exact phase matching, but only to unambiguously determine the angular momentum $L_c$. An error small compared to the microwave FSR would be tolerable, because the phase matching can be fine-tuned by modifying the microwave dispersion of the system. For example, by placing a metal ring on the fused silica post near the resonator ring, we have been able to tune the microwave frequency by more than 0.5 GHz without appreciable degradation of the quality factor. \section{Type-II up-conversion} Now let us consider the Type-II up-conversion, i.e., when the signal polarization is orthogonal to the pump polarization \cite{savchenkov09ssb}. In this case the birefringence term in (\ref{phasematch1}) does not vanish. In fact, in strongly birefringent materials it can be quite large. In lithium niobate, for example, it is equal to approximately $\pm 2\pi\cdot 6.6$ THz. The ``plus" sign corresponds to either anti-Stokes conversion of the ordinary polarized pump, or Stokes conversion of the extraordinary polarized pump. Similarly, the ``minus" sign corresponds to either Stokes conversion of the ordinary polarized pump, or anti-Stokes conversion of the extraordinary polarized pump. 
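The magnitude of this term can be checked by inserting telecom-band indices of lithium niobate into the birefringence term of Eq.~(\ref{phasematch1}) (our rough estimate; the values $n_o\approx 2.21$ and $n_e\approx 2.14$ are assumed here, not quoted in the text):

```latex
\left|\,\omega_a\,\frac{n_a-n_b}{n_b}\right|
 \approx 1.2\cdot 10^{15}\,\mathrm{s}^{-1}\times\frac{2.21-2.14}{2.14}
 \approx 3.9\cdot 10^{13}\,\mathrm{s}^{-1}\approx 2\pi\cdot 6.2\ \mathrm{THz},
```

consistent, to within the rounding of the indices, with the $\pm 2\pi\cdot 6.6$ THz quoted above.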
It is easy to see that neither situation allows for a phase-matching solution that is compatible with the microwave dispersion equation and lies within the transparency window of lithium niobate. \begin{figure}[t] \vspace*{-0.2in} \centerline{ \input epsf \setlength{\epsfxsize}{3.3in} \epsffile{phasematchT.eps}}\vspace*{-0.2in} \caption[]{\label{fig:phasematchT} Phase-matching solutions for lithium tantalate, Type-II up-conversion, can be achieved at $L_c=76,77,78$ with the DC bias voltages shown on the plot. Curve (A) is the phase-matching curve resulting from Eq.~(\ref{phasematch1}); curve (B) is the microwave dispersion curve calculated numerically. }\vspace*{-0.1in} \end{figure} The Type-II up-conversion can be realized, however, in weakly birefringent materials. For example, in stoichiometric lithium tantalate the birefringence term is equal to approximately $\pm 2\pi\cdot 0.366$ THz \cite{bruner03}. While the ``positive" solution is still impossible, the ``negative" phase-matching solution can now be achieved, see Fig.~\ref{fig:phasematchT}. One important distinction between the Type-I and Type-II configurations is that in the latter the microwave field has to be polarized in the plane of the ring, as required for efficient nonlinear coupling \cite{savchenkov09ssb}. The radial field distribution for one of the microwave WGMs found in Fig.~\ref{fig:phasematchT} is shown in Fig.~\ref{fig:wfRing1}. \begin{figure}[b] \begin{center} \includegraphics[clip,angle=0,height=5cm]{wfM80.eps} \caption{Absolute value of the $E_r$-component of a microwave WGM in a lithium tantalate resonator with $\omega_c= 2\pi\cdot 238$ GHz, $L_c=80$. The plot is on a log scale with the color map given on the right. 
A sketch of the geometry is shown in the inset, where the red box represents the calculation window.} \label{fig:wfRing1} \end{center}\vspace*{-0.2in} \end{figure} For the numeric simulations shown in Figs.~\ref{fig:phasematchT} and \ref{fig:wfRing1} we used the microwave refraction index of lithium tantalate $n_e\approx n_o=6.5$ \cite{auston88}. All dimensions of this ring were taken to be the same as for the lithium niobate ring, except the inner radius, which was assumed to be 2.61 mm. In these simulations we have not attempted to achieve the integer-valued solution by adjusting the ring dimensions. Instead, we take advantage of the fact that the ordinary and extraordinary optical WGM families can be frequency-tuned relative to each other by temperature or by a DC bias voltage. For voltage tuning, the viable case of Eq.~(\ref{phasematch1}) can be put in the form \begin{equation} \omega_c(L_c)\approx\Omega L_c-\omega_0\frac{\Delta n}{n}-\omega_0n^2\frac{r_{33}-r_{31}}{2}\frac{U}{h},\label{phasematch2} \end{equation} where the difference of electro-optical constants $r_{33}-r_{31}\approx 22$ pm/V \cite{boyd} determines the tuning of the phase-matching frequency with the DC bias voltage $U$ applied across the ring along its axis. Equation (\ref{phasematch2}) allows us to calculate the bias voltages required to achieve the phase matching in the three cases shown in Fig.~\ref{fig:phasematchT}. We have already pointed out that the microwave WGMs can also be frequency-tuned by a significant fraction of their FSR. It is interesting to contemplate the possibility of making a broadly tunable microwave up-converter enabled by the combination of the tunable phase matching (\ref{phasematch2}) and tunable microwave WGMs. 
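To get a feel for the voltage tuning rate implied by the last term of Eq.~(\ref{phasematch2}) (a rough estimate of ours, using $n\approx 2.12$ and the ring thickness $h=292\,\mu$m quoted earlier):

```latex
\left|\frac{\partial\omega_c}{\partial U}\right|
 = \omega_0 n^2\,\frac{r_{33}-r_{31}}{2h}
 \approx 1.2\cdot 10^{15}\,\mathrm{s}^{-1}\times 2.12^2\times
   \frac{22\ \mathrm{pm/V}}{2\times 292\ \mu\mathrm{m}}
 \approx 2\pi\cdot 32\ \mathrm{MHz/V},
```

so shifting the phase-matching frequency by of order a GHz requires a bias of order a hundred volts.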
\section{Conversion efficiency} To estimate the Type-I conversion efficiency, we use the result of Ref.~\cite{ilchenko03mod} for the ratio of the optical sideband powers $P_\pm$ to the pump power $P_0$ in an electro-optical modulator: \begin{equation} \frac{P_\pm}{P_0}=\left(\frac{2\xi\sqrt{P_M}}{1+2\xi^2P_M}\right)^2,\quad \xi=\frac{4gQ}{\omega_0}\sqrt{\frac{Q_M}{\hbar\omega^2_c}},\label{efficiency1} \end{equation} where $Q$ and $Q_M$ are the optical and microwave WGM quality factors, respectively, and $P_M$ is the microwave power. We are interested in up-conversion of very low microwave powers, eventually at the single-photon level, which allows us to make the approximation $1+2\xi^2P_M\approx 1$, turn Eq.~(\ref{efficiency1}) around, and find the up-conversion efficiency in terms of the photon numbers: \begin{equation} \frac{\langle N_\pm\rangle}{\langle N_M\rangle}=\frac{P_\pm}{P_M}\frac{\omega_c}{\omega_0}\approx 4\xi^2P_0\frac{\omega_c}{\omega_0}.\label{efficiency2} \end{equation} Our numerical simulations allow us to estimate the coupling constant (\ref{g}) and to find the optical pump power $P_0$ required for the conversion efficiency (\ref{efficiency2}) to approach unity. Near unity conversion efficiency the estimate (\ref{efficiency2}) may not be accurate, and saturation effects need to be taken into account. However, the purpose of our estimate is to show that nearly absolute conversion efficiency can be achieved even with modest assumptions concerning the system parameters. For the Type-I up-conversion in lithium niobate, $r_{ij}=r_{33}= 29$ pm/V $= 8.7\cdot 10^{-7}$ esu \cite{boyd}, $n_{a,b} = 2.137$ and $n_c=5.15$. The eigenfunctions of the optical modes are practically equivalent, $\Psi_a\approx \Psi_b$, and the microwave eigenfunction $\Psi_c$ can be treated as a constant and taken out of the integral. The ``mode volume" $V$ in (\ref{g}) is defined as a volume integral of the absolute square of the eigenfunction. 
Therefore the optical mode volume cancels out, while the microwave mode volume and the field amplitude at the optical WGM location are estimated from the simulation data. To complete the estimate we assume $Q=10^8$ and $Q_M=100$. Substituting these numbers into Eq.~(\ref{efficiency2}), we find the photon-number conversion efficiency approaching unity (1/2 for the Stokes and 1/2 for the anti-Stokes conversion efficiencies) at the optical pump power $P_0\approx 50$ mW. For the Type-II up-conversion in lithium tantalate, $r_{ij}=r_{42}=r_{51}= 20$ pm/V $= 6\cdot 10^{-7}$ esu \cite{boyd}, $n_{a,b} = 2.12$ and $n_c=6.5$ \cite{roberts92,shoji97}. Eq.~(\ref{efficiency1}) in this case remains the same except for the factor 2 in the denominator, which does not affect the expression for the conversion efficiency (\ref{efficiency2}). The latter approaches unity at $P_0\approx 120$ mW. \section{Demonstration of microwave WGMs} \begin{figure}[b] \centerline{ \input epsf \setlength{\epsfxsize}{2.7in} \epsffile{schematic3.eps}} \caption[]{\label{fig:schematic2}A lithium niobate WGM ring resonator mounted on a fused silica post (photo) coupled with a tapered dielectric waveguide (drawing). The ring height is 0.29 mm, the outer radius is 2.9 mm.} \end{figure} As an experimental demonstration, we machined a lithium niobate ring closely matching the parameters calculated above and mounted it on a slightly tapered fused silica post, as shown in Fig.~\ref{fig:schematic2}. In this experiment the outer rim of the ring was shaped as a cylinder ($\rho = \infty$) instead of a spheroid, which had little effect on its microwave spectrum. At the mounting location, the outer radius of the silica stem was 2.48 mm. However, due to the roughness and tapering caused by the drilling process, the effective inner radius of the lithium niobate ring was slightly larger. 
In our analysis we treated this radius as a free parameter and found an excellent agreement between the numerical simulations and the experimental data for $R_{in}=2.61$ mm. \begin{figure}[t]\vspace*{-0.1in} \centerline{ \input epsf \setlength{\epsfxsize}{3.4in} \epsffile{RFringTuning1.eps}}\vspace*{-0.2in} \caption{Dependence of the microwave spectrum on the resonator's outer radius. The experimental data and the finite-element calculations show very good agreement for the effective inner ring radius determined to be $R_{in}=2.61$ mm. }\label{fig:ringtuning} \end{figure} Microwaves were supplied by a tapered dielectric waveguide coupling to the resonator through the evanescent field. In this experiment we used a fused silica rod stretched over a hydrogen torch to approximately half of its initial diameter of 3 mm. This technique allowed us to achieve the optimal coupling by translating the resonator along the waveguide until their \emph{effective} microwave refraction indices match. We gradually polished the disk rim, reducing the outer radius from its initial value of 2.99 mm to 2.90 mm. The microwave dispersion curves were thereby shifted, see Fig.~\ref{fig:ringtuning}. \begin{figure}[t] \centerline{ \input epsf \setlength{\epsfxsize}{3.3in} \epsffile{RFringSpectrum.eps}}\vspace*{-0.2in} \caption[]{\label{fig:ringspectrum} Microwave spectrum of the ring resonator shown in Fig.~\ref{fig:schematic2}. The modes are labeled by the orbital number $L_c$. } \end{figure} The thin ring used in our experiment acts as a looped single-mode microwave waveguide. Therefore the mode sequence in Fig.~\ref{fig:ringtuning} is indexed by the orbital number $L_c$ determined from the numeric simulation. The simulated dispersion curves are also shown in Fig.~\ref{fig:ringtuning}. The microwave spectrum for the final value $R=2.90$ mm is shown in Fig.~\ref{fig:ringspectrum}. 
From this spectrum we see that coupling of the signal from the THz waveguide into the resonator as high as 82\% was achieved. We determine the microwave WGM quality factor to be $Q_M\approx 100$. \section{Summary} To summarize, we have studied microwave WGMs in ring resonators. The purpose of this study has been to determine the utility of such systems as efficient microwave-to-optics converters, with the focus on efficient coupling of microwaves into the resonator's WGMs; improving the overlap between the microwave and optical fields; and a theoretical demonstration of the phase matching at a desired signal frequency. All of this has been achieved, and we believe that the actual demonstration of nearly unity-efficient microwave-to-optics conversion with subsequent optical photon counting is now feasible. The benefits from its practical implementation are expected in the areas of quantum information (e.g., quantum computing with quantum electronic circuits), astronomy and spectroscopy. \section{Acknowledgements} The experimental research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.
\section{Introduction} Recently, some intriguing anomalies have been found in the ratios of branching fractions of semi-leptonic B-meson decays into electron and muon pairs, \begin{equation}\label{eq:rk} \mathcal{R}_{K}=\frac{Br\left(B^+\rightarrow K^+\mu^+\mu^-\right)}{Br\left(B^+\rightarrow K^+ e^+e^-\right)},\; \mathcal{R}_{K^*}=\frac{Br\left(B\rightarrow K^{*}\mu^+\mu^-\right)}{Br\left(B\rightarrow K^{*}e^+e^-\right)}. \end{equation} The LHCb collaboration presented the following values in Refs.~\cite{Aaij:2014ora, Aaij:2017vbb, RKstar}: \begin{align} \mathcal{R}_{K} &= 0.745^{+0.090}_{-0.074}({\rm stat}) \pm 0.036({\rm syst}), \\ \mathcal{R}_{K^*} &= \left\{ \begin{array}{ll} 0.66^{+0.11}_{-0.07} \pm 0.03, & \textrm{for } (2m_\mu)^2 < q^2 < 1.1 \ \mathrm{GeV}^2, \\[2mm] 0.69^{+0.11}_{-0.07} \pm 0.05, & \textrm{for } 1.1 \ \mathrm{GeV}^2 < q^2 < 6.0 \ \mathrm{GeV}^2, \end{array} \right. \end{align} where $q^2$ is the invariant mass squared of the final lepton pair. However, the standard model (SM) predicts $\mathcal{R}_{K}\approx 1\approx\mathcal{R}_{K^*}$~\cite{Hiller:2003js, Bordone:2016gaq} in the above kinematic region. It has been claimed that the overall deviation from the SM in B-physics exceeds $4\sigma$ when global analyses are performed~\cite{Capdevila:2017bsm, DAmico:2017mtc, Altmannshofer:2017yso} including other anomalies (the branching ratios of $B\rightarrow K^{(*)}\mu^+\mu^-$~\cite{Aaij:2014pli} and $B_s\rightarrow \phi\mu^+\mu^-$~\cite{Aaij:2015esa}, and the angular distribution of the decay rate of $B\rightarrow K^{(*)}\mu^+\mu^-$, in particular the $P_5'$ observables~\cite{Aaij:2015oid, Aaij:2014pli, Aaij:2013qta}). If these anomalies are truly due to physical effects, then lepton flavour universality (LFU) is violated and new physics beyond the SM is warranted. We shall focus exclusively on the anomalies in $\mathcal{R}_{K}$ and $\mathcal{R}_{K^*}$, since their theoretical uncertainty is expected to be small. 
Economical explanations involve four-fermion effective operators, such as $(\bar s \gamma_\mu P_L b)(\bar \ell \gamma^\mu \ell)$; see Ref.~\cite{Alonso:2014csa} for a more systematic discussion. More concrete models have also been constructed to generate such an effective operator~\cite{Alonso:2017uky, Sala:2017ihs, Bishara:2017pje, Ellis:2017nrp, Bonilla:2017lsq, Feruglio:2017rjo, Greljo:2017vvb, Alonso:2017bff, Wang:2017mrd, Alok:2017sui, Alok:2017jaf, DiChiara:2017cjq, Kamenik:2017tnu, Cai:2017wry, Ghosh:2017ber, Becirevic:2017jtw, Celis:2017doq}. Various models~\cite{Gauld:2013qja, Buras:2013dea, Altmannshofer:2014cfa, Crivellin:2015mga, Crivellin:2015lwa, Celis:2015ara, Greljo:2015mma, Altmannshofer:2015mqa, Niehoff:2015bfa, Belanger:2015nma, Falkowski:2015zwa, Carmona:2015ena, Chiang:2016qov, Becirevic:2016zri, Boucenna:2016wpr, Megias:2016bde, GarciaGarcia:2016nvr, Ko:2017lzd, Megias:2017ove, Hiller:2014yaa, Gripaios:2014tna, Crivellin:2017zlb, Alonso:2015sja, Sahoo:2016pet, Hu:2016gpe, Chen:2017hir, Ko:2017yrd, Geng:2017svp, Altmannshofer:2017poe} in the literature, including extra $Z'$, lepto-quark and loop-induced mechanisms, were proposed to address similar issues in the past. In this paper we focus on $Z'$ models with an extra $U(1)$ gauge symmetry. In the literature, usually a specific charge assignment is chosen, without noting that many other options could be equally viable. Here, we provide a systematic investigation of general family $U(1)$ gauge symmetries and illustrate how to choose charges consistently to obtain anomaly-free models without introducing new fermions, except for three right-handed neutrinos. For family-universal models, there is only one non-trivial charge assignment, the well-known $B-L$ symmetry. For family non-universal models, however, infinitely many solutions exist as linear combinations of five independent anomaly-free bases. 
We also show how some models can provide explanations for the anomalies in B-meson decays. This paper is organized as follows. In Section~\ref{sec:model}, we first discuss the consistency conditions for the $U(1)'$ charge assignment, and then give an example to show how a realistic model can be constructed to match the observed fermion masses and mixings. In Section~\ref{sec:pheno}, we exemplify one charge assignment in the context of the anomalies in B-meson decays. Finally, we give our conclusion. \begin{table}[t] \begin{tabular}{|c|c|c|c|c|} \hline & $SU(3)_{c}$ & $SU(2)_{L}$ & $U(1)_{Y}$ & $U(1)^{\prime}$ \tabularnewline \hline $Q_{L}^{i}$ & $3$ & $2$ & $+1/3$ & $z_{Q^{i}}=\left( a, b, c \right)$\tabularnewline \hline $u_{R}^{i*}$ & $\bar{3}$ & $1$ & $-4/3$ & $z_{u_{i}^{*}}=\left(-a, -b, -c\right)$\tabularnewline \hline $d_{R}^{i*}$ & $\bar{3}$ & $1$ & $+2/3$ & $z_{d_{i}^{*}}=\left(-a,-b, -c\right)$\tabularnewline \hline\hline $L^{i}$ & $1$ & $2$ & $-1$ & $ z_{L^{i}}= \left(d, e, f\right)$\tabularnewline \hline $e_{R}^{i*}$ & $1$ & $1$ & $+2$ & $ z_{e_{i}^{*}}=\left(-d,-e,-f\right)$\tabularnewline \hline $\nu_{R}^{i*}$ & $1$ & $1$ & $0$ & $ z_{\nu_{i}^{*}}=\left(-d,-e,-f\right)$\tabularnewline \hline \end{tabular} \caption{An example of an anomaly-free realization of $U(1)'$ charges for the SM chiral fermions. The last column shows the $U(1)'$ charges for the three generations, where $a,b,c,d,e$ and $f$ are arbitrary real numbers. The $U(1)'$ gauge anomaly is canceled in the quark and lepton sectors separately if $c=-(a+b)$ and $f=-(d+e)$; otherwise Eq.~(\ref{eq:abc}) has to hold. \label{tab:charges}} \end{table} \section{Anomaly-Free Family-Nonuniversal $U(1)'$ Models}\label{sec:model} In this section, we give some general discussion of the anomaly-free conditions for $U(1)'$ models, without introducing extra chiral fermions other than three right-handed neutrinos. 
We denote the weak doublets and singlets as follows ($\psi=u,d,e,\nu$): \[ Q_{L}^{i}=\left(\begin{array}{c} u_{L}^{i}\\ d_{L}^{i}\end{array}\right),\; L^{i}=\left(\begin{array}{c} \nu_{L}^{i}\\ e_{L}^{i}\end{array}\right),\;\psi_{L,R}^{i}=P_{L,R}\psi^{i}, \] with $ P_{L}=(1-\gamma_{5})/2,P_{R}=(1+\gamma_{5})/2$ and $i=1,2,3$ as the family/generation index. The anomaly is proportional to the completely symmetric constant factor \[ D_{\alpha\beta\gamma}\equiv\textrm{tr}\left[\left\{ T_{\alpha},T_{\beta}\right\} T_{\gamma}\right], \] where $T_{\alpha}$ is the representation of the gauge algebra on the set of all left-handed fermion and anti-fermion fields, and $``\textrm{tr}"$ stands for summing over those fermion and anti-fermion species. Note that the $T$s above need not be the same, since they depend on the gauge groups referred to and also on the chiral fermions running in the loop of the triangle anomaly diagram. The anomaly-free conditions for the theory are given by \begin{eqnarray} 0 & = & \sum_{i=1}^{3}(2z_{Q_{i}}+z_{u_{i}^{*}}+z_{d_{i}^{*}}),\quad \left[SU(3)^{2}U(1)^{\prime}\right], \nonumber\\ 0 & = & \sum_{i=1}^{3}(6z_{Q_{i}}+3z_{u_{i}^{*}}+3z_{d_{i}^{*}}+2z_{L_{i}}+z_{e_{i}^{*}}+z_{\nu_{i}^{*}}),\quad \left[\textrm{global } U(1)^{\prime}\right], \nonumber\\ 0 & = & \sum_{i=1}^{3}(z_{Q_{i}}^{2}-2z_{u_{i}^{*}}^{2}+z_{d_{i}^{*}}^{2}-z_{L_{i}}^{2}+z_{e_{i}^{*}}^{2}),\quad \left[U(1)^{\prime}{}^{2}U(1)_{Y}\right], \nonumber\\ 0 & = & \sum_{i=1}^{3}(6z_{Q_{i}}^{3}+3z_{u_{i}^{*}}^{3}+3z_{d_{i}^{*}}^{3}+2z_{L_{i}}^{3}+z_{e_{i}^{*}}^{3}+z_{\nu_{i}^{*}}^{3}),\quad \left[U(1)^{\prime}{}^{3}\right],\nonumber\\ 0 & = & \sum_{i=1}^{3}(3z_{Q_{i}}+z_{L_{i}}),\quad \left[SU(2)^{2}U(1)^{\prime}\right], \nonumber\\ 0 & = & \sum_{i=1}^{3}(\frac{1}{6}z_{Q_{i}}+\frac{4}{3}z_{u_{i}^{*}}+\frac{1}{3}z_{d_{i}^{*}}+\frac{1}{2}z_{L_{i}}+z_{e_{i}^{*}}),\quad \left[U(1)_{Y}^{2}U(1)^{\prime}\right]. 
\label{eq:anomaly} \end{eqnarray} So far, the discussion has been standard, and the solution space of the above equations is expected to be large, since we have more variables than equations. Interestingly, one can easily check that the first four equations are satisfied automatically if the fermions are vector-like under the new $U(1)'$ gauge symmetry, namely \begin{equation} z_{Q_{i}}=-z_{u_{i}^{*}}=-z_{d_{i}^{*}},\; z_{L_{i}}=-z_{e_{i}^{*}}=-z_{\nu_{i}^{*}}. \end{equation} With the vector-like charge assignment, we only need to take care of the last two linear equations, which actually reduce to just one, \begin{equation}\label{eq:linear} 3\sum_{i=1}^{3}z_{Q_{i}}=-\sum_{i=1}^{3}z_{L_{i}}. \end{equation} This equation is much easier to solve, but can have multiple solutions. For example: \begin{enumerate} \item Family-universal model: \begin{equation} z_{Q}=-z_{L}/3, \end{equation} which is the unique non-trivial solution, the well-known $B-L$ gauge symmetry. \item Family non-universal models: \begin{equation} 3\sum_{i=1}^{3}z_{Q_{i}}=-\sum_{i=1}^{3}z_{L_{i}}, \end{equation} where the $z_{Q_i}$ (or $z_{L_i}$) are not all identical. Since we have six variables but just one constraint, infinitely many solutions exist. For example, we are free to choose just one generation to be charged and the other two to be singlets, or any assignment for the quark sector with a proper choice of charges for the leptons. Some models have been discussed in Refs.~\cite{Liu:2011dh, Xing:2015fdg, Kownacki:2016pmx, Asai:2017ryy}. In general, we can have the charge assignment as in Table~\ref{tab:charges}, where $a,b,c,d,e$ and $f$ are arbitrary real numbers satisfying \begin{equation}\label{eq:abc} 3(a+b+c)=-(d+e+f). \end{equation} As a special case, we could also imagine that the anomalies are canceled separately in the quark and lepton sectors, namely $\sum z_{Q_{i}}=0=\sum z_{L_{i}}$ if $c=-(a+b)$ and $f=-(d+e)$. 
Such a parametrization includes some well-studied models: for example, $a=b=c=0$ with $d=0, e=-f\neq 0$ corresponds to $L_{\mu}-L_{\tau}$, $d=-e\neq 0$ with $f=0$ to $L_{e}-L_{\mu}$, and so on. Note that Eq.~(\ref{eq:linear}) is linear, so any linear combination of anomaly-free realizations also satisfies this equation, such as $x(B-L)+y(L_{\mu}-L_{\tau})+z(L_{e}-L_{\mu})+\dots$ The solution space of Eq.~(\ref{eq:abc}) is five-dimensional, so we can choose the following five independent solutions as the bases: \begin{equation}\label{eq:bases} L_{e}-L_{\mu},\; L_{\mu}-L_{\tau},\; B_{u}-B_{c},\; B_{c}-B_{t},\; B-L. \end{equation} \end{enumerate} As emphasized above, we are restricting ourselves to extended models with only three additional right-handed neutrinos. If more particles are to be introduced, the requirements on the charge assignment change correspondingly. For example, one could also introduce more SM-singlet Weyl fermions $\chi_j$ with $U(1)'$ charges $X_j$; in the case where the SM fermions are vector-like under $U(1)'$, this gives \begin{equation}\label{eq:abcx} 3(a+b+c)+(d+e+f)=0,\; \sum_j X_j=0,\; \sum_j X^3_j = 0. \end{equation} Some fermion $\chi_k$ could actually be a dark matter (DM) candidate. For instance, a Majorana mass term $\bar{\chi}_k^c \chi_k$ would be induced after $U(1)'$ symmetry breaking by a SM-singlet scalar $S$ with $U(1)'$ charge $2X_k$, since interactions like $\bar{\chi}_k^c \chi_k S^\dagger$ are allowed. A vector-like $\chi_k$ is another popular scenario for DM, where the Dirac mass term $\bar{\chi}_k\chi_k$ is allowed. In both cases, a $Z_2$ symmetry can protect the stability of the DM. To build realistic models with the correct SM fermion masses and mixings, we need to introduce scalar fields $H_i$ to spontaneously break the gauge symmetries. The scalar content depends strongly on the charge assignments of these chiral fermions. 
In the most general case, for the quark sector we can introduce several Higgs doublets with hypercharge $Y=+1$ and $U(1)'$ charges $a-b, a-c$ and $b-c$, to write renormalizable Yukawa interactions giving the desired quark masses and CKM mixing matrix. In the lepton sector, Higgs doublets with $U(1)'$ charges $d-e, d-f$ and $e-f$ suffice to give the lepton masses and neutrino mixing. Below, we shall give an example with an explicit charge assignment to illustrate how consistent models can be constructed~\cite{Liu:2011dh}. Let us focus on the quark sector first. We shall use the following setup: \begin{equation} z_{Q_{i}}=(1,1,-2). \end{equation} The above symmetry can be regarded as $3(B_{u}-B_{c})+6(B_{c}-B_{t})$, expanded in the five bases of Eq.~(\ref{eq:bases}). Some phenomenology was first studied in Ref.~\cite{Liu:2011dh}, and later in Ref.~\cite{Crivellin:2015lwa} along with an $L_{\mu}-L_{\tau}$ symmetry in the lepton sector. Here, this model is introduced just for illustration and will be referred to in comparison with the model for the B-anomalies in Section~\ref{sec:pheno}. With the above $U(1)'$ charges, a SM Higgs doublet $H_{1}$ with zero $U(1)'$ charge can cause spontaneous electroweak symmetry breaking and generate the masses of all the SM particles, but not the correct flavor mixing. To see what happens in the quark sector, we can write the gauge-invariant Yukawa terms as \begin{eqnarray} \mathcal{L}_{H_{1}}=\sum_{i,j=1}^{2}\left(y_{ij}^{u}\bar{Q}_{L,i}\tilde{H}_{1}u_{R,j}+y_{ij}^{d}\bar{Q}_{L,i}H_{1}d_{R,j}\right)+y_{33}^{u}\bar{Q}_{L,3}\tilde{H}_{1}u_{R,3}+y_{33}^{d}\bar{Q}_{L,3}H_{1}d_{R,3}+h.c.,\label{eq:Y_H1} \end{eqnarray} where $y_{ij}^{u,d}$ are the Yukawa couplings. After $H_1$ gets a vacuum expectation value (VEV), the resulting mass matrices for $u$ and $d$ have the following form: \[ \mathcal{M}_{u,d}^{H_{1}}\sim\left(\begin{array}{ccc} \times & \times & 0\\ \times & \times & 0\\ 0 & 0 & \times\end{array}\right). 
\] This kind of mass matrix cannot give the correct CKM matrix, since the third generation does not mix with the other two. Now, if we have two more Higgs doublets, $H_{2}$ with $U(1)'$ charge $-3$ and $H_{3}$ with $+3$, the following Yukawa terms are allowed: \begin{eqnarray} \mathcal{L}_{H_{2/3}} & = & y_{13}^{u}\bar{Q}_{L,1}\tilde{H}_{2}u_{R,3}+y_{23}^{u}\bar{Q}_{L,2}\tilde{H}_{2}u_{R,3} + y_{31}^{u}\bar{Q}_{L,3}\tilde{H}_{3}u_{R,1}+y_{32}^{u}\bar{Q}_{L,3}\tilde{H}_{3}u_{R,2}\nonumber \\ & + & y_{13}^{d}\bar{Q}_{L,1}H_{3}d_{R,3}+y_{23}^{d}\bar{Q}_{L,2}H_{3}d_{R,3} + y_{31}^{d}\bar{Q}_{L,3}H_{2}d_{R,1}+y_{32}^{d}\bar{Q}_{L,3}H_{2}d_{R,2}+h.c. \label{eq:Y_H23} \end{eqnarray} When both $H_{2/3}$ get VEVs, these terms contribute to the mass matrices with \[ \mathcal{M}_{u,d}^{H_{2/3}}\sim\left(\begin{array}{ccc} 0 & 0 & \times\\ 0 & 0 & \times\\ \times & \times & 0\end{array}\right).\] Now diagonalizing the total mass matrices, $\mathcal{M}_{u,d}^{H_{1}}+\mathcal{M}_{u,d}^{H_{2/3}}$, results in three-flavor mixing. Note that we cannot replace $\tilde{H}_3 (H_3)$ with $H_2 (\tilde{H}_2)$ in Eq.~(\ref{eq:Y_H23}), because the $U(1)_Y$ symmetry would forbid that, although only one of the two doublets is necessary to obtain three-flavor mixing. In the case of no $H_3$, or of $H_3$ not getting a VEV, the mass matrices are \[ \mathcal{M}_{u}^{H_{2}}\sim\left(\begin{array}{ccc} 0 & 0 & \times\\ 0 & 0 & \times\\ 0 & 0 & 0\end{array}\right), \mathcal{M}_{d}^{H_{2}}\sim\left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0 & 0\\ \times & \times & 0\end{array}\right). \] Three-flavor mixing can still arise after diagonalization of $\mathcal{M}_{u,d}^{H_{1}}+\mathcal{M}_{u,d}^{H_{2}}$. The lepton sector can be treated in the same way, since similar physics appears. For example, if $z_{L_{i}}=(0,1,-1)$, extra Higgs doublets with charges $\pm 1$ and/or $\pm 2$ would be able to give the required lepton masses and mixing. The gauge bosons get their masses through the Higgs mechanism. 
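Returning to the quark textures above, the counting can be mechanized. In the following sketch (ours, not part of the original analysis), an up-type Yukawa entry $\bar{Q}_{L,i}\tilde{H}u_{R,j}$ is taken to be $U(1)'$-invariant iff $z_{Q_i}-z_{u_j}$ matches the $U(1)'$ charge of one of the available conjugated doublets $\tilde{H}$:

```python
# Which up-type Yukawa entries are U(1)'-invariant for the model
# z_Q = z_u = (1, 1, -2) (vector-like charges)? Entry (i, j) is
# allowed iff z_Q[i] - z_u[j] equals the charge of an available
# H-tilde (H-tilde carries minus the U(1)' charge of H).
zQ = [1, 1, -2]
zu = [1, 1, -2]

def texture(htilde_charges):
    """Return a 3x3 pattern: 'x' where the Yukawa entry is allowed."""
    return [['x' if (zQ[i] - zu[j]) in htilde_charges else '0'
             for j in range(3)] for i in range(3)]

# H1 alone (U(1)' charge 0): block-diagonal, no 3rd-generation mixing.
for row in texture({0}):
    print(' '.join(row))
print()
# H2 and H3 (tilde-charges +3 and -3): the complementary entries open up.
for row in texture({3, -3}):
    print(' '.join(row))
```

The two printed patterns reproduce the textures of $\mathcal{M}_{u}^{H_{1}}$ and $\mathcal{M}_{u}^{H_{2/3}}$ shown above.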
When $H_2$ and $H_3$ get VEVs, the $U(1)'$ gauge symmetry is also broken. If the $U(1)'$ gauge coupling is comparable to the electroweak coupling, the $Z'$ boson is expected to have a mass around the electroweak scale, which is highly constrained. To get a heavy $Z'$ boson, an electroweak-singlet scalar $S$ with $U(1)'$ charge $z_{s}$ can be introduced. Then the following vacuum configuration breaks the gauge symmetries down to $U(1)_{em}$: \begin{equation} \langle H_{i}\rangle=\left( 0\; v_{i}/\sqrt{2}\right)^T,\; i= 1,2,3; \qquad \langle S\rangle=v_{s}/\sqrt{2}. \label{eq:VEV} \end{equation} The kinetic terms for the scalars are \[ \mathcal{L}_{H}=\sum_{i=1}^{3}\left(D^{\mu}H_{i}\right)^{\dagger}\left(D_{\mu}H_{i}\right)+\left(D^{\mu}S\right)^{\dagger}\left(D_{\mu}S\right), \] where $D_\mu$ is the covariant derivative. From this Lagrangian, the $W^\pm$ mass can be simply read off as $ g_{2}\sqrt{v_{1}^{2}+v_{2}^{2}+v_{3}^{2}}/2$. The neutral gauge bosons, on the other hand, are generally mixed, but it is possible to make the $Z'$ heavy when $v_s \gg v_i$, such that experimental constraints from $Z-Z'$ mixing can be safely evaded, since the mixing is proportional to $v^2_i/v^2_s$; see Ref.~\cite{Langacker:2008yv} for a general review. The interaction $\bar{\psi}\psi Z'$ can be obtained from $gZ_{\mu}^{'}J_{Z^{'}}^{\mu}$, where $g$ is the gauge coupling constant of $U(1)'$ and the current $J_{Z^{'}}^{\mu}$ in the gauge eigenstates is given by \begin{equation} J_{Z^{'}}^{\mu}=\sum_{\psi}\sum_{i=1}^{3}\bar{\psi}_{i}\gamma^{\mu}\left[\epsilon_{i}^{\psi_{L}}P_{L}+\epsilon_{i}^{\psi_{R}}P_{R}\right]\psi_{i}\;,\;\psi=u, d, e, \nu. \end{equation} The $\epsilon_{i}^{\psi_{L/R}}$ above are the $U(1)'$ charges $z_{\psi_i}$ of the fermions $\psi^{L/R}_i$. 
Rotating the fermion fields with unitary transformations such that their mass matrices are diagonalized, we get \begin{eqnarray}\label{eq:f_rotation} \psi_{R}^{i} & = & \left(V_{\Psi_{R}}\right)_{ij}\Psi_{R}^{j},\; \psi_{L}^{i}=\left(V_{\Psi_{L}}\right)_{ij}\Psi_{L}^{j}, \end{eqnarray} where $\Psi=U,D, \bm{e}, \bm{\nu}$ are the mass eigenstates. The CKM matrix is given by $ V_{\text{CKM}}=V_{U_{L}}^{\dagger}V_{D_{L}}$ and the neutrino mixing matrix by $V_{\text{PMNS}}=V_{\bm{e}_{L}}^{\dagger}V_{\bm{\nu}_{L}}$. The rotation of the fermion fields in Eq.~(\ref{eq:f_rotation}) leads to \begin{equation}\label{eq:current} J_{Z^{'}}^{\mu}=\sum_{\Psi=(U,D, \bm{e}, \bm{\nu})}\sum_{i,j=1}^{3}\bar{\Psi}_{i}\gamma^{\mu}\left[ \left(V_{\Psi_{L}}^{\dagger}\epsilon^{\psi}V_{\Psi_{L}}\right)_{ij}P_{L}+\left(V_{\Psi_{R}}^{\dagger}\epsilon^{\psi}V_{\Psi_{R}}\right)_{ij}P_{R}\right]\Psi_{j}. \end{equation} We have used $ \epsilon^{\psi}\equiv \epsilon^{\psi_{L}} = \epsilon^{\psi_{R}}$, since we are considering the vector-like charge assignment. One immediately notices that in general $V^{\dagger}\epsilon V\not\propto I$ if $\epsilon\not\propto I$, i.e., the gauge interactions are family non-universal. In our previous examples we have $ \epsilon^{\psi}\propto\mathrm{diag}\left(1,1,-2\right)$ or $\mathrm{diag}\left(0,1,-1\right)$, and we expect flavor-changing effects to arise. Since only $V_{\text{CKM}}$ and $V_{\text{PMNS}}$ are experimentally measured, the individual matrices $V_{\Psi_{L,R}}$ are unknown. Thus the resulting products $V_{\Psi_{L,R}}^{\dagger}\epsilon^{\psi}V_{\Psi_{L,R}}$ are also unknown. \section{Phenomenologies and anomalies in B-meson decays}\label{sec:pheno} In this section, we discuss how the above framework can address the recent anomalies in B physics. 
Since left-handed fermions have the same charges as the right-handed ones, we can reparametrize \begin{equation} \epsilon^{\psi}= z_{\psi_1}I + \mathrm{diag}\left(0,z_{\psi_2}-z_{\psi_1},z_{\psi_3}-z_{\psi_1}\right)\equiv z_{\psi_1}I + \delta \epsilon^{\psi}, \end{equation} where $z_{U}=(a,b,c), z_{L}=(d,e,f)$, and \begin{equation} B^{\psi_{L,R}}_{ij}\equiv \left(V_{\psi_{L,R}}^{\dagger}\epsilon^{\psi}V_{\psi_{L,R}}\right)_{ij}=z_{\psi_1}\delta_{ij}+(V_{\psi_{L,R}}^{\dagger} \delta \epsilon^{\psi}V_{\psi_{L,R}})_{ij}\equiv z_{\psi_1}\delta_{ij} + \delta B^{\psi_{L,R}}_{ij}. \end{equation} Flavor-changing processes can occur when $\delta \epsilon^{\psi}\neq 0$ or $\delta B^{\psi_{L,R}}_{ij}\neq 0$. Note that the elements of the matrix $\delta B^{\psi_{L,R}}$ are not necessarily smaller than $ z_{\psi_1}$ in a general setup, since $z_{\psi_1}$ can be zero if the fermions of the first generation are $U(1)'$ singlets. To illustrate how this affects B-meson decays, we consider the following anomaly-free charge assignment as an example, \begin{equation}\label{eq:example} z_{U}=(0,0,1), z_{L}=(0, q_\mu, -3-q_\mu). \end{equation} This assignment can be expanded in the bases of Eq.~(\ref{eq:bases}), \begin{equation} (B-L)-(B_{u}-B_{c})-2(B_{c}-B_{t})+(L_{e}-L_{\mu})+(q_\mu+2)(L_{\mu}-L_{\tau}), \end{equation} which is an instructive example in that it involves all five anomaly-free bases. If $q_\mu=-3/2$, the lepton sector has an $L_{\mu}+L_{\tau}$-type symmetry. If $|q_\mu|\ll 1$, only the third generation is effectively $U(1)'$-charged. We emphasize again that one is free to modify the above assignment by adding any linear combination of other anomaly-free solutions. For example, we could use $z'_{U}=(1,1,-1)$, which is simply the sum of the above quark charges and the $(1,1,-2)$ assignment mentioned earlier. However, these two models give different signal strengths in experiments, such as LHC dijet events, and are therefore subject to different constraints. 
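The expansion of the example charge assignment in the five anomaly-free bases can be checked mechanically. The sketch below assumes a hypothetical normalization in which $B_i$ assigns $1/3$ to each quark of generation $i$ and $L_i$ assigns $1$ to the leptons of generation $i$, chosen to be consistent with the expansion quoted above:

```python
from fractions import Fraction as F

def charges(qmu):
    """Reassemble (z_U, z_L) from the five anomaly-free bases with the
    coefficients quoted in the text:
    (B-L) - (B_u-B_c) - 2(B_c-B_t) + (L_e-L_mu) + (q_mu+2)(L_mu-L_tau).
    Normalization assumption: B_i gives 1/3 per quark of generation i,
    L_i gives 1 per lepton of generation i."""
    BmL  = ([F(1, 3)] * 3, [F(-1)] * 3)          # B - L
    BuBc = ([F(1, 3), F(-1, 3), 0], [0, 0, 0])   # B_u - B_c
    BcBt = ([0, F(1, 3), F(-1, 3)], [0, 0, 0])   # B_c - B_t
    LeLm = ([0, 0, 0], [1, -1, 0])               # L_e - L_mu
    LmLt = ([0, 0, 0], [0, 1, -1])               # L_mu - L_tau
    coeff = [1, -1, -2, 1, qmu + 2]
    bases = [BmL, BuBc, BcBt, LeLm, LmLt]
    zU = [sum(c * b[0][i] for c, b in zip(coeff, bases)) for i in range(3)]
    zL = [sum(c * b[1][i] for c, b in zip(coeff, bases)) for i in range(3)]
    return zU, zL

qmu = F(5)            # the decomposition holds identically in q_mu
zU, zL = charges(qmu)
assert zU == [0, 0, 1]
assert zL == [0, qmu, -3 - qmu]
```

The assertions confirm that the linear combination indeed reproduces $z_{U}=(0,0,1)$ and $z_{L}=(0,q_\mu,-3-q_\mu)$ for any $q_\mu$.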
The $b\rightarrow s$ transitions are usually analyzed in terms of the following effective Hamiltonian \begin{equation} \mathcal{H}_{\mathrm{eff}} = - \frac{4G_F}{\sqrt{2}} V_{tb} V_{ts}^* \frac{\alpha}{4\pi} \sum_{i}\left(C_i O_i + C_i'O_i'\right) + h.c. \end{equation} Here $V$ is the CKM matrix and $\alpha=1/137$ is the fine-structure constant. Note that the coefficients $C_i$ and $C_i'$ are scale-dependent, governed by the renormalization group equation. They are first calculated at high scales and then run down to a lower scale, which is usually taken to be the bottom-quark mass $m_b$ for decay processes. We list only the operators relevant for our later discussion: \begin{align*} O_9&=(\bar{s}\gamma_\mu P_L b)(\bar{l}\gamma^\mu l), &&O_9'=(\bar{s}\gamma_\mu P_R b)(\bar{l}\gamma^\mu l), \\ O_{10}&=(\bar{s}\gamma_\mu P_L b)(\bar{l}\gamma^\mu \gamma_5 l), &&O_{10}'=(\bar{s}\gamma_\mu P_R b)(\bar{l}\gamma^\mu \gamma_5 l). \end{align*} In general, all of the above operators can be generated. Since the anomalies are closely related to $O_9$, we calculate the coefficient of $O_9$ induced by the $Z'$-mediated new physics, \begin{equation} C_9^{\textrm{NP}}\simeq \frac{g^2 \delta B^{D_{L}}_{sb} \left(B^{\bm{e}_{L}}_{\mu\mu}+B^{\bm{e}_{R}}_{\mu\mu}\right)}{2M^2_{Z'}}\biggl{/}\left[\frac{G_F}{\sqrt{2}} \frac{V_{tb} V_{ts}^*\alpha}{\pi}\right]. \end{equation} To resolve the anomalies, $C_9^{\textrm{NP}}$ should be $\simeq -1.1$~\cite{Capdevila:2017bsm, DAmico:2017mtc}, which can be translated into \begin{equation}\label{eq:para} \frac{M_{Z'}}{g\sqrt{|\delta B^{D_{L}}_{sb} \left(B^{\bm{e}_{L}}_{\mu\mu}+B^{\bm{e}_{R}}_{\mu\mu}\right)|}} \simeq 24\ \mathrm{TeV}. \end{equation} The above formula is applicable to any non-trivial charge assignment. In some cases, it can be simplified further. 
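The $24\ \mathrm{TeV}$ scale in Eq.~(\ref{eq:para}) can be cross-checked numerically by inverting the expression for $C_9^{\textrm{NP}}$. This is only a sketch with assumed standard input values ($G_F$, $\alpha=1/137$ as above, $|V_{tb}V_{ts}^*|\simeq 0.04$):

```python
import math

# Invert C9_NP = [g^2 X / (2 M^2)] / [(G_F/sqrt(2)) |Vtb Vts*| alpha / pi]
# for M/(g sqrt|X|), with X = dB_sb^{D_L} (B^{e_L}_mumu + B^{e_R}_mumu).
GF     = 1.1664e-5     # Fermi constant in GeV^-2 (assumed standard value)
alpha  = 1.0 / 137.0   # fine-structure constant, as used in the text
VtbVts = 0.04          # |V_tb V_ts*|
C9     = 1.1           # |C9_NP| required by the anomaly fits

scale_GeV = math.sqrt(math.pi * math.sqrt(2.0)
                      / (2.0 * GF * alpha * VtbVts * C9))
print(scale_GeV / 1e3)   # ~ 24.4 TeV, consistent with Eq. (eq:para)
```

The result comes out close to the quoted $24\ \mathrm{TeV}$, confirming the order of magnitude of the required new-physics scale.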
For instance, since $B^{\bm{e}_{L/R}}_{\mu\mu}$ are diagonal elements, we expect $B^{\bm{e}_{L}}_{\mu\mu}\sim q_\mu$ if $|q_\mu|\gg|3+q_\mu|$ or if there is no rotation in the charged-lepton sector ($V_{\textrm{PMNS}}=V_{\bm{\nu}_{L}}$), and Eq.~(\ref{eq:para}) can be approximated as \begin{equation}\label{eq:result} \frac{M_{Z'}}{g\sqrt{| q_\mu \delta B^{D_{L}}_{sb}|}} \simeq 35\ \mathrm{TeV}. \end{equation} Now, with the charge assignment of Eq.~(\ref{eq:example}), we explicitly have $\delta B^{D_{L}}_{sb}= (V_{D_{L}}^{\dagger})_{23}(V_{D_{L}})_{33}$. If the CKM matrix comes solely from the rotation of the down quarks, we have $\delta B^{D_{L}}_{sb} =V_{tb} V_{ts}^*$ and \begin{equation}\label{eq:parameter} \frac{M_{Z'}}{g\sqrt{| q_\mu |}} \simeq 7\ \mathrm{TeV}. \end{equation} Other coefficients can be calculated similarly. Also, if $q_\mu \neq -3$, we expect new-physics effects to show up in $B\rightarrow K^{(*)} \tau^+\tau^-$. Since we mainly focus on the $O_9$-related anomalies in B-meson decays to muons, we shall neglect other operators as long as the setup does not violate current limits. For example, we can freely choose $B^{\bm{e}_{L}}_{\mu\mu}=B^{\bm{e}_{R}}_{\mu\mu}$, which results in $C_{10}^{\textrm{NP}} = 0 = C_{10}^{'\textrm{NP}}$. The $Z'$ also mediates $B_s-\bar{B}_s$ mixing in the above scenario, since the operator $(\bar{s}\gamma_\mu P_L b)^2$ is inevitably induced; this actually gives the most stringent limit at the moment. The current bound~\cite{Arnan:2016cpy} can be expressed as \begin{equation}\label{eq:bs} \frac{g^2|\delta B^{D_{L}}_{sb}|^2}{M^2_{Z'}}\lesssim \frac{1}{(300\ \mathrm{TeV})^2}, \textrm{ or } \frac{M_{Z'}}{g}> 12 \ \mathrm{TeV} \textrm{ for }\delta B^{D_{L}}_{sb} =V_{tb} V_{ts}^*\simeq 0.04 . \end{equation} Comparing with Eq.~(\ref{eq:parameter}), we can safely evade this constraint for $| q_\mu |\gtrsim 3$ while resolving the B anomalies at the same time. 
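The origin of the quoted condition $|q_\mu|\gtrsim 3$ is a simple comparison of the two scales above, which can be sketched as:

```python
# Eq. (eq:parameter): the anomaly fit fixes M/g = 7 TeV * sqrt(|q_mu|).
# Eq. (eq:bs): B_s mixing requires M/g > 300 TeV * |dB_sb| = 12 TeV
# for dB_sb = Vtb Vts* ~ 0.04. Demanding the first scale exceed the
# second yields a lower bound on |q_mu|.
anomaly_scale = 7.0            # TeV, from the C9 fit
mixing_bound  = 300.0 * 0.04   # TeV, from B_s - B_s-bar mixing

qmu_min = (mixing_bound / anomaly_scale) ** 2
print(qmu_min)   # ~ 2.9, i.e. |q_mu| >~ 3 as stated in the text
```

This reproduces the rounded threshold $|q_\mu|\gtrsim 3$ quoted above.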
\begin{figure}[t] \includegraphics[width=0.5\textwidth,height=0.46\textwidth]{MzBs} \caption{Contours with $C_9^{\textrm{NP}}\simeq -1.1$ in the $M_{Z'}/g$ and $\delta B^{D_{L}}_{sb}$ plane for $|q_\mu|=1,3,5$, shown by dashed purple, dot-dashed red and dotted blue lines, respectively. The region above the black line is excluded by $B_s-\bar{B}_s$ mixing. \label{fig:MzBs}} \end{figure} In Fig.~\ref{fig:MzBs}, motivated by the B physics anomalies, we plot several contours with $C_9^{\textrm{NP}}\simeq -1.1$ in the $M_{Z'}/g$ and $\delta B^{D_{L}}_{sb}$ plane for $|q_\mu|=1,3$ and $5$. They are shown by the dashed purple, dot-dashed red and dotted blue lines, respectively. The region above the black line is excluded by $B_s-\bar{B}_s$ mixing. As expected from Eq.~(\ref{eq:result}), increasing $|q_\mu|$ allows a larger parameter space. For small $|q_\mu|$, the $Z'$ can then be probed by other means. Since the $Z'$ couples to both quarks and leptons, dilepton and dijet searches for heavy resonances at colliders can probe it. The expected signal strength depends on \begin{equation} \sigma \left(f\bar{f}\rightarrow Z'\right)\times Br\left(Z'\rightarrow f'\bar{f}'\right), \end{equation} where $\sigma$ is the cross section for $Z'$ production, $f$ and $f'$ are SM fermions, and $Br$ denotes the decay branching ratio. For hadron colliders, we convolve the above quantity with the quark parton distribution functions (PDFs) (throughout our calculations, we have used the {\tt MMHT2014}~\cite{Harland-Lang:2014zoa} PDFs). For the quark charge assignment $(0,0,1)$, hadron colliders such as the LHC with energy $\sqrt{s}=13\ \mathrm{TeV}$ have less discovery potential for the $Z'$, since the $Z'$ couples only weakly to the first two generations, through quark mixing, and strongly to the third generation, which has small PDFs. 
A future $100\ \mathrm{TeV}$ hadron collider has better prospects because the production rate is increased thanks to the enhancement of the bottom- and top-quark PDFs. In Fig.~\ref{fig:limits}(a), we give the ratio of $Z'$ production from bottom and top quarks in our model to that from light quarks in a model where the $Z'$ also couples to $u$ and $d$. We have normalized the cross section to a $3\ \mathrm{TeV}$ $Z'$ at the LHC with $\sqrt{s}=13\ \mathrm{TeV}$. As shown, $Z'$ production from the bottom channel is reduced by a factor of $\mathcal{O}(10^3)$ at $\sqrt{s}=13\ \mathrm{TeV}$ and $\mathcal{O}(10^2)$ at $\sqrt{s}=100\ \mathrm{TeV}$. Because of this, the limits from hadron collider searches are relaxed dramatically and $M_{Z'}\lesssim 1\ \mathrm{TeV}$ would still be allowed, which can be inferred from Fig.~\ref{fig:limits}(b), where we show dilepton searches for a Sequential SM (SSM) $Z'$ (the SSM $Z'$ is identical to the SM $Z$ except for its mass) as the dashed black line. The region above the solid red curve is excluded by dilepton searches~\cite{Aaboud:2016cth}. However, if the signal strength is reduced by a factor of $10$ or $100$, the exclusion limit shifts to $\simeq 2.4 \ \mathrm{TeV}$ (dashed blue) and $1.2\ \mathrm{TeV}$ (dot-dashed purple), respectively. Since in the model discussed here the cross section is lowered by $\mathcal{O}(10^3)$, taking the branching ratio into account gives $M_{Z'}\gtrsim \mathcal{O}(600\ \mathrm{GeV})$, with some dependence on $q_\mu$. Similarly, constraints from dijets are also weakened. In comparison, the charge assignment $(1,1,-1)$, which is the linear combination of $(1,1,-2)$ in Section~\ref{sec:model} and $(0,0,1)$, gives different results. In such a case, the $Z'$ can couple to light quarks and the production cross section can be sizable. 
If $g$ is of the same order as the weak coupling, the limits would be similar to those for the SSM $Z'$: $M_{Z'}\gtrsim 3.4\ \mathrm{TeV}$ from dilepton searches at the LHC with $\sqrt{s}=13\ \mathrm{TeV}$~\cite{Aaboud:2016cth, Khachatryan:2016zqb}, and $M_{Z'}\gtrsim 3.4\ \mathrm{TeV}$ from the dijet channel~\cite{Aaboud:2017yvp} for $g=0.5$. These limits may shift somewhat, since the decay branching ratios of our $Z'$ differ from those of the $Z'_{SSM}$. In general, direct searches at colliders are complementary to the bound from $B_s-\bar{B}_s$ mixing, Eq.~(\ref{eq:bs}). \begin{figure}[t] \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth,height=0.9\linewidth]{ratio.pdf} \caption{} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth,height=0.89\linewidth]{Limits.pdf} \caption{} \end{subfigure} \caption{(a) Various ratios of the production cross section for $Z'$ as functions of energy $\sqrt{s}$, normalized by the cross section when $Z'$ couples to light quarks $u$ and $d$ at the LHC with $\sqrt{s}=13\ \mathrm{TeV}$. The solid black curve shows the ratio of the contribution from $u$ and $d$ to that from $b$ and $t$. The dotted vertical line indicates $\sqrt{s}=13\ \mathrm{TeV}$. (b) The mass limit for the SSM $Z'$ (dashed black line) from dilepton searches shifts when the signal strength is reduced by factors of $10$ (dashed blue) and $100$ (dot-dashed purple). \label{fig:limits}} \end{figure} \section{Conclusions}\label{sec:concl} Motivated by the anomalies in semi-leptonic B-meson decays, we have discussed an explanation in models with a general family-dependent $U(1)'$ gauge symmetry. We have presented a systematic investigation of how to consistently assign charges to the SM chiral fermions with three right-handed neutrinos. If the fermions in the standard model are vector-like under this new $U(1)'$ symmetry, their charges in Table~\ref{tab:charges} must satisfy the condition given in Eq.~(\ref{eq:abc}), and this condition is also sufficient. 
Generally, infinitely many anomaly-free family non-universal models exist, as linear combinations of five independent anomaly-free bases. If both the bottom quark and the muon couple to this new $U(1)'$, the anomalies in B-meson decays can typically be explained. We have also discussed several other experimental probes of such models, including $Z'$-mediated effects in $B_s-\bar{B}_s$ mixing and dilepton and dijet searches for heavy resonances at colliders. Some viable parameter space has already been probed by these searches. Future searches at colliders and in other B-meson decay modes should provide more powerful information on the physical parameters and test different scenarios for the $Z'$ charge assignment. \vspace*{0.5cm} \centerline{{\bf Acknowledgement}} YT is grateful to Koichi Hamaguchi, Chengcheng Han and Kazunori Nakayama for enlightening conversations. The work of YT was supported by the Grant-in-Aid for Innovative Areas No.16H06490. 
\section{Introduction} Tests of quantum mechanics are of increasing interest in recent years, in particular, the optical tests of quantum mechanics carried out on systems of two correlated photons. Such systems---showing Einstein--Podolsky--Rosen (EPR) correlations---are suitable to discriminate between quantum mechanics and any local realistic (hidden variable) theory via Bell inequalities \cite{Bell} (see, e.g., Ref. \cite{Bertlmann} for a quick introduction into the field). All recent experiments using laser beams confirm quantum mechanics in an impressive way (see, e.g., Refs. \cite{Aspect, Kwiat}) and they teach us that under certain circumstances quantum systems extend over macroscopic scales. We find it interesting and desirable to perform tests of EPR correlations with massive particles as well. Analogously to the entangled photons, one can create at $B$ factories EPR-correlated $B^0 \bar B^0$\ pairs as decay products of the upsilon $\Upsilon(4S)$ resonance (see, e.g., Refs. \cite{Datta,Kayser}). More precisely, $B^0_d \bar B^0_d$ pairs are produced since $\Upsilon(4S)$ is not heavy enough to decay into $B^0_s \bar B^0_s$. We drop the index $d$ for convenience. $B$ mesons have a lifetime of the order of a picosecond. If a $B^0 \bar B^0$\ pair is produced by the decay of $\Upsilon(4S)$ there is very little kinetic energy left per $B$ meson, namely roughly 10 MeV. Multiplying the corresponding velocity $v$ of such a $B$ meson by its lifetime one obtains $v \tau_{B^0} \approx 3 \times 10^{-2}$ mm. This shows that on average the separation of the decaying $B$ mesons originating in $\Upsilon(4S)$ is macroscopic. The $B^0 \bar B^0$\ system as the decay product of $\Upsilon(4S)$ is a superposition of two states because the $B^0 \bar B^0$\ state inherits the charge conjugation quantum number $C=-1$ of the $\Upsilon(4S)$. This system therefore offers the possibility to test, within particle physics, quantum-mechanical interference over macroscopic distances. 
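The quoted flight distance $v\,\tau_{B^0}\approx 3\times 10^{-2}$ mm follows from a non-relativistic estimate. A rough numerical sketch, with assumed standard values for the $B^0$ mass and lifetime:

```python
import math

# Non-relativistic estimate: v/c = sqrt(2 E_kin / m c^2), flight = v * tau.
mB   = 5.279    # GeV, B^0 mass (assumed PDG-era value)
Ekin = 0.010    # GeV, ~10 MeV kinetic energy per B, as stated in the text
tau  = 1.5e-12  # s, B^0 lifetime of order a picosecond
c    = 3.0e8    # m/s

v_over_c = math.sqrt(2.0 * Ekin / mB)
flight_mm = v_over_c * c * tau * 1e3
print(flight_mm)   # ~ 0.03 mm, i.e. 3e-2 mm as quoted
```

The estimate reproduces the macroscopic (on the scale of interatomic distances) separation quoted in the text.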
Similar tests involving two-kaon systems have been proposed in the past in Refs. \cite{Six,Selleri} and recently for Da$\Phi$ne in Ref. \cite{ebe}. To realize the above idea we consider the ratio $R=\,$(\# like-sign dilepton events)/ (\#~opposite-sign dilepton events) of lepton pairs generated in the decay chain $\Upsilon(4S) \rightarrow B^0 \bar B^0 \rightarrow \ell^\pm \ell^\pm +$ anything. In order to discriminate between quantum mechanics and local realistic theories we introduce a {\it decoherence parameter} $\zeta$ \cite{ebe} such that the interference term present in the quantum-mechanical calculation of $R$ is multiplied by a factor $1~-~\zeta$, where $\zeta$ parameterizes deviations from quantum mechanics. $\zeta$ is called the decoherence parameter because at $\zeta = 1$ the interference term vanishes completely. It turns out that, including this modification, $R$ is a function of $\Delta m/\Gamma$, $\Delta \Gamma/2 \Gamma$ and $\zeta$, with $\Delta m$, $\Delta \Gamma$ and $\Gamma$ being the mass difference, decay width difference and average decay width, respectively, of the heavy and light neutral $B$ mass eigenstates. There is also a parameter involved characterizing CP violation in $B^0 \bar B^0$\ mixing. We will argue below that this parameter can be set equal to one (no CP violation) and that taking $\Delta \Gamma = 0$ is sufficient for our purpose. The main idea of this paper is to compare the experimental value $R_{\mbox{\scriptsize exp}}$ of $R$ measured at the $\Upsilon(4S)$ with the theoretical expression $R(\Delta m/\Gamma, \zeta)$. Taking $\Delta m$ from independent experiments which study the time dependence of $B^0 \bar B^0$\ mixing, and thus interference effects of {\it single} $B$ states, the relation $R_{\mbox{\scriptsize exp}}= R(\Delta m/\Gamma, \zeta)$ allows us to obtain information on $\zeta$ and thus to test the long-range interference effects of quantum mechanics in the $B^0 \bar B^0$\ system. 
\section{The $B^0 \bar B^0$\ system} To begin with, we discuss the quantum mechanics of the $B^0 \bar B^0$\ system. Phenomenologically, there are the two independent amplitudes \begin{equation} {\cal A}(B^0 \rightarrow f) \equiv A \quad \mbox{and} \quad {\cal A}(\bar B^0 \rightarrow f) \equiv B \end{equation} which enter into the description of the decays of the neutral $B$ mesons into an arbitrary final state $f$. The mass eigenstates of the neutral $B$ mesons are given by \begin{eqnarray} | B_H \rangle & = & p |B^0 \rangle + q |\bar B^0 \rangle\, , \nonumber \\ | B_L \rangle & = & p |B^0 \rangle - q |\bar B^0 \rangle\, , \end{eqnarray} with $ |p|^2 + |q|^2 = 1 $ and \begin{equation} \label{q/p} \frac{q}{p} = \frac{\Delta m - \frac{i}{2} \Delta \Gamma}{2 (M_{12} - \frac{i}{2} \Gamma_{12})} = \frac{2 (M_{12}^* - \frac{i}{2} \Gamma_{12}^*)}{\Delta m - \frac{i}{2} \Delta \Gamma} = \sqrt{\frac{ M_{12}^* - \frac{i}{2} \Gamma_{12}^* }{ M_{12} - \frac{i}{2} \Gamma_{12}}}\, , \end{equation} where $\Delta m = m_H - m_L > 0$ ($H$=heavy, $L$=light), $\Delta \Gamma = \Gamma_H - \Gamma_L$ and $M_{12} - \frac{i}{2} \Gamma_{12}$ is the off-diagonal matrix element in the effective time evolution in the $B^0 \bar B^0$\ space \cite{gel}. The positivity of $\Delta m$ fixes the sign of the square root in Eq.~(\ref{q/p}). The $B^0 \bar B^0$\ pair produced in the decay of $\Upsilon(4S)$ is in the state \begin{equation} \label{psi} \Psi(t=0) = \frac{1}{\sqrt{2}} \left( |B^0 \rangle \otimes |\bar B^0 \rangle -|\bar B^0 \rangle \otimes |B^0 \rangle \right) \end{equation} with charge conjugation quantum number $C=-1$ because the $\Upsilon(4S)$ has quantum numbers $J^{PC}=1^{--}$ and its decay into $B^0 \bar B^0$\ proceeds via strong interactions. 
The subsequent time evolution of (\ref{psi}) is given by \begin{eqnarray} |B^0 (t) \rangle & = & g_+(t) |B^0 \rangle + \frac{q}{p} g_-(t) |\bar B^0 \rangle \, , \nonumber\\ |\bar B^0 (t) \rangle & = & \frac{p}{q} g_-(t) |B^0 \rangle + g_+(t) |\bar B^0 \rangle \end{eqnarray} with \begin{equation} g_\pm (t) = \frac{1}{2} \mbox{e}^{-i(m - \frac{i}{2}\Gamma)t} \left[ \mbox{e}^{-\frac{i}{2}(\Delta m - \frac{i}{2}\Delta \Gamma)t} \pm \mbox{e}^{\frac{i}{2}(\Delta m - \frac{i}{2}\Delta \Gamma)t} \right] \end{equation} and \begin{equation} m = \frac{1}{2} (m_H + m_L)\, , \; \Gamma = \frac{1}{2} (\Gamma_H + \Gamma_L)\, . \end{equation} After having introduced the basic formalism we now come to the point where we modify the result of ordinary quantum mechanics and subject this modification to a comparison with experimental results. The class of observables we are interested in is the probability that $\Psi$ decays into final states $f_1$ and $f_2$ with momenta $\vec p$ and $-\vec p$, respectively, in its rest frame. This probability is calculated by the integral \cite{car} \begin{eqnarray} \lefteqn{N(f_1, f_2) =} \nonumber \\ & & \frac{1}{2} \int_0^\infty dt \int_0^\infty dt' \left\{ \left| \langle f_1 | B^0(t) \rangle \right|^2 \left| \langle f_2 | \bar B^0(t') \rangle \right|^2 + \left| \langle f_1 | \bar B^0(t) \rangle \right|^2 \left| \langle f_2 | B^0(t') \rangle \right|^2 \right. \nonumber \\ & & -2 \, (1-\zeta) \, \mbox{Re} \, \Big[ \langle f_1 | B^0(t) \rangle^* \langle f_2 | \bar B^0(t') \rangle^* \langle f_1 | \bar B^0(t) \rangle \langle f_2 | B^0(t') \rangle \Big] \bigg\}. \label{N12} \end{eqnarray} The last term in Eq. (\ref{N12}) is the usual quantum-mechanical interference term, as it results from the two summands of the wave function (\ref{psi}), modified by a factor $1-\zeta$ \cite{ebe}. 
In the following we will rather arbitrarily assume that $0 \le \zeta \le 1$ to incorporate quantum mechanics with $\zeta = 0$ at one end of the interval and no interference corresponding to $\zeta = 1$ at the other end. Our aim is to test which range of $\zeta$ is experimentally allowed if we use information on semileptonic decays of the $B^0 \bar B^0$\ system. To apply Eq. (\ref{N12}) we have to perform the integrals and we arrive at the general formula \begin{eqnarray} \label{N} \lefteqn{N(f_1, f_2) =} \nonumber \\ & & \frac{1}{2} \left\{ I_1 \bigg|A_1B_2-B_1A_2\bigg|^2 + I_2 \left|\frac{p}{q}A_1A_2-\frac{q}{p}B_1B_2\right|^2 \right\} + \nonumber \\ & & + \zeta \, \mbox{Re} \, \left\{ \left( I_+ A_1^*B_1 + I_- \left(\frac{q}{p}\right)^* \frac{p}{q} B_1^*A_1 + I_{+-} \frac{p}{q} |A_1|^2 + I_{-+} \left(\frac{q}{p}\right)^* |B_1|^2 \right) \right. \nonumber \\ & & \left. \cdot \left( I_+ B_2^*A_2 + I_- \left(\frac{p}{q}\right)^* \frac{q}{p} A_2^*B_2 + I_{+-} \frac{q}{p} |B_2|^2 + I_{-+} \left(\frac{p}{q}\right)^* |A_2|^2 \right) \right\} \end{eqnarray} with \begin{eqnarray} I_1 & = & \int_0^\infty dt \int_0^\infty dt' \, | g_+(t)g_+(t') - g_-(t)g_-(t') |^2 = I_+^2 + I_-^2 - 2 \, \mbox{Re} \, (I_{+-})^2 = \frac{1}{\Gamma} I_+\, ,\nonumber \\ I_2 & = & \int_0^\infty dt \int_0^\infty dt' \, | g_+(t)g_-(t') - g_-(t)g_+(t') |^2 = 2 I_+ I_- - 2 | I_{+-} |^2 = \frac{1}{\Gamma} I_- \, , \end{eqnarray} \begin{eqnarray} I_\pm & = & \int_0^\infty dt \, | g_\pm(t) |^2 = \frac{1}{2\Gamma} \left( \frac{1}{1-y^2} \pm \frac{1}{1+x^2} \right) \, ,\nonumber \\ I_{+-} & = & \int_0^\infty dt \, g_+(t)^* g_-(t) = -\frac{1}{2\Gamma} \left( \frac{y}{1-y^2} + i\frac{x}{1+x^2} \right) \, , \end{eqnarray} and $x$ and $y$ are defined as \begin{equation} x = \frac{\Delta m}{\Gamma} \quad \mbox{and} \quad y = \frac{\Delta \Gamma}{2 \Gamma}. \end{equation} Furthermore, the relation $I_{-+}=(I_{+-})^*$ is valid. 
In principle, measurements of $N(f_1, f_2)$ for any $f_1$, $f_2$ could be used to obtain information on $x$ (see Ref. \cite{ARGUS0}), $y$ and $\zeta$. In this case one would have to know the quantities $|A_1B_2-B_1A_2|$, $|\frac{p}{q}A_1A_2-\frac{q}{p}B_1B_2|$, etc. which, in general, require additional experimental information. However, for semileptonic decays the situation is very simple because in lowest order in weak interactions only the tree-level $W$ exchange graphs are responsible for such decays. In addition, since the quark content of $B^0 \bar B^0$\ is given by $B^0 = (\bar b d)$ and $\bar B^0 = (b \bar d)$ the lepton $\ell^+$ in the final state tags $B^0$ whereas $\ell^-$ tags $\bar B^0$. Therefore, with $f_+ \equiv X \ell^+ \nu_\ell$ and $f_- \equiv \bar X \ell^- \bar \nu_\ell$ and the labels $+$, $-$ pertaining to $f_+$, $f_-$, respectively, we have \begin{equation} \label{semileptamp} |A_+| = |B_-| \quad \mbox{and} \quad B_+ = A_- = 0. \end{equation} In these final states $X$ denotes an arbitrary kinematically allowed hadronic state and $\bar X$ its charge-conjugate counterpart. Defining $N_{++} \equiv N(f_+, f_+)$, etc., and using Eq. (\ref{semileptamp}) we obtain the following very simple expression for $N(f_1, f_2)$, Eq. (\ref{N}), in the case of semileptonic decays: \begin{eqnarray} \label{N++} N_{++} & = & \frac{1}{2} |A_+|^4 \left| \frac{p}{q} \right|^2 ( I_2 + 2 \zeta |I_{+-}|^2 ), \\ \label{N--} N_{--} & = & \frac{1}{2} |B_-|^4 \left| \frac{q}{p} \right|^2 ( I_2 + 2 \zeta |I_{+-}|^2 ), \\ \label{N+-} N_{+-} = N_{-+} & = & \frac{1}{2} |A_+|^2 |B_-|^2 ( I_1 + 2 \zeta \, \mbox{Re} \, (I_{+-})^2 ). 
\end{eqnarray} Defining the ratio of like-sign dilepton events to opposite-sign dilepton events \cite{pai,oku} \begin{equation} R \equiv \frac{N_{++} + N_{--}}{N_{+-} + N_{-+}} \end{equation} the amplitudes cancel and we find $R$ as a function of $|p/q|$, $x$, $y$ and $\zeta$: \begin{equation} \label{R} R = \frac{1}{2} \left( \left| \frac{p}{q} \right|^2 + \left| \frac{q}{p} \right|^2 \right) \frac{x^2 + y^2 + \zeta \left[ y^2 \frac{1+x^2}{1-y^2} + x^2 \frac{1-y^2}{1+x^2} \right] }% {2+x^2-y^2 + \zeta \left[ y^2 \frac{1+x^2}{1-y^2} - x^2 \frac{1-y^2}{1+x^2} \right]}. \end{equation} It is well known that a deviation of $|p/q|$ from 1 is a signal for CP violation in $B^0 \bar B^0$\ mixing. A suitable measure for $|p/q|$ and CP violation in mixing is thus given by \cite{oku} \begin{equation} \label{CP} A_{\mbox{\scriptsize CP}} \equiv \frac{N_{++} - N_{--}}{N_{++} + N_{--}} = \frac{|\frac{p}{q}|^2 - |\frac{q}{p}|^2}{|\frac{p}{q}|^2 + |\frac{q}{p}|^2}. \end{equation} To derive this formula, Eqs. (\ref{N++}), (\ref{N--}) and (\ref{N+-}) have been used, which correspond to odd relative angular momentum of the $B^0 \bar B^0$\ pair. It is easy to show with the methods expounded here that the same formula (\ref{CP}) is valid for even relative angular momentum. Moreover, Eq. (\ref{CP}) is also valid for any statistical mixture of odd and even relative angular momenta \cite{gri} and does not depend on the parameter $\zeta$, which could even be different in the odd and even cases. This shows that it is consistent to take any measurement of $A_{\mbox{\scriptsize CP}}$ and use it as information on $|p/q|$ in $R$ (\ref{R}). A recent measurement by the CDF Collaboration \cite{CDF} gives $A_{\mbox{\scriptsize CP}} = (2.4 \pm 6.3 \,(\mbox{stat}) \pm 3.3 \,(\mbox{sys})) \times 10^{-2}$. 
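The formula (\ref{R}) can be transcribed directly into a short numerical sketch; a useful check is that for $\zeta=0$, $y=0$ and $|p/q|=1$ it reduces to the familiar mixing result $R=x^2/(2+x^2)$:

```python
def R(x, y, zeta, pq=1.0):
    """Like-sign to opposite-sign dilepton ratio of Eq. (R),
    with pq = |p/q| (direct transcription, no extra physics input)."""
    prefac = 0.5 * (pq**2 + 1.0 / pq**2)
    num = x**2 + y**2 + zeta * (y**2 * (1 + x**2) / (1 - y**2)
                                + x**2 * (1 - y**2) / (1 + x**2))
    den = 2 + x**2 - y**2 + zeta * (y**2 * (1 + x**2) / (1 - y**2)
                                    - x**2 * (1 - y**2) / (1 + x**2))
    return prefac * num / den

x = 0.74
# Quantum-mechanical limit (zeta = 0, y = 0): R = x^2 / (2 + x^2)
assert abs(R(x, 0.0, 0.0) - x**2 / (2 + x**2)) < 1e-12
# Switching off the interference (zeta = 1) increases R
assert R(x, 0.0, 1.0) > R(x, 0.0, 0.0)
```

The second assertion illustrates why an experimental value of $R$ close to the quantum-mechanical prediction constrains $\zeta$ from above.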
The $|p/q|$-dependent factor in front of $R$ is expressed in terms of $A_{\mbox{\scriptsize CP}}$ as \begin{equation} \label{factor} \frac{1}{2} \left( \left| \frac{p}{q} \right|^2 + \left| \frac{q}{p} \right|^2 \right) = (1-A_{\mbox{\scriptsize CP}}^2)^{-1/2} \approx 1 + \frac{1}{2} A_{\mbox{\scriptsize CP}}^2. \end{equation} With the above value of $A_{\mbox{\scriptsize CP}}$ the quantity (\ref{factor}) differs from 1 by less than a percent. In view of the experimental errors associated with $R$ and $x$ we will simply set (\ref{factor}) equal to 1 in the rest of this paper. \section{Discussion of the experimental data} Having disposed of $|p/q|$, there remain three variables in $R$, namely $x=\Delta m/\Gamma$, $y$ and $\zeta$. To test the quantum-mechanical interference term, i.e., to get information on $\zeta$, we want to take $x$ from measurements of the time dependence of $B^0 \bar B^0$\ mixing \cite{ALEPH,DELPHI,L3,OPAL} and compare $R$ with $R_{\mbox{\scriptsize exp}}$ measured at the $\Upsilon (4S)$ \cite{ARGUS,CLEO}. Concretely, we apply the following procedure. We take the values of $\Delta m$ from the results of the LEP experiments ALEPH \cite{ALEPH}, DELPHI \cite{DELPHI}, L3 \cite{L3} and OPAL \cite{OPAL} which are $\Delta m = 0.436 \pm 0.033 \:\hbar/$ps, $\Delta m = 0.531 \begin{array}{l} +0.050 \\ -0.046 \end{array} \pm 0.078 \:\hbar$/ps, $\Delta m = 0.496 \begin{array}{l} +0.055\\ -0.051 \end{array} \pm 0.043 \:\hbar$/ps and $\Delta m = 0.548 \pm 0.050 \begin{array}{l} +0.023 \\ -0.019 \end{array}\hbar/$ps, respectively. The first error is the statistical and the second the systematic. For each experiment, we add the statistical and systematic errors in quadrature (selecting the larger value where positive and negative errors differ) and use the law of combination of errors to obtain the combined value of $\Delta m$. 
After division by $\tau_{B^0} = (1.56 \pm 0.06)$ ps \cite{RPP} we arrive at the final value $\bar x = 0.74 \pm 0.05$, which will be used in the figures. As for $R$ we take the experimental input $R_{\mbox{\scriptsize exp}} = 0.194 \pm 0.062 \pm 0.054$ obtained by ARGUS \cite{ARGUS} and $R_{\mbox{\scriptsize exp}} = 0.187 \pm 0.022 \pm 0.025 \begin{array}{l} +0.040 \\ -0.030 \end{array}$, the result of the CLEO Collaboration \cite{CLEO}, where the third error reflects a $\pm 15$ \% uncertainty in the assumption that charged and neutral $B$ pairs contribute equally to dilepton events. Performing the same steps as for $\Delta m$ we obtain $\bar R_{\mbox{\scriptsize exp}} = 0.189 \pm 0.044$. It remains to discuss $y$ in the context of the determination of the decoherence parameter. The Standard Model predicts a very small difference between the decay widths of the heavy and the light neutral $B$ meson such that $|y|/x \,\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}\, 10^{-2}$ (see, e.g., Ref. \cite{y}). This alone would already constitute a strong motivation for putting $y=0$ in $R$ (\ref{R}). Furthermore, when plotting $R$ as a function of the decoherence parameter $\zeta$ $(0 \le \zeta \le 1)$ and comparing the curves with $y=0$ and $y=0.1$, we find practically no difference. Last but not least, studying the ratio $R$ as a function of $y$ and $\zeta$ numerically reveals that with increasing $y^2$ the restriction on $\zeta$ gets stronger. Therefore, for our purpose of getting information on the quantum-mechanical interference term it is sufficient to study $R$ as a function of $\zeta$ with $y=0$. This is done in Figs. 1 and 2. The three curves in Fig. 1 correspond to $R$ with $y=0$ and the three $x$ values $\bar x - \Delta \bar x$ (lower curve), $\bar x$ (middle curve) and $\bar x + \Delta \bar x$ (upper curve). 
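The averaging procedure described above can be reproduced in a few lines. This is a sketch of our reading of the procedure (quadrature combination of errors per experiment, then an inverse-variance weighted mean, then linear error propagation through the division by $\tau_{B^0}$):

```python
import math

def combine(values):
    """Inverse-variance weighted mean. Each entry is (value, err1, err2, ...);
    the errors of one measurement are first added in quadrature, with the
    larger of asymmetric errors already selected by the caller."""
    pairs = [(v, math.sqrt(sum(e**2 for e in errs))) for v, *errs in values]
    w = [1.0 / s**2 for _, s in pairs]
    mean = sum(wi * v for wi, (v, _) in zip(w, pairs)) / sum(w)
    return mean, 1.0 / math.sqrt(sum(w))

# Delta m in hbar/ps: ALEPH, DELPHI, L3, OPAL (larger asymmetric errors kept)
dm, s_dm = combine([(0.436, 0.033), (0.531, 0.050, 0.078),
                    (0.496, 0.055, 0.043), (0.548, 0.050, 0.023)])
tau, s_tau = 1.56, 0.06   # ps
x = dm * tau
s_x = math.sqrt((tau * s_dm)**2 + (dm * s_tau)**2)
print(round(x, 2), round(s_x, 2))       # 0.74 0.05

# R_exp: ARGUS (two errors) and CLEO (three errors, larger third error kept)
Rexp, s_R = combine([(0.194, 0.062, 0.054), (0.187, 0.022, 0.025, 0.040)])
print(round(Rexp, 3), round(s_R, 3))    # 0.189 0.044
```

Both quoted combined values, $\bar x = 0.74 \pm 0.05$ and $\bar R_{\mbox{\scriptsize exp}} = 0.189 \pm 0.044$, are reproduced.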
The horizontal lines indicate the mean value $\bar R_{\mbox{\scriptsize exp}}$ and $\bar R_{\mbox{\scriptsize exp}} \pm \Delta \bar R_{\mbox{\scriptsize exp}}$. In Fig. 2 we have again plotted $R$ and $R_{\mbox{\scriptsize exp}}$ but the error bands correspond to 1.64 standard deviations or 90 \% CL if the distributions are Gaussian. As a side remark, we want to stress that the method discussed here can also be used to get a bound on $|y|$. For simplicity we assume quantum mechanics to be valid ($\zeta = 0$) and compare $R$ as a function of $y$ with $R_{\mbox{\scriptsize exp}}$. Then we obtain $|y| \le 0.40$ at 90 \% CL. This shows that $y = \Delta \Gamma /2 \Gamma$ and thus the difference in the decay widths of the heavy and light neutral $B$ mesons is only mildly restricted by present data. There is still a large gap between experimental information and the Standard Model prediction for $y$. \section{Conclusions} We observe that the overlap of the allowed areas of $R$ and $R_{\mbox{\scriptsize exp}}$ restricts the decoherence parameter to $\zeta \le 0.26$ in Fig. 1 and $\zeta \le 0.53$ in Fig. 2. This result conforms nicely with quantum mechanics and leaves little room for local realistic theories ($\zeta = 1$). Of course, our statistical analysis is rather crude and the experimental errors of $x$ and $R_{\mbox{\scriptsize exp}}$ are large. Nevertheless, there is a clear sign of long-range interference effects in the $B^0 \bar B^0$\ system, in agreement with quantum mechanics. This is not so surprising in view of the overwhelming success of quantum mechanics. We expect that with the improvement of the experimental errors the bound on $\zeta$ will become much tighter in the future. However, we also notice that the mean value of $R$ at $\zeta=0$ is slightly higher than the mean value of $R_{\mbox{\scriptsize exp}}$. This acts in favour of smaller bounds on $\zeta$ (see figures) and has to be kept in mind when considering their above numerical values. 
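The $1\sigma$ bound $\zeta \le 0.26$ quoted above can be reproduced numerically. The sketch below encodes our reconstruction of the overlap criterion: the lowest theory curve, $R(\bar x - \Delta\bar x, \zeta)$ with $y=0$ and $|p/q|=1$, is required not to exceed the upper edge of the experimental band, $\bar R_{\mbox{\scriptsize exp}} + \Delta\bar R_{\mbox{\scriptsize exp}}$:

```python
def R(x, zeta):
    """Eq. (R) for y = 0 and |p/q| = 1."""
    u = x**2 / (1 + x**2)
    return (x**2 + zeta * u) / (2 + x**2 - zeta * u)

# 1-sigma inputs from the text: x = 0.74 - 0.05, R_exp = 0.189 + 0.044
x_low, R_max = 0.74 - 0.05, 0.189 + 0.044

# Invert R(x_low, zeta) = R_max analytically for zeta
u = x_low**2 / (1 + x_low**2)
zeta_max = (R_max * (2 + x_low**2) - x_low**2) / (u * (1 + R_max))
print(round(zeta_max, 2))   # ~ 0.25, close to the quoted 0.26
```

The small residual difference from $0.26$ is consistent with rounding of the quoted inputs.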
Adding a note of scepticism concerning tests like the one discussed here, we want to remark that changing quantum mechanics at one point, in the present case the two-particle interference term, while assuming its validity in all other domains, e.g., the one-particle interference terms from which $\Delta m$ is extracted, is an arbitrary procedure. However, since no consistent {\it local} theory encompassing quantum mechanics is known, all parameterizations of deviations from quantum mechanics involve a certain amount of arbitrariness. \vspace{12mm}
\section{Introduction} The ab-initio simulation of periodic solids with strong correlations is an important problem in condensed matter physics. While reliable computational methods exist for weakly correlated solids, they tend to be less suitable where the underlying independent-electron approximation fails, such as in systems with d-electrons. Where strong correlations are important, the condensed matter community has historically resorted to the construction of low-energy effective models, such as single- or multi-orbital Hubbard models, in order to describe effective low-lying degrees of freedom. For systems where a treatment of the electronic structure in addition to strong correlation physics is desired, embedding schemes such as the combination of the dynamical mean field theory (DMFT)~\cite{DMFT_infinite_dim_Georges_Kotliar_1992,Hubbard_inf_dim_Vollhardt_Metzner,Georges96} with density functional theory (DFT) electronic structure codes~\cite{LDAplusDMFT_Anisimov1997, Lichtenstein98,Kotliar06} have been developed, combining both approaches. These methods are very successful in their region of applicability. However, they suffer fundamentally from the need to determine free parameters, such as the double counting correction or the values of screened interaction parameters, at the interface between the electronic structure and strong correlation calculations. Diagrammatic perturbation theory provides an alternative route to standard electronic structure methods such as DFT. Perturbative methods are free from adjustable parameters, and solutions of (bare or self-consistent) second-order perturbation theory~\cite{GF2_Alexander16, GF2_Sergei19} and of several variants of Hedin's GW approximation~\cite{G0W0_Pickett84, G0W0_Hybertsen86, GW_Aryasetiawan98, QSGW_Kotani07, scGW_Andrey09} can be obtained for realistic solids. However, due to their perturbative nature, these methods are not able to access the strong correlation regime. 
Nevertheless, the diagrammatic language in which these theories are formulated lends itself ideally to embedding methods, which aim to selectively enhance the solution of a weakly correlated problem with non-perturbative strong-correlation answers in a small but potentially strongly correlated subset of orbitals. Moreover, the Green's function language in which they are formulated allows one to calculate experimentally observable quantities such as the (momentum- and energy-resolved) spectral function, making them ideal candidates for studying condensed matter systems. A combination of extended DMFT (EDMFT)~\cite{EDMFT_Qimiao_2000,PhysRevB.63.115110,PhysRevLett.77.3391} with a perturbative method such as GW leads to a formulation of the GW+EDMFT approach~\cite{GWplusEDMFT_Sun02,PhysRevLett.90.086402,PhysRevB.88.235110, Tomczak_2012,PhysRevB.87.125149,PhysRevB.90.195114,TMO_MIT_Leonov16,PhysRevB.95.245130,haule_lee_h2_2017, Boehnke16,multitier_GW+DMFT_werner_2017, ComDMFT_2019, chan_zhu_2020_gw_dmft,PhysRevB.93.165106}, where the weakly correlated electrons are treated at the GW level and the strongly correlated electrons are handled by an accurate non-perturbative approach. In this paper, we focus on the discussion and performance assessment of another diagrammatic ab-initio embedding theory, the self-energy embedding theory (SEET)~\cite{Kananenka15,Zgid17,Lan17}. This theory combines the GW approximation~\cite{Hedin1965} with the non-perturbative solution of quantum impurity models. SEET was extensively tested on molecular problems~\cite{Tran_jcp_2015,Tran_jctc_2016,Lan17,Tran_Shee_2017,Tran_useet,PhysRevX.7.031059,PhysRevX.10.011041} and very simple solids~\cite{doi:10.1021/acs.jctc.8b00927}. However, this paper presents the first tests for fully realistic solids. The two antiferromagnetic compounds NiO and MnO are ideal materials for testing the capabilities of SEET. 
Correlation effects in those materials are believed to be strong, and Mott~\cite{Mott_1949} considered NiO as a paradigmatic example of a `Mott' insulator. Both solids have been carefully studied with a wide range of experiments, including angle-integrated and angle-resolved photoemission and bremsstrahlung-isochromat spectroscopy, for NiO~\cite{NiO_expt_Powell70, NiO_expt_Sawatzky84, NiOCoO_expt_Shen90, NiO_expt_Shen91, NiO_expt_Tjernberg96, NiO_expt_AFMandPM_Tjernberg96, NiO_expt_Jauch04, NiO_expt_Schuler05, NiO_expt_MIT_Gavriliuk12, NiO_expt_MIT_Potapkin16, MnONiO_expt_Eastman75, PE_review_SHEN95} and MnO~\cite{MnONiO_expt_Eastman75, MnO_expt_Lad88, MnO_expt_Elp91, PE_review_SHEN95, MnO_PT_Kondo00, MnO_MIT_Patterson04}. These materials have also been studied with a wide array of theoretical methods, including the Hartree-Fock (HF) approximation~\cite{NiOMnO_HF_Towler94}, configuration interaction within metal-ligand clusters~\cite{NiO_Fujimori84}, density functional theory (DFT)~\cite{MnO_DFT_PT_Fang99}, LDA+U~\cite{LDAplusU_Anisimov97}, different variants of the GW approximation~\cite{NiO_GW_Aryasetiawan95,GW_Aryasetiawan98, Sergey04, NiO_QSGW_Li05, Rodl09}, the variational cluster approximation (VCA)~\cite{NiO_VCA_Eder15}, LDA+DMFT~\cite{LDADMFT_Ren06, Kunes07_prb,Kunes07_prl}, and Linearized QSGW+DMFT~\cite{ComDMFT_2019}. This paper proceeds as follows. Sec.~\ref{sec:method} introduces the GW approximation and presents the self-energy embedding theory. Sec.~\ref{sec:compasp} describes the computational details necessary for reproducing our calculations. Sec.~\ref{sec:results} shows theoretical photoemission results as compared to experiment, and Sec.~\ref{sec:conclusions} presents our conclusions. \section{Method}\label{sec:method} We model a solid as an arrangement of atoms in a Bravais lattice with periodicity in all three directions. We employ the Born-Oppenheimer approximation and choose a basis of single-particle wave functions. 
In this work we use Bloch waves constructed from Gaussian basis functions as \begin{equation}\label{Gaussian} \phi_{\mathbf{k}_i,i}(\mathbf{r}) = \sum_{\mathbf{R}} \phi^{\mathbf{R}}_{i}(\mathbf{r})e^{i\mathbf{k}_i\cdot\mathbf{R}}, \end{equation} where $\phi^{\mathbf{R}}_{i}(\mathbf{r})$ is a Gaussian atomic orbital centered in Bravais lattice cell $\mathbf{R}$. These states are not orthogonal and define the overlap matrix \begin{equation} \mathbf{s}_{\bm{i}\bm{j}} = \int_{\Omega} d\mathbf{r} \phi^{*}_{\mathbf{k}_i,i}(\mathbf{r}) \phi_{\mathbf{k}_j,j}(\mathbf{r})\delta_{\mathbf{k}_i,\mathbf{k}_j} \label{Eqn:Ovlp}. \end{equation} The electronic structure Hamiltonian in second quantization is \begin{align} H = \sum_{\bm{i}\bm{j},\sigma} h^{0}_{\bm{i}\bm{j}} c^{\dagger}_{\bm{i}\sigma} c_{\bm{j}\sigma} + \frac{1}{2} \sum_{\substack{\bm{i}\bm{j}\bm{k}\bm{l}\\\sigma\sigma'}} v_{\bm{i}\bm{j}\bm{k}\bm{l}} c^{\dagger}_{\bm{i}\sigma} c_{\bm{k}\sigma'}^\dagger c_{\bm{l}\sigma'} c_{\bm{j}\sigma}, \label{Eqn:Hamiltonian} \end{align} where $c_{\bm{i}\sigma}$ ($c_{\bm{i}\sigma}^{\dagger}$) are annihilation (creation) operators corresponding to the single-particle state $\phi_{\mathbf{k}_i,i}(\mathbf{r})$ with spin $\sigma$; the index $\bm{i}$ (and similarly $\bm{j}$, $\bm{k}$, $\bm{l}$) denotes the combined orbital-momentum index $\bm{i} = (i, \mathbf{k}_i)$. 
The single-particle operator $h^{0}_{\bm{i}\bm{j}}$ and two-particle operator $v_{\bm{i}\bm{j}\bm{k}\bm{l}}$ are defined respectively as \begin{subequations} \begin{align} h^{0}_{\bm{i}\bm{j}} &= \int_{\Omega} d\mathbf{r} \phi^{*}_{\mathbf{k}_i,i}(\mathbf{r}) \left[-\frac{1}{2}\nabla^{2}_{\mathbf{r}} - \sum_{\alpha} \frac{Z_\alpha}{r_{\alpha,\mathbf{r}}} \right] \phi_{\mathbf{k}_j,j}(\mathbf{r})\label{Eqn:H0}, \\ v_{\bm{i}\bm{j}\bm{k}\bm{l}} &= \frac{1}{V} \int_{\Omega} d\mathbf{r} \int_{\mathbb{R}^3} d\mathbf{r}' \frac{\phi^{*}_{\mathbf{k}_i,i}(\mathbf{r})\phi_{\mathbf{k}_j,j}(\mathbf{r})\phi^{*}_{\mathbf{k}_k,k}(\mathbf{r}')\phi_{\mathbf{k}_l,l}(\mathbf{r}')} {|\mathbf{r}-\mathbf{r}'|}, \label{Eqn:V} \end{align} \end{subequations} where $Z_{\alpha}$ is the nuclear charge of atom $\alpha$, $r_{\alpha,\mathbf{r}}= |\mathbf{r}-\mathbf{r}_\alpha|$ is the distance to nucleus $\alpha$ at $\mathbf{r}_\alpha$, $\Omega$ is the volume of the unit cell and $V$ is the volume of the system. The primary object of interest in this paper is the single-particle imaginary-time Green's function $G^{\mathcal{H},\sigma}_{\bm{i}\bm{j}}(\tau)$ for Hamiltonian $\mathcal{H}$ and indices $\bm{i}$ and $\bm{j}$, \begin{align} G^{\mathcal{H},\sigma}_{\bm{i}\bm{j}}(\tau) = -\frac{1}{\mathcal{Z}} Tr\left[e^{-(\beta-\tau) (\mathcal{H} -\mu N)} c_{\bm{i},\sigma} e^{-\tau (\mathcal{H} -\mu N)} c^{\dagger}_{\bm{j},\sigma}\right]. \end{align} Here $\mathcal{Z} = Tr\left[e^{-\beta(\mathcal{H} -\mu N)}\right]$ is the grand partition function, $\mu$ is the chemical potential, $\beta$ is the inverse temperature and $N$ is the number of particles in the system. We define the non-interacting Green's function as $G^{0,\sigma}_{\bm{i}\bm{j}}(\tau) = G^{H^{0},\sigma}_{\bm{i}\bm{j}}(\tau)$, where $H^{0} = \sum_{\bm{i}\bm{j},\sigma} h^{0}_{\bm{i}\bm{j}} c^{\dagger}_{\bm{i}\sigma} c_{\bm{j}\sigma}$, and the interacting one as $G^{\sigma}_{\bm{i}\bm{j}}(\tau) = G^{H,\sigma}_{\bm{i}\bm{j}}(\tau)$. 
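The trace definition above can be checked on the smallest possible example, a single spinless level with energy $\varepsilon$, for which $G(\tau) = -e^{-\tau(\varepsilon-\mu)}/(1+e^{-\beta(\varepsilon-\mu)})$. The following minimal sketch (illustrative parameters, unrelated to the actual NiO/MnO calculations) evaluates the trace over the two-state Fock space numerically:

```python
import numpy as np

# Single spinless level, Fock space {|0>, |1>}; illustrative parameters.
beta, eps, mu = 10.0, 0.3, 0.1
x = eps - mu

c = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # annihilation operator in the {|0>, |1>} basis
K = np.diag([0.0, x])           # H - mu*N, diagonal in the occupation basis

def G_trace(tau):
    """G(tau) = -Tr[e^{-(beta-tau)K} c e^{-tau K} c^dag] / Z."""
    Z = np.sum(np.exp(-beta * np.diag(K)))
    rho_l = np.diag(np.exp(-(beta - tau) * np.diag(K)))
    rho_r = np.diag(np.exp(-tau * np.diag(K)))
    return -np.trace(rho_l @ c @ rho_r @ c.T) / Z

tau = 3.7
closed_form = -np.exp(-tau * x) / (1.0 + np.exp(-beta * x))
assert abs(G_trace(tau) - closed_form) < 1e-12
```

At $\tau\to 0^+$ this reduces to $-(1-n_F(\varepsilon-\mu))$, the standard boundary value of the fermionic Green's function.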
Translation symmetry implies that Green's functions are diagonal in reciprocal space but dense in orbital space and can be defined as \begin{align} G^{\mathbf{k}\sigma}_{ij}(\tau) = G^{\sigma}_{\bm{i}\bm{j}}(\tau), \end{align} with $\mathbf{k} = \mathbf{k}_i = \mathbf{k}_j$. The Matsubara frequency Green's function is defined through the Fourier transform \begin{align} G^{\sigma}_{\bm{i}\bm{j}}(\omega_n) = \int_{0}^{\beta} d\tau G^{\sigma}_{\bm{i}\bm{j}}(\tau) e^{i\omega_n\tau}, \end{align} where $\omega_n = (2n + 1)\frac{\pi}{\beta}$ is the fermionic Matsubara frequency with $n$ an integer. The self-energy is defined by the Dyson equation \begin{align} \Sigma^{\sigma}_{\bm{i}\bm{j}}(\omega_n) = \left(G^{0,\sigma}_{\bm{i}\bm{j}}(\omega_n)\right)^{-1} - \left(G^{\sigma}_{\bm{i}\bm{j}}(\omega_n)\right)^{-1} \label{Eqn:Dyson}. \end{align} The single-particle Green's function is related to the spectral function or density of states $A^{\sigma}_{ij}(\omega)$ by \begin{align} G^{\sigma}_{ij}(\tau)= \int d\omega \frac{A^{\sigma}_{ij}(\omega)e^{-\tau\omega}}{1+e^{-\beta\omega}}.\label{eqn:dos_tau} \end{align} \subsection{GW approximation} In a first step, we solve the system in the fully self-consistent finite-temperature GW approximation introduced by Hedin~\cite{Hedin1965}. This approximation is thermodynamically consistent and conserving but neglects second-order and higher exchange terms. 
The GW self-energy is given by \begin{align} \Sigma^{\mathbf{k},\sigma}_{ij}(\omega_n) &= - \frac{1}{\beta V}\sum_{\substack{m\\\mathbf{k}',kl}} \Big[G^{\mathbf{k}',\sigma}_{lk}(\omega_n + \Omega_m) W^{\mathbf{k}\tk'\mathbf{k}'\mathbf{k}}_{ilkj}(\Omega_m) \nonumber \\ & - \sum_{\sigma'}G^{\mathbf{k}',\sigma'}_{lk}(\omega_m) v^{\mathbf{k}\tk\mathbf{k}'\mathbf{k}'}_{ijkl} \Big], \label{eq:sigma_gw} \end{align} where $\Omega_m = \frac{2m\pi}{\beta}$ are the bosonic Matsubara frequencies and the `screened interaction' $W^{\mathbf{k}\tk'\mathbf{k}'\mathbf{k}}_{ilkj}$ is defined as \begin{align} W_{\bm{i}_1\bm{i}_2\bm{i}_3\bm{i}_4}(\Omega_n) &= v_{\bm{i}_1\bm{i}_2\bm{i}_3\bm{i}_4} + \tilde{W}_{\bm{i}_1\bm{i}_2\bm{i}_3\bm{i}_4}(\Omega_n) \nonumber \\ \tilde{W}_{\bm{i}_1\bm{i}_2\bm{i}_3\bm{i}_4}(\Omega_n) &= \frac{1}{V}\nonumber \\ \sum_{\bm{i}_5\bm{i}_6\bm{i}_7\bm{i}_8}&v_{\bm{i}_1\bm{i}_2\bm{i}_5\bm{i}_6} \Pi_{\bm{i}_5\bm{i}_6\bm{i}_7\bm{i}_8}(\Omega_n)W_{\bm{i}_7\bm{i}_8\bm{i}_3\bm{i}_4}(\Omega_n), \label{eqn:secondhedin} \end{align} with the approximate polarization operator \begin{align} \Pi_{\bm{i}_1\bm{i}_2\bm{i}_3\bm{i}_4}(\Omega_n) &= \frac{1}{\beta}\sum_{m}G^{\sigma}_{\bm{i}_1\bm{i}_3}(\omega_m)G^{\sigma}_{\bm{i}_4\bm{i}_2}(\omega_m + \Omega_n). \end{align} Eq.~\ref{eq:sigma_gw} can be written as \begin{subequations} \begin{align} \Sigma^{\mathbf{k},\sigma}_{ij}(\omega_n) &= (\Sigma^{\text{GW}}_{\infty})^{\mathbf{k},\sigma}_{ij} + (\Sigma^{\text{GW}})^{\mathbf{k},\sigma}_{ij}(\omega_n) \\ (\Sigma^{\text{GW}})^{\mathbf{k},\sigma}_{ij}(\omega_n) &= - \frac{1}{\beta V}\sum_{\substack{m\\\mathbf{k}',kl}} G^{\mathbf{k}',\sigma}_{l,k}(\omega_n + \Omega_m) \tilde{W}^{\mathbf{k}\tk'\mathbf{k}'\mathbf{k}}_{ilkj}(\Omega_m),\label{eq:gwcorr_sigma} \end{align} \end{subequations} where $(\Sigma^{\text{GW}}_{\infty})^{\mathbf{k},\sigma}_{ij}$ is the Hartree-Fock self-energy. 
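Eq.~\ref{eqn:secondhedin} is a geometric series in $v\Pi$: iterating it gives $W = v + v\Pi v + v\Pi v\Pi v + \dots = (\mathbb{1} - v\Pi)^{-1}v$. A minimal numerical sketch with random matrices (a toy static polarization of small norm, standing in for the physical $\Pi$) verifies this resummation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
v = A @ A.T / (2 * n)           # toy symmetric "bare interaction"
Pi = -0.15 * np.eye(n)          # toy static polarization, small norm

# Closed form of W = v + v Pi W:  W = (1 - v Pi)^{-1} v
W_closed = np.linalg.solve(np.eye(n) - v @ Pi, v)

# Explicit resummation of the bubble series v + v Pi v + v Pi v Pi v + ...
W_series, term = v.copy(), v.copy()
for _ in range(200):
    term = v @ Pi @ term
    W_series += term

assert np.allclose(W_closed, W_series)
```

The series converges only when $\lVert v\Pi\rVert < 1$; the closed form (matrix inversion), which is what Eq.~\ref{eqn:polar} uses for the renormalized polarization, is valid beyond that radius.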
The self-consistent GW correction to the Hartree-Fock self-energy, $(\Sigma^{\text{GW}})^{\mathbf{k},\sigma}_{ij}(\omega_n)$, contains an infinite series of `bubble' diagrams as shown in Fig.~\ref{fig:bubbles}. \begin{figure}[tbh] \includegraphics[width=\linewidth]{Fig1.pdf} \caption{Diagrams beyond the Hartree diagram in the self-consistent GW approximation. Wiggly lines denote bare interactions $v$, lines with arrow dressed Green's functions $G$.} \label{fig:bubbles} \end{figure} In our GW implementation, we use a Coulomb integral decomposition, since it is not practical to store the full four-index Coulomb integral due to its size. Several ways to employ its symmetry to decompose it are known, such as Cholesky decomposition~\cite{Cholesky2008} or the resolution of identity (also known as density fitting)~\cite{Werner2003,Ren2012,Sun2017}. Here, we write $v_{\bm{i}_1\bm{i}_2\bm{i}_3\bm{i}_4} = \sum_{Q} V^{Q}_{\bm{i}_1\bm{i}_2}V^{Q}_{\bm{i}_3\bm{i}_4}$, where $Q$ is an auxiliary index and $V^{Q}_{\bm{i}_1\bm{i}_2}$ is a three-point integral defined as \begin{align}\label{eqn:VQ} V^{Q}_{\bm{i}_1\bm{i}_2}= \sum_{P}\int_{\Omega} d\mathbf{r} d\mathbf{r}' \frac{\phi^{*}_{\bm{i}_1}(\mathbf{r})\phi_{\bm{i}_2}(\mathbf{r})\chi^{\mathbf{q}}_{P}(\mathbf{r}')} {|\mathbf{r}-\mathbf{r}'|} \mathbf{J^{-\frac{1}{2}}}^{\mathbf{q}}_{PQ}, \end{align} with momentum transfer $\mathbf{q} = \mathbf{k}_{i_1}-\mathbf{k}_{i_2} = \mathbf{k}_{i_3} - \mathbf{k}_{i_4}$, $\chi^{\mathbf{q}}_{P}(\mathbf{r}')$ an auxiliary basis function and $\mathbf{J}^{-1} = \mathbf{J}^{-\frac{1}{2}}\mathbf{J}^{-\frac{1}{2}}$ the inverse of \begin{align} J^{\mathbf{q}}_{PQ} = \int_{\Omega} d\mathbf{r} d\mathbf{r}' \frac{\chi^{\mathbf{q}*}_{P}(\mathbf{r})\chi^{\mathbf{q}}_{Q}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}. 
\end{align} This allows us to simplify Eq.~\ref{eqn:secondhedin} to \begin{align} \tilde{W}_{\bm{i}_1\bm{i}_2\bm{i}_3\bm{i}_4}(\Omega_n) = -\sum_{Q,Q'} V^{Q}_{\bm{i}_1\bm{i}_2} \tilde{P}_{QQ'}^{\mathbf{q}}(\Omega_n) V^{Q'}_{\bm{i}_3\bm{i}_4}, \label{eqn:screened_series} \end{align} where the renormalized polarization matrix $\tilde{P}^{\mathbf{q}} (\Omega_n)$ is \begin{align}\label{eqn:polar} \tilde{P}^{\mathbf{q}} (\Omega_n) &= [ \mathbb{1} - \tilde{P}_{0}^{\mathbf{q}}(\Omega_{n})]^{-1} \tilde{P}_{0}^{\mathbf{q}}(\Omega_{n}), \end{align} and \begin{align}\label{eqn:barepolar} \tilde{P}_{0,Q,Q'}^{\mathbf{q}}(\Omega_{n}) &= \frac{1}{V}\sum_{\substack{\mathbf{k},m,\sigma\\i_1,i_2,i_3,i_4}} \nonumber\\ V^{Q,\mathbf{k},\mathbf{k}+\mathbf{q}}_{i_1i_2} &G^{\mathbf{k},\sigma}_{i_1,i_4}(\omega_m) G^{\mathbf{k+q},\sigma}_{i_3,i_2}(\omega_m + \Omega_n) V^{Q'\mathbf{k}+\mathbf{q},\mathbf{k}}_{i_3i_4}. \end{align} Eq.~\ref{eq:gwcorr_sigma} then simplifies to \begin{align} \label{eqn:GWselfenergy} \tilde{\Sigma}^{\mathbf{k},\sigma}_{i_1i_2}(\tau)\nonumber &= \\-\frac{1}{V}\sum_{\substack{\mathbf{q},i_3,i_4\\Q,Q'}}& V^{Q,\mathbf{k}\tk-\mathbf{q}}_{i_1,i_4} G^{\mathbf{k}-\mathbf{q},\sigma}_{i_3,i_4}(\tau)\tilde{P}^{\mathbf{q}}_{Q, Q'}(\tau) V^{Q',\mathbf{k}-\mathbf{q},\mathbf{k}}_{i_3i_2}. \end{align} We diagrammatically represent this decomposition in Fig.~\ref{fig:self_contract}. \begin{figure}[thb] \includegraphics[width=0.6\linewidth]{Fig2.pdf} \caption{Diagrams of Fig.~\ref{fig:bubbles} expressed with the decomposition of Eq.~\ref{eq:gwcorr_sigma}. Interrupted wiggly lines denote the auxiliary basis decomposition indices $Q$ and $Q'$.}\label{fig:self_contract} \end{figure} \subsection{Self-energy embedding method}\label{sec:seet} GW is an approximate method with well-known limitations. To capture correlation effects beyond the GW approximation, either high-order diagrammatic methods or quantum embedding methods can be used. 
Embedding theories that are $\Phi$-derivable and based on diagrammatic expansions such as DMFT, GW+EDMFT, SEET, or self-energy functional theory aim to systematically improve low-order perturbative results. These embedding theories satisfy conservation laws and are thermodynamically consistent. Here, we briefly summarize the SEET equations used in this paper. In this section we assume that all quantities are expressed in an orthogonal basis, which we will discuss later. The real space Green's function and the lattice (k-space) Green's function are related by the Fourier transform \begin{align} G_{ij}^{\mathbf{R}\tR'}(\omega_n)=\frac{1}{V} \sum_{\mathbf{k}} e^{i \mathbf{k}\cdot\mathbf{R}}G_{ij}^{\mathbf{k}}(\omega_n) e^{-i \mathbf{k}\cdot\mathbf{R}'}. \end{align} The GW momentum resolved Green's function of the entire lattice is defined as \begin{align} (G^\text{GW}(\omega_n))^\mathbf{k}=\big[(i\omega_n+\mu)\mathbb{1} -h^{0,\mathbf{k}}-(\Sigma^\text{GW})^{\mathbf{k}}\big]^{-1}, \end{align} where $(\Sigma^\text{GW})^{\mathbf{k}}=(\Sigma^\text{GW}_\infty)^{\mathbf{k}}+(\Sigma^\text{GW}(\omega_n))^{\mathbf{k}}$. As a result of the embedding procedure, we define a lattice Green's function in the following way \begin{align}\label{eq:embedding_lattice_gf} (G(\omega_n))^\mathbf{k}=\big[(i\omega_n+\mu)\mathbb{1} -h^{0,\mathbf{k}}-\Sigma^\mathbf{k} \big]^{-1}, \end{align} where \begin{align}\label{eq:embedding_lattice_se} \Sigma^\mathbf{k}_{ij}=(\Sigma^{GW})_{ij}^{\mathbf{k}}+\sum_A\left((\Sigma^\text{imp}_A)_{ij}-(\Sigma_A^\text{DC-GW})_{ij}\right)\delta_{(ij)\in A } \end{align} with $\Sigma^\text{imp}=\Sigma^\text{imp}_\infty+\Sigma^\text{imp}(\omega_n)$ containing non-perturbatively added self-energy diagrams and $\Sigma^\text{DC-GW}=\Sigma^\text{DC-GW}_\infty+\Sigma^\text{DC-GW}(\omega_n)$ subtracting those diagrams that are contained both in the GW solution and the non-perturbative construction. 
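At a fixed momentum and frequency, the self-energy combination of Eq.~\ref{eq:embedding_lattice_se} amounts to a block update of the GW self-energy matrix. A schematic sketch for a single active space, with random matrices standing in for the three self-energy contributions (illustrative only):

```python
import numpy as np

norb, active = 4, [0, 1]        # toy problem: 4 orbitals, active space A = {0, 1}
rng = np.random.default_rng(1)

def rand_mat(n):
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

sigma_gw = rand_mat(norb)       # GW self-energy at fixed k and frequency
sigma_imp = rand_mat(2)         # impurity self-energy on the active block
sigma_dc = rand_mat(2)          # double-counting correction on the active block

# Sigma = Sigma_GW everywhere; on the active block the double-counted
# diagrams are subtracted and the non-perturbative contribution is added.
sigma = sigma_gw.copy()
sigma[np.ix_(active, active)] += sigma_imp - sigma_dc

assert np.allclose(sigma[2:, :], sigma_gw[2:, :])   # outside A: pure GW
assert np.allclose(sigma[:, 2:], sigma_gw[:, 2:])
```

With several disjoint subspaces $A$, the same block update is simply repeated for each subspace.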
Subsets $A$ of impurity orbitals with indices $ij \in A$, sometimes also called active orbitals, are defined as groups of the most physically relevant orbitals for the problem, whose correlations need to be included at a higher than perturbative level. To define the self-consistency condition used in SEET we perform a Fourier transform of $(G(\omega_n))^\mathbf{k}$, $\Sigma^\mathbf{k}$, and $h^{0,\mathbf{k}}$ from momentum to real space, obtaining $G^{\mathbf{R}\tR'}$, $\Sigma^{\mathbf{R}\tR'}$, and $h^{0,\mathbf{R}\tR'}$. The Fourier transform results in the following structure of the self-energy matrix in real space \begin{align} \Sigma_{ij}^{\mathbf{R}\tR'} &= (\Sigma^{GW})_{ij}^{\mathbf{R}\tR'} \nonumber \\ &+\sum_A\left((\Sigma^\text{imp}_A)_{ij}- (\Sigma_A^\text{DC})_{ij}\right)\delta_{\mathbf{R}\tR'}\delta_{(ij)\in A}.\label{eq:SigmaWithA} \end{align} For unit cells away from the central cell ($\mathbf{R} \neq \mathbf{R}'$) the self-energy is treated at the weakly correlated level, $\Sigma^{\mathbf{R}\tR'}_{ij}=(\Sigma^{GW})_{ij}^{\mathbf{R}\tR'}$, while the local, central-cell self-energy for $\mathbf{R} = \mathbf{R}'$ includes non-perturbative corrections $(\Sigma^\text{imp}_A)_{ij}$ for every orbital group $A$. This leads us to the definition of an embedding condition in SEET, where we apply block-matrix inversion to the real space quantities and absorb all terms connecting orbitals in $A$ to the remainder of the system into the matrix $\Delta^A_{ij}(\omega_n)$, in the following way \begin{align}\label{eq:seet_embedding_condition} (G(\omega_n))^{\mathbf{R}\tR}_{ij\in A}=\big[ (i\omega_n +\mu)\mathbb{1}-h^{0,\mathbf{R}\tR}_{ij\in A}-\Sigma^{\mathbf{R}\tR}_{ij\in A} - \Delta^A_{ij}(\omega_n)\big]^{-1}. 
\end{align} The hybridization matrix $\Delta^A_{ij}(\omega_n)$ arises since taking a subset block does not commute with matrix inversion, namely $(G(\omega_n))^{\mathbf{R}\tR}_{ij\in A} \ne \big[\big[(G(\omega_n))^{-1}\big]^{\mathbf{R}\tR}_{ij\in A}\big]^{-1}= \big[ (i\omega_n +\mu)\mathbb{1}-h^{0,\mathbf{R}\tR}_{ij\in A}-\Sigma^{\mathbf{R}\tR}_{ij\in A}\big]^{-1}$. Note that Eq.~\ref{eq:seet_embedding_condition} can further be rewritten as \begin{align}\label{eq:Gloc} [(G(\omega_n))^{\mathbf{R}\tR}_{ij\in A}]^{-1} &=(i\omega_n+\mu)\mathbb{1}-\tilde{h}^{0,\mathbf{R}\tR}_{ij\in A} \\ &- \Sigma^{\text{corr},\mathbf{R}\tR}_{ij\in A}(\omega_n) -\Sigma^{\text{imp}}_{ij\in A} - \Delta^A_{ij}(\omega_n),\nonumber \end{align} where $\tilde{h}^{0,\mathbf{R}\tR}_{ij} = h^{0,\mathbf{R}\tR}_{ij} + (\Sigma^\text{GW})^{\mathbf{R}\tR}_{\infty,ij} - \Sigma^\text{DC}_{\infty,ij}$ is the renormalized non-interacting Hamiltonian, and $\Sigma^{\text{corr},\mathbf{R}\tR}_{ij}(\omega_n) = (\Sigma^\text{GW})^{\mathbf{R}\tR}_{ij}(\omega_n) - \Sigma^\text{DC}_{ij}(\omega_n)$ is the local correction from the weakly correlated method. We emphasize that, in SEET, the substantial contribution of $\Sigma^{\text{corr},\mathbf{R}\tR}_{ij\in A}(\omega_n)$ to the local correlated orbitals is included explicitly in the real space self-consistency condition in Eq.~\ref{eq:seet_embedding_condition} and is not included as part of the hybridization, as done in the GW+DMFT schemes described in Refs.~\onlinecite{haule_lee_h2_2017,chan_zhu_2020_gw_dmft}. These contributions stem from GW diagrams that have both external legs $i$ and $j$ in the active space but contain one or more internal indices on the remaining orbitals. 
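The origin of the hybridization can be illustrated numerically. With a random matrix $M$ standing in for $(i\omega_n+\mu)\mathbb{1}-h^{0}-\Sigma$ (a sketch at a single frequency, not the production code), the exact $A$-block of $M^{-1}$ differs from the naive inverse of $M_{AA}$, and the mismatch is exactly the Schur complement of the environment block:

```python
import numpy as np

rng = np.random.default_rng(2)
n, nA = 6, 2
A, B = slice(0, nA), slice(nA, n)

# M stands in for (i*w_n + mu)*1 - h0 - Sigma; shifted to be well conditioned
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) + n * np.eye(n)
G = np.linalg.inv(M)

G_AA = G[A, A]                                          # exact embedded block
assert not np.allclose(np.linalg.inv(M[A, A]), G_AA)    # naive block inverse fails

# G_AA = [M_AA - Delta]^{-1}  =>  Delta = M_AA - G_AA^{-1}
Delta = M[A, A] - np.linalg.inv(G_AA)

# Block-inversion (Schur complement) identity: Delta = M_AB M_BB^{-1} M_BA
assert np.allclose(Delta, M[A, B] @ np.linalg.inv(M[B, B]) @ M[B, A])
```

The same block-inversion identity, evaluated frequency by frequency, is what Eq.~\ref{eq:seet_embedding_condition} encodes for the central unit cell.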
Furthermore, the explicit treatment of $\Sigma^{\text{corr},\mathbf{R}\tR}_{ij\in A}(\omega_n)$ avoids the non-causality problems with the hybridization described in Ref.~\onlinecite{haule_lee_h2_2017}, since $\Delta^A_{ij}(\omega_n)$ as defined in Eq.~\ref{eq:seet_embedding_condition} is always causal. We also emphasize that, while the total chemical potential is adjusted to give a fixed number of particles in the unit cell, each impurity subspace $A$ may have any non-integer occupancy. In addition, the number of particles in each subspace may change substantially during the iterative procedure as electrons shift from the subspaces to the rest of the system and back, while maintaining the total number of particles. To evaluate $\Sigma^\text{imp}_{ij\in A}$, we define the auxiliary propagator \begin{align}\label{eq:Gimp} \mathcal{G}_{A}^{-1}(\omega_n)=\mathcal{G}_{A}^{0,-1}(\omega_n) -\Sigma^\text{imp}_{ij\in A}, \end{align} where the zeroth order $\mathcal{G}_{A}^{0,-1}(\omega_n)$ is defined as \begin{align}\label{eq:Gimp0} \mathcal{G}_{A}^{0,-1}(\omega_n) =(i\omega_n+\mu)\delta_{ij}-\tilde{h}^{0,\mathbf{R}\tR}_{ij\in A} - \Delta^A_{ij}(\omega_n). \end{align} As realized in the context of DMFT~\cite{Georges96}, a propagator of the form of Eq.~\ref{eq:Gimp} can be obtained by solving the quantum impurity model with impurity orbitals defined as the active orbitals from a space $A$. In SEET, the two-body interactions in the impurity remain the bare, unchanged interactions of the original lattice Hamiltonian, since screening is included by the explicit treatment of $\Sigma^{\text{corr},\mathbf{R}\tR}_{ij\in A}(\omega_n)$ at the level of the embedding condition and Eq.~\ref{eq:Gloc}. The fact that the bare interactions do not need to be adjusted in the impurity model is a major difference from formulations of GW+EDMFT, as implemented {\it e.g.} in Ref.~\cite{Boehnke16}. 
The GW+EDMFT double counting correction due to the presence of the screened $W^\text{imp}(\omega_n)$ removes the local correction to the self-energy from the weakly correlated method, so that $\Sigma^{corr,\mathbf{R}\tR}_{ij}(\omega_n) \equiv 0$~\cite{PhysRevLett.90.086402}. This GW+EDMFT construction containing $W^\text{imp}(\omega_n)$ leads to an impurity model with a different hybridization and noninteracting Hamiltonian and, since the model needs to take into account correlations outside the active space, to a rescaling of the interactions. However, while operationally different, both GW+EDMFT and SEET are consistent, conserving, and contain RPA screening by GW diagrams. In practice, our method starts from a self-consistent finite-temperature GW solution of the lattice problem. It then proceeds by solving the impurity problems for the different disjoint subspaces $A$ independently. The non-perturbative solution of $\Sigma^\text{imp}_{ij}$ is used to update the lattice self-energy and the Green's function from Eqs.~\ref{eq:embedding_lattice_se} and \ref{eq:embedding_lattice_gf}, followed by a new calculation of the real space Green's function and hybridization (Eq.~\ref{eq:seet_embedding_condition}) and a subsequent solution of the impurity model. In principle, after obtaining the self-consistent solution of Eq.~\ref{eq:seet_embedding_condition}, the GW solution would need to be iterated again. This has not been done in this work. \section{Computational aspects}\label{sec:compasp} \subsection{Basis and lattice structure} We study the electronic properties of antiferromagnetic fcc NiO and MnO with lattice constants $a=4.1705$~\si{\angstrom}~\cite{PhysRevB.3.1039} and $a=4.4450$~\si{\angstrom}~\cite{MnO_a}, respectively, at temperature $T\sim 451$~K ($\beta = 700$~Ha$^{-1}$). In order to capture the type-II anti-ferromagnetic ordering we double the unit cell along the [$111$] direction. 
The resulting unit cell is rhombohedral with two transition metal atoms and two oxygen atoms. Any small rhombohedral distortion below the N\'{e}el temperature is neglected. For both systems we use the \emph{gth-dzvp-molopt-sr} basis~\cite{GTHBasis} with \emph{gth-pbe} pseudopotential~\cite{GTHPseudo}. The \emph{def2-svp-ri} basis is chosen as the auxiliary basis for the Coulomb integral decomposition~\cite{RI_auxbasis}. The finite-size errors of the GW exchange diagram are corrected by the Ewald probe-charge approach~\cite{EwaldProbeCharge, CoulombSingular}. The Coulomb integrals (Eq.~\ref{eqn:VQ}) and non-interacting matrix elements (Eq.~\ref{Eqn:H0}) are prepared by \texttt{PySCF}~\cite{PySCF}. The use of a finite basis of Gaussian orbitals introduces an error which is difficult to assess independently. We therefore compared results of simple DFT calculations of our systems in this basis to those obtained in a plane wave code~\cite{Kresse1999} and found satisfactory agreement. \subsection{Imaginary-time/Matsubara frequency grid} All dynamical functions, such as the Green's function, polarization, or self-energy, are computed in an imaginary time formalism. We use the compact intermediate representation (IR)~\cite{PhysRevB.96.035147} with sparse frequency sampling~\cite{PhysRevB.101.035144} for their storage and manipulation. The IR has one dimensionless parameter $\Lambda$ that should be chosen larger than $\beta \omega_\text{max}$, where $\omega_\text{max}$ is the bandwidth of the system (difference between highest and lowest single-particle energy). In this work we use $\Lambda = 10000$ and generate the IR basis functions using the \texttt{irbasis}~\cite{CHIKANO2019181} open-source software package. Other representations such as Legendre~\cite{Boehnke2011,dong2020legendrespectral} or Chebyshev polynomials~\cite{PhysRevB.98.075127} and other sparse grids~\cite{kaltak2019minimax} could be used instead. 
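The transform conventions of the imaginary-time formalism can be checked on a single pole, for which $G(i\omega_n)=1/(i\omega_n-\varepsilon)$. A brute-force trapezoidal evaluation of the Fourier integral on a dense uniform $\tau$ grid (illustrative parameters) reproduces this; the large number of grid points needed is precisely what compact representations such as the IR avoid:

```python
import numpy as np

beta, eps = 20.0, 0.5                       # illustrative parameters
tau = np.linspace(0.0, beta, 20001)         # dense uniform grid
G_tau = -np.exp(-tau * eps) / (1.0 + np.exp(-beta * eps))
dtau = tau[1] - tau[0]

for n in (0, 1, 5):
    wn = (2 * n + 1) * np.pi / beta         # fermionic Matsubara frequency
    f = G_tau * np.exp(1j * wn * tau)
    # Trapezoidal rule for G(i w_n) = int_0^beta dtau G(tau) e^{i w_n tau}
    G_wn = dtau * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    assert abs(G_wn - 1.0 / (1j * wn - eps)) < 1e-4
```

The same accuracy is reached with a few dozen IR coefficients instead of $\sim 10^4$ uniform points, which is why the compact basis is used for storage and manipulation.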
\subsection{Orthogonalization} We solve the GW approximation in the basis of atomic orbitals. In this basis, obtaining analytically continued results is difficult, as the spectral functions of Eq.~\ref{eqn:dos_tau} are neither strictly positive nor normalized, and straightforward application of the Maximum Entropy Method~\cite{Jarrell1996} is not possible. In addition, most impurity solvers (including the exact diagonalization solver used in this work~\cite{ISKAKOV2018128}) require orthogonal orbitals. Finally, to perform the SEET embedding procedure the Green's functions have to be expressed in an orthogonal basis. It is therefore convenient to orthogonalize the basis and express the GW Green's function in it before performing further analysis. In this paper we use two types of orbital orthogonalization, $G^\text{orth} = X G X^{*}$, which differ in the transformation matrix $X$ employed. Symmetrical orbital orthogonalization~\cite{LOWDIN1970185} uses $X = S^{\frac{1}{2}}$, with $\mathbf{s} = S^{\frac{1}{2}} S^{\frac{1}{2}}$, and $\mathbf{s}$ defined in Eq.~\ref{Eqn:Ovlp}. In the canonical orthogonalization~\cite{LOWDIN1970185} the transformation matrix is $X = (V_S s^{\frac{1}{2}})^{-1}$, where $V_S$ is the matrix constructed from the eigenvectors of the overlap matrix $\mathbf{s}$ and $s^{\frac{1}{2}}$ is the diagonal matrix constructed from the square-roots of the corresponding eigenvalues of the overlap matrix~\cite{LOWDIN1970185}. \subsection{Analytical continuation} The $\mathbf{k}$-space spectral function measured in ARPES, \begin{align} A^{\mathbf{k}}(\omega) = \sum_{j} A_{jj}^{\mathbf{k}}(\omega), \end{align} is the trace of the orbitally resolved spectral functions $A_{jj}^{\mathbf{k}}(\omega)$, which are determined by the Green's functions $G^{\mathbf{k}}_{ij}(\tau)$ according to Eq.~\ref{eqn:dos_tau}. In the orthogonal basis $A_{jj}^{\mathbf{k}}(\omega)$ is normalized to one and strictly positive. 
It can therefore be obtained from a maximum entropy continuation~\cite{Jarrell1996}. We use the open-source \texttt{ALPS}~\cite{Gaenko2017} \texttt{Maxent} package~\cite{Levy2017} with a truncated continuation kernel, with the Green's functions defined on the grid points of the IR basis~\cite{PhysRevB.101.035144}. We have verified for select data points that our results are consistent with the Pad\'{e} continued fraction method. Alternative methods for continuation exist, including the stochastic optimization method~\cite{Mishchenko2000} and the Sparse Modelling~\cite{Otsuki2017} approach. In addition, continuations of derived quantities, such as the cumulant~\cite{Stanescu2006} or the self-energy~\cite{Wang2009}, are possible. We have not explored these methods. In order to obtain the local spectral function we first perform the summation over momenta and then continue the resulting orbitally-resolved local Green's function $G^\text{loc}_{ij}(\tau) = \frac{1}{V}\sum_{\mathbf{k}} G^{\mathbf{k}}_{ij}(\tau)$ as \begin{align} G^\text{loc}_{ii}(\tau)= \int d\omega \frac{A^\text{loc}_{ii}(\omega)e^{-\tau\omega}}{1+e^{-\beta\omega}}.\label{eqn:ldos_tau} \end{align} While continuation and linear transforms, such as basis change and transforms to real space, commute in principle, in practice analytically continued data will depend on the order of these operations due to the ill-conditioned nature of the analytical continuation kernel. The total local spectral function is defined as \begin{align} A^\text{loc}(\omega) = \sum_{i} A^\text{loc}_{ii}(\omega).\label{eq:ldos} \end{align} \subsection{Attribution of the orbital character} In order to gain additional understanding of the spectral function, it is useful to ascribe orbital character to the analytically continued functions. The basis transformation to the orthogonal orbitals allows such an identification by writing the spectral function of an orthogonalized orbital as a sum of contributions from various atomic orbitals. 
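Both transformations used for this purpose are functions of the overlap matrix only. A minimal sketch of the symmetric (L\"owdin) variant for a toy $2\times 2$ overlap matrix (illustrative numbers, not from the actual basis sets):

```python
import numpy as np

# Toy 2x2 overlap matrix of a non-orthogonal AO basis (illustrative numbers)
s = np.array([[1.0, 0.4],
              [0.4, 1.0]])

# Symmetric (Loewdin) factor X = S^{1/2} via eigendecomposition of s
w, V = np.linalg.eigh(s)
X = V @ np.diag(np.sqrt(w)) @ V.T

# Green's-function transform to the orthogonal basis
G_ao = np.array([[-0.5, 0.1],
                 [0.1, -0.7]])
G_orth = X @ G_ao @ X

# Consistency checks: X*X recovers s, and orbitals transformed with
# X^{-1} = S^{-1/2} have unit overlap
Xinv = np.linalg.inv(X)
assert np.allclose(X @ X, s)
assert np.allclose(Xinv @ s @ Xinv, np.eye(2))
```

Because $X$ is built from $s$ alone and stays as close as possible to the identity, each orthogonal orbital retains a dominant atomic parent, which is what makes the attribution discussed here feasible.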
We find that the symmetrical orthogonal basis provides an almost unique correspondence between orthogonal and atomic states. Basis functions in the canonical orthogonal basis typically mix several atomic states, such that the attribution to a single atomic state is more difficult. However, we find that in some cases orbitals of similar type, such as Ni $t_{2g}$ states, are grouped together. While the attribution of the orbital character is quite straightforward in reciprocal space, where each k-point can be analyzed independently, it may be problematic in real space, since the basis transformation will mix Gaussian basis functions centered in different unit cells. However, we found that in the symmetrical orthogonal basis the configuration (contribution from different atomic orbitals) of each orthogonal orbital remains the same for different k-points. In the local unit cell, each orthogonal orbital can therefore be uniquely traced back to its corresponding atomic orbital. \subsection{Solution of the impurity model} SEET is based on the embedding of a non-perturbative impurity model into a self-consistently adjusted hybridization with the environment. Solving impurity models is a computationally difficult problem and requires a quantum impurity solver such as QMC~\cite{Gull2011}, NRG~\cite{Bulla2008}, Exact Diagonalization (ED)~\cite{doi:10.1063/1.4823192}, CI~\cite{Zgid_2011,Zgid2012}, or Coupled Clusters~\cite{Shee2019,Zhu2019}. SEET requires the solution of impurity problems with general off-diagonal interactions and hybridizations, potentially at strong coupling. However, the ability to treat multiple active spaces keeps the size of the individual impurity problems moderate. We found ED to be an ideal impurity solver for SEET problems with $2$--$5$ orbitals. ED requires discretization of the continuous hybridization function $\Delta(\omega_n)$ in Eq.~\ref{eq:Gimp0} and its approximation by a finite, typically small, number of discrete bath sites. 
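Such a bath discretization amounts to a bound-constrained least-squares fit of the hybridization. As a sketch (illustrative bath parameters and \texttt{scipy}, not the production fitting code), the fit below recovers two known bath levels from a model hybridization:

```python
import numpy as np
from scipy.optimize import least_squares

beta = 50.0
wn = (2 * np.arange(200) + 1) * np.pi / beta    # fermionic Matsubara grid

def delta(V, e):
    """Discrete-bath hybridization sum_b V_b^2 / (i w_n - eps_b)."""
    return sum(v**2 / (1j * wn - eb) for v, eb in zip(V, e))

# Model hybridization built from two known bath levels (illustrative numbers)
target = delta([0.4, 0.3], [-0.6, 0.8])

def residual(p):
    r = (delta(p[:2], p[2:]) - target) / wn     # 1/w_n weight on the residue
    return np.concatenate([r.real, r.imag])

# Bound-constrained fit: couplings positive, bath energies near the Fermi level
fit = least_squares(residual, x0=[0.3, 0.3, -0.5, 0.5],
                    bounds=([0.0, 0.0, -2.0, -2.0], [2.0, 2.0, 2.0, 2.0]))
assert fit.cost < 1e-10
assert np.allclose(np.sort(fit.x[2:]), [-0.6, 0.8], atol=1e-4)
```

In realistic fits the target hybridization does not lie exactly in the discrete-bath manifold, so the residue saturates at a finite value and its size controls the discretization error of the ED solution.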
In the symmetric orthogonal basis, off-diagonal elements of the hybridization function are in our experience $1$--$2$ orders of magnitude smaller than the diagonal elements. This allows us to neglect them entirely and fit $\Delta^{\sigma}_{ii}(\omega_n)$ by minimizing the fit residue \begin{align} \chi^2_{\sigma i} = \sum_n f(n)\lVert\Delta^{\sigma}_{ii}(\omega_n) - \sum_{b=1}^{N_b} \frac{\mathcal{V}^{\sigma}_{ib} \mathcal{V}^{\sigma*}_{ib}}{i\omega_n - \epsilon^{\sigma}_b}\rVert, \label{eqn:delta_ed} \end{align} with weight function $f(n)$ chosen to suppress high frequency contributions to $\Delta^{\sigma}_{ii}(\omega_n)$ (we usually choose $f(n)=1/\omega_n$). Using a bound-constrained nonlinear least-squares method~\cite{Voglis_arectangular,Branch1999}, we enforce the constraints that $\mathcal{V}_{ib}$ be positive and that $\epsilon_b$ lie in the vicinity of the Fermi energy. For two-orbital problems we use 5 bath sites per orbital; for three- and four-orbital problems we use 3. We solve impurity problems using the open-source ED impurity solver of Ref.~\cite{ISKAKOV2018128}. \begin{figure}[tbh] \includegraphics[width=0.47\textwidth]{Fig3_a.pdf} \includegraphics[width=0.47\textwidth]{Fig3_b.pdf} \caption{Local total density of states of NiO (top panel) and MnO (bottom panel), for systems of size $2$~$\times$~$2$~$\times$~$2$ (dashed green), $4$~$\times$~$4$~$\times$~$4$ (dash-dotted dark red) and $6$~$\times$~$6$~$\times$~$6$ (orange), obtained with self-consistent GW.} \label{Fig:FiniteSize} \end{figure} \section{Results}\label{sec:results} \subsection{Finite size effects in NiO and MnO} Fig.~\ref{Fig:FiniteSize} shows the self-consistent GW approximation to the local spectral function $A^\text{loc}(\omega)$, as defined in Eq.~\ref{eq:ldos}, obtained for NiO (top panel) and MnO (bottom panel). 
We show curves for three different momentum discretizations, $2$~$\times$~$2$~$\times$~$2$, $4$~$\times$~$4$~$\times$~$4$ and $6$~$\times$~$6$~$\times$~$6$, to examine finite size effects. These appear to be substantial between the $2$~$\times$~$2$~$\times$~$2$ and $4$~$\times$~$4$~$\times$~$4$ lattices. Increasing the lattice size further to $6$~$\times$~$6$~$\times$~$6$ shows the saturation of the local density of states. In particular, in both NiO and MnO the size of the gap shrinks substantially ($\sim$~2~eV) as the system size is enlarged from $2$~$\times$~$2$~$\times$~$2$ to $4$~$\times$~$4$~$\times$~$4$, with an additional correction of $0.5$~eV as the system size is enlarged to $6$~$\times$~$6$~$\times$~$6$. An extrapolation in the inverse linear size to the infinite system size limit suggests that an additional reduction of the gap size by $0.5$~$-$~$0.7$~eV is present when comparing the $6$~$\times$~$6$~$\times$~$6$ lattice to the thermodynamic limit. Consequently, when analyzing all our results from the embedding procedure presented in the subsequent sections, one should be aware that they will be affected by the presence of these finite size effects and that the resulting gaps should be ``rescaled'' by an additional $0.5$~$-$~$0.7$~eV. Additional basis truncation effects due to the incompleteness of the basis set are also possible. These effects have not been assessed in this work due to the prohibitive computational cost and linear dependency issues in Gaussian basis sets. Spectral functions from the self-consistent GW method do not consist of sharp peaks but rather exhibit smooth, broad features. This smoothness is a result of the analytical continuation procedure which leads to a significant broadening of features, especially at energies far from the Fermi level. The self-consistent GW response function in Fig.~\ref{Fig:FiniteSize} corresponds to photoemission (occupied part) and inverse photoemission (unoccupied part).
Note that this theoretical assignment is done while neglecting the effect of element- and energy-dependent photoemission cross-sections present when collecting experimental data. (In angle-resolved photoemission spectroscopy (ARPES), the registered spectrum is equal to a sum of orbital photocurrents multiplied by cross-section matrix elements.) \begin{figure}[tbh] \includegraphics[width=0.47\textwidth]{Fig4.pdf} \caption{Orbitally resolved $\mathbf{k}$-space spectral function obtained with GW in the canonical orthogonal basis at the $\Gamma$ point. Dashed black line: experimental ARPES data near $\Gamma$, Ref.~\cite{NiOCoO_expt_Shen90}.} \label{Fig:GammaVsArpes} \end{figure} \begin{figure}[tbh] \includegraphics[width=0.47\textwidth]{Fig5.pdf} \caption{Orbitally resolved $\mathbf{k}$-space spectral function obtained with GW in the symmetrical orthogonal basis at $\frac{2}{3}$ of the distance between $\Gamma$ and $X$ (i.e. closer to $X$). Dashed black line: experimental ARPES data, Ref.~\cite{NiOCoO_expt_Shen90}.} \label{Fig:05GammaXVsArpes} \end{figure} \subsection{NiO} \begin{figure*}[bth] \includegraphics[width=0.47\textwidth]{Fig6_a.pdf} \includegraphics[width=0.47\textwidth]{Fig6_b.pdf} \includegraphics[width=0.47\textwidth]{Fig6_c.pdf} \includegraphics[width=0.47\textwidth]{Fig6_d.pdf} \caption{ Orbitally resolved SEET local spectral function for NiO. Panels in the plots correspond to impurity choices from Table~\ref{tab:impurities_NiO}. (Specifically, the panel a) corresponds to impurity choice a), etc.) 
Dash-dotted lines: GW, solid lines: SEET; Dashed lines: experimental data (see Ref.~\cite{NiO_expt_Sawatzky84}).} \label{Fig:NiOLocal_SEETvsGW} \end{figure*} \begin{figure*}[htp] \includegraphics[width=0.47\textwidth]{Fig7_a1.pdf} \includegraphics[width=0.47\textwidth]{Fig7_a2.pdf} \includegraphics[width=0.47\textwidth]{Fig7_b1.pdf} \includegraphics[width=0.47\textwidth]{Fig7_b2.pdf} \includegraphics[width=0.47\textwidth]{Fig7_c1.pdf} \includegraphics[width=0.47\textwidth]{Fig7_c2.pdf} \includegraphics[width=0.47\textwidth]{Fig7_d1.pdf} \includegraphics[width=0.47\textwidth]{Fig7_d2.pdf} \caption{$\mathbf{k}$-resolved SEET (solid lines) and GW (dashed lines) spectral functions for NiO at $\Gamma$ (left column) and at $\frac{2}{3}$ distance between $\Gamma$ and $X$ (right column). Different impurities were chosen in each of the rows. These impurity choices correspond to the rows of Table~\ref{tab:impurities_NiO}, with first-to-fourth rows corresponding to impurities chosen in a-d rows of Table~\ref{tab:impurities_NiO}, respectively. } \label{Fig:NiOSEETvsGW} \end{figure*} ARPES obtains the $\mathbf{k}$-resolved spectral function of materials. Traditional band-structure simulations can then be used to attribute features in the spectral function to their atomic origin. In the case of moderately to strongly correlated materials, where band-structure methods may become unreliable, this attribution may break down, due to both broadening effects and shifts of the spectral functions caused by the presence of higher level correlations. GW is expected to remain reliable for stronger correlation strengths than standard band-structure calculations. Consequently, for NiO in Figs.~\ref{Fig:GammaVsArpes}~and~\ref{Fig:05GammaXVsArpes}, we attempt to assign an atomic orbital character to the features present in the GW spectrum. 
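For orientation, an orbitally resolved spectral function of the kind plotted in the following figures is conventionally computed as $A_i(\mathbf{k},\omega) = -\tfrac{1}{\pi}\,\mathrm{Im}\,G_{ii}(\mathbf{k},\omega+i\eta)$. A toy two-orbital sketch (a hypothetical $2\times 2$ block at a single $\mathbf{k}$ point, not our data) illustrates how the diagonal weight splits between hybridized bands:

```python
import numpy as np

# Toy illustration (hypothetical 2x2 k-point block, not the paper's data):
# orbitally resolved spectral function A_i(w) = -(1/pi) Im G_ii(w + i*eta).
eta, t = 0.05, 0.5                      # broadening and inter-orbital hopping
H = np.array([[-1.0, t], [t, 1.0]])     # two "orbitals" at energies -1 and +1
w = np.linspace(-3.0, 3.0, 1201)
A = np.empty((len(w), 2))
for k, wk in enumerate(w):
    G = np.linalg.inv((wk + 1j * eta) * np.eye(2) - H)
    A[k] = -G.diagonal().imag / np.pi
# Each orbital's spectral weight integrates to ~1 (up to Lorentzian tails)
weights = A.sum(axis=0) * (w[1] - w[0])
print(weights)
```

Each diagonal entry spreads over both quasiparticle peaks at $\pm\sqrt{1+t^2}$, which is the mixing of atomic character discussed in the text.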
Fig.~\ref{Fig:GammaVsArpes} presents the orbitally resolved spectral function in canonical orthogonal orbitals at the $\Gamma$ point while Fig.~\ref{Fig:05GammaXVsArpes} shows results in symmetric orthogonal orbitals for a point in the $\Delta$ direction at $\frac{2}{3}$ of the distance between $\Gamma$ and $X$. Note that in the canonical orthogonal basis, orbitals do not correspond to a single atomic state but to a linear combination of atomic states. However, in our case they are dominated (for the curves shown, on the level of 70-80\%) by a specific atomic state. We label the important orbitals near the Fermi energy by the dominant atomic orbital character. Fig.~\ref{Fig:GammaVsArpes} superimposes the experimental ARPES data from Ref.~\cite{NiOCoO_expt_Shen90}. Experimental data is shifted such that the highest occupied state lies at zero energy. As discussed earlier, our GW calculations suffer from the finite size effects identified in Fig.~\ref{Fig:FiniteSize}, estimated to be $0.5$~$-$~$0.7$~eV, introducing an additional relative shift. Nevertheless, even with this shift a clear identification of the main features with atomic orbitals is possible. We find that the dominant peak stems from the Ni $t_{2g}$ orbitals, whereas the states closest to the Fermi energy contain a mixture of both nickel and oxygen $p$-states. Fig.~\ref{Fig:05GammaXVsArpes} shows data for the point at $\frac{2}{3}$ of the distance along the $\Delta$ direction, between the $\Gamma$ and $X$ points, along with identifications of dominant atomic contributions. Ni$_1$ and Ni$_2$ denote the two antiferromagnetically ordered nickel contributions. Data is obtained in the symmetric orthogonal orbital basis, where we find that atomic orbital character can be attributed almost uniquely ($\sim$ 95\%) for each of the linear combinations present in this basis.
While ARPES shows a sequence of clearly distinct peaks, our results are smoother and only allow a general attribution of the dominant contribution in a broad energy window, which we indicate in the plot. We emphasize here that these GW results, when accounting for the systematic error due to finite size effects, are qualitatively correct for NiO. This means that, as a result of the embedding procedure, we only expect small improvements, and we predict that the SEET results should remain mostly unchanged when compared to GW. \begingroup \squeezetable \begin{table}[tbh] \begin{ruledtabular} \begin{tabular}{c|c|c|p{6cm}} Name & Imp & Orb & Description \\ \hline a & 2 & 2 & Ni$_1$ $e_g$; Ni$_2$ $e_g$ \\ b & 1 & 4 &Ni$_1$ $e_g$ + Ni$_2$ $e_g$ \\ c & 4 & 3 &Ni$_1$ $e_g$; Ni$_2$ $e_g$; Ni$_1$ $t_{2g}$; Ni$_2$ $t_{2g}$ \\ d & 6 & 3 &Ni$_1$ $e_g$; Ni$_2$ $e_g$; Ni$_1$ $t_{2g}$; Ni$_2$ $t_{2g}$; O$_1$ $p$; O$_2$ $p$ \\ \end{tabular} \end{ruledtabular} \caption{Choice of the active space for NiO. Imp denotes the number of distinct disjoint impurity problems. Orb stands for the number of impurity orbitals in the largest impurity problem.\label{tab:impurities_NiO}} \end{table} \endgroup \subsection{Effect of strong electron correlations in NiO} We now turn our attention to results from our embedding construction. The identification of the orbitals near the Fermi level in Figs.~\ref{Fig:GammaVsArpes}~and~\ref{Fig:05GammaXVsArpes} suggests a choice of active orbital set as Ni $e_g$, Ni $t_{2g}$ and O $p$ states. These orbitals will be used to construct impurity models in SEET. We perform the SEET embedding in symmetrical orthogonal orbitals, where {\bf i)} the attribution to atomic orbital character is straightforward and {\bf ii)} off-diagonal hybridization elements are 2-3 orders of magnitude smaller than the diagonal ones. Note that in SEET we do not use any Wannierization procedure, as is commonly done in LDA+DMFT or GW+EDMFT.
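If the symmetric orthogonalization used here is realized, as is standard, via the Löwdin construction $S^{-1/2}$ (our reading; the snippet below is a generic sketch with a random overlap matrix, not our actual basis), its key property is that the orthogonalized orbitals stay as close as possible to the original atomic orbitals, which is what preserves the atomic character exploited above:

```python
import numpy as np

# Löwdin symmetric orthogonalization: X = S^(-1/2) for an overlap matrix S.
# The transformed basis satisfies X^T S X = 1, and among all such
# orthogonalizations it is closest (least-squares) to the original orbitals.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
S = M @ M.T + 4.0 * np.eye(4)           # symmetric positive definite overlap
w, U = np.linalg.eigh(S)
X = U @ np.diag(w**-0.5) @ U.T          # S^(-1/2), itself symmetric
print(np.allclose(X.T @ S @ X, np.eye(4)))
```

The same spectral construction applies per $\mathbf{k}$ point, which is why the orbital configuration stays stable across the Brillouin zone.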
The ability to embed multiple impurities is crucial, as non-perturbative impurity solvers such as the ED solver used here scale exponentially in the number of impurity orbitals. Table~\ref{tab:impurities_NiO} shows four choices of embedded orbital subsets. Subset (a) consists of two disjoint impurities on each of the nickels, made out of two Ni $e_g$ orbitals. Subset (b) combines those two impurities into a single four-orbital impurity. Subset (c) builds four impurities consisting of two disjoint Ni $e_g$ and two additional disjoint Ni $t_{2g}$ orbitals (each with three impurity orbitals). Subset (d) supplements the four impurities of subset (c) with two additional disjoint three-orbital impurities of the oxygen $p$ orbitals. \subsubsection{Local DOS for NiO} Fig.~\ref{Fig:NiOLocal_SEETvsGW} shows the orbitally resolved local spectral function of NiO for the four impurity choices of Table~\ref{tab:impurities_NiO}. Shown are also the orbitally resolved GW results corresponding to Fig.~\ref{Fig:FiniteSize}, as well as experimental local spectral functions obtained with x-ray photoemission (XPS) and bremsstrahlung-isochromat-spectroscopy (BIS)~\cite{NiO_expt_Sawatzky84}. Note that the gap edge of XPS is shifted to zero energy and the relative height of XPS and BIS data is arbitrary. The experimental error present in this experiment~\cite{NiO_expt_Sawatzky84} is estimated as 0.6~eV and the resulting band gap, measured at half-maxima of both XPS and BIS peaks, is estimated to be 4.3~$\pm$~0.6~eV. \begin{figure*}[tbh] \includegraphics[width=0.47\textwidth]{Fig8_a.pdf} \includegraphics[width=0.47\textwidth]{Fig8_b.pdf} \includegraphics[width=0.47\textwidth]{Fig8_c.pdf} \includegraphics[width=0.47\textwidth]{Fig8_d.pdf} \caption{Orbitally resolved SEET local spectral function for MnO with impurity choice of Table~\ref{tab:impurities_MnO}. Panels in the plots correspond to impurity choices from Table~\ref{tab:impurities_MnO}. 
(Specifically, panel a) corresponds to impurity choice a), etc.) Dash-dotted lines: GW, solid lines: SEET; Dashed lines: experimental data (see Ref.~\cite{MnO_expt_Elp91}).} \label{Fig:MnOLocal_SEETvsGW} \end{figure*} Panel a) shows results from two disjoint two-orbital impurities that only consist of the nickel $e_g$ states. Substantial shifts that arise due to embedding are evident for $e_g$ states. All other orbitals are adjusted only via the Dyson equation~\ref{Eqn:Dyson}, and these changes are small. As discussed previously, the GW results and consequently the SEET results are biased by finite size effects. Introducing a correction due to finite size effects will shrink the current GW gap (which is around $5.6$~eV for the $6$~$\times$~$6$~$\times$~$6$ lattice when measured at half peak height) by $0.5$~$-$~$0.7$~eV, resulting in a GW band gap between $5$~$-$~$5.5$~eV. SEET widens this gap to $6.5$~$-$~$6.7$~eV, resulting in a band gap of $5.5$~$-$~$6.0$~eV after accounting for finite size effects. Results for panel b) are obtained with a single four-orbital impurity that contains the same active orbitals as subset (a), now combined within a single impurity. Plots from panel b) are essentially indistinguishable from panel a), indicating that cross-correlations at the GW level are sufficient for describing the coupling between those two disjoint impurities. In panel c), where additional $t_{2g}$ states are considered, a small change of the magnitude but not of the overall peak position is visible for Ni $t_{2g}$, while the $e_g$ orbitals are comparable to the ones in panels a) and b). Adding additional correlations on the oxygen $p$ orbitals (in panel d)) substantially shifts all states, including the Ni $t_{2g}$ and $e_g$ states, causing a build-up of the shoulder density for frequencies between -5~and~-10~eV. Here the contributions from Ni $t_{2g}$ together with oxygen $p$ start to be responsible for this buildup.
Note that these results allow us to determine the atomic character of the peaks, and the size of the band gap reasonably matches the experimental results when accounting for finite size effects, experimental uncertainties of 0.6~eV, as well as possible inaccuracies stemming from using a Gaussian basis set. \subsubsection{Momentum resolved DOS for NiO} Fig.~\ref{Fig:NiOSEETvsGW} shows $\mathbf{k}$-resolved spectral functions at $\Gamma$ and at $\frac{2}{3}$ of the distance between $\Gamma$ and $X$ for the impurity choices of Table~\ref{tab:impurities_NiO}. These calculations and plots were performed in the symmetrical orthogonal basis. The spectral function at the $\Gamma$ point shows similar behavior to Fig.~\ref{Fig:NiOLocal_SEETvsGW}. However, a notable difference is in the Ni $4s$ orbital. While present near the gap edge at the $\Gamma$ point, this orbital rapidly moves to higher energies away from the $\Gamma$ point, thus contributing only little to the local spectral function. Data between $\Gamma$ and $X$ look more complicated, as the nickel $t_{2g}$ and oxygen $p$ states split away from the high symmetry $\Gamma$ point. The assignment of orbital character that arises due to SEET is largely consistent with the angle resolved photoemission experiment presented in Fig. 2 of Ref.~\onlinecite{NiOCoO_expt_Shen90}. Only the features at low energies (lower than -8~eV) cannot be assigned without doubt, most likely due to the deficiencies of analytical continuation and artificial broadening of existing features at these energy ranges. \subsubsection{Local magnetic moment in NiO} Finally, we briefly discuss the staggered magnetization of NiO. A Mulliken analysis (see e.g.~\cite{Mulliken1955}) yields the values of Table~\ref{tab:mu_NiO}. It is evident that the most important contribution to magnetism comes from the Ni $e_g$ states, which we treat non-perturbatively in all four choices of active space.
Changes between different impurities (corresponding to the influence of strong correlations on $t_{2g}$ or oxygen $p$ orbitals) are much smaller. \begin{table}[tbh] \begin{tabular}{l|ccccccc} \hline \hline &\multirow{2}{*}{expt}&\multirow{2}{*}{HF}&\multirow{2}{*}{GW}& \multicolumn{4}{c}{GW+SEET} \\ \cline{5-8} &&&& a & b & c & d\\ \hline NiO & 1.77, 1.90(6) & 1.816 &1.701&1.750&1.751&1.752&1.754\\ \hline \hline \end{tabular} \caption{Local magnetic moment of Ni from Mulliken analysis. Impurity choices a,b,c, and d correspond to the rows of Table~\ref{tab:impurities_NiO}. The experimental data is obtained from Refs.~\cite{doi:10.1063/1.1668855} and \cite{mu_expt_Cheetham83}}\label{tab:mu_NiO} \end{table} \begin{figure*}[htp] \includegraphics[width=0.47\textwidth]{Fig9_a1.pdf} \includegraphics[width=0.47\textwidth]{Fig9_a2.pdf} \includegraphics[width=0.47\textwidth]{Fig9_b1.pdf} \includegraphics[width=0.47\textwidth]{Fig9_b2.pdf} \includegraphics[width=0.47\textwidth]{Fig9_c1.pdf} \includegraphics[width=0.47\textwidth]{Fig9_c2.pdf} \includegraphics[width=0.47\textwidth]{Fig9_d1.pdf} \includegraphics[width=0.47\textwidth]{Fig9_d2.pdf} \caption{$\mathbf{k}$-resolved SEET (solid lines) and GW (dashed lines) spectral functions for MnO at $\Gamma$ (left column) and at $\frac{2}{3}$ distance between $\Gamma$ and $X$ (right column) for the impurity choices of Table~\ref{tab:impurities_MnO}. Different impurities were chosen in each of the rows. 
These impurity choices correspond to the rows of Table~\ref{tab:impurities_MnO}, with first-to-fourth rows corresponding to impurities chosen in a-d rows of Table~\ref{tab:impurities_MnO}, respectively.} \label{Fig:MnOSEETvsGW} \end{figure*} \subsection{Effect of strong correlations in MnO} \subsubsection{Local DOS for MnO} For MnO solid, similarly to NiO, we solve the problem in symmetrical orthogonal orbitals and, after identification of the relevant orbitals, choose a set of disjoint impurities for higher order treatment within SEET. Active space choices a,b and c (see Table~\ref{tab:impurities_MnO}) correspond to a, c and d in Table~\ref{tab:impurities_NiO}. For MnO, active space d combines some of the Mn $e_g$ orbitals with neighboring oxygen $p$ states. \begingroup \squeezetable \begin{table}[tbh] \begin{ruledtabular} \begin{tabular}{c|c|c|p{7.2cm}} Name & Imp & Orb & Description \\ \hline a & 2 & 2 & Mn$_1$ $e_g$; Mn$_2$ $e_g$ \\ b & 4 & 3 & Mn$_1$ $e_g$; Mn$_2$ $e_g$; Mn$_1$ $t_{2g}$; Mn$_2$ $t_{2g}$ \\ c & 6 & 3 & Mn$_1$ $e_g$; Mn$_2$ $e_g$; Mn$_1$ $t_{2g}$; Mn$_2$ $t_{2g}$; O$_1$ $p$; O$_2$ $p$ \\ \multirow{2}{*}{d} & \multirow{2}{*}{6} & \multirow{2}{*}{3} & Mn$_1$$d_{z^2}$ + O$_1$ $p_z$; Mn$_1$$d_{x^2-y^2}$ + O$_1$ $p_x,p_y$; Mn$_1$ $t_{2g}$\\ & & & Mn$_2$$d_{z^2}$ + O$_2$ $p_z$; Mn$_2$$d_{x^2-y^2}$ + O$_2$ $p_x,p_y$; Mn$_2$ $t_{2g}$\\ \end{tabular} \end{ruledtabular} \caption{Choice of the active space for MnO. Imp denotes the number of distinct disjoint impurity problems. Orb stands for the number of impurity orbitals in the largest impurity problem.}\label{tab:impurities_MnO} \end{table} \endgroup We emphasize that while an a-priori identification of the best orbital combination may in principle be possible, here we make our orbital choice based on physical/chemical intuition and the available data from GW calculation that precedes SEET. 
Fig.~\ref{Fig:MnOLocal_SEETvsGW} shows the orbitally resolved local spectral function of MnO for the four impurity choices of Table~\ref{tab:impurities_MnO}. Panel a) of Fig.~\ref{Fig:MnOLocal_SEETvsGW} shows that, similarly to the case of NiO, there is a significant adjustment of the Mn $e_g$ orbitals when embedding is performed. Adding only Mn $t_{2g}$, as illustrated in panel b), keeps the $e_g$ states unchanged and introduces additional renormalization in the $t_{2g}$ states, as expected. However, adding oxygen $p$ orbitals (as shown in panel c) has a large effect and leads to an adjustment of all the bands, both $e_g$ and $t_{2g}$. Embedding active orbitals from subset d (in panel d) with combined Mn and O orbitals has little effect when compared to panel c. In Ref.~\cite{MnO_expt_Elp91}, to define the band gap from experimental XPS and BIS data, the top of the valence band is taken at 50\% of the intensity of the shoulder and the end of the gap is defined at 10\% intensity of the rising Mn 3d structure. This yields the experimental band gap of 3.9~$\pm$~0.4~eV. The GW band gap, evaluated with the same method as in experiment, is approximately 5.0~eV. Note that, as we discussed previously, the GW result itself displays finite size effects that, on a $6$~$\times$~$6$~$\times$~$6$ lattice, we estimate to be around $0.5$~$-$~$0.7$~eV. After accounting for these effects, we estimate the GW gap to be between $4.3$~$-$~$4.5$~eV. SEET inherits these finite size effects and yields a band gap of $4.8$~$-$~$5.0$~eV after accounting for them. \subsubsection{Momentum resolved DOS for MnO} Fig.~\ref{Fig:MnOSEETvsGW} shows $\mathbf{k}$-resolved spectral functions at $\Gamma$ (left column) and at $\frac{2}{3}$ of the distance between $\Gamma$ and $X$ (right column) for the impurity choices of Table~\ref{tab:impurities_MnO}.
Note that, similarly to our previous discussion concerning the local DOS of MnO, inclusion of both $e_g$ and $t_{2g}$ orbitals into the embedding subspace (shown in the second row) leads to large adjustments of these bands. However, adding oxygen $p$, with a large contribution near the Fermi energy, has an even larger effect, resulting in substantially renormalized bands as shown in the third row. This is different from the case of NiO, where the inclusion of oxygen orbitals led to much smaller changes. As observed previously for the local DOS, embedding active orbitals from subset d (4th row) with combined Mn and O orbitals has little effect when compared to subset c (presented in the 3rd row). \subsubsection{Local magnetic moment in MnO} \begin{table}[tbh] \begin{tabular}{l|ccccccc} \hline \hline &\multirow{2}{*}{expt}&\multirow{2}{*}{HF}&\multirow{2}{*}{GW}& \multicolumn{4}{c}{GW+SEET} \\ \cline{5-8} &&&& a & b & c & d\\ \hline MnO&4.79,4.58(3)&4.937&4.870&4.887&4.897&4.897&4.897\\ \hline \hline \end{tabular} \caption{Local magnetic moment of Mn from Mulliken analysis. Impurity choices a,b,c, and d correspond to the rows of Table~\ref{tab:impurities_MnO}. The experimental data is obtained from Refs.~\cite{doi:10.1063/1.1668855}~and~\cite{mu_expt_Cheetham83}.}\label{tab:mu_MnO} \end{table} Staggered magnetic moments from a Mulliken analysis are shown in Table~\ref{tab:mu_MnO}. In contrast to NiO, where only changes due to the inclusion of $e_g$ were observed, the major correction to GW comes here from both Mn $e_g$ and Mn $t_{2g}$ orbitals to an equal degree. This effect can be explained by the fact that in MnO both $t_{2g}$ and $e_g$ states are partially occupied, whereas in NiO only $e_g$ states contribute to the local magnetization since the $t_{2g}$ ones are fully occupied. Correlations from oxygen do not influence the population on Mn sufficiently to have an influence on the magnetization.
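The Mulliken analysis behind these local moments reduces, for a spin-resolved density matrix $D^\sigma$ and overlap matrix $S$, to populations $n^\sigma_i = (D^\sigma S)_{ii}$; a schematic with invented numbers (not NiO or MnO data):

```python
import numpy as np

# Schematic Mulliken analysis: with density matrices D_up, D_dn and overlap
# S, the local moment is mu = sum_i [(D_up S)_ii - (D_dn S)_ii].
# Toy 2-orbital "atom"; all numbers are illustrative.
S = np.array([[1.0, 0.2], [0.2, 1.0]])
D_up = np.array([[0.9, 0.1], [0.1, 0.8]])
D_dn = np.array([[0.2, 0.0], [0.0, 0.1]])
pops_up = np.diag(D_up @ S)
pops_dn = np.diag(D_dn @ S)
mu = pops_up.sum() - pops_dn.sum()
print(f"local moment: {mu:.3f}")   # -> local moment: 1.440
```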
\section{Conclusions}\label{sec:conclusions} We have presented results from our implementation of the self-energy embedding theory for realistic materials. For both solid NiO and MnO, our results agreed with experimental data reasonably well and were used to assign orbital character to the local DOS and ARPES spectra. SEET is thermodynamically consistent and conserving and, after the selection of active orbitals that is done on the basis of a previous weakly correlated calculation, it does not contain any ad-hoc choices of parameters such as a choice of density functional, downfolding scheme, double counting correction, or ad-hoc truncation and re-adjustment of screened interactions. We have shown that in SEET, we do not need to rely on quantum impurity constructions with more than a few orbitals. While treatment of large impurity problems is possible using modern quantum chemistry impurity solvers such as zero temperature Coupled Cluster solvers, the results of such a treatment may not be correct when correlations within the impurity are strong. In SEET, the size of the impurity is moderately small, since much of the weakly correlated physics including screening at the level of GW is absorbed properly when the embedding condition from Eq.~\ref{eq:seet_embedding_condition} is defined. At present, when running SEET, some of the methodological aspects still require physical insight (such as the choice of active spaces), or suffer from technical limitations (such as the analytic continuation step), or are otherwise computationally expensive (such as the simulation with larger Gaussian bases or more momentum points); however, we believe that despite these short term technical limitations SEET is a practicable embedding theory that can be applied to interesting correlated materials. \acknowledgments{ We thank Runxue Yu for performing density functional calculations for our systems. 
SI and EG are supported by the Simons foundation via the Simons Collaboration on the Many-Electron Problem, DZ and CY by the US Department of Energy (DOE) grant No. ER16391. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. } \bibliographystyle{apsrev4-2}
\section*{Introduction} It is well known that the spectrum of the Laplace-Beltrami operator $\Delta_Y$ on a (closed, viz., compact without boundary) Riemannian manifold $Y$ does not necessarily capture its isometry type (cf.\ Milnor \cite{Milnor} and later work); actually, it does not even determine the homeomorphism type of the manifold (\cite{Ikeda}, \cite{Vigneras}). Knowledge of the spectrum $\Lambda_Y = \{ \lambda \}$ (with multiplicities) is equivalent to knowledge of the zeta function $$ \zeta_Y(s) := \mathrm{tr}(\Delta^{-s}) = \sum_{0 \neq \lambda \in \Lambda_Y} \lambda^{-s}. $$ In this paper, we study what happens if one considers this zeta function as only one member of a family of zeta functions / Dirichlet series associated with the algebra of functions on $Y$. Namely, for a function $a_0 \in C^{\infty}(Y)$, let $$\zeta_{Y,a_0}(s):=\mathrm{tr}(a_0\Delta_Y^{-s})$$ denote the zeta function associated to $a_0$ and $\Delta_Y$, where the trace is taken in $L^2(Y,d\mu_Y)$. It is a generalized Dirichlet series in $\lambda^{-s}$ for $\lambda\in \Lambda_Y$, and it can be extended to a meromorphic function on the complex plane. We will actually also need the following higher order version of this zeta function, which arises naturally for example in noncommutative geometry. For a function $a_1 \in C^{\infty}(Y)$, we define $${\tilde{\zeta}}_{Y,a_1}(s):=\mathrm{tr}(a_1[\Delta_Y,a_1]\Delta_Y^{-s}).$$ These zeta functions are diffeomorphism invariants by construction (cf.\ Lemma \ref{diffinv} for an exact statement). We have the following relation between the zeta functions: $${\tilde{\zeta}}_{Y,a_1}(s)=\zeta_{Y,g_Y(da_1,da_1)}(s)$$ (thus, we could take the right hand side as a definition, but in that way we would obscure the possibility of generalizing our constructions to the case of noncommutative Riemannian geometries).
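As a concrete sanity check of these definitions (our example, not from the text): on the circle of radius $R$ the nonzero Laplace eigenvalues are $n^2/R^2$ with $n \in \mathbf{Z}\setminus\{0\}$, so for $a_0 = 1$ one gets $\zeta_{S^1_R}(s) = 2R^{2s}\zeta(2s)$, which a truncated sum reproduces numerically:

```python
import numpy as np
from scipy.special import zeta

# Circle of radius R: eigenvalues n^2/R^2 for n != 0, each value counted
# twice (n and -n), so zeta_{S^1_R}(s) = 2 R^(2s) zeta(2s). Check at s = 2.
R, s = 1.5, 2.0
n = np.arange(1, 200_000)
partial = 2.0 * np.sum((n**2 / R**2) ** (-s))   # truncated Dirichlet series
closed = 2.0 * R ** (2 * s) * zeta(2 * s)
print(partial, closed)
```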
In the first part of this paper, we study the meaning of equality of these families of zeta functions (rather than just the spectrum) under pullback by a map: \begin{introtheorem} \label{Z0thm} Let $\varphi \, : \, X \rightarrow Y$ denote a smooth diffeomorphism between closed connected smooth Riemannian manifolds with smooth metric. The following are equivalent: \begin{enumerate} \item[\textup{(i)}] We have that \begin{enumerate} \item[\textup{(a)}] $ \zeta_{Y,a_0} = \zeta_{X,\varphi^*(a_0)}$ for all $a_0 \in C^{\infty}(Y)$, and \item[\textup{(b)}] $ {\tilde{\zeta}}_{Y,a_1} = {\tilde{\zeta}}_{X,\varphi^*(a_1)}$ for all $a_1 \in C^{\infty}(Y)$. \end{enumerate} \medskip \item[\textup{(ii)}] The map $\varphi$ is an isometry. \end{enumerate} \end{introtheorem} \noindent To shorten notation, if a map $\varphi \, : \, (X,g_X) \rightarrow (Y,g_Y)$ is fixed, we set $$ a^* := \varphi^*(a) $$ for $a \in C^{\infty}(Y)$, unless confusion can arise. \begin{remarks*} \mbox{ } - Since we are dealing with usual Dirichlet series (the spectrum is an increasing sequence of positive real numbers with finite multiplicities), the condition $\zeta_{Y,a_0}=\zeta_{X,a_0^*}$ is satisfied when $\zeta_{Y,a_0}(k)=\zeta_{X,a_0^*}(k) $ for all sufficiently large integers $k$, and similarly for the higher order zeta functions (cf.\ \cite{Serre}, Section 2.2). - The dependence of the zeta functions on $a_i$ is $\mathbf{C}$-linear. Thus, we may for example restrict to functions of unit norm (supremum over the manifold). - One can replace the condition ``$a \in C^{\infty}(Y)$'' in Theorem \ref{Z0thm} by ``$a \in A$'' for $A$ some dense subset of $C^{\infty}(Y)$. Since we assume the manifolds compact, we can pick such a countable set. We do not know whether it is possible to pick a \emph{finite} set $A$, depending only on some topological characteristics of the manifold. 
- Combining the above three remarks, we see that we have actually found a \emph{countable} sequence of spectrally defined, diffeomorphism invariant real numbers that characterize the manifold up to isometry \emph{in a fixed $C^{\infty}$-diffeomorphism class}: the values $\zeta_{X,a_0}(k)$ and ${\tilde{\zeta}}_{X,a_1}(k)$ for $a_i$ in a countable basis for $C^{\infty}(Y)$, and $k$ running through all sufficiently large integers. Notice that this invariant presupposes knowledge of the manifold: one needs to be able to ``label'' by the smooth functions, and by integers $k$. \end{remarks*} The proof of Theorem \ref{Z0thm} is rather formal (essentially, a suitable residue of the two-variable zeta function contains the metric tensor). By contrast, the next theorem, which studies what happens if we only have equality of the one-variable zeta functions, actually uses some analysis of PDE's and structure of nodal sets: \begin{introtheorem} \label{extraprop} Let $\varphi \, : \, X \rightarrow Y$ denote a smooth diffeomorphism between closed Riemannian manifolds with smooth metric. Then \begin{itemize} \item In the previous theorem, it suffices in condition \textup{(b)} to have equality of \emph{one} coefficient of the Dirichlet series (for all $a_1$) for conditions \textup{(a)} and \textup{(b)} to be equivalent to \textup{(ii)}. \item If the spectrum of $X$ or $Y$ is simple, then condition \textup{(a)} alone is equivalent to \textup{(ii)} in the previous theorem. \end{itemize} \end{introtheorem} \begin{remarks*} \mbox{ } - We can use the method of proof to show for example that if $g$ and $g'$ are two smooth Riemannian structures on a closed connected manifold with simple Laplace spectrum, then an equality of heat kernels ``on the diagonal'' $K_g(t,x,x)=K_{g'}(t,x,x)$ for sufficiently small $t>0$ implies that $K_g(t,x,y)=K_{g'}(t,x,y)$ for all $t>0$ (and hence $g=g'$), cf.\ Corollary \ref{verysmooth}. - In section \ref{tori} we compute these zeta functions for flat tori.
In this case, condition (a) is equivalent to isospectrality and the fact that $\varphi$ has trivial jacobian. \end{remarks*} We can rephrase Theorem \ref{Z0thm} in terms of the \emph{length of $\varphi$}, a concept that we now introduce. The basic idea is to measure in some way the distance between the zeta functions on one manifold and the pullbacks, along the map, of the zeta functions on the other manifold. \begin{definition*} Let $\varphi \, : \, X \rightarrow Y$ denote a smooth diffeomorphism of closed Riemannian manifolds of dimension $d$. Define $$ d_1(\varphi,a_0,a_1):= \mathop{\sup_{d \leqslant s\leqslant d+1}} \! \! \! \max \, \{ | \log \left| \frac{\zeta_{X,a^*_0}(s)}{\zeta_{Y,a_0}(s)} \right| |, | \log \left| \frac{{\tilde{\zeta}}_{X,a^*_1}(s)}{{\tilde{\zeta}}_{Y,a_1}(s)} \right| | \}. $$ The \emph{length of $\varphi$} is defined by $$ \ell(\varphi):= \mathop{\sup_{a_0 \in C^{\infty}(Y,\mathbf{R}_{\geq 0})-\{0\}}}_{a_1\in C^{\infty}(Y)-\mathbf{R}} \frac{d_1(\varphi,a_0,a_1)}{1+d_1(\varphi,a_0,a_1)}.$$ \end{definition*} We discuss this notion in a more abstract framework of general \emph{length categories}, a concept we believe to be useful in non-abelian categories such as closed Riemannian manifolds up to isometry, cf.\ Section \ref{lengthcat}. For now, we just state the main properties of $\ell$: \begin{introproposition} The function $\ell$ satisfies \begin{enumerate} \item[\textup{(i)}] For all smooth diffeomorphisms of closed Riemannian manifolds ${X \stackrel{\varphi}{\rightarrow} Y}, {Y\stackrel{\psi}{\rightarrow} Z}$, we have $$\ell(\psi \circ \varphi) \leqslant \ell(\varphi) + \ell (\psi);$$ \item[\textup{(ii)}] $\ell(\varphi)=0$ if and only if $\varphi$ is an isometry. \end{enumerate} \end{introproposition} We compute some examples, such as the length of the natural map between circles of different radii (see Figure \ref{fig1}); and we show that the length of a linear map between isospectral tori is bounded in terms of its spectral norm.
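As a numerical illustration of this definition (our computation, not from the text): for the scaling map between circles of radii $R_X$ and $R_Y$ with $a_0 = 1$, the eigenvalues $n^2/R^2$ give $\zeta_X(s)/\zeta_Y(s) = (R_X/R_Y)^{2s}$, and since $d = 1$ the supremum runs over $s \in [1,2]$; the $a_0$ term alone already yields a lower bound on $\ell(\varphi)$:

```python
import numpy as np

# Scaling map between circles of radii R_X and R_Y with a_0 = 1:
# zeta_X(s)/zeta_Y(s) = (R_X/R_Y)^(2s); dimension d = 1, so we take the
# sup over s in [1, 2]. Since x/(1+x) is increasing, d1/(1+d1) is a
# lower bound on the length l(phi) of the map.
RX, RY = 2.0, 1.0
s = np.linspace(1.0, 2.0, 1001)
d1 = np.max(np.abs(np.log((RX / RY) ** (2 * s))))   # attained at s = 2
bound = d1 / (1.0 + d1)
print(d1, bound)   # d1 = 4 log 2
```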
We then show that the notion of length leads to a meaningful concept of \emph{distance between Riemannian manifolds $X,Y$} as the infimum of the lengths of all possible maps between $X$ and $Y$. \begin{remarks*} \mbox{ } - To consider not a single zeta function, but rather a family of zeta functions over the algebra of functions is natural in noncommutative geometry (\cite{ConnesLMP}), where the unit of the algebra of functions does not have to play a distinguished role (the underlying $C^*$-algebra could even be non-unital): our zeta functions bear some resemblance to the construction of Hochschild homology, but with genuine traces instead of residual traces. We list some other manifestations of the philosophy behind our main theorem. In \cite{CM}, it is shown how to associate a spectral triple to a compact hyperbolic Riemann surface (by considering the action of a uniformizing Schottky group on the (fractal) boundary of the Poincar\'e disk), such that the following property holds: if a map between two Riemann surfaces induces equal families of zeta functions of the associated spectral triples, then the map is conformal or anti-conformal. This construction was adapted to the case of finite \emph{graphs} in \cite{dJ}. Finally, for \emph{number fields}, see \cite{CM2}. - One may now wonder whether a similar theory persists in the case of spin manifolds with the Dirac operator replacing the Laplace operator, and whether it applies to noncommutative ``Riemannian geometries'', a.k.a.\ spectral triples (finitely summable). \end{remarks*} \part{SPECTRAL DIRICHLET SERIES} \section{Notations and preliminaries} \begin{notations} To set up notation, suppose $(X,g_X)$ is a closed (i.e., compact without boundary) smooth manifold of dimension $>0$, with smooth Riemannian metric. Denote by $\mu_X$ the induced measure on $X$, and let $\Delta_X$ denote the Laplace-Beltrami operator acting on $L^2(X)=L^2(X,\mu_X)$, with domain the smooth functions.
Write $\Lambda_X$ for its spectrum with multiplicities. Suppose we have picked an orthonormal basis of smooth real eigenfunctions for the Laplacian on a Riemannian manifold $X$. We will use various notations depending on the context. We denote an eigenfunction in the chosen basis, with eigenvalue $\lambda$, by $\Psi_{X,\lambda}$ or $\Psi_\lambda$ if the manifold is fixed. We also write $$\Psi_X \vdash \lambda$$ if $\Psi_X$ is an eigenfunction on $X$ in our chosen basis that belongs to the eigenvalue $\lambda$. Let $C^{\infty}(X)$ denote the set of smooth real-valued functions on $X$ (most of the time, one may also use complex valued functions --- this should be clear from the context). Define the zeta-function parametrized by $a_0 \in C^{\infty}(X)$ as \bea \zeta_{X,g_X;a_0}(s) = \zeta_{X,a_0}(s):=\mathrm{tr}(a_0 \Delta_X^{-s}) \eea where the complex exponent is taken in the sense of spectral theory (see Formula (\ref{zetaexpansion})); and the double zeta function parametrized by $a_1, a_2 \in C^{\infty}(X)$ as \bea \zeta_{X,g_X;a_1,a_2}(s)=\zeta_{X,a_1,a_2}(s)=\mathrm{tr}(a_1 [ \Delta_X,a_2] \Delta_X^{-s}). \eea We will mainly be concerned with the diagonal version of this two-variable zeta function: \bea {\tilde{\zeta}}_{X,g_X;a_1}(s):=\zeta_{X,g_X;a_1,a_1}(s)=\mathrm{tr}(a_1 [ \Delta_X,a_1] \Delta_X^{-s}). \eea Finally, let $$K_{X,g}(t,x,y)=K_X(t,x,y)$$ denote the heat kernel of $X$. Sometimes we will write $K_g$ if we are on a fixed manifold. Otherwise, our notation will mostly suppress the metric $g$. We also make the convention to write in the usual way $\zeta_X=\zeta_{X,1_X}.$ \end{notations} \begin{se} By expanding in the given orthonormal basis of real eigenfunctions, we get \bea \label{zetaexpansion} \zeta_{Y,a_0}(s) = \sum_{\Psi_Y} \langle \Psi_Y|a_0 \Delta_Y^{-s} |\Psi_Y \rangle = \sum_{\lambda \in \Lambda_Y-\{0\}} \lambda^{-s} \sum_{\Psi \vdash \lambda} \int_Y a_0 \Psi^2 d\mu_Y.
\eea \end{se} In the ``commutative'' case considered here, one can express the two-variable zeta function in terms of the one-variable version, as follows: \begin{lemma} \label{2to1} $\zeta_{Y,a_1,a_2}(s)=\zeta_{Y,g_Y(da_1,da_2)}(s).$ \end{lemma} \begin{proof} We expand in a chosen basis of eigenfunctions: \begin{eqnarray} \label{goodold} \mathrm{tr}(a_1[\Delta_Y,a_2]\Delta_Y^{-s}) &=& \sum_{\Psi_Y} \langle \Psi_Y | a_1[\Delta_Y,a_2]\Delta_Y^{-s} | \Psi_Y \rangle \\ &=& \sum_{\lambda \neq 0} \lambda^{-s} \sum_{\Psi_Y \vdash \lambda} \int_Y \left( \Psi_Y a_1 \Delta_Y(a_2 \Psi_Y) - a_1 a_2 \lambda(\Psi_Y)^2 \right) \, d\mu_Y, \nonumber \end{eqnarray} and use the product rule for the Laplacian to find \begin{eqnarray} \label{abcd} & & \mathrm{tr}(a_1[\Delta_Y,a_2]\Delta_Y^{-s}) \\ & & =\sum_{\lambda \neq 0} \lambda^{-s} \sum_{\Psi_Y \vdash \lambda} \int_Y \left( \Psi_Y a_1 \Delta_Y(a_2) \Psi_Y - 2 a_1 g_Y(da_2,d\Psi_Y) \Psi_Y \right)\, d\mu_Y.\nonumber \end{eqnarray} Now apply the divergence theorem to simplify the first summand in the integral: \bea \int_Y \Psi^2_Y a_1 \Delta_Y(a_2) \, d\mu_Y &=& \int_Y g_Y(d(a_1 \Psi^2_Y),da_2)\, d\mu_Y \nonumber \\ &=& \int_Y \Psi_Y^2 g_Y(da_1,da_2)\, d\mu_Y + 2 \int_Y a_1 \Psi_Y g_Y(d\Psi_Y,da_2)\, d\mu_Y. \nonumber \eea So we finally get $$ \mathrm{tr}(a_1[\Delta_Y,a_2]\Delta_Y^{-s}) = \sum_{\lambda \neq 0} \lambda^{-s} \sum_{\Psi_Y \vdash \lambda} \int_Y \Psi_Y^2 g_Y(da_1,da_2)\, d\mu_Y = \mathrm{tr}(g_Y(da_1,da_2) \Delta_Y^{-s}). $$ \end{proof} \begin{lemma}\label{higs} The series $\zeta_{X,a_0}$ and $\zeta_{X,a_1,a_2}$ converge for $\Re(s)>\frac{\dim(X)}{2}$ and can be extended to a meromorphic function on $\mathbf{C}$ with at most simple poles at $\frac12(\dim(X)-\mathbf{Z}_{\geqslant 0})$.
\end{lemma} \begin{proof} See Higson \cite{Higsonzeta}, Thm.\ 2.1 for a more general statement that for a smooth linear partial differential operator $D$ of order $q$ on $X$, $\mathrm{tr}(D \Delta^{-s})$ has at most simple poles at $\frac12(\dim(X)+q-\mathbf{Z}_{\geqslant 0})$. For $\zeta_{X,a_0}$, we have $q=0$ and the statement follows; for $\zeta_{X,a_1,a_2}$, we have $q=1$, but from the previous lemma it follows that there is no pole at $\frac12(\dim(X)+1)$. \end{proof} \begin{lemma} \label{diffinv} The zeta-functions $\zeta_{X,a_0}$ and $\zeta_{X,a_1,a_2}$ are diffeomorphism invariants, in the sense that if $\varphi \, : X \rightarrow X$ is a smooth diffeomorphism, then $$ \zeta_{X,g_X,a_0} = \zeta_{X,\varphi^*(g_X),\varphi^*(a_0)}, $$ and $$ \zeta_{X,g_X,a_1,a_2} = \zeta_{X,\varphi^*(g_X),\varphi^*(a_1), \varphi^*(a_2)}. $$ \end{lemma} \begin{proof} The map $\varphi$ is a Riemannian isometry $(X,\varphi^*(g_X)) \rightarrow (X,g_X)$ and hence $$\varphi^{*} \Delta_{X,g_X} = \Delta_{X,g_X^{*}} \varphi^{*}$$ and $\varphi$ preserves integrals, i.e., $$\int f d\mu_{g_X} = \int f^{*} d\mu_{g_X^{*}}$$ (cf.\ also Lemma \ref{Watson}). Hence $\varphi^{*}$ sends normalized eigenfunctions to normalized eigenfunctions with the same eigenvalue, so that we have: \beastar \zeta_{X,g_{X},a_{0}}(s) & = & \sum_{\lambda \neq 0} \lambda^{-s} \sum_{\Psi \vdash \lambda} \langle \Psi | a_{0} | \Psi \rangle_{g_{X}} \\ & = & \sum_{\lambda \neq 0} \lambda^{-s} \sum_{\Psi \vdash \lambda} \langle \Psi^{*} | a_{0}^{*} | \Psi^{*} \rangle_{g^{*}_{X}} \\ & = & \zeta_{X,g^{*}_{X},a_0^*}(s). \eeastar For the 2-variable version, note that $g(da_{1},da_{2})^{*} =g^{*}(da_{1}^{*},da_{2}^{*})$ (see \cite{Watson}) and hence the invariance follows from that of the one-variable version by Lemma \ref{2to1}.
\end{proof} \begin{remarks} \mbox{} - Observe that in conditions (a) and (b) in the main theorem, we only pull back the functions $a_i$, not the Riemannian structure with corresponding Laplace operator, so the identities in (a) and (b) are in general non-void. For diffeomorphism invariance in Lemma \ref{diffinv}, however, we pull back all structure, including the Laplace operator. - The spectrum (viz., $\zeta_X(s)$) is an \emph{incomplete} invariant of a Riemannian manifold: this is the problem of isospectrality. Connes (\cite{ConnesInv}) described a complete diffeomorphism invariant of a Riemannian manifold by adding to the spectrum the ``relative spectrum'' (viz., the relative position of two von Neumann algebras in Hilbert space). In another direction, B\'erard, Besson and Gallot (\cite{BBG}) gave a faithful embedding of Riemannian manifolds into $\ell^2(\mathbf{Z})$, but by ``wave functions'', which are not diffeomorphism invariant. The family of zeta functions introduced here is some kind of diffeomorphism invariant when the $C^{\infty}$-diffeomorphism type of the manifold is fixed (viz., the algebra of functions $C^{\infty}(X)$ is given): these algebras of functions are used as ``labels'' for the zeta functions. We do not know whether the \emph{sets} of functions $\zeta_{X,a_0}, {\tilde{\zeta}}_{X,a_1}$ (without an explicit labeling) determine the isometry type of the manifold. - Using eigenvalues (viz., $\zeta_{X}(s)$) as dynamical variables in gravity was brought up by Landi and Rovelli (\cite{LR1}, \cite{LR2}). It would be interesting to adapt their theory by using \emph{all} zeta functions. \end{remarks} \section{A residue computation---Proof of Theorem \ref{Z0thm}} \label{mmm} The fact that (ii) implies (i) is easy, using the following lemma (see \cite{Watson}): \begin{lemma} \label{Watson} Suppose that $\varphi : X \rightarrow Y$ is a smooth diffeomorphism of closed Riemannian manifolds.
Let $$U=\varphi^* \; : \; L^2(Y) \rightarrow L^2(X)$$ denote the induced pullback map. Then $\varphi$ is a Riemannian isometry if and only if $U$ is a unitary operator that intertwines the Laplace operators on smooth functions: $\Delta_XU =U \Delta_Y$. \qed \end{lemma} \begin{remark} If we don't assume that $U$ arises as actual pullback from a map, then the existence of such a $U$ merely implies that $X$ and $Y$ are isospectral, cf.\ also the discussion in \cite{ZelditchCFO}. \end{remark} \begin{se}[Proof of (ii) $\Rightarrow$ (i) in Theorem \ref{Z0thm}] Pull-back by $\varphi$ induces a unitary transformation $U$ between $L^2(Y)$ and $L^2(X)$ that intertwines the respective Laplace operators. From this intertwining, we find that for every $\lambda$, $U \Psi_{Y,\lambda}$ is a normalized eigenfunction of eigenvalue $\lambda$. From (\ref{zetaexpansion}), we get that $\zeta_{Y,a_0}(s)=\zeta_{X,a_0^*}(s)$ for all functions $a_0 \in C^{\infty}(Y)$, and similarly for the two-variable version (cf.\ proof of Lemma \ref{diffinv}). \qed \end{se} \bigskip For the other, more interesting direction of the proof, we first present a short and formal argument, by computing suitable residues of the zeta functions. In later sections, we will also compare expansion coefficients in the region of absolute convergence, rather than residues. This is computationally convenient, and it will allow us to prove some of the ``harder'' statements in Theorem \ref{extraprop}. \begin{notation} If $\varphi \, : \, X \rightarrow Y$ is a smooth diffeomorphism of Riemannian manifolds, we denote by $w_\varphi$ the change of the volume element under the map $\varphi$ (Radon-Nikodym derivative), i.e., locally in a chart, $$ w_\varphi = |\det(J_\varphi)| \sqrt{\det(g_Y)/\det(g_X)}, $$ where $J_\varphi$ is the jacobian matrix of $\varphi$ (sometimes, $w_\varphi$ is called the jacobian of $\varphi$).
We then have the change of variables formula \bea \label{chvar} \int_{Y} a_0 d\mu_Y & = & \int_{X} a_0^* w_\varphi d\mu_X, \eea for any function $a_0 \in C^{\infty}(Y)$. \end{notation} \begin{lemma}[e.g., \cite{Gilkey}, Lemma 1.3.7 and Thm.\ 3.3.1(1)] \label{residue} Let $X$ denote a closed $d$-dimensional Riemannian manifold, $d>0$. Then for $a_{0} \in C^{\infty}(X)$ the function $\zeta_{X,a_0}(s)$ has a simple pole at $s=d/2$ with residue \bea \mathrm{Res}_{s=\frac{d}{2}} \zeta_{X,a_0} = \frac{1}{\Gamma(\frac d2)(4 \pi)^{d/2}} \int_X a_0\, d\mu_X. \qed \nonumber \eea \end{lemma} \begin{lemma} \label{res2} Let $X$ be a closed $d$-dimensional Riemannian manifold, $d>0$. For any $a_1,a_2 \in C^{\infty}(X)$ we have $$ \mathrm{Res}_{s=d/2} \zeta_{X,a_1,a_2} = \frac{1}{\Gamma(\frac d2)(4 \pi)^{d/2}} \int_X g_X(da_1,da_2)\, d\mu_X. $$ \end{lemma} \begin{proof} Follows from the previous Lemma and Lemma \ref{2to1}. \end{proof} \begin{proof}[First proof of Theorem \ref{Z0thm} \textup{(i)} $\Rightarrow$ \textup{(ii)}] \begin{lemma} \label{basechange} The map $\varphi$ has $w_\varphi=1$.\end{lemma} \begin{proof} It follows from $\zeta_{Y,a_0} = \zeta_{X,a_0^*}$ by taking residues that $$ \mathrm{Res}_{s=\frac{d}{2}} \zeta_{Y,a_0}(s) = \mathrm{Res}_{s=\frac{d}{2}} \zeta_{X,a_0^*}(s). $$ At $a_0=1$, we find that $X$ and $Y$ have the same volume, and then by Lemma \ref{residue}, the general equality of residues becomes \bea \int_Y a_0 \, d\mu_Y = \int_X a_0^*\, d\mu_X. \eea The change of variables formula (\ref{chvar}) implies \bea \int_X a_0^* (1-w_\varphi)\, d\mu_X = 0 \ \ \ \ (\forall a_0^* \in C^{\infty}(X)), \nonumber \eea and the fundamental lemma of the calculus of variations gives that \bea w_\varphi=1.
\eea \end{proof} By using the polarisation identity for the quadratic form $g$, we see that \begin{eqnarray*} 4\zeta_{X,a^*_1,a^*_2} &=& 4\zeta_{X,g_X(da^*_1,da^*_2)} \\ &=&\zeta_{X,g_X(d(a^*_1+a^*_2),d(a^*_1+a^*_2))}-\zeta_{X,g_X(d(a^*_1-a^*_2),d(a^*_1-a^*_2))}\\ &=&{\tilde{\zeta}}_{X,(a_1+a_2)^*} - {\tilde{\zeta}}_{X,(a_1-a_2)^*} \\ &=& {\tilde{\zeta}}_{Y,(a_1+a_2)} - {\tilde{\zeta}}_{Y,(a_1-a_2)} \\ &=& 4 \zeta_{Y,a_1,a_2} \end{eqnarray*} for all $a_1,a_2 \in C^{\infty}(Y)$. From Lemma \ref{res2}, we then get $$\int_Y g_Y(da_1,da_2)\, d\mu_Y = \int_X g_X(da^*_1,da^*_2)\, d\mu_X. $$ After base change (using $w_\varphi=1$) and an integration by parts (divergence theorem), the previous equality of integrals gives \bea \label{mw} 0=\int_X a_{1}^{*} \left( (\Delta_{Y} (a_{2}))^{*} - \Delta_{X}(a_{2}^{*}) \right) d\mu_X = \int_X a_{1}^{*} \left( \Delta_{Y}^{*} - \Delta_{X} \right)( a_{2}^{*} )d\mu_X \eea for all $a_1,a_2 \in C^{\infty}(Y)$. Here, $\Delta_{Y}^{*} = U \Delta_{Y} U^{*}$ with $$U=\varphi^* \, : \, L^{2}(Y) \rightarrow L^{2}(X)$$ the pullback, and $U^{*}$ the push-forward. Since this holds for all $a_1^*$ and $a_2^*$, we find that $$ \Delta_{Y}^{*} = \Delta_{X}. $$ Also, $U$ is unitary, since $w_\varphi=1$, whence $$\langle Uf, Ug \rangle_X = \int_X f^* g^* w_\varphi d\mu_X = \int_Y fg d\mu_Y = \langle f, g \rangle_Y $$ for all $f,g \in L^{2}(Y)$. Hence from Lemma \ref{Watson}, we find that $\varphi$ is an isometry. \end{proof} \section{Matching squared eigenfunctions} \label{sq} In this section, we investigate more closely the meaning of condition (a) in the main theorem. \begin{notation} We use the following notation for the sum of the squares of the eigenfunctions belonging to a fixed eigenvalue and basis: $$\sigma_{X,\lambda}=\sigma_\lambda:=\sum_{\Psi_X \vdash \lambda} \Psi_X^2 . $$ \end{notation} \begin{proposition} \label{squares0} Suppose that $\varphi \, : \, X \rightarrow Y$ is a smooth diffeomorphism between connected closed Riemannian manifolds.
Let $\{\Psi_{X,\lambda}\}$ and $\{\Psi_{Y,\mu}\}$ denote two complete sets of orthonormal real eigenfunctions for $\Delta_X$ and $\Delta_Y$, respectively. Condition (a) in Theorem \ref{Z0thm} is equivalent to the statement that the spectra of $\Delta_X$ and $\Delta_Y$ agree with multiplicities, we have $w_\varphi=1$, and for any eigenvalue $\lambda$ we have $$ \sigma_{X,\lambda} := \sum_{\Psi_X \vdash \lambda} (\Psi_X)^2=\sum_{\Psi_Y \vdash \lambda} (\Psi^*_Y)^2 =: \sigma_{Y,\lambda}^*.$$ \end{proposition} \begin{proof} The assumption is $$\zeta_{Y,a_0}(s)=\zeta_{X,a_0^*}(s)$$ for all $a_0 \in C^{\infty}(Y)$. Evaluated at the unit $a_0=1$, it follows that the nonzero spectra of $\Delta_X$ and $\Delta_Y$, including multiplicities, agree (using the identity principle for generalized Dirichlet series, cf. \cite{HardyDirichlet}, Thm.\ 6), and this implies that both the volumes and dimensions of $X$ and $Y$ agree as well. The coefficients of the above Dirichlet series (when grouped according to fixed $\lambda$) as in (\ref{zetaexpansion}) are uniquely determined by the series, again by the identity theorem for Dirichlet series. If we spell out the assumption for the individual coefficients of $\lambda^{-s}$ in the two Dirichlet series, we find that for any $a_0 \in C^{\infty}(Y)$ we have $$\int_{Y} \sum_{\Psi_Y \vdash \lambda} |\Psi_Y|^2 a_0 d\mu_Y = \int_{X} \sum_{\Psi_X \vdash \lambda} |\Psi_X|^2 a_0^* d\mu_X $$ for $\lambda \neq 0$. We perform a coordinate change in the first integral by using the map $\varphi : X \rightarrow Y$. Since $w_\varphi=1$ (Lemma \ref{basechange}), we find \bea \int_{X} \sum_{\Psi_Y \vdash \lambda} |\Psi^*_Y|^2 a_0^* d\mu_X & = & \int_{X} \sum_{\Psi_X \vdash \lambda} |\Psi_X|^2 a_0^* d\mu_X \eea for any $a_0 \in C^{\infty}(Y)$. Again, the fundamental lemma of the calculus of variations gives \bea \label{eqeig} \sum_{\Psi_X \vdash \lambda} (\Psi_X)^2= \sum_{\Psi_Y \vdash \lambda} (\Psi^*_Y)^2, \eea for $\lambda \neq 0$.
The eigenvalue $\lambda=0$ has multiplicity one, since the manifold is connected, and the normalized eigenfunction on $Y$ is equal to $1/\sqrt{\mathrm{vol}(Y)}$, which pulls back to $1/\sqrt{\mathrm{vol}(Y)}$. Since $X$ and $Y$ have the same volume (from equality of their zeta functions at $a_0=1$), we find this is equal to $1/\sqrt{\mathrm{vol}(X)}$, the normalized eigenfunction for $\lambda=0$ on $X$. The other direction of the equivalence is obtained by reversing steps. This finishes the proof. \end{proof} We deduce a corollary about the diagonal of the heat kernel: \begin{corollary} If $\varphi : X \rightarrow Y$ is a smooth diffeomorphism of closed connected smooth Riemannian manifolds, then the following conditions are equivalent: \begin{enumerate} \item[\textup{(A)}] For all $a_0 \in C^{\infty}(Y)$, we have that $ \zeta_{Y,a_0} = \zeta_{X,\varphi^*(a_0)}$; \item[\textup{(B)}] $K_X(t,x,x) = K_Y(t,\varphi(x),\varphi(x))$ for all $t>0$ and all $x \in X$, and $w_\varphi=1$. \end{enumerate} \end{corollary} \begin{proof} Recall the following expression for the heat kernel (e.g., \cite{Berard}, V.3) \bea \label{kernelxy} K_X(t,x,y) = \mathop{\sum_{\lambda \in \Lambda}}_{\textrm{distinct}} e^{-\lambda t} \sum_{\Psi \vdash \lambda} \Psi(x){\Psi(y)}, \, \, (t>0) \eea and setting $x=y$, we find \bea \label{kernel} K_X(t,x,x) = \mathop{\sum_{\lambda \in \Lambda}}_{\textrm{distinct}} e^{-\lambda t} \sigma_{X,\lambda}(x), \, \, (t>0). \eea This implies the result. \end{proof} \section{Expansion coefficients of the two-variable zeta function} \label{arbspec} We now take a closer look at the expansion coefficients of the two-variable zeta functions, under the assumptions of (i) in Theorem \ref{Z0thm}. This computation provides an alternative proof of Theorem \ref{Z0thm}, and will be used in proving part of Theorem \ref{extraprop}. We find that $X$ and $Y$ have the same spectra with multiplicities. As usual, we denote this spectrum by $\{ \lambda \}$ (with multiplicities). 
We have already seen that the polarisation identity for the quadratic form $g$ implies that $\zeta_{Y,a_1,a_2} = \zeta_{X,a_1^*,a_2^*}$. Our starting point is the expression for $\mathrm{tr}(a_1[\Delta,a_2]\Delta_Y^{-s})$ from Equation (\ref{abcd}). The coefficient of $\lambda^{-s}$ is $$ \int_Y a_1 \left[ \sigma_{Y,\lambda} \Delta_Y(a_2) - g_Y (da_2,d\sigma_{Y,\lambda}) \right] d\mu_Y. $$ If we equate this to the corresponding coefficient of the other zeta function, and then perform a base change to $X$ (using $w_\varphi=1$, Lemma \ref{basechange}) and use the fundamental lemma of calculus of variations to remove the integral over $X$, we find \bea \sigma_\lambda (\Delta_Y^*-\Delta_X) &=& g_Y^*(d-,d\sigma_{\lambda}) - g_X(d-,d\sigma_{\lambda}) \label{used} \\ &=& \mbox{ first order operator. } \nonumber \eea Equation (\ref{used}) means that the leading symbol of $\Delta_Y^*-\Delta_X$ vanishes outside the zero set of $\sigma_\lambda$, which (by \cite{Watson}) implies that $g_Y^*=g_X$ there. Since for every $x$ there is a $\lambda$ with $\sigma_\lambda(x) \neq 0$, we find $g_Y^*=g_X$ everywhere. Hence $\varphi$ is an isometry. \section{Improvements in the case of simple spectrum} In this section, we consider how to improve the theorem in case the spectrum of $\Delta_X$ is simple; we will prove Theorem \ref{extraprop}. We first start by listing some consequences of known results related to condition (a): \begin{remarks} \mbox{} - Condition (a) in Theorem \ref{Z0thm} does not {always} suffice to imply that $\varphi$ is an isometry, cf.\ Corollary \ref{exist}. - There exist isospectral, non-isometric compact Riemannian manifolds with simple spectrum (cf.\ Zelditch \cite{Zelditchmult}, Theorem C), so (for maps with unit jacobian) condition (a) is not equivalent to isospectrality (which is condition (a) for the constant function $a_0=1$ only).
- A result of Uhlenbeck (\cite{Uh1}) says that the condition of having non-simple Laplace spectrum is meager in the space of smooth Riemannian metrics on a given manifold $X$. Thus, Theorem \ref{extraprop} treats the `generic' situation. But there do exist Riemannian manifolds for which the multiplicity of the spectrum grows polynomially in the eigenvalues, cf.\ e.g.\ Donnelly \cite{Don}. \end{remarks} \begin{lemma} \label{squares} Suppose that $X$ is a closed smooth Riemannian manifold. Then the zero set of any nonzero eigenfunction of $\Delta_X$ is not dense. If we let $\tilde{X} \subseteq X$ denote the complement of the union of all such zero sets, then $\tilde{X}$ is dense in $X$, and the following holds: for any real $\Delta_X$-eigenfunction $\Phi$, and any function $h \in C(X)$ that satisfies $h^2=\Phi^2$, we have that $h=\pm \Phi$ on every connected component of $\tilde{X}$. \end{lemma} \begin{proof} We can write $c \Phi =h$ where $c$ is a function (a priori not necessarily globally constant) that takes values in $\{+1,-1\}$. We can assume that $X$ is connected. We choose for $\tilde{X}$ the complement of the union of all zero sets of non-zero $\Delta_X$-eigenfunctions on $X$. By continuity, $c$ is obviously constant on connected components of $\tilde{X}$. All we have to show is that $\tilde{X}$ is dense. We claim that the complement of the zero set of an eigenfunction $\Phi$ is an open dense subset of $X$. Granting this for the moment, since the spectrum is discrete, the intersection $\tilde{X}$ of all such complements of zero sets is a countable intersection of open dense subsets of $X$. Since $X$ is compact and hence a complete metric space (for the Riemannian metric), the Baire category theorem implies that this intersection is itself dense. The claim will follow if we show that the zero set $\mathcal{Z}$ of $\Phi$ is nowhere dense. So suppose on the contrary that $\mathcal{Z}$ is dense in a neighbourhood $U$ of some point $x \in X$.
Then $\Phi \equiv 0$ on $\overline{U}$. Since the unique continuation theorem applies to the Laplacian with smooth coefficients (cf.\ \cite{AKS}, Rmk.\ 3, p.\ 449), we find $\Phi \equiv 0$ on all of $X$ (assumed connected), a contradiction. \end{proof} \begin{proof}[Proof of Theorem \ref{extraprop}] First, suppose $\varphi$ is an isometry between $X$ and $Y$. Pull-back by $\varphi$ induces a unitary transformation $U$ between $L^2(Y)$ and $L^2(X)$ that intertwines the respective Laplace operators. From this intertwining, we find that for every $\lambda$, $U \Psi_{Y,\lambda}$ is a normalized eigenfunction of eigenvalue $\lambda$, hence equal to $\pm \Psi_{X,\lambda}$ by the simplicity assumption on the spectrum. From (\ref{zetaexpansion}), we get that $\zeta_{Y,a_0}(s)=\zeta_{X,a_0^*}(s)$ for all functions $a_0 \in C^{\infty}(Y)$. For the more interesting converse direction, we know from the previous section that the pullback map $U=\varphi^*$ takes on the form \beastar U : L^{2}(Y) & \to & L^{2}(X) \\ \Psi_{Y,\lambda} (\lambda \neq 0) & \mapsto & \varphi^{*}(\Psi_{Y,\lambda}) = c_\lambda \Psi_{X,\lambda}; \ \ c_\lambda \in \{ \pm 1\}\\ \Psi_{Y,0}=\frac{1_Y}{\sqrt{\mathrm{vol}(Y)}} & \mapsto & \varphi^*(\Psi_{Y,0}) = \frac{1_X}{\sqrt{\mathrm{vol}(Y)}}=\frac{1_X}{\sqrt{\mathrm{vol}(X)}}=\Psi_{X,0} \eeastar The map is clearly unitary and bijective. We prove that the map $U$ also intertwines the Laplace-Beltrami operators. For this, let $\tilde{X}_{\lambda}$ denote the complement of the zero set of the eigenfunction $\Psi_{X,\lambda}$. Let $x \in \tilde{X}_{\lambda}$. We can find an open neighbourhood $\mathcal{U}_x$ of $x$ on which $c_\lambda=\pm 1$ (defined by $\varphi^{*}(\Psi_{Y,\lambda}) = c_\lambda \Psi_{X,\lambda}$ as above) is constant.
For any $\tilde{x} \in \mathcal{U}_x$, we find $$ \Delta_X U \Psi_{Y,\lambda}(\tilde{x}) = \Delta_X (c_\lambda \Psi_{X,\lambda})(\tilde{x}) = c_\lambda \lambda \Psi_{X,\lambda}(\tilde{x}) = U(\lambda \Psi_{Y,\lambda})(\tilde{x}) = U \Delta_Y \Psi_{Y,\lambda} (\tilde{x}).$$ This equality of continuous functions (for the continuity of the left hand side, use that the map $\varphi$ is assumed to be smooth) holds on $\tilde{X}_{\lambda}$, and since $\tilde{X}_{\lambda}$ is dense in $X$ (see the previous proof), we find that it holds on $X$. Now since the eigenfunctions form a basis for $L^2(X)$, we find an equality of operators $$\Delta_X U = U \Delta_Y.$$ This implies that $X$ and $Y$ are isometric by Lemma \ref{Watson}, and finishes the proof of the second part of Theorem \ref{extraprop}. For the first part, we observe that the zero set of $\sigma_\lambda$ is nowhere dense (since $\sigma_\lambda$ is a finite linear combination of the nonnegative functions $\Psi^2$ for $\Psi \vdash \lambda$, so $\sigma_\lambda=0$ implies $\Psi=0$ for all $\Psi \vdash \lambda$, and use Lemma \ref{squares}). Hence in this case, in the proof of Theorem \ref{Z0thm}, it suffices to have formula (\ref{used}) for \emph{only one $\lambda$}, i.e., equality of \emph{one} coefficient of the Dirichlet series in condition (b) suffices. \end{proof} \section{Further improvements} At the cost of using more ``hard'' analysis, we can improve some of the auxiliary results from the previous section even further. \begin{lemma} If in Lemma \ref{squares} we assume that $h \in C^{\infty}(X)$, then $h=\pm \Phi$ with the sign constant everywhere. \end{lemma} \begin{center} \begin{figure}[h] \includegraphics[width=8cm]{smoothcase.pdf} \caption{An eigenfunction around a nodal set} \label{fig0} \end{figure} \end{center} \begin{remark} We can write $c \Phi =h$ where $c$ is a function (a priori not necessarily globally constant) that takes values in $\{+1,-1\}$. We have to prove that $c$ is globally constant.
The gist of the proof is to use the regularity of eigenfunctions at zeros. Think of the prototypical $\sin(x)=c(x)h(x)$ on $[0,2\pi]$. If $h(x)$ is \emph{not} equal to $\pm \sin(x)$, then (up to an overall sign) $h(x)=|\sin(x)|$. But that function is not even $C^1$ at $x=\pi$. On the other hand, a function like $ f(x)^2 =e^{-2/|x|} \ (x \neq 0), \ f(0)=0, $ admits several distinct smooth square roots $f(x)$, but it has a zero of infinite order. See Figure \ref{fig0}. \end{remark} \begin{proof} It follows e.g.\ from the analysis in Caffarelli and Friedman (\cite{CF}, Example 3 pp.\ 432--433, compare: \cite{HL}, chapter 4, proof of Lemma 4.1.1) that for every point $x_0$ of the manifold $X$, there exists a small enough neighbourhood $U$ of $x_0$ that intersects the zero set $\mathcal{Z}$ of $\Phi$ in the union of finitely many submanifolds of dimension $\leqslant m-1$ (where $m=\dim X$). First note that if $x_{0} \notin \mathcal{Z}$ there exists a connected open set $W \ni x_{0}$ for which $\Phi\mid_{W} \neq 0$. Then the function $$\left( \frac{h}{\Phi} \right) \mid_{W} = c\mid_{W}$$ is smooth and hence $c \mid_{W}$ must be constant. \begin{center} \begin{figure}[h] \includegraphics[width=4cm]{zeros.pdf} \caption{Local structure of the zero set $\mathcal{Z}$ in a neighborhood $U$ of $x_0$, with independent paths $i_1,i_2$} \label{fig01} \end{figure} \end{center} Now, let $x_{0} \in \mathcal{Z}$, then both $h$ and $\Phi$ vanish at $x_{0}$. Choose any $C^{\infty}$ path $i: ]-\varepsilon,\varepsilon[ \to X$ such that $$\mathrm{im}(i) \cap \mathcal{Z}=\{x_0\} \mbox{ with }i(0)=x_0 \mbox{ and }||i'(t)||>0.$$ Then we get: \bea h(i(t))=c(i(t))\cdot \Phi(i(t)) \eea Assume that $c(i(t))$ changes sign at $0$. Differentiating the equation at $t=0$ gives: $$ \lim_{t \downarrow 0} h(i(t))' = \lim_{t \downarrow 0} \Phi(i(t))' = - \lim_{t \uparrow 0} \Phi(i(t))'.
$$ Because of smoothness, this implies $$(\Phi \circ i)'(0) = (h\circ i)'(0)=0.$$ Any path in $\mathcal{Z}$ is mapped identically to $0$ by both $h$ and $\Phi$ and hence also has derivative zero. It follows that both $h$ and $\Phi$ have all (directional) derivatives in $x_{0}$ equal to $0$. Indeed, because locally around $x_0$ the zero set $\mathcal{Z}$ is contained in a finite union of codimension $\geqslant 1$ submanifolds, we can find $m$ paths $i$ as above whose tangent vectors at $x_0$ span $T_{x_0} X$, and since all directional derivatives along these vectors are zero, so is the total derivative. \\ By induction it follows that up to any order all derivatives vanish. Hence $x_{0}$ is a zero of the eigenfunction $\Phi$ of infinite order, which is impossible by Aronszajn's unique continuation theorem. We conclude that locally $c$ does not change sign at zeros (and anyhow not at nonzeros). We assume $X$ to be connected, so this implies that $c$ is constant. \end{proof} We deduce the following corollary: \begin{corollary} \label{verysmooth} If $\varphi : X \rightarrow Y$ is a $C^{\infty}$-diffeomorphism of closed connected $C^\infty$-Riemannian manifolds and with simple Laplace spectrum, such that the diagonals of the heat kernels match up in the sense that $K_X(t,x,x)=K_Y(t,\varphi(x),\varphi(x))$ for sufficiently small $t>0$, then the heat kernels match up in the sense that $K_X(t,x,y)=K_Y(t,\varphi(x),\varphi(y))$ for all $t>0$. In particular, if $g$ and $g'$ are two smooth Riemannian structures on a closed connected manifold and with simple Laplace spectrum, then $K_g(t,x,x)=K_{g'}(t,x,x)$ for sufficiently small $t>0$ implies that $K_g(t,x,y)=K_{g'}(t,x,y)$ for all $t>0$ and hence $g=g'$. 
\end{corollary} \begin{proof} Since $$K_X(t,x,x)=K_Y(t,\varphi(x),\varphi(x))$$ for sufficiently small $t>0$, and we have simple Laplace spectrum, formula (\ref{kernel}) implies that the corresponding spectra and the squares of the eigenfunctions match up: $$(\Psi_{X,\lambda})^2=(\varphi^*\Psi_{Y,\lambda})^2.$$ The smoothness on both sides implies via the previous lemma that the functions agree up to a global sign. Applying (\ref{kernelxy}), we find \begin{eqnarray*} K_X(t,x,y) &=& \sum_\lambda e^{-\lambda t} \Psi_{X,\lambda}(x){\Psi_{X,\lambda}(y)} \\ &=& \sum_\lambda e^{-\lambda t} (\pm 1)^2\varphi^*\Psi_{Y,\lambda}(x){\varphi^*\Psi_{Y,\lambda}(y)} \\ &=& K_Y(t,\varphi(x),\varphi(y)). \end{eqnarray*} The particular case follows by setting $\varphi$ to be the identity map. \end{proof} \section{Example: flat tori} \label{tori} \begin{se} Let $\mathbf{T}=\mathbf{R}^n/\Lambda$ denote a flat torus, corresponding to a lattice $\Lambda$ in $\mathbf{R}^n$. Let $\Lambda^\vee$ denote the dual lattice to $\Lambda$. The Laplacian is $$\Delta_\mathbf{T}= -\sum_k \partial^2_k,$$ the spectrum is $$\{ 4 \pi^2 ||\lambda^\vee||^2\}_{\lambda^\vee \in \Lambda^\vee},$$ and a basis of orthonormal eigenfunctions of eigenvalue $4\pi^2\ell$ is given by $$\Psi_{\lambda^\vee}:=\frac{e^{2 \pi i\langle \lambda^\vee,x \rangle}}{\sqrt{\mathrm{vol}(T)}}$$ for $||\lambda^\vee||^2=\ell.$ (This is not a real basis as usual in this paper, but we will make appropriate adaptations.) The crucial property for us is that these functions satisfy $$|\Psi_{\lambda^\vee}|^2=\Psi_{\lambda^\vee} \cdot \overline{\Psi_{\lambda^\vee}}= \frac{1}{\mathrm{vol}(T)}.$$ \end{se} \begin{se} We consider the meaning of condition (a) in Theorem \ref{Z0thm} for the torus $\mathbf{T}$. Let $a_0 \in C^{\infty}(\mathbf{T})$.
Then \begin{eqnarray} \label{torus1} \zeta_{\mathbf{T},a_0}(s) &=& \sum_{\lambda^\vee \in \Lambda^\vee-\{0\}} \frac{1}{(4 \pi^2 ||\lambda^\vee||^2)^{s}} \int_\mathbf{T} a_0 |\Psi_{\lambda^\vee}|^2 \, d\mu_{\mathbf{R}^n} \\ &=& \left( \frac{1}{\mathrm{vol}(T)} \int_\mathbf{T} a_0 \, d\mu_{\mathbf{R}^n} \right) \cdot \zeta_\mathbf{T}(s). \nonumber \end{eqnarray} We conclude from this by noting that the volume is determined by the spectrum: \end{se} \begin{proposition} Let $\varphi \, : \, \mathbf{T}_1\rightarrow \mathbf{T}_2$ denote a smooth diffeomorphism between two flat tori. Then the following are equivalent: \begin{enumerate} \item[\textup{(i)}] For all $a_0 \in C^{\infty}(\mathbf{T}_{2})$, we have that $ \zeta_{\mathbf{T}_2,a_0} = \zeta_{\mathbf{T}_1,\varphi^*(a_0)}$; \item[\textup{(ii)}] $\mathbf{T}_1$ and $\mathbf{T}_2$ are isospectral, and $\varphi$ has jacobian $w_\varphi= 1$. \qed \end{enumerate} \end{proposition} \begin{corollary} \label{exist} There exist non-isometric manifolds for which condition (a) of Theorem \ref{Z0thm} holds. \end{corollary} \begin{proof} Take the following isospectral, non-isometric tori $\mathbf{T}_{\pm}$ (\cite{Schiemann}, \cite{CS}) in dimension 4, spanned by the column vectors in the respective matrices $G_+$ and $G_-$ $$ G_{\pm}=\frac{1}{2\sqrt{3}} \left( \begin{array}{rrrr} \pm 3 & -\sqrt{7} & -\sqrt{13} & -\sqrt{19} \\ 1 & \pm 3 \sqrt{7} & \sqrt{13} & -\sqrt{19} \\ 1 & -\sqrt{7} & \pm 3 \sqrt{13} & \sqrt{19} \\ 1 & \sqrt{7} & \sqrt{13} & \pm 3 \sqrt{19} \end{array} \right). $$ Consider the linear map $ A \, : \, \mathbf{R}^4 \rightarrow \mathbf{R}^4$ given by $$ A= G_+ G_-^{-1} = \frac{1}{5} \left( \begin{array}{rrrr} -3 & -2 & -1 & -3 \\ 2 & -2 & 4 & -3 \\ 3 & -3 & -4 & 3 \\ 1 & 4 & 2 & -4 \end{array} \right) $$ with determinant $\det(A)=1$. This map factors through to a map $\mathbf{T}_- \rightarrow \mathbf{T}_+$ with determinant ($=$ jacobian) $1$.
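Both claims are easy to check numerically. The following sketch (our illustration, assuming NumPy; not part of the original argument) compares the theta series of the two lattices up to a fixed bound (isospectrality of lattices passes to their duals via the Jacobi transformation formula, hence to the tori) and verifies $|\det(G_+G_-^{-1})|=1$:

```python
import itertools
import numpy as np

s7, s13, s19 = np.sqrt(7.0), np.sqrt(13.0), np.sqrt(19.0)

def basis(sign):
    # Basis matrix G_+ or G_- (columns span the lattice), copied from the text.
    e = sign
    return (1.0 / (2.0 * np.sqrt(3.0))) * np.array([
        [3*e,     -s7,     -s13,     -s19],
        [1.0,  3*e*s7,      s13,     -s19],
        [1.0,     -s7,  3*e*s13,      s19],
        [1.0,      s7,      s13,  3*e*s19]])

Gp, Gm = basis(+1), basis(-1)

def norms(G, box=5, bound=10.0):
    # Sorted values v^T (G^T G) v over integer v in [-box, box]^4, kept <= bound.
    # The box is large enough (via a Gershgorin bound on the Gram matrix)
    # that no value <= bound is missed.
    Q = G.T @ G
    vals = [float(np.array(v) @ Q @ np.array(v))
            for v in itertools.product(range(-box, box + 1), repeat=4)]
    return np.sort([q for q in vals if 0.0 < q <= bound])

# Isospectrality: the theta series of the two lattices agree (up to the bound):
assert np.allclose(norms(Gp), norms(Gm), atol=1e-9)

# The linear map A = G_+ G_-^{-1} maps one lattice onto the other and has
# |det A| = 1, so it descends to a diffeomorphism of tori with unit jacobian.
A = Gp @ np.linalg.inv(Gm)
assert abs(abs(np.linalg.det(A)) - 1.0) < 1e-9
```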
\end{proof} \begin{se} We now consider condition (b) in Theorem \ref{Z0thm} for the torus $\mathbf{T}$. We compute for $a_1, a_2 \in C^{\infty}(\mathbf{T})$, using Lemma \ref{2to1}, that \begin{equation} \label{torus2} \zeta_{\mathbf{T},a_1,a_2}(s)= \left( \frac{1}{\mathrm{vol}(T)} \int_\mathbf{T} \nabla(a_1)^\top \nabla(a_2) \, d\mu_{\mathbf{R}^n} \right) \zeta_\mathbf{T}(s).\end{equation} \end{se} \part{LENGTHS AND DISTANCES} \section{Length categories} \label{lengthcat} \begin{definition} We call a pair $(\mathcal{C},\ell)$ a \emph{length category} if $\mathcal{C}$ is a category endowed with a subcategory $\mathcal{D}$, full on objects, such that every morphism in $\mathcal{D}$ is an isomorphism in both $\mathcal{C}$ and $\mathcal{D}$; these morphisms are called $\mathcal{D}$-isomorphisms from now on. Furthermore, for every $X,Y \in \mathrm{Ob}(\mathcal{C})$ and every $\varphi \in \mathrm{Hom}(X,Y)$, there is defined a non-negative real number $\ell(\varphi) \in \mathbf{R}_{\geqslant 0}$, called the \emph{length of $\varphi$}, such that \begin{enumerate} \item[\textup{\textbf{(L1)}}] $\ell(\varphi)=0$ if and only if $\varphi$ is a $\mathcal{D}$-isomorphism; \item[\textup{\textbf{(L2)}}] If $X,Y,Z \in \mathrm{Ob}(\mathcal{C})$ and $\varphi \in \mathrm{Hom}(X,Y), \psi \in \mathrm{Hom}(Y,Z)$, then $$ \ell(\psi \circ \varphi) \leqslant \ell(\varphi) + \ell(\psi).$$ \end{enumerate} \end{definition} \begin{remark} In particular, in $\textup{\textbf{(L1)}}$ we do not assume that the $\mathcal{D}$-isomorphism classes are necessarily the categorical isomorphism classes (i.e., the maps for which there exists an inverse in the category $\mathcal{C}$), but we do assume that the $\mathcal{D}$-isomorphisms are (some of the) categorical isomorphisms of $\mathcal{C}$. For instance, think of the category $\mathcal{C}$ of metric spaces and continuous maps, but with $\mathcal{D}$-isomorphisms the isometries (instead of the homeomorphisms). 
Note also that the morphisms of $\mathcal{D}$ can be recovered from the pair $(\mathcal{C},\ell)$ as those morphisms in $\mathcal{C}$ with length zero. \end{remark} To illustrate the concept, let us look at some examples. If no subcategory $\mathcal{D}$ is specified, it is implicitly understood that the $\mathcal{D}$-isomorphisms are the $\mathcal{C}$-isomorphisms. \begin{examples} \mbox{ } - Any category is a length category in a trivial way, defining the ``discrete'' length by $$\ell(X \cong Y)=0 \mbox{ and }\ell(\varphi)=1 \mbox{ otherwise.} $$ However, categories can carry other, more meaningful lengths. - Let $\mathsf{Grp}$ denote the category of finite commutative groups, and for $\varphi \in \mathrm{Hom}(G,H)$ a homomorphism of groups $G$ and $H$, define its length as $$\ell(\varphi) = \max \{ \log(|\ker(\varphi)|), \log(|\mathrm{coker}(\varphi)|) \}. $$ This obviously satisfies \textbf{(L1)}, and also \textbf{(L2)} since $|\ker(\psi \circ \varphi)| \leq |\ker(\varphi)| \cdot |\ker(\psi)|$ and similarly for the cokernel. Hence $(\mathsf{Grp},\ell)$ is a length category. - More generally, an \emph{abelian} category whose kernels and cokernels are ``measurable'' in some sense is a length category by a similar construction. However, non-abelian categories can also be length categories for an interesting length function. In some sense, this is a metric substitute for the non-existence of kernels/cokernels. - The category of compact metric spaces with bi-Lipschitz homeomorphisms is a length category for the length $$ \ell(\varphi) := \max\{ \left|\log \mathrm{dil}(\varphi)\right|, \left|\log \mathrm{dil}(\varphi^{-1})\right| \}, $$ where $\mathrm{dil}(\varphi)$ is the dilatation of the map $\varphi$. This length induces the Lipschitz distance between compact metric spaces. 
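For cyclic groups the group-theoretic length above is completely explicit: the multiplication-by-$k$ endomorphism of $\mathbf{Z}/n\mathbf{Z}$ has kernel and cokernel both of order $\gcd(n,k)$, so its length is $\log \gcd(n,k)$. A short sketch (plain Python) checking axioms \textbf{(L1)} and \textbf{(L2)} on these maps:

```python
from math import gcd, log

def length(n, k):
    # multiplication by k on Z/nZ has |ker| = |coker| = gcd(n, k),
    # so ell = max(log|ker|, log|coker|) = log gcd(n, k)
    return log(gcd(n, k))

n = 360
for k1 in range(1, 40):
    for k2 in range(1, 40):
        # (L1): zero length exactly for the automorphisms (k invertible mod n)
        assert (length(n, k1) == 0) == (gcd(n, k1) == 1)
        # (L2): composing mult-by-k1 with mult-by-k2 gives mult-by-(k1*k2)
        assert length(n, k1 * k2) <= length(n, k1) + length(n, k2) + 1e-12
```

The inequality $\gcd(n,k_1k_2)\leqslant \gcd(n,k_1)\gcd(n,k_2)$, checked numerically here, is exactly the kernel estimate used above.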
\end{examples} Lengths in categories sometimes give rise to a metric on the moduli space of objects of the category $\mathcal{C}$ up to $\mathcal{D}$-isomorphism, as the following lemma shows (note that the condition is sufficient, but not necessary): \begin{lemma} \label{ddd} If $(\mathcal{C},\ell)$ is a length category and we put $$d(X,Y)=\frac{1}{2} \left(\inf_{\varphi \in \mathrm{Hom}_{\mathcal{C}}(X,Y)} \{ \ell(\varphi),+\infty\} \, + \, \inf_{\psi \in \mathrm{Hom}_{\mathcal{C}}(Y,X)} \{\ell(\psi),+\infty\} \right) $$ then $d$ is an extended (i.e., $(\mathbf{R} \cup\{ +\infty \})$-valued) metric on the ``moduli space'' $\mathrm{Ob}(\mathcal{C})/\mathcal{D} \mathrm{-iso}$ if for $d(X,Y)=0$, the infimum in the definition of $d$ is attained in $\mathrm{Hom}(X,Y)$. If $\mathrm{Hom}(X,Y) \neq \emptyset$ for any $X,Y \in \mathrm{Ob}(\mathcal{C})$, $d$ is a (finite) metric. \end{lemma} \begin{proof} First of all, length is well-defined on objects up to $\mathcal{D}$-isomorphism: if $\varphi$ is arbitrary and $\psi$ is a $\mathcal{D}$-isomorphism, then $$\ell(\varphi \circ \psi) \mathop{\leqslant }_{\mbox{\footnotesize{\textbf{(L2)}}}} \ell(\varphi) + \ell(\psi) \mathop{=}_{\mbox{\footnotesize{\textbf{(L1)}}}} \ell(\varphi) = \ell(\varphi \circ \psi \circ \psi^{-1}) \mathop{\leqslant }_{\mbox{\footnotesize{\textbf{(L2)}}}}\ell(\varphi \circ \psi) + \ell(\psi^{-1}) \mathop{=}_{\mbox{\footnotesize{\textbf{(L1)}}}} \ell(\varphi \circ \psi). $$ The positivity of $d$ is clear. For the triangle inequality, since $d$ is defined as the symmetrization of the hemimetric $$d'(X,Y)=\inf_{\varphi \in \mathrm{Hom}(X,Y)} \{\ell(\varphi),+\infty\}, $$ it suffices to prove the triangle inequality for $d'$. Let $\varepsilon>0$. Let $\varphi \in \mathrm{Hom}(X,Y)$ and $\psi \in \mathrm{Hom}(Y,Z)$ be such that $$\ell(\varphi) \leqslant d'(X,Y)+\varepsilon/2\mbox { and } \ell(\psi) \leqslant d'(Y,Z)+\varepsilon/2$$ (which is possible by the definition of $d'$ as an infimum). 
We have $$ d'(X,Z) = \inf_{\theta \in \mathrm{Hom}(X ,Z)} \ell(\theta) \leqslant \ell(\psi \circ \varphi). $$ By axiom \textbf{(L2)} we find $$ \ell(\psi \circ \varphi) \leqslant \ell(\psi) + \ell(\varphi) \leqslant d'(Y,Z)+d'(X,Y) + \varepsilon. $$ The triangle inequality follows by letting $\varepsilon$ tend to zero. Finally, assume $d(X,Y)=0$. Since the infimum in the definition is attained, we find a map $\varphi \in \mathrm{Hom}(X,Y)$ of length zero. Then axiom \textbf{(L1)} implies that $X\cong Y$. \end{proof} \section{The length of a map between Riemannian manifolds} \label{length} We will now consider the category $\mathcal{R}$ of closed smooth Riemannian manifolds, whose morphisms are the smooth diffeomorphisms and whose $\mathcal{D}$-isomorphisms are the isometries. We define a length function in this category using our diffeomorphism invariant. The idea is to measure how far apart the one- and two-variable zeta functions $\zeta_{Y,a_0}$ and ${\tilde{\zeta}}_{Y,a_1}$ are under pullback by the map $\varphi$, in some suitable distance on the set of meromorphic functions, and to test this over certain well-behaved sets of test-functions $a_0,a_1$. \begin{definition} Let $f$ and $g$ denote two functions that are holomorphic and non-vanishing on a right half-line $$H_{\sigma}:=\{ s \in \mathbf{R} \mid s \geqslant \sigma \},$$ where $\sigma$ is fixed once and for all. Define $$ \delta_1(f,g):= \mathop{\sup_{\sigma \leqslant s\leqslant \sigma+1}} \{ | \log \left| \frac{f(s)}{g(s)} \right| | \}, $$ and set $$d_\sigma(f,g):= \frac{\delta_1(f,g)}{1+\delta_1(f,g)}.$$ \end{definition} Convergence in $d_\sigma$ is \emph{not} the same as uniform convergence for general zero-free analytic functions on $H_\sigma$ (because the absolute value signs cause an indeterminacy up to an analytic function with values in the unit circle), but when specialized to our Dirichlet series, this problem disappears, cf.\ infra. 
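To make the definition concrete, here is a small sketch (plain Python; approximating the supremum on a finite grid is an ad hoc choice) for quotients of the shape $|f/g|=t^{2s+1}$, which will occur for rescaled circles below; for such quotients $\delta_1$ is attained at the right endpoint $s=\sigma+1$:

```python
from math import log

def delta1(f, g, sigma, grid=2000):
    # sup_{sigma <= s <= sigma+1} | log|f(s)/g(s)| |, approximated on a grid
    return max(abs(log(abs(f(s) / g(s))))
               for s in (sigma + i / grid for i in range(grid + 1)))

def d_sigma(f, g, sigma):
    d = delta1(f, g, sigma)
    return d / (1 + d)  # always lies in [0, 1)

# a quotient of the shape t^(2s+1), as for the rescaled circles below
t = 0.5
f = lambda s: t ** (2 * s + 1)
g = lambda s: 1.0
# for sigma = 1 the sup is attained at s = 2 and equals 5|log t|
assert abs(delta1(f, g, 1.0) - 5 * abs(log(t))) < 1e-9
assert 0 <= d_sigma(f, g, 1.0) < 1
```

Since $x\mapsto x/(1+x)$ is increasing and bounded by $1$, the quantity $d_\sigma$ always lies in $[0,1)$, which is used in the remarks following the definition of length.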
\begin{definition} The \emph{length of a smooth diffeomorphism $\varphi\, : \, X \rightarrow Y$} of Riemannian manifolds of dimension $N$ is defined by $$ \ell(\varphi):= \mathop{\sup_{a_0 \in C^{\infty}(Y,\mathbf{R}_{\geqslant 0})-\{0\}}}_{a_1 \in C^{\infty}(Y)-\mathbf{R}} \! \! \! \max \, \{ d_{N}(\zeta_{X,a^*_0},\zeta_{Y,a_0}), d_{N}({\tilde{\zeta}}_{X,a^*_1}, {\tilde{\zeta}}_{Y,a_1}) \}.$$ \end{definition} \begin{remarks} \mbox{ } - Since $d_\sigma$ is obviously bounded by $1$, the length of a map also takes values in $[0,1]$. - There is some arbitrariness in the definition of $\ell(\varphi)$: our zeta functions are holomorphic in $\Re(s) > \frac{N}{2}$, so one might also take the product metric of the suprema over another suitable subset. \end{remarks} The main theorem can be rephrased as follows, which shows that $(\mathcal{R},\ell)$ satisfies axiom \textbf{(L1)} of a length category: \begin{proposition} If $X$ and $Y$ are closed Riemannian manifolds, then a smooth diffeomorphism $\varphi \, : \, X \rightarrow Y$ has length zero if and only if it is an isometry. \end{proposition} \begin{proof} If $\varphi$ has length zero, then we have an equality of absolute values of zeta functions under pullback, at positive functions $a_0 \in C^{\infty}(Y,\mathbf{R}_{\geqslant 0})$ and functions $a_{1} \in C^{\infty}(Y)$. Since all eigenvalues are positive, and all Dirichlet series coefficients of the zeta functions are positive when evaluated at a positive function $a_0$ (cf.\ section \ref{sq}), the values for $s \in H_{d+1}$ of such zeta functions are positive, and hence equal. Now a standard theorem (e.g., \cite{Serre}, Section 2.2) implies that the two Dirichlet series are everywhere equal. We conclude that $\zeta_{X,a^*_0}=\zeta_{Y,a_0}$ for $a_0 \in C^{\infty}(Y,\mathbf{R}_{\geqslant 0})$ and ${\tilde{\zeta}}_{X,a^*_1} = {\tilde{\zeta}}_{Y,a_1}$ for all $a_1 \in C^{\infty}(Y)$. 
Since any $a_0 \in C^{\infty}(Y)$ is a linear combination of such positive functions, we can apply Theorem \ref{Z0thm} to conclude that $\varphi$ is an isometry. The converse statement follows directly from the same theorem. \end{proof} We now prove that $(\mathcal{R},\ell)$ also satisfies axiom \textbf{(L2)} of a length category: \begin{proposition} If $X,Y,Z$ are closed Riemannian manifolds, and $\varphi \, : \, X \rightarrow Y$ and $\psi \, : \, Y \rightarrow Z$ are two smooth diffeomorphisms, then $$ \ell(\psi \circ \varphi) \leqslant \ell(\varphi) + \ell(\psi). $$ \end{proposition} \begin{proof} We observe that we have injections of algebras of functions $$\psi^* \, : \, C^{\infty}(Z,\mathbf{R}_{\geq 0}) \hookrightarrow C^{\infty}(Y,\mathbf{R}_{\geq 0}) \mbox{ and }\psi^* \, : \, C^{\infty}(Z) \hookrightarrow C^{\infty}(Y).$$ It then suffices to use the identity $$ \frac{\zeta_{Z,a_0}}{\zeta_{X,\varphi^*\psi^*(a_0)}} = \frac{\zeta_{Z,a_0}}{\zeta_{Y,\psi^*(a_0)}} \cdot \frac{\zeta_{Y,\psi^*a_0}}{\zeta_{X,\varphi^*\psi^*(a_0)}}, $$ and similarly for the two-variable version. \end{proof} We cannot directly apply Lemma \ref{ddd} to conclude that $\ell$ induces a distance, but see Section \ref{fff}. \section{Example: length of rescaling a circle} \label{circular} \begin{se} Let $S_r$ denote the circle of radius $r$, which we parameterize by an angle $\theta \in [0,2\pi[$. The metric is $ds^2=r^2d\theta^2$, $g_{11}=r^2, g^{11}=r^{-2}$, the Laplacian is $-r^{-2} \partial^2_\theta$, with spectrum $\{n^2r^{-2}\}_{n \in \mathbf{Z}_{>0}}$, each eigenvalue of multiplicity two, and eigenspace for $n$ spanned by $\{\sin(n\theta),\cos(n\theta)\}$. Let $\zeta(s)$ denote the Riemann zeta function. 
One sees directly that $$\zeta_{S_r,a_0}= 2r^{2s+1} \left( \int_{0}^{2 \pi} a_0(\theta) d\theta \right) \zeta(2s)$$ and $$\zeta_{S_r,a_1,a_2}= 2 r^{2s-1} \left( \int_{0}^{2\pi} a_1(\theta)\partial_\theta^2(a_2)(\theta) d\theta \right) \zeta(2s) .$$ Hence \bea \label{zetacirc} \zeta_{S_r,a_1,a_2}= r^{-2} \zeta_{S_r,a_1\partial_\theta^2(a_2)}.\eea \end{se} \begin{se} Let us compute the length of the natural rescaling diffeomorphism $$\varphi_{r_1,r_2} \, : \, S_{r_1} \rightarrow S_{r_2} \, \, : \, \, \theta \mapsto \theta \ \ \ (\theta \in [0,2\pi[).$$ We find $$ \left| \frac{\zeta_{S_{r_1},a^*_0}}{\zeta_{S_{r_2},a_0}}\right| = (r_1/r_2)^{2s+1}\mbox{ and } \left| \frac{\zeta_{S_{r_1},a^*_1,a^*_2}}{\zeta_{S_{r_2},a_1,a_2}}\right|= (r_1/r_2)^{2s-1},$$ so we find for the length of $\varphi_{r_1,r_2}$: $$ \ell(\varphi_{r_1,r_2}) = \frac{1}{1+\frac{1}{5|\log(r_1/r_2)|}}.$$ Figure \ref{fig1} depicts $\ell(\varphi_{r,1})$ for $0< r \leqslant 2$. Observe the nice ``conformal'' symmetry $\ell(\varphi_{r_1,r_2})=\ell(\varphi_{r_2,r_1})$. Also, the two-variable zeta function does not affect this computation (as is to be expected from the fact that the spectrum characterizes a circle); in the next section, we will consider isospectral tori, for which it is precisely the one-variable zeta function that plays no role. \begin{center} \begin{figure}[h] \includegraphics[width=10cm]{lr1r2.pdf} \caption{Length of the rescaling diffeomorphism $\varphi_{r,1}$ between a circle of radius $r$ and a circle of unit radius} \label{fig1} \end{figure} \end{center} \end{se} \section{Example: length of a linear map between isospectral tori} \label{lengthtori} \begin{se}Let $\mathbf{T}_1$ and $\mathbf{T}_2$ denote two \emph{isospectral} tori. 
Let $\varphi \, : \, \mathbf{T}_1 \rightarrow \mathbf{T}_2$ denote a smooth bijection, and assume that $\varphi$ arises from a linear map $A$ in the universal cover (any map of tori is homotopic to such a linear map with the same action on the homology of the torus, cf.\ \cite{Halpern}, Lemma 1): $$ \xymatrix{ \mathbf{R}^n \ar@{->}[r]^{A} \ar@{->}[d]_{\pi_1} & \mathbf{R}^n \ar@{->}[d]^{\pi_2} \\ \mathbf{T}_1=\mathbf{R}^n/\Lambda_1 \ar@{->}[r]^{\varphi} & \mathbf{T}_2=\mathbf{R}^n/\Lambda_2 } $$ This makes sense if $A\Lambda_1 \subseteq \Lambda_2$. If we denote by $G_1$ and $G_2$ the generator matrices of the two tori (matrices whose columns are basis vectors of the lattice), the condition is that \begin{equation} \label{intmat} G_2^{-1}AG_1 \in \mathrm{GL}(n,\mathbf{Z}). \end{equation} Taking determinants, we find $$w_\varphi = |\det(A)| = |\det(G_1^{-1} G_2)| = \mathrm{vol}(\mathbf{T}_2)/\mathrm{vol}(\mathbf{T}_1).$$ An example of such a map is the ``change of basis'' $A=G_2 G_1^{-1}$. Write $A^\top$ for the transpose of the matrix $A$. \end{se} \begin{se} Since we assume $\mathbf{T}_1$ and $\mathbf{T}_2$ isospectral tori, they have the same (common) spectral zeta function. 
Hence from formula (\ref{torus1}) we find that $$ \left| \frac{\zeta_{\mathbf{T}_1,a^*_0}}{\zeta_{\mathbf{T}_2,a_0}}\right| = \left| \frac{\int_{\mathbf{T}_2} a_0 w_{\varphi^{-1}}\, d\mu_{\mathbf{R}^n} }{\int_{\mathbf{T}_2} a_0 \, d\mu_{\mathbf{R}^n} } \right| = | \det(A^{-1}) | = \frac{\mathrm{vol}(\mathbf{T}_1)}{\mathrm{vol}(\mathbf{T}_2)} = 1.$$ Via formula (\ref{torus2}), and using that $\nabla(a_1^*)=A^\top (\nabla a_1)\circ\varphi$ together with $|\det A|=1$, the two-variable zeta functions satisfy $$ \sup_{\nabla a_1 \neq 0} \left| \frac{{\tilde{\zeta}}_{\mathbf{T}_1,a^*_1}}{{\tilde{\zeta}}_{\mathbf{T}_2,a_1}}\right| = \sup_{\nabla a_1 \neq 0} \frac{\int_{\mathbf{T}_1} |\nabla(a^*_1)|^2 \, d\mu_{\mathbf{R}^n} }{\int_{\mathbf{T}_2} |\nabla(a_1)|^2\, d\mu_{\mathbf{R}^n} } = \sup_{\nabla a_1 \neq 0} \frac{\int_{\mathbf{T}_2} |A^\top\nabla(a_1)|^2\, d\mu_{\mathbf{R}^n} }{\int_{\mathbf{T}_2} |\nabla(a_1)|^2\, d\mu_{\mathbf{R}^n} }. $$ For every $v \in T_x \mathbf{T}_2$, we have $ |A^\top v|^2 \leqslant ||A||_2\, |v|^2 $, where $||A||_2$ denotes the largest eigenvalue of $A A^\top$ (the square of the spectral norm of $A$). Hence $$ d(\mathbf{T}_1,\mathbf{T}_2) \leqslant \ell(A) \leqslant \frac{\log ||A||_2}{1+\log ||A||_2}. $$ One may wonder whether this bound is attained. \end{se} \begin{example} The smallest dimension in which there exist non-isometric isospectral tori is four, as was shown by Schiemann (\cite{Schiemann}), and an example is given by the two tori in the proof of Corollary \ref{exist}. For the specific map $A=G_- G_+^{-1}$ between these tori, we have $||A||_2 \approx 3.21537$, and $$d(\mathbf{T}_+, \mathbf{T}_-) \leqslant \ell(A) \leqslant 0.538733.$$ \end{example} \section{Convergence in the spectral metric} \label{fff} \begin{theorem} \label{convmet} Suppose we are given two Riemannian manifolds $(X,g_X)$ and $(Y,g_Y)$ and a sequence of smooth diffeomorphisms $\varphi_i \, : \, X \rightarrow Y$ whose lengths converge to zero. Then $X$ and $Y$ are isometric. 
\end{theorem} \begin{proof} The proof is basically the ``convergent'' version of the first proof of Theorem \ref{Z0thm}. In the definition of $\ell(\varphi)$, we observe that both zeta functions $\zeta_{Y,a_0}(s)$ and $\zeta_{X,\varphi_i^*(a_0)}(s)$ have their rightmost pole at $s=d/2$. Both poles are simple, hence they cancel in the quotient. Therefore, the quotient function $\zeta_{X,\varphi_i^*(a_0)}(s)/\zeta_{Y,a_0}(s)$ is holomorphic in $s \geqslant d/2-1/2$. Also, since $a_0$ is positive, the quotient is a positive real-valued function. We conclude from $\ell(\varphi_i) \rightarrow 0$ that $$\zeta_{X,\varphi_i^*(a_0)}(s)/\zeta_{Y,a_0}(s) \rightarrow 1 \mbox{ for } s \in \mathbf{R}.$$ In particular, we have convergence at $s=d/2$, and hence a convergence of residues (uniformly in $a_0$) $$ \mathrm{Res}_{s=\frac d2} \zeta_{X,\varphi_i^*(a_0)}(s) = \lim_{s \rightarrow \frac d2+} \zeta_{X,\varphi_i^*(a_0)}(s) (s-\frac d2) \rightarrow \lim_{s \rightarrow \frac d2+} \zeta_{Y,a_0}(s) (s-\frac d2) = \mathrm{Res}_{s=\frac d2} \zeta_{Y,a_0}(s), $$ (not just in absolute value), where the limits are taken along the positive real axis. By the computation of these residues in Lemma \ref{residue}, we conclude that the jacobians converge to $1$: $$ w_{\varphi_i} \rightarrow 1. $$ For the two-variable zeta functions, one may reason in a similar way, using that $g(da,da)$ is a non-negative function. From Lemma \ref{res2}, we then get a convergence of metrics $$ \varphi_i^*(g_Y) \rightarrow g_X, $$ uniformly on $X$. Recall that the \emph{distortion} of a map $\varphi \, : \, X \rightarrow Y$ is defined to be $$ \mathrm{dis}(\varphi) := \sup_{x_1,x_2 \in X} \left| d_Y(\varphi(x_1),\varphi(x_2))-d_X(x_1,x_2)\right|. 
$$ The distance in terms of the metric tensor is $$ d(x_1,x_2) := \mathop{\inf_{\gamma \in C^1([0,1],X)}}_{\gamma(0)=x_1, \gamma(1)=x_2} \int_0^1 \sqrt{ \sum g_{ij}(\gamma(t)) \gamma_i'(t) \gamma_j'(t)} \, dt .$$ By uniform convergence of metric tensors on the manifold $X$, we can interchange the infimum in the definition of the distance with the limit in metrics to conclude that \begin{equation} \label{dis} \mathrm{dis}(\varphi_i) \rightarrow 0. \end{equation} We can now finish the proof as in \cite{Burago} (proof of Thm.\ 7.3.30). Since $X$ is compact, we can find a dense countable set $S \subset X$ and a subsequence $\{ \varphi'_i\}$ of $\{ \varphi_i \}$ that converges pointwise in $Y$ at every $x \in S$. This allows us to define a limit map $$\varphi \, : \, S \rightarrow Y \mbox{\ \ by \ \ } \varphi(x):= \lim \varphi'_i(x)$$ for $x \in S$. This limit map is distance-preserving by (\ref{dis}), and so can be extended to a distance-preserving bijection from $X$ to $Y$. Now the Myers-Steenrod theorem (\cite{KN}, 3.10) implies that $\varphi$ is a smooth isometry between $X$ and $Y$. \end{proof} \begin{corollary} The function ``zeta-distance'' $$d_\zeta(X,Y):=\min \{ \inf_{C^{\infty}(X \stackrel{\varphi}{\rightarrow} Y)} \ell(\varphi),+\infty\}$$ defines an extended metric between isometry classes of Riemannian manifolds. \end{corollary} \begin{proof} It suffices to prove that if $d_\zeta(X,Y)=0$, then $X$ and $Y$ are isometric, and this follows from the previous theorem. \end{proof} \begin{remark} There are other distance functions between Riemannian manifolds up to isometry, such as the Lipschitz, uniform or Gromov-Hausdorff distance (e.g., \cite{Gromov}, \cite{Burago}), the distances $d_t$ and $\delta_t$ of B\'erard-Besson-Gallot (\cite{BBG}) and the spectral distances of Fukaya (\cite{Fukaya}) and Kasue-Kumura (\cite{KK1}). 
These distances pose various computational challenges --- in the previous sections, we hope to at least have hinted at the computational aspects of the ``zeta-distance'' we propose. We finally observe that such distances play an increasingly important role in physics and cosmology (compare, e.g., \cite{MS0}, \cite{Douglas}). \end{remark} \begin{se} We conclude by comparing our ``zeta-distance'' $d_\zeta$ to the other distances. For this, we recall in the following diagram the relation between various forms of convergence: \begin{center} \begin{figure}[h] $ \xymatrix{ & \mbox{Lipschitz-conv.} \ar@{=>}[dl]_{\mbox{\hspace*{-2cm} {\footnotesize simple spectrum}}}^{\mbox{\cite{BBG}}} \ar@{=>}[d]^{\mbox{\cite{BBG}}} \ar@{=>}[dr]^{\mbox{\cite{Burago}}}\\ d_t\mbox{-conv.} & \delta_t\mbox{-conv.} & \mbox{unif.\ conv.} \ar@{=>}[d]^{\mbox{\cite{Burago}}} \ar@{<=>}[r] & \mathbf{d_\zeta}\mbox{\textbf{-conv.}} \\ \mbox{Kasue-Kumura-conv.} \ar@{=>}[rr]_{\mbox{\cite{KK1}}} & & \mbox{GH-conv.} & } $ \caption{Some relations between convergence in various distances (in a fixed $C^{\infty}$-type)} \label{figgraph} \end{figure} \end{center} \end{se} \begin{proposition} Let $\mathcal{M}$ denote the set of closed Riemannian manifolds up to isometry. Then $d_\zeta$ induces the topology of uniform convergence in $C^{\infty}$-diffeomorphic types on $\mathcal{M}$, i.e., if two such manifolds are not $C^{\infty}$-diffeomorphic, then they are at infinite distance, and otherwise, a sequence of manifolds converges if and only if there is a sequence of $C^{\infty}$-diffeomorphisms between them whose distortion tends to zero. \end{proposition} \begin{proof} Suppose $(X_i,g_i) \rightarrow (X,g)$ converges in $d_\zeta$. This means that there is a sequence of $C^{\infty}$-diffeomorphisms $\varphi_i \, : \, (X_i,g_i) \rightarrow (X,g)$ whose lengths converge to zero. 
We precompose this with $\varphi_i^{-1}$: $$ \xymatrix{ (X_i,g_i) \ar@{->}[r]^{\varphi_i} \ar@{<-}[d]^{\varphi_i^{-1}} & (X,g) \\ (X,(\varphi_i^{-1})^*(g_i)) \ar@{->}[ru]_{\mbox{Id}} } $$ Hence we have a sequence of metrics $h_i:=(\varphi_i^{-1})^*(g_i)$ for which the length of the identity map converges to $0$. Taking residues in the two-variable zeta functions, we find that $h_i \rightarrow g$ uniformly. \end{proof} \bibliographystyle{amsplain}
\section{Introduction} The initial impetus for this text was given by an observation by John Baez \cite[p. 200]{baez} in his celebrated paper {\em The Octonions}, concerning the construction of the exceptional Lie algebra $\ee_8$ as the direct sum of $\spin(16)$ and the real half-spin representation $\Sigma^+_{16}$ of $\Spin(16)$. After showing that the verification of the Jacobi identity reduces to the case where all three vectors lie in $\Sigma^+_{16}$, he writes {\em``...unfortunately, at this point a brute-force calculation seems to be required. For two approaches that minimize the pain, see the books by Adams \cite{adams} and by Green, Schwartz and Witten \cite{gsw}. It would be nice to find a more conceptual approach."} This problem can be rephrased in terms of so-called $s$-representations, introduced by Kostant \cite{ko56} and studied by Heintze and Ziller \cite{hz} or Eschenburg and Heintze \cite{eh99} among others. Roughly speaking, an $s$-representation is a real representation of some Lie algebra of compact type $\hh$ which can be realized as the isotropy representation of a symmetric space. It turns out that the obstruction for a given representation $\mm$ to be an $s$-representation is encoded in some invariant element in the fourth exterior power of $\mm$, defined by the image of the Casimir element from the universal enveloping algebra of $\hh$. In particular this obstruction automatically vanishes when $(\Lambda^4\mm)^\hh=0$. Now, this condition could seem {\em a priori} quite restrictive. For instance, it can never hold if the representation carries a complex (or all the more quaternionic) structure. This is due to the existence of universal elements in $(\Lambda^4\mm)^\hh$ which are inherent to the structure of $\mm$. 
Nevertheless, if these are the only invariant elements, one can adapt our construction by adding a $\uu(1)$ or $\sp(1)$ summand to $\hh$, depending on whether $\mm$ is complex or quaternionic, in order to ``kill" the obstruction given by the Casimir element of $\hh$, and turn $\mm$ into an $s$-representation of $\hh\oplus\uu(1)$ or $\hh\oplus\sp(1)$ respectively. These results are summarized in Propositions \ref{complex} and \ref{quaternionic} below. The idea of adding a summand to $\hh$ in order to obtain $s$-representations already appears in \cite{hz}, in a setting which presents many similarities with ours. However, the criteria for $s$-representations obtained by Heintze and Ziller are somewhat complementary to ours. In the complex setting, for example, Theorem 2 in \cite{hz} can be stated as follows: If $\mm$ is a faithful complex representation of $\hh$ such that the orthogonal complement of $\hh\subset \llbracket \Lambda^{1,1} \mm \rrbracket$ is irreducible, then $\mm$ is an $s$-representation with respect to $\hh\oplus\uu(1)$. In contrast, in Theorem \ref{com} we prove a similar statement, but under the different assumption that $\Lambda^2\mm$ is irreducible. As applications of our results we then obtain in Theorem \ref{com} a geometrical proof of the classification by Dynkin of complex representations with irreducible second exterior power as well as a classification result in Theorem \ref{33} for representations with quaternionic structure whose second exterior power decomposes in only two irreducible summands. This classification is based on a correspondence between such representations and $s$-representations already pointed out by Wang and Ziller in \cite[p. 257]{wz84} where a framework relating symmetric spaces and strongly isotropy irreducible homogeneous spaces is introduced. 
These results can also be compared to the classification by Calabi and Vesentini of complex representations whose second symmetric power has exactly two irreducible components, which was based on the classification of symmetric spaces and was reproved by Wang and Ziller \cite{wz93} using representation-theoretic methods. In the last section we apply the above ideas in order to give a purely conceptual proof for the existence of $\ee_8$ in Proposition \ref{41}. The same method, using spin representations, also works for the other exceptional simple Lie algebras except $\gg_2$. Conversely, we show that most spin representations which are isotropy representations of equal-rank homogeneous spaces are actually $s$-representations and define the exceptional Lie algebras. Note that an alternative geometrical approach to the construction of $\ff_4$ and $\ee_8$ using the so-called Killing superalgebra was recently proposed by J. Figueroa-O'Farrill \cite{of08}. {\sc Acknowledgments.} We are grateful to Jost-Hinrich Eschenburg and Wolfgang Ziller for having brought to our attention the rich literature on $s$-representations and to Steffen Koenig for his helpful remarks about some classification results in representation theory. Special thanks are due to John~Baez whose questions motivated our work and who pointed out other related references. \section{A characterization of $s$-representations} Let $(\hh,B_\hh)$ be a real Lie algebra of compact type endowed with an $\ad_\hh$-invariant Euclidean product and let $\rho:\hh\to \End(\mm)$ be a faithful irreducible representation of $\hh$ over $\RM$, endowed with an invariant Euclidean product $B_{\mm}$ (defined up to some positive constant). In order to simplify the notation we will denote $\rho(a)(v)$ by $av$ for all $a\in\hh$ and $v\in \mm$. Our first goal is to find necessary and sufficient conditions for the existence of a Lie algebra structure on $\gg:=\hh\oplus\mm$ compatible with the above data. 
\begin{elem} \label{l1} There exists a unique $(\RM$-linear$)$ bracket $[.,.]:\Lambda^2\gg\to\gg$ on $\gg:=\hh\oplus\mm$ such that \begin{enumerate} \item $[.,.]$ restricted to $\hh$ is the usual Lie algebra bracket. \item $[a,v]=-[v,a]=av$ for all $a\in\hh$ and $v\in \mm$. \item $[\mm,\mm]\subset\hh$. \item $B_{\hh}(a,[v,w])=B_{\mm}(av,w)$ for all $a\in\hh$ and $v,w\in \mm$. \end{enumerate} \end{elem} \bp The uniqueness is clear. For the existence we just need to check that the restriction of $[.,.]$ to $\mm\otimes\mm$ given by (4) is skew-symmetric. This follows from the $\ad_{\hh}$-invariance of $B_{\mm}$. \r \begin{ede}({\em cf.} \cite{ko56}) An irreducible representation $\mm$ of a normed Lie algebra $(\hh,B_\hh)$ such that the bracket given by Lemma \ref{l1} defines a Lie algebra structure on $\gg:=\hh\oplus\mm$ is called an {\em $s$-representation}. The Lie algebra $\gg$ is called the {\em augmented} Lie algebra of the $s$-representation $\mm$. \end{ede} Note that the above construction was studied in greater generality by Kostant \cite{ko}, who introduced the notion of {\em orthogonal representation of Lie type}, satisfying conditions (1), (2) and (4) in Lemma \ref{l1}. One can compare his characterization of representations of Lie type (\cite{ko}, Thm. 1.50 and 1.59) with Proposition \ref{cas} below. \begin{ere} If $\gg$ is the augmented Lie algebra of an $s$-representation of $(\hh,B_\hh)$ on $\mm$, then the involution $\s:=\id_\hh-\id_\mm$ is an automorphism of $\gg$, and $(\gg,\hh,\s)$ is a symmetric pair of compact type. Conversely, every irreducible symmetric pair of compact type can be obtained in this way. \end{ere} In the sequel $\{e_i\}$ will denote a $B_\mm$-orthonormal basis of $\mm$ and $\{a_k\}$ a $B_\hh$-orthonormal basis of $\hh$. \begin{elem}\label{cs} If an irreducible $s$-representation $(\mm,B_\mm)$ of $\hh$ has a $\hh$-invariant orthogonal complex structure, then $\hh$ is not semi-simple. 
\end{elem} \bp If $J$ denotes the complex structure of $\mm$ commuting with the $\hh$-action, then for all $a\in\hh$ and $v,w\in\mm$ we have $$B_\hh(a,[Jv,w])=B_\mm(aJv,w)=B_\mm(Jav,w)=-B_\mm(av,Jw)=-B_\hh(a,[v,Jw]),$$ whence \beq\label{j}[Jv,w]=-[v,Jw].\eeq We now consider the element \beq\label{daj}a_J:=\sum_i[e_i,Je_i]\in\hh\eeq which clearly belongs to the center of $\hh$. In order to show that it does not vanish, we compute, using the Jacobi identity, $$a_Jv=[a_J,v]=\sum_i[[e_i,Je_i],v]=-\sum_i([[Je_i,v],e_i]+[[v,e_i],Je_i])=-2\sum_i[[Je_i,v],e_i].$$ We set $v=e_j$, take the scalar product with $Je_j$ and sum over $j$. Using \eqref{j} we get \beq\label{aj}\sum_jB_\mm(a_Je_j,Je_j)=-2\sum_{i,j}B_\hh([Je_i,e_j],[e_i,Je_j])=2\sum_{i,j}B_\hh([Je_i,e_j],[Je_i,e_j]).\eeq If $a_J=0$, this equation would imply that the bracket vanishes on $\mm$, whence $$0=B_\hh(a,[v,w])=B_\mm(av,w),\qquad \forall a\in\hh,\ \forall v,w\in\mm,$$ which is clearly impossible. \r The element $a_J$ defined in the proof above plays an important r\^ole in the theory of Hermitian symmetric spaces. For now, let us remark that because of the irreducibility of $\mm$ and of the fact that $a_J$ belongs to the center of $\hh$, there exists a non-zero constant $\mu$, depending on the choice of $B_\mm$, such that \beq\label{aj1} a_Jv=\mu Jv, \qquad\forall v\in\mm, \eeq in other words $a_J$ acts like $\mu J$ on $\mm$. Using the $\hh$-invariant scalar product $B_\mm$, the representation $\rho$ induces a Lie algebra morphism $\tilde\rho:\hh\to\L^2\mm\subset \L^{even}\mm$, $a\mapsto\tilde\rho(a):=\tilde a$ where \beq\label{rho} \tilde a(u,v)=B_\mm(au,v)=B_\hh(a,[u,v]). \eeq For later use, we recall that the induced Lie algebra action of $\hh$ on exterior 2-forms is $a_*(\tau)(u,v):=-\tau(au,v)-\tau(u,av)$, for all $\tau\in\Lambda^2\mm$, whence, in particular, the following formula holds: \beq\label{ind} a_*\tilde b=\widetilde{[a,b]},\qquad\forall a,b\in\hh. 
\eeq The Lie algebra morphism $\tilde\rho$ extends to an algebra morphism $\tilde\rho:\U(\hh)\to \L^{even}\mm$, where $\U(\hh)$ denotes the enveloping algebra of $\hh$. This morphism maps the Casimir element $\Cas_\hh=\sum_k (a_k)^2$ of $\hh$ to an invariant element $\tilde\rho(\Cas_\hh)\in(\L^4\mm)^\hh$. It was remarked by Kostant \cite{ko56} and Heintze and Ziller \cite{hz} that this element is exactly the obstruction for $\mm$ to be an $s$-representation. We provide the proof of this fact below for the reader's convenience. \begin{epr}[\cite{hz}, \cite{ko56}] \label{cas} An irreducible representation $(\mm,B_\mm)$ of $\hh$ is an $s$-representation if and only if $\tilde\rho(\Cas_\hh)=0$. \end{epr} \bp We need to check that the Jacobi identity for the bracket defined in Lemma \ref{l1} on $\hh\oplus\mm$ is equivalent to the vanishing of $\tilde\rho(\Cas_\hh)$. Note that the Jacobi identity is automatically satisfied by $[.,.]$ whenever one of the three entries belongs to $\hh$. We now take four arbitrary vectors $u,v,w,z\in\mm$ and compute the obstruction $$\mathcal{J}(u,v,w):=[[u,v],w]+ [[v,w],u]+ [[w,u],v]$$ using \eqref{rho} as follows: \bea B_\mm(\mathcal{J}(u,v,w),z)&=&B_\mm([[u,v],w]+ [[v,w],u]+ [[w,u],v],z)\\ &=&B_\hh([u,v],[w,z])+B_\hh([v,w],[u,z])+B_\hh([w,u],[v,z])\\ &=&\sum_k\big(B_\hh(a_k,[u,v])B_\hh(a_k,[w,z])+B_\hh(a_k,[v,w])B_\hh(a_k,[u,z])\\ &&\qquad+B_\hh(a_k,[w,u])B_\hh(a_k,[v,z])\big)\\ &=&\sum_k(\tilde a_k(u,v)\tilde a_k(w,z)+\tilde a_k(v,w)\tilde a_k(u,z)+\tilde a_k(w,u)\tilde a_k(v,z))\\ &=&\frac12\sum_k(\tilde a_k\wedge\tilde a_k)(u,v,w,z)=\frac12\tilde\rho(\Cas_\hh)(u,v,w,z). \eea \r The above result yields a simple criterion for $s$-representations: \begin{ecor}\label{c1} If $(\L^4\mm)^\hh=0$, then $\mm$ is an $s$-representation. \end{ecor} Conversely, one could ask whether every $s$-representation arises in this way. 
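As a concrete illustration of Corollary \ref{c1} in the simplest case: for $\hh=\so(3)$ acting on $\mm=\RM^3$ one has $\Lambda^4\mm=0$, so the bracket of Lemma \ref{l1} must satisfy the Jacobi identity; one recovers the symmetric pair $\so(4)=\so(3)\oplus\RM^3$ of the round sphere. The following sketch (plain Python; the only input is the identification of both $\hh$ and $\mm$ with $\RM^3$ via the cross product, which matches $B_\hh$, $B_\mm$ with the standard scalar products) verifies the Jacobi identity on a basis:

```python
from itertools import product

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def bracket(X, Y):
    # An element of g = h + m is a 6-tuple (a, v) with a in h = so(3)
    # (identified with R^3, acting on m = R^3 by v -> a x v) and v in m.
    # The bracket [v, w] := v x w (valued in h) realizes condition (4) of
    # Lemma l1: B_h(a, [v, w]) = B_m(a x v, w) for the standard products.
    a, v = X[:3], X[3:]
    b, w = Y[:3], Y[3:]
    h_part = [p + q for p, q in zip(cross(a, b), cross(v, w))]
    m_part = [p + q for p, q in zip(cross(a, w), cross(v, b))]
    return h_part + m_part

E = [[1 if i == j else 0 for j in range(6)] for i in range(6)]  # basis of g
for X, Y, Z in product(E, repeat=3):
    jacobi = [p + q + r for p, q, r in zip(bracket(bracket(X, Y), Z),
                                           bracket(bracket(Y, Z), X),
                                           bracket(bracket(Z, X), Y))]
    assert all(c == 0 for c in jacobi)  # no obstruction: Lambda^4(R^3) = 0
```

The same template can be used to test other candidate $s$-representations numerically: by Proposition \ref{cas}, a failure of these assertions is exactly the non-vanishing of $\tilde\rho(\Cas_\hh)$.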
One readily sees that this is not the case, since the condition $(\L^4\mm)^\hh=0$ can only hold if $\hh$ is simple and $\mm$ is a purely real representation ({\em cf.} Lemma \ref{tensor} below). Nevertheless, under these restrictions, the converse to Corollary~\ref{c1} also holds, {\em cf.} Proposition \ref{rem} below. \begin{elem}\label{tensor} Let $\mm$ be an irreducible real representation of $(\hh,B_\hh)$ with $\dim_\RM(\mm)\ge4$. Then $(\L^4\mm)^\hh$ is non-zero if either $\mm$ has a complex structure or $\hh$ is not simple. \end{elem} \bp If $J$ is a $\hh$-invariant complex structure on $\mm$, then $B_\mm(J.,J.)$ is a positive definite $\hh$-invariant scalar product on $\mm$ so by the irreducibility of $\mm$ there is some positive constant $\nu$ such that $B_\mm(Ju,Jv)=\nu B_\mm(u,v)$ for every $u,v\in\mm$. Applying this relation to $Ju,Jv$ yields $\nu^2=1$, so $\nu=1$, {\em i.e.} $J$ is orthogonal. The corresponding 2-form $\omega\in\Lambda^2\mm$ defined by \beq\label{o}\omega(u,v):=B_\mm(Ju,v) \eeq is $\hh$-invariant. Moreover, since $\dim_\RM(\mm)\ge4$, the four-form $\omega\wedge\omega$ is a non-zero element in $(\L^4\mm)^\hh$. Assume now that $\hh=\hh_1\oplus\hh_2$ is not simple. Then $\mm=\mm_1\otimes\mm_2$ is the tensor product of irreducible representations $\mm_i$ of $\hh_i$. We endow each $\mm_i$ with an $\hh_i$-invariant scalar product and identify $\mm$ with the representation $L(\mm_1,\mm_2)$ of linear maps between $\mm_1$ and $\mm_2$. If $u\in L(\mm_1,\mm_2)$ we denote by $ u^*\in L(\mm_2,\mm_1)$ its adjoint. We now consider the element $R\in\Sym^2(\Lambda^2\mm)$ given by $$R(u,v,w,z):=\tr((uv^*-vu^*)(wz^*-zw^*)), $$ and the four-form $\Omega:=\b(R)$, where $\b$ is the Bianchi map $\b:\Sym^2(\Lambda^2(\mm))\to\Lambda^4(\mm)$ defined by $$\b(T)(u,v,w,z):=T(u,v,w,z)+T(v,w,u,z)+T(w,u,v,z).$$ It is clear that $\Omega$ belongs to $(\L^4\mm)^\hh$ (it is actually $\so(\mm_1)\oplus\so(\mm_2)$-invariant). 
To see that it is non-zero, take orthonormal bases $\{x_i\}$, $\{y_j\}$ of $\mm_1$ and $\mm_2$ and check that $\Omega(z_{11}, z_{12},z_{21},z_{22})=2$ for $z_{ij}:=x_i\otimes y_j$. \r In view of Lemma \ref{tensor}, it would be interesting to relax the condition $(\L^4\mm)^\hh=0$ in Corollary~\ref{c1} in order to obtain a criterion which could cover also the cases of complex or quaternionic representations. Let us first clarify the terminology. It is well-known that if $\rho:\hh\to\End(\mm)$ is an irreducible $\RM$-representation of $\hh$, the centralizer of $\rho(\hh)$ in $\End(\mm)$ is an algebra isomorphic to $\RM$, $\CM$ or $\HM$. Correspondingly, we will say that $\mm$ has real, complex or quaternionic type respectively. Remark that if a real representation $\mm$ of a semi-simple Lie algebra $(\hh,B_\hh)$ of compact type has a complex structure $I$, then it can not be an $s$-representation by Lemma \ref{cs}. Nevertheless, it turns out that the natural extension of $\rho$ to $\hh\oplus\uu(1)$ defined on the generator $\ii\in\uu(1)$ by $\rho(\ii)=I\in\End(\mm)$ can be an $s$-representation provided the space of invariant four-forms on $\mm$ is one-dimensional. More precisely, we have the following: \begin{epr} \label{complex} Let $\mm$ be a real representation of complex type of a semi-simple Lie algebra $(\hh,B_\hh)$ of compact type and consider the representation of $\hh\oplus\uu(1)$ on $\mm$ induced by the complex structure. If $\dim_\RM(\L^4\mm)^{\hh\oplus\uu(1)}=1$, then there exists a unique positive real number $r$ such that $\mm$ is an $s$-representation of $\hh\oplus \uu(1)$ with respect to the scalar product $B_\hh+rB_{\uu(1)}$. We denote here by $B_{\uu(1)}$ the scalar product on $\uu(1)$ satisfying $B_{\uu(1)}( \ii,\ii)=1$. \end{epr} \bp For every $a,b\in\hh$ we have $\tr(abI)=0$ since $a$ is skew-symmetric and $bI$ is symmetric as endomorphisms of $\mm$. Consequently $\tr([a,b]I)=\tr(abI)-\tr(baI)=0$. 
Since $\hh$ is semi-simple we have $[\hh,\hh]=\hh$, so $\tr(aI)=0$ for all $a\in\hh$. Let $\omega\in\Lambda^2\mm$ be the two-form corresponding to $I$ by \eqref{o}. An orthonormal basis of $(\hh\oplus \uu(1),B_\hh+rB_{\uu(1)})$ is $\{a_k,\tfrac1{\sqrt r}\ii\}$. The element in $\Lambda^2\mm$ induced by $\ii$ being $\tilde\rho(\ii)=\omega$, the image of the Casimir element corresponding to $B_\hh+rB_{\uu(1)}$ in $\Lambda^4\mm$ is $\tilde\rho(\Cas_\hh)+\tfrac1r\omega\wedge\omega$. Both summands are clearly $\hh$-invariant. To see that they are $\uu(1)$-invariant, note that by \eqref{ind}, both $\omega$ and the 2-forms $\tilde a\in\Lambda^2\mm$ for $a\in\hh$ are invariant under the induced action of $\uu(1)$ on $\Lambda^2\mm$. The hypothesis thus shows that there exists some real constant $c$ with \beq\label{ca}\tilde\rho(\Cas_\hh)= c\,\omega\wedge\omega.\eeq It remains to show that $c$ is negative (since then one can apply Proposition \ref{cas} for $r=-1/c$). Let $\lambda:\Lambda^k\mm\to\Lambda^{k-2}\mm$ denote the metric adjoint of the wedge product with $\omega$. It satisfies $$\lambda(\tau)=\tfrac12\sum_iIe_i\lrcorner e_i\lrcorner\tau$$ for every $\tau\in\Lambda^k\mm$, where $\lrcorner$ denotes the inner product. Let $2n\ge 4$ be the real dimension of $\mm$. Then $\lambda(\omega)=n$ and $\lambda(\omega\wedge\omega)=(2n-2)\omega$. In terms of $\lambda$, the relation $\tr(aI)=0$ obtained above just reads $\lambda(\tilde a)=0$ for all $a\in\hh$. We then get $$\lambda(\tilde\rho(\Cas_\hh))=\lambda(\sum_k(\tilde a_k\wedge\tilde a_k))=\sum_{i,k}(a_ke_i\wedge a_kIe_i),$$ whence $$\lambda^2(\tilde\rho(\Cas_\hh))=-\frac12\sum_{i,k}B_\mm(Ia_ke_i,Ia_ke_i).$$ From \eqref{ca} we thus find $c(2n^2-2n)=-\frac12\sum_{i,k}B_\mm(Ia_ke_i,Ia_ke_i)$, so $c$ is negative.\r We consider now the quaternionic case. It turns out that a real representation $\mm$ of quaternionic type is never an $s$-representation. 
Indeed, if $\mm$ is an $s$-representation, it follows from the proof of Lemma \ref{cs} that the three elements $a_I,\ a_J$ and $a_K$ defined from the quaternionic structure by \eqref{daj} belong to the center of $\hh$, so in particular $a_I$ and $a_J$ commute. On the other hand, \eqref{aj1} shows that $a_I$ and $a_J$ anti-commute when acting on $\mm$. However, like in the complex case, there are situations when one may turn $\mm$ into an $s$-representation by adding an extra summand $\sp(1)$ to $\hh$, and making it act on $\mm$ via the quaternionic structure. \begin{epr} \label{quaternionic} Let $\mm$ be a real representation of quaternionic type of a Lie algebra $(\hh,B_\hh)$ of compact type and consider the representation of $\hh\oplus\sp(1)$ on $\mm$ induced by the quaternionic structure. If $\dim_\RM(\L^4\mm)^{\hh\oplus\sp(1)}=1$, then there exists a unique positive real number $r$ such that the induced representation of $(\hh\oplus \sp(1),B_\hh+rB_{\sp(1)})$ on $\mm$ is an $s$-representation, where $B_{\sp(1)}$ denotes the scalar product of $\sp(1)$ such that $\ii,\jj,\kk$ is an orthonormal basis. \end{epr} \bp Let $\omega_I$, $\omega_J$ and $\omega_K$ denote the elements in $\Lambda^2\mm$ induced by the quaternionic structure $\{\ii,\jj,\kk\}$ via \eqref{o}. Like before, the image of the Casimir element corresponding to $B_\hh+rB_{\sp(1)}$ in $\Lambda^4\mm$ is $$\tilde\rho(\Cas_\hh)+\tfrac1r(\omega_I\wedge\omega_I+\omega_J\wedge\omega_J+\omega_K\wedge\omega_K).$$ Both terms are clearly $\hh$-invariant by \eqref{ind}. 
To see that they are $\sp(1)$-invariant, we use \eqref{ind} again to see that the induced action of $\sp(1)$ on $\Lambda^2\mm$ satisfies $$\ii_*(\tilde a)=0\qquad \forall a\in\hh\qquad\hbox{and}\qquad \ii_*(\omega_I)=0,\ \ii_*(\omega_J)=2\omega_K,\ \ii_*(\omega_K)=-2\omega_J,$$ whence $\ii_*(\tilde\rho(\Cas_\hh))=0$ and $\ii_*(\omega_I\wedge\omega_I+\omega_J\wedge\omega_J+\omega_K\wedge\omega_K)= 4\omega_J\wedge\omega_K-4\omega_K\wedge\omega_J=0.$ The invariance with respect to $\jj_*$ and $\kk_*$ can be proved in the same way. The hypothesis thus shows that there exists some real constant $c$ with \beq\label{ca1}\tilde\rho(\Cas_\hh)=c(\omega_I\wedge\omega_I+\omega_J\wedge\omega_J+\omega_K\wedge\omega_K),\eeq and again it remains to show that $c$ is negative. Let $\lambda_\ii:\Lambda^k\mm\to\Lambda^{k-2}\mm$ denote the metric adjoint of the wedge product with $\omega_I$. From the computations in the complex case we have $$\lambda_\ii^2(\tilde\rho(\Cas_\hh))=-\frac12\sum_{i,k}B_\mm(Ia_ke_i,Ia_ke_i)\qquad\hbox{and}\qquad \lambda_\ii^2(\omega_I\wedge\omega_I)=2n(n-1),$$ where $2n$ denotes the real dimension of $\mm$. An easy computation gives $$\lambda_\ii^2(\omega_J\wedge\omega_J)=\lambda_\ii^2(\omega_K\wedge\omega_K)=2n,$$ so from \eqref{ca1} we get $c(2n^2+2n)=-\frac12\sum_{i,k}B_\mm(Ia_ke_i,Ia_ke_i)$, showing that $c$ is negative.\r We can summarize Corollary \ref{c1} and Propositions \ref{complex}, \ref{quaternionic} by saying that a certain condition on the invariant part of $\Lambda^4 \mm$ is sufficient for the existence of an $s$-representation on $\mm$. Conversely one might ask whether this condition is also necessary for a given $s$-representation. It turns out that this is always the case if $\hh$ is simple. More precisely, we have: \begin{epr}\label{rem} Let $(\hh,B_\hh)$ be a simple Lie algebra of compact type and $\mm$ an irreducible representation of $\hh$ over $\RM$. 
\begin{enumerate} \item If $\mm$ is an $s$-representation of $\hh$, then $(\L^4\mm)^\hh=0$. \item If $\mm$ has complex type and is an $s$-representation of $(\hh\oplus \uu(1),B_\hh+rB_{\uu(1)})$ for some positive real number $r$, then $\dim_\RM(\L^4\mm)^{\hh \oplus \uu(1)}=1$. \item If $\mm$ has quaternionic type and is an $s$-representation of $(\hh\oplus \sp(1),B_\hh+rB_{\sp(1)})$ for some positive real number $r$, then $\dim_\RM(\L^4\mm)^{\hh\oplus\sp(1)}=1$. \end{enumerate} \end{epr} \bp The statement of (1) is already contained in \cite{ko56}. For the proof one has to use the well-known fact that the dimension of $(\Lambda^4 \mm)^\hh$ is just the fourth Betti number of the corresponding symmetric space $G/H$. Now, since in the case of compact symmetric spaces the Poincar\'e polynomials are known explicitly, it is easy to check that $\bb_4(G/H)$ vanishes. The proof in cases (2) and (3) is similar, and is based on the computation of the fourth Betti numbers of Hermitian symmetric spaces and Wolf spaces. \r \section{Applications to complex representations} In this section we will give some applications of our characterization of $s$-representations in order to classify complex representations whose exterior powers have certain irreducibility properties. From now on $\mm$ will denote a {\em complex} irreducible representation of some Lie algebra $\hh$ of compact type. We will study three instances, corresponding to the cases where $\mm$ has a real structure, $\mm$ is purely complex, or $\mm$ has a quaternionic structure. \subsection{Representations with a real structure} Our main result in this case is the classification of all complex representations $\mm$ with real structure whose fourth exterior power has no trivial summand. Let $\llbracket \mm\rrbracket$ denote the real part of $\mm$, {\em i.e.} the fixed point set of the real structure.
Then $\llbracket \mm\rrbracket$ is a real representation of $\hh$ and $\mm=\llbracket \mm\rrbracket\otimes_\RM\CM$. \begin{ath}\label{l4} Let $\mm$ be a complex irreducible faithful representation with real structure of a Lie algebra $\hh$ of compact type such that $(\Lambda^4 \mm)^\hh = 0$. Then $\hh$ is simple and the pair $(\hh, \llbracket \mm\rrbracket)$ belongs to the following list: \begin{center} \begin{tabular}{|l|l|l|} \hline Helgason's type & $\hh$ & $\llbracket \mm\rrbracket$ \\ \hline \hline & $\hh$ & $\hh$ \\ \hline \rm{BD I} & $\so(n), n \neq 4$ & $\RM^n$ \\ \hline \rm{A I} & $\so(n), n \neq 4$ & $\Sym^2_0 \, \RM^n$ \\ \hline \rm{A II} & $\sp(n)$ & $\Lambda^2_0\, \HM^n$ \\ \hline \rm{F II} & $\spin(9)$ & $\Sigma_9$ \\ \hline \rm{E I} & $\sp(4)$ & $\Lambda^4_0 \, \HM^4$ \\ \hline \rm{E IV} & $F_4$ & $V_{26}$ \\ \hline \rm{E V} & $\su(8)$ & $\llbracket\Lambda^4 \CM^8\rrbracket$ \\ \hline \rm{E VIII} & $\spin(16)$ & $\Sigma^+_{16}$ \\ \hline \end{tabular} \end{center} where $V_{26}$ denotes the real $26$-dimensional irreducible representation of $F_4$ and $\llbracket \mm\rrbracket = \hh$ in the first row denotes the adjoint representation of $\hh$. \end{ath} \begin{proof} Since $(\Lambda^4 \mm)^\hh=(\Lambda^4_\RM\llbracket \mm\rrbracket)^\hh\otimes\CM$, the hypothesis implies that $(\Lambda^4_\RM\llbracket \mm\rrbracket)^\hh=0$. From Lemma~\ref{tensor} it follows that $\hh$ has to be simple, and Corollary~\ref{c1} implies that $\llbracket \mm\rrbracket$ is an $s$-representation and thus $\llbracket \mm\rrbracket$ is the isotropy representation of an irreducible symmetric space $G/H$ of compact type with $H$ simple. The list of possible pairs $(\hh, \llbracket \mm\rrbracket)$ then follows from the list of irreducible symmetric spaces of compact type \cite[p. 312-314]{besse}. 
Here the adjoint representation on $\llbracket \mm\rrbracket = \hh$ corresponds to the isotropy representation on symmetric spaces of type II, {\em i.e.} of the form $(H\times H)/ H$. Conversely, the fourth exterior powers of the representations in the table above have no invariant elements by Proposition \ref{rem}. In some cases a direct proof can also be given; see Proposition \ref{41} below. \end{proof} \begin{ere}\label{outer} We will see later on in Proposition \ref{spin} that the real half-spin representation $\Sigma_8^+$ is also an $s$-representation, and has no invariant elements in its fourth exterior power ({\em cf.} also Proposition \ref{rem}). One may thus wonder why it does not appear in the above table. The explanation is that it actually appears in a disguised form, as the standard representation of $\so(8)$ on $\RM^8$. To make this more precise, note that $n$-dimensional representations are usually classified up to isomorphism, {\em i.e.} up to composition with some element in the inner automorphism group of $\so(n)$. On the other hand, if one wants to classify all pairs $(\hh,\mm)$ with $(\Lambda^4\mm)^\hh=0$, then there is another group acting on the space of solutions: the outer automorphism group of $\so(n)$. Our classification above is up to the action of this group. In particular, the triality phenomenon in dimension 8 can be interpreted by saying that the outer automorphism group of $\so(8)$ is isomorphic to the permutation group of the set $\{\RM^8,\Sigma_8^+,\Sigma_8^-\}$ of 8-dimensional representations of $\so(8)\cong\spin(8)$. The same remark also applies below to complex representations, where taking the conjugate of a representation can be viewed as composing it with the non-trivial outer automorphism of $\su(n)$. \end{ere} \subsection{Representations with irreducible second exterior power} As another application of the above ideas, we will now obtain in a simple geometrical way Dynkin's classification \cite[Thm.
4.7]{dynkin} of complex representations $\mm$ with $\Lambda^2\mm$ irreducible. \begin{ath}\label{com} Let $\mm$ be a complex irreducible faithful representation of a Lie algebra $\hh$ of compact type such that $\Lambda^2 \mm$ is irreducible. Then either $\hh = \hh_0$ is simple, or $\hh = \hh_0 \oplus \uu(1)$ with $\hh_0$ simple, and the pair $(\hh_0, \mm)$ belongs to the following list: \begin{center} \begin{tabular}{|l|l|l|} \hline Helgason's type & $\hh_0$ & $\mm$ \\ \hline \hline \rm{A III} & $\su(n)$ & $\CM^n$ \\ \hline \rm{D III} & $\su(n)$ & $ \Lambda^2\CM^n$ \\ \hline \rm{C I} & $\su(n)$ & $\Sym^2 \CM^n$ \\ \hline \rm{BD I} & $\so(n), n \neq 4$ & $\RM^n \otimes \CM$ \\ \hline \rm{E III} & $\spin(10)$ & $\Sigma_{10}$ \\ \hline \rm{E VII} & $\E_6$ & $V_{27}$ \\ \hline \end{tabular} \end{center} where $V_{27}$ denotes the $27$-dimensional irreducible representation of $\E_6$. \end{ath} \begin{proof} If $\hh$ is not simple, it may be written as the sum of two ideals, $\hh = \hh_0 \oplus \hh_1$, and $\mm$ is the tensor product representation $\mm = E \otimes F$. It follows that $$ \Lambda^2 \mm = \Lambda^2 (E \otimes F) \cong (\Lambda^2 E \otimes \Sym^2F)\, \oplus \, (\Sym^2 E \otimes \Lambda^2 F). $$ Hence $\Lambda^2 \mm$ can only be irreducible if one factor, say $F$, is one-dimensional. Since $F$ is a faithful representation, one must have $\hh_1 = \uu(1)$. This argument shows that every ideal of $\hh$ has either dimension or co-dimension at most one. In particular, $\hh_0$ is simple. Consider now the real representation $\mm^\RM$ of $\hh_0$ obtained by forgetting the complex multiplication in $\mm$. The Lie algebra $\uu(1)$ acts on the fourth exterior power $\Lambda^4_\RM \mm^\RM$ by extending the action of the complex structure $J$ from $\mm^\RM$. We claim that the space of invariant elements $ (\Lambda_\RM^4 \mm^\RM)^{\hh_0 \oplus \uu(1)}$ is one-dimensional. 
If we denote as usual by $\Lambda^{p,q}\mm:=\Lambda^p\mm\otimes\Lambda^q\bar \mm$, then $$ \Lambda_\RM^4 \mm^\RM = \llbracket \Lambda^{4,0} \mm \rrbracket \oplus \llbracket \Lambda^{3,1} \mm \rrbracket \oplus \llbracket \Lambda^{2,2} \mm \rrbracket. $$ Since $J^2$ acts as $-(p-q)^2\id$ on $\llbracket \Lambda^{p,q}\mm \rrbracket$, it follows that the $\uu(1)$-invariant part of $\Lambda^4_\RM \mm^\RM$ is the third summand $\llbracket \Lambda^{2,2} \mm \rrbracket = \llbracket \Lambda^2 \mm \otimes \Lambda^2 \bar{ \mm}\rrbracket = \llbracket \End(\Lambda^2 \mm) \rrbracket$. Consequently, \beq\label{u1} (\Lambda_\RM^4 \mm^\RM)^{\hh_0 \oplus \uu(1)} = \llbracket \End(\Lambda^2 \mm) \rrbracket^{\hh_0}=\llbracket (\End(\Lambda^2 \mm))^{\hh_0} \rrbracket\eeq is one-dimensional since by assumption $\Lambda^2 \mm$ is irreducible as representation of $\hh$, so also of $\hh_0$. We can therefore apply Proposition~\ref{complex} to realize $\mm^\RM$ as an $s$-representation of $\hh_0\oplus\uu(1)$, so $\mm^\RM$ is the isotropy representation of some Hermitian symmetric space. Checking again the list in \cite[pp. 312-314]{besse} we obtain the possible pairs $(\hh_0, \mm)$ as stated above. Conversely, if $(\hh_0,\mm)$ belongs to the above list, then $ (\Lambda_\RM^4 \mm^\RM)^{\hh_0 \oplus \uu(1)}$ is one-dimensional by Proposition \ref{rem}, thus \eqref{u1} shows that $\Lambda^2 \mm$ is irreducible. \end{proof} \subsection{Representations with quaternionic structure} As another application we will now consider complex representations $\mm$ of $\hh$ with quaternionic structure. Such representations can be characterized by the existence of an invariant element in $\Lambda^2 \mm$, which is therefore never irreducible. Considering the $\hh$-invariant decomposition $\Lambda^2 \mm = \Lambda^2_0\mm \oplus \CM$, one can nevertheless ask whether $\Lambda^2_0 \mm$ can be irreducible. 
The classification of such representations is given by the following: \begin{ath}\label{33} Let $\mm$ be a complex irreducible faithful representation of a Lie algebra $\hh$ of compact type with a quaternionic structure, and let $\Lambda^2\mm = \Lambda^2_0\mm \oplus \CM$ be the standard decomposition of the second exterior power of $\mm$. If the $\hh$-representation $\Lambda^2_0\mm$ is irreducible then $\hh$ is simple and the pair $(\hh, \mm)$ belongs to the following list: \begin{center} \begin{tabular}{|l|l|l|} \hline Helgason's type & $\hh$ & $\mm$ \\ \hline \hline \rm{C II} & $\sp(n)$ & $\HM^n$ \\ \hline \rm{F I} & $\sp(3)$ & $ \Lambda^3_0 \HM^3$ \\ \hline \rm{G I} & $\sp(1)$ & $\Sym^3 \HM$ \\ \hline \rm{E II} & $\su(6)$ & $\Lambda^3 \CM^6$ \\ \hline \rm{E VI} & $\spin(12)$ & $\Sigma_{12}^+$ \\ \hline \rm{E IX} & $\E_7$ & $V_{56}$ \\ \hline \end{tabular} \end{center} where $V_{56}$ is the $56$-dimensional irreducible representation of $\E_7$. \end{ath} \begin{proof} Let $\ii$ denote the complex structure of $\mm$ and let $\jj$ be the quaternionic structure, {\em i.e.} a real endomorphism of $\mm^\RM$ anti-commuting with $\ii$ and satisfying $\jj^2=-\id$. Like before, if $\hh$ is not simple, one can write $\hh = \hh_0 \oplus \hh_1$, $\mm =E \otimes F$ and $$ \Lambda^2 \mm \cong (\Lambda^2 E \otimes \Sym^2F)\, \oplus \, (\Sym^2 E \otimes \Lambda^2 F). $$ If $E$ and $F$ have both dimension larger than one, then both summands in the above expression have the same property, which is impossible because of the hypothesis. Assume that one factor, say $F$, is one-dimensional. Since $F$ is a faithful representation, one must have $\hh_1 = \uu(1)$. Let $a\ne 0$ be the endomorphism of $\mm$ determined by the generator of $\uu(1)$. By the Schur Lemma there exists $z\in\CM^*$ such that $a=z\,\id$. 
Since $a$ commutes with the quaternionic structure we must have $z\in\RM$, which is impossible since $a$ has to be a skew-symmetric endomorphism of $\mm^\RM$ with respect to some $\hh$-invariant scalar product. Thus $\hh$ is simple. The Lie algebra $\sp(1)$ acts on the fourth exterior power $\Lambda^4_\RM \mm^\RM$ by extending the action of the (real) endomorphisms $\ii$ and $\jj$ of $\mm^\RM$. We claim that the space of invariant elements $(\Lambda_\RM^4 \mm^\RM)^{\hh \oplus \sp(1)}$ is one-dimensional. Using \eqref{u1} we see that $$ (\Lambda_\RM^4 \mm^\RM)^{\hh \oplus \uu(1)} =\llbracket( \End(\Lambda^2 \mm))^{\hh} \rrbracket =\llbracket (\End(\Lambda^2_0 \mm\oplus\CM))^{\hh} \rrbracket=\llbracket (\End(\Lambda^2_0 \mm))^{\hh} \rrbracket\oplus \RM$$ is two-dimensional since by assumption $\Lambda^2_0 \mm$ is irreducible. The first summand is generated by $\Omega_1:=\omega_I\wedge\omega_I$, whereas the second one is generated by $\Omega_2:=\omega_J\wedge\omega_J+\omega_K\wedge\omega_K$. Using \eqref{ind} we readily obtain $\jj_*\Omega_1=-4\omega_K\wedge\omega_I$ and $\jj_*\Omega_2=4\omega_I\wedge\omega_K$, thus showing that $(\Lambda_\RM^4 \mm^\RM)^{\hh \oplus \sp(1)}$ is one-dimensional and spanned by $\Omega_1+\Omega_2$. We can therefore apply Proposition~\ref{quaternionic} to realize $\mm^\RM$ as an $s$-representation of $\hh\oplus\sp(1)$. Consequently, $\mm$ is the isotropy representation of a Wolf space, and thus belongs to the above table by \cite[pp. 312-314]{besse}. Conversely, it is a standard fact that $\Lambda^2_0\HM^n$ is an irreducible $\sp(n)$-representation, and one can check ({\em e.g.} using the LiE software \cite{lie}) that for all other representations $\mm$ in this table, $\Lambda^2_0\mm$ is indeed irreducible. \end{proof} \section{Spin representations and exceptional Lie algebras} In this section we obtain a completely self-contained construction of exceptional simple Lie algebras based on the results in Section 2.
We will only give the details for the construction of $\E_8$ arising from the half-spin representation of $\Spin(16)$, since all the other exceptional simple Lie algebras can be constructed by similar methods using spin representations. Conversely, we give a short algebraic argument showing that the only spin representations which are $s$-representations are those giving rise to exceptional Lie algebras. \subsection{A computation-free argument for the existence of $\E_8$} As already mentioned in the introduction, the only non-trivial part in the construction of $\E_8$ is to check that the natural bracket on $\spin(16)\oplus\Sigma_{16}^+$ constructed in Lemma \ref{l1} satisfies the Jacobi identity. This follows directly from Corollary \ref{c1}, together with the following: \begin{epr}\label{41} The fourth exterior power of the real half-spin representation $\Sigma_{16}^+$ has no trivial summand. \end{epr} \bp One can use the plethysm function of the LiE software \cite{lie} to check that the fourth exterior power of $\Sigma_{16}^+$ has nine irreducible summands, each of them being non-trivial. However, our purpose is exactly to replace such brute force computations by conceptual arguments! Let $\langle.,.\rangle_\S$ and $\langle.,.\rangle_\Sigma$ be $\Spin(16)$-invariant scalar products on $\spin(16)$ and $\Sigma_{16}^+$ respectively. We start by recalling that the second exterior power of the real half-spin representation in dimension $8k$ decomposes into irreducible summands as $$\Lambda^2(\Sigma_{8k}^+)\simeq\bigoplus_{i=1}^k\Lambda^{4i-2}(\RM^{8k}).$$ This isomorphism can also be proved in an elementary way. Indeed, the right-hand side acts skew-symmetrically and faithfully by Clifford multiplication on $\Sigma_{8k}^+$ and thus can be identified with a sub-representation of $\Lambda^2(\Sigma_{8k}^+)$.
On the other hand, its dimension is equal to $$\dim\bigoplus_{i=1}^k\Lambda^{4i-2}(\RM^{8k})=\frac18\left(2^{8k}-(1+i)^{8k}-(1-i)^{8k}\right)= 2^{4k-2}(2^{4k-1}-1)=\dim\,\Lambda^2(\Sigma_{8k}^+).$$ For $k=2$ we thus get \beq \label{s16}\Lambda^2(\Sigma_{16}^+)\simeq \Lambda^2(\RM^{16})\oplus\Lambda^6(\RM^{16}).\eeq Recall the standard decomposition $$\Sym^2(\Lambda^2(\Sigma_{16}^+))\simeq\R\oplus\Lambda^4(\Sigma_{16}^+),$$ where $\R$ is the kernel of the Bianchi map $\b:\Sym^2(\Lambda^2(\Sigma_{16}^+))\to\Lambda^4(\Sigma_{16}^+).$ The trace element $R_1\in\Sym^2(\Lambda^2(\Sigma_{16}^+))$ defined by $$R_1(v, w,v', w'):=\langle v\wedge w,v'\wedge w'\rangle_\Sigma $$ is invariant under the action of $\Spin(16)$ and belongs to $\R$ since $\b(R_1)=0$. Assume for a contradiction that $\Lambda^4(\Sigma_{16}^+)$ contains some invariant element $\Omega$ and consider the invariant element $R_2\in\Sym^2(\Lambda^2(\Sigma_{16}^+))$ defined by $$R_2(v, w,v', w'):=\langle [v,w],[v',w']\rangle_\S ,$$ where $[.,.]$ is the bracket defined by Lemma \ref{l1}. Since the two irreducible summands in \eqref{s16} are not isomorphic, the space of invariant elements in $\Sym^2(\Lambda^2(\Sigma_{16}^+))$ has dimension two. Hence there exist real constants $k,l$ such that $R_2=kR_1+l\Omega$. In particular we would have \beq\label{a}|[v,w]|_\S^2=k|v\wedge w|_\Sigma^2,\qquad\forall v,w\in\Sigma^+_{16}.\eeq Since $\dim(\spin(16))=120$ is strictly smaller than $\dim(\Sigma^+_{16})-1=127$, one can find non-zero vectors $v_0,w_0\in \Sigma^+_{16}$ such that $v_0\wedge w_0\ne 0$ and $\langle v_0,aw_0\rangle_\Sigma =0$ for all $a\in\spin(16)$. By the definition of the bracket in Lemma \ref{l1} (4), this implies $[v_0,w_0]=0$, so using \eqref{a} for $v=v_0$ and $w=w_0$ yields $k=0$. 
By \eqref{a} again, this would imply $[v,w]=0$ for all $v,w\in\Sigma^+_{16}$, so we would have $$0=\langle a,[v,w]\rangle_\S =\langle av,w\rangle_\Sigma ,\qquad \forall a\in\spin(16),\ \forall v,w\in\Sigma^+_{16},$$ which is clearly a contradiction. \r \subsection{The construction of $\F_4$, $\E_6$ and $\E_7$} Consider the following spin representations: $\Sigma_9$, which is real, $\Sigma_{10}$, which is purely complex, and $\Sigma_{12}^+$, which is quaternionic. In order to show that they give rise to $s$-representations of $\spin(9)$, $\spin(10)\oplus\uu(1)$ and $\spin(12)\oplus\sp(1)$ respectively, we need to check that one can apply the criteria in Corollary~\ref{c1}, Proposition~\ref{complex} and Proposition~\ref{quaternionic}. Taking into account the results in Section 3, it suffices to show that $(\Lambda^4\Sigma_9)^{\spin(9)}=0$, and that $\Lambda^2\Sigma_{10}$ and $\Lambda^2_0\Sigma_{12}^+$ are irreducible. The first assertion can be proved as in Proposition \ref{41}, whereas the other two follow from the classical decompositions of the second exterior power of spin representations $$\Lambda^2\Sigma_{10}\cong \Lambda^3(\CM^{10}),\qquad \Lambda^2\Sigma_{12}^+\cong \Lambda^0(\CM^{12})\oplus \Lambda^4(\CM^{12}).$$ \subsection{On spin representations of Lie type} In this final part we will show that very few spin representations are of Lie type.
To make things precise, recall that the real Clifford algebras $\Cl_n$ are of the form $\KM(r)$ or $\KM(r)\oplus \KM(r)$ where: \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $n:$ & $8k+1$ & $8k+2$ & $8k+3$ & $8k+4$ & $8k+5$ & $8k+6$ & $8k+7$ & $8k+8$ \\ \hline $r:$ & $2^{4k}$ & $2^{4k}$ & $2^{4k}$ & $2^{4k+1}$ & $2^{4k+2}$ & $2^{4k+3}$ & $2^{4k+3}$ & $2^{4k+4}$\\ \hline $\Cl_n:$ & $\CM(r)$ & $\HM(r)$ & $\HM(r)\oplus \HM(r)$ & $\HM(r)$ & $\CM(r)$ & $\RM(r)$ & $\RM(r)\oplus \RM(r)$ & $\RM(r)$ \\ \hline \end{tabular} \end{center} The Clifford representation of the real Clifford algebra is by definition the unique irreducible representation of $\Cl_n$ for $n\ne 3\mod 4$, and the direct sum of the two inequivalent representations for $n= 3\mod 4$. The real spinor representation $\Sigma_n$ is the restriction of the Clifford representation to $\spin(n)\subset\Cl_{n-1}$ (note the shift from $n$ to $n-1$). For $n\ne 0\mod 4$ the spin representation is irreducible, and for $n=0\mod 4$ it decomposes as the direct sum of two irreducible representations $\Sigma_n=\Sigma^+_n\oplus\Sigma^-_n$. We introduce the notation $$\Sigma_n^{(+)}:=\begin{cases}\Sigma_n^+& \hbox{if}\ n=0\mod 4,\\ \Sigma_n& \hbox{if}\ n\ne 0\mod 4.\end{cases}$$ The table above shows that the spin representation $\Sigma_n^{(+)}$ is of real type for $n=0,1,7$ mod 8, of complex type for $n=2$ or $6$ mod 8 and of quaternionic type for $n=3,4,5$ mod 8. We define the Lie algebras $$\widetilde\spin(n):=\begin{cases}\spin(n) &\hbox{if}\ n=0,1,7 \mod 8,\\ \spin(n)\oplus \uu(1) &\hbox{if}\ n=2\ \hbox{or}\ 6 \mod 8,\\ \spin(n)\oplus \sp(1) &\hbox{if}\ n=3,4,5 \mod 8. \end{cases}$$ We can view $\Sigma_n^{(+)}$ as a $\widetilde\spin(n)$-representation, where the $\uu(1)$ or $\sp(1)$ actions are induced by the complex or quaternionic structure of the spin representation in the last two cases. 
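As an aside, the table of Clifford algebras above can be sanity-checked against the dimension count $\dim_\RM\Cl_n=2^n$, using $\dim_\RM\KM(r)=r^2\dim_\RM\KM$ (and doubling for the cases $n=3,7$ mod $8$). The following Python sketch is purely illustrative and not part of the paper (the helper name is ours); it transcribes the table and verifies this identity:

```python
# Sanity check of the Clifford algebra table: Cl_n is K(r) or K(r)+K(r),
# and its real dimension must equal 2^n. The (K, r, doubled) data below
# transcribes the table for n = 8k+1, ..., 8k+8.

def clifford_data(n):
    """Return (field, r, doubled) for Cl_n, following the table."""
    k, m = divmod(n - 1, 8)  # n = 8k + (m+1), with m in 0..7
    table = [  # indexed by m = (n-1) mod 8
        ("C", 4 * k,     False),  # n = 8k+1: C(2^{4k})
        ("H", 4 * k,     False),  # n = 8k+2: H(2^{4k})
        ("H", 4 * k,     True ),  # n = 8k+3: H(2^{4k}) + H(2^{4k})
        ("H", 4 * k + 1, False),  # n = 8k+4: H(2^{4k+1})
        ("C", 4 * k + 2, False),  # n = 8k+5: C(2^{4k+2})
        ("R", 4 * k + 3, False),  # n = 8k+6: R(2^{4k+3})
        ("R", 4 * k + 3, True ),  # n = 8k+7: R(2^{4k+3}) + R(2^{4k+3})
        ("R", 4 * k + 4, False),  # n = 8k+8: R(2^{4k+4})
    ]
    field, log_r, doubled = table[m]
    return field, 2 ** log_r, doubled

DIM = {"R": 1, "C": 2, "H": 4}  # real dimension of the base field

for n in range(1, 33):
    field, r, doubled = clifford_data(n)
    dim = (2 if doubled else 1) * DIM[field] * r * r
    assert dim == 2 ** n, (n, field, r)
```

The doubled entries for $n=3,7$ mod $8$ account for the two inequivalent irreducible representations mentioned in the definition of the Clifford representation.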
We study the following question: {\em for which $n\ge 5$ is $\Sigma_n^{(+)}$ a representation of Lie type of $\widetilde\spin(n)$?} We will see that, apart from the examples above, which lead to the construction of exceptional Lie algebras, there are almost no other examples. This is a consequence of the very special structure of the weights of the spin representations (see also \cite{mihaela}, \cite{cliff} for a much more general approach to this question). \begin{epr}\label{spin} For $n\ge 5$, the representation $\Sigma_n^{(+)}$ of $\widetilde\spin(n)$ is of Lie type if and only if $n\in\{5,6,8,9,10,12,16\}$. \end{epr} \bp The representation $\Sigma_n^{(+)}$ of $\widetilde\spin(n)$ is of Lie type if and only if there exists a Lie algebra structure on $\gg:=\widetilde\spin(n)\oplus\Sigma_n^{(+)}$ satisfying conditions (1), (2) and (4) in Lemma \ref{l1} with respect to some $\ad_{\widetilde\spin(n)}$-invariant scalar products on $\widetilde\spin(n)$ and $\Sigma_n^{(+)}$. We will always consider some fixed Cartan subalgebra of $\widetilde\spin(n)$, which is automatically a Cartan subalgebra of $\gg$ since the (half-)spin representations have no zero weight. Consider first the case $n=8k$. Since $\widetilde\spin(8k)=\spin(8k)$, the scalar products above are unique up to some constant. We choose the scalar product $\la.,.\ra$ on $\spin(8k)$ such that in some orthonormal basis $\{e_1,\ldots,e_{4k}\}$ of the Cartan subalgebra of $\spin(8k)$, the roots of $\spin(8k)$ are $$\R=\{\pm e_i\pm e_j\ |\ 1\le i<j\le 4k\},$$ and the weights of the (complexified) half-spin representation $\Sigma_{8k}^+\otimes\CM$ are $$\W=\{\tfrac12\sum_{i=1}^{4k}\e_ie_i\ |\ \e_i=\pm1, \ \e_1\ldots\e_{4k}=1\}.$$ The union $\R\cup\W$ is then the root system of $\gg$, which is a Lie algebra of compact type. In particular, the quotient \beq\label{q}q(\a,\b):=\frac{2\la \a,\b\ra}{\la \b,\b\ra}\eeq is an integer satisfying $|q(\a,\b)|\le 3$ for all $\a,\b\in \R\cup\W$ (cf. \cite{adams}, p. 119).
Taking $\a=e_1+e_2$ and $\b=\tfrac12\sum_{i=1}^{4k}e_i$ we get $q(\a,\b)=2/k$, whence $k=1$ or $k=2$, so $n=8$ or $n=16$. Conversely, the real half-spin representations $\Sigma^+_8$ and $\Sigma^+_{16}$ are of Lie type (actually they are $s$-representations with augmented Lie algebras $\spin(9)=\spin(8)\oplus\Sigma^+_8$ and $\ee_8=\spin(16)\oplus\Sigma^+_{16}$). If $n=8k-1$, a similar argument using the root $\a=e_1$ of $\spin(8k-1)$ and the weight $\b=\tfrac12\sum_{i=1}^{4k-1}e_i$ of the spin representation shows that $q(\a,\b)=4/(4k-1)$ cannot be an integer. If $n=8k+1$, one has $q(\a,\b)=1/k$ for $\a=e_1$ and $\b=\tfrac12\sum_{i=1}^{4k}e_i$, so $k=1$. Conversely, $\Sigma_9$ is an $s$-representation, as shown by the exceptional Lie algebra $\ff_4=\spin(9)\oplus\Sigma_9$. Consider now the case when the spin representation is complex, {\em i.e.} $n=4k+2$, with $k\ge 1$. Assume that on $\gg:=(\spin(4k+2)\oplus \uu(1))\oplus\Sigma_{4k+2}$ there exists a Lie algebra structure satisfying conditions (1), (2) and (4) in Lemma \ref{l1} with respect to some $\ad_{\spin(4k+2)\oplus \uu(1)}$-invariant scalar products on $\spin(4k+2)\oplus \uu(1)$ and $\Sigma_{4k+2}$. The latter scalar product is defined up to a scalar, whereas for the first one there is a two-parameter family of possible choices. By rescaling, we may assume that the restriction of the scalar product to the $\spin(4k+2)$ summand is such that in some orthonormal basis $\{e_1,\ldots,e_{2k+1}\}$ of the Cartan subalgebra, the root system of $\spin(4k+2)$ is $$\R=\{\pm e_i\pm e_j\ |\ 1\le i<j\le 2k+1\}.$$ There exists a unique vector $e_{2k+2}\in\uu(1)$ such that the set of weights of the representation $\Sigma_{4k+2}\otimes\CM\cong\Sigma_{4k+2}\oplus\overline{\Sigma}_{4k+2}$ of $\spin(4k+2)\oplus \uu(1)$ is $$\W=\{\tfrac12\sum_{i=1}^{2k+2}\e_ie_i\ |\ \e_i=\pm1, \ \e_1\ldots\e_{2k+2}=1\}.$$ We denote by $x:=|e_{2k+2}|^2$ its squared norm. The root system of $\gg$ is clearly $\R(\gg)=\R\cup\W$.
Recall that for any two non-orthogonal roots $\a$ and $\b$ of $\gg$, their sum or difference is again a root \cite{adams}. On the other hand, neither the sum nor the difference of the two roots $\a:=\tfrac12(\sum_{i=1}^{2k+2}e_i)$ and $\b:=\tfrac12(\sum_{i=1}^{2k}e_i-e_{2k+1}-e_{2k+2})$ of $\gg$ belongs to $\R(\gg)=\R\cup\W$. Thus $\la\a,\b\ra=0$, which implies $x=2k-1$. Consider now the root $\gamma:=e_1+e_2$. The integer defined in \eqref{q} is $$q(\gamma,\a)=\frac{2\la \gamma,\a\ra}{\la \a,\a\ra}=\frac{2}{\tfrac14(2k+1+x)}=\frac2k,$$ showing that necessarily $k=1$ or $k=2$. Conversely, both cases do occur, since $\Sigma_6$ and $\Sigma_{10}$ are $s$-representations of $\spin(6)\oplus\uu(1)\cong\uu(4)$ and $\spin(10)\oplus\uu(1)$ with augmented Lie algebras $\uu(5)$ and $\ee_6$ respectively. Similar arguments (see also \cite{mihaela}) show that in the quaternionic case (when $n\equiv 3,4,5 \mod 8$) there are only two representations $\Sigma_n^{(+)}$ of $\spin(n)\oplus\sp(1)$ which are of Lie type, namely for $n=5$ and $n=12$. They are both $s$-representations and their augmented Lie algebras are $\spin(5)\oplus\sp(1)\oplus\Sigma_5\cong\sp(2)\oplus\sp(1)\oplus\HM^2\cong\sp(3)$ and $\spin(12)\oplus\sp(1)\oplus\Sigma_{12}^+\cong\ee_7$.\r Note that J. Figueroa-O'Farrill has recently asked in \cite[p. 673]{of08} about the existence of Killing superalgebra structures on spheres other than $\SM^7$, $\SM^8$ and $\SM^{15}$. Proposition~\ref{spin} can be interpreted as a negative answer to this question.
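The integrality argument in the proof is easy to check numerically. The sketch below (our own illustration; the function names and the use of exact rational arithmetic are choices made here, not part of the paper) recomputes the quotient $q(e_1+e_2,\tfrac12\sum_{i=1}^{4k}e_i)=2/k$ from the $n=8k$ case:

```python
from fractions import Fraction

def q(alpha, beta):
    """The quotient q(a, b) = 2<a, b>/<b, b> used in the proof."""
    dot = lambda u, v: sum(Fraction(x) * Fraction(y) for x, y in zip(u, v))
    return 2 * dot(alpha, beta) / dot(beta, beta)

def q_spin_8k(k):
    """q(e1 + e2, (1/2) sum e_i) in R^{4k}, as in the n = 8k case."""
    dim = 4 * k
    alpha = [1, 1] + [0] * (dim - 2)   # the root e1 + e2
    beta = [Fraction(1, 2)] * dim      # the half-spin weight (1/2) sum e_i
    return q(alpha, beta)

# q = 2/k, an integer only for k = 1 (n = 8) and k = 2 (n = 16)
for k in (1, 2, 3):
    print(k, q_spin_8k(k))
```

Only $k=1,2$ give an integer quotient, matching the conclusion $n\in\{8,16\}$ in that case.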
\section{Introduction} Coreference resolution is one of the best known tasks in Natural Language Processing (NLP). Despite a large body of work in the area over the last few decades \cite{morton2000coreference,bean2004unsupervised,mccallum2005conditional,rahman2009supervised}, the task remains challenging. Many resolution decisions require extensive world knowledge and understanding common points of reference \cite{pradhan2011conll}. In the case of pronominal anaphora resolution, these forms of ``common sense'' become much more important when cues like gender and number do not by themselves indicate the correct resolution \cite{trichelair2018evaluation}. To date, most existing methods for coreference resolution \cite{raghunathan2010multi,lee2011stanford,durrett2013decentralized,lee2017end,lee2018higher} have been evaluated on a few popular datasets, including the CoNLL 2011 and 2012 shared coreference resolution tasks \cite{pradhan2011conll,pradhan2012conll}. These datasets were proposed as the first comprehensively tagged and large-scale corpora for coreference resolution, to spur progress in state-of-the-art techniques. According to \citet{durrett2013easy}, this progress would contribute in the ``uphill battle" of modelling not just syntax and discourse, but also semantic compatibility based on world knowledge and context. Despite improvements in benchmark dataset performance, the question of what exactly current systems learn or exploit remains open, particularly with recent neural coreference resolution models. \citet{lee2017end} note that their model does ``little in the uphill battle of making coreference decisions that require world knowledge,'' and highlight a few examples in the CoNLL 2012 task that rely on more complex understanding or inference. Because these cases are infrequent in the data, systems can perform very well on the CoNLL tasks according to standard metrics by exploiting surface cues. 
High-performing models have also been observed to rely on social stereotypes present in the data, which could unfairly impact their decisions for some demographics \cite{zhao2018gender}. There is a recent trend, therefore, to develop more challenging and diverse coreference tasks. Perhaps the most popular of these is the Winograd Schema Challenge (WSC), which has emerged as an alternative to the Turing test~\cite{levesque2011winograd}. The WSC task is carefully controlled such that heuristics involving syntactic salience, the number and gender of the antecedents, or other obvious syntactic/semantic cues are ineffective. Previous approaches to common sense reasoning, based on logical formalisms \cite{bailey2015winograd} or deep neural models \cite{liu2016probabilistic}, have solved only restricted subsets of the WSC with high precision. These shortcomings can in part be attributed to the limited size of the corpus (273 instances), which is a side effect of its hand-crafted nature. \citet{webster2018mind} recently presented a corpus called GAP that consists of about 4,000 unique binary coreference instances from English Wikipedia. This corpus is intended to address gender bias and the mentioned size limitations of the WSC. We believe that gender bias in coreference resolution is part and parcel of a more general problem: current models are unable to abstract away from the entities in the sentence to take advantage of the wider context to make a coreference decision. To tackle this issue, we present a coreference resolution corpus called \corpusname{} that specifically targets the ability of systems to reason about a situation described in the context.\footnote{The corpus, the code to scrape the sentences from the source texts, as well as the code to reproduce all of our experimental results are available at https://github.com/aemami1/KnowRef.} We designed this task to be challenging, large-scale, and based on natural text. 
The main contributions of this paper are as follows: \begin{enumerate} \item We develop mechanisms by which we construct a human-labeled corpus of 8,724 Winograd-like text samples whose resolution requires significant common sense and background knowledge. As an example: \emph{Marcus is undoubtedly faster than Jarrett right now but in [his] prime the gap wasn't all that big.} (answer: Jarrett) \item We propose a task-specific metric called \emph{consistency} that measures the extent to which a model uses the full context (as opposed to a surface cue) to make a coreference decision. We use this metric to analyze the behavior of state-of-the-art methods and demonstrate that they generally under-utilize context information. \item We find that a fine-tuned version of the recent large-scale language model, BERT \cite{devlin2018bert}, performs significantly better than other methods on \corpusname{}, although with substantial room for improvement to match human performance. \item We demonstrate the benefits of a data-augmentation technique called \emph{antecedent switching} in expanding our corpus, further deterring models from exploiting surface cues, as well as in transferring to models trained on other co-reference tasks like GAP, leading to state-of-the-art results. \end{enumerate} \section{Related Work} \subsection{General coreference resolution} Automated techniques for standard coreference resolution --- that is, the task of correctly partitioning the entities and events that occur in a document into resolution classes --- date back to decision trees and hand-written rules \cite{hobbs1977pronoun,mccarthy1995using}. The earliest evaluation corpora were the Message Understanding Conferences (MUC) \cite{grishman1996message} and the ACE \cite{doddington2004automatic}. These focused on noun phrases tagged with coreference information, but were limited in either size or annotation coverage. 
The datasets of \citet{pradhan2011conll,pradhan2012conll} from the CoNLL-2011 and CoNLL-2012 Shared Tasks were proposed as large-scale corpora with high inter-annotator agreement. They were constructed by restricting the data to coreference phenomena with highly consistent annotations, and were packaged with a standard evaluation framework to facilitate performance comparisons. The quality of these tasks led to their widespread use and the emergence of many resolution systems, ranging from hand-engineered methods to deep-learning approaches. The multi-pass sieve system of \citet{raghunathan2010multi} is fully deterministic and makes use of mention attributes like gender and number; it maintained the best results on the CoNLL 2011 task for a number of years \cite{lee2011stanford}. Later, lexical learning approaches emerged as the new state of the art \cite{durrett2013easy}, followed more recently by neural models \cite{wiseman2016learning,clark2016deep}. The current state-of-the-art result on the CoNLL 2012 task is by an end-to-end neural model from \citet{lee2018higher} that does not rely on a syntactic parser or a hand-engineered mention detector. \subsection{Gender bias in general coreference resolution} \citet{zhao2018gender} observed that state-of-the-art methods for coreference resolution become gender-biased, exploiting various stereotypes that leak from society into data. They devise a dataset of 3,160 manually written sentences called \textit{WinoBias} that serves both as a gender-bias test for coreference resolution models and as a training set to counter stereotypes in existing corpora (i.e., the two CoNLL tasks). The following example is representative: \begin{exe} \ex The physician hired the secretary because \underline{he} was overwhelmed with clients. \ex The physician hired the secretary because \underline{she} was overwhelmed with clients. 
\end{exe} Experiments conducted on various models demonstrated that an end-to-end neural model \cite{lee2017end} maintains its performance without the gender bias when trained partially on both the previous datasets and on \textit{WinoBias}. Concurrent work by \citet{rudinger2018gender} also proposed an empirical study of the biases in coreference resolution systems. In contrast to \citet{zhao2018gender}, who attribute the bias in part to the datasets, they conjecture that the gender bias comes primarily from the models themselves. Based on statistics from the Bureau of Labor, they show that various systems all exhibit significant gender bias. This work on gender stereotypes provides some insight into the behavior of current models. In the example above, if \textit{she} is predicted incorrectly to refer to \textit{the secretary}, it is likely because the model learned a representation for the secretary profession that encodes gender information. Current models capture neither the context nor the relation between \textit{was overwhelmed} and \textit{hired} that leads to the correct resolution. The subject of our work is to investigate the potential for models to capture contextual relationships instead of cues from, e.g., gender stereotypes. Unlike WinoBias, our task is composed of passages that occur naturally in text and is several times larger.
\begin{figure*}[!ht] \centering \scalebox{0.70}{ \begin{tikzpicture} [ mynode/.style={rectangle, draw, align=center, text width =3cm ,minimum width=3cm, minimum height=1cm}, widenode/.style={rectangle, draw, align=center, text width =4.5cm ,minimum width=4.5cm, minimum height=1cm} ] \node[mynode] (a) at ($(0, 0)$) {\textbf{Initial filtering:}\\ Clean up raw text and split it into sentences.}; \node[mynode] (b) at ($(a) + (4, 0)$) {\textbf{Connective Filtering:}\\ Ensure the occurrence of a single connective in the sentence.}; \node[mynode] (c) at ($(b) + (4, 0)$) {\textbf{Antecedent Filtering:}\\ Use POS information to ensure the occurrence of exactly two NPs before the connective.}; \node[widenode] (d1) at ($(c) + (5, 2.5)$) {\textbf{Training set\\(Reddit Comments):}\\Heuristically predict the labels for a filtered set of sentences for which there is a gender ``giveaway''.}; \node[widenode] (d2) at ($(c) + (5, -2.5)$) {\textbf{Test Set \\(Wikipedia + OpenSubtitles):}\\ Human annotators predict the labels for the collected \\sentences.}; \node[mynode] (e) at ($(c) + (10, 0)$) {\textbf{Quality Control:}\\ Human annotators examine collected sentences.}; \draw (a) -> ($(a)!0.5!(b)$) edge[->] (b); \draw (b) -> ($(b)!0.5!(c)$) edge[->] (c); \draw (c) -| ($(c)!0.4!(d1)$) edge[->] (d1); \draw (c) -| ($(c)!0.4!(d2)$) edge[->] (d2); \draw (d1) -| ($(d1)!0.55!(e)$) edge[->] (e); \draw (d2) -| ($(d2)!0.55!(e)$) edge[->] (e); \node[below =5mm of a,text width=3cm,fill=white] (why1) {$>$100 million \\sentences}; \node[below=5mm of b,text width=2cm,inner sep=.05cm,fill=white] (why2) {$>$1 million sentences}; \node[below=5mm of c,text width=3cm,inner sep=.05cm,fill=white] (why3) {$>$100 thousand sentences}; \node[below=5mm of e,text width=2.5cm,fill=white] (why4) {8,724 sentences}; \node[below=0.5cm of d1,text width=2cm] (why5) {\textbf{Label \\Generation}}; \end{tikzpicture} } \caption{The corpus construction process for \corpusname{}.} \label{fig:corpcons} \end{figure*}
\subsection{Difficult cases in coreference resolution} As the creators of the CoNLL tasks note, most coreference techniques rely primarily on surface-level features, like the proximity between mentions, or shallow semantic features like number, gender, named entities, semantic class, etc., rather than knowledge and context. To address this, \citet{levesque2011winograd} manually constructed a dataset of challenging pronoun disambiguation problems called the Winograd Schema Challenge. The goal was that any successful system would necessarily use common-sense knowledge. Although the WSC is an important step in evaluating systems en route to human-like language understanding, its size and other characteristics are a bottleneck for progress in pronoun disambiguation \cite{trichelair2018evaluation}. A Winograd-like expanded corpus was proposed by \citet{rahman2012resolving} to address the WSC's size limitations; however, systems that perform well on the expanded dataset do not transfer successfully to the original WSC \cite{rahman2012resolving,peng2015solving}, likely due to loosened constraints in the former.\par The task that we propose distinguishes itself from the WSC by building on sentences that occur in natural text. This yields highly diverse problem instances. It is particularly important that, as well as being challenging, tasks are representative of natural text, so that improvements are more likely to transfer to the full coreference setting. Recently, \citet{webster2018mind} presented a corpus called GAP that consists of 4,454\footnote{In GAP, one unique coreference instance corresponds to two pronoun-name pairs, for which they report 8,908 pairs.} unique binary coreference instances from English Wikipedia. It is meant to address gender bias and the described size limitation of the WSC. For instance, it exposes the unbalanced performance of current state-of-the-art resolvers, which more accurately resolve masculine pronouns than feminine pronouns. 
As for difficulty, the models tested on GAP were not trained directly on the corpus, so the reported results do not give a clear picture of how hard the task is. A simple heuristic called \textit{Parallelism+URL}, which is based on the syntactic distance between antecedents and the target pronoun, is so far the strongest GAP baseline, at above 70\% accuracy. This suggests that GAP is vulnerable to exploits that circumvent a need for knowledge, albeit not the gender and number cues that coreference resolvers have exploited before. Finally, our corpus construction process differs from GAP's by more strictly requiring that the sentences are in WSC format, that is, that there are exactly two named entities occurring strictly before the pronoun, only one of which may co-refer with the pronoun (in GAP, the pronoun may occur between and before the named entities and may in fact co-refer with both named entities). In addition, our corpus construction process exploits the fact that the named entities can be replaced with any name, both to increase the task difficulty by automatically removing gender giveaways and to significantly increase the size of the corpus by switching the named entities to create a new task instance. As such, our paper explores a wider problem of which gender bias may be one facet: current models do not effectively abstract away from the entities (and instead rely on exploits using gender or plurality) to make the coreference decision. By developing a benchmark task consisting strictly of sentences for which such cues are ineffective, we seek to challenge and potentially improve current coreference resolution models. In addition, based on our new benchmark, \corpusname{}, we introduce a data-augmentation mechanism, called \textit{antecedent switching}, to encourage models to perform this abstraction.
\section{The \corpusname{} Coreference Task} We develop a coreference task called \corpusname{} that features 8,724 difficult pronoun disambiguation problems. Each instance is a short passage containing a target pronoun that must be correctly resolved to one of two possible antecedents. Formally, each problem instance can be described as a tuple $P=\{S,C_1,C_2,T,K\}$, where $S$ is the sentence, $C_1$ and $C_2$ are the candidate antecedents, $T$ is the target pronoun to be resolved to one of $C_1$ and $C_2$, and $K$ indicates the correct antecedent. Note that $C_1$, $C_2$, $T$ and $K$ appear in $S$. \corpusname{} provides $\{S,C_1,C_2,T\}$ as input for models, which must predict $K$ (e.g., as the output of a binary classification over $C_1,C_2$). A representative sentence $S$ is the following. \begin{exe} \ex \label{ex-1} \{Paul\} helped \{Lionel\} hide when [he] was pursued by the authorities. \end{exe} Here, $C_1=\text{Paul}$, $C_2=\text{Lionel}$, $T=\text{he}$, and $K=C_2=\text{Lionel}$. We control the text so as not to give away the pronoun's correct antecedent in surface-level cues involving syntactic salience or the number and gender of the antecedent. Successful systems must instead make use of the context, which may require world knowledge and common-sense inferences; i.e., that someone who is being helped to hide may be one who is being pursued by the authorities. In the following section, we describe the methodology used to construct our corpus, provide a glimpse of a few of its instances and their resolution rationales, outline the task's evaluation criteria, and describe its characteristics. \begin{table*}[ht] \small \begin{center} \begin{tabu}to\linewidth{@{}X[l]X[l,4]@{}} \toprule \corpusname{} Example 1: & \{Radu\} appeared to be killed by \{Brother Paulo\}, but [he] reappears a short while later injured, but alive. 
($K=\text{Radu}$) \\ Original sentence: & Radu appeared to be killed by Sister Paula, but he reappears a short while later injured, but alive.\\ \midrule \corpusname{} Example 2: & \{Wanda\} tries to apologize to \{Rose\}, but [she] refuses to accept. ($K=\text{Rose}$) \\ Original sentence: & Warren tries to apologize to Rose, but she refuses to accept.\\ \midrule \corpusname{} Example 3: & \{Tom\} arrives to where \{Alex\} was tied, but [he] has come free of his lead. ($K=\text{Alex}$)\\ Original sentence: & Tom arrives to where Vanessa was tied, but she has come free of her lead.\\ \bottomrule \end{tabu} \caption{Examples of \corpusname{} instances. } \label{tab:examples} \end{center} \vskip -.1in \end{table*} \subsection{Corpus construction} To construct \corpusname{}, we scrape text samples from a large collection of documents: the combination of 2018 English Wikipedia, OpenSubtitles, and Reddit comments dating from 2006--2018. We filter this text through a multi-stage process to ensure quality and diversity as depicted in Figure \ref{fig:corpcons}, and described in more detail below. \subsubsection{Initial Filtering} After removing markup, non-ASCII characters, parenthetical expressions, headings and lists, we split the text into sentences. We keep sentences of token length between 9 and 33 words after na\"ive tokenization, which start with an upper case letter, and which contain no math. \subsubsection{Connective Filtering} Our first substantial filtering step uses regular expressions to ensure that each passed sentence contains connectives.\footnote{comma, semicolon, \textit{or}, \textit{since}, \textit{but}, \textit{because}, \textit{although}, etc.} We use a regular expression to ensure that there is only one connective cluster (e.g. ``, and though''), and that there are at least two non-stopwords before this connective and a pronoun after it. 
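The connective filter just described can be sketched roughly as follows. This is a minimal illustration only: the connective set, stopword list, and token-level cluster grouping are our own simplifying assumptions, not the paper's exact implementation.

```python
import re

# Illustrative subset of the connectives the paper lists (comma,
# semicolon, "or", "since", "but", "because", "although", etc.)
CONNECTIVES = {",", ";", "but", "because", "since", "although", "or",
               "and", "though"}
PRONOUNS = {"he", "him", "his", "she", "her"}
STOPWORDS = {"the", "a", "an", "of", "to", "and", "is"}  # toy list

def passes_connective_filter(sentence: str) -> bool:
    """Keep sentences with exactly one cluster of adjacent connectives,
    at least two non-stopwords before it, and a pronoun after it."""
    tokens = re.findall(r",|;|[A-Za-z']+", sentence)
    conn = [i for i, t in enumerate(tokens) if t.lower() in CONNECTIVES]
    # group adjacent connective tokens (e.g. ", and though") into clusters
    clusters = []
    for i in conn:
        if clusters and i == clusters[-1][-1] + 1:
            clusters[-1].append(i)
        else:
            clusters.append([i])
    if len(clusters) != 1:
        return False
    start, end = clusters[0][0], clusters[0][-1]
    before = [t for t in tokens[:start] if t.lower() not in STOPWORDS]
    after = tokens[end + 1:]
    return len(before) >= 2 and any(t.lower() in PRONOUNS for t in after)
```

For instance, ``Wanda tries to apologize to Rose, but she refuses to accept.'' passes (one connective cluster, a pronoun after it), while a sentence with two separated connectives is discarded.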
As a final check, we ensure that no pronoun occurs before the connective, which tends to remove sentences that are not self-contained. \subsubsection{Antecedent Filtering} On the remaining set of sentences, we use Stanford's Maxent tagger \cite{toutanova2003feature} to infer a flat part-of-speech (POS) labelling. Using the inferred POS tags, we ensure that there are exactly two noun phrases (NPs) before the connective that do not re-occur after it (a re-occurrence after the connective means that the pronoun likely refers to the non-repeated noun phrase). These checks resulted in roughly 100,000 sentences across all three corpora. At least some of these remaining sentences have similar properties to Winograd schema sentences; that is, the two noun phrases and the pronoun share the same type. From here, we keep only sentences where the type indicates that both NPs correspond to persons, which further narrows the remaining set. We do this because NPs that denote people are often named entities or can easily be replaced by named entities without loss of information. We also targeted these instances because we investigate how resolution systems use gender cues, and most gendered pronouns occur with person-type NPs. \subsubsection{Label Generation} We generate our training and test sets from distinct sources of text using two different methods. \paragraph{Training set:} We automatically collect 70,000 sentences from Reddit that have passed the filters described above, and filter these down to roughly 7,500 sentences for which the antecedents are named entities of different genders. We use a Python library\footnote{\url{https://pypi.org/project/SexMachine/}} to infer the genders, based on a list of 40,000 names categorized as female or male compiled by Jörg Michael.
Given the pronoun and the distinct predicted genders for the antecedents, we can infer the label for the pronoun's correct resolution with high accuracy and without the need for expensive human annotation. After assigning this label, we remove the gender giveaway by replacing one of the named entities so that both entities and the pronoun all match in gender (e.g., in a sentence with ``James'', ``Jessica'', and ``she'' as the NPs and pronoun, we replace ``James'' with ``Jane''). These sentences form our training set. To assess its quality, we gave an annotator a random sample of 100 training instances with their heuristically determined labels. The annotator then evaluated each sentence as ``correctly labelled", ``incorrectly labelled", or ``unresolvable" if neither of the two candidates were more suitable than the other to corefer with the pronoun.\footnote{The details and result of this quality-testing study will also be made public along with the code and dataset.} In total, 86\% of the instances were deemed to be labelled correctly, 11\% incorrectly labelled, and 3\% were not resolvable, implying that our automatic selection heuristic is strong but imperfect. \paragraph{Test set:}Human annotators examined all collected sentences for quality control. We also use a source for the test sentences that is distinct from that of the training set, directing our pipeline to collect sentences from Wikipedia and OpenSubtitles rather than Reddit. This is to ensure that stylistic cues common in the training source cannot be exploited by models at test time. In total, roughly 10,000 candidate sentences were extracted initially. As before, we automatically remove gender giveaways by replacing the named entities with names of the same gender, rendering the pronoun ambiguous. 
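The label heuristic and the gender-giveaway removal described above can be sketched as follows. The tiny name-gender table here is a toy stand-in for the 40,000-name list the paper queries via the SexMachine library, and the replacement name is illustrative.

```python
# Toy stand-in for the paper's name-gender lookup (SexMachine library).
GENDER = {"James": "male", "Jessica": "female", "Jane": "female",
          "Paul": "male", "Rose": "female"}
MALE_PRONOUNS = {"he", "him", "his"}

def infer_label(c1, c2, pronoun):
    """Return the antecedent whose gender matches the pronoun, or None
    when the genders do not disambiguate (such sentences are discarded)."""
    g1, g2 = GENDER.get(c1), GENDER.get(c2)
    if g1 == g2 or None in (g1, g2):
        return None
    target = "male" if pronoun.lower() in MALE_PRONOUNS else "female"
    return c1 if g1 == target else c2

def remove_giveaway(sentence, c1, c2, pronoun, replacement):
    """Replace the non-matching name so both antecedents match the
    pronoun's gender, e.g. James -> Jane when the pronoun is 'she'."""
    label = infer_label(c1, c2, pronoun)
    if label is None:
        return None
    other = c2 if label == c1 else c1
    return sentence.replace(other, replacement)
```

After the replacement, the pronoun's gender no longer identifies the correct antecedent, while the heuristically inferred label is kept as ground truth.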
Then, six human annotators predicted which antecedent was the correct coreferent of the pronoun for a sample of 2,000 candidate sentences, or they labeled the sentence with ``neither'' (in the case where neither antecedent feasibly corefers with the pronoun) or ``unclear'' (if the sentence was not intelligible). Sentences that have strong agreement from 5 or more annotators on a single antecedent (and which are not labeled as ``neither'' or ``unclear'') are kept for testing. This yielded 1,269 test sentences. We measured high inter-annotator agreement on the test set with a Fleiss’ Kappa of $\kappa=0.78$. \begin{table}[t] \begin{center} \begin{tabu}to\linewidth{@{}X[3,l]X[2,r]@{}} \toprule Sentence Characteristic & \% of Data \\ \midrule Masculine target pronouns & 52.7 \\ Feminine target pronouns & 47.3 \\ \cmidrule(r){1-1} First Antecedent Correct & 50.7 \\ Second Antecedent Correct & 49.2 \\ \bottomrule \end{tabu} \caption{Characteristics of the dataset, in terms of pronoun distribution and correct label.} \label{tab:characteristics} \end{center} \end{table} Our pipeline thus yields a total of 8,724 sentences (7,455 training and 1,269 test) whose pronoun disambiguation should not be clear from shallow features like gender, number, and semantic type -- they should instead require varying degrees of external knowledge. These sentences constitute the \corpusname{} corpus. Examples of some instances are given in Table~\ref{tab:examples}. As these examples reveal, each instance may require a unique bit of common-sense knowledge to resolve. In the first example, the common understanding that death (by way of killing) causes a disappearance helps us to conclude that Radu, the victim of murder, is the one who reappears. In the next example, human readers recognize that to \textit{accept} is something one does with an \textit{apology}. Therefore, \textit{she} refers to the one who accepts the apology, i.e., Rose.
For the third example, an understanding that \textit{being tied} is related to being deprived of freedom leads us to conclude that Alex has come free. \subsection{Task Characteristics} \begin{table*}[t] \begin{center} \begin{tabu}to \linewidth{@{}X[l,3]*5{X[1.5,c]}@{}} \toprule Model & Both Antecedents Predicted & No Decision & Incorrect Decision & Correct Decision & Task-\linebreak Specific \linebreak Accuracy \\ \midrule Random & -- & -- & -- & -- & 0.50 \\ Human\footnotemark[5] & -- & -- & -- & -- & 0.92 \\ \cmidrule(r){1-1} Rule & 0.001 & 0.12 & 0.43 & 0.45 & 0.52 \\ Stat & 0.006 & 0.09 & 0.45 & 0.45 & 0.50 \\ Deep-RL & 0.001 & 0.09 & 0.46 & 0.45 & 0.49 \\ Latent & 0.000 & 0.12 & 0.41 & 0.47 & 0.54 \\ E2E (CoNLL only) & 0.01 & 0.42 & 0.23 & 0.35 & 0.60 \\ \cmidrule(r){1-1} E2E (\corpusname{}) & 0.000 & 0.26 & 0.31 & 0.43 & 0.58 \\ E2E (\corpusname{}+CoNLL) & 0.000 & 0.19 & \textbf{0.28} & 0.52 & \textbf{0.65} \\ BERT (\corpusname{}) & 0.000 & 0.000 & 0.39 & \textbf{0.61} & 0.61 \\ \bottomrule \end{tabu} \caption{Coverage and performance of various representative systems on the \corpusname{} Test set.} \label{tab:performance} \end{center} \end{table*} In Table \ref{tab:characteristics}, we report several statistical characteristics of the data. These suggest a near-equal distribution of feminine and masculine target pronouns (\textit{he/him/his} vs. \textit{she/her}) as well as an equal distribution of the two labels, which keeps chance-based performance at 50\% expected accuracy. \subsection{Evaluation} Our task requires a model to choose between two candidates, but classical coreference models build clusters of expressions that refer to the same entity. 
With respect to our setting, several errors can be made by these existing models: predicting that the two entities and the pronoun share a similar cluster (\textit{Both Antecedents Predicted}), that none of the two candidates shares a cluster with the pronoun (\textit{No Decision}), or creating a cluster that contains the pronoun with the wrong candidate (\textit{Incorrect Decision}). To obtain a score specific to our task, we compute a \textit{Task-Specific Accuracy} which discards all of the cases in which the model makes no decision relevant to the target pronoun or chooses both entities as co-referring to the target pronoun. \section{Experiments and Results} In this section, we compare the performance of five representative coreference systems on our task: Stanford’s rule-based system \cite{raghunathan2010multi} (\textbf{Rule}), Stanford’s statistical system \cite{clark2015entity} (\textbf{Stat}), \citet{clark2016deep}’s deep reinforcement learning system (\textbf{Deep-RL}), \citet{martschat2015latent}’s latent tree model (\textbf{Latent}), and \citet{lee2018higher}’s end-to-end neural system (\textbf{E2E}). We also report the accuracy of the state-of-the-art model, \textbf{E2E}, after retraining on \corpusname{} and on \corpusname{}+CoNLL. Additionally, we develop a task-specific model for \corpusname{}: a discriminatively trained fine-tuned instance of Bidirectional Encoder Representations from Transformers (\textbf{BERT}) \cite{devlin2018bert}. We train our task-specific \textbf{BERT} according to recent work on language models (LMs) for the WSC \cite{trinh2018simple}. We first construct a modified version of the data wherein we duplicate each sentence, replacing the pronoun with one of the two antecedents in each copy. The task, akin to NLI, is then to predict which of the two modified sentences is most probable. To compute probabilities, we add a softmax layer with task-specific parameter vector $v\in\mathcal{R}^H$. 
Denote by $h_{S1}\in\mathcal{R}^H$ (respectively $h_{S2}$) the final hidden state for the sentence copy with the pronoun replaced by the first antecedent (respectively the second). Then the probability assigned to the first antecedent is \begin{align} P_1 = \frac{e^{v^\top h_{S1}}}{e^{v^\top h_{S1}}+e^{v^\top h_{S2}}}. \end{align} The probability assigned to the second antecedent is $P_2=1-P_1$. We use $H=768$ hidden units in our BERT implementation and learn $v$ by minimizing the binary cross entropy with the ground-truth antecedent labels (in one-hot format). \paragraph{Human Performance:} We determined human performance on \corpusname{} by collecting the predictions of six native English speakers on a randomly generated sub-sample of 100 problem instances; we consider correct those predictions that agreed with the majority decision and matched the ground-truth label derived from the original sentence. We report the performance of the five coreference systems and the human baseline in Table~\ref{tab:performance}.\par The human performance of 0.92 attests to the task's viability. The performance of the automatic systems pretrained on CoNLL, at random or slightly above random, demonstrates that state-of-the-art coreference resolution systems are unable to solve the task. This suggests the existence in the wild of difficult but realistic coreference problems that may be under-represented in CoNLL.\par After training on \corpusname{}, \textbf{E2E} improves by more than 5\% in task-specific accuracy. We can infer from this result that the model can make some use of context to make predictions if trained appropriately, but that the CoNLL shared tasks may not contain enough such instances for models to generalize from them. Finally, even the best of these models reaches an accuracy of only 65\%, far below human performance despite having direct access to the two candidates.
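The two-way softmax scoring head above can be sketched numerically as follows. The random vectors stand in for BERT's actual final hidden states, and the small initialization scale is an illustrative choice.

```python
import math
import random

H = 768  # hidden size of the BERT encoder used in our implementation

def antecedent_probs(v, h_s1, h_s2):
    """Softmax over the two scores v.h_S1 and v.h_S2; returns (P1, P2)."""
    s1 = sum(a * b for a, b in zip(v, h_s1))
    s2 = sum(a * b for a, b in zip(v, h_s2))
    m = max(s1, s2)                      # subtract max for stability
    e1, e2 = math.exp(s1 - m), math.exp(s2 - m)
    return e1 / (e1 + e2), e2 / (e1 + e2)

random.seed(0)
v = [random.gauss(0, 0.01) for _ in range(H)]   # task-specific parameter
h1 = [random.gauss(0, 1) for _ in range(H)]     # stand-in hidden states
h2 = [random.gauss(0, 1) for _ in range(H)]
p1, p2 = antecedent_probs(v, h1, h2)
print(round(p1 + p2, 6))  # probabilities sum to 1
```

During training, $v$ would be updated by backpropagating the binary cross entropy between $(P_1, P_2)$ and the one-hot ground-truth label.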
\footnotetext[5]{This is an estimate based on a subsample of the data.} \subsection{Analysis by Switching Entities} Inspired by \citet{trichelair2018evaluation}, we propose to use a task-specific metric, \textit{consistency}, to measure the ability of a model to use context in its coreference prediction, as opposed to relying on gender and number cues related to the entities. Accounting for this is critical, as we desire models that can capture social, situational, or physical awareness.\par To measure consistency in the \corpusname{} corpus, we duplicate the data set but switch the candidate antecedents each time they appear in a sentence. This changes the correct resolution. If a coreference model relies on knowledge and contextual understanding, its prediction should change as well, thus it could be called \textit{consistent} in its decision process. If, however, its decision is influenced solely by the antecedent, its output would stay the same despite the change in context induced by switching. We define the \textit{consistency} score as the percentage of predictions that change from the original instances to the switched instances. An example of a switching is: \begin{exe} \ex \label{ex-2} \textbf{Original}: \{Alex\} tells \{Paulo\}, but [he] does not believe him.\\ \textbf{Switched}: \{Paulo\} tells \{Alex\}, but [he] does not believe him. \end{exe} The correct answer switches from $K=\text{Paulo}$ to $K=\text{Alex}$. \begin{table}[t] \begin{center} \begin{tabu}to\linewidth{@{}X[l,3]X[c,1]@{}} \toprule Model & Consistency \\ \midrule Rule & 0\% \\ Stat & 76\% \\ Deep-RL & 66\% \\ Latent & 78\% \\ E2E & 62\%\\ \cmidrule(r){1-1} E2E (\corpusname{}) & 66\% \\ E2E (\corpusname{}+CoNLL) & 67\% \\ BERT (\corpusname{}) & 69\% \\ \bottomrule \end{tabu} \caption{The sensitivity of various systems to the instance antecedents, according to the number of changed decisions when the antecedents are switched. 
Higher is better.} \label{tab:sensitivity} \end{center} \end{table} Table~\ref{tab:sensitivity} shows the consistency scores of the various baseline models evaluated on the original and switched duplicates of \corpusname{}. The rule-based system \cite{raghunathan2010multi} always resolves to the same entity, suggesting that context is ignored. Indeed, the mechanisms underlying this model mostly rely on a gender and number dictionary \cite{bergsma2006bootstrapping}. This dictionary informs a count-based approach that assigns a masculine, feminine, neutral, and plural score to each word. If the pronoun is \textit{his}, the candidate with the higher masculine score is likely to be linked to the pronoun.\par The other models, Stat, Deep-RL, E2E, Latent and BERT are much more robust to the switching procedure, demonstrating that the resolution partially relies on context cues. Regarding \textbf{E2E}, we can observe that training the model on \corpusname{} forces the model to rely more on the context, leading to an improvement of 5\%. It further demonstrates the usefulness of the corpus to obtain a better representation of the context. \subsection{Data Augmentation by Switching} Inspired by the switching experiment, we propose to extend the \corpusname{} training set by switching every entity pair (thereby doubling the number of instances). We hypothesize that this data augmentation trick could force the model to abstract away from the entities to the context in order to boost performance, since it encounters the same contextual scenario in the doubled sentences. 
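The switching procedure, the consistency metric, and the augmentation described above fit in a few lines of Python. The instance format used here — sentence string plus the two candidate strings and the correct antecedent — is an illustrative assumption, not the corpus's actual storage format:

```python
def switch(instance):
    """Swap the two candidate antecedents everywhere in the sentence;
    the correct answer swaps accordingly."""
    sent, a, b, label = instance
    swapped = sent.replace(a, "\0").replace(b, a).replace("\0", b)
    return (swapped, a, b, b if label == a else a)

def consistency(model, instances):
    """Fraction of predictions that change when the antecedents are
    switched; model maps an instance to a predicted antecedent string."""
    changed = sum(model(i) != model(switch(i)) for i in instances)
    return changed / len(instances)

def augment(instances):
    """Double the training set with switched copies, as in the data
    augmentation experiment."""
    return instances + [switch(i) for i in instances]
```

A model that ignores context and always picks the same entity scores a consistency of 0, exactly as the rule-based baseline does in Table~\ref{tab:sensitivity}.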
\begin{table}[t] \begin{center} \begin{tabu}to\linewidth{@{}X[l,3]X[c,1.8]X[c,0.4]X[c,2]@{}} \toprule Model & Accuracy & $\Delta$ & Consistency\\ \midrule BERT (\corpusname{}) & 71\% & +10\% & 89\% \\ E2E (\corpusname{}) & 61\% & +3\% & 71\% \\ E2E (\corpusname{}+CoNLL) & 66\% & +1\% & 75\% \\ \bottomrule \end{tabu} \caption{Accuracy on the \corpusname{} test set for each model after augmenting the training set, as well as the difference from the result without data augmentation.} \label{tab:afterswitch} \end{center} \end{table} Training on the augmented data, we observe an improvement of 10\% for fine-tuned BERT (Table~\ref{tab:afterswitch}), yielding a task-specific accuracy of 71\% on the \corpusname{} test set. The improvement in accuracy is marginal for \textbf{E2E}, but we observe a large gain in consistency. We suspected that the data augmentation trick might also be useful in mitigating a model's gender bias, by encouraging the model to rely more on the context than on gendered entity names. To test this hypothesis, we train the same model with and without the data augmentation trick on the recently released GAP corpus \cite{webster2018mind}. \begin{table}[ht] \begin{center} \begin{tabu}to\linewidth{@{}X[l,5]X[c,1.5]X[c,1.5]@{}} \toprule Model & $\frac{F_1^{F}}{F_1^{M}}$ & $F_1$ \\ \midrule Parallelism\footnotemark[6] & 0.93 & 66.9 \\ Parallelism+URL\footnotemark[6] & 0.95 & 70.6 \\ BERT (GAP) & 1.02 & 69.2 \\ BERT (GAP) + Data Aug. & \textbf{1.00} & \textbf{71.1} \\ \bottomrule \end{tabu} \caption{Performance on the GAP test set} \label{tab:afterswitch-gap} \end{center} \end{table} \footnotetext[6]{Scores reported in the original paper \cite{webster2018mind}} \begin{table*}[!ht] \small \centering \begin{tabu*}to\linewidth{@{}X[0.15]X[0.65]X[0.2]@{}} \toprule Sentence Type & Sentence & Answer \\ \midrule Original \newline Switched & {Kara} is in love with {Tanya} but she is too shy to tell [her]. 
\newline {Tanya} is in love with {Kara} but she is too shy to tell [her]. & Tanya \checkmark \newline Kara \checkmark \newline (consistently correct) \\ \midrule Original \newline Switched & {Peter} had not realised how old {Henry} was until [he] sees his daughter. \newline {Henry} had not realised how old {Peter} was until [he] sees his daughter. & Henry \xmark \newline Peter \xmark \newline (consistently incorrect) \\ \midrule Original \newline Switched & {Poulidor} was no match for {Merckx}, although [he] offered much resistance . \newline {Merckx} was no match for {Poulidor}, although [he] offered much resistance . & Poulidor \checkmark \newline Poulidor \xmark \newline (inconsistent) \\ \bottomrule \end{tabu*} \caption{Examples of various success/failure cases of BERT on the \corpusname{} test set} \label{tab:examples2} \end{table*} BERT fine-tuned on GAP achieves a state of the art $F_1$ of 71.1 after data augmentation (Table~\ref{tab:afterswitch-gap}). Not only does the augmentation improve the overall performance (+1.9) but it further balances the predictions' female:male ratio to 1:1. \subsection{Error Analysis} We show examples of BERT’s performance (trained on \corpusname{}) on our test set in Table \ref{tab:examples2}. This includes instances on which it succeeds and fails for both original and switched sentences. In general, it is not clear why certain instances are more difficult for BERT to resolve, although training BERT on the augmented, switched corpus significantly reduces the frequency of inconsistent resolutions (from 31\% to 11\%). These examples illustrate how challenging certain real-world situations can be for models to understand, compared to humans who can reason about them with ease. \section{Conclusion} We present a new corpus and task, \corpusname{}, for coreference resolution. 
Our corpus contains difficult problem instances that require a significant degree of common sense and world knowledge for accurate coreference link prediction, and is larger than previous similar datasets. Using a task-specific metric, consistency, we demonstrate that training coreference models on \corpusname{} improves their ability to build better representations of the context. We also show that progress in this capability is linked to reducing gender bias, with our proposed model setting the state of the art on GAP. \par In the future, we wish to study the use of \corpusname{} to improve performance on general coreference resolution tasks (e.g., the CoNLL 2012 Shared Tasks). We also plan to develop new models on \corpusname{} and transfer them to difficult common sense reasoning tasks. \section*{Acknowledgements} This work was supported by the Natural Sciences and Engineering Research Council of Canada and by Microsoft Research. Jackie Chi Kit Cheung is supported by the Canada CIFAR AI Chair program.
\section{Introduction} Pions, Nature's simplest hadrons, are simultaneously Nambu-Goldstone modes generated by dynamical chiral symmetry breaking in the Standard Model (SM) and bound states of first-generation light quarks and anti-quarks. This key feature explains why symmetries and their breaking play a crucial role in accounting for pions' properties. More importantly, it is also why charting and understanding pions' structure and mass distribution in terms of SM strong interactions is a cumbersome, central problem in modern physics, demanding a coherent effort both in QCD continuum and lattice calculations and in experiments shedding light on this understanding \cite{Aguilar:2019teb}. A basic quantity revealing the pion's structure is its parton distribution function, ${\mathpzc q}^{\pi}(x;\zeta)$, expressing the probability that a ${\mathpzc q}$-flavour valence quark carries a light-front momentum fraction $x$ in the pion. In particular, this density has been the object of a long controversy, since a leading-order perturbative QCD analysis of $\pi N$ Drell-Yan data (E615 experiment \cite{Conway:1989fs}) concluded that, at the relevant energy scale for the experiment, $\zeta_5$=5.2 GeV, ${\mathpzc q}^\pi(x;\zeta_5) \sim (1-x)$ when $x \to 1$; in clear contradiction with the result predicted early on from the parton model and perturbative QCD \cite{Ezawa:1974wm, Farrar:1975yb, Berger:1979du}: ${\mathpzc q}^\pi(x;\zeta_H) \sim (1-x)^2$, where $\zeta_H$ is an energy scale characteristic of nonperturbative dynamics; while QCD evolution is expected to make the exponent increase by the effect of the logarithmic running and thus become effectively $2+\gamma$, with $\gamma \gtrsim 0$, for any scale $\zeta > \zeta_H$. 
Subsequent continuum QCD calculations~\cite{Hecht:2000xa,Chang:2014lva,Ding:2019lwe} and further careful re-analyses of E615 data~\cite{Wijesooriya:2005ir, Aicher:2010cb}, including soft-gluon resummation, have recently reported results consistent with an exponent equal to 2+$\gamma$; while other calculations disregarding symmetry-preserving diagrams~\cite{Bednar:2018mtf}, or data analyses not yet including relevant threshold resummation effects~\cite{Barry:2018ort}, claimed an exponent close to 1. As discussed in Ref.~\cite{Ding:2019lwe}, two key issues in determining the pion's parton distribution function, ${\mathpzc q}^\pi(x;\zeta)$, are: (i) accounting, beyond the impulse approximation, for a class of corrections to the handbag-diagram representation of the virtual-photon-pion forward Compton scattering amplitude, restoring basic symmetries in the calculation of parton distributions~\cite{Chang:2014lva,Mezrag:2014jka}; and (ii) dealing adequately with the QCD evolution of these parton distributions, from the nonperturbative scale at which they have been obtained up to one accessible to experiment. In the following, we will briefly address (i) and elaborate further on (ii), particularly in connection with a recently proposed process-independent effective charge~\cite{Binosi:2016nme,Rodriguez-Quintero:2018wma}. \section{The pion parton distribution function} The pion's parton distribution function can be obtained from knowledge of the dressed light-quark propagator and pion Bethe-Salpeter amplitude (BSA), computed by solving the appropriate Dyson-Schwinger and Bethe-Salpeter equations (BSE). In order to keep a natural connection between the renormalisation scale and the reference scale for QCD evolution, the Dyson-Schwinger equations (DSE) should be renormalised at a typical hadronic scale, $\zeta_H$, where the dressed quasiparticles become the correct degrees-of-freedom~\cite{Gao:2017mmp,Ding:2018xwy}. 
Within this DSE and BSE approach but employing algebraic {\it ans\"atze}, a first study in Ref.~\cite{Chang:2014lva} yielded some new insight into the calculation by identifying the above-mentioned symmetry-preserving corrections, eventually leading to \begin{equation} {\mathpzc q}^\pi(x;\zeta_H) = N_c {\rm tr}\! \int_{dk}\! \delta_n^{x}(k_\eta) \; n\cdot\partial_{k_\eta} \left[ \Gamma_\pi(k_\eta,-P) S(k_\eta) \right] \Gamma_\pi(k_{\bar\eta},P)\, S(k_{\bar\eta})\,, \label{qFULL} \end{equation} after implementing the appropriate truncation; where $\int_{dk}:=\int \frac{d^4k}{(2\pi)^4}$ is a Poincar\'e-invariant regularisation of the integral, $\delta_n^{x}(k_\eta):= \delta(n\cdot k_\eta - x n\cdot P)$; $n$ is a light-like four-vector, $n^2=0$, $n\cdot P = -m_\pi$; and $k_\eta = k + \eta P$, $k_{\bar\eta} = k - (1-\eta) P$, $\eta\in [0,1]$; $\Gamma_\pi$ is the pion BSA, $S(k)$ is the dressed light-quark propagator, the trace is taken over spinor indices with $N_c$=3, such that, if the BSA is canonically normalised, then $\int_0^1 dx {\mathpzc q}^\pi(x;\zeta_H)$=1. Owing to Poincar\'e covariance, no observable can be expected to depend on $\eta$, \emph{i.e.}, the definition of the relative momentum, and this can be algebraically proved from~Eq.~\eqref{qFULL}. Another important property of~Eq.~\eqref{qFULL}, which can be made apparent after straightforward algebra, is ${\mathpzc q}^\pi(x;\zeta_H)={\mathpzc q}^\pi(1-x;\zeta_H)$, a consequence of the bound system being described in terms of two identical dressed quasiparticles, in the isospin-symmetric limit. Then, in a further recent work~\cite{Ding:2018xwy}, realistic numerical solutions of both DSE and BSE have been applied to compute the first six Mellin moments of the valence-quark parton distribution, derived from Eq.~\eqref{qFULL} as follows \begin{equation} \langle x^m \rangle^\pi_{\zeta_H} = \int_0^1dx\, x^m {\mathpzc q}^\pi(x;\zeta_H) = \frac{N_c}{n\cdot P} {\rm tr}\! \int_{dk}\! 
\left[\frac{n\cdot k_\eta}{n\cdot P}\right]^m \Gamma_\pi(k_{\bar\eta},P)\, S(k_{\bar\eta}) n\cdot\partial_{k_\eta} \left[ \Gamma_\pi(k_\eta,-P) S(k_\eta) \right] \; ; \label{MellinMoments} \end{equation} the Schlessinger point method (SPM) has then been used to extend this set of moments and thus obtain a reliable approximant for any moment; and, finally, the SPM approximant has been applied for the reconstruction of the valence-quark distribution, ${\mathpzc q}^\pi(x;\zeta_H)$\,\cite{Ding:2018xwy}. The parton distribution is therefore fully determined, within this approach, by the interaction kernel specified for both the quark-gap and Bethe-Salpeter equations. An alternative approach results from the so-called overlap representation, in which the forward limit of the generalised parton distribution gives\,\cite{Burkardt:2002uc,Diehl:2003ny} \begin{equation}\label{eq:qfromLFWF} {\mathpzc q}^\pi(x;\zeta_H) = \int \frac{d^2 {\bf k}_\perp}{16\pi^3} \; | \psi(x,{\bf k}_\perp^2;\zeta_H) |^2 \end{equation} for the valence-quark parton distribution in terms of the lowest Fock-space light-front wave function (LFWF) at the hadronic scale, $\psi(x,{\bf k}_\perp^2;\zeta_H)$; its leading-twist contribution resulting from the Bethe-Salpeter wave function, $\chi_\mu(k+q,k)=S(k+q)\Gamma_\pi (k+q,k) S(k)$, as \begin{equation}\label{eq:LFWF} f_\pi \psi(x,{\bf k}^2_\perp) = \mbox{\rm tr}_{\rm CD} \int \frac{d{\bf k}_\parallel}{\pi} \; \delta_n^{x}(k) \gamma_5 \gamma \cdot n \; \chi_\mu(k-\frac{P}{2},P)\; , \end{equation} where $f_\pi$ is the pion's leptonic decay constant and the trace is here applied over color and spinor indices. As both the quark propagator and the BSA are in hand, basic ingredients for the realistic computation made in Ref.~\cite{Ding:2018xwy}, Eqs.~(\ref{eq:qfromLFWF},\ref{eq:LFWF}) can also be implemented to get a realistic estimate for the parton distribution within the DSE approach. 
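The Mellin moments of Eq.~\eqref{MellinMoments} are easy to illustrate numerically. The sketch below computes $\langle x^m\rangle$ for a toy hadronic-scale distribution that shares the unit normalisation and the $x\leftrightarrow 1-x$ symmetry of the DSE result; the functional form $q(x)=30\,x^2(1-x)^2$ is purely illustrative, not the computed distribution:

```python
def mellin_moment(q, m, n=20000):
    """<x^m> = integral_0^1 x^m q(x) dx, composite midpoint rule."""
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** m * q((i + 0.5) * h) for i in range(n)) * h

def q_model(x):
    """Toy symmetric hadronic-scale distribution, q(x) = 30 x^2 (1-x)^2;
    normalised to 1 and symmetric under x <-> 1-x like the DSE result."""
    return 30.0 * x * x * (1.0 - x) ** 2
```

The symmetry $q(x)=q(1-x)$ forces $\langle x\rangle = 1/2$ at $\zeta_H$, i.e. each dressed quasiparticle carries half of the pion's light-front momentum before evolution.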
Alternatively, one can follow the approach of Ref.~\cite{Xu:2018eii} and use an appropriate Nakanishi representation of the BSA, such that the LFWF eventually results from a closed expression only involving compact integrals of the so-called Nakanishi weight, a distribution defined on the support $[-1,1]$. Then, this distribution can be adjusted to reproduce the same SPM-approximant Mellin moments of Ref.~\cite{Ding:2018xwy} and, as can be seen in Fig.~\ref{fig:PDFs}, almost pointwise identical parton distributions at the hadronic scale result from both approaches. One is thus left with a realistic estimate of the LFWF which can be subsequently applied to compute the generalised parton distribution function~\cite{Rayaetal}. \begin{figure}[t!] \begin{minipage}{18pc} \includegraphics[width=18pc]{PDFs.png} \vspace*{-0.75cm} \caption{\label{fig:PDFs}{\small Predicted parton distribution function from Eq.~\eqref{MellinMoments} (brown)\,\cite{Ding:2019lwe} and from Eq.~\eqref{eq:qfromLFWF} here (red) and in Ref.~\cite{Chouika:2017rzs} (blue dashed) with an algebraic model, both at the hadronic scale $\zeta_H$; then evolved up to $\zeta_5$ (red dashed), as explained in the text, and successfully compared to reanalysed E615 data\,\cite{Wijesooriya:2005ir,Aicher:2010cb} (blue circles).}} \end{minipage}\hspace{2pc}% \begin{minipage}{16pc} \includegraphics[width=16pc]{alpha-corr.pdf} \vspace*{-0.75cm} \caption{\label{fig:aPI}{\small Predicted PI effective charge as obtained in Ref.\,\cite{Binosi:2016nme} (dot-dashed blue) and improved in Ref.\,\cite{Rodriguez-Quintero:2018wma}, compared to the world's data of the Bjorken sum-rule charge and to the light-front holographic model (red dotted) canvassed in Ref.~\cite{Deur:2016tte}.}} \end{minipage} \end{figure} \section{DGLAP evolution of hadron structure functions} Once the parton distribution has been obtained at the hadronic scale, $q^\pi(x;\zeta_H)$, one should employ the QCD evolution equations to evolve it up to the relevant 
scale for E615 and thus obtain $q^\pi(x;\zeta_5)$. The equations describing the scale violations of the hadron structure functions read \begin{eqnarray} \left\{ \zeta^2 \frac{d}{d\zeta^2} \ - \ \frac{\alpha(\zeta^2)}{4\pi} \int_x^1 \; \frac{dy}{y} \left( \begin{array}{ccc} \displaystyle P_{qq}^{NS}\left(\frac x y\right) & 0 & 0 \\ 0 & P_{qq}^{S}\left(\frac x y\right) & \displaystyle 2 n_f P_{qG}^{S}\left(\frac x y\right) \\ 0 & \displaystyle P_{Gq}^{S}\left(\frac x y\right) & \displaystyle P_{GG}^{S} \left(\frac x y\right) \end{array} \right) \right\} \; \left( \begin{array}{c} {\mathpzc q}^{NS}(y;\zeta) \\ q^{S}(y;\zeta) \\ G^{S}(y;\zeta) \end{array} \right) \ = \ 0 \ , \label{eq:master} \end{eqnarray} written in terms of integral equations, where ${\mathpzc q}^{NS}$ stands for the non-singlet (pure valence) quark distribution, while $q^S=\sum_{\mathpzc q} {\mathpzc q}+{\overline {\mathpzc q}}$ and $G^S$ represent, respectively, the singlet quark and gluon distribution functions in the pion; the elements of the matrix correspond to the so-called splitting functions, as given at leading order in Ref.~\cite{Altarelli:1977zs}, and $\alpha(\zeta^2)$ is the strong running coupling. 
Then, if the $m$-th order Mellin moment is considered, one is left with \begin{eqnarray} \left\{ \zeta^2 \frac{d}{d\zeta^2} + \frac{\alpha(\zeta^2)}{4\pi} \left( \begin{array}{ccc} \gamma_{0,qq}^{NS,(m)} & 0 & 0 \\ 0 & \gamma_{0,qq}^{S,(m)} & 2 n_f \gamma_{0,qG}^{S,(m)} \\ 0 & \gamma_{0,Gq}^{S,(m)} & \gamma_{0,GG}^{S,(m)} \end{array} \right) \right\} \left( \begin{array}{c} \langle x_q^m \rangle_{\zeta}^{NS} \\ \langle x_q^m \rangle_{\zeta}^{S} \\ \langle x_G^m \rangle_{\zeta}^{S} \end{array} \right) \ = \ 0 \ ; \label{eq:anomalous} \end{eqnarray} where the coefficients for the anomalous dimensions of the Mellin moments, as defined in \eqref{MellinMoments}, result from $\gamma_{0,ij}^{k,(m)} = - \; \int_0^1 \; dx \; x^m P_{0,ij}^k(x)$ with $i,j$=$q,G$ and $k$=$S,NS$; and can also be found in Ref.~\cite{Altarelli:1977zs} (see Eqs.~(71-74)). The first row in the matrix of equation \eqref{eq:anomalous} leaves us with the standard one-loop DGLAP valence-quark evolution equation. The 2$\times$2 non-diagonal matrix block in \eqref{eq:anomalous} describes the evolution of the singlet components and also makes apparent how gluons and quarks become coupled. 
Indeed, one only needs to deal with the eigenvalue problem for the matrix in Eq.~\eqref{eq:anomalous}, and its solutions can be formally written as \begin{eqnarray} \left( \begin{array}{cc} 1 & 0 \\ 0 & {\bf P}^{-1} \end{array} \right) \left( \begin{array}{c} \langle x_q^m \rangle_{\zeta}^{NS} \\ \langle x_q^m \rangle_{\zeta}^{S} \\ \langle x_G^m \rangle_{\zeta}^{S} \end{array} \right) \ = \ \exp{\left( - \Gamma_D^{(m)} \int_{\ln{\zeta_0^2}}^{\ln{\zeta^2}} dt \; \frac{\alpha(e^t)}{4\pi} \right)} \left( \begin{array}{cc} 1 & 0 \\ 0 & {\bf P}^{-1} \end{array} \right) \left( \begin{array}{c} \langle x_q^m \rangle_{\zeta_0}^{NS} \\ \langle x_q^m \rangle_{\zeta_0}^{S} \\ \langle x_G^m \rangle_{\zeta_0}^{S} \end{array} \right) \ , \label{eq:solution} \end{eqnarray} where $\Gamma_D^{(m)}={\rm Diag}( \gamma_{0,qq}^{NS,(m)},\lambda_{+}^{(m)},\lambda_{-}^{(m)})$ is the matrix of eigenvalues for Eq.~\eqref{eq:anomalous} and ${\bf P}$ is the matrix which diagonalizes its 2$\times$2 non-diagonal block and, eventually, couples singlet quark and gluon distributions. At leading order, $\alpha(\zeta)$ is taken from the integration of the 1-loop $\beta$-function and \eqref{eq:solution} can thus be written in terms of simple analytic expressions featuring the logarithmic running of the moments from $\zeta_0$ up to $\zeta$, both scales lying in the perturbative domain. However, our aim here (as it was in Ref.~\cite{Ding:2019lwe}) is to evolve the valence-quark parton distribution, obtained at a naturally nonperturbative hadronic scale where the pion is only a bound state of a dressed quark and a dressed antiquark, up to larger energy scales. 
To this end, one can go beyond the leading-order approximation by recognising that an effective charge for the strong coupling in Eq.~\eqref{eq:master} can be defined such that the higher-order corrections are optimally suppressed therein and then, in order to make predictions, by assuming a phenomenological correspondence of this charge with a well-known effective coupling. \section{The interaction kernel and the process-independent effective strong coupling} The interaction kernel used to get realistic solutions of the DSE gap equation for the quark propagator and for the BSA in Ref.~\cite{Ding:2019lwe}, and thus to compute the valence-quark parton distribution, is the one explained in Refs.~\cite{Qin:2011dd,Qin:2011xq}. This interaction has been found to coincide, in the infrared domain and within the error uncertainties, with a renormalisation-group-invariant (RGI) running-interaction resulting from contemporary studies of QCD's gauge sector\,\cite{Binosi:2014aea}, \begin{equation} \label{allhatd} \mathcal{I}(k^2) \ = \ k^2 \widehat{d}(k^2) \ = \ \frac{\alpha_{\rm T}(k^2)}{[1-L(k^2;\zeta^2)F(k^2;\zeta^2)]^2}\, , \end{equation} with $\zeta$ still standing for the renormalisation scale; $\widehat{d}(k^2)$ is an RGI function, owing to a sensible rearrangement of the diagrammatic DSE expansion of the involved QCD Green's functions, within the approach given by the pinch technique and background field method (PTBFM)~\cite{Cornwall:1981zr,Binosi:2009qm}, as discussed in Ref.\,\cite{Aguilar:2009nf}; $F$ is the dressing function for the ghost propagator; $L$ is a longitudinal piece of the gluon-ghost vacuum polarisation, playing a key role within the context of the PTBFM approach, which vanishes at $k^2=0$\,\cite{Aguilar:2009nf}; and $\alpha_{\rm T}$ stands for the strong running coupling derived from the ghost-gluon vertex \cite{Sternbeck:2007br,Boucaud:2008gn,Aguilar:2009nf}, also called the ``Taylor coupling'' 
\cite{Blossier:2011tf,Blossier:2012ef,Blossier:2013ioa}. Further, with this running interaction as a basic ingredient, the process-independent (PI) effective coupling, \begin{equation} \widehat{\alpha}_{\rm PI}(k^2) = \widehat{d}(k^2) / \mathpzc{D}(k^2) \label{widehatalpha} \end{equation} has been introduced and discussed in Refs.~\cite{Binosi:2016nme,Rodriguez-Quintero:2018wma,Binosietal}, where $1/{\mathpzc D}(k^2)$ is a mass-dimension-two RGI function defined from the gluon two-point Green's function, as explained therein. As can be seen in Fig.~\ref{fig:aPI}, this PI coupling is also found to describe well the world's data for the process-dependent Bjorken sum-rule effective charge (the roots of this striking coincidence, which opens a window for a direct experimental measure of the PI charge defined on QCD's gauge sector, are largely discussed in Ref.~\cite{Binosi:2016nme}). Following Ref.\,\cite{Binosietal}, the coincidence of the PI effective coupling and the DSE interaction kernel, within the infrared domain, supports the assumption that the parametrisation \begin{equation}\label{eq:newalpha} \frac{\alpha(\zeta^2)}{4\pi} = \left(\beta_0 \ln{\left(\frac{m_\alpha^2 + \zeta^2}{\Lambda^2_{\rm QCD}}\right)}\right)^{-1} \ , \end{equation} introduced in Ref.~\cite{Ding:2019lwe}, is the best candidate for the effective charge in Eq.~\eqref{eq:master}, where $\Lambda_{\rm QCD}$=234 MeV and $m_\alpha$=300 MeV are chosen to make it coincide with the PI effective coupling in the infrared and connect smoothly with the pQCD tail of the interaction kernel in the ultraviolet. Then, as discussed in Ref.\,\cite{Ding:2019lwe}, $m_\alpha \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}} \Lambda_{\rm QCD}$ is a nonperturbative scale screening the soft gluon modes from interaction, which can be naturally identified with the hadronic scale $\zeta_H$. 
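The parametrisation in Eq.~\eqref{eq:newalpha} is simple enough to evaluate directly. A minimal numerical sketch follows, assuming the standard one-loop coefficient $\beta_0 = 11 - 2n_f/3$ with $n_f=4$ (the precise $\beta_0$ convention is our assumption, as it is not spelled out above):

```python
import math

LAMBDA_QCD = 0.234                 # GeV, from the text
M_ALPHA = 0.300                    # GeV, gluon screening mass ~ zeta_H
BETA0 = 11.0 - 2.0 / 3.0 * 4.0     # one-loop coefficient, n_f = 4 (assumed convention)

def alpha_eff(zeta2):
    """Effective charge of Eq. (eq:newalpha), in GeV^2 units for zeta2;
    finite at zeta^2 -> 0 because m_alpha screens the would-be Landau pole."""
    return 4.0 * math.pi / (BETA0 * math.log((M_ALPHA ** 2 + zeta2) / LAMBDA_QCD ** 2))

def evolution_integral(zeta_h, zeta, n=10000):
    """The exponent integral of Eq. (eq:solution): int dt alpha(e^t)/(4 pi)
    from t = ln(zeta_h^2) to t = ln(zeta^2), by the midpoint rule."""
    t0, t1 = math.log(zeta_h ** 2), math.log(zeta ** 2)
    h = (t1 - t0) / n
    return sum(alpha_eff(math.exp(t0 + (i + 0.5) * h)) for i in range(n)) * h / (4.0 * math.pi)
```

With these numbers the charge saturates at $\alpha(0)\approx 3$ and falls to $\approx 0.24$ at $\zeta_5=5.2$ GeV, so the evolution integral stays finite all the way down to $\zeta_H=m_\alpha$.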
Therefore, plugging \eqref{eq:newalpha} into \eqref{eq:solution}, the valence-quark parton distribution can be unambiguously evolved from $\zeta_H=m_\alpha$ up to $\zeta_5$, and then successfully compared with the reanalysed E615 data~\cite{Wijesooriya:2005ir, Aicher:2010cb} (see Fig.~\ref{fig:PDFs}). Furthermore, at the hadronic scale, the pion is a two-valence-body bound-state with no explicit gluon or sea-quark contribution to the singlet distributions, only from valence dressed-quarks. Thus, QCD evolution effectively driven by \eqref{eq:newalpha} applied in \eqref{eq:solution} can be employed to estimate gluon and sea-quark distributions at any larger energy scale accessible to experiment. The case for the $m=1$ Mellin moment (momentum fraction average) is particularly simple and the singlet distributions from \eqref{eq:solution} can be recast as ($N_f$=4) \begin{eqnarray} \langle x_q \rangle^{S}_\zeta = \frac 3 7 + \frac 4 7 \exp{\left( - \frac{56}{36\pi} \int_{\ln{\zeta_H^2}}^{\ln{\zeta^2}} dt \; \alpha(t) \right)} \, , \; \; \; \langle x_G \rangle^{S}_\zeta = \frac 4 7 \left[ \rule[0cm]{0cm}{0.75cm} 1 - \exp{\left( - \frac{56}{36\pi} \int_{\ln{\zeta_H^2}}^{\ln{\zeta^2}} dt \; \alpha(t) \right)} \right] ; \label{eq:solMm1} \end{eqnarray} which make apparent that, for $\Lambda_{\rm QCD}^2/\zeta^2 \to 0$, sea-quark and gluon momentum fractions tend logarithmically to 3/7 and 4/7, respectively, while the valence-quark fraction tends to 0\,\cite{Altarelli:1981ax}. \section{Conclusions} We conclude by briefly summarising. Continuum predictions for the pointwise behaviour of the pion's distribution functions for valence-quarks, gluons and sea are now consistently available for the first time, obtained within a Dyson-Schwinger-equations' approach and owing to the implementation of a symmetry-preserving interaction kernel. 
To this end, we capitalised on QCD's process-independent effective charge, driving the QCD evolution from a nonperturbative scale, unambiguously defined by the freeze-out of interacting gluons below their dynamical mass, up to any larger scale accessible to experiment. This leads to a parameter-free prediction of the pion's valence-quark distribution function that is in agreement with a modern analysis of the E615 data. The approach herein sketched can potentially be applied to extend the calculations to the spin-dependent structure functions and, beyond the kinematic forward limit, to the generalised parton distributions. \section*{Acknowledgements} This discussion is based on work completed by an international collaboration involving many remarkable people, to all of whom we are greatly indebted, and it is in connection with other contributions in this volume, e.g. Craig D. Roberts'. J. R-Q would like to express his gratitude to the organisers of the 27th International Nuclear Physics Conference (INPC 2019), who made possible his participation in a meeting that was both enjoyable and fruitful. The work has been partially supported by the Spanish ministry research project FPA2017-86380 and by the Jiangsu Province {\it Hundred Talents Plan for Professionals}. \section*{References} \bibliographystyle{iopart-num}
\section{Introduction} Transitional disks (TDs) are circumstellar disks that exhibit inner clearings or gaps induced by physical processes such as photoevaporation, grain growth, and dynamical interactions with stellar companions or candidate planets. This causes a wide range of spectral energy distribution (SED) properties to be observed \citep[][and references therein]{Williams2011}. Disks are identified as transitional if they have no or small near-infrared excess, steep slopes in the mid-infrared, and large far-infrared excesses \citep[e.g.][]{Merin2010}. However, the different definitions used sometimes make their identification problematic. In fact, similar SEDs can be reproduced by a number of environments, including background objects and nebulosity in the source surroundings, which can contaminate and bias the sample of known transitional disks. In addition to this, asymptotic giant branch stars and classical Be stars can easily be mistaken for TDs with significant flux deficit at all wavelengths and $\alpha_{excess}$$<$0 \citep[e.g.][]{Cieza2010}, and SEDs of edge-on protoplanetary disks can look like those of TDs with a sharp rise in the mid-IR \citep{Merin2010}. See the review by \cite{Williams2011} and references therein for a more detailed description. \\ \indent In order to accomplish a more thorough characterization of this class of young objects, there is a need to examine their fluxes at far-infrared wavelengths. \emph{Herschel}, with its improved spatial resolution, can greatly serve the purpose of ruling out sources affected by contamination. In the Chamaeleon I star-forming region (Cha I), the young object T54 (also \object{NAME HM Anon}) is one of 8 candidate transitional disks \citep[][]{Manoj2011}. The star has been reported as spectral type G8 \citep{Luhman2007}, a weak-lined T Tauri star (WTTS) \citep{Nguyen2012}, with a visual extinction of 1.78 mag and a luminosity of 4.1 L$_{\odot}$ \citep{Kim2009}. 
T54 is a known subarcsecond binary \citep{Ghez1997,Lafreniere2008}. The companion is located at a projected separation of 0$\farcs$247 (43\,AU) and position angle (PA) of $246.5^{\circ}$. Resolved optical spectra by \citet{Nguyen2012} suggest that the optical luminosity of the system is dominated by the primary component, although we cannot rule out that the IR excess at the position of T54 discussed later partially originates from the secondary. \section{Observations}\label{Observations} \subsection{Herschel} The Chamaeleon I region was observed by the \emph{Herschel} Space Observatory \citep{Pilbratt2010} as part of the Gould Belt Survey \citep{Andre2010}. A detailed description of the observations can be found in \citet{Winston2012}. Observations used in this paper have obsids 1342213178, 1342213179 (22 Jan. 2011) for parallel-mode PACS \citep{Poglitsch2010} 70 and 160 $\mu$m and SPIRE \citep{Griffin2010} 250 and 350 $\mu$m bands. Additional PACS observations were obtained at 100 $\mu$m (obsids 1342224782, 1342224783, 1342225003, and 1342225004; dates 27 Jul. 2011 and 1 Aug. 2011). The data were reduced using HIPE \citep{Ott2010} version 8.2.0 and processed using the Scanamorphos software v14 \citep{Roussel2012} for PACS, and the Scan Map Destriper pipeline in HIPE for SPIRE. The astrometry of the PACS images was refined using 2MASS PSC positions of nearby point sources through the Astrometrical Calibration tool in the Aladin v7 software \citep{Aladdin}. Photometry was extracted using the HIPE \emph{annularSkyAperturePhotometry} task, which performs aperture photometry with background subtraction. For PACS, aperture corrections were applied and photometric errors were estimated as specified in the PACS Point-Source Flux Calibration Technical Note from April 2011. For SPIRE, the same was applied using apertures and corrections from Section 5.7.1.2 of the SPIRE Data Reduction Guide, with a conservative uncertainty of $\pm$10\%. 
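The background-subtracted aperture photometry used above follows a standard pattern: sum the pixels inside the aperture and remove a per-pixel sky level estimated in a surrounding annulus. The sketch below is an illustrative numpy re-implementation of that idea, not the actual HIPE \emph{annularSkyAperturePhotometry} code:

```python
import numpy as np

def annular_sky_aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
    """Aperture sum minus a background estimated in a sky annulus.

    image : 2-D array of pixel fluxes; (x0, y0) source centre in pixels;
    r_ap aperture radius; r_in, r_out sky annulus radii (all in pixels).
    """
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= r_ap
    annulus = (r >= r_in) & (r <= r_out)
    sky_per_pixel = np.median(image[annulus])   # robust background level
    return image[aperture].sum() - sky_per_pixel * aperture.sum()
```

Aperture corrections (for the flux outside the finite aperture) would then be applied on top of this raw value, as described in the calibration notes cited above.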
\subsection{Ancillary data} Ancillary images were retrieved in order to check, through visual inspection, the possible presence of counterparts for the extended emission around T54 discovered by \emph{Herschel}. The FORS 1 H$\alpha$ image (PROG ID 075.C-0809(A), 26 Jul. 2005) was obtained from the \emph{ESO Archive}, while 2MASS \emph{All-Sky Release Survey} images (25 Jan. 2000) were obtained from the 2MASS \emph{Interactive Image Service}. IRAC (AOR key 20014592, 16 May 2007) and MIPS (AOR key 5706752, 24 Dec. 2009) Post BCD images were obtained from the \emph{Spitzer Heritage Archive}. The IRS spectrum (AOR key 12695552) was extracted from the \emph{Spitzer Heritage Archive} and reduced with SPICE v2.5.0. The short low slit (5.2--14.5$\,\mu$m) included only T54, while the long low slit (14.0--38.0$\,\mu$m) included both T54 and 2MASS J11124076-7722378. While the PSFs slightly overlap, the sources are resolved enough that an uncontaminated spectrum of T54 could be extracted by using a narrow aperture. \section{Results} \begin{figure*} \centering \includegraphics[width=1.00\hsize]{PACS70img.eps} \caption{Images of T54 at different wavelengths. North is up, east is left. The bottom row images show a region twice as large as the other ones. The diameter of the circles in the bottom left of each image represents the FWHM of the PSF for the different observations. The northern cross is centered at the position of T54 from the 2MASS \emph{Point Source Catalog}, while the southern one corresponds to the position of 2MASS J11124076-7722378. Arrows indicate the location of extended emission as detected at 70 $\mu$m, and the dashed ellipse represents the same in all other images. Non-labelled objects represent artifacts. \label{fig:1}} \end{figure*} Figure \ref{fig:1} presents \emph{Herschel} and ancillary images centered at $RA_{2000} = 11^h12^m42 \fs029$ $Dec_{2000} = -77^{\circ}22'28\farcs58$ to display the source and its surroundings. 
T54 is clearly detected as a point source (top-left cross in Fig. \ref{fig:1}) at all optical, near- and mid-IR wavelengths. Extended emission is discovered in the PACS 70 $\mu$m image at a distance of $\sim$6$''$ and PA of 196$^{\circ}$ from T54. In the 100 $\mu$m image we observe emission in an elongated shape whose photocenter is offset from the star. This off-source emission is unique to T54: it was not found in any \emph{Herschel} images of the other known transitional disks in the Chamaeleon I and II regions (Ribas et al., in prep.). In the 160, 250 and 350 $\mu$m images, we also observe extended emission centered off source. However, we note that the increased PSF size makes it impossible to rule out a possible contribution from the nearby point source 2MASS J11124076-7722378 (bottom-right cross in Fig. \ref{fig:1}) at PA 203$^{\circ}$ and a distance of 16$\arcsec$ from T54. Inspection of 2MASS color-color and color-magnitude diagrams shows that this source has very extreme colors compared to those of young Cha I members, indicating that it is likely an unrelated background object. No detection was obtained at SPIRE 500 $\mu$m, due to the presence of strong emission from the environment of the Cha I cloud. \begin{table} \caption{\emph{Herschel} photometry} \label{tab:1} \centering \begin{tabular}{l c c c} \hline \hline\rule{0mm}{3mm} Instrument & Wavelength ($\mu$m) & Flux (Jy) & Aperture radius ($\arcsec$) \\ \hline\rule{0mm}{3mm} PACS & 70 (star-only) & 0.23$\pm$0.03 & 4 \\ \rule{0mm}{3mm} PACS & 70 (offset em.-only) & 0.22$\pm$0.03 & 4 \\ \rule{0mm}{3mm} PACS & 70 (star+offset em.)
& 0.48$\pm$0.03 & 8 \\ \rule{0mm}{3mm} PACS & 100 & 0.61$\pm$0.02 & 8 \\ \rule{0mm}{3mm} PACS & 160 & 1.10$\pm$0.11 & 22 \\ \rule{0mm}{3mm} SPIRE & 250 & 0.49$\pm$0.05 & 22 \\ \rule{0mm}{3mm} SPIRE & 350 & 0.24$\pm$0.02 & 30 \\ \hline\rule{0mm}{3.0mm} \end{tabular} \end{table} Table \ref{tab:1} presents the \emph{Herschel} aperture-corrected fluxes, together with the apertures used. The background was estimated in sky annuli with radii of 25$\arcsec$ and 40$\arcsec$ for all PACS images, and 60$\arcsec$ and 90$\arcsec$ for SPIRE, and subtracted. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{T54_SED.eps}} \caption{Spectral energy distribution of T54. Open circles are observed optical, DENIS, 2MASS, WISE PRC, \textit{Spitzer} IRAC and MIPS 24 $\mu$m fluxes from the literature. Solid black circles are the fluxes dereddened with an A$_V$ of 1.78 mag using the law from \cite{Weingar2001}. The red line is the \textit{Spitzer} IRS spectrum. The dashed line is the G8 NEXTGEN stellar model that best fits the optical photometry. At 70 $\mu$m, the PACS fluxes displayed are from the star only (red solid circle), from the extended emission only (blue solid triangle) and from both (green solid circle). Longer wavelength \emph{Herschel} flux values, not originating at the star position, are shown as blue solid triangles. The green solid line represents a T = 94 K blackbody curve fit to the mid-IR fluxes. \label{fig:2}} \end{figure} Figure \ref{fig:2} presents the SED of the object updated with the values from Table \ref{tab:1}. We display dereddened fluxes obtained from the literature, and the best-fit NEXTGEN stellar model \citep{Hauschildt1999,Allard2000} for spectral type G8. The SED shows little or no infrared excess at wavelengths $\lesssim$10 $\mu$m. The mid-IR excess at wavelengths up to 70 $\mu$m (red solid circle) is attributed to the unresolved circumstellar environment at the star position only, as confirmed by the \textit{Spitzer} MIPS 24 $\mu$m image.
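The single-temperature blackbody description of the mid-IR excess used in the text (yielding T $\approx$ 94 K) can be illustrated with a short fit (a sketch on synthetic fluxes generated from a 94 K Planck curve; the wavelengths, scale factor and flux values are assumptions, not the photometry of Table 1):

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants

def planck_nu(nu, temp):
    """Planck function B_nu(T) in cgs units."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * temp))

def bb_model(wave_um, temp, scale):
    """Scaled blackbody flux at wavelengths given in microns."""
    nu = C / (wave_um * 1e-4)               # micron -> cm -> Hz
    return scale * planck_nu(nu, temp)

# Hypothetical mid-IR excess fluxes generated from a 94 K Planck curve
# (illustrative only; NOT the measured fluxes of Table 1)
wave = np.array([24.0, 35.0, 70.0])         # micron
flux = bb_model(wave, 94.0, 1e10)

(t_fit, s_fit), _ = curve_fit(bb_model, wave, flux, p0=(120.0, 5e9))
# t_fit recovers the input dust temperature of 94 K
```

The fractional luminosity L$_{IR}$/L$_{\ast}$ then follows from the ratio of the integral under the fitted blackbody to the integral of the photospheric model.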
The green solid circle in the SED represents the PACS 70 $\mu$m flux from both the extended emission observed in the corresponding image (Fig. \ref{fig:1}) and the star position. At longer wavelengths we cannot attribute the measured fluxes to the star, but mostly to the nearby extended emission and possible additional contamination from 2MASS J11124076-7722378 (triangles in Fig. \ref{fig:2}). Hence, a significant part of the far-infrared excess seen in the SED of T54 (about half at 70 $\mu$m, likely more at increasing wavelengths) does not originate from the star position, but from nearby extended emission resolved by \emph{Herschel}. If we assume that the remaining downscaled excess arises from thermal emission from dust grains, a single-temperature blackbody fit yields a dust temperature of approximately 94 K. We estimate the luminosity of the star by integration of the photospheric fluxes as $\sim$3.66 L$_{\odot}$, and the luminosity of the dust from integration under the mid-IR blackbody curve. This leads to a fractional luminosity of L$_{IR}$/L$_{\ast}$ = 0.005. We finally use the pre-main-sequence tracks from \citet{Palla1999} in the census of Chamaeleon I by \citet{Luhman2004} to estimate the age and mass of T54. We find that the system is between 3 and 10 Myr old, with a mass of approximately 1.5 M$_{\odot}$. This is consistent with the age spread in the Chamaeleon I region found by \citet{Luhman2004}, in which objects over a range of masses have ages between 1 and 10 Myr. However, they also find a median age of $\sim$2 Myr, implying that T54 is more evolved than most sources in the cluster. \section{Discussion} \label{discussion} In this section we discuss previous information about T54 from the literature in the context of the new \textit{Herschel} data. The main result is the discovery of extended emission contaminating the far-infrared part of the SED (discussed in Sect.
\ref{4.1}), which scales down the total excess attributed to the source significantly. This potentially changes the nature of T54: in Sect. \ref{4.2} we draw a comparison with similar objects, and then conclude in Sect. \ref{4.3} by considering the new characteristics of the system if a disk is still present. \subsection{Far-infrared extended emission} \label{4.1} In the PACS 70 $\mu$m image we clearly resolve extended emission with its flux peaking at a projected distance of 1040 AU and PA of 196$^{\circ}$ from the star. At longer wavelengths this emission is not resolved, but most of the flux comes from its position. The presence of nebulous emission has been reported previously in the literature. Both \citet{Gauvin1992} and \citet{Spangler2001} report the emission from the source as being extended at IRAS and ISO wavelengths. Furthermore, the presence of a reflection nebula (referred to as \object{GN11.11.2} or \object{BRAN341E}) was reported in a study by \citet{Brand1986}, and its position is consistent with the emission discovered. We conclude that about half of the excess at 70 $\mu$m is not to be associated with a transitional disk at the star position but with extended emission offset from it, likely associated with a small reflection nebula. At longer \emph{Herschel} wavelengths the latter is not resolved, but most of the excess seems to originate from it, and not from the position of the system. These results reduce the total excess flux attributed to T54 and thus modify our view of its nature. \subsection{Comparison with other objects} \label{4.2} The system is classified as non-accreting due to its H$\alpha$ line in absorption \citep{Nguyen2012, Feigelson1989, Walter1992}, with the latter, however, indicating substantial filling in of the line, as in the case of the young object DoAr 21 \citep{Jensen2009}, discussed further below. This is in contrast with all but two of the other transitional disks in Chamaeleon I \citep{Manoj2011}.
On the other hand, we note that \cite{Kim2009} report the source to be accreting, based on an analysis of its \emph{U}-band flux. High H$\alpha$ and \emph{U}-band variability has been reported in objects of similar nature, such as T Cha \citep{Schisano2009} and DoAr 21 \citep{Jensen2009}, even on a timescale of days. In addition to this, flaring activity in T54 has been suggested in the X-ray study by \citet{Feigelson1993} to explain the significant increase in flux between the \textit{ROSAT} and the previous \textit{Einstein} observations \citep{Feigelson1989}. This again enhances the similarity with DoAr 21, for which \citet{Jensen2009} suggested that flares could account for the U-band excess and variability observed. In summary, most references point towards a non-accreting stellar environment, but no conclusions can be drawn without further spectral observations. T54 also presents polycyclic aromatic hydrocarbon (PAH) emission at 11.3 $\mu$m, which is undetected in the majority of disks around T Tauri stars \citep{Geers2006}. In this context, it is useful to compare T54 to the case of DoAr 21, since the latter is similarly an object of late spectral type with PAH features and a lack of silicates. Interestingly, \citet{Jensen2009} obtained narrow-band images centered on the 11.3 $\mu$m feature and found a partial arc or ring of dust at a projected distance of 134 AU from the source. Hence, even in the case of T54, it is possible that the PAH emission originates from an extended area and is not associated with a circumstellar disk. We also compare the X-ray properties of T54 to those of DoAr 21, and analyse how these can influence the presence of PAH emission. In the case of T54, \citet{Feigelson1993} report a considerable X-ray luminosity (log(L$_X$[erg~s$^{-1}$]) = 30.5), which is not as extreme as the value for DoAr 21 \citep[log(L$_X$) $\simeq$ 32,][]{Jensen2009}. \citet{Jensen2009} suggest that strong X-ray emission could be responsible for exciting the PAHs.
However, a more recent study by \citet{Siebenmorgen2010} shows that X-rays destroy PAH molecules efficiently at all distances. This would make the detection of PAHs around such strong X-ray emitters very unlikely, and X-ray excitation therefore cannot explain the 11.3 $\mu$m feature observed. Furthermore, T54 does not display a 10 $\mu$m silicate feature even though transitional disks commonly show it \citep{Manoj2011}. A broad study of disks with inner holes by \cite{Merin2010} presents only three objects lacking this feature, namely DoAr 21, SSTc2d J18285808+0017243 and Sz 84, which interestingly also have SEDs very similar to that of T54, and PAH features in two of them. These three objects, out of a sample of 35 disks, were all classified as probable extended sources and therefore dubious transitional disks. However, we cannot exclude the presence of a disk with a very clean inner hole, such as T25 in Cha I \citep{Kim2009} and IRAS 04125+2902 in Taurus \citep{Furlan2011}. Therefore, the lack of the 10 $\mu$m silicate feature also raises doubts about the disk nature of the source. \subsection{Concluding remarks} \label{4.3} If, however, the downscaled excess did originate from a disk, the latter would have an evolved nature. In fact, its fractional luminosity of 0.005 is consistent with the definition of a debris disk from \citet{Wyatt2008} (L$_{IR}$/L$_{\ast}$ $<$ $10^{-2}$), and our age estimate sets the system among the most evolutionarily advanced in the Cha I region. Using equation 3 in \cite{Wyatt2008}, we obtain a disk radius of 16.8 AU, which is smaller than the 43 AU binary separation \citep{Lafreniere2008} and would make T54 an interesting case of an evolved disk around a single binary component. However, no conclusions can be drawn without observations at higher spatial resolution, needed to better constrain the nature of this system. \begin{acknowledgements} We thank the referee for the useful comments and the suggested improvements to the structure of the paper.
This work has been possible thanks to the support from the ESA Trainee and ESAC Space Science Faculty and of the Herschel Science Centre. Support was also provided by the Enterprise Ireland Space Education Program, and by the Belgian Federal Science Policy Office via the PRODEX Programme of ESA. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA; on data obtained from the ESO Science Archive Facility under request number MATRA 35567; and on data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France and of NASA's Astrophysics Data System. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Accreting high mass X-ray binary (HMXB) pulsars are among the brightest X-ray sources in our Galaxy (Nagase et al. 1989). In these binaries, a neutron star and a massive ($\geq$ 10 $M_{\odot}$) main-sequence star orbit the common center of mass of the system in a wide and eccentric orbit (Tauris \& van den Heuvel 2006). The neutron star accretes matter from the companion star through the capture of stellar wind or Roche-lobe overflow. A majority of the HMXB systems are known to be Be/X-ray binaries (BeXRBs), in which the mass donor is a non-supergiant B or O spectral type star which shows emission lines in its optical/infrared spectrum (Reig 2011). Rapid rotation of the companion Be star in the BeXRB system expels its photospheric matter equatorially, forming a circumstellar disk around it. The continuously evolving, equatorial circumstellar disk is known to be the cause of the emission lines and infrared excess in the optical/infrared spectrum of the companion star in BeXRBs. Significant evolution of the circumstellar disk allows the neutron star to capture copious amounts of matter while passing through the periastron. This abrupt accretion of matter by the neutron star enhances its X-ray emission by several orders of magnitude, which lasts for several days to tens of days. These events are termed Type-I X-ray outbursts. Once the neutron star moves away from the periastron, accretion from the circumstellar disk is no longer possible and the X-ray source returns to quiescence. The long-term X-ray activity in BeXRBs is characterized by regular Type-I outbursts with peak luminosity of the order of $ L_{x} \le 10^{37}$~erg~s$^{-1}$ and rare, irregular giant (Type-II) X-ray outbursts with peak luminosity of $L_{x} \geq 10^{37}$~erg~s$^{-1}$.
The Type-I X-ray outbursts are of short duration, covering 20--30\% of the orbit, and coincide with the periastron passage of the neutron star, whereas the Type-II outbursts show no preferred orbital phase dependence but, once set in, they tend to cover a large fraction of the orbital period or even several orbital periods (see, e.g., Okazaki \& Negueruela 2001, Reig 2011, Jaisawal \& Naik 2016, Wilson-Hodge et al. 2018, Jaisawal et al. 2019). EXO~2030+375 is one of the well studied Be/X-ray binary pulsars associated with regular Type-I outbursts during almost every periastron passage. This transient accreting X-ray pulsar was discovered in 1985 with {\it EXOSAT} during a giant outburst (Parmar et al. 1989) with $\sim$42~s pulsations. The transient behaviour of this pulsar has been traced since its discovery, when its initial 1-20 keV outburst luminosity of 1.0$\times$10$^{38}~d_{5}^{2}$~erg~s$^{-1}$ on 1985 May 18 declined by a factor of $\ge$2600 within 100 days of the outburst. The associated optical counterpart of EXO~2030+375 is a highly reddened B0 Ve star (Motch \& Janot-Pacheco 1987) showing infrared excess and H$\alpha$ in emission (Coe et al. 1988). Using the relationship between extinction and distance of sources in the Galactic plane, Wilson et al. (2002) estimated the distance of EXO~2030+375~ to be 7.1 kpc. The regular Type-I X-ray outbursts of EXO~2030+375, occurring at almost every periastron passage of its $\sim$46 day orbit (Wilson et al. 2008), have been extensively monitored with the X-ray instruments onboard the {\it RXTE}, {\it INTEGRAL}, {\it XMM-Newton}, {\it Suzaku} and {\it Swift/BAT} observatories to understand the characteristic properties of the pulsar (Wilson et al. 2002; Naik et al. 2013; Naik \& Jaisawal 2015, Ferrigno et al. 2016; Epili et al. 2017 and references therein). In June 2006, EXO 2030+375 was caught for the second time in a giant (Type-II) X-ray outburst with an initial flux of 180 mCrab.
This surpassed the previous peak flux of about 50~mCrab observed during the entire life of the {\em RXTE}/ASM mission (Corbet \& Levine 2006). The 2006 Type-II outburst was also followed by {\em Swift}/BAT, which reported that the peak flux steadily increased to 750 mCrab (Krimm et al. 2006). A spin-up trend was observed in the pulsar during the giant X-ray outbursts in 1985 (Parmar et al. 1989) and 2006 (Wilson, Finger \& Camero-Arranz 2008), whereas spin-down episodes have been observed during low-luminosity outbursts in 1994-2002 (Wilson et al. 2002; Wilson, Fabregat \& Coburn 2005) and during faint outbursts after March 2016 (Kretschmar et al. 2016). The phase-averaged spectra of EXO~2030+375 during normal and giant outbursts prior to the 2006 giant outburst were described with various phenomenological and, in some cases, physical models, along with an iron emission line at 6.4~keV and interstellar absorption (Epili et al. 2017 and references therein). Apart from the continuum spectrum, several interesting features have also been observed in the pulsar spectrum. {\it Suzaku} observations of the pulsar during less intense Type-I outbursts in 2007 and 2012 did not show any evidence of cyclotron absorption features in the X-ray spectrum of EXO~2030+375. However, the presence of additional matter locked at certain pulse phases of the pulsar was reported and interpreted as the cause of several prominent absorption dips in the pulse profiles (Naik et al. 2013; Naik \& Jaisawal 2015). During the brighter Type-I outburst in 2007, Naik et al. (2013) detected several narrow emission lines (i.e., Si~{\sc XII}, Si~{\sc XIV}, S~{\sc XV}) for the first time, along with Fe~K$_{\alpha}$ and Fe~{\sc XVI}, in the X-ray spectrum. \begin{table*}[bt!]
\tabularfont \centering \caption{Log of observations of EXO~2030+375~ with {\em AstroSat}, \textit{NuSTAR}\xspace and \textit{Swift}\xspace/XRT.} \begin{tabular}{ccllcclc} \hline \hline &Observation ID &\multicolumn{2}{c}{Start of Observation} &\multicolumn{2}{c}{Exposure (in ks)} &Spin period &Count \\ & &Date &MJD &LAXPC &SXT & (s) &Rate $^b$\\ \hline {\underline {\em AstroSat}} \\ Obs-1 &G06\_089T01\_9000000746 &23 October 2016 &57684.99 &48.5 &12.1 &41.2895(7) &32\\ Obs-2 &G08\_081T01\_9000002144 &6 June 2018 &58275.53 &43.6 &24.1 &41.272(9) &24\\ Obs-3 &G08\_081T01\_9000002178 &19 June 2018 &58288.88 &46.4 &23 &41.30(1) &11\\ Obs-4 &G08\_081T01\_9000002350 &9 September 2018 &58370.3 &46.5 &22.8 &41.2747(8) &65\\ Obs-5 &T03\_244T01\_9000003912 &2 October 2020 &59124.57 &95 &8.9 &41.306(3) &27\\ \hline \textit{NuSTAR}\xspace &90201029002 &25 July 2016 &57594.36 &\multicolumn{2}{c}{56.7} &41.287054$^a$ &---\\ \textit{Swift}\xspace &00030799022 &25 July 2016 &57594.85 &\multicolumn{2}{c}{1} &--- &---\\ \hline \hline \end{tabular} \label{log} \tablenotes{$^a$: from F{\"u}rst et al. (2017). $^b$: Average source count rate (in counts s$^{-1}$) per LAXPC unit, in the 3-80 keV energy range. } \end{table*} \begin{figure}[bt!] \centering \includegraphics[width=0.5\textwidth, angle=0]{fig1.pdf} \caption{MAXI (2-20 keV, blue data points) and \textit{Swift}\xspace/BAT (15-50 keV, shaded) long-term monitoring light curves of EXO~2030+375~, ranging from (a) 21 June 2016 (MJD 57560) to 27 November 2016 (MJD 57719), (b) 12 April 2018 (MJD 58220) to 14 October 2018 (MJD 58405), and (c) 20 July 2020 (MJD 59050) to 13 October 2020 (MJD 59135), are shown in the top, middle, and bottom panels, respectively. Arrow marks in the panels represent the epochs of the \textit{AstroSat}\xspace and \textit{NuSTAR}\xspace observations of the pulsar.
} \label{maxi-bat} \end{figure} A detailed and comprehensive study of EXO~2030+375 was carried out by using extensive {\it RXTE} pointed observations during many Type-I outbursts and the 2006 Type-II outburst, from 1995 till 2011 (Epili et al. 2017). Timing and spectral studies of the pulsar were carried out over a 3--30 keV luminosity range from 3.8$\times$10$^{36}$ to 2.6$\times$10$^{38}$ erg s$^{-1}$, covered during the entire duration of the {\it RXTE} campaign. Timing studies of more than 600 {\it RXTE} pointings revealed the evolution of the pulse profiles of the pulsar with luminosity: a main peak and a minor peak at low luminosity evolved into a two-peaked profile along with minor dips at high luminosity. This study revealed that the pulse profiles of the pulsar at a particular luminosity were identical irrespective of the type of X-ray outburst, indicating that the emission geometry depends mainly on the accretion rate. Since its discovery in 1985, the pulsar had been showing regular X-ray outbursts for about 25 years. Since early 2015, however, the Type-I outbursts appeared to be of decreasing intensity and eventually vanished from the light curve towards the end of 2015 or early 2016 (F{\"u}rst et al. 2016). The Type-I X-ray outburst activity commenced again in early 2016 and is still continuing, though with much fainter peak luminosity ($\le$10$^{36}$ erg s$^{-1}$) than the usual ones. F{\"u}rst et al. (2017) reported the detection of pulsations at a minimum luminosity of 6.8$\times$10$^{35}$ erg s$^{-1}$ in the 3-78 keV range, considered to be the lowest luminosity at which X-ray pulsations have been detected from the pulsar. Though the pulsar was observed with \textit{Swift}\xspace/XRT at a fainter phase, the data quality was not good enough for a pulsation search.
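For context on the propeller question considered next, a commonly quoted form of the limiting luminosity below which centrifugal inhibition of accretion sets in (this expression is our addition for illustration, not from this paper; $k \simeq 0.5$ is usually assumed for disk accretion) is

```latex
\begin{equation}
  L_{\rm prop} \simeq 4\times10^{37}\, k^{7/2}\, B_{12}^{2}\, P^{-7/3}\,
  M_{1.4}^{-2/3}\, R_{6}^{5} \quad {\rm erg~s^{-1}},
\end{equation}
```

where $B_{12}$ is the surface magnetic field in units of $10^{12}$ G, $P$ the spin period in seconds, and $M_{1.4}$ and $R_{6}$ the neutron star mass and radius in units of 1.4 M$_{\odot}$ and $10^{6}$ cm. For a slow rotator like EXO 2030+375 ($P \approx 41$ s), the steep $P^{-7/3}$ dependence makes $L_{\rm prop}$ very low, so the persistence of pulsations at faint luminosities mainly constrains $B_{12}$ from above.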
As the pulsar is still showing Type-I X-ray outbursts with fainter peak luminosity, it is interesting to carry out timing and spectral studies with \textit{AstroSat}\xspace to explore whether the pulsar has gone into the propeller regime or is still undergoing accretion. Detection of pulsations in the light curve at a lower luminosity compared to that during the earlier \textit{Swift}\xspace/XRT observation (F{\"u}rst et al. 2017) would rule out the onset of the propeller regime. Further, detection of pulsations at a limiting luminosity may allow us to estimate the magnetic field of the pulsar. The \textit{AstroSat}\xspace observations at lower luminosity, therefore, are important to investigate the above properties of the pulsar. In this paper, we investigate the pulsation activity, the shape of the pulse profiles, and the spectral properties of the pulsar at a significantly lower luminosity level using five epochs of \textit{AstroSat}\xspace observations. For comparison, data from the \textit{NuSTAR}\xspace observation of the pulsar on 25 July 2016, reported in F{\"u}rst et al. (2017), were also used in the present work. The observations of the pulsar and the data reduction procedures are described in Section~2, and the results obtained from the timing and spectral analyses are presented in Sections~3 and 4, respectively. The implications of our results are discussed in Section~5. \begin{figure}[t!] \centering \includegraphics[height=2.6in, width=3.2in, angle=0]{Spin-EXO.pdf} \caption{Spin period evolution of the pulsar with luminosity during the \textit{AstroSat}\xspace observations. L$_{36}$ represents the 3-30 keV unabsorbed luminosity in units of 10$^{36}$~erg~s$^{-1}$.} \label{spin-period} \end{figure} \begin{figure*} \centering \includegraphics[width=0.23\textwidth,angle=-90]{fig2.pdf} \caption{ Pulse profiles of EXO~2030+375~ in the 0.3-7 keV range with the SXT instrument are shown for all five \textit{AstroSat}\xspace observations (left to right).
These profiles are obtained by folding the light curves at the respective pulse periods determined from the LAXPC data. Two pulses are shown for clarity.} \label{sxt-profile} \end{figure*} \begin{figure}[bt!] \centering \includegraphics[height=4.8in, width=3.2in, angle=0]{fig3.pdf} \caption{ Pulse profiles of EXO~2030+375~ in the 3-80 keV range are shown for all five \textit{AstroSat}\xspace observations (top to bottom). L$_{\rm 36}$ denotes the 0.5-30 keV unabsorbed luminosity of the pulsar in units of 10$^{\rm 36}$~erg~s$^{-1}$ at a distance of 7.1~kpc. Two pulses are shown for clarity.} \label{lxp-profile} \end{figure} \begin{figure*}[bt!] \centering \includegraphics[width=.5\textwidth,angle=-90]{fig4-obs1-pp.pdf}\quad \includegraphics[width=.5\textwidth, angle=-90]{fig4-obs2-pp.pdf}\quad \includegraphics[width=.5\textwidth, angle=-90]{fig4-obs3-pp.pdf}\\ \medskip \includegraphics[width=.5\textwidth, angle=-90]{fig4-obs4-pp.pdf}\quad \includegraphics[width=.5\textwidth, angle=-90]{fig4-obs5-pp.pdf} \caption{Energy-resolved pulse profiles of EXO~2030+375~ obtained by folding the energy-resolved light curves from the LAXPC instrument(s) onboard \textit{AstroSat}\xspace at the respective estimated spin period(s). Two pulses are shown in each panel for clarity. The error bars in the figure represent 1$\sigma$ uncertainties.} \label{resolved-profiles} \end{figure*} \begin{figure}[t!] \centering \includegraphics[height=3.3in, width=2.5in, angle=-90]{fig5-pf.pdf} \caption{Pulse fraction variation of EXO~2030+375~ with energy, obtained from the pulse profiles in multiple energy bands from the five \textit{AstroSat}\xspace observations.} \label{pf} \end{figure} \begin{figure*}[bt!]
\begin{center}$ \begin{array}{cccccc} \includegraphics[width=0.35\textwidth, angle=-90]{fig6-spec1.pdf} & \includegraphics[width=0.35\textwidth, angle=-90]{fig6-spec2.pdf} \\ \includegraphics[width=0.35\textwidth, angle=-90]{fig6-spec3.pdf} & \includegraphics[width=0.35\textwidth, angle=-90]{fig6-spec4.pdf} \\ \includegraphics[width=0.35\textwidth, angle=-90]{fig6-spec5.pdf} & \includegraphics[width=0.35\textwidth, angle=-90]{fig6-nustar-xrt.pdf} \\ \end{array}$ \end{center} \caption{Best-fitting energy spectra obtained from the first, second, third, fourth and fifth \textit{AstroSat}\xspace observations of EXO~2030+375. Broadband energy spectra from the \textit{NuSTAR}\xspace and {\em Swift}/XRT data of July 2016 are also shown.} \label{spec} \end{figure*} \section{Observations and Data Reduction} \subsection{\textit{AstroSat}\xspace} The first Indian multi-wavelength astronomical satellite, \textit{AstroSat}\xspace, was launched by the Indian Space Research Organization on 28 September 2015 (Agrawal 2006, Singh et al. 2014). The observatory is sensitive to photons from the optical to hard X-ray ranges through five instruments: the Ultraviolet Imaging Telescope (UVIT; Tandon et al. 2017), the Soft X-ray Telescope (SXT; Singh et al. 2017), the Large Area X-ray Proportional Counters (LAXPCs; Agrawal et al. 2017, Antia et al. 2017), the Cadmium Zinc Telluride Imager (CZTI; Rao et al. 2017), and a Scanning Sky Monitor (SSM; Ramadevi et al. 2018). In the present study, five epochs of \textit{AstroSat}\xspace observations of EXO~2030+375~ with the SXT and LAXPC instruments are used. Details of the observations are summarised in Table 1. As the source was very faint during all five epochs, it was not detected with the CZTI. The UVIT was not operational during these observations. \textit{AstroSat}\xspace caught the source at different phases of the regular Type~I X-ray outbursts.
The panels of Figure~\ref{maxi-bat} show the MAXI (Monitor of All-sky X-ray Image, Matsuoka et al. 2009) and \textit{Swift}\xspace/BAT (Burst Alert Telescope, Krimm et al. 2013) monitoring light curves of the pulsar covering the epochs of the \textit{AstroSat}\xspace observations. The first, fourth and fifth \textit{AstroSat}\xspace observations were carried out during the declining phase of Type-I X-ray outbursts, at a source intensity of 15-40 mCrab in the BAT band. However, during the second observation, monitoring data from MAXI or {\it Swift}/BAT were not available. An extremely low level of X-ray intensity, $\sim$10 mCrab in the 15-50 keV range, was estimated during the third \textit{AstroSat}\xspace observation. A log of these \textit{AstroSat}\xspace pointings of EXO~2030+375~ is given in Table~1. The SXT is a soft X-ray focusing telescope onboard \textit{AstroSat}\xspace. It consists of shells of conical mirrors that focus soft X-ray photons in the 0.3--8~keV energy range onto a CCD detector. The field of view of the SXT is 40 arcmin. The effective area of the telescope is 90~cm$^2$ at 1.5 keV. The energy resolution of the detector is 90 eV at 1.5 keV and 136~eV at 5.9~keV. The source was observed with the SXT in the photon counting mode, yielding a time resolution of 2.4~s. We followed the standard analysis procedure for the SXT data reduction as suggested by the \textit{AstroSat}\xspace Science Support Cell (ASSC\footnote{\url{http://astrosat-ssc.iucaa.in/}}). The source spectrum was extracted from an 8~arcmin circular region centered at the source coordinates on the SXT chip using the {\tt XSELECT} package. The background spectrum was extracted from a blank sky region on the chip. The LAXPC is a proportional counter detector sensitive to X-ray photons in the 3--80 keV energy range. There are three identical detector units onboard \textit{AstroSat}\xspace with an effective area of about 8000 cm$^2$ at 15~keV.
The time and energy resolutions of these units are 10~$\mu$s and 12\% at 22~keV, respectively. Standard data analysis routines ({\tt LAXPCsoftware}) are used to obtain the source light curves and spectral products from the event-mode data. We have used the SXT and LAXPC data in our timing study. Depending on the quality of the LAXPC data and the instrument gain stability, we have considered events from single or combined LAXPC units. For the timing studies, combined data from LAXPC-10, 20 \& 30 are used for Obs-1, while data from LAXPC-20 only are considered for Obs-2, Obs-3, and Obs-5. The events from LAXPC-10 \& 20 are used for the timing studies of Obs-4. Background products corresponding to each observation are accumulated from the same data by analysing the Earth occultation periods. A systematic uncertainty of 2\% is also added to the LAXPC spectra. \subsection{\textit{NuSTAR}\xspace and \textit{Swift}\xspace/XRT} In the present study, we also used the \textit{NuSTAR}\xspace (Harrison et al. 2013) and \textit{Swift}\xspace/XRT (X-Ray Telescope; Burrows et al. 2005) observations of 25 July 2016, carried out at the lowest luminosity of EXO~2030+375~ reported to date (F{\"u}rst et al. 2017), to compare with the results obtained from the \textit{AstroSat}\xspace observations. For the \textit{NuSTAR}\xspace observation, we used the {\tt NuSTARDAS} 1.6.0 software in {\tt HEASoft} version 6.24. Unfiltered events from the FPMA and FPMB were reprocessed by using the {\it nupipeline} routine with CALDB version 20191219. Source products were then extracted by selecting a circular region of 120~arcsec radius centered on the source coordinates, using the {\it nuproducts} task. Background products were also accumulated in a similar manner by selecting a source-free region. Data from the \textit{Swift}\xspace/XRT observation in photon counting mode, with an effective exposure of 1 ks, are also used.
We obtained the XRT products by using the online standard tool provided by the UK Swift Science Data Centre\footnote{\url{http://www.swift.ac.uk/user_objects/}} (Evans et al. 2009). \section{Timing Analysis} We extracted source and background light curves from the SXT and LAXPC event data at binning times of 2.4~s and 0.1~s, respectively. After subtracting the background, X-ray pulsations were searched for in the barycenter-corrected light curves of EXO~2030+375~ from all five observations. We applied the chi-square maximization technique using the {\tt efsearch} task of the {\tt FTOOLS} package (Leahy 1987). The spin period of the pulsar is estimated to be 41.2895(7) s, 41.272(9) s, 41.30(1) s, 41.2747(8) s, and 41.306(3) s from the first, second, third, fourth, and fifth \textit{AstroSat}\xspace/LAXPC observations, respectively. The spin period and its error are also estimated by using the Lomb-Scargle and Clean techniques in the publicly available {\tt PERIOD} package (Currie et al. 2014). This package has been used for period estimation in several other binary X-ray pulsars, e.g., 4U~2206+54 (Torrejón et al. 2018), 2S~1417-624 (Gupta et al. 2019), and Swift~J0243.6+6124 (Beri et al. 2021). The results obtained from these methods are found to agree with the values quoted above. The evolution of the pulse period with luminosity over the \textit{AstroSat}\xspace observations is presented in Figure~\ref{spin-period}. As the source was observed at a low luminosity level, a few measurements have large errors on the spin period. A marginal spin-up with increasing luminosity can be seen in the figure, though the data are not adequate to support any significant claim. The light curves in the 0.3-7 keV and 3-80 keV ranges from the SXT and LAXPC data from each epoch of observations are folded with the corresponding estimated pulse period to obtain the pulse profiles of the pulsar.
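The chi-square epoch-folding search described above (maximizing the deviation of the folded profile from a constant model over trial periods, as in {\tt efsearch}) can be sketched as follows (a minimal numpy illustration on synthetic data; the event sampling, noise level and 41.29 s input period are made up for the demonstration):

```python
import numpy as np

def epoch_fold_chi2(times, rates, period, nbins=16):
    """Fold a light curve at a trial period and return the chi-square of
    the folded profile against a constant (unpulsed) model."""
    phase = (times / period) % 1.0
    bins = (phase * nbins).astype(int)
    chi2 = 0.0
    for b in range(nbins):
        sel = rates[bins == b]
        err = sel.std(ddof=1) / np.sqrt(sel.size)   # error on the bin mean
        chi2 += (sel.mean() - rates.mean())**2 / err**2
    return chi2

# Synthetic light curve: 41.29 s sinusoidal pulsations plus Gaussian noise
rng = np.random.default_rng(0)
t = np.arange(0.0, 20000.0, 0.1)
rate = 30.0 + 10.0 * np.sin(2 * np.pi * t / 41.29) + rng.normal(0.0, 2.0, t.size)

trials = np.arange(41.0, 41.6, 0.01)       # trial periods in seconds
chi2 = np.array([epoch_fold_chi2(t, rate, p) for p in trials])
best = trials[np.argmax(chi2)]             # chi-square peaks at the true period
```

The period uncertainty is then estimated from the width of the chi-square peak (or, as in the text, cross-checked with Lomb-Scargle and Clean periodograms).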
The pulse profiles obtained from the SXT and LAXPC data for all five \textit{AstroSat}\xspace observations are shown in Figures~\ref{sxt-profile} \& ~\ref{lxp-profile}, respectively. Phases of the pulse profiles are adjusted manually to align the minima at phase zero. The profiles obtained from the SXT data (Figure~\ref{sxt-profile}) appear single peaked. This is possibly due to the fact that the soft X-ray photons are largely affected by absorption due to material along the line of sight and the low source count rate in the SXT. The profiles from the LAXPC data, however, are found to be complex due to the presence of multiple structures at various pulse phases of the pulsar during the first, third, fourth and fifth observations (see Figure~\ref{lxp-profile}). Sharp dip-like features were detected in the 0.1--0.2 and 0.60--0.85 phase ranges during these observations. The pulse profile of the pulsar from the second epoch of observations, however, appears relatively simple. To investigate the observed features in the LAXPC profiles with energy, barycenter corrected light curves in the 3-10, 10-25, 25-50, and 50-80~keV ranges are extracted from the LAXPC data from all epochs of observations, folded with the respective spin period, and shown in Figure~\ref{resolved-profiles}. The energy resolved pulse profiles are found to be strongly energy dependent. Dip-like features are seen up to higher energies in the profiles from all observations. The observed dips are evident up to $\sim$50~keV, especially during the first observation, whereas during the second, fourth and fifth observations, the features are present up to $\sim$25~keV. However, during the third observation, the dips are present only up to $\sim$10 keV (Figure~\ref{resolved-profiles}). We checked the significance of pulsations in the hard X-ray band by taking the ratio between the peak count rate and the standard deviation of the low or minimum intensity interval observed in the pulse profile.
It is found that the significance of detection of pulsations in the 50-80 keV range is more than 15$\sigma$ during all observations except the third. We calculated the pulse fraction of the pulsar using the pulse profiles in various energy bands and present it in Figure~\ref{pf}. This is done to determine the nature of the pulsating component. In our study, we define the pulse fraction as the ratio between the difference and sum of the maximum and minimum intensities observed in the pulse profile of the pulsar. For all the observations, we found that the pulse fraction decreases with energy. A maximum pulse fraction of $\sim$60\% is detected in the profiles below 20 keV during the first observation. A relatively lower value is observed in the rest of the data sets. \begin{table*} \tabularfont \centering \caption{Best-fitting spectral parameters (with 90\% errors) of EXO~2030+375~ during \textit{AstroSat}\xspace and {\em NuSTAR}+XRT observations.} \begin{tabular}{lccccccc} \hline \hline Parameters &Obs-1 &Obs-2 &Obs-3 &Obs-4 &Obs-5 &{\em NuSTAR}+XRT\\ \hline N$_{\rm H}$ (10$^{22}$~cm$^{-2}$) &4.5$\pm$0.3 &4.7$\pm$0.3 &3.5$\pm$0.5 &4.6$\pm$0.3 &4$\pm$0.4 &5.8$\pm$0.6\\ Photon index ($\Gamma$) &1.61$\pm$0.07 &1.92$\pm$0.05 &2.1$\pm$0.2 &1.54$\pm$0.1 &1.85$\pm$0.05 &1.64$\pm$0.05 \\ Norm (10$^{-2}$) &3.7$\pm$0.4 &3.4$\pm$0.3 &1.3$\pm$0.5 &6.5$\pm$1 &2.9$\pm$0.3 &2.5$\pm$0.2 \\ E$_{\rm fold}$ (keV) &7.2$\pm$0.7 &-- &-- &6$\pm$2 &-- &6.5$\pm$0.4\\ E$_{\rm cut}$ (keV) &36$^{+8}_{-6}$ &-- &-- &32$^{+11}_{-8}$ &-- &27$\pm$2 \\ Flux$^a$ (3-30 keV) &2.8$\pm$0.1 &1.5$\pm$0.1 &0.42$\pm$0.03 &5$\pm$0.1 &1.51$\pm$0.1 &1.67$\pm$0.1\\ Flux$^a$ (0.5-30 keV) &3.1$\pm$0.1 &1.7$\pm$0.1 &0.50$\pm$0.05 &5.5$\pm$0.1 &1.71$\pm$0.1 &2$\pm$0.1\\ Luminosity$^b$ (10$^{36}$~erg~s$^{-1}$) &1.9 &1.0 &0.25 &3.3 &1.0 &1.21\\ $\chi^2_\nu$ ($\nu$) &1.06 (363) &1.28 (258) &0.91 (105) &1.18 (454) &1.48 (100) &0.96 (808) \\ \hline \hline \end{tabular} \label{table-spec} \tablenotes{$^a$: unabsorbed
flux in 10$^{-10}$~erg~s$^{-1}$~cm$^{-2}$; $^b$: 0.5-30 keV unabsorbed luminosity at a distance of 7.1 kpc.\\ Note: By fitting the \textit{NuSTAR}\xspace and XRT data, we estimated the unabsorbed flux of EXO~2030+375~ in the 3-10 and 0.5-79 keV ranges to be 8.8$\times$10$^{-11}$ and 2.3$\times$10$^{-10}$~erg~s$^{-1}$~cm$^{-2}$, respectively. This is for comparison with the values quoted in F{\"u}rst et al. (2017).\\ } \end{table*} \begin{figure}[t!] \centering \includegraphics[height=3.3in, width=2.7in, angle=-90]{fig7.pdf} \caption{Variation of the power-law photon index with 3-30 keV luminosity is shown for the {\em AstroSat} and {\em NuSTAR} observations of EXO~2030+375 with solid bullets and star symbols (red points), respectively, along with the corresponding data from the {\it RXTE} observations (black points) as shown in Figure~6 of Epili et al. (2017). The power-law photon index obtained from the present study follows the anti-correlated pattern with luminosity in the sub-critical regime of the pulsar. L$_{\rm 37}$ denotes the 3-30~keV unabsorbed luminosity of the pulsar in 10$^{\rm 37}$~erg~s$^{-1}$ at a distance of 7.1~kpc.} \label{lum-ind} \end{figure} \section{Spectral Studies} The spectral properties of EXO~2030+375~ are studied using data from all five \textit{AstroSat}\xspace observations. Using the source and background spectra extracted from the SXT and LAXPC data (as described above) and response files provided by the instrument teams, we carried out spectral fitting of the data in the 0.5--7 keV range from SXT and 3.5--25 keV from LAXPC by using the {\tt XSPEC} package version 12.10.0 (Arnaud 1996). The LAXPC data were limited to 25 keV in our spectral fitting because of background uncertainties at high energies. Various standard models, such as power-law, cutoff power-law, and high energy cutoff power-law models, are attempted to fit the 0.5-25 keV spectrum, along with a component for photo-electric absorption ({\tt TBabs}; Wilms, Allen \& McCray 2000).
We found that a cutoff based model is necessary to describe the spectra obtained from the first and fourth observations, when the pulsar was relatively brighter (Figure~\ref{maxi-bat}). However, a simple absorbed power-law model can describe the spectra from the second, third and fifth observations satisfactorily. These models provided a goodness of fit per degree of freedom of $\chi^2_\nu$=$\chi^2/\nu$ $\approx$1 in all cases. In place of a power-law component, we also tried to fit the spectra from the second, third and fifth observations with a thermal blackbody component. This yielded a poor fit with a $\chi^2/\nu$ of more than 5. We do not detect any signature of cyclotron resonance scattering feature(s) (Jaisawal \& Naik 2017; Staubert et al. 2019) in the 0.5-25 keV spectral range. The spectra obtained from the SXT and LAXPC data also do not show any iron emission line(s) in the 6-7 keV range. Spectral parameters estimated from these fittings are given in Table~2. In our fitting, the relative instrument normalization for SXT was found to be in the range of 0.65-0.80 with respect to LAXPC. We fitted the \textit{NuSTAR}\xspace data from the FPMA and FPMB detectors along with the \textit{Swift}\xspace/XRT data in the 1-79 keV energy range with a high energy cutoff power-law model along with interstellar absorption. This model fitted the spectrum well. The spectral parameters such as column density, power-law photon index, cutoff and folding energies obtained from our fitting are found to be consistent with the values reported in Table~1 of F{\"u}rst et al. (2017). The energy spectra corresponding to each observation, along with the best-fit model and corresponding residuals, are shown in Figure~\ref{spec}. The {\tt cflux} convolution model is used for flux estimation in our study. We would like to mention here that the flux and luminosity quoted in Table~2 are estimated in the 0.5-30 and 3-30 keV ranges, though 0.5-25 keV data were used in the spectral fitting with \textit{AstroSat}\xspace.
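The model and the band extrapolation can be sketched numerically. The sketch below uses the Obs-1 best-fit values from Table 2 for illustration only; the energy grid and trapezoidal integration are our own choices, so it will not exactly reproduce the {\tt cflux} values, which account for absorption and the full instrument response.

```python
import numpy as np

KEV_TO_ERG = 1.602e-9  # 1 keV expressed in erg

def highecut_powerlaw(E, norm, gamma, E_cut, E_fold):
    """Photon spectrum in photons/cm^2/s/keV: a power law with an
    exponential rolloff above the cutoff energy (high energy cutoff model)."""
    rolloff = np.where(E > E_cut, np.exp(-(E - E_cut) / E_fold), 1.0)
    return norm * E ** (-gamma) * rolloff

def energy_flux(E_lo, E_hi, pars, n=5000):
    """Unabsorbed energy flux in erg/cm^2/s via trapezoidal integration."""
    E = np.linspace(E_lo, E_hi, n)
    y = E * highecut_powerlaw(E, **pars)
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(E)) * KEV_TO_ERG

# Obs-1 best-fit parameters from Table 2 (illustrative use).
pars = dict(norm=3.7e-2, gamma=1.61, E_cut=36.0, E_fold=7.2)
f_fit = energy_flux(3.0, 25.0, pars)   # band actually fitted with LAXPC
f_ext = energy_flux(3.0, 30.0, pars)   # extrapolated up to 30 keV
```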
The flux and luminosity in the 3-30 keV range were estimated for comparison with the earlier values reported in the literature, by extrapolating the best-fit model up to 30 keV. We attempted to find any correlation between the power-law photon index and luminosity of the pulsar during the \textit{AstroSat}\xspace observations. For this, we plotted the photon index against the observed 3--30 keV luminosity of the pulsar from the \textit{AstroSat}\xspace and combined \textit{NuSTAR}\xspace and \textit{Swift}\xspace/XRT observations in Figure~\ref{lum-ind}. Along with this, the corresponding data from the \textit{RXTE}\xspace observations of EXO~2030+375, as reported in the top left panel of Figure~6 of Epili et al. (2017), are also shown for comparison. The figure shows that the photon index and luminosity are anti-correlated during all five epochs of \textit{AstroSat}\xspace and the combined \textit{NuSTAR}\xspace and XRT observations. The results obtained from the present work extend the observed spectral behaviour of EXO~2030+375~ in the sub-critical regime to much lower luminosity. \section{Discussion} Be/X-ray binary pulsars are expected to show X-ray enhancements, termed Type-I X-ray outbursts, at the periastron passage of the neutron star (Okazaki \& Negueruela 2001, Reig 2011). However, in many cases such enhancements are not observed, which has been interpreted as due to the lack of significant evolution of the equatorial circumstellar disk around the Be star companion. An alternative interpretation of the lack of X-ray activity (Type-I or Type-II outbursts) is related to the Be-disk dynamics due to the Kozai-Lidov effect (Laplace et al. 2017). EXO~2030+375~ is unique in the sense that the pulsar shows a Type-I X-ray outburst at almost every periastron passage of the binary.
The pulsar had been observed and studied extensively with the {\it Rossi X-ray Timing Explorer (RXTE)} over 606 epochs, spanning 15 years, during Type-I and Type-II (giant) X-ray outbursts (Epili et al. 2017), though there are many pointed observations with other observatories used to study the characteristics of the source. Long term monitoring data from {\it RXTE}/ASM, {\it Swift}/BAT and MAXI/GSC show regular Type~I X-ray outbursts at the periastron passage of the pulsar. However, it has been noticed that the intensity at the peak of the Type-I X-ray outbursts has been declining for the last several years (Naik \& Jaisawal 2015, F{\"u}rst et al. 2016, Laplace et al. 2017), along with an extended period of low X-ray activity without any Type-I outbursts from MJD 57200 (27 June 2015) to MJD 57600 (31 July 2016) (Kretschmar et al. 2016). Following the extended low state, the transient activity resumed with the appearance of outbursts, though with lower peak intensities to date. \subsection{Detection of X-ray pulsations at the lowest observed luminosity} The {\it NuSTAR} and \textit{Swift}\xspace/XRT observations on 25 July 2016 were reported to have been carried out at the lowest luminosity of EXO~2030+375~ at which pulsations were detected in the light curves (F{\"u}rst et al. 2017). Though the pulsar was observed at an even lower luminosity of $\sim$10$^{34}$ erg s$^{-1}$ with the {\it Swift}/XRT in the 3--10 keV range, the poor data quality prevented a pulsation search (F{\"u}rst et al. 2017). On reanalysis of the \textit{NuSTAR}\xspace plus \textit{Swift}\xspace/XRT data, the 0.5-30 keV luminosity of the pulsar on 25 July 2016 was estimated to be 1.21$\times$10$^{36}$ erg s$^{-1}$ (Table~2).
On comparing the luminosities during the five \textit{AstroSat}\xspace observations with the \textit{NuSTAR}\xspace and \textit{Swift}\xspace/XRT observation, it is interesting to point out that the pulsar was caught at even lower luminosities during the second, third and fifth \textit{AstroSat}\xspace observations. Among these, the lowest luminosity of 2.5$\times$10$^{35}$~erg~s$^{-1}$ in the 0.5-30 keV range was estimated during the third epoch of \textit{AstroSat}\xspace observation. The LAXPC data from this observation showed a clear pulsation at 41.3~s in the light curve. Since the discovery of the source in 1985, the luminosity of 2.5$\times$10$^{35}$~erg~s$^{-1}$ in the 0.5--30 keV range, observed with \textit{AstroSat}\xspace/LAXPC on 19 June 2018, is the lowest luminosity of EXO~2030+375~ at which X-ray pulsations are detected in the light curves. In accretion powered X-ray pulsars, material is channeled from the disk to the magnetic poles. A decrease in the mass accretion rate decreases the ram pressure, eventually leading to an increase in the size of the magnetosphere (Illarionov \& Sunyaev 1975, Nagase et al. 1989). If the magnetosphere expands beyond the co-rotation radius, the centrifugal barrier prohibits the accreting material from falling onto the neutron star. This leads to the cessation of pulsations and is referred to as the propeller effect (Illarionov \& Sunyaev 1975). Though EXO~2030+375~ was detected at a lower luminosity level of $\sim$10$^{34}$~erg~s$^{-1}$ using the {\em Swift}/XRT data (F{\"u}rst et al. 2017), the non-detection of X-ray pulsations and the presence of a softer thermal component with a temperature of 1.22~keV suggest the neutron star surface to be the source of the observed emission, which occurs when the neutron star enters the propeller phase (see, e.g., Wijnands \& Degenaar 2016, Tsygankov et al. 2016, F{\"u}rst et al. 2017).
This allowed us to consider the luminosity of 2.5$\times$10$^{35}$~erg~s$^{-1}$ (third \textit{AstroSat}\xspace observation) as the lowest at which pulsations are seen. Assuming the above luminosity as the upper limit for the onset of the propeller effect, we can calculate the pulsar magnetic field as follows (Campana et al. 2002, F{\"u}rst et al. 2017) \begin{equation}\label{eq} L_{lim}=7.3 \times k^{7/2} P^{-7/3}~R_{6}^{5}~B_{12}^{2}~M_{1.4}^{-2/3} \times 10^{37} {\rm erg~s^{-1}} \end{equation} where $P$ is the spin period in s, $B_{12}$ is the magnetic field in units of 10$^{12}$~G, $R_6$ is the neutron star radius in units of 10$^{6}$~cm, and $M_{1.4}$ is the mass of the neutron star in units of 1.4 \ensuremath{M_{\odot}}\xspace. The factor $k$ is related to the accretion geometry, with $k$=0.5 and 1 in the cases of disk and spherical wind accretion, respectively. Using the above equation and assuming a disk accretion scenario for EXO~2030+375, we obtain a range of equatorial magnetic field of (3--15)$\times$10$^{12}$~G for minimum luminosities of $\approx$1$\times$10$^{34}$ and 2.5$\times$10$^{35}$~erg~s$^{-1}$, respectively. Based on the detection of a cyclotron line, the polar magnetic field of the neutron star had been tentatively estimated to be 1$\times$10$^{12}$~G (Wilson et al. 2008) and 5$\times$10$^{12}$~G (Klochkov et al. 2008). However, later studies did not confirm the cyclotron feature in a broad energy range (Naik et al. 2013, Naik \& Jaisawal 2015, F{\"u}rst et al. 2017). In the absence of a firm detection of a cyclotron line, we calculate the magnetic field by adopting standard neutron star parameters in Equation~\ref{eq} and find that EXO~2030+375~ hosts a highly magnetized neutron star with a field strength between (3--15)$\times$10$^{12}$~G.
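Equation~\ref{eq} can be inverted for $B_{12}$. A short sketch, using the standard parameters adopted in the text ($k$=0.5 for disk accretion, $R_6$=1, $M_{1.4}$=1, $P$=41.3 s), reproduces the quoted field range:

```python
def b12_from_propeller(L_lim, P, k=0.5, R6=1.0, M14=1.0):
    """Invert L_lim = 7.3e37 * k^(7/2) * P^(-7/3) * R6^5 * B12^2 * M14^(-2/3)
    (Campana et al. 2002) for the magnetic field B12, in units of 1e12 G."""
    coeff = 7.3e37 * k**3.5 * P**(-7.0 / 3.0) * R6**5 * M14**(-2.0 / 3.0)
    return (L_lim / coeff) ** 0.5

P_spin = 41.3  # s, spin period of EXO 2030+375
b_lo = b12_from_propeller(1.0e34, P_spin)   # Swift/XRT lowest detection
b_hi = b12_from_propeller(2.5e35, P_spin)   # lowest pulsed AstroSat luminosity
# b_lo ~ 3 and b_hi ~ 15, i.e. B ~ (3-15)e12 G, as quoted in the text
```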
\subsection{Pulse profiles and spectroscopy} Extensive studies of the \textit{RXTE}\xspace observations of EXO~2030+375~ revealed that the pulse profiles of the pulsar strongly depend on the luminosity or mass accretion rate (Epili et al. 2017). Irrespective of the type of X-ray outburst, whether regular Type-I or giant Type-II, or the phase of the outburst, the morphology of the pulse profiles remains the same at a given luminosity. In the present study, the pulse profiles of the pulsar during all five epochs of \textit{AstroSat}\xspace observations are characterised by the presence of narrow dip-like features, though the features are most prominent in the lowest luminosity phase on 19 June 2018. These features are commonly seen in Be/X-ray binary pulsars (see, e.g., Devasia et al. 2011, Jaisawal, Naik \& Epili 2016, Gupta et al. 2018, Jaisawal et al. 2018). At a luminosity of the order of 10$^{36}$~erg~s$^{-1}$, Ferrigno et al. (2016) and F{\"u}rst et al. (2017) detected a sharp absorption feature in the pulse profile of EXO~2030+375. This feature is interpreted as due to obscuration by the accretion column along the line of sight. This is supported by phase-resolved spectroscopy, which revealed a high column density and an effectively harder spectrum due to reprocessing of the emission. In our study, the low luminosity of the pulsar and the limited understanding of the background and spectral calibration of the instruments at high energies (Antia et al. 2017) prevented us from investigating the cause of the prominent dips in the pulse profiles through pulse-phase resolved spectroscopy. From the energy resolved pulse profiles (Figure~\ref{resolved-profiles}), clear pulsations up to $\sim$50 keV are seen during all five epochs of observations. The significance of pulsation can also be seen from the values of pulse fraction with energy (Figure~\ref{pf}). The fraction of photons contributing to the pulsation is found to decrease with energy as well as luminosity.
The broad-band energy spectrum of accretion powered X-ray pulsars originates from thermal and bulk Comptonization of soft X-ray photons from the thermal mound on the neutron star surface (Becker \& Wolff 2007). In spite of the complex processes taking place in the accretion column, the observed spectrum can be described by high energy cutoff power-law or exponential cutoff power-law models, along with components for emission lines and absorption due to the interstellar medium. We have studied five \textit{AstroSat}\xspace and {\em NuSTAR}+XRT observations between 2016 and 2020, after the renewed activity of EXO~2030+375. Spectral analysis of these observations revealed the dependence of the power-law photon index on luminosity. Extensive studies of the available {\it RXTE} observations of the pulsar established the relation between the power-law photon index and source luminosity (Epili et al. 2017). From that study, the photon indices are found to be distributed in three distinct regions depending on the 3-30 keV luminosity, suggesting a spectral transition from the sub-critical to the super-critical regime through the critical luminosity of (2-4)$\times$10$^{37}$~erg~s$^{-1}$ for EXO~2030+375, at which the photon index remains constant. The source spectrum became harder with luminosity in the sub-critical regime. A softening of the spectral emission was thereafter detected in the super-critical regime of the pulsar. As noted, the \textit{AstroSat}\xspace observations were carried out at lower luminosities compared to the \textit{RXTE}\xspace observations. In this study, we found that the power-law photon index is anti-correlated with the luminosity of the pulsar (Figure~\ref{lum-ind}) in the same manner as reported by Epili et al. (2017), extended to lower luminosities. This confirms that the spectral shape of the pulsar depends on the mass accretion rate.
\section{Conclusion} In this paper, we carried out timing and spectral studies of EXO~2030+375~ using five \textit{AstroSat}\xspace observations at various phases of its Type-I X-ray outbursts. The source luminosity was found to be as low as 2.5$\times$10$^{35}$~erg~s$^{-1}$ in the 0.5-30 keV range, at which clear pulsations are detected. This is the first time that pulsations have been detected from the pulsar at such a low luminosity level. Considering this as the limiting luminosity for the propeller regime, we calculated the magnetic field of the neutron star. We have also studied the pulse profiles of the pulsar. The pulse morphology is found to be complex due to the presence of multiple absorption-like features. The energy spectrum of EXO~2030+375~ can be described by a high energy cutoff power-law model during the brighter (first and fourth) \textit{AstroSat}\xspace observations. The power-law photon index shows an anti-correlation with the source luminosity, which is expected when the source is below the critical luminosity. \vspace{2em} \section*{Acknowledgements} We thank the anonymous reviewer for suggestions on the paper. This publication uses the data from the \textit{AstroSat}\xspace mission of the ISRO, archived at the Indian Space Science Data Centre. We thank members of the SXT and LAXPC instrument teams for their contribution to the development of the instruments and analysis software. The SXT and LAXPC Payload Operations Centers (POCs) at TIFR are acknowledged for verifying and releasing the data via the ISSDC data archive and providing the necessary software tools for data analyses. We also acknowledge the contributions of the \textit{AstroSat}\xspace project team at ISAC and IUCAA. This research has made use of data obtained through the HEASARC Online Service, provided by the NASA/GSFC, in support of NASA High Energy Astrophysics Programs.
This work used the NuSTAR Data Analysis Software ({\tt NuSTARDAS}) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA).
\section{Introduction} \label{sec:intro} \noindent Imagine looking through an open doorway. Most of the room on the other side is invisible. Nevertheless, we can estimate how the room \emph{likely} looks. The few visible features enable an informed guess about the height of the ceiling, the position of the walls, the lighting, etc. Given this limited information, we can then imagine several plausible realizations of the room on the other side. This 3D geometric reasoning and the ability to predict what the world will look like \emph{before} we move are critical to orient ourselves in a world with three spatial dimensions. Therefore, we address the problem of novel view synthesis \cite{DBLP:conf/siggraph/LevoyH96, DBLP:conf/siggraph/GortlerGSC96, DBLP:conf/siggraph/DebevecTM96} based on a single initial image and a desired change in viewpoint. In particular, we aim at specifically modeling \emph{large} camera transformations, \eg rotating the camera by $90\degree$ and looking at previously unseen scenery. As this is an underdetermined problem, we present a probabilistic generative model that learns the distribution of possible target images and synthesizes them at high fidelity. Solving this task has the potential to transform the passive experience of viewing images into an interactive, 3D exploration of the depicted scene. This necessitates an approach that both understands the geometry of the scene and, when rendering novel views of an input image, considers their semantic relationships to the visible content. \noindent \textbf{Interpolation vs. Extrapolation} Recently, impressive synthesis results have been obtained with geometry-focused approaches in the multi-view setting \cite{DBLP:conf/eccv/RieglerK20, DBLP:journals/corr/abs-2011-07233, DBLP:conf/eccv/MildenhallSTBRN20}, where not just a single but a large number of images or a video of a scene are available, such that the task is closer to view interpolation than to a synthesis of genuinely novel views.
In contrast, if only a single image is available, the synthesis of novel views is always an extrapolation task. Solving this task is appealing because it allows a 3D exploration of a scene starting from only a single picture. \noindent While existing approaches for single-view synthesis make small camera transformations, such as a rotation by a few degrees, possible, we aim at expanding the possible camera changes to include \emph{large} transformations. The latter necessitates a probabilistic framework: Especially when applying large transformations, the problem is underdetermined because there are many possible target images which are consistent with the source image and camera pose. This task cannot be solved with a reconstruction objective alone, as it will either lead to averaging, and hence blurry synthesis results, or, when combined with an adversarial objective, cause significant mode-dropping when modeling the target distribution. Thus, in order to remedy these issues, we propose to model this task with a powerful, autoregressive transformer model, trained to directly maximize the likelihood of the target data. \noindent \textbf{Explicit vs. Implicit Geometry} The success of transformers is often attributed to the fact that they enforce fewer inductive biases compared to convolutional neural networks (CNNs), which are biased towards local context. Since previous approaches for novel view synthesis relied mainly on CNNs, this locality-bias required them to explicitly model the overall geometric transformation, thereby enforcing yet another inductive bias regarding the three dimensional structure. In contrast, by modeling interactions between far-flung regions of source and target images, transformers have the potential to learn to represent the required geometric transformation implicitly, without requiring such hand engineered operations. This raises the question whether it is at all necessary to explicitly include such biases in a transformer model.
To address this question, we perform several experiments with varying degrees of inductive bias and find that our autoregressively trained transformer model is indeed capable of learning this transformation completely without built-in priors and can even learn to predict depth in an unsupervised fashion. \noindent \textbf{To summarize our contributions,} we \emph{(i)} propose to learn a probabilistic model for single view synthesis that properly takes into account the uncertainties inherent in the task and show that this leads to significant benefits over previous state-of-the-art approaches; see Fig.~\ref{fig:firstpagefigure}. We \emph{(ii)} analyze the need for explicit 3D inductive biases in transformer architectures and find that transformers make it obsolete to explicitly code 3D transformations into the model and instead can learn the required transformation implicitly themselves. We also \emph{(iii)} find that the benefits of providing them geometric information in the form of explicit \emph{depth maps} are relatively small, and investigate the ability to recover an explicit depth representation from the layers of a transformer which has learned to represent the geometric transformation implicitly and without any depth supervision. \section{Related Work} \noindent \textbf{Novel View Synthesis} We can identify three seminal works which illustrate different levels of reliance on geometry to synthesize novel views. \cite{DBLP:conf/siggraph/LevoyH96} describes an approach which requires no geometric model, but requires a large number of structured input views. \cite{DBLP:conf/siggraph/GortlerGSC96} describes a similar approach but shows that unstructured input views suffice if geometric information in the form of a coarse volumetric estimate is employed. \cite{DBLP:conf/siggraph/DebevecTM96} can work with a sparse set of views but requires an accurate photogrammetric model. 
Subsequent work also analyzed the commonalities and trade-offs of these approaches \cite{DBLP:conf/siggraph/BuehlerBMGC01}. Ideally, an approach could synthesize novel views from a single image without having to rely on accurate geometric models of the scene and early works on deep learning for novel view synthesis explored the possibility to directly predict novel views \cite{DBLP:conf/cvpr/DosovitskiySB15, DBLP:journals/pami/DosovitskiySTB17, DBLP:conf/nips/KulkarniWKT15, DBLP:conf/nips/YangRYL15, DBLP:conf/eccv/TatarchenkoDB16} or their appearance flows \cite{DBLP:conf/eccv/ZhouTSME16, DBLP:conf/cvpr/ParkYYCB17, DBLP:conf/eccv/SunHLZL18} with convolutional neural networks (CNNs). However, results of these methods were limited to simple or synthetic data and subsequent works combined geometric approaches with CNNs. \noindent Among these deep learning approaches that explicitly model geometry, we can distinguish between approaches relying on a proxy geometry to perform a warping into the target view, and approaches predicting a 3D representation that can subsequently be rendered in novel views. For the proxy geometry, \cite{DBLP:conf/cvpr/MeshryGKHPSM19} relies on point clouds obtained from structure from motion (SfM) \cite{DBLP:conf/iccv/AgarwalSSSS09, DBLP:conf/cvpr/SchonbergerF16} and multi-view stereo (MVS) \cite{DBLP:conf/eccv/SchonbergerZFP16, DBLP:journals/pami/FurukawaP10}. To perform the warping, \cite{DBLP:journals/corr/FlynnNPS15, DBLP:journals/tog/XuBSHSR19} use plane-sweep volumes, \cite{DBLP:journals/tog/KalantariWR16} estimates depth at novel views and \cite{DBLP:conf/iccv/ChoiGT0K19, DBLP:conf/eccv/XieGF16} a depth probability volume. \cite{DBLP:conf/eccv/RieglerK20, DBLP:journals/corr/abs-2011-07233} post-process MVS results to a global mesh and \cite{DBLP:journals/tog/HedmanPPFDB18} relies on per-view meshes \cite{DBLP:journals/tog/HedmanRDB16}. 
Other approaches learn 3D features per scene, which are associated with a point cloud \cite{DBLP:conf/eccv/AlievSKUL20} or UV maps \cite{DBLP:journals/tog/ThiesZN19}, and decoded to the target image using a CNN. However, all of these approaches rely on multi-view inputs to obtain an estimate for the proxy geometry. \noindent Approaches which predict 3D representations mainly utilize layered representations such as layered depth images (LDIs) \cite{DBLP:conf/siggraph/ShadeGHS98, DBLP:journals/tog/HedmanASK17, DBLP:journals/tog/HedmanK18}, multi-plane images (MPIs) \cite{DBLP:journals/ijcv/SzeliskiG99, DBLP:journals/tog/ZhouTFFS18, DBLP:conf/cvpr/SrinivasanTBRNS19, DBLP:conf/cvpr/FlynnBDDFOST19} and variants thereof \cite{DBLP:journals/tog/PennerZ17, DBLP:conf/eccv/LiXDS20}. While this allows an efficient rendering of novel views from the obtained representations, their layered nature limits the range of novel views that can be synthesized with them. Another emerging approach \cite{DBLP:conf/eccv/MildenhallSTBRN20} represents a five dimensional light field directly with a multi-layer-perceptron (MLP), but still requires a large number of input views to correctly learn this MLP. \figapproachmerged \noindent In the case of novel view synthesis from a single view, SfM approaches cannot be used to estimate proxy geometries and early works relied on human interaction to obtain a scene model \cite{DBLP:conf/siggraph/HorryAA97}. \cite{DBLP:conf/iccv/SrinivasanWSRN17} uses a large scale, scene-specific light field dataset to learn CNNs which predict light fields from a single image. \cite{DBLP:conf/cvpr/LiuHS18} assumes that scenes can be represented by a fixed set of planar surfaces. 
To handle more general scenes, most methods rely on monocular depth estimation \cite{Ranftl2020, DBLP:conf/eccv/GargKC016, DBLP:conf/cvpr/GodardAB17, DBLP:conf/cvpr/ZhouBSL17, DBLP:conf/iccv/GodardAFB19} to predict warps \cite{DBLP:journals/tog/NiklausMYL19, DBLP:conf/cvpr/WilesGS020, DBLP:journals/corr/abs-2012-09855} or LDIs \cite{DBLP:journals/prl/DhamoTLNT19, DBLP:journals/tog/KopfMAQGCPFWYZH20, DBLP:conf/cvpr/ShihSKH20}, and \cite{DBLP:conf/cvpr/TuckerS20} directly predicts an MPI from a single view. To handle disocclusions, most of these methods rely on adversarial losses, inspired by generative adversarial networks (GANs) \cite{DBLP:conf/nips/GoodfellowPMXWOCB14}, to perform inpainting in these regions. However, the quality of these approaches quickly degrades for larger viewpoint changes because they do not model the uncertainty of the task. While adversarial losses can remedy an averaging effect over multiple possible realizations to some degree, our empirical results demonstrate the advantages of properly modeling the probabilistic nature of novel view synthesis from a single image. \noindent \textbf{Self-Attention and Transformers} The \emph{transformer} \cite{DBLP:conf/nips/VaswaniSPUJGKP17} is a sequence-to-sequence model that models interactions between learned representations of sequence elements by the so-called attention mechanism \cite{DBLP:journals/corr/BahdanauCB14, DBLP:conf/emnlp/ParikhT0U16}. Importantly, this mechanism does not introduce locality biases such as those present in \eg CNNs, as the importance and interactions of sequence elements are weighed regardless of their relative positioning. We build our autoregressive transformer from the GPT-2 architecture \cite{radford2019language}, \ie multiple blocks of multihead self-attention, layer norm \cite{DBLP:journals/corr/BaKH16} and position-wise MLP.
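A minimal single-head numpy sketch of the scaled dot-product self-attention at the core of each such block; the random projection weights are placeholders, and the causal mask corresponds to the autoregressive setting used here:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv, causal=True):
    """Scaled dot-product self-attention over a sequence x of shape (T, d).
    Every position attends to every (previous, if causal) position, so no
    locality bias is imposed, in contrast to a convolution."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # (T, T) interaction weights
    if causal:  # autoregressive mask: position t only sees positions <= t
        scores = np.where(np.tri(len(x), dtype=bool), scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over key positions
    return weights @ V

rng = np.random.default_rng(0)
T, d = 8, 16
x = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)   # shape (T, d)
```

With the causal mask, perturbing a later sequence element leaves the outputs at earlier positions unchanged, which is what makes autoregressive likelihood training possible.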
\noindent \textbf{Autoregressive Two-Stage Approaches} Our approach is based on work in neural discrete representation learning (VQVAE) \cite{DBLP:conf/nips/OordVK17}, which aims to learn a discrete, compressed representation through either vector quantization or soft relaxation of the discrete assignment \cite{DBLP:conf/iclr/MaddisonMT17, DBLP:conf/iclr/JangGP17}. \noindent This training paradigm provides a suitable space~\cite{DBLP:conf/iclr/SalimansK0K17, dieleman2020typicality, DBLP:conf/iclr/0022KSDDSSA17} to train autoregressive likelihood models on the latent representations and has been utilized to train generative models for hierarchical, class-conditional image synthesis \cite{DBLP:conf/nips/RazaviOV19}, text-controlled image synthesis \cite{DBLP:journals/corr/abs-2102-12092} and music generation \cite{DBLP:journals/corr/abs-2005-00341}. Recently, \cite{DBLP:journals/corr/abs-2012-09841} demonstrated that adversarial training of the VQVAE improves compression while retaining high-fidelity reconstructions, subsequently enabling efficient training of an autoregressive transformer model on the learned latent space (yielding a so-called VQGAN). We directly build on this work and use VQGANs to represent both source and target views and, when needed, depth maps. \section{Approach} \noindent To render a given image $x^{\text{src}}$ experienceable in a 3D manner, we allow the specification of arbitrary new viewpoints, including in particular \emph{large} camera transformations $T$. Since this problem is highly underdetermined, we expect multiple plausible realizations $x^{\text{dst}}$ of the novel view, all of which are consistent with the input. Consequently, we follow a probabilistic approach and sample novel views from the distribution \begin{equation} x^{\text{dst}} \sim p(x^{\text{dst}} \vert x^{\text{src}}, T).
\label{eq:dstdistr} \end{equation} To solve this task, a model must explicitly or implicitly learn the 3D relationship between both images and $T$. In contrast to most previous work that tries to solve this task with CNNs and therefore oftentimes includes an explicit 3D transformation, we want to use the expressive transformer architecture and investigate to what extent the explicit specification of such a 3D model \emph{is necessary at all}. \noindent Sec.~\ref{subsec:transformertraining} describes how to train a transformer model in the latent space of a VQGAN. Next, Sec.~\ref{subsec:encodingbiases} shows how inductive biases can be built into the transformer and describes all bias-variants that we analyze. Finally, Sec.~\ref{subsec:depthreadout} presents our approach %
to extract geometric information from a transformer where no 3D bias has been explicitly specified. \subsection{Probabilistic View Synthesis in Latent Space} \label{subsec:transformertraining} \noindent Learning the distribution in Eq.~\eqref{eq:dstdistr} requires a model which can capture long-range interactions between source and target view to implicitly represent geometric transformations. Transformer architectures naturally meet these requirements: unlike CNNs with their local convolutional kernels, they are not confined to short-range relations, and they exhibit state-of-the-art performance \cite{DBLP:conf/nips/VaswaniSPUJGKP17}. Since likelihood-based models have been shown \cite{DBLP:conf/iclr/SalimansK0K17} to spend too much capacity on short-range interactions of pixels when modeling images directly in pixel space, we follow \cite{DBLP:journals/corr/abs-2012-09841} and employ a two-stage training. The first stage performs adversarially guided discrete representation learning (VQGAN), obtaining an abstract latent space %
that has proved to be well-suited for efficiently training generative transformers \cite{DBLP:journals/corr/abs-2012-09841}.
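Sampling from the resulting two-stage model then reduces to the following loop (a sketch under stated assumptions: \texttt{logits\_fn} stands in for the trained transformer $\mathcal{T}$, and the returned indices would subsequently be decoded by the VQGAN decoder $G$):

```python
import numpy as np

def sample_target_tokens(cond, seq_len, vocab_size, logits_fn, rng):
    """Autoregressively sample the target view's codebook indices s_i,
    conditioned on the embedded source view and camera, cond = f(x_src, T).
    logits_fn(cond, s) stands in for a forward pass of the transformer."""
    s = []
    for _ in range(seq_len):
        logits = logits_fn(cond, s)
        p = np.exp(logits - logits.max())   # softmax over the codebook
        p /= p.sum()
        s.append(int(rng.choice(vocab_size, p=p)))
    return s                                # decode with the VQGAN decoder G
```

With a real model, `cond` is the conditioning sequence and `logits_fn` returns the next-token distribution given the tokens sampled so far.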
% \newcommand{g}{g}
\noindent \textbf{Modeling Conditional Image Likelihoods} VQGAN consists of an encoder $E$, a decoder $G$ and a codebook $\mathcal{Z} = \{z_i\}_{i=1}^{\vert \mathcal{Z} \vert}$ of discrete representations $z_i \in \mathbb{R}^{d_z}$. %
The trained VQGAN allows encoding any $x \in \mathbb{R}^{H \times W \times 3}$ into the discrete latent space as $E(x) \in \mathbb{R}^{h\times w \times d_z}$\footnote{This includes the vector quantization step as described in \cite{DBLP:conf/nips/OordVK17}}. Unrolled in raster-scan order, this latent representation corresponds to a sequence $s \in \mathbb{R}^{h\cdot w \times d_z}$ and can be equivalently expressed as a sequence of integers which index the learned codebook $\mathcal{Z}$. Following the usual designation \cite{DBLP:conf/nips/VaswaniSPUJGKP17}, we refer to the sequence elements as ``tokens''. An embedding function $g=g(s)\in\mathbb{R}^{h\cdot w \times d_e}$ maps each of these tokens into the embedding space of the transformer $\mathcal{T}$ and adds learnable positional encodings. Similarly, to encode the input view $x^{\text{src}}$ and the camera transformation $T$, both are mapped into the embedding space by a function $f$:
\begin{equation}
f: (x^{\text{src}}, T) \mapsto f(x^{\text{src}}, T) \in \mathbb{R}^{n \times d_e}, %
\end{equation}
where $n$ denotes the length of the conditioning sequence. By using different functions $f$, various inductive biases can be incorporated into the architecture, as described in Sec.~\ref{subsec:encodingbiases}. The transformer $\mathcal{T}$ then processes the concatenated sequence $[f(x^{\text{src}}, T), g(s^{\tdst})]$ to learn the distribution of plausible novel views conditioned on $x^{\text{src}}$ and $T$,
\begin{equation}
p_{\mathcal{T}}\Big(s^{\tdst} | f(x^{\text{src}}, T) \Big) = \prod_i p_{\mathcal{T}}\Big(s^{\tdst}_i \vert s^{\tdst}_{<i}, f(x^{\text{src}}, T)\Big).
\end{equation}
\noindent Hence, to train an autoregressive transformer by next-token prediction $p_{\mathcal{T}}(s_i \vert s_{<i},f(x^{\text{src}}, T))$ we maximize the log-likelihood of the data, leading to the training objective %
\begin{equation}
\mathcal{L}_{\mathcal{T}} = \mathbb{E}_{x^{\text{src}}, x^{\text{dst}} \sim p(x^{\text{src}}, x^{\text{dst}})} \Big[-\log p_{\mathcal{T}}\big(s^{\tdst} \vert f(x^{\text{src}}, T)\big) \Big].
\end{equation}
\vspace{-1.75em}
\subsection{Encoding Inductive Biases}
\vspace{-0.35em}
\label{subsec:encodingbiases}
\noindent Besides achieving high-quality novel view synthesis, we aim to investigate to what extent transformers depend on a 3D inductive bias. To this end, we compare approaches where a geometric transformation is built \emph{explicitly} into the conditioning function $f$, and approaches where no such transformation is used. In the latter case, the transformer itself must learn the required relationship between source and target view. If successful, the transformation will be described \emph{implicitly} by the transformer. \noindent \textbf{Geometric Image Warping} We first describe how an explicit geometric transformation results from the 3D relation of source and target images. For this, pixels of the source image are back-projected to three-dimensional coordinates, which can then be re-projected into the target view. We assume a pinhole camera model, such that the projection of 3D points to homogeneous pixel coordinates is determined by the intrinsic camera matrix $K$. The transformation %
between source and target coordinates is given by a rigid motion, consisting of a rotation $R$ and a translation $t$. Together, these parameters specify the desired control over the novel view to be generated, \ie $T=(K,R,t)$. \tabmergedbiascomp To project pixels back to 3D coordinates, we require information about their depth $d$, since this information has been discarded by their projection onto the camera plane.
Since we assume access to only a single source view, we require a monocular depth estimate. Following previous works \cite{DBLP:conf/cvpr/ShihSKH20, DBLP:journals/corr/abs-2012-09855}, we use MiDaS \cite{Ranftl2020} in all of our experiments which require monocular depth information. The transformation can now be described as a mapping of pixels $i \in \{1,\dots,H\}, j \in \{1,\dots,W\}$ in the source image $x^{\text{src}} \in \mathbb{R}^{H\times W\times 3}$ to pixels $i', j'$ in the target image. In homogeneous coordinates, their relationship is given by
\begin{equation}
\label{eq:flow} %
\begin{pmatrix} j' \\ i' \\ 1 \end{pmatrix} \simeq K\left( R K^{-1} d(i,j) \begin{pmatrix} j \\ i \\ 1 \end{pmatrix} + t\right)
\end{equation}
This relationship defines a forward flow field $F^{\tsrc \rightarrow \tdst}=F^{\tsrc \rightarrow \tdst}(K, R, t, d) \in \mathbb{R}^{H\times W\times 3}$ from source to target as a function of depth and camera parameters. The flow field can then be used to warp the source image $x^{\text{src}}$ into the target view with a warping operation $\mathcal{S}$:
\begin{equation}
x^\text{wrp}=\mathcal{S}(F^{\tsrc \rightarrow \tdst}, x^\text{src}).
\end{equation}
Because the target pixels obtained from the flow are not necessarily integer valued, we follow \cite{DBLP:conf/cvpr/Niklaus020} and implement $\mathcal{S}$ by bilinearly splatting features across the four closest target pixels. When multiple source pixels map to the same target pixel, we use their relative depth to give points closer to the camera more weight---a soft variant of z-buffering. In the simplest case, we can now describe the difference between explicit and implicit approaches in the way that they receive information about the source image and the desired target view.
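For concreteness, the flow-and-splat operation $\mathcal{S}$ can be sketched in a few lines of NumPy. This is an illustrative simplification: it uses nearest-pixel splatting with an ad-hoc soft z-weight, whereas the text above uses bilinear splatting across the four closest pixels.

```python
import numpy as np

def forward_warp(src, depth, K, R, t):
    """Warp src (H, W, 3) into the target view given by (K, R, t).
    Back-projects pixels with their depth, re-projects them, and splats
    colors to the nearest target pixel with soft z-buffer weights."""
    H, W, _ = src.shape
    j, i = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([j, i, np.ones_like(i)], -1).reshape(-1, 3).T   # (3, H*W)
    pts = R @ (np.linalg.inv(K) @ (pix * depth.reshape(1, -1))) + t[:, None]
    uvw = K @ pts
    u, v, z = uvw[0] / uvw[2], uvw[1] / uvw[2], uvw[2]
    out = np.zeros_like(src, dtype=float)
    acc = np.zeros((H, W))
    w = np.exp(-z / z.mean())                  # closer points weigh more
    ui, vi = np.round(u).astype(int), np.round(v).astype(int)
    ok = (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H) & (z > 0)
    cols = src.reshape(-1, 3).astype(float)
    np.add.at(out, (vi[ok], ui[ok]), w[ok, None] * cols[ok])       # scatter-add
    np.add.at(acc, (vi[ok], ui[ok]), w[ok])
    out[acc > 0] /= acc[acc > 0, None]
    return out
```

Pixels that receive no contribution remain zero; these are exactly the disocclusions that the generative model must fill in.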
Here, explicit approaches receive source information warped using the camera parameters, whereas implicit approaches receive the original source image and the camera parameters themselves, \ie
\begin{align}
&\text{\emph{explicit:}} & \mathcal{S}(F^{\tsrc \rightarrow \tdst}(K, R, t, d), x^{\text{src}}) \\
&\text{\emph{implicit:}} & (K, R, t, d, x^{\text{src}})
\end{align}
Thus, in explicit approaches we enforce an inductive bias on the 3D relationship between source and target by making this relationship explicit, while implicit approaches have to learn it on their own. Next, we introduce a number of different variants for each, which are summarized in Fig.~\ref{fig:approachmerged}. \noindent \textbf{Explicit Geometric Transformations}
\newcommand{e}{e}
\newcommand{e^\tpos}{e^\text{pos}}
In the following, we describe all considered variants in terms of the transformer's conditioning function $f$. Furthermore, $e$ denotes a learnable embedding mapping the discrete VQGAN codes $E(x)$ into the embedding space of the transformer. Similarly, $e^\tpos \in \mathbb{R}^{n\times d_e}$ denotes a learnable positional encoding. The flow field $F^{\tsrc \rightarrow \tdst}(K, R, t, d)$ is always computed from $x^{\text{src}}$ and, to improve readability, we omit it from the arguments of the warping operation, \ie $\mathcal{S}(\cdot)=\mathcal{S}(F^{\tsrc \rightarrow \tdst}(K, R, t, d), \cdot)$. \emph{(I)} Our first explicit variant, \emph{expl.-img}{}, warps the source image and encodes it in the same way as the target image:
\begin{equation}
\label{eq:fvari}
f(x^{\text{src}}, T) = e(E(\mathcal{S}(x^{\text{src}})))+e^\tpos
\end{equation}
\emph{(II)} Inspired by previous works \cite{DBLP:conf/eccv/RieglerK20,DBLP:conf/eccv/AlievSKUL20}, we include an \emph{expl.-feat}{} variant which first encodes the original source image, and subsequently applies the warping on top of these features.
We again use the VQGAN encoder $E$ to obtain
\begin{equation}
\label{eq:fvarii}
f(x^{\text{src}}, T) = e(\mathcal{S}(E(x^{\text{src}})))+e^\tpos
\end{equation}
\emph{(III)} To account for the fact that the warped features in Eq.~\eqref{eq:fvarii} remain fixed (due to $E$ being frozen), we also consider an \emph{expl.-emb}{} variant that warps the \emph{learnable} embeddings and positional encodings of the transformer model. More precisely, we concatenate original embeddings with their warped variants and merge them with a learnable matrix. Doing this for both the embeddings of the codes and for the positional encodings using matrices $W^\text{emb}, W^\text{pos} \in \mathbb{R}^{d_e \times 2\cdot d_e}$, the conditioning function $f$ then reads:
\begin{align}
\label{eq:fvariii}
f(x^{\text{src}}, T) = &W^\text{emb}[ e(E(x^{\text{src}})), \mathcal{S}(e(E(x^{\text{src}}))) ] +\nonumber\\
&W^\text{pos}[ e^\tpos, \mathcal{S}(e^\tpos) ]
\end{align}
\noindent \textbf{Implicit Geometric Transformations} Next, we describe implicit variants that we use to analyze whether transformers---with their ability to attend to all positions equally well---require an explicit geometric transformation built into the model. We use the same notation as for the explicit variants. \emph{(IV)} The first variant, \emph{impl.-catdepth}{}, provides the transformer with all the same components which are used in the explicit variants: camera parameters $K, R, t$, estimated depth $d$ and source image $x^{\text{src}}$. Camera parameters are flattened and concatenated to $\hat{T}$, which is mapped via $W^\text{cam} \in \mathbb{R}^{d_e \times 1}$ to the embedding space.
Depth and source images are encoded by VQGAN encoders $E^\text{d}$ and $E$ to obtain
\begin{equation}
\label{eq:fvariv}
f(x^{\text{src}}, T) = [W^\text{cam} \hat{T}, e(E^\text{d}(d)), e(E(x^{\text{src}}))] + e^\tpos
\end{equation}
Compared to the other variants, this sequence is roughly $\frac{3}{2}$ times longer, resulting in twice the computational costs. \figreconstructionbothdepth \emph{(V)} Therefore, we also include an \emph{impl.-depth}{} variant, which concatenates the discrete codes of depth and source image, and maps them with a matrix $W \in \mathbb{R}^{d_e \times 2\cdot d_z}$ to the embedding space to avoid an increase in sequence length:
\begin{equation}
\label{eq:fvarv}
f(x^{\text{src}}, T) = [W^\text{cam} \hat{T}, W[E^\text{d}(d), E(x^{\text{src}})]] +e^\tpos
\end{equation}
\emph{(VI)} Implicit approaches offer an intriguing possibility: because they do not need an explicit estimate of the depth to perform the warping operation $\mathcal{S}$, they hold the potential to solve the task without such a depth estimate. Thus, \emph{impl.-nodepth}{} uses only camera parameters and source image---the bare minimum according to our task description.
\begin{equation}
\label{eq:fvarvi}
f(x^{\text{src}}, T) = [W^\text{cam} \hat{T}, e(E(x^{\text{src}}))] + e^\tpos
\end{equation}
\emph{(VII)} Finally, we analyze whether explicit and implicit approaches offer complementary strengths. Thus, we add a \emph{hybrid}{} variant whose conditioning function is %
the sum of the $f$'s of \emph{expl.-emb}{} in Eq.~\eqref{eq:fvariii} and \emph{impl.-depth}{} in Eq.~\eqref{eq:fvarv}.
\subsection{Depth Readout}
\vspace{-0.35em}
\label{subsec:depthreadout}
\noindent To investigate the ability to learn an implicit model of the geometric relationship between different views, we propose to extract an explicit estimate of depth from a trained model.
To do so, we use linear probing \cite{DBLP:conf/icml/ChenRC0JLS20}, which is commonly used to investigate the feature quality of unsupervised approaches. More specifically, we assume a transformer model consisting of $L$ layers and of type \emph{impl.-nodepth}{}, which is conditioned on source frame and transformation parameters only. Next, we specify a certain layer $0 \leq l \leq L$ (where $l=0$ denotes the input) and extract its latent representation $e^l$, corresponding to the positions of the provided source frame $x^{\text{src}}$. We then train a position-wise linear classifier $W$ to predict the discrete, latent representation of the depth-encoder $E^\text{d}$ (see Sec.~\ref{subsec:encodingbiases}) via a cross-entropy objective from $e^l$. Note that both the weights of the transformer and the VQGANs remain fixed.
\section{Experiments}
\vspace{-0.35em}
\label{sec:experiments}
\noindent First, Sec.~\ref{subsec:implicitvsexplicit} integrates the different explicit and implicit inductive biases into the transformer to judge if such geometric biases are needed at all. Following up, %
Sec.~\ref{subsec:comparesota} compares implicit variants to previous work and evaluates both the visual quality and fidelity of synthesized novel views. Finally, we evaluate the ability of the least biased variant, \emph{impl.-nodepth}{}, to implicitly represent scene geometry, %
observing that it indeed captures such 3D information.
\subsection{Comparing Implicit and Explicit Transformers}
\vspace{-0.35em}
\label{subsec:implicitvsexplicit}
\noindent To investigate whether transformers need (or benefit from) an explicit warping between source and target view, we first compare how well the different variants from Sec.~\ref{subsec:encodingbiases} (see also Fig.~\ref{fig:approachmerged}) can learn a probabilistic model for novel view synthesis. Afterwards, we directly evaluate both the quality and fidelity of samples obtained from these models.
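The linear probing readout described in Sec.~\ref{subsec:depthreadout} amounts to nothing more than a frozen-feature softmax regression; a minimal sketch (array names are illustrative):

```python
import numpy as np

def probe_loss(e_l, targets, W, b):
    """Cross-entropy of a position-wise linear probe.
    e_l: (n, d_e) frozen transformer activations at layer l;
    targets: (n,) depth-codebook indices from the depth encoder E^d;
    W: (d_e, V) and b: (V,) are the only trainable parameters."""
    logits = e_l @ W + b
    logits = logits - logits.max(axis=-1, keepdims=True)           # stability
    logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -logp[np.arange(len(targets)), targets].mean()
```

Training `W` and `b` by gradient descent on this loss, with the transformer and VQGANs fixed, yields the depth readout evaluated later.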
To prepare, we first train VQGANs on frames of the RealEstate10K \cite{DBLP:journals/tog/ZhouTFFS18} and ACID \cite{DBLP:journals/corr/abs-2012-09855} datasets, whose preparation is described in the supplementary. We then train the various transformer variants %
on the latent space of the respective first stage models. Note that this procedure ensures comparability of different settings within a given dataset, as the space in which the likelihood is measured remains fixed. \noindent \textbf{Comparing Density Estimation Quality} A basic measure for the performance of probabilistic models is the likelihood assigned to validation data. Hence, we begin our evaluation of the different variants by comparing their (minimal) negative log-likelihood (NLL) on RealEstate and ACID. %
\noindent Based on the results in Tab.~\ref{tab:mergedbiascomp}, we can identify groups with significant performance differences on ACID: the implicit variants \emph{impl.-catdepth}{}, \emph{impl.-depth}{} and \emph{impl.-nodepth}{}, together with \emph{hybrid}{}, achieve the best performance, which indicates an advantage over the purely explicit variants. Adding an explicit warping as in the \emph{hybrid}{} model does not help significantly. \noindent Moreover, %
\emph{expl.-feat}{} is unfavorable, possibly due to the features $E(x^{\text{src}})$ remaining fixed while training the transformer. The \emph{learnable} features which are warped in variant \emph{expl.-emb}{} obtain a lower NLL and thereby confirm the former hypothesis. Still, warped features yield no improvement over warped pixels as in variant \emph{expl.-img}{}. \figentropy \figrealestate \noindent The results on RealEstate look similar, but in this case the implicit variant without depth, \emph{impl.-nodepth}{}, performs a bit worse than \emph{expl.-img}{}.
%
Presumably, accurate depth information obtained from a supervised, monocular depth estimation model is much more beneficial in the indoor setting of RealEstate compared to the outdoor setting of ACID. \noindent \textbf{Visualizing Entropy of Predictions} The NLL %
measures the ability of the transformer to predict target views. The entropy of the predicted distribution over the codebook entries for each position captures the prediction uncertainty of the model. See Fig.~\ref{fig:entropy} for a visualization of variant \emph{impl.-nodepth}{}. The model is more confident in its predictions for regions which are visible in the source image. This indicates that it is indeed able to relate source and target via their geometry instead of simply predicting an arbitrary novel view. \noindent \textbf{Measuring Image Quality and Fidelity} Because the NLL does not necessarily reflect the visual quality of the images \cite{DBLP:journals/corr/TheisOB15}, we also evaluate the latter directly. Comparing predictions for novel views with the ground-truth helps to judge the faithfulness with which the model transforms the source view into the novel view as specified by the camera. However, since we are especially interested in the case of large camera movements where large parts of the target image have not been observed in the source view, we must also evaluate the quality of the content imagined by the model. Note that a sample from the model might be fairly different from the content in the ground-truth, since the latter is just one of many possible realizations of the real world. \noindent To evaluate the image quality without a direct comparison to the ground-truth, we report FID scores \cite{DBLP:conf/nips/HeuselRUNH17}.
To evaluate the fidelity to the ground-truth, we report the low-level similarity metrics SSIM \cite{DBLP:journals/tip/WangBSS04} and PSNR, and the high-level similarity metric PSIM \cite{DBLP:conf/cvpr/ZhangIESW18}, which has been shown to better represent human assessments of visual similarity. \noindent Tab.~\ref{tab:mergedbiascomp} contains the results for RealEstate10K and ACID. In general, these results reflect the findings from the NLL values: Image quality and fidelity of implicit variants with access to depth are superior to explicit variants. The implicit variant without depth (\emph{impl.-nodepth}{}) consistently achieves the same good FID scores as the implicit variants with depth (\emph{impl.-catdepth}{} \& \emph{impl.-depth}{}), but cannot achieve quite the same level of performance in terms of reconstruction fidelity. However, it is on par with the explicit variants, albeit requiring no depth supervision. \vspace{-0.75em} \subsection{Comparison to Previous Approaches} \vspace{-0.35em} \label{subsec:comparesota} \tabrealsotacomp \enlargethispage{1\baselineskip} \noindent Next, we compare our best performing variants \emph{impl.-depth}{} and \emph{impl.-nodepth}{} to previous approaches for novel view synthesis: 3DPhoto \cite{DBLP:conf/cvpr/ShihSKH20}{}, SynSin \cite{DBLP:conf/cvpr/WilesGS020}{} and InfNat \cite{DBLP:journals/corr/abs-2012-09855}{}. 3DPhoto \cite{DBLP:conf/cvpr/ShihSKH20}{} has been trained on MSCOCO \cite{DBLP:conf/eccv/LinMBHPRDZ14} to work on arbitrary scenes, whereas SynSin \cite{DBLP:conf/cvpr/WilesGS020}{} and InfNat \cite{DBLP:journals/corr/abs-2012-09855}{} have been trained on RealEstate and ACID, respectively. \noindent To assess the effect of formulating the problem probabilistically, we introduce another baseline to compare probabilistic and deterministic models with otherwise equal architectures. 
Specifically, we use the same VQGAN encoder and decoder architectures as used in the first stage described in Sec.~\ref{subsec:transformertraining}. However, they are not trained as an autoencoder; instead, the encoder receives the warped source image $x^\text{wrp}$, and the decoder predicts the target image $x^\text{dst}$. This model, denoted by \emph{expl.-det}{}, represents an \emph{explicit and deterministic} baseline. Finally, we include the warped source image itself as a baseline denoted by MiDaS \cite{Ranftl2020}{}.\\ \figacid \noindent Utilizing the probabilistic nature of our model, we analyze how close we can get to a particular target image with a fixed amount of samples. Tab.~\ref{tab:realsotacomp} and \ref{tab:acidsotacomp} report the reconstruction metrics with 32 samples per validation example of RealEstate and ACID. \enlargethispage{1\baselineskip} \noindent The probabilistic variants consistently achieve the best values for the similarity metrics PSIM, SSIM and PSNR on RealEstate, and are always among the best three values on ACID, where \emph{expl.-det}{} achieves the best PSIM values and the second best PSNR values after \emph{impl.-depth}{}. We also show the reconstruction metrics on RealEstate as a function of the number of samples in Fig.~\ref{fig:reconstructionbothdepth}. We observe that already with four samples, the performance of \emph{impl.-depth}{} is better than that of all other approaches except for the SSIM values of 3DPhoto \cite{DBLP:conf/cvpr/ShihSKH20}{}, which are overtaken by \emph{impl.-depth}{} with 16 samples. Performance does not saturate even when using 32 samples, which demonstrates the advantages of a probabilistic formulation of novel view synthesis.
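The ``best reconstruction within a fixed sample budget'' protocol used in Tab.~\ref{tab:realsotacomp} can be made precise in a few lines (a sketch; PSNR shown, the other metrics are handled analogously):

```python
import numpy as np

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, peak]."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def best_of_n(samples, target, metric=psnr):
    """Best achievable score over a fixed budget of model samples, as in the
    32-sample evaluation: a probabilistic model gets credit if any sample
    comes close to the ground-truth realization."""
    return max(metric(s, target) for s in samples)
```

A deterministic baseline corresponds to the degenerate case of a single sample.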
\\ \tabacidsotacomp \noindent These results should be considered along with the competitive FID scores in Tab.~\ref{tab:realsotacomp} and \ref{tab:acidsotacomp} (where the implicit variants always constitute the best and second best value) and the qualitative results in Fig.~\ref{fig:realestate} and \ref{fig:acid}, underlining the high quality of our synthesized views. Furthermore, it is striking that IS assigns the best scores to 3DPhoto \cite{DBLP:conf/cvpr/ShihSKH20}{} and MiDaS \cite{Ranftl2020}{}, the two variants which contain large and plain regions of gray color in regions where the source image does not provide information about the content. In cases where the monocular depth estimation is accurate, 3DPhoto \cite{DBLP:conf/cvpr/ShihSKH20}{} shows good results, but it can only inpaint small areas. SynSin \cite{DBLP:conf/cvpr/WilesGS020}{} and InfNat \cite{DBLP:journals/corr/abs-2012-09855}{} can fill larger areas but, for large camera motions, their results quickly become blurry. A similar observation holds for \emph{expl.-det}{}, except that it replaces blurriness with repetitive patterns. Both of the probabilistic variants \emph{impl.-depth}{} and \emph{impl.-nodepth}{} consistently produce plausible results which are largely consistent with the source image, although small details sometimes differ from it. Overall, the results show that only the probabilistic variants are able to synthesize high quality images for large camera changes.
\subsection{Probing for Geometry}
\vspace{-0.5em}
\label{subsec:extractthedepth}
\noindent Based on the experiments in Sec.~\ref{subsec:implicitvsexplicit} and Sec.~\ref{subsec:comparesota}, which showed that the unbiased variant \emph{impl.-nodepth}{} is mostly on par with the others, we investigate the question of whether this model is able to develop an implicit 3D ``understanding'' without explicit 3D supervision.
To do so, we perform linear probing experiments as described in Sec.~\ref{subsec:depthreadout}. \enlargethispage{1\baselineskip} \noindent Fig.~\ref{fig:depthprobinglayers} plots the negative cross-entropy loss and the negative PSIM reconstruction error of the recovered depth maps against the layer depth of the transformer model. Both metrics are consistent and quickly increase when probing deeper representations of the transformer model. Furthermore, both curves exhibit a peak for $l=4$ (\ie after the third self-attention block) and then slowly decrease with increasing layer depth. The depth maps obtained from this linear map resemble the corresponding true depth maps qualitatively well, as shown in Fig.~\ref{fig:depthprobinglayers}. This figure demonstrates that a linear estimate of depth only becomes possible through the representation learned by the transformer ($l=4$) but not by the representation of the VQGAN encoder ($l=0$). We hypothesize that, in order to map an input view onto a target view, the transformer indeed develops an implicit 3D representation of the scene to solve its training task.
\vspace{-0.85em}
\label{subsec:probingdepth}
\figdepthprobinglayers \figdepthprobesqualitative
\section{Conclusion}
\vspace{-0.65em}
\enlargethispage{1\baselineskip}
\noindent We have introduced a probabilistic approach based on transformers for novel view synthesis from a single source image with strong changes in viewpoint. Comparing various explicit and implicit 3D inductive biases for the transformer showed that explicitly building a 3D transformation into the architecture does not improve performance. Moreover, even with no depth information as input, the model learns to infer depth within its internal representations. Both of our implicit transformer approaches showed significant improvements over the state of the art in visual quality and fidelity.%
\newpage
\section{Introduction} The imposition of normal ordering on the Lagrangian, energy-momentum tensor and currents has become a standard feature in QFT, designed to eliminate the infinities in the vacuum expectation values (v.e.v.) of these dynamical entities. Although axiomatic components in fundamental theories are generally acceptable, ad hoc features like this should eventually be replaced by a more principled approach or be shown to be a consequence of such principles. As Dirac noted in the past: ``We want to understand how Nature works; to understand the basic ideas which Nature is using, and for that purpose it is not sufficient to have rules giving results in agreement with observation'' (p. 759, \cite{Dirac}). His comments were directed at renormalization, but they are equally valid here. Let us see how the ad hoc usage of the normal product was defended in the past. The basic texts on QFT usually start with a discussion of the scalar case, which then sets the tone for the rest of QFT and its principles. This order of presentation is an unfortunate one in the current context, as the scalar case does not easily reveal the cause of these unphysical infinities. In the classic texts by Gasiorowicz ([2], p.16) and Bogoliubov and Shirkov ([3], p.103-104, [4], p.76) it is stated that expressions containing products of operators at the same point lead to infinities and must be normal ordered, e.g. in the case of Hamiltonians and currents. Since there is no other obvious cure to the infinity problem in the scalar case, the application of the ad hoc normal ordering recipe has been widely accepted. One has to go to the fermion case to acquire more insight into the source of these infinities. Schweber ([5], p.228) does discuss the fermion case and notices that the infinite contributions come from anti-particles. However, he associates these with the charge of the sea of occupied negative energy states.
Sakurai ([6], p.151) has a similar comment, attributing the infinite negative vacuum energy to anti-particles in the Dirac sea. Neither author appears concerned about the implied asymmetry between particles and anti-particles. This is unfortunate, as this asymmetry holds the key towards a solution of the infinity problem. In more recent texts by Itzykson and Zuber (\cite{1980Itzykson}, p.111), Kaku (\cite{1993Kaku}, p.67), Peskin and Schroeder (\cite{1995Peshkin}, p.22), and Ryder (\cite{1996Ryder}, p.131) the normal product is also introduced for the scalar field, but the dropping of the infinite vacuum term is justified by arguing that such an energy shift is not measurable. However, under general relativity (GR) the absolute energy plays an essential role, so this argument is not a principled one. In addition, the shift argument would not apply to currents, as the absolute magnitude of a current is physically important. In discussing the zero point momentum some authors argue that the cancellation between positive and negative momenta in the vacuum integral renders the v.e.v. zero, so that no ad hoc prescription is required (e.g. \cite{1980Itzykson}, p.115). This survey shows that the arguments to eliminate the infinite contributions are very diverse (Dirac sea, irrelevant absolute energy scale, ad hoc operational rule to simplify calculations) or opportunistic (we do not need a recipe if there happens to be a cancellation between infinite terms). The discussion is also ambiguous about the nature of zero-point energy: is it physical and is it removed for practical reasons, or is it unphysical and does one have to revise the formulation to ensure that it vanishes? Most theorists seem to think that zero point energy is real and contributes towards the cosmological constant, despite the extreme conflict of this assumption with cosmological observations (a conflict which is not really addressed by introducing cutoffs and renormalization).
Even in string theory the role of the ad hoc application of the normal product is still prominent (\cite{1987Green1}, p. 77). In supersymmetry the infinite contributions are considered to be real, and the possible cancellation of bosonic and fermionic contributions to the v.e.v. becomes an important motivation for the approach. Clearly, there is no coherent understanding of this problem and a more principled approach is called for. We will show that by starting with the fermion case, one gets a physical understanding of the reason behind these infinite terms and thereby is able to identify the key towards their removal. This fermionic analysis can then also form the basis for a better understanding and treatment of this phenomenon in the boson case by fermionization of the boson fields.
\section{Analysis of infinities in the fermion problem}
Most discussions of the application of the ad hoc normal product start with the scalar field. The operator expression for the energy reads:
\begin{equation}
\label{eq:zeropoint}
\hbar \omega \frac{1}{2}[a^{\dag}(\vec{k})a(\vec{k})+ a(\vec{k})a^{\dag}(\vec{k})] =\hbar \omega a^{\dag}(\vec{k}) a(\vec{k})+\frac{1}{2} \hbar \omega \delta_{p}(0)\ .
\end{equation}
After discretization in $\vec{k}$ the delta function is unity and $\frac{1}{2} \hbar \omega$ can be identified as the zero-point energy, corresponding to the lowest harmonic oscillator energy. Since the v.e.v. of this term in the Hamiltonian is infinite, one usually applies the normal product (commuting the creation operators to the left) to eliminate it:
\begin{equation}
\label{eq:zeropointx}
\hbar \omega \frac{1}{2}:[a^{\dag}(\vec{k})a(\vec{k})+ a(\vec{k})a^{\dag}(\vec{k})]: =\hbar \omega a^{\dag}(\vec{k}) a(\vec{k}).
\end{equation}
Although this procedure is completely ad hoc and is not justified by any physical principle, there is no other obvious way to eliminate this infinite term.
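For completeness, the divergent c-number in Eq.~(\ref{eq:zeropoint}) follows in one line from the canonical commutation relation:
\[
[a(\vec{k}),a^{\dag}(\vec{k}')]=\delta(\vec{k}-\vec{k}')
\quad\Longrightarrow\quad
a(\vec{k})a^{\dag}(\vec{k})=a^{\dag}(\vec{k})a(\vec{k})+\delta_{p}(0),
\]
so the symmetrized product inevitably carries the state-independent term $\frac{1}{2}\hbar\omega\,\delta_{p}(0)$.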
In fact, many theorists nowadays consider this elimination of the infinite term a question of convenience, as they believe that this zero point energy is a real effect, and that the infinity of the v.e.v. can be controlled somehow by renormalization, although the discrepancy with the observed cosmological constant remains totally unexplained. We now consider the fermion case, taking the discussion by Sakurai (\cite{1967Sakurai}, Eq. 3.364) as a guide. He analyzes the Dirac Lagrangian: \begin{equation} \label{eq:DiracLagrangian} {\cal{L}}=\bar{\psi}(x)\left(i \gamma_{\mu}\partial^{\mu} -m\right)\psi(x) \end{equation} and after expanding the Hamiltonian in creation and annihilation operators he obtains (\cite{1967Sakurai}, Eq. 3.393): \begin{eqnarray} \label{eq:Hamiltonian} H=\sum_{s}\int d\vec{p} E_{\vec{p}} \left( b^{\dag}_{s,\vec{p}}b_{s,\vec{p}}-d_{s,\vec{p}}d^{\dag}_{s,\vec{p}} \right) \nonumber \\ =\sum_{s} \int d\vec{p} E_{\vec{p}} \left( b^{\dag}_{s,\vec{p}}b_{s,\vec{p}}+d^{\dag}_{s,\vec{p}}d_{s,\vec{p}}-\delta_{p}(0) \right)\ . \end{eqnarray} Again the v.e.v. of the last term is infinite. In this case the problematic term is exclusively due to anti-particles, suggesting that the cure to this problem must feature a corresponding asymmetry. Just like in the scalar case, the standard response to the infinity is to normal order the whole Hamiltonian. However, in this case there is an alternative featuring the asymmetry we anticipated, namely to just normal order the anti-particle term: \begin{equation} \label{eq:NormalOrder} -:d_{s,\vec{p}}d^{\dag}_{s,\vec{p}}:=d^{\dag}_{s,\vec{p}}d_{s,\vec{p}}. \end{equation} At the moment this procedure looks just as ad hoc as the general one. However, we can generalize this procedure to longer chains of operators and provide a physical justification for it. 
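The per-mode content of the $-\delta_{p}(0)$ term in Eq. (\ref{eq:Hamiltonian}), and the effect of normal ordering only the anti-particle pair as in Eq. (\ref{eq:NormalOrder}), can likewise be checked in a two-dimensional matrix representation of a single fermionic mode (a numerical sketch; the variable names are our own illustrative choices).

```python
import numpy as np

# One fermionic anti-particle mode in the basis (|0>, d†|0>):
d = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # annihilates d†|0>
dd = d.T                      # creation operator d†

# Anti-particle term of the Hamiltonian, with the discretized
# delta function delta_p(0) -> 1 per mode: -d d† = d†d - 1.
antiparticle_term = -d @ dd
# Normal ordering only the anti-particle pair, -:d d†: = d†d:
reordered_term = dd @ d

def vev(op):
    """Vacuum expectation value <0| op |0>."""
    return op[0, 0]
```

Per discretized mode the anti-particle term contributes $-1$ to the v.e.v. (hence the divergence after summing over modes), while the reordered term contributes zero.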
Under this so-called $\mathbb{R}$-product, which was first introduced in \cite{1GrebenQuark}, the order of all anti-particle operators belonging to a \emph{single} space-time variable is reversed between the bra-state on the left and the ket-state on the right. This operation restores the symmetry (or duality) between particles and anti-particles and as a by-product eliminates the aforementioned infinite terms. To understand the need for this re-ordering product we take a closer look at the nature of the QFT formulation for Dirac particles. After quantization the Dirac field $\psi$ contains the particle annihilation operator $b_{s,\vec{p}}$ and the anti-particle creation operator $d^{\dag}_{s,\vec{p}}$, while the reverse statements apply to the conjugate field $\bar{\psi}$. The creation of a particle and the destruction of an anti-particle both increase the particle number by one, which is why they belong in one expression. The same is true for the creation of an anti-particle and the destruction of a particle, both of which decrease the particle number by one so that they also belong together. This is all pretty obvious; its consequences, however, are not. When we now multiply these combinations in order to describe a Lagrangian or Hamiltonian that conserves lepton or baryon number, then we can choose between two options (we only display the operators to make our point): \begin{equation} \label{eq:normal} \left(\cdots b^{\dag}_{t,\vec{q}}+\cdots d_{t,\vec{q}}\right) \times\left(\cdots b_{s,\vec{p}}+\cdots d^{\dag}_{s,\vec{p}}\right) \end{equation} or \begin{equation} \label{eq:abnormal} \left(\cdots b_{s,\vec{p}}+\cdots d^{\dag}_{s,\vec{p}}\right) \times\left(\cdots b^{\dag}_{t,\vec{q}}+\cdots d_{t,\vec{q}}\right). \end{equation} Clearly, in QFT one uses the first form. 
The historical reason is that this corresponds to the form one would use in a non-relativistic theory with only particles: if we have a one-particle state (ket vector) $b^{\dag}_{s,\vec{p}}|0 \rangle$ then the operator $b_{s,\vec{p}}$ cancels that state, while the left-hand operator $b^{\dag}_{t,\vec{q}}$ cancels the final state (bra vector) $\langle 0| b_{t,\vec{q}}$, thereby selecting the matrix element $A_{ts}(\vec{q},\vec{p})$ in the expansion. However, if the current universe were dominated by anti-particles, then one would need the second form to reach the same goal for anti-particles. This shows that the standard formulation of QFT displays a fundamental asymmetry, due to the historical bias towards particles in non-relativistic formulations. Unfortunately this bias cannot be redressed under the current notational framework of QFT, and there does not seem to exist another mathematical notation which can both capture the usual constraints of QFT and display the required symmetry and duality between particles and anti-particles. Hence, our only alternative appears to be to reverse the order of the \emph{anti-particle} operators between the bra and ket vector, so that the anti-particle operator chains become an exact mirror image of the chains of particle operators. This reversal is effected by the $\mathbb{R}$-product. Had the current universe been dominated by anti-particles then one could have expected a bias towards anti-particles, so that the form (\ref{eq:abnormal}) would have been used and the $\mathbb{R}$-product would have to reverse the order of the \emph{particle} operators. We can also illustrate the need for this product using a less formal, more intuitive pictorial analysis by studying some higher order self-interaction diagrams. 
Consider an operator expression like $b_{\alpha}^{\dag}d_{\beta}^{\dag}d_{\gamma}b_{\delta}$ which is accompanied by a product of amplitudes like $A^{pa}_{\alpha\beta} (x) A^{ap}_{\gamma\delta}(x)$, where the superscript p(a) indicates whether the state is a particle or anti-particle. Other indices of the amplitudes are suppressed as they play no role for the argument. Let us try to analyze the physical meaning of such a term for a particle state like $b_{\varepsilon}^{\dag}|0\rangle$. Clearly we cannot use standard Feynman diagrams to represent this term as all associated amplitudes depend on the same $x=(\vec{x},t)$, so all lines would share a single vertex, which makes it difficult to express the physical content of these terms. The Dirac notation implies a sequence of operations between the bra and ket state, which are best represented by a sequence of individual vertices, i.e. the physical mechanisms can best be represented by a diagram that may look like a time ordered Feynman diagram, despite the fact that all space-time points are identical. The particle component of the operator expression, namely $b_{\alpha}^{\dag} b_{\delta}$, has a clear meaning (as one may expect from a formulation that is biased towards particles): a quark in the initial state characterized by a ket vector $b_{\varepsilon}^{\dag}|0\rangle$, is cancelled by the operator $b_{\delta}$ for $\delta=\varepsilon$, while at the end a quark is recreated to form the final state $\langle0|b_{\alpha}$ by cancelling the operator $b_{\alpha}^{\dag}$. However, to account for the anti-particle operators we need two vertices: one where particle $\delta$ joins the anti-particle $\gamma$, represented by the amplitude $A^{ap}_{\gamma\delta}(x)$, and another vertex where the anti-particle $\beta$ joins the final particle $\alpha$, represented by the amplitude $A^{pa}_{\alpha\beta} (x)$. Diagrammatically the particle can be represented by a line moving upwards, while the anti-particle moves downwards. 
\begin{comment} \begin{tikzpicture} \draw[red,thick,dashed] (0,0) -- (4,0) ; \draw[thick,->] (0,0) -- (4.5,0) node[anchor=north west] {x axis}; \label{fig:graph} \end{tikzpicture} \end{comment} \begin{figure}[h!] \begin{center} \includegraphics[width=10cm]{graph3zz} \end{center} \caption{Diagram for vertices with same x} \label{fig:graph} \end{figure} These vertices are shown in Fig.~\ref{fig:graph}. The $\beta \alpha$ vertex appears first, and the $\delta \gamma$ vertex last. So the operator $d_{\beta}^{\dag}$ in the expression $b_{\alpha}^{\dag}d_{\beta}^{\dag}d_{\gamma}b_{\delta}$ could represent an anti-quark being created from the vacuum followed by its annihilation with $d_{\gamma}$. In terms of our bra-ket representation this would imply that $d_{\beta}^{\dag}$ operates on the ket state, while $d_{\gamma}$ operates on the bra state. Hence, one needs to reverse the order of the anti-particle operators in order to turn this expression into a physically acceptable one. We thus find that the natural representation for this process in the bra-ket notation is the reversed operator $-b_{\alpha}^{\dag} d_{\gamma}d_{\beta}^{\dag}b_{\delta}$ (we added a minus sign to respect the anti-commuting nature of fermion operators). The original order $b_{\alpha}^{\dag}d_{\beta}^{\dag}d_{\gamma}b_{\delta}$ would yield zero on the particle state $b_{\varepsilon}^{\dag}|0\rangle$, so that this physical interaction term would be eliminated. On the other hand, the operator $d_{\gamma}b_{\delta} b_{\alpha}^{\dag}d_{\beta}^{\dag}$, which also appears in the expansion, would generate too many terms if left untreated, namely for $\alpha=\delta$ and $\beta=\gamma$. These contributions do not depend on the character of the initial or final state and thus are unphysical. By similarly reversing the order of the anti-particle operators in this expression, we get the operator $-d_{\beta}^{\dag}b_{\delta} b_{\alpha}^{\dag}d_{\gamma}$, which eliminates the unphysical terms. 
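This diagrammatic argument can be made concrete in a small Jordan-Wigner matrix representation of one particle mode $b$ and one anti-particle mode $d$ (a numerical sketch under our own illustrative conventions, taking $\alpha=\delta$ and $\beta=\gamma$ as single modes): the original ordering annihilates the one-quark state and produces a spurious state-independent vacuum term, while the reversed orderings behave as described above.

```python
import numpy as np

# Jordan-Wigner representation of two fermionic modes on a 4-dim space:
# b = quark mode, d = anti-quark mode.
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator
sz = np.diag([1.0, -1.0])                  # JW string factor
b = np.kron(sm, np.eye(2))
d = np.kron(sz, sm)
bd, dd = b.T, d.T                          # adjoints (real matrices)

vac = np.zeros(4); vac[0] = 1.0            # |0>
one_quark = bd @ vac                       # b†|0>

def me(state, op):
    """Matrix element <state| op |state>."""
    return state @ op @ state

# Original order b† d† d b kills the one-quark state:
original = bd @ dd @ d @ b
# Reversing the anti-particle pair, -b† d d† b, keeps the interaction:
reordered = -bd @ d @ dd @ b
# Conversely, d b b† d† creates a state-independent vacuum term,
# which the reversal -d† b b† d removes:
vacuum_term = d @ b @ bd @ dd
vacuum_term_reordered = -dd @ b @ bd @ d
```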
The physical meaning of this re-ordered operator becomes obvious if we operate on an initial anti-particle state $d_{\gamma}^{\dag}|0\rangle$. In this case the initial anti-particle state $\gamma$ is followed by the creation of an internal particle line $\alpha$ and the final anti-particle line $\beta$ at the first vertex. The $\alpha$ particle line changes to $\delta$ and meets the initial anti-particle $\gamma$ at the second vertex. Hence, one can only assign a physical meaning to these single-variable diagrams if the order of the anti-particle operators is reversed. Of course, this illustration deals with a very simple diagram. Much more elaborate diagrams can occur, as was the case when we tried to solve the QCD operator field equations for quarks self-consistently \cite{1GrebenQuark}. There, long (even infinite) series of operators for a single space-time variable occurred, and one had to decide how to deal with such long sequences. However, an exact operator solution of the field equations could be constructed once the $\mathbb{R}$-product was implemented. This product ensured that the important physical interactions were retained, while the prodigious unphysical interactions, which only lead to infinities, were eliminated, just as they were in the example above. The exact operator solution allowed the reduction of the field equations to a finite set of differential equations, which in turn led to a localized solution which could be interpreted as a dressed quark. This illustrates the power of, and need for, this new principle in the case of expectation values or self-consistent QFT calculations of this bound-state character. Having explained the basic origin of, and need for, the $\mathbb{R}$-product, we now spell out a number of properties of this product. This will also make it plausible why most QFT calculations could be so successful despite the incompleteness of the standard theory. 
We will discuss later how these properties and results can be applied to the boson case. \begin{enumerate} \item \label{1} The general rule for applying the $\mathbb{R}$-product is that for a product of particle and anti-particle operators all belonging to the \emph{same} space-time point we have: \begin{equation} \label{eq:definition} \langle f|\mathbb{R}[b_1 \cdots b_n d_1 \cdots d_m]|i\rangle =(-1)^{m(m-1)/2}\langle f| b_1 \cdots b_n d_m \cdots d_1|i\rangle , \end{equation} where the final expression can be used as a standard operator product without any reference to the space-time variable the operators belong to. Whether the operators in the chain are creation or annihilation operators is immaterial for the application of this rule (this is the reason for not displaying the creation or annihilation character of these operators in this equation). \item \label{2} To emphasize the importance of the link to the space-time variable we give an example of a mixed expression in which the linked space-time point appears as a (silent) label: \begin{eqnarray} \label{eq:mixed} \mathbb{R}[d^{(x)}_1 d^{(y)}_2d^{(x)}_3 d^{(y)}_4]= \mathbb{R}[ d^{(x)}_1 \left\{ d^{(y)}_2,d^{(x)}_3\right\} d^{(y)}_4]- \mathbb{R}[d^{(x)}_1 d^{(x)}_3 d^{(y)}_2 d^{(y)}_4]= \nonumber \\ =\left\{ d_2,d_3 \right\} d_1 d_4- \mathbb{R}[d^{(x)}_1 d^{(x)}_3 ]\mathbb{R}[d^{(y)}_2 d^{(y)}_4]= \left\{ d_2,d_3 \right\} d_1 d_4 -d_3 d_1 d_4 d_2 \end{eqnarray} where we used the fact that the anti-commutator $\left\{ d^{(y)}_2,d^{(x)}_3\right\}$ is not subjected to the $\mathbb{R}$-product and can be considered a c-number, so that it can be taken out of the $\mathbb{R}$-product. The $\mathbb{R}$-product prescription can be dropped as soon as no two operators inside it share the same space-time coordinate, or when all operators are particle operators. 
One can also factorize the $\mathbb{R}$-product into separate $\mathbb{R}$-products for each space-time variable, provided the operators are already ordered according to space-time variable. Once the $\mathbb{R}$-product has been carried out the space-time labels become redundant and can be dropped. Had we applied the same reduction without the $\mathbb{R}$-product, we would have obtained $\left\{ d_2,d_3 \right\} d_1 d_4 -d_1 d_3 d_2 d_4 $. \item \label{3} For a common physical expression like $\mathbb{R}[d^{(x)}_1 d^{(x)\dag}_3 d^{(y)}_2 d^{(y)\dag}_4]$ the result is $d^{\dag}_3 d_1 d^{\dag}_4 d_2 $, i.e. the same as if one had started with the normal ordered expression $:d^{(x)}_1 d^{(x)\dag}_3:$ $:d^{(y)}_2 d^{(y)\dag}_4:= d^{\dag}_3 d_1 d^{\dag}_4 d_2 $. The equivalence of these prescriptions for such common physical expressions explains why the normal ordered Lagrangian yields correct results in most cases. Notice that if we apply the normal product we need to specify the nature of the operator (creation or annihilation). This accentuates the very different philosophies of the two prescriptions. \item \label{4} Under point (\ref{2}) we illustrated the use of the anti-commutator to reduce complex expressions of a mixed form. The anti-commutator referred to operators belonging to different space-time coordinates and could be treated as a c-number. 
In the case of a single space-time variable one can also apply such a reduction, however, the result is rather surprising: \begin{eqnarray} \label{eq:commutator} \mathbb{R}[d_1 \cdots \{d_p,d_{p+1}\}\cdots d_m] \equiv \mathbb{R}[d_1 \cdots d_p d_{p+1}\cdots d_m+d_1 \cdots d_{p+1} d_p \cdots d_m] \nonumber \\ =(-1)^{m(m-1)/2} \left[d_m \cdots d_{p+1} d_p \cdots d_1+ d_m \cdots d_p d_{p+1}\cdots d_1\right] \nonumber \\ =(-1)^{m(m-1)/2} d_m \cdots \{d_p, d_{p+1}\} \cdots d_1 \nonumber \\ =\{d_p, d_{p+1}\}(-1)^{m(m-1)/2}(-1)^{(m-2)(m-3)/2}\mathbb{R}[d_1 \cdots d_{p-1}d_{p+2}\cdots d_m] \nonumber \\ =-\{d_p, d_{p+1}\}\mathbb{R}[d_1 \cdots d_{p-1}d_{p+2}\cdots d_m] \end{eqnarray} So the anti-commutator between anti-particle operators linked to the same space-time variable inside an $\mathbb{R}$-product can still be treated as a c-number, but its value features an extra minus sign. In the self-consistent QFT calculations of dressed quarks \cite{1GrebenQuark} we first discovered the need for such minus signs for intermediate anti-particle states. Without these minus signs the equations of motion took on an ugly and unmanageable form; with them they displayed a high degree of symmetry and elegance, and allowed an analytic solution, despite the highly non-linear and strongly coupled nature of the equations. For a long time we suspected a sign error somewhere; eventually, however, we found that the needed minus signs could be explained by the identity above. This anti-commutator property is an important tool in the solution of the operator field equations, since it can be applied directly to these equations in $\mathbb{R}$-product form. This facilitates the reduction of these operator equations to manageable differential equations. In perturbative QFT calculations one usually encounters only mixed anti-commutators like $\left\{ d^{(y)}_2,d^{(x)}_3\right\}$ which do not give rise to the extra minus sign, so that the standard techniques remain valid. 
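The extra minus sign in the last step of Eq. (\ref{eq:commutator}) is a parity fact: $m(m-1)/2+(m-2)(m-3)/2=m^2-3m+3$, which is odd for every integer $m$. A two-line check (function names are our own, purely illustrative):

```python
def reversal_sign(m):
    """(-1)^{m(m-1)/2}: sign picked up when the R-product reverses
    a chain of m anti-particle operators."""
    return (-1) ** (m * (m - 1) // 2)

def anticommutator_sign(m):
    """Combined sign (-1)^{m(m-1)/2} (-1)^{(m-2)(m-3)/2} appearing when
    an anti-commutator of two adjacent operators is pulled out of an
    R-product of m operators (m >= 2)."""
    return reversal_sign(m) * reversal_sign(m - 2)
```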
\item \label{5} The $\mathbb{R}$-product needs to be applied between the bra and ket vector. So we no longer define the Hamiltonian or Lagrangian itself as normal (or $\mathbb{R}$) ordered, since there may be a chain of operators (all referring to the same space-time point) \emph{between} the ket and bra state vector, and the $\mathbb{R}$-product must be applied to the whole matrix element. So the Lagrangian, Hamiltonian or currents should be defined without any further (normal ordering) prescription. \item \label{6} Because of the special role of the bra and ket vector, one cannot simply insert bra and ket vectors inside an existing QFT expression. In particular a closure expansion like: \begin{equation} \label{eq:false} \left \langle f|\mathbb{R}[d_1d_2\cdots d_m]|i\right \rangle = \sum_j\left \langle f|\mathbb{R}[d_1d_2\cdots d_p]|j\right \rangle \left \langle j|\mathbb{R}[d_{p+1} \cdots d_m]|i\right \rangle \end{equation} (where the states $\left |j\right \rangle $ form a complete set) is no longer valid in general. Naturally, there are enough circumstances where the insertion works, for example if there are only particle operators or if all anti-particle operators belong to different space-time coordinates (in which case the $\mathbb{R}$-product can be omitted and the resulting chain of operators can be handled in the standard way). But we can no longer rely blindly on this expansion as a basis for general proofs. If one wants to use such an insertion on an anti-particle expression then one should first carry out the $\mathbb{R}$-product, and only then apply closure. Afterwards one can revert to the $\mathbb{R}$-product expression. In this case this would lead to: \begin{eqnarray} \label{eq:true} \left \langle f|\mathbb{R}[d_1d_2\cdots d_m]|i\right \rangle \nonumber \\ =(-1)^{p(m-1)} \sum_j\left \langle f|\mathbb{R}[d_{p+1} \cdots d_m]|j\right \rangle \left \langle j|\mathbb{R}[d_1 \cdots d_p]|i\right \rangle. 
\end{eqnarray} In practice this expansion will hardly be useful, as closure will usually be applied in cases where the operators belong to different space-time variables. \item \label{7} The fact that many vacuum matrix elements now yield zero does not prevent the creation of particles from the vacuum (vacuum fluctuations): \begin{equation} \label{eq:quantum fluctuation} \langle f|\mathbb{R}[b^{(x)\dag}_1 d^{(x)\dag}_2]|0\rangle \neq 0 \end{equation} where $\langle f|$ is a particle-anti-particle state. \item \label{8} In higher order Feynman diagrams long chains of operators might appear. However, since the operators involved usually refer to different space-time variables, the $\mathbb{R}$-product usually does not play a role or has the same effect as the ad hoc normal product. This is one reason why the incompleteness of standard field theory has gone unnoticed in perturbative calculations. \end{enumerate} \section{A fermionic expansion of the boson field} The ad hoc use of normal ordering is usually introduced for scalar fields, as was already noted in the introduction. Since the key to the resolution of the infinity problem in the fermionic case lay in the application of the $\mathbb{R}$-product to anti-particle operators, we can expect that a similar solution would work for bosons. However, in order to apply the $\mathbb{R}$-product to boson fields one must first expand these fields in terms of fermionic operators. While in nuclear structure calculations fermionic expansions of bosons are common (see for example an old paper of ours \cite{Grebenpauli}), in QFT this seems like a drastic step, given that the boson fields are regarded as very basic and fundamental. However, it is mainly the operator algebra that is modified; after these operations have been carried out, the reduced boson and fermion field entities again act to a large extent as elementary boson and fermion fields. 
So the fermionization of bosons is not as dramatic a step as one might have anticipated. This will be confirmed later in this section, where we demonstrate that the boson field propagator retains a conventional form under the fermionic expansion. Boson fields, and in particular scalar fields, are generally considered simpler than fermion fields, and with the commutators being analogous to the classical Poisson brackets, these fields are often used to introduce field quantization, before the supposedly \emph{more involved} case of Dirac fields is entertained. The current discussion turns this picture around, as it suggests that the truly fundamental fields are the fermionic fields, which have no clear analogy in classical physics. The boson fields can only be correctly understood after they are expressed in terms of fermionic operators, and so the usual analogies with classical physics are misleading at best. As we will demonstrate below, this fermionic representation of bosons ensures that the usual infinite terms in the v.e.v. do not occur. The vanishing of these vacuum terms implies that the huge discrepancy between theory and observation of the cosmological constant is no longer present. Naturally, one still has to show in more detail that the fermionization of boson fields leads to a consistent theory of bosons and is able to reproduce the successful quantitative results obtained in the standard formulation. The upcoming discussion will also address this point. In the self-consistent bound-state QFT calculations \cite{1GrebenQuark}, the fermionic expansion of the boson fields was dictated uniquely by the fermionic source term in the quantized boson field equations. These equations automatically ensured that the boson fields had the correct structure and symmetries. The free boson field satisfies a source-less field equation, so one must use other constraints and considerations to determine the nature of a fermionic expansion. 
After an in-depth analysis we found that to lowest order the following expansion satisfies the necessary demands: \begin{eqnarray} \label{eq:boson field} A^\mu_a (x)=\sqrt{\frac{1}{8SV}}\sum_{\alpha,\beta} \int\frac{d\vec{p}}{\sqrt{2p_0}} \left[\left(\bar{u}_{\alpha,\frac{\vec{p}}{2}}\gamma^{\mu} \mathbb{O}_a v_{\beta,\frac{\vec{p}}{2}} \right) e^{ipx} b_{\alpha,\frac{\vec{p}}{2}}^{\dag}d_{\beta,\frac{\vec{p}}{2}}^{\dag}\right. \nonumber \\ +\left.\left(\bar{v}_{\alpha,\frac{\vec{p}}{2}} \gamma^{\mu} \mathbb{O}_a u_{\beta,\frac{\vec{p}}{2}}\right) e^{-ipx} d_{\alpha,\frac{\vec{p}}{2}}b_{\beta,\frac{\vec{p}}{2}}\right] \nonumber \\ +C\sum_{\alpha,\beta} \int d\vec{p} \left[\left(\bar{u}_{\alpha,\frac{\vec{p}}{2}} \gamma^{\mu} \mathbb{O}_a u_{\beta,\frac{\vec{p}}{2}}\right) b_{\alpha,\frac{\vec{p}}{2}}^{\dag}b_{\beta,\frac{\vec{p}}{2}}\right. +\left.\left(\bar{v}_{\alpha,\frac{\vec{p}}{2}} \gamma^{\mu} \mathbb{O}_a v_{\beta,\frac{\vec{p}}{2}}\right) d_{\alpha,\frac{\vec{p}}{2}}d_{\beta,\frac{\vec{p}}{2}}^{\dag}\right] \end{eqnarray} In this equation $u(v)$ are the free particle (anti-particle) spinors with the discrete quantum number $\beta$, which collectively represents quantum numbers like spin, isospin and colour. The factor $8=2^3$ is a geometric integration constant due to the boson momentum being composed of two identical contributions, one from the particle and one from the anti-particle. The inverse volume factor is needed for the proper normalization of the bilinear expansion. The factor $S$ counts the number of discrete states. The second term is constant and as such represents a new type of contribution to the boson field made possible by the fermionic representation. We will discuss a possible role for this term when we discuss the Higgs field. In the self-consistent quark \emph{bound-state} calculation \cite{1GrebenQuark} terms with this operator structure play an essential role and are not constant. 
The matrix elements of the free spinors can be evaluated and we can write more explicitly: \begin{eqnarray} \label{eq:boson explicit} A^\mu_a (x)=\sqrt{\frac{1}{8SV}}\sum_{\alpha,\beta} \int\frac{d\vec{p}}{\sqrt{2p_0}} \left[e^{ipx} b_{\alpha,\frac{\vec{p}}{2}}^{\dag}d_{\beta,\frac{\vec{p}}{2}}^{\dag} + e^{-ipx} d_{\alpha,\frac{\vec{p}}{2}}b_{\beta,\frac{\vec{p}}{2}}\right] \nonumber \\ \times ({\mathbb{O}_a})_{\alpha\beta}\left[ \frac{p^\mu}{2m p_0}\vec{\sigma} \bullet \vec{p}+ \left( \begin{array}{c} 0\\ \vec{\sigma}-\hat{p}(\vec{\sigma}\bullet \hat{p})\\ \end{array} \right)^\mu +\frac{2m}{p_0} \left( \begin{array}{c} 0\\ \hat{p}(\vec{\sigma}\bullet \hat{p})\\ \end{array} \right)^\mu \right]_{\alpha \beta} \nonumber \\ +\frac{C}{m}\sum_{\alpha,\beta} \int d\vec{p}p^{\mu}({\mathbb{O}_a})_{\alpha\beta}{\mathbb{\delta}}_{\alpha\beta}^{spin} \left[b_{\alpha,\frac{\vec{p}}{2}}^{\dag}b_{\beta,\frac{\vec{p}}{2}} +d_{\alpha,\frac{\vec{p}}{2}}d_{\beta,\frac{\vec{p}}{2}}^{\dag}\right] \end{eqnarray} The fermion field in which we expand the boson field should have all the quantum numbers $\beta$ needed to calculate the matrix element $({\mathbb{O}_a})_{\alpha\beta}$ for all known interactions, i.e. they should be quark spinors. As we want to describe massless photons and gluons, these quarks should be bare and massless, so that the parameter $m$ is infinitesimal and should be taken in the limit $m\downarrow 0$ once matrix elements are calculated. Slightly different expressions hold for the scalar Higgs field, which will be discussed at the end of this section. If we calculate the boson vacuum energy we obtain operator products of the following form (the coefficients are irrelevant for our current considerations): \begin{eqnarray} \label{eq:fermion energy} \langle 0| \mathbb{R}\left\{\left[\exp(ipx) b_{\alpha,\frac{\vec{p}}{2}}^{\dag}d_{\beta,\frac{\vec{p}}{2}}^{\dag} + \exp(-ipx) d_{\alpha,\frac{\vec{p}}{2}}b_{\beta,\frac{\vec{p}}{2}}\right] \right. 
\nonumber \\ \left.\left[\exp(iqx) b_{\gamma,\frac{\vec{q}}{2}}^{\dag}d_{\delta,\frac{\vec{q}}{2}}^{\dag} + \exp(-iqx) d_{\gamma,\frac{\vec{q}}{2}}b_{\delta,\frac{\vec{q}}{2}}\right]\right\} |0\rangle \nonumber \\ =\langle 0|\left[e^{ i(p-q)x}\mathbb{R}\{b_{\alpha,\frac{\vec{p}}{2}}^{\dag}d_{\beta,\frac{\vec{p}}{2}}^{\dag} d_{\gamma,\frac{\vec{q}}{2}}b_{\delta,\frac{\vec{q}}{2}}\} +e^{ -i(p-q)x}\mathbb{R}\{d_{\alpha,\frac{\vec{p}}{2}}b_{\beta,\frac{\vec{p}}{2}} b_{\delta,\frac{\vec{q}}{2}}^{\dag}d_{\gamma,\frac{\vec{q}}{2}}^{\dag}\}\right]|0\rangle \nonumber \\ =-e^{ i(p-q)x}\langle 0|b_{\alpha,\frac{\vec{p}}{2}}^{\dag} d_{\gamma,\frac{\vec{q}}{2}}d_{\beta,\frac{\vec{p}}{2}}^{\dag} b_{\delta,\frac{\vec{q}}{2}}|0\rangle -e^{ -i(p-q)x}\langle 0|d_{\gamma,\frac{\vec{q}}{2}}^{\dag}b_{\beta,\frac{\vec{p}}{2}} b_{\delta,\frac{\vec{q}}{2}}^{\dag}d_{\alpha,\frac{\vec{p}}{2}}|0\rangle \end{eqnarray} The final result displays a very pleasing duality and symmetry. The first term has an anti-particle creation operator on the right, but it yields zero because of the presence of the particle annihilation operator. The second term has a particle creation operator on the right, but it yields zero because of the presence of the anti-particle annihilation operator. Hence, both contributions are zero for what one could call complementary reasons. This must be contrasted with the result for the normal boson representation, where the combination $a^\dag a+a a^\dag$ appears symmetric, but is not: the first term yields zero, while the last term yields a finite (and after integration infinite) contribution to the v.e.v. The consequence of the fermionic representation is that the bosonic quantum contribution to the cosmological constant also vanishes, thereby resolving the biggest discrepancy in modern physics (see also the discussion in \cite{GrebenCC}). 
The cosmological constant is thus not determined by QFT processes but must be seen as a constant of Nature, whose presence is a direct consequence of the imposition of the symmetries of general relativity. Its value must be determined by (cosmological) observations. This interpretation of the nature of the cosmological constant also underlies our conformal cosmological theory \cite{GRcosmology}. The elimination of the vacuum energy, better known as the zero-point energy, seems to conflict with phenomena like the Casimir effect. However, Jaffe has shown that such effects can also be explained through regular QFT calculations and do not necessarily require the presence of the zero-point vacuum energy \cite{Jaffe}. The fact that the v.e.v. of an expression like $A_{\mu}(x)A_{\nu}(x)$ (we suppress indices and additional factors which are irrelevant for our argument) vanishes in our approach may lead to the false impression that our theory is in conflict with established results that feature non-zero v.e.v.'s, such as condensates. The reason for this apparent discrepancy is that many of these so-called v.e.v.'s are derived from limiting procedures involving expressions that originally have distinct space-time variables, to which the $\mathbb{R}$-product does not apply. For example, in the derivation of the Gell-Mann-Oakes-Renner relation \cite{Gell-Mann} one takes the limit $x\rightarrow y$ in an expression like $\langle 0|[A_{\mu}(x),A_{\nu}(y)]|0\rangle $, where $A$ is a pseudo-vector amplitude. This procedure leads to matrix elements \begin{eqnarray} \delta^{(4)} (x-y)\langle 0|\bar{u}(x)u(y)|0\rangle \equiv\delta^{(4)}(x-y)\langle 0|\bar{u}(x)u(x)|0\rangle , \end{eqnarray} where $u$ is the quark spinor field. It is then stated that the v.e.v. of $\bar{u}u$ is non-zero and can be identified as the pion condensate $\langle 0|\bar{u}u|0 \rangle $. 
It is clear that under the applied limiting procedure this condensate cannot be identified with the physical vacuum matrix element $\langle 0|\mathbb{R}\{\bar{u}(x)u(x)\}|0\rangle$, as the latter is zero. So: \begin{eqnarray} \lim_{x\rightarrow y}\langle 0|\bar{u}(x)u(y)|0\rangle \neq\langle 0|\mathbb{R}\{\bar{u}(x)u(x)\}|0\rangle \ . \end{eqnarray} This does not mean that the matrix element $\langle 0|\bar{u}(x)u(x)|0\rangle$ without implied $\mathbb{R}$-product is not useful: it is clear that in these derivations this non-zero matrix element plays a significant role, and therefore is a useful intermediate theoretical expression. Because most standard QFT calculations are based on Feynman diagrams with distinct space-time points, concepts like condensates are not affected by our $\mathbb{R}$-product, and the vacuum matrix elements based on this limiting procedure retain their usefulness, although denoting them as v.e.v.'s is misleading. Let us now discuss some of the properties of the new representation for boson fields and indicate where it has advantages over the standard representation. For the electromagnetic field we must replace the matrix element $({\mathbb{O}_a})_{\alpha\beta}$ by $\delta_{\alpha\beta}$ in Eq. (\ref{eq:boson explicit}). 
\begin{comment} \begin{eqnarray} \label{eq:boson} A^\mu (x)=\sqrt{\frac{1}{8SV}}\sum_{\alpha,\beta} \int\frac{d\vec{p}}{\sqrt{2p_0}} \left[e^{ipx} b_{\alpha,\frac{\vec{p}}{2}}^{\dag}d_{\beta,\frac{\vec{p}}{2}}^{\dag} + e^{-ipx} d_{\alpha,\frac{\vec{p}}{2}}b_{\beta,\frac{\vec{p}}{2}}\right] \nonumber \\ \times\left[ \frac{p^\mu}{2m p_0}\vec{\sigma} \bullet \vec{p}+ \left( \begin{array}{c} 0\\ \vec{\sigma}-\hat{p}(\vec{\sigma}\bullet \hat{p})\\ \end{array} \right)^\mu +\frac{2m}{p_0} \left( \begin{array}{c} 0\\ \hat{p}(\vec{\sigma}\bullet \hat{p})\\ \end{array} \right)^\mu \right]_{\alpha \beta} \nonumber \\ +C^\prime\sum_{\alpha} \int d\vec{p}p^{\mu} \left[b_{\alpha,\frac{\vec{p}}{2}}^{\dag}b_{\alpha,\frac{\vec{p}}{2}} +d_{\alpha,\frac{\vec{p}}{2}}d_{\alpha,\frac{\vec{p}}{2}}^{\dag}\right], \end{eqnarray} where $C^\prime=C/m$ is dimensionless. \end{comment} The field automatically satisfies the condition $\partial_\mu A^\mu_a (x)=0$ as an operator equation (as do all boson fields defined by Eq. (\ref{eq:boson explicit})). Hence we do not need to limit ourselves to a physical Hilbert space to impose this relationship, as is done in theories like the Gupta-Bleuler model (\cite{6Gupta}, \cite{7Bleuler}). The standard way to quantize fields is to decompose them into normal modes, where each field component is associated with a different elementary particle. This is clearly the case in our representation, as each mode corresponds to (combinations of) different quark states. However, in the standard representation this implies that the four components of $A_{\mu}$ should correspond to four different polarization states. But the photon only has two polarization degrees of freedom, so this creates a problem. Several techniques have been introduced to fix these problems, among them so-called gauge fixing, which requires the addition of a non-gauge-invariant term to the Lagrangian. 
These problems also affect the Hilbert space which no longer has the character of a quantum state vector space. In our fermionic representation such problems do not appear. Also, we do not have awkward commutation rules between the boson operators $a_\mu(\vec{p})$ and $a_\mu^{\dag}(\vec{p})$, which have the wrong sign for $\mu=0$, as all relevant (anti{-}) commutation rules are between fermion field operators for well-defined states. If we calculate the electro-magnetic propagator in this representation we find: \begin{eqnarray} \label{eq:propagator} \langle 0|T\left\{A^{\mu} (x)A^{\nu} (y)\right\}|0\rangle =\frac{1}{(2\pi)^3} \nonumber \\ \times \int \frac{d\vec{p}}{2p_0}\{e^{-ip(x-y)}\Theta(x_0-y_0)+e^{ip(x-y)}\Theta(y_0-x_0)\}\left[\frac{p^{\mu}p^{\nu}}{4m^2}-g^{\mu \nu }\right] \ , \end{eqnarray} i.e. there is no trace of the original fermionic representation. The result is a standard massive propagator in the limit $m^2\downarrow 0$. In this derivation we used the identities $\delta(\vec{p}/2-\vec{q}/2)=8\delta(\vec{p}-\vec{q})$ and $\delta(0)=(2\pi)^{-3}\int d^3x=(2\pi)^{-3}V$. The spin factor is $S=2$ in the current case. \begin{comment} \begin{eqnarray} g^{\mu \nu }\frac{i}{(2\pi)^4}\int d^4 p \frac{e^{-ip(x-y)}}{p^2-4m^2+i\epsilon} \end{eqnarray} plus an extra contribution which does not contribute to physical matrix elements but ensures that $\partial_\mu \langle 0| A^{\mu} (x)A^{\nu} (y))|0\rangle =\partial_\nu \langle 0| A^{\mu} (x)A^{\nu} (y))|0\rangle =0$ as $p^\mu p_\mu=4m^2$ in Eq. (\ref{eq:propagator}). Usually one introduces an extra quadratic term $(\partial_{\mu}A^{\mu})^2$ in the Lagrangian to fix the gauge. The propagator then features the extra term $\frac{p^{\mu}p^{\nu}}{p^2}$, rather than the term $\frac{p^{\mu}p^{\nu}}{4m^2}$ in Eq. (\ref{eq:propagator}). \end{comment} Finally, we want to discuss the consequences of the higher order terms in the boson field equations for non-Abelean theories such as SU(2) or SU(3). 
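Before moving on, note that the bracket $[p^{\mu}p^{\nu}/4m^{2}-g^{\mu\nu}]$ in Eq. (\ref{eq:propagator}) is exactly the polarization sum of a massive vector particle of mass $M=2m$. The following check (our own numerical illustration, with arbitrary test values) verifies this by summing the outer products of two transverse and one longitudinal polarization vector:

```python
import numpy as np

m = 0.7
M = 2 * m                                 # vector mass appearing in the propagator
p3 = np.array([0.3, -1.1, 0.5])           # arbitrary three-momentum
p0 = np.sqrt(p3 @ p3 + M**2)
p = np.array([p0, *p3])                   # on-shell four-momentum
g = np.diag([1., -1., -1., -1.])          # metric (+,-,-,-)
phat = p3 / np.linalg.norm(p3)

# two transverse and one longitudinal polarization vector
e1 = np.cross(phat, [0., 0., 1.]); e1 /= np.linalg.norm(e1)
e2 = np.cross(phat, e1)
eps = [np.array([0., *e1]),
       np.array([0., *e2]),
       np.array([np.linalg.norm(p3) / M, *(p0 / M * phat)])]

# polarization sum = p^mu p^nu / M^2 - g^{mu nu}
S = sum(np.outer(e, e) for e in eps)
assert np.allclose(S, np.outer(p, p) / M**2 - g)
```

This supports the statement that the result is a standard massive propagator in the limit $m^2\downarrow 0$.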
After each iteration of the operator field equations new terms in the solution are introduced which are higher order in terms of bilinear fermion operators. Surprisingly, one can obtain an exact solution of the free non-linear field equations using a single infinite operator $\Lambda$: \begin{eqnarray} \label{eq:boson general} A^\mu_a (x)=\sqrt{\frac{1}{8SV}}\sum_{\alpha,\beta} \int\frac{d\vec{p}}{\sqrt{2p_0}} \left[e^{ipx} b_{\alpha,\frac{\vec{p}}{2}}^{\dag}\Lambda d_{\beta,\frac{\vec{p}}{2}}^{\dag} + e^{-ipx} d_{\alpha,\frac{\vec{p}}{2}}\Lambda b_{\beta,\frac{\vec{p}}{2}}\right] \nonumber \\ \times ({\mathbb{O}_a})_{\alpha\beta}\left[ \frac{p^\mu}{2m p_0}\vec{\sigma} \bullet \vec{p}+ \left( \begin{array}{c} 0\\ \vec{\sigma}-\hat{p}(\vec{\sigma}\bullet \hat{p})\\ \end{array} \right)^\mu +\frac{2m}{p_0} \left( \begin{array}{c} 0\\ \hat{p}(\vec{\sigma}\bullet \hat{p})\\ \end{array} \right)^\mu \right]_{\alpha \beta} \nonumber \\ +\frac{C}{m}\sum_{\alpha,\beta} \int d\vec{p}p^{\mu}({\mathbb{O}_a})_{\alpha\beta}{\mathbb{\delta}}_{\alpha\beta}^{spin} \left[b_{\alpha,\frac{\vec{p}}{2}}^{\dag}\Lambda b_{\beta,\frac{\vec{p}}{2}} +d_{\alpha,\frac{\vec{p}}{2}}\Lambda d_{\beta,\frac{\vec{p}}{2}}^{\dag}\right] \end{eqnarray} where the operator $\Lambda=\Lambda^{p}\Lambda^{a}=\Lambda^{a}\Lambda^{p}$ is defined as follows: \begin{eqnarray} \label{eq:lambdap} \Lambda^{p}=\lim_{n \to \infty }\Lambda^{p}_n; \ \Lambda^{p}_n=\frac{(1-N^{p})}{1}\frac{(2-N^{p})}{2}\cdots \frac{(n-N^{p})}{n}\equiv \left(\begin{array}{c} n-N^{p}\\ n\\ \end{array}\right) \\ \Lambda^{a}=\lim_{n \to \infty }\Lambda^{a}_n; \ \Lambda^{a}_n=\frac{(1-N^{a})}{1}\frac{(2-N^{a})}{2}\cdots \frac{(n-N^{a})}{n} \equiv \left(\begin{array}{c} n-N^{a}\\ n\\ \end{array}\right) \end{eqnarray} and \begin{equation} \label{eq:N_operator} N^{p}=\sum_{\alpha}\int d^3 p b_{\alpha,\vec{p}}^{\dag} b_{\alpha,\vec{p}}\ ; \qquad N^{a}=-\sum_{\alpha}\int d^3 p d_{\alpha,\vec{p}} d_{\alpha,\vec{p}}^{\dag}\ . 
\end{equation} This is a remarkable result as it constitutes an exact solution to the non-linear field equations. The basic properties used in this derivation are: \begin{eqnarray} \label{eq:identities} b\Lambda^{p}=\Lambda^{p}b^{\dag}=0 \qquad \rm{and} \qquad d^{\dag}\Lambda^{a}=\Lambda^{a}d=0\ . \end{eqnarray} The $\mathbb{R}$-product and the minus sign of the anti-particle anti-commutator Eq.(\ref{eq:commutator}) are absolutely essential for the derivation of this exact solution. This result provides a correction to the operator $\Lambda$ defined in \cite{1GrebenQuark}, which was expressed in terms of products of $(n-N^{p}-N^{a})$, rather than being first factorized in terms of $\Lambda^{p}$ and $\Lambda^{a}$. The particle and anti-particle $\Lambda$-operators look like projection operators, as: \begin{equation} \label{eq:projection operator} \Lambda^{p}\Lambda^{p}=\Lambda^{p};\ \Lambda^{a}\Lambda^{a}=\Lambda^{a}\ , \end{equation} however, $\Lambda^{p}\Lambda^{a}\neq 0$, so these are not ordinary projection operators. Rather, these operators project out one-particle states only. We illustrate this for the case of two-particle ket-states, for which $\Lambda$ yields zero in all possible combinations. 
For a two-particle state we get: \begin{equation} \label{eq:one particle operator} b^{\dag}_\delta\Lambda^{p}b_\varepsilon b^{\dag}_\alpha b^{\dag}_\beta|0\rangle =\delta_{\varepsilon\alpha}b^{\dag}_\delta\Lambda^{p}b^{\dag}_\beta|0\rangle- b^{\dag}_\delta\Lambda^{p}b^{\dag}_\alpha b_\varepsilon b^{\dag}_\beta|0\rangle=0 \ \quad {\rm{as}} \quad \Lambda^{p} b^{\dag}=0\ , \end{equation} while for a two-anti-particle state we get: \begin{eqnarray} \label{eq:one anti particle operator} \mathbb{R}[ d_\delta\Lambda^{a}d^{\dag}_\varepsilon] d^{\dag}_\alpha d^{\dag}_\beta|0\rangle =-d^{\dag}_\varepsilon \mathbb{R}[ \Lambda^{a}]d_\delta d^{\dag}_\alpha d^{\dag}_\beta|0\rangle= \nonumber \\ =-\delta_{\delta \alpha} d^{\dag}_\varepsilon \mathbb{R}[ \Lambda^{a}] d^{\dag}_\beta|0\rangle +d^{\dag}_\varepsilon \mathbb{R}[ \Lambda^{a}] d^{\dag}_\alpha d_\delta d^{\dag}_\beta|0\rangle= \nonumber \\ \equiv -\delta_{\delta \alpha} d^{\dag}_\varepsilon \mathbb{R}[ d^{\dag}_\beta \Lambda^{a}] |0\rangle +d^{\dag}_\varepsilon \mathbb{R}[ d^{\dag}_\alpha\Lambda^{a}] d_\delta d^{\dag}_\beta|0\rangle=0 \quad {\rm{as}} \quad d^{\dag} \Lambda^{a}=0 \ . \end{eqnarray} Finally, for a particle-anti-particle state we get: \begin{eqnarray} \label{eq:boson operation} \mathbb{R}[ d_\delta\Lambda^{a}\Lambda^{p}d^{\dag}_\varepsilon] b^{\dag}_\alpha d^{\dag}_\beta|0\rangle=0 \quad {\rm{as}} \quad \Lambda^{p}b^{\dag}_\alpha=0 \ \nonumber \\ \mathbb{R}[ b^{\dag}_\delta\Lambda^{a}\Lambda^{p}b_\varepsilon] b^{\dag}_\alpha d^{\dag}_\beta|0\rangle=0 \quad {\rm{as}} \quad \mathbb{R}[ \Lambda^{a}]d^{\dag}_\beta=\mathbb{R}[ d^{\dag}_\beta \Lambda^{a}]=0 \ . \end{eqnarray} The one-particle projection property has important consequences, which were already highlighted in \cite{1GrebenQuark} for the bound-state case. For the current scattering case it implies that the (energy) expectation values of non-Abelian boson fields vanish, so that the expectation value differs from the expected value $E=p_0$. 
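The key identities $b\Lambda^{p}=\Lambda^{p}b^{\dag}=0$ and the idempotence of $\Lambda^{p}$ can also be checked numerically. As a sketch (our own finite-dimensional illustration of the particle part only, not part of the paper's formalism), we build two fermionic modes via a Jordan-Wigner construction and form the product in Eq. (\ref{eq:lambdap}), which stabilizes at the projector onto the zero-particle subspace once $n$ exceeds the largest eigenvalue of $N^{p}$:

```python
import numpy as np

# Two fermionic particle modes (Jordan-Wigner on a 4-dimensional Fock space)
a = np.array([[0., 1.], [0., 0.]])          # single-mode annihilator
Z = np.diag([1., -1.])                      # JW sign string
I2 = np.eye(2)
b1 = np.kron(a, I2)
b2 = np.kron(Z, a)
N = b1.conj().T @ b1 + b2.conj().T @ b2     # particle-number operator N^p

# Lambda^p_n = prod_{k=1..n} (k - N^p)/k; stabilizes for n >= max eigenvalue of N^p
L = np.eye(4)
for k in range(1, 3):
    L = L @ (k * np.eye(4) - N) / k

assert np.allclose(L @ L, L)                # idempotent: Lambda^p Lambda^p = Lambda^p
for b in (b1, b2):
    assert np.allclose(b @ L, 0)            # b Lambda^p = 0
    assert np.allclose(L @ b.conj().T, 0)   # Lambda^p b^dag = 0
```

In this toy space $\Lambda^{p}$ comes out as the projector onto the vacuum sector, so any expression of the form $b^{\dag}\Lambda^{p}b$ indeed annihilates all multi-particle states, as in Eq. (\ref{eq:one particle operator}).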
This inconsistency might well imply that bosons associated with non-Abelian fields cannot exist as stable physical particles (apart from the fact that they can decay into lighter particles). For QED (an Abelian theory) the operator $\Lambda$ is not required, as there are no non-linear terms in the free field equations. Hence, for the photon the demand that the energy expectation value equals $\hbar\omega$ does not lead to this inconsistency. However, even there this consistency condition leads to further demands, as the fermionic representation leads to additional unphysical terms in the energy expectation value, which only vanish after further generalizations of the field expansion. This has significant physical consequences which will be discussed in future paper(s). For the scalar Higgs field the constant term has interesting characteristics and a possible physical role. We write this term as follows: \begin{eqnarray} \label{eq:Higgs} \phi^{(c)}(x)=C\sum_{\alpha} \int d\vec{p} \left[b_{\alpha,\frac{\vec{p}}{2}}^{\dag}\Lambda b_{\alpha,\frac{\vec{p}}{2}} -d_{\alpha,\frac{\vec{p}}{2}} \Lambda d_{\alpha ,\frac{\vec{p}}{2}}^{\dag}\right]. \end{eqnarray} If this field is used as an intermediate operator then it simply acts as a constant, as $\phi^{(c)}(x) b_{\alpha,\vec{p}}^{\dag}\Lambda =8C b_{\alpha,\vec{p}}^{\dag}\Lambda $ and $\phi^{(c)}(x) d_{\alpha,\vec{p}}\Lambda =8C d_{\alpha,\vec{p}}\Lambda $. Note that these identities only hold if the operators on the right also refer to the same space-time coordinate $x$. Hence, this constant operator $\phi^{(c)}(x)$ can play the same role as the v.e.v. $\langle 0|\phi |0\rangle$ of the Higgs field in the SM Higgs theory by setting $C=\langle 0|\phi |0\rangle/8$, without leading to the usual disturbing infinite vacuum energy contributions. 
Its other advantage over the standard theory is that one does not have to make an expansion around a classical vacuum expectation value and can maintain the quantum operator character of the field and the field equations. In our opinion classical concepts in QFT should be treated with great care because of the danger of oversimplification. We already saw how the classical analogy with harmonic oscillators was rather misleading. \section{Summary and historical perspective} A new ordering principle, embodied by the $\mathbb{R}$-product, can remove the bias towards particles over anti-particles hidden in the standard formulation of QFT. This $\mathbb{R}$-product operates specifically on QFT expressions which contain multiple interactions sharing the same space-time variable. Its application affects vacuum expectation values such as the vacuum energy which no longer feature infinite contributions. The product also enables self-consistent solutions of field equations for self-interacting bound systems, the dressing of single quarks being the prime example \cite{1GrebenQuark}. To apply this new principle to boson fields these fields must be represented in terms of bilinear fermionic operators. Since both fermionic and bosonic QFT energy vacuum expectation values vanish now, the enormous discrepancy between the theoretical estimate of the cosmological constant and observation is no longer present. The fermionic representation of the boson fields also avoids certain technical problems in the quantization of vector fields and creates the possibility of adding constant components to the boson fields. These constant components can take over the role of the v.e.v. of the Higgs field in the SM, thereby maintaining the operator nature of the Higgs field and avoiding the hybrid SM formulation with classical c-number as well as quantum components in the Higgs field. These results suggest that there is a deeper level below the SM populated by bare massless quarks. 
Both the boson expansion and the bound-state calculations require the underlying bare quarks to be massless; in the boson case because photons and gluons are massless, in the bound-state case as it would be very inelegant to have unexplained dimensionful parameters at this fundamental level. The boson fields then emerge at the SM level, being a result of pointlike particle-anti-particle operator combinations. This picture of a deeper hierarchical level in QFT with fewer elementary particles and parameters is very appealing, but obviously requires further development. Although the average mass of light quarks was predicted very accurately in terms of fundamental constants of Nature \cite{1GrebenQuark}, it is not (yet) clear how this success can be extended to the other generations of quarks. We expect that the Higgs field plays an important role for the higher generations. Our first attempts to incorporate this field in the bound-state calculation have yielded interesting, yet inconclusive, results. The lepton sector is more difficult to model because of the peculiar properties of the weak interactions and the low masses involved. Nonetheless, we hope that the renormalization scheme and the self-consistent bound-state calculations can be stitched together in a consistent scheme, where the bound-state calculations together with the Higgs field furnish the quark and lepton SM masses, while the renormalization procedures can be maintained at the effective SM level. In this paper we also presented some arguments why the new ordering principle took so long to be discovered. There are some specific reasons for this which are worth mentioning, as they also clarify the role of this principle in relation to other developments. Most importantly, despite its important consequences, there are only a few instances where the need for this principle becomes evident, while in many other cases (such as in the boson case) it is well hidden or can be mimicked by ad hoc procedures like the normal ordering prescription. Even in the case where this principle was first discovered \cite{1GrebenQuark}, its need was established indirectly because it could explain the extra minus signs which were required for diagrams involving anti-particles. 
Without these minus signs the formulation became unmanageable, while including them led to a highly symmetric, elegant and solvable set of equations. However, once these minus signs were linked to the new ordering principle, the nature and further consequences of this principle became obvious. One may thus ask why a formulation like this was not developed earlier, as any such study would most certainly have revealed the new principle. In order to answer this question we review some aspects of the analysis in \cite{1GrebenQuark}. In \cite{1GrebenQuark} we developed a self-consistent spatial bound-state formulation within the framework of QCD, where the binding was also described in terms of the underlying QCD theory. This theory was inspired by the MIT quark bag model of nucleons (\cite{Chodos1},\cite{Chodos2}), with the main aim of deriving the confining bag from the non-linear QCD field theory itself, instead of postulating it. The quantum field equations could be solved in terms of the field operators (creation and annihilation operators for the bound wave functions), with two important results for the current context. First, the need for the minus signs mentioned above. Second, after introducing the $\mathbb{R}$-product the field equations yield one exact operator solution which describes a single-particle state, and thus appears to constitute a model of a dressed quark. This suggests that multi-particle bound-state problems cannot really be handled by means of such a spatial QCD formulation with single-variable field equations, so that the original intention to derive a common binding potential in a many-body system is not realizable in such a framework. This might explain why most bound-state methods in QFT have continued to use standard S-matrix tools, whereas in non-relativistic quantum mechanics spatial methods based on the field equations (i.e. the Schr\"odinger equation) are more common. 
Examples are the Bethe-Salpeter equations \cite{Bethe} and the Blankenbecler-Sugar approach \cite{Sugar}. Even the currently popular lattice calculations of nucleons \cite{Fodor} use techniques inspired by S-matrix theory. We may thus conclude that it is difficult to discover the new principle via the popular QFT bound-state theories. In the one case where the principle plays an essential role (the dressing of elementary quarks), standard QFT uses renormalization techniques and renormalization parameters to capture the dressing process, so that the study of this case via more dynamical spatial methods without free parameters is also discouraged. Even in the case where the new principle has a big impact, namely in the cosmological constant problem, an analysis of possible solutions of this problem will not easily guide one towards this new principle. Instead, in supersymmetry one relies on the cancellation between the fermionic and bosonic contributions to the vacuum energy. From our perspective, both contributions are unphysical and vanish individually, so that the imposition of such a cancellation condition does not have any special significance. In keeping with this paper, the SM should be seen as an effective theory, formulated in terms of dressed quarks and bosons that must be represented by pairwise fermion-anti-fermion operators. However, for (nearly) all practical purposes the SM fields can be considered as elementary and pointlike, so the SM has proved to be an extremely effective effective theory. Our finding that (light) dressed quarks have a very small radius of about 8 Planck lengths \cite{1GrebenQuark}, and that the composite nature of bosons in terms of fermions and anti-fermions does not contradict the pointlike elementary nature of boson fields, goes a long way to explaining this effectiveness. 
The implied hierarchical structure of particle physics is another example of the hierarchical structures which abound in Nature and make it susceptible to scientific methods. \section*{References}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\DeclareMathOperator{\dist}{dist}
\renewcommand{\O}{O}
\renewcommand{\i}{i}
\renewcommand{\j}{j}
\newcommand{\ivnum}[1]{{\i^{#1}}}
\newcommand{\jvnum}[1]{{\j^{#1}}}
\newcommand{\ve}[1]{{#1}}
\newcommand{\vcenteredinclude}[1]{\begingroup
  \setbox0=\hbox{\includegraphics[width=3.5cm]{#1}}
  \parbox{\wd0}{\box0}\endgroup}
\makeatletter
\newcommand*{\extendadd}{\mathbin{\mathpalette\extend@add{}}}
\newcommand*{\extend@add}[2]{\ooalign{$\m@th#1\leftrightarrow$\vphantom{$\m@th#1\updownarrow$}\cr\hfil$\m@th#1\updownarrow$\hfil}}
\makeatother

\begin{document}

\title{Multidimensional Butterfly Factorization}

\author{Yingzhou Li$^\sharp$, Haizhao Yang$^*$, Lexing Ying$^{\dagger\sharp}$
\vspace{0.1in}\\
$\dagger$ Department of Mathematics, Stanford University\\
$\sharp$ ICME, Stanford University\\
$*$ Department of Mathematics, Duke University }

\maketitle

\begin{abstract}
This paper introduces the multidimensional butterfly factorization as a data-sparse representation of multidimensional kernel matrices that satisfy the complementary low-rank property. This factorization approximates such a kernel matrix of size $N\times N$ with a product of $\O(\log N)$ sparse matrices, each of which contains $\O(N)$ nonzero entries. We also propose efficient algorithms for constructing this factorization when either (i) a fast algorithm for applying the kernel matrix and its adjoint is available or (ii) every entry of the kernel matrix can be evaluated in $\O(1)$ operations. 
For the kernel matrices of multidimensional Fourier integral operators, for which the complementary low-rank property is not satisfied due to a singularity at the origin, we extend this factorization by combining it with either a polar coordinate transformation or a multiscale decomposition of the integration domain to overcome the singularity. Numerical results are provided to demonstrate the efficiency of the proposed algorithms. \end{abstract} {\bf Keywords.} Data-sparse matrix factorization, operator compression, butterfly algorithm, randomized algorithm, Fourier integral operators. {\bf AMS subject classifications: 44A55, 65R10 and 65T50.} \section{Introduction} \label{sec:intro} \subsection{Problem statement} This paper is concerned with the efficient evaluation of \begin{equation}\label{eq:kernel} u(x) = \sum_{\xi\in \Omega} K(x,\xi)g(\xi),\quad x\in X, \end{equation} where $X$ and $\Omega$ are typically point sets in $\mathbb{R}^d$ for $d\geq 2$, $K(x,\xi)$ is a kernel function that satisfies a complementary low-rank property, $g(\xi)$ is an input function for $\xi\in \Omega$, and $u(x)$ is an output function for $x\in X$. To define this complementary low-rank property for multidimensional kernel matrices, we first assume that without loss of generality there are $N$ points in each point set. In addition, the domains $X$ and $\Omega$ are associated with two hierarchical trees $T_X$ and $T_\Omega$, respectively, where each node of these trees represents a subdomain of $X$ or $\Omega$. Both $T_X$ and $T_\Omega$ are assumed to have $L=\O(\log N)$ levels with $X$ and $\Omega$ being the roots at level $0$. The computation of \eqref{eq:kernel} is essentially a matrix vector multiplication \[ u = K g, \] where $K := (K(x,\xi))_{x\in X,\xi\in\Omega}$, $g:=(g(\xi))_{\xi\in\Omega}$, and $u := (u(x))_{x\in X}$ by a slight abuse of notations. 
The matrix $K$ is said to satisfy the {\em complementary low-rank property} if for any level $\ell$ between $0$ and $L$ and for any node $A$ on the $\ell$-th level of $T_X$ and any node $B$ on the $(L-\ell)$-th level of $T_\Omega$, the submatrix $K_{A,B}:=(K(x_i,\xi_j))_{x_i\in A, \xi_j\in B}$ is numerically low-rank with the rank bounded by a uniform constant independent of $N$. In most applications, this numerical rank is bounded polynomially in $\log(1/\epsilon)$ for a given precision $\epsilon$. A well-known example of such a matrix is the multidimensional Fourier transform matrix. For a complementary low-rank kernel matrix $K$, the {\em butterfly algorithm} developed in \cite{fio07,fio09,mmd,1dba,wavemoth} enables one to evaluate the matrix-vector multiplication in $O(N\log N)$ operations. More recently in \cite{1dbf}, we introduced the {\em butterfly factorization} as a data-sparse multiplicative factorization of the kernel matrix $K$ in the one-dimensional case ($d=1$): \begin{equation}\label{eq:GBF} K\approx U^LG^{L-1}\cdots G^{L/2}M^{L/2} \left(H^{L/2}\right)^*\cdots\left(H^{L-1}\right)^*\left(V^L\right)^*, \end{equation} where the depth $L=\O(\log N)$ is assumed to be an even number and every factor in \eqref{eq:GBF} is a sparse matrix with $\O(N)$ nonzero entries. Here the superscript of a matrix denotes the level of the factor rather than the power of a matrix. This factorization requires $\O(N\log N)$ memory and applying \eqref{eq:GBF} to any vector takes $\O(N\log N)$ operations once the factorization is computed. In fact, one can view the factorization in \eqref{eq:GBF} as a compact algebraic representation of the butterfly algorithm. 
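To make the complementary low-rank property concrete, the following short experiment (our illustration, not part of the algorithm; the grid sizes, tolerance, and variable names are our own choices) forms one middle-level submatrix of the two-dimensional Fourier kernel $K(x,\xi)=e^{2\pi \mathrm{i}\, x\cdot\xi}$ on uniform grids and measures its numerical rank:

```python
import numpy as np

# Build one middle-level block of the 2D Fourier kernel K(x, xi) = exp(2*pi*i x.xi).
# All sizes and names here are our own choices for illustration.
n, L = 256, 8                  # n grid points per dimension, N = n^2, L = log2(n)
ell = L // 2                   # middle level of the two quadtrees
wA = n // 2**ell               # points per side of a T_X node at level ell
wB = n // 2**(L - ell)         # points per side of a T_Omega node at level L - ell

ax = np.arange(wA) / n         # x-coordinates of one corner node A of T_X
bx = np.arange(wB) - n // 2    # xi-coordinates of one corner node B of T_Omega
x1, x2 = (g.ravel() for g in np.meshgrid(ax, ax, indexing="ij"))
k1, k2 = (g.ravel() for g in np.meshgrid(bx, bx, indexing="ij"))

# K_{A,B}: rows run over the wA^2 points of A, columns over the wB^2 points of B
K_AB = np.exp(2j * np.pi * (np.outer(x1, k1) + np.outer(x2, k2)))

s = np.linalg.svd(K_AB, compute_uv=False)
eps_rank = int(np.sum(s / s[0] > 1e-6))
print(K_AB.shape, eps_rank)    # eps-rank is well below the block size wA**2 = 256
```

Even though this block is $256\times 256$, its $\epsilon$-rank is markedly smaller, consistent with the property stated above.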
In \cite{1dbf}, we also introduced algorithms for constructing the butterfly factorization for the following two cases: \begin{enumerate}[(i)] \item A black-box routine for rapidly computing $Kg$ and $K^*g$ in $\O(N\log N)$ operations is available; \item A routine for evaluating any entry of $K$ in $\O(1)$ operations is given. \end{enumerate} In this paper, we turn to the butterfly factorization for multidimensional problems and describe how to construct it in these two cases. When the kernel strictly satisfies the complementary low-rank property (e.g., the non-uniform FFT), the algorithms proposed in \cite{1dbf} can be generalized in a rather straightforward way. This is presented in detail in Section \ref{sec:gbf}. However, many important multidimensional kernel matrices fail to satisfy the complementary low-rank property in the entire domain $X\times\Omega$. Among them, the most significant example is probably the Fourier integral operator, which typically has a singularity at the origin $\xi=0$ in the $\Omega$ domain. For such an example, existing butterfly algorithms provide two solutions. \begin{itemize} \item The first one, proposed in \cite{fio09}, removes the singularity by applying a polar transformation that maps the domain $\Omega$ into a new domain $P$. After this transformation, the new kernel matrix defined on $X\times P$ satisfies the complementary low-rank property and one can then apply the butterfly factorization in the $X$ and $P$ domains instead. This is discussed in detail in Section \ref{sec:pbf} and we refer to this algorithm as the {\em polar butterfly factorization} (PBF). \item The second solution, proposed in \cite{mba}, is based on the observation that, though not on the entire $\Omega$ domain, the complementary low-rank property holds in subdomains of $\Omega$ that are well separated from the origin in a certain sense. 
For example, one can start by partitioning the domain $\Omega$ into a disjoint union of a small square $\Omega_C$ covering $\xi=0$ and a sequence of dyadic coronas $\Omega_t$, i.e., $\Omega = \Omega_C \cup \left(\cup_t\Omega_t\right)$. Accordingly, one can rewrite the kernel evaluation \eqref{eq:kernel} as a summation of the form \begin{equation} \label{eq:Ksum} K = K_C R_C +\sum_t K_t R_t, \end{equation} where $K_C$ and $K_t$ are the kernel matrices restricted to $X\times \Omega_C$ and $X\times \Omega_t$, and $R_C$ and $R_t$ are the operators restricting the input functions defined on $\Omega$ to the subdomains $\Omega_C$ and $\Omega_t$, respectively. In fact, each kernel $K_t$ satisfies the complementary low-rank property and hence one can approximate it with the multidimensional butterfly factorization in Section \ref{sec:gbf}. Combining the factorizations for all $K_t$ with \eqref{eq:Ksum} results in the {\em multiscale butterfly factorization} (MBF) for the entire matrix $K$; this will be discussed in detail in Section \ref{sec:mbf}. \end{itemize} In order to simplify the presentation, this paper focuses on the two-dimensional case ($d=2$). Furthermore, we assume that the points in $X$ and $\Omega$ are uniformly distributed in both domains as follows: \begin{equation} \label{eq:X} X = \left\{ x = \left( \frac{n_1}{n}, \frac{n_2}{n}\right), 0 \leq n_1,n_2 < n\text{ with } n_1, n_2 \in \mathbb{Z} \right\} \end{equation} and \begin{equation} \label{eq:Omega} \Omega = \left\{ \xi = (n_1, n_2), - \frac{n}{2} \leq n_1,n_2 < \frac{n}{2}\text{ with } n_1, n_2 \in \mathbb{Z} \right\}, \end{equation} where $n$ is the number of points in each dimension and $N=n^2$. This is the standard setup for two-dimensional Fourier transforms and FIOs. \subsection{Related work} \label{sec:moti} For a complementary low-rank kernel matrix $K$, the butterfly algorithm provides an efficient way for evaluating \eqref{eq:kernel}. 
It was initially proposed in \cite{mmd} and further developed in \cite{fio09,hu,mba,1dba,fio13,wavemoth,sht,sft}. One can roughly classify the existing butterfly algorithms into two groups. \begin{itemize} \item The first group (e.g. \cite{1dba,wavemoth,sht}) requires a precomputation stage for constructing the low-rank approximations of the numerically low-rank submatrices of \eqref{eq:kernel}. This precomputation stage typically takes $\O(N^2)$ operations and uses $\O(N\log N)$ memory. Once the precomputation is done, the evaluation of \eqref{eq:kernel} can be carried out in $\O(N\log N)$ operations. \item The second group (e.g. \cite{fio09,hu,mba,fio13}) assumes prior knowledge of analytic properties of the kernel function. Under such analytic assumptions, one avoids precomputation by writing down the low-rank approximations for the numerically low-rank submatrices explicitly. These algorithms typically evaluate \eqref{eq:kernel} with $\O(N\log N)$ operations. \end{itemize} In a certain sense, the algorithms proposed in this paper can be viewed as a compromise between these two groups. On the one hand, they make rather weak assumptions about the kernel: instead of requiring explicit knowledge of the kernel function as in the second group, we only assume that either (i) a fast matrix-vector multiplication routine or (ii) a kernel matrix sampling routine is available. On the other hand, these new algorithms reduce the precomputation cost to $O(N^{3/2} \log N)$, as compared to the quadratic complexity of the first group. The multidimensional butterfly factorization can also be viewed as a process of recovering a structured matrix via either sampling or matrix-vector multiplication. There has been a sequence of articles in this line of research. For example, we refer to \cite{randsvd,Rec2,Rec3} for recovering numerically low-rank matrices, \cite{HSSMatrix} for recovering {HSS} matrices, and \cite{HMatrix} for recovering $\mathcal{H}$-matrices. 
This paper generalizes the work of \cite{1dbf} by considering complementary low-rank matrices coming from multidimensional problems. \subsection{Organization} \label{sec:orga} The rest of this paper is organized as follows. Section \ref{sec:gbf} reviews the basic tools and describes the {\em multidimensional butterfly factorization} for kernel matrices that strictly satisfy the complementary low-rank property. We then extend it in two different ways to address the multidimensional Fourier integral operators. Section \ref{sec:pbf} introduces the polar butterfly factorization (PBF) based on the polar butterfly algorithm proposed in \cite{fio09}. Section \ref{sec:mbf} discusses the multiscale butterfly factorization (MBF) based on the multiscale butterfly algorithm proposed in \cite{mba}. Finally, in Section \ref{sec:conc}, we conclude with some discussions. \section{Two-Dimensional {Butterfly Factorization}} \label{sec:gbf} This section presents the two-dimensional butterfly factorization for a kernel matrix $K = (K(x,\xi))_{x\in X,\xi\in \Omega}$ that satisfies the complementary low-rank property in $X\times \Omega$ with $X$ and $\Omega$ given in \eqref{eq:X} and \eqref{eq:Omega}. \subsection{Randomized low-rank factorization} \label{sec:randlr} The butterfly factorization relies heavily on randomized procedures for computing low-rank factorizations. For a matrix $Z\in \mathbb{C}^{m\times n}$, a rank-$r$ approximation in 2-norm can be computed via the truncated singular value decomposition ({SVD}), \begin{equation} \label{eq:Z} Z\approx U_0\Sigma_0V_0^*, \end{equation} where $U_0\in \mathbb{C}^{m\times r}$ and $V_0\in \mathbb{C}^{n\times r}$ are matrices with orthonormal columns, and $\Sigma_0\in \mathbb{R}^{r\times r}$ is a diagonal matrix containing the largest $r$ singular values of $Z$ in decreasing order. 
Once $Z\approx U_0\Sigma_0V_0^*$ is available, we can also construct different low-rank factorizations of $Z$ in three forms: \begin{align} & Z\approx USV^*,\quad U=U_0\Sigma_0,\quad S = \Sigma_0^{-1}, \quad V^*=\Sigma_0V_0^*;\label{eq:lowrank1}\\ & Z\approx UV^*,\quad U=U_0\Sigma_0, \quad V^*=V_0^*;\label{eq:lowrank2}\\ & Z\approx UV^*,\quad U=U_0, \quad V^*=\Sigma_0V_0^*.\label{eq:lowrank3} \end{align} As we shall see, the butterfly factorization uses each of these three forms in different stages of the algorithm. In \cite{1dbf}, we showed that the rank-$r$ {SVD} \eqref{eq:Z} can be constructed approximately via either random matrix-vector multiplication \cite{randsvd} or random sampling \cite{randsamp1,randsamp2}. In both cases, the key is to find accurate approximate bases for both the column and row spaces of $Z$ and approximate the largest $r$ singular values using these bases. \paragraph{SVD via random matrix-vector multiplication.} This algorithm proceeds as follows. \begin{itemize} \item This algorithm first applies $Z$ to a Gaussian random matrix $C\in \mathbb{C}^{n\times (r+k)}$ and its adjoint $Z^*$ to a Gaussian random matrix $R\in \mathbb{C}^{m\times (r+k)}$, where $k$ is the oversampling constant. \item Second, computing the pivoted {QR} decompositions of $ZC$ and $Z^*R$ identifies matrices $Q_{col}\in \mathbb{C}^{m\times r}$ and $Q_{row}\in \mathbb{C}^{n\times r}$ with orthonormal columns, which approximately span the column and row spaces of $Z$, respectively. \item Next, the algorithm seeks a matrix $M$ that satisfies \[ Z \approx Q_{col} M Q_{row}^* \] by setting $M = (R^*Q_{col})^\dagger R^*ZC(Q_{row}^*C)^\dagger$, where $(\cdot)^\dagger$ denotes the pseudo-inverse. \item Finally, combining the singular value decomposition $M=U_M \Sigma_M V_M^*$ of the matrix $M$ with the above approximation results in the desired approximate rank-$r$ SVD \[ Z \approx (Q_{col} U_M) \Sigma_M (Q_{row} V_M)^*. 
\] \end{itemize} Suppose that the cost of applying $Z$ and $Z^*$ to an arbitrary vector is $C_Z(m,n)$. Then the construction complexity of this procedure is $\O(C_Z(m,n) r+\max(m,n)r^2)$. As we shall see later, when the black-box routines for rapidly applying $K$ and $K^*$ are available, this procedure will be embedded into the algorithms for constructing the butterfly factorizations. \paragraph{SVD via random sampling.} This algorithm proceeds as follows. \begin{itemize} \item The first stage discovers the representative columns and rows progressively via computing multiple pivoted {QR} factorizations on randomly selected rows and columns of $Z$. The sets of representative columns and rows are initially empty. As the procedure proceeds, more and more columns (rows) are marked as {\em representative} and they are used in turn to discover new representative rows (columns). The procedure stops when the sets of the representative rows and columns stabilize. At this point, the representative columns (rows) approximately span the column (row) spaces of $Z$. \item Second, computing the pivoted {QR} decompositions of the representative columns and rows identifies matrices $Q_{col}\in \mathbb{C}^{m\times r}$ and $Q_{row}\in \mathbb{C}^{n\times r}$ with orthonormal columns, which approximately span the column and row spaces of $Z$, respectively. \item Next, the algorithm seeks a matrix $M$ that satisfies \[ Z \approx Q_{col} M Q_{row}^*. \] This is done by restricting this equation to a random row set $I_{row}$ and a random column set $I_{col}$ and considering \[ Z(I_{row},I_{col}) \approx Q_{col}(I_{row},:) M Q_{row}(I_{col},:)^*. \] Here both $I_{row}$ and $I_{col}$ are of size $O(r)$ and we require $I_{row}$ and $I_{col}$ to contain the sets of representative rows and columns, respectively. From the above equation, we can solve for $M$ by setting \[ M = (Q_{col}(I_{row},:))^\dagger Z(I_{row},I_{col}) (Q_{row}(I_{col},:)^*)^\dagger. 
\] \item Finally, combining the singular value decomposition $M=U_M \Sigma_M V_M^*$ of the matrix $M$ with the approximation $Z \approx Q_{col} M Q_{row}^*$ results in the desired approximate rank-$r$ SVD \[ Z \approx (Q_{col} U_M) \Sigma_M (Q_{row} V_M)^*. \] \end{itemize} The construction complexity of this procedure is $\O(\max(m,n)r^2)$ in practice. When an arbitrary entry of $Z$ can be evaluated in $\O(1)$ operations, this procedure is the method of choice for constructing low-rank factorizations. \subsection{Notations and overall structure} We adopt the notations of the one-dimensional butterfly factorization introduced in \cite{1dbf} and adapt them to the two-dimensional case of this paper. Recall that $n$ is the number of grid points in each dimension and $N=n^2$ is the total number of points. Suppose that $T_X$ and $T_\Omega$ are complete quadtrees with $L = \log n$ levels and, without loss of generality, $L$ is an even integer. For a fixed level $\ell$ between $0$ and $L$, the quadtree $T_X$ has $4^\ell$ nodes at level $\ell$. By defining $\mathcal{I}^\ell = \{0,1,\ldots,4^\ell-1\}$, we denote these nodes by $A^\ell_\i$ with $\i\in \mathcal{I}^\ell$. These $4^\ell$ nodes at level $\ell$ are further ordered according to a Z-order curve (or Morton order) as illustrated in Figure \ref{fig:domain-order-2D}. Based on this Z-ordering, the node $A^\ell_\i$ at level $\ell$ has four child nodes denoted by $A^{\ell+1}_{4\i + t}$ with $t=0,\dots,3$. The nodes plotted in Figure \ref{fig:domain-order-2D} for $\ell = 1$ (middle) and $\ell=2$ (right) illustrate the relationship between the parent node and its child nodes. Similarly, in the quadtree $T_\Omega$, the nodes at level $L-\ell$ are denoted by $B^{L-\ell}_\j$ for $\j\in\mathcal{I}^{L-\ell}$. 
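The Z-order bookkeeping can be checked with a few lines of code (a toy illustration of ours; the helper \texttt{morton} is not from the paper): interleaving the bits of a node's integer coordinates yields exactly the ordering in which the children of node $\i$ at level $\ell$ are $4\i+t$, $t=0,\dots,3$, at level $\ell+1$.

```python
def morton(ix, iy, level):
    """Z-order (Morton) index of the node with integer coordinates (ix, iy)."""
    idx = 0
    for b in range(level):                 # interleave the bits of ix and iy
        idx |= ((ix >> b) & 1) << (2 * b + 1)
        idx |= ((iy >> b) & 1) << (2 * b)
    return idx

# children of node i at level l are the nodes 4*i + t, t = 0,...,3, at level l+1
l, ix, iy = 2, 1, 3
i = morton(ix, iy, l)
children = sorted(morton(2 * ix + dx, 2 * iy + dy, l + 1)
                  for dx in (0, 1) for dy in (0, 1))
assert children == [4 * i + t for t in range(4)]
```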
For any level $\ell$ between $0$ and $L$, the kernel matrix $K$ can be partitioned into $O(N)$ submatrices $K_{A^\ell_\i,B^{L-\ell}_\j} :=(K(x,\xi))_{x\in A^\ell_\i,\xi\in B^{L-\ell}_\j}$ for $\i\in\mathcal{I}^\ell$ and $\j\in\mathcal{I}^{L-\ell}$. For simplicity, we shall denote $K_{A^\ell_\i,B^{L-\ell}_\j}$ as $K^{\ell}_{\i,\j}$, where the superscript $\ell$ denotes the level in the quadtree $T_X$. Because of the complementary low-rank property, every submatrix $K^\ell_{\i,\j}$ is numerically low-rank with the rank bounded by a uniform constant $r$ independent of $N$. \input{figure/fig-domain-order-2D} The two-dimensional butterfly factorization consists of two stages. The first stage computes the factorizations \[ K^h_{\i,\j}\approx U^h_{\i,\j}S^h_{\i,\j}\left(V^h_{\j,\i}\right)^* \] for all $\i,\j\in\mathcal{I}^h$ at the middle level $h=L/2$, following the form \eqref{eq:lowrank1}. These factorizations can then be assembled into three sparse matrices $U^h$, $M^h$, and $V^h$ to give rise to a factorization for $K$: \begin{equation} K\approx U^h M^h\left(V^h\right)^*. \end{equation} This stage is referred to as the \emph{middle level factorization} and is described in Section \ref{sec:mlf}. In the second stage, we recursively factorize the left and right factors $U^h$ and $V^h$ to obtain \begin{equation*} U^h \approx U^LG^{L-1}\cdots G^h \quad\text{and}\quad \left(V^h\right)^* \approx \left(H^h\right)^*\cdots \left(H^{L-1}\right)^*\left(V^L\right)^*, \end{equation*} where the matrices on the right hand side in each formula are sparse matrices with $O(N)$ nonzero entries. Once they are ready, we assemble all factors together to produce a data-sparse approximate factorization for $K$: \begin{equation}\label{eq:gbf} K\approx U^LG^{L-1}\cdots G^hM^h\left(H^h\right)^*\cdots \left(H^{L-1}\right)^*\left(V^L\right)^*, \end{equation} This stage is referred to as the \emph{recursive factorization} and is discussed in Section \ref{sec:rf}. 
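Both stages rely on the randomized factorizations of Section \ref{sec:randlr}. As a point of reference, the SVD via random matrix-vector multiplication can be sketched in a few lines (a minimal illustration of ours, not the paper's implementation: it uses plain rather than pivoted QR, a default oversampling $k=8$, and function names of our choosing):

```python
import numpy as np

def randomized_svd_matvec(apply_Z, apply_Zh, m, n, r, k=8, seed=0):
    """Sketch of a rank-r SVD of Z built only from products Z C and Z^* R."""
    rng = np.random.default_rng(seed)
    C = rng.standard_normal((n, r + k))     # Gaussian test matrices
    R = rng.standard_normal((m, r + k))
    ZC, ZhR = apply_Z(C), apply_Zh(R)
    Qcol = np.linalg.qr(ZC)[0][:, :r]       # approx. column-space basis of Z
    Qrow = np.linalg.qr(ZhR)[0][:, :r]      # approx. row-space basis of Z
    # Z ~ Qcol M Qrow^*  with  M = (R^* Qcol)^+ (R^* Z C) (Qrow^* C)^+
    M = (np.linalg.pinv(R.conj().T @ Qcol)
         @ (R.conj().T @ ZC)
         @ np.linalg.pinv(Qrow.conj().T @ C))
    UM, S, VMh = np.linalg.svd(M)
    return Qcol @ UM, S, Qrow @ VMh.conj().T

# usage on an exactly rank-5 matrix: only matrix-vector products of Z are used
rng = np.random.default_rng(1)
Z = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
U, S, V = randomized_svd_matvec(lambda X: Z @ X, lambda X: Z.conj().T @ X,
                                *Z.shape, r=5)
err = np.linalg.norm(Z - U @ np.diag(S) @ V.conj().T) / np.linalg.norm(Z)
assert err < 1e-8
```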
\subsection{Middle level factorization} \label{sec:mlf} Recall that we consider the construction of multidimensional butterfly factorization for two cases: \begin{enumerate}[(i)] \item A black-box routine for rapidly computing $Kg$ and $K^*g$ in $\O(N\log N)$ operations is available; \item A routine for evaluating any entry of $K$ in $\O(1)$ operations is given. \end{enumerate} In Case (i), we construct an approximate rank-$r$ {SVD} of each $K^h_{\i,\j}\in \mathbb{R}^{n\times n}$ with $\i,\j\in \mathcal{I}^h$ using the SVD via random matrix-vector multiplication (the first option in Section \ref{sec:randlr}). This requires applying each $K^h_{\i,\j}$ to a {Gaussian} random matrix $C_\j\in\mathbb{C}^{n\times(r+k)}$ and its adjoint to a Gaussian random matrix $R_\i\in\mathbb{C}^{(r+k)\times n}$. Here $r$ is the desired numerical rank and $k$ is the oversampling parameter. If a black box routine for applying the matrix $K$ and its adjoint is available, this can be done in an efficient way as follows. For each $\j\in\mathcal{I}^h$, one constructs a zero-padded random matrix $C^P_\j\in\mathbb{C}^{N\times (r+k)}$ by padding zero to $C_\j$. From the relationship \begin{equation} KC^P_\j = K \begin{pmatrix} 0\\ C_\j\\ 0 \end{pmatrix} = \begin{pmatrix} K^h_{{0},\j}C_\j\\ \vdots\\ K^h_{{4^h-1},\j}C_\j \end{pmatrix}, \end{equation} it is clear that applying $K$ to the matrix $C^P_\j$ produces $K^h_{\i,\j}C_\j$ for all $\i\in\mathcal{I}^h$. Similarly, we construct zero-padded random matrices $R^P_\i\in \mathbb{C}^{N\times (r+k)}$ by padding zero to $R_\i$ and compute \begin{equation} K^* R^P_\i = K^* \begin{pmatrix} 0\\ R_\i\\ 0 \end{pmatrix} = \begin{pmatrix} \left( K^h_{\i,{0}}\right)^* R_\i \\ \vdots \\ \left( K^h_{\i,{4^h-1}}\right)^* R_\i \end{pmatrix} \end{equation} by using the black-box routine for applying the adjoint of $K$. 
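This zero-padding trick is easy to verify on a small dense stand-in for $K$ (the sizes and names below are our toy choices): a single application of the black-box routine to the padded matrix returns $K^h_{\i,\j}C_\j$ for every row block $\i$ simultaneously.

```python
import numpy as np

rng = np.random.default_rng(0)
nb, n, r = 4, 8, 3                 # nb row/column blocks, block size n, r columns
N = nb * n
K = rng.standard_normal((N, N))    # small dense stand-in for the kernel matrix
apply_K = lambda g: K @ g          # the "black-box" routine

j = 2                              # target column block B_j
C_j = rng.standard_normal((n, r))
C_pad = np.zeros((N, r))
C_pad[j*n:(j+1)*n, :] = C_j        # zero-padded random matrix C_j^P

Y = apply_K(C_pad)                 # ONE application of the black box
for i in range(nb):                # block i of Y equals K_{i,j} C_j for every i
    assert np.allclose(Y[i*n:(i+1)*n], K[i*n:(i+1)*n, j*n:(j+1)*n] @ C_j)
```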
Finally, the approximated rank-$r$ {SVD} of $K^h_{\i,\j}$ for each pair of $\i\in\mathcal{I}^h$ and $\j\in\mathcal{I}^h$ is computed from $K^h_{\i,\j}C_\j$ and $\left(K^h_{\i,\j}\right)^*R_\i$. In Case (ii), since an arbitrary entry of $K$ can be evaluated in $\O(1)$ operations, the approximate rank-$r$ {SVD} of $K^h_{\i,\j}$ is computed using the SVD via randomized sampling \cite{randsamp1,randsamp2} (the second option in Section \ref{sec:randlr}). In both cases, once the approximate rank-$r$ SVD is ready, we transform it into the form of \eqref{eq:lowrank1}: \begin{equation} \label{eq:Kij} K^h_{\i,\j}\approx U^h_{\i,\j}S^h_{\i,\j}\left(V^h_{\j,\i}\right)^*. \end{equation} Here the columns of the left and right factors $U^h_{\i,\j}$ and $V^h_{\j,\i}$ are scaled by the singular values of $K^h_{\i,\j}$ such that $U^h_{\i,\j}$ and $V^h_{\j,\i}$ keep track of the importance of the column and row bases for further factorizations. \input{figure/fig-compression-2D} After computing the rank-$r$ factorization in \eqref{eq:Kij} for all $\i$ and $\j$ in $\mathcal{I}^h$, we assemble all left factors $U^h_{\i,\j}$ into a matrix $U^h$, all middle factors into a matrix $M^h$, and all right factors into a matrix $V^h$ so that \begin{equation} \label{eq:UMV} K\approx U^hM^h(V^h)^*. \end{equation} Here $U^h$ is a block diagonal matrix of size $N\times rN$ with $n$ diagonal blocks $U^h_{\i}$ of size $n\times rn$: \begin{equation*} U^h= \begin{pmatrix} U^h_{{0}} & & &\\ & U^h_{{1}} & &\\ & & \ddots &\\ & & & U^h_{{4^h-1}} \end{pmatrix}, \end{equation*} where each diagonal block $U^h_{\i}$ consists of the left factors $U^h_{\i,\j}$ for all $\j$ as follows: \begin{equation} U_{\i}^h= \begin{pmatrix} U^h_{\i,{0}} & U^h_{\i,{1}} & \cdots & U^h_{\i,{4^h-1}} \end{pmatrix} \in \mathbb{C}^{n \times rn}. 
\label{eq:expression-U} \end{equation} Similarly, $V^h$ is a block diagonal matrix of size $N\times rN$ with $n$ diagonal blocks $V^h_{\j}$ of size $n\times rn$, where each diagonal block $V^h_{\j}$ consists of the right factors $V^h_{\j,\i}$ for all $\i$ as follows: \begin{equation} V^h_{\j}= \begin{pmatrix} V^h_{\j,{0}} & V^h_{\j,{1}} & \cdots & V^h_{\j,{4^h-1}} \end{pmatrix} \in \mathbb{C}^{n \times rn}. \label{eq:expression-V} \end{equation} The middle matrix $M^h\in \mathbb{C}^{rN\times rN}$ is an $n\times n$ block matrix. The $(\i,\j)$-th block $M^h_{\i,\j}\in \mathbb{C}^{rn\times rn}$ is itself an $n\times n$ block matrix. The only nonzero block of $M^h_{\i,\j}$ is the $(\j,\i)$-th block, which is equal to the $r\times r$ matrix $S^h_{\i,\j}$, and the other blocks of $M^h_{\i,\j}$ are zero. We refer to Figure \ref{fig:compression-2D} for a simple example of the middle level factorization when $N=4^2$. \subsection{Recursive factorization} \label{sec:rf} In this section, we shall discuss how to recursively factorize \begin{equation} \label{eq:RSU} U^\ell\approx U^{\ell+1}G^\ell \end{equation} and \begin{equation} \label{eq:RSV} (V^\ell)^*\approx (H^\ell)^*(V^{\ell+1})^* \end{equation} for $\ell=h,h+1,\dots,L-1$. After these recursive factorizations, we can construct the two-dimensional butterfly factorization \begin{equation} K\approx U^LG^{L-1}\cdots G^hM^h\left(H^h\right)^*\cdots \left(H^{L-1}\right)^*\left(V^L\right)^* \end{equation} by substituting these recursive factorizations into \eqref{eq:UMV}. \subsubsection{Recursive factorization of $U^h$} \label{sec:rfu} In the middle level factorization, we utilized the low-rank property of $K^h_{\i,\j}$, the kernel matrix restricted in the domain $A^h_\i\times B^h_\j\in T_X\times T_\Omega$, to obtain $U^h_{\i,\j}$ for $\i,\j\in\mathcal{I}^h$. 
We shall now use the complementary low-rank property at level $\ell=h+1$, i.e., the matrix $K^{h+1}_{\i,\j}$ restricted to $A^{h+1}_\i\times B^{h-1}_\j\in T_X\times T_\Omega$ is numerically low-rank for $\i\in\mathcal{I}^{h+1}$ and $\j\in\mathcal{I}^{h-1}$. These factorizations of the column bases from level $h$ generate the column bases at level $h+1$ through the following four steps: splitting, merging, truncating, and assembling. \paragraph{Splitting.} In the middle level factorization, we have constructed \begin{equation*} U^h= \begin{pmatrix} U^h_{{0}} & & &\\ & U^h_{{1}} & &\\ & & \ddots &\\ & & & U^h_{{4^h-1}} \end{pmatrix} \quad \text{with} \quad U_{\i}^h= \begin{pmatrix} U^h_{\i,{0}} & U^h_{\i,{1}} & \cdots & U^h_{\i,{4^h-1}} \end{pmatrix}\in \mathbb{C}^{n\times r n}, \end{equation*} where each $U^h_{\i,\j}\in \mathbb{C}^{n\times r}$. Each node $A^h_\i$ in the quadtree $T_X$ on the level $h$ has four child nodes on the level $h+1$, denoted by $\{A^{h+1}_{4\i+{t}}\}_{t=0,1,2,3}$. According to this structure, one can split $U^h_{\i,\j}$ into four parts in the row space, \begin{equation}\label{eq:splitandmerge} U^h_{\i,\j}= \begin{pmatrix} U^{h,0}_{\i,\j}\\ \midrule U^{h,1}_{\i,\j}\\ \midrule U^{h,2}_{\i,\j}\\ \midrule U^{h,3}_{\i,\j} \end{pmatrix}, \end{equation} where $U^{h,t}_{\i,\j}$ approximately spans the column space of the submatrix of $K$ restricted to $A^{h+1}_{4\i+{t}}\times B^h_{\j}$ for each $t=0,\ldots,3$. 
Combining this with the definition of $U^h_\i$ gives rise to \begin{equation} U^h_{\i}= \begin{pmatrix} U^h_{\i,{0}} & U^h_{\i,{1}} & \cdots & U^h_{\i,{4^h-1}} \end{pmatrix} = \begin{pmatrix} U^{h,0}_{\i,{0}} & U^{h,0}_{\i,{1}} &\dots & U^{h,0}_{\i,{4^h-1}}\\ \midrule U^{h,1}_{\i,{0}} & U^{h,1}_{\i,{1}} &\dots & U^{h,1}_{\i,{4^h-1}}\\ \midrule U^{h,2}_{\i,{0}} & U^{h,2}_{\i,{1}} &\dots & U^{h,2}_{\i,{4^h-1}}\\ \midrule U^{h,3}_{\i,{0}} & U^{h,3}_{\i,{1}} &\dots & U^{h,3}_{\i,{4^h-1}} \end{pmatrix} =: \begin{pmatrix} U^{h,0}_{\i}\\ \midrule U^{h,1}_{\i}\\ \midrule U^{h,2}_{\i}\\ \midrule U^{h,3}_{\i} \end{pmatrix}, \end{equation} where $U^{h,t}_{\i}$ approximately spans the column space of the matrix $K$ restricted to $A^{h+1}_{4\i+{t}}\times \Omega$. \paragraph{Merging.} The merging step merges adjacent matrices $U^{h,t}_{\i,\j}$ in the column space to obtain low-rank matrices. For any $\i\in \mathcal{I}^h$ and $\j\in \mathcal{I}^{h-1}$, the merged matrix \begin{equation} \begin{pmatrix} U^{h,t}_{\i,4\j+{0}} & U^{h,t}_{\i,4\j+{1}} & U^{h,t}_{\i,4\j+{2}} & U^{h,t}_{\i,4\j+{3}} \end{pmatrix} \in \mathbb{C}^{n/4\times 4r} \label{eq:blocks} \end{equation} approximately spans the column space of $K^{h+1}_{4\i+{t},\j}$ corresponding to the domain $A_{4\i+t}^{h+1}\times B^{h-1}_{\j}$. By the complementary low-rank property of the matrix $K$, we know that $K^{h+1}_{4\i+{t},\j}$ is numerically low-rank. Hence, the matrix in \eqref{eq:blocks} is also a numerically low-rank matrix. This merging step is equivalent to moving from level $h$ to level $h-1$ in $T_\Omega$. \paragraph{Truncating.} The third step computes a rank-$r$ approximation of each merged matrix using the standard truncated {SVD} and puts it in the form of \eqref{eq:lowrank2}. 
For each $\i\in\mathcal{I}^h$ and $\j\in\mathcal{I}^{h-1}$, the factorization \begin{equation} \begin{pmatrix} U^{h,t}_{\i,4\j+{0}} & U^{h,t}_{\i,4\j+{1}} & U^{h,t}_{\i,4\j+{2}} & U^{h,t}_{\i,4\j+{3}} \end{pmatrix} \approx U^{h+1}_{4\i+{t},\j}G^{h}_{4\i+{t},\j}, \label{eq:blockF} \end{equation} defines $U^{h+1}_{4\i+{t},\j}\in \mathbb{C}^{n/4\times r}$ and $G^{h}_{4\i+{t},\j}\in\mathbb{C}^{r\times 4r}$. \paragraph{Assembling} In the final step, we construct the factorization $U^h \approx U^{h+1} G^h$ using \eqref{eq:blockF}. Since $\mathcal{I}^{h+1}$ is the same as $\{4\i+\ve{t}\}_{\i\in\mathcal{I}^h,t=0,1,2,3}$, one can arrange \eqref{eq:blockF} for all $\i$ and $\j$ into a single formula as follows: \begin{equation*} \begin{split} &U^h \approx U^{h+1}G^{h} =\\ &\begin{pmatrix} U^{h+1}_{{0}} & & & & & & & & &\\ & \ddots & & & & & & & &\\ & & U^{h+1}_{{3}} & & & & & & &\\ & & & U^{h+1}_{{4}} & & & & &\\ & & & & \ddots & & & &\\ & & & & & U^{h+1}_{{7}} & & &\\ & & & & & & \ddots & &\\ & & & & & & & U^{h+1}_{{4^{h+1}-4}} &\\ & & & & & & & & \ddots \\ & & & & & & & & & U^{h+1}_{{4^{h+1}-1}} \end{pmatrix} \begin{pmatrix} G^{h}_{{0}} & & &\\ \vdots & & &\\ G^{h}_{{3}} & & &\\ & G^{h}_{{4}} & &\\ & \vdots & &\\ & G^{h}_{{7}} & &\\ & & \ddots &\\ & & & G^{h}_{{4^{h+1}-4}}\\ & & & \vdots\\ & & & G^{h}_{{4^{h+1}-1}} \end{pmatrix}, \end{split} \end{equation*} where the blocks are given by \begin{equation*} U_{\i}^{h+1}= \begin{pmatrix} U^{h+1}_{\i,{0}} & U^{h+1}_{\i,{1}} & \cdots & U^{h+1}_{\i,{4^{h-1}-1}} \end{pmatrix} \end{equation*} and \begin{equation*} G^{h}_{\i}= \begin{pmatrix} G^{h}_{\i,{0}} & & &\\ & G^{h}_{\i,{1}} & &\\ & & \ddots &\\ & & & G^{h}_{\i,{4^{h-1}-1}}\\ \end{pmatrix} \end{equation*} for $\i\in\mathcal{I}^{h+1}$. Figure \ref{fig:compression-2DU} shows a toy example of the recursive factorization of $U^h$ when $N = 4^2$, $h=2$ and $r=1$. 
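The truncating step \eqref{eq:blockF} can be illustrated on a synthetic merged matrix (a toy example of ours; the sizes and the rank are arbitrary): four blocks sharing one rank-$r$ column space are compressed into a new basis and an $r\times 4r$ transfer factor in the form of \eqref{eq:lowrank2}.

```python
import numpy as np

rng = np.random.default_rng(2)
m, r = 64, 3
# synthetic merged matrix [U0 U1 U2 U3]: 4r columns drawn from one rank-r space
basis = rng.standard_normal((m, r))
W = np.hstack([basis @ rng.standard_normal((r, r)) for _ in range(4)])

U0, s, Vh = np.linalg.svd(W, full_matrices=False)
U_next = U0[:, :r] * s[:r]   # new column basis, scaled as in the form (2.3)
G = Vh[:r, :]                # r x 4r transfer factor
rel_err = np.linalg.norm(W - U_next @ G) / np.linalg.norm(W)
assert rel_err < 1e-10       # exact up to roundoff since rank(W) = r
```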
Since there are $\O(1)$ nonzero entries in each $G^h_{\i,\j}$ and $\O(4^{h+1}\cdot 4^{h-1})=\O(N)$ such matrices, there are only $\O(N)$ nonzero entries in $G^h$. \input{figure/fig-compression-2DU} In a similar way, we can now factorize $U^\ell\approx U^{\ell+1}G^\ell$ for $h< \ell\leq L-1$. As before, the key point is that the columns of \begin{equation} \begin{pmatrix} U^{\ell,t}_{\i,4\j+{0}} & U^{\ell,t}_{\i,4\j+{1}} & U^{\ell,t}_{\i,4\j+{2}} & U^{\ell,t}_{\i,4\j+{3}} \end{pmatrix} \end{equation} approximately span the column space of $K^{\ell+1}_{4\i+{t},\j}$, which is of rank $r$ numerically due to the complementary low-rank property. Computing its rank-$r$ approximation via the standard truncated {SVD} results in a form of \eqref{eq:lowrank2} \begin{equation} \begin{pmatrix} U^{\ell,t}_{\i,4\j+{0}} & U^{\ell,t}_{\i,4\j+{1}} & U^{\ell,t}_{\i,4\j+{2}} & U^{\ell,t}_{\i,4\j+{3}} \end{pmatrix} \approx U^{\ell+1}_{4\i+{t},\j} G^{\ell}_{4\i+{t},\j} \label{eq:RSh2} \end{equation} for $\i\in\mathcal{I}^\ell$ and $\j\in \mathcal{I}^{L-\ell-1}$. 
After assembling these factorizations together, we obtain \begin{equation*} \begin{split} &U^\ell \approx U^{\ell+1}G^{\ell} =\\ &\begin{pmatrix} U^{\ell+1}_{{0}} & & & & & & & & &\\ & \ddots & & & & & & & &\\ & & U^{\ell+1}_{{3}} & & & & & & &\\ & & & U^{\ell+1}_{{4}} & & & & &\\ & & & & \ddots & & & &\\ & & & & & U^{\ell+1}_{{7}} & & &\\ & & & & & & \ddots & &\\ & & & & & & & U^{\ell+1}_{{4^{\ell+1}-4}} &\\ & & & & & & & & \ddots \\ & & & & & & & & & U^{\ell+1}_{{4^{\ell+1}-1}} \end{pmatrix} \begin{pmatrix} G^{\ell}_{{0}} & & &\\ \vdots & & &\\ G^{\ell}_{{3}} & & &\\ & G^{\ell}_{{4}} & &\\ & \vdots & &\\ & G^{\ell}_{{7}} & &\\ & & \ddots &\\ & & & G^{\ell}_{{4^{\ell+1}-4}}\\ & & & \vdots\\ & & & G^{\ell}_{{4^{\ell+1}-1}} \end{pmatrix}, \end{split} \end{equation*} where \begin{equation*} U_{\i}^{\ell+1}= \begin{pmatrix} U^{\ell+1}_{\i,{0}} & U^{\ell+1}_{\i,{1}} & \cdots & U^{\ell+1}_{\i,{4^{L-\ell-1}-1}} \end{pmatrix} \end{equation*} and \begin{equation*} G^{\ell}_{\i}= \begin{pmatrix} G^{\ell}_{\i,{0}} & & &\\ & G^{\ell}_{\i,{1}} & &\\ & & \ddots &\\ & & & G^{\ell}_{\i,{4^{L-\ell-1}-1}}\\ \end{pmatrix} \end{equation*} for $\i \in \mathcal{I}^{\ell+1}$. After the $L-h$ steps of recursive factorization $U^\ell \approx U^{\ell+1}G^{\ell}$ for $\ell=h,h+1,\dots,L-1$, the recursive factorization of $U^h$ takes the following form: \begin{equation} \label{eq:RFUh} U^h\approx U^{L}G^{L-1}\cdots G^{h}. \end{equation} Similarly to the analysis of $G^{h}$, it is easy to check that there are only $\O(N)$ nonzero entries in each $G^{\ell}$ in \eqref{eq:RFUh}. As for the first factor $U^{L}$, it has $\O(N)$ nonzero entries since there are $\O(N)$ diagonal blocks in $U^L$ and each block contains $\O(1)$ entries. \subsubsection{Recursive factorization of $V^h$} \label{sec:rfv} The recursive factorization of $V^\ell$ is similar to that of $U^\ell$ for $\ell=h,h+1,\dots,L-1$.
At each level $\ell$, we benefit from the fact that \[ \begin{pmatrix} V^{\ell,t}_{\j,4\i+{0}} & V^{\ell,t}_{\j,4\i+{1}} & V^{\ell,t}_{\j,4\i+{2}} & V^{\ell,t}_{\j,4\i+{3}} \end{pmatrix} \] approximately spans the row space of $K^{L-\ell-1}_{\i,4\j+{t}}$ and hence is numerically low-rank for $\j\in\mathcal{I}^{L-\ell}$ and $\i\in\mathcal{I}^{\ell-1}$. Applying the same procedure in Section \ref{sec:rfu} to $V^{h}$ leads to \begin{equation} \label{eq:VF} V^h\approx V^LH^{L-1}\cdots H^h. \end{equation} \subsection{Complexity analysis} \label{sec:ca} By combining the results of the middle level factorization in \eqref{eq:UMV} and the recursive factorizations in \eqref{eq:RFUh} and \eqref{eq:VF}, we obtain the final butterfly factorization \begin{equation} K\approx U^LG^{L-1}\cdots G^hM^h\left(H^h\right)^*\cdots \left(H^{L-1}\right)^*\left(V^L\right)^*, \end{equation} each factor of which contains $\O(N)$ nonzero entries. We refer to Figure \ref{fig:compression-2D-real} for an illustration of the butterfly factorization of $K$ when $N=16^2$. \input{figure/fig-compression-2D-real} The complexity of constructing the butterfly factorization comes from two parts: the middle level factorization and the recursive factorization. For the middle level factorization, the construction cost is different depending on which of the two cases mentioned in Section \ref{sec:mlf} is under consideration, since they use different approaches in constructing rank-$r$ SVDs at the middle level. \begin{itemize} \item In Case (i), the dominant cost is to apply $K$ and $K^*$ to $N^{1/2}$ Gaussian random matrices of size $N\times O(1)$. Assuming that the given black-box routine for applying $K$ and $K^*$ to a vector takes $\O(C_K(N))$ operations, the total operation complexity is $\O(C_K(N)N^{1/2})$. \item In Case (ii), we apply the SVD procedure with random sampling to $N$ submatrices of size $N^{1/2}\times N^{1/2}$. 
Since the operation complexity for each submatrix is $\O(N^{1/2})$, the overall complexity is $\O(N^{3/2})$. \end{itemize} In the recursive factorization stage, most of the work comes from factorizing $U^h$ and $V^h$. There are $\O(\log N)$ stages in the factorization of $U^h$. At stage $\ell$, the matrix $U^\ell$ to be factorized consists of $4^\ell$ diagonal blocks. There are $\O(N)$ factorizations and each factorization takes $\O(N/4^\ell)$ operations. Hence, the operation complexity to factorize $U^\ell$ is $\O(N^2/4^\ell)$. Summing up the operations at each stage yields the overall operation complexity for recursively factorizing $U^h$: \begin{equation} \sum^{L-1}_{\ell=h}\O(N^2/4^\ell) = \O(N^{3/2}). \end{equation} The peak memory usage of the butterfly factorization is due to the middle level factorization, where we need to store the results of $\O(N)$ factorizations of size $\O(N^{1/2})$. Hence, the memory complexity for the two-dimensional butterfly factorization is $\O(N^{3/2})$. For Case (ii), one can actually do better by following the same argument in \cite{1dbf}: one can interleave the order of generation and recursive factorization of $U^h_{\i,\j}$ and $V^h_{\j,\i}$. By factorizing $U^h_{\i,\j}$ and $V^h_{\j,\i}$ individually instead of formulating \eqref{eq:UMV}, the memory complexity in Case (ii) can be reduced to $\O(N\log N)$. The cost of applying the butterfly factorization is equal to the number of nonzero entries in the final factorization, which is $\O(N\log N)$. Table \ref{tab:complexity} summarizes the complexity analysis for the two-dimensional butterfly factorization. \begin{table}[ht!] \centering \begin{tabular}{llcc} \toprule & & SVD via rand. matvec & SVD via rand.
sampling \\ \toprule \multirow{5}{3cm}{Factorization Complexity} & \parbox{2.5cm}{Middle level\\factorization} & $\O(C_K(N) N^{1/2} )$ & $\O(N^{3/2})$ \\ \cmidrule(r){2-4} & \parbox{2.5cm}{Recursive\\factorization} & \multicolumn{2}{c}{$\O(N^{3/2})$}\\ \cmidrule(r){2-4} & Total & $\O(C_K(N) N^{1/2} )$ & $\O(N^{3/2})$\\ \toprule \parbox{3cm}{Memory\\Complexity} & & $\O(N^{3/2})$ & $\O(N\log N)$ \\ \toprule \parbox{3cm}{Application\\Complexity} & & \multicolumn{2}{c}{$\O(N\log N)$}\\ \bottomrule \end{tabular} \caption{The time and memory complexity of the two-dimensional butterfly factorization. Here $C_K(N)$ is the complexity of applying the matrices $K$ and $K^*$ to a vector. For most butterfly algorithms, $C_K(N)=O(N\log N)$. } \label{tab:complexity} \end{table} \subsection{Extensions}\label{sec:ext} We have introduced the two-dimensional butterfly factorization for a complementary low-rank kernel matrix $K$ in the entire domain $X\times \Omega$. Although we have assumed the uniform grid in \eqref{eq:X} and \eqref{eq:Omega}, the butterfly factorization extends naturally to more general settings. In the case with non-uniform point sets $X$ or $\Omega$, one can still construct a butterfly factorization for $K$ following the same procedure. More specifically, we still construct two trees $T_X$ and $T_\Omega$ {\em adaptively} via hierarchically partitioning the square domains covering $X$ and $\Omega$. For non-uniform point sets $X$ and $\Omega$, the numbers of points in $A^\ell_\i$ and $B^{L-\ell}_\j$ are different. If a node does not contain any point inside it, it is simply discarded from the quadtree. The complexity analysis summarized in Table \ref{tab:complexity} remains valid in the case of non-uniform point sets $X$ and $\Omega$. On each level $\ell = h,\dots,L$ of the butterfly factorization, although the sizes of low-rank submatrices are different, the total number of submatrices and the numerical rank remain the same. 
Hence, the total operation and memory complexity remains the same as summarized in Table \ref{tab:complexity}. \section{Polar Butterfly Factorization} \label{sec:pbf} In Section \ref{sec:gbf}, we have introduced a two-dimensional {butterfly factorization} for a complementary low-rank kernel matrix $K$ in the entire domain $X\times \Omega$. In this section, we will introduce a polar butterfly factorization to deal with the kernel function $K(x,\xi)=e^{2\pi \imath \Phi(x,\xi)}$. Such a kernel function is singular at $\xi=0$, and the approach taken here follows the polar butterfly algorithm proposed in \cite{fio09}. \subsection{Polar butterfly algorithm} \label{sec:pba} The multidimensional {Fourier} integral operator (FIO) is defined as \begin{equation}\label{eq:fio} u(x) = \sum_{\xi\in\Omega} e^{2\pi \imath \Phi(x,\xi)}g(\xi), \quad x\in X, \end{equation} where the phase function $\Phi(x,\xi)$ is assumed to be real-analytic in $(x,\xi)$ for $\xi\neq 0$, and is homogeneous of degree 1 in $\xi$, namely, $\Phi(x,\lambda\xi) = \lambda\Phi(x,\xi)$ for all $\lambda>0$. Here the grids $X$ and $\Omega$ are the same as those in \eqref{eq:X} and \eqref{eq:Omega}. As the phase function $\Phi(x,\xi)$ is singular at $\xi=0$, the numerical rank of the kernel $e^{2\pi \imath \Phi(x,\xi)}$ in a domain near or containing $\xi=0$ is typically large. Hence, in general $K(x,\xi) = e^{2\pi \imath \Phi(x,\xi)}$ does not satisfy the complementary low-rank property over the domain $X\times\Omega$ with quadtree structures $T_X$ and $T_\Omega$. To fix this problem, the polar butterfly algorithm introduces a scaled polar transformation on $\Omega$: \begin{equation} \xi=(\xi_1,\xi_2) = \frac{\sqrt{2}}{2}n p_1 \cdot (\cos{2\pi p_2},\sin{2\pi p_2}), \label{eq:polar} \end{equation} for $\xi\in \Omega$ and $p=(p_1,p_2)\in [0,1]^2$. In the rest of this section, we use $p$ to denote a point in polar coordinates and $P$ for the set of all points $p$ transformed from $\xi\in\Omega$.
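For concreteness, the scaled polar map \eqref{eq:polar} and its inverse can be written down directly. A short sketch (the helper names are ours; the scaling guarantees $p_1\leq 1$ on $\Omega$ since the farthest corner of $\Omega$ has norm $n/\sqrt{2}$):

```python
import numpy as np

def to_polar(xi, n):
    """Invert the scaled polar transformation: given xi in Omega, return
    p = (p1, p2) in [0,1]^2 with
    xi = (sqrt(2)/2) * n * p1 * (cos 2*pi*p2, sin 2*pi*p2)."""
    xi1, xi2 = xi
    p1 = np.sqrt(2.0) * np.hypot(xi1, xi2) / n   # radius scaled so p1 <= 1
    p2 = (np.arctan2(xi2, xi1) / (2.0 * np.pi)) % 1.0
    return p1, p2

def from_polar(p, n):
    """The forward map of the scaled polar transformation."""
    p1, p2 = p
    c = (np.sqrt(2.0) / 2.0) * n * p1
    return c * np.cos(2.0 * np.pi * p2), c * np.sin(2.0 * np.pi * p2)

# round trip on a sample frequency point (values are illustrative)
xi = (7.0, -3.0)
p = to_polar(xi, 64)
xi_back = from_polar(p, 64)
```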
This transformation gives rise to a new phase function $\Psi(x,p)$ in variables $x$ and $p$ satisfying \begin{equation} \Psi(x,p) = \frac{1}{n}\Phi(x,\xi(p)) = \frac{\sqrt{2}}{2}\Phi\left(x, (\cos{2\pi p_2},\sin{2\pi p_2})\right) \cdot p_1, \end{equation} where the last equality comes from the fact that $\Phi(x,\xi)$ is homogeneous of degree 1 in $\xi$. This new phase function $\Psi(x,p)$ is smooth in the entire domain $X\times P$ and the FIO in \eqref{eq:fio} takes the new form \begin{equation} \label{eq:polarsum} u(x) = \sum_{p\in P} e^{2\pi\imath n\Psi(x,p)}g(p),\quad x\in X. \end{equation} The transformation \eqref{eq:polar} ensures that $X\times P\subset [0,1]^2\times [0,1]^2$. By partitioning $[0,1]^2$ recursively, we can construct two quadtrees $T_X$ and $T_P$ of depth $L=\O(\log n)$ for $X$ and $P$, respectively. The following theorem is a rephrased version of Theorem 3.1 in \cite{fio09} that shows analytically the complementary low-rank property of $e^{2\pi \imath n\Psi(x,p)}$ in the $(X,P)$ domain. \begin{theorem} \label{thm:pba} Suppose $A$ is a node in $T_X$ at level $\ell$ and $B$ is a node in $T_P$ at level $L-\ell$. Given an {FIO} kernel function $e^{2\pi\imath n\Psi(x,p)}$ with a real-analytic phase function in the joint variables $x$ and $p$, there exist $\epsilon_0>0$ and $n_0>0$ such that for any positive $\epsilon\leq \epsilon_0$ and $n\geq n_0$, there exist $r_\epsilon$ pairs of functions $\{\alpha_t^{A,B}(x), \beta_t^{A,B}(p)\}_{1\leq t\leq r_\epsilon}$ satisfying that \begin{equation*} \left|e^{2\pi\imath n\Psi(x,p)} - \sum^{r_\epsilon}_{t=1} \alpha_t^{A,B}(x)\beta_t^{A,B}(p) \right|\leq \epsilon, \end{equation*} for $x\in A$ and $p\in B$ with $r_\epsilon\lesssim \log^4(1/\epsilon)$. 
\end{theorem} Based on Theorem \ref{thm:pba}, the polar butterfly algorithm traverses upward in $T_P$ and downward in $T_X$ simultaneously and visits the low-rank submatrices $K_{A,B}=\{e^{2\pi\imath n\Psi(x_i,p_j)}\}_{x_i\in A, p_j\in B}$ for pairs $(A,B)$ in $T_X\times T_P$. The polar butterfly algorithm is asymptotically very efficient: for a given input vector $g(p)$ for $p\in P$, it evaluates \eqref{eq:polarsum} in $\O(N\log N)$ steps using $\O(N)$ memory space. We refer the reader to \cite{fio09} for a detailed description of this algorithm. \subsection{Factorization algorithm} \label{sec:algopbf} Combining the polar butterfly algorithm with the butterfly factorization outlined in Section \ref{sec:gbf} gives rise to the following polar butterfly factorization (PBF). \begin{enumerate} \item \emph{Preliminary.} Take the polar transformation of each point in $\Omega$ and reformulate the problem \begin{equation} u(x) = \sum_{\xi\in \Omega} e^{2\pi\imath\Phi(x,\xi)}g(\xi),\quad x\in X, \end{equation} into \begin{equation} u(x) = \sum_{p\in P} e^{2\pi\imath n\Psi(x,p)}g(p),\quad x\in X. \end{equation} \item \emph{Factorization.} Apply the two-dimensional {butterfly factorization} to the kernel $e^{2\pi\imath n\Psi(x,p)}$ defined on a non-uniform point set in $X\times P$. The corresponding kernel matrix is approximated as \begin{equation} K\approx U^LG^{L-1}\cdots G^hM^h\left(H^h\right)^*\cdots \left(H^{L-1}\right)^*\left(V^L\right)^*. \end{equation} \end{enumerate} Since the polar butterfly factorization essentially applies the original butterfly factorization to non-uniform point sets $X$ and $P$, it has the same complexity as summarized in Table \ref{tab:complexity}. Depending on the SVD procedure employed in the middle level factorization, we refer to it either as PBF-m (when SVD via random matrix-vector multiplication is used) or as PBF-s (when SVD via random sampling is used).
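Once the sparse factors are available, applying the factorization is just a chain of matrix-vector products, right to left. A minimal sketch (the factor list and dense stand-ins are illustrative, not the PBF data layout):

```python
import numpy as np

def apply_factored(factors, g):
    """Apply K ~= F_1 F_2 ... F_m to a vector g by multiplying the factors
    one at a time, right to left.  With O(log N) sparse factors holding
    O(N) nonzeros each, this costs O(N log N) instead of an O(N^2) dense
    matrix-vector product."""
    y = np.asarray(g)
    for F in reversed(factors):
        y = F @ y   # in practice each F is stored in a sparse format
    return y

# sanity check against the fully formed product (dense stand-ins)
rng = np.random.default_rng(1)
N = 32
factors = [rng.standard_normal((N, N)) for _ in range(5)]
g = rng.standard_normal(N)
u_fast = apply_factored(factors, g)
u_full = np.linalg.multi_dot(factors) @ g
```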
\subsection{Numerical results} \label{sec:numpbf} This section presents two numerical examples to demonstrate the efficiency of the polar butterfly factorization. The numerical results were obtained in MATLAB on a server with 2.40 {GHz} {CPU} and 1.5 {TB} of memory. In this section, we denote by $\{u^p(x)\}_{x\in X}$ the results obtained via the {PBF}. The relative error of the {PBF} is estimated as follows, by comparing $u^p(x)$ with the exact values $u(x)$. \begin{equation} e^p = \sqrt{\cfrac{\sum_{x\in S}|u^p(x)-u(x)|^2}{\sum_{x\in S}|u(x)|^2}}, \end{equation} where $S$ is a set of 256 randomly sampled points from $X$. {\bf Example 1.} The first example is a two-dimensional generalized {Radon} transform that is an {FIO} defined as follows: \begin{equation} \label{eq:example1fio} u(x) = \sum_{\xi\in\Omega} e^{2\pi \imath \Phi(x,\xi)}g(\xi), \quad x\in X, \end{equation} with the phase function given by \begin{equation}\label{eq:example1phi} \begin{split} \Phi(x,\xi) =& x\cdot \xi+\sqrt{c_1^2(x)\xi_1^2+c_2^2(x)\xi_2^2},\\ c_1(x) = & (2+\sin(2\pi x_1)\sin(2\pi x_2))/16,\\ c_2(x) = & (2+\cos(2\pi x_1)\cos(2\pi x_2))/16, \end{split} \end{equation} where $X$ and $\Omega$ are defined in \eqref{eq:X} and \eqref{eq:Omega}. The computation in \eqref{eq:example1fio} approximately integrates over spatially varying ellipses, for which $c_1(x)$ and $c_2(x)$ are the axis lengths of the ellipse centered at the point $x\in X$. The corresponding matrix form of \eqref{eq:example1fio} is simply \begin{equation} u=Kg, \quad K = (e^{2\pi\imath\Phi(x,\xi)})_{x\in X,\xi\in\Omega}. \label{eq:uKg} \end{equation} As $e^{2\pi \imath \Phi(x,\xi)}$ is known explicitly, we are able to use the {PBF-s} (i.e., the one with random sampling in the middle level factorization) to approximate the kernel matrix $K$ given by $e^{2\pi \imath \Phi(x,\xi)}$. 
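The phase function of Example 1 is simple to evaluate directly, which is how the reference values $u(x)$ for the error estimate $e^p$ can be obtained on the 256 sampled points. A sketch (function names are ours; vectorized over the frequency points):

```python
import numpy as np

def phase(x, Xi):
    """Phase function of Example 1, evaluated at one point x = (x1, x2)
    against an array Xi of frequency points with shape (m, 2):
    Phi(x, xi) = x . xi + sqrt(c1(x)^2 xi1^2 + c2(x)^2 xi2^2)."""
    x1, x2 = x
    c1 = (2.0 + np.sin(2 * np.pi * x1) * np.sin(2 * np.pi * x2)) / 16.0
    c2 = (2.0 + np.cos(2 * np.pi * x1) * np.cos(2 * np.pi * x2)) / 16.0
    return (x1 * Xi[:, 0] + x2 * Xi[:, 1]
            + np.sqrt((c1 * Xi[:, 0]) ** 2 + (c2 * Xi[:, 1]) ** 2))

def u_direct(x, Xi, g):
    """Direct O(N) evaluation of u(x) = sum_xi exp(2 pi i Phi(x, xi)) g(xi),
    used to produce reference values at a few sampled x."""
    return np.sum(np.exp(2j * np.pi * phase(x, Xi)) * g)

# spot check at x = (0, 0): c1 = 1/8, so Phi((0,0), (1,0)) = 1/8
Xi0 = np.array([[1.0, 0.0]])
phi_val = phase((0.0, 0.0), Xi0)[0]
u_val = u_direct((0.0, 0.0), Xi0, np.array([1.0]))
```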
After the construction of the butterfly factorization, the summation in \eqref{eq:example1fio} can be evaluated efficiently by applying these sparse factors to $g(\xi)$. Table \ref{tab:sec4example1} summarizes the results of this example. \begin{table}[ht!] \centering \begin{tabular}{rcccc} \toprule $n,r$ & $\epsilon^p$ & $T_{f,p}(min)$ & $T_p(sec)$ & Speedup \\ \toprule 64,6 & 2.46e-02 & 6.51e-01 & 2.37e-02 & 1.54e+02 \\ 128,6 & 7.55e-03 & 9.84e+00 & 2.30e-01 & 1.67e+02 \\ 256,6 & 5.10e-02 & 2.73e+01 & 6.23e-01 & 7.55e+02 \\ 512,6 & 1.46e-02 & 4.00e+02 & 7.88e+00 & 4.15e+02 \\ \toprule 64,14 & 7.93e-04 & 7.34e-01 & 5.98e-02 & 8.72e+01 \\ 128,14 & 7.28e-04 & 1.17e+01 & 7.15e-01 & 4.28e+01 \\ 256,14 & 2.15e-03 & 3.93e+01 & 1.46e+00 & 2.86e+02 \\ 512,14 & 1.25e-03 & 5.63e+02 & 1.05e+01 & 3.35e+02 \\ \toprule 64,22 & 6.96e-05 & 7.40e-01 & 8.24e-02 & 4.51e+01 \\ 128,22 & 7.23e-05 & 1.16e+01 & 1.04e+00 & 3.69e+01 \\ 256,22 & 2.44e-04 & 5.14e+01 & 5.94e+00 & 7.74e+01 \\ \bottomrule \end{tabular} \caption{Numerical results provided by the {PBF} with randomized sampling algorithm for the FIO in \eqref{eq:example1fio}. $n$ is the number of grid points in each dimension; $N=n^2$ is the size of the kernel matrix; $r$ is the max rank used in the low-rank approximation; $T_{f,p}$ is the factorization time of the {PBF}; $T_d$ is the running time of the direct evaluation; $T_p$ is the application time of the {PBF}. The last column shows the speedup factor compared to the direct evaluation. } \label{tab:sec4example1} \end{table} {\bf Example 2.} The second example evaluates the composition of two {FIOs} with the same phase function $\Phi(x,\xi)$. This is given explicitly by \begin{equation}\label{eq:example2mfio} u(x) =\sum_{\eta\in\Omega} e^{2\pi \imath \Phi(x,\eta)} \sum_{y\in X}e^{-2\pi \imath y\cdot \eta} \sum_{\xi\in\Omega} e^{2\pi \imath \Phi(y,\xi)}g(\xi), \quad x\in X, \end{equation} where the phase function is given in \eqref{eq:example1phi}. 
The corresponding matrix representation is \begin{equation}\label{eq:example2mfiomat} u =KFKg, \end{equation} where $K$ is the matrix given in \eqref{eq:uKg} and $F$ is the matrix representation of the discrete Fourier transform. Under relatively mild assumptions (see \cite{Theory} for details), the composition of two FIOs is again an FIO. Hence, the kernel matrix \begin{equation} \widetilde{K} := KFK \label{eq:KFK} \end{equation} of the product can be approximated by the butterfly factorization. Notice that the kernel function of $\widetilde{K}$ defined by \eqref{eq:KFK} is not given explicitly. However, \eqref{eq:KFK} provides fast algorithms for applying $\widetilde{K}$ and its adjoint through the fast algorithms for $K$ and $F$. For example, the butterfly factorization of Example 1 enables the efficient application of $K$ and $K^*$ in $\O(N\log N)$ operations. Applying $F$ and $F^*$ can be done by the fast Fourier transform in $\O(N\log N)$ operations. Therefore, we can apply the {PBF-m} (i.e., the one with random matrix-vector multiplication) to factorize the kernel $\widetilde{K} = KFK$. Table \ref{tab:sec4example2} summarizes the numerical results of this example, the composition of two FIOs. \begin{table}[ht!] \centering \begin{tabular}{rcccc} \toprule $n,r$ & $\epsilon^p$ & $T_{f,p}(min)$ & $T_p(sec)$ & Speedup \\ \toprule 64,12 & 3.84e-02 & 6.22e+00 & 2.18e-02 & 3.34e+02 \\ 128,12 & 1.31e-02 & 3.86e+02 & 1.80e-01 & 4.25e+02 \\ \toprule 64,20 & 2.24e-03 & 8.58e+00 & 3.04e-02 & 2.39e+02 \\ 128,20 & 2.23e-03 & 3.68e+02 & 3.60e-01 & 2.13e+02 \\ \bottomrule \end{tabular} \caption{Numerical results provided by the {PBF} with the random matrix-vector multiplication algorithm for the composition of FIOs given in \eqref{eq:example2mfiomat}.} \label{tab:sec4example2} \end{table} {\bf Discussion.} The numerical results in Tables \ref{tab:sec4example1} and \ref{tab:sec4example2} support the asymptotic complexity analysis.
When we fix $r$ and let $n$ grow, the actual running time fluctuates around the asymptotic scaling since the implementation of the algorithms differs slightly depending on whether $L$ is odd or even. However, the overall trend matches well with the $O(N^{3/2})$ construction cost and the $O(N\log N)$ application cost. For a fixed $n$, one can improve the accuracy by increasing the truncation rank $r$. From the tables, one observes that the relative error decreases by a factor of 10 each time we increase the rank $r$ by $8$. In the second example, since the composition of two {FIOs} typically has higher ranks compared to a single FIO, the numerical rank $r$ used for the composition is larger than that for a single FIO in order to maintain comparable accuracy. \section{Multiscale Butterfly Factorization} \label{sec:mbf} In this section, we discuss yet another approach for constructing a butterfly factorization for the kernel $K(x,\xi)=e^{2\pi\imath \Phi(x,\xi)}$ with a singularity at $\xi=0$. This is based on the multiscale butterfly algorithm introduced in \cite{mba}. \subsection{Multiscale butterfly algorithm} \label{sec:mba} The key idea of the multiscale butterfly algorithm \cite{mba} is to hierarchically partition the domain $\Omega$ into subdomains excluding the singular point $\xi=0$. This multiscale partition is illustrated in Figure \ref{fig:domain-decomp} with \begin{equation}\label{eq:domaindecomp} \Omega_t = \left\{(\xi_1,\xi_2):\frac{n}{2^{t+2}}<\max(|\xi_1|,|\xi_2|) \leq \frac{n}{2^{t+1}}\right\}\cap \Omega, \end{equation} for $t=0,1,\dots,\log_2 n-s$, where $s$ is a small constant, and $\Omega_C=\Omega\setminus\cup_t\Omega_t$. Equation \eqref{eq:domaindecomp} is a corona decomposition of $\Omega$, where each $\Omega_t$ is a corona subdomain and $\Omega_C$ is a square subdomain at the center containing $\O(1)$ points.
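The corona decomposition \eqref{eq:domaindecomp} reduces to a membership test on $\max(|\xi_1|,|\xi_2|)$. A sketch (the particular value of the small constant $s$ is our assumption):

```python
import math

def corona_index(xi, n, s=4):
    """Return t such that xi lies in the corona Omega_t, i.e.
    n / 2**(t+2) < max(|xi1|, |xi2|) <= n / 2**(t+1),
    or -1 when xi falls in the central square Omega_C.
    The value of the small constant s is an assumption for illustration."""
    m = max(abs(xi[0]), abs(xi[1]))
    for t in range(int(math.log2(n)) - s + 1):
        if n / 2 ** (t + 2) < m <= n / 2 ** (t + 1):
            return t
    return -1  # central square Omega_C, summed directly
```

Since the shells are disjoint half-open intervals in $\max(|\xi_1|,|\xi_2|)$, at most one $t$ can match for any given point.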
\input{figure/fig-domain-decomp} The FIO kernel $e^{2\pi\imath\Phi(x,\xi)}$ satisfies the complementary low-rank property when it is restricted to each subdomain $X\times \Omega_t$. This observation is supported by the following theorem rephrased from Theorem 3.1 in \cite{mba}. Here the notation $\dist(B,0) = \min_{\xi\in B}\norm{\xi-0}$ is the distance between the square $B$ and the origin $\xi=0$ in $\Omega$. \begin{theorem} \label{thm:mba} Given an {FIO} kernel function $e^{2\pi\imath\Phi(x,\xi)}$ with a real-analytic phase function $\Phi(x,\xi)$ for $x$ and $\xi$ away from $\xi=0$, there exist a constant $n_0>0$ and a small constant $\epsilon_0$ such that the following statement holds. Let $A$ and $B$ be two squares in $X$ and $\Omega$ with sidelength $w_A$ and $w_B$, respectively. Suppose $w_Aw_B\leq 1$ and $\dist(B,0)\geq \frac{n}{4}$. For any positive $\epsilon\leq\epsilon_0$ and $n\geq n_0$, there exist $r_\epsilon$ pairs of functions $\{\alpha_t^{A,B}(x), \beta_t^{A,B}(\xi)\}_{1\leq t\leq r_\epsilon}$ satisfying that \begin{equation*} \left|e^{2\pi\imath \Phi(x,\xi)} - \sum^{r_\epsilon}_{t=1} \alpha_t^{A,B}(x) \beta_t^{A,B}(\xi) \right|\leq \epsilon, \end{equation*} for $x\in A$ and $\xi\in B$ with $r_\epsilon\lesssim \log^4(1/\epsilon)$. \end{theorem} According to the low-rank property in Theorem \ref{thm:mba}, the multiscale butterfly algorithm rewrites \eqref{eq:fio} as a multiscale summation, \begin{equation}\label{eq:mba} u(x) = u_C(x) + \sum_{t=0}^{\log_2n-s} u_t(x) = \sum_{\xi\in \Omega_C}e^{2\pi\imath \Phi(x,\xi)}g(\xi) +\sum_{t=0}^{\log_2n-s}\sum_{\xi\in \Omega_t}e^{2\pi\imath \Phi(x,\xi)}g(\xi). \end{equation} For each $t$, the multiscale butterfly factorization algorithm evaluates $u_t(x) = \sum_{\xi\in\Omega_t} e^{2\pi\imath\Phi(x,\xi)} g(\xi)$ with a standard butterfly algorithm such as the one that relies on the oscillatory {Lagrange} interpolation on a {Chebyshev} grid (see \cite{fio09}).
The final piece $u_C(x)$ is evaluated directly in $\O(N)$ operations. As a result, the multiscale butterfly algorithm asymptotically takes $\O(N\log N)$ operations to evaluate \eqref{eq:mba} for a given input function $g(\xi)$ for $\xi\in\Omega$. We refer the reader to \cite{mba} for the detailed exposition. \subsection{Factorization algorithm} \label{sec:algombf} Combining the multiscale butterfly algorithm with the butterfly factorization outlined in Section \ref{sec:gbf} gives rise to the following multiscale butterfly factorization (MBF). \begin{enumerate} \item \emph{Preliminary.} Decompose the domain $\Omega$ into subdomains as in \eqref{eq:domaindecomp}. Reformulate the problem into a multiscale summation according to \eqref{eq:mba}: \begin{equation} \label{eq:msumd} K = K_C R_C + \sum_{t=0}^{\log_2n-s} K_t R_t. \end{equation} Here $K_C$ and $K_t$ are kernel matrices corresponding to $X\times \Omega_C$ and $X\times \Omega_t$. $R_C$ and $R_t$ are the restriction operators to the domains $\Omega_C$ and $\Omega_t$, respectively. \item \emph{Factorization.} Recall that $L=\log_2 n$. For each $t=0,1,\dots,L-s$, apply the two-dimensional butterfly factorization on $K(x,\xi)=e^{2\pi \imath \Phi(x,\xi)}$ restricted to $X\times \Omega_t$. Let $\widetilde{\Omega}_t$ be the smallest square that contains $\Omega_t$. Define $L_t = 2\lfloor (L-t)/2\rfloor$, where $\lfloor \cdot\rfloor$ is the largest integer less than or equal to a given number. We construct two quadtrees $T_X$ and $T_{\widetilde{\Omega}_t}$ of depth $L_t$ with $X$ and $\widetilde{\Omega}_t$ being the roots, respectively. Applying the two-dimensional butterfly factorization using the quadtrees $T_X$ and $T_{\widetilde{\Omega}_t}$ gives the $t$-th butterfly factorization: \begin{equation*} K_t \approx U_t^{L_t}G_t^{L_t-1}\cdots G_t^{\frac{L_t}{2}} M_t^{\frac{L_t}{2}} \left(H_t^{\frac{L_t}{2}}\right)^*\cdots \left(H_t^{L_t-1}\right)^*\left(V_t^{L_t}\right)^*.
\end{equation*} Note that $1/4$ of the tree $T_{\widetilde{\Omega}_t}$ is empty and we can simply ignore the computation for these parts. This is a special case of non-uniform point sets. Once we have computed all butterfly factorizations, the multiscale summation in \eqref{eq:msumd} is approximated by \begin{equation} K \approx K_C R_C + \sum_{t=0}^{L-s} U_t^{L_t}G_t^{L_t-1}\cdots M_t^{\frac{L_t}{2}}\cdots \left(H_t^{L_t-1}\right)^*\left(V_t^{L_t}\right)^* R_t. \label{eq:f1} \end{equation} \end{enumerate} The idea of the hierarchical decomposition of $\Omega$ not only avoids the singularity of $K(x,\xi)$ at $\xi=0$, but also maintains the efficiency of the butterfly factorization. The butterfly factorization for the kernel matrix restricted to $X\times \Omega_t$ is a special case of the non-uniform butterfly factorization in which the center of $\Omega_t$ contains no point. Since the number of points in $\Omega_t$ decreases exponentially in $t$, the operation and memory complexity of the multiscale butterfly factorization is dominated by the butterfly factorization of $K_t$ for $t=0$, which is bounded by the complexity summarized in Table \ref{tab:complexity}. Depending on the SVD procedure in the middle level factorization, we refer to this factorization either as MBF-m (when SVD via random matrix-vector multiplication is used) or as MBF-s (when SVD via random sampling is used). \subsection{Numerical results} \label{sec:nummbf} This section presents two numerical examples to demonstrate the efficiency of the {MBF} as well. The numerical results are obtained in the same environment as the one used in Section \ref{sec:numpbf}. Here we denote by $\{u^m(x),x\in X\}$ the results obtained via the {MBF}. The relative error is estimated by \begin{equation} e^m = \sqrt{\cfrac{\sum_{x\in S}|u^m(x)-u(x)|^2}{\sum_{x\in S}|u(x)|^2}}, \end{equation} where $S$ is a set of 256 points randomly sampled from $X$.
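The restriction operators $R_t$ and $R_C$ in \eqref{eq:msumd} act by zeroing the entries of $g$ outside the corresponding subdomain, and together they partition $\Omega$. A small sketch checking the partition property (the grid size and the constant $s$ are our choices for illustration):

```python
import numpy as np

def restriction_masks(Omega, n, s=4):
    """Boolean masks realizing the restriction operators R_t and R_C:
    R_t keeps the entries of g supported on the corona Omega_t and zeros
    the rest.  The coronas and the central square partition Omega."""
    t_max = int(np.log2(n)) - s
    m = np.max(np.abs(Omega), axis=1)      # max(|xi1|, |xi2|) per point
    masks = [(n / 2 ** (t + 2) < m) & (m <= n / 2 ** (t + 1))
             for t in range(t_max + 1)]
    center = ~np.any(masks, axis=0)        # whatever is left is Omega_C
    return masks, center

# check that the masks tile the domain exactly once per point
n = 64
grid = np.arange(-n // 2, n // 2)
Omega = np.array([(a, b) for a in grid for b in grid])
masks, center = restriction_masks(Omega, n)
coverage = center.astype(int) + sum(mk.astype(int) for mk in masks)
```

Applying $K$ to $g$ then amounts to summing the per-corona butterfly applications of the masked inputs plus the direct evaluation on the central square.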
In the multiscale decomposition of $\Omega$, we recursively divide $\Omega$ until the center part is of size 16 by 16. {\bf Example 1. } We revisit the first example in Section \ref{sec:numpbf} to illustrate the performance of the {MBF}, \begin{equation}\label{eq:sec5example1fio} u(x) = \sum_{\xi\in\Omega} e^{2\pi \imath \Phi(x,\xi)}g(\xi), \quad x\in X, \end{equation} with the phase function $\Phi(x,\xi)$ given by \begin{equation} \begin{split} \Phi(x,\xi) =& x\cdot \xi+\sqrt{c_1^2(x)\xi_1^2+c_2^2(x)\xi_2^2},\\ c_1(x) = & (2+\sin(2\pi x_1)\sin(2\pi x_2))/16,\\ c_2(x) = & (2+\cos(2\pi x_1)\cos(2\pi x_2))/16, \end{split} \end{equation} where $X$ and $\Omega$ are defined in \eqref{eq:X} and \eqref{eq:Omega}. Table \ref{tab:sec5example1} summarizes the results of this example obtained by applying the {MBF-s}. \begin{table}[ht!] \centering \begin{tabular}{rcccc} \toprule $n,r$ & $\epsilon^m$ & $T_{f,m}(min)$ & $T_m(sec)$ & Speedup \\ \toprule 64,12 & 1.58e-02 & 4.48e-01 & 4.09e-02 & 1.13e+02 \\ 128,12 & 1.47e-02 & 5.64e+00 & 1.93e-01 & 2.02e+02 \\ 256,12 & 2.13e-02 & 2.16e+01 & 5.51e-01 & 9.26e+02 \\ 512,12 & 1.97e-02 & 2.97e+02 & 5.07e+00 & 6.45e+02 \\ \toprule 64,20 & 5.51e-03 & 4.74e-01 & 6.11e-02 & 6.17e+01 \\ 128,20 & 4.27e-03 & 5.95e+00 & 5.01e-01 & 7.63e+01 \\ 256,20 & 1.68e-03 & 3.03e+01 & 2.51e+00 & 1.79e+02 \\ 512,20 & 2.02e-03 & 4.57e+02 & 1.14e+01 & 2.98e+02 \\ \toprule 64,28 & 7.42e-05 & 7.18e-01 & 3.92e-02 & 6.23e+01 \\ 128,28 & 8.46e-05 & 1.23e+01 & 5.42e-01 & 7.43e+01 \\ 256,28 & 5.63e-04 & 6.73e+01 & 3.23e+00 & 1.43e+02 \\ 512,28 & 4.18e-04 & 7.20e+02 & 1.66e+01 & 2.14e+02 \\ \bottomrule \end{tabular} \caption{Numerical results provided by the {MBF} with the randomized sampling algorithm for the FIO given in \eqref{eq:sec5example1fio}.
$n$ is the number of grid points in each dimension; $N=n^2$ is the size of the kernel matrix; $r$ is the max rank used in the low-rank approximation; $T_{f,m}$ is the factorization time of the {MBF}; $T_d$ is the running time of the direct evaluation; $T_m$ is the application time of the {MBF}; $T_d/T_m$ is the speedup factor.} \label{tab:sec5example1} \end{table} {\bf Example 2. } Here we revisit the second example in Section \ref{sec:numpbf} to illustrate the performance of the {MBF}. Recall that the matrix representation of a composition of two {FIOs} is \begin{equation} \label{eq:sec5example2mfiomat} u =\widetilde{K}g = KFKg, \end{equation} and that there are fast algorithms to apply $K$, $F$ and their adjoints. Hence, we can apply the {MBF-m} (i.e., with the random matrix-vector multiplication) to factorize $\widetilde{K}$ into the form of \eqref{eq:f1}. Table \ref{tab:sec5example2} summarizes the results. \begin{table}[ht!] \centering \begin{tabular}{rcccc} \toprule $n,r$ & $\epsilon^m$ & $T_{f,m}(min)$ & $T_m(sec)$ & Speedup \\ \toprule 64,16 & 1.86e-02 & 4.05e+00 & 1.95e-02 & 4.23e+02 \\ 128,16 & 1.76e-02 & 1.27e+02 & 1.86e-01 & 4.17e+02 \\ \toprule 64,24 & 4.43e-03 & 5.37e+00 & 2.52e-02 & 3.27e+02 \\ 128,24 & 3.02e-03 & 1.79e+02 & 2.29e-01 & 3.40e+02 \\ \bottomrule \end{tabular} \caption{MBF numerical results for the composition of FIOs given in \eqref{eq:sec5example2mfiomat}.} \label{tab:sec5example2} \end{table} {\bf Discussion.} The results in Tables \ref{tab:sec5example1} and \ref{tab:sec5example2} agree with the $\O(N^{3/2}\log N)$ complexity analysis of the construction algorithm. As we double the problem size $n$, the factorization time increases by a factor of 9 on average. The actual application time in these numerical examples matches the theoretical operation complexity of $\O(N\log N)$. In Table \ref{tab:sec5example1}, the relative error decreases by a factor of 10 when the increment of the rank $r$ is 6.
In Table \ref{tab:sec5example2}, the relative error decreases by a factor of 6 when the increment of the rank $r$ is 8. \section{Conclusion} \label{sec:conc} We have introduced three multidimensional butterfly factorizations as data-sparse representations of a class of kernel matrices coming from multidimensional integral transforms. When the integral kernel $K(x,\xi)$ satisfies the complementary low-rank property in the entire domain, the butterfly factorization introduced in Section \ref{sec:gbf} represents an $N\times N$ kernel matrix as a product of $\O(\log N)$ sparse matrices. In the FIO case, for which the kernel $K(x,\xi)$ is singular at $\xi=0$, we propose two extensions: (1) the polar butterfly factorization, which incorporates a polar coordinate transformation to remove the singularity, and (2) the multiscale butterfly factorization, which relies on a hierarchical partitioning of the $\Omega$ domain. For both extensions, the resulting butterfly factorization takes $O(N\log N)$ storage space and $O(N \log N)$ steps for computing a matrix-vector multiplication, as before. The butterfly factorization for higher dimensions ($d>2$) can be constructed in a similar way. For the {PBF}, one simply applies a $d$-dimensional spherical transformation to the frequency domain $\Omega$. For the {MBF}, one can again decompose the frequency domain as a union of dyadic shells centered around the singularity at $\xi=0$. {\bf Acknowledgments.} This work was partially supported by the National Science Foundation under award DMS-1328230 and the U.S. Department of Energy's Advanced Scientific Computing Research program under award DE-FC02-13ER26134/DE-SC0009409. H. Yang also thanks the support from the National Science Foundation under award ACI-1450372 and an AMS-Simons Travel Grant. \bibliographystyle{abbrv}
\section{Introduction} For decades, the social sciences have studied how large-scale patterns of human activity emerge from the behavior of individuals~\cite{schelling}. Until a decade ago, data sets were typically gleaned from questionnaires, observational studies, etc., and were understandably rather small. Some statistical features only become visible in very large data sets; power-law degree distributions are one such example. With the development of information (and database) technology in the last decade, we can now observe structures that require large data sets. One such recently observed phenomenon is the power-law distributions of interevent times of online activity. This feature can be seen both at the level of populations~\cite{Mainardi2000, Plerou2000, Masoliver2003, Scalas2004, Kaizoji2004, Scalas2006} and individuals~\cite{Barabasi2005, Vazquez2006, Dezso2006}, and cannot be explained by independent, uniformly random, interaction patterns. Understanding such emerging communication patterns is essential to be able to predict the impact of new technologies, the spread of computer viruses~\cite{Balthrop2004,Vazquez2007}, human travel~\cite{Brockman2006}, etc. How do power-laws in response, or interevent, times occur? In a pioneering work, Barab\'asi~\cite{Barabasi2005} proposed a queuing model as an explanation (later solved analytically~\cite{Vazquez2006, Vazquez2005, Gabrielli2007}). In this model, the power-law statistics does not come from a power-law distributed trait of the agents, but emerges from the interaction between the agents and the environment. Barab\'asi's model gives response times of two universality classes---one with power-law exponent $\alpha=1$ (observed in e-mail communication~\cite{Eckmann2004, Barabasi2005}), and a class with $\alpha=1.5$ (observed in surface mail communication~\cite{Oliveira2005}).
The behavioral origin of power-law tails according to Barab\'asi's model~\cite{Barabasi2005} is that the individuals use a \emph{highest-priority-first} (HPF) protocol to decide which task needs to be executed first (rather than a first-in-first-out strategy). However, power-laws have been observed in systems driven by individuals arguably not guided by task-lists (e.g., web browsing~\cite{Dezso2006}, networked games~\cite{Henderson2001} and online chatting~\cite{Dewes2003}). In this work, we perform a detailed study of such a system, namely an online infrastructure for rating movies. Our primary quantity is the time $\tau$ between two consecutive movie ratings. The distribution $p(\tau)$ of the aggregated data follows a power law spanning more than two orders of magnitude. More interestingly, we observe a monotonous relation between the power-law exponent and the mean activity in the group (see below for how we divide the whole population into groups). This suggests that the activity of individuals is one of the key ingredients determining the distribution of interevent times. \begin{figure}\center \scalebox{0.7}[0.7]{\includegraphics{graph1}} \caption{The distribution of interevent time at the population level, indicating that $p(\tau)\sim \tau^{-2.08}$. The solid line in the log-log plot has slope $-2.08$. The data exhibits weekly oscillations, reflecting a weekly periodicity of human behavior, which has also been observed in e-mail communication~\cite{Holme2003}. }\label{fig:1} \end{figure} \section{Data source} Our data source, obtained from www.netflixprize.com, was collected by Netflix, a large American mail-order DVD-rental company. The users can rate movies online. This information is used to give the users personalized recommendations. The data was made public as part of a competition to develop a better recommender system. In total, the data comprises $M=17{,}770$ movies, $N=447{,}139$ users and $\sim 9.67\times 10^7$ records.
Each record consists of four elements: a user ID $i$, a movie ID $\alpha$, the user's rating (from $1$ to $5$) $v_{i\alpha}$, and the time of the rating $t_{i\alpha}$. Tracking the records of a given user $i$, one can get $k_i-1$ interevent times, where $k_i$ is the number of movies $i$ has rated. The time resolution of the data is one day. \begin{figure}\center \scalebox{0.6}[0.6]{\includegraphics{graph2a}} \scalebox{0.6}[0.6]{\includegraphics{graph2b}} \caption{The typical distributions of interevent times at a group level---group 4 (upper panel) and group 17 (lower panel). The solid lines in the log-log plot have slopes $-2.41$ and $-1.71$, respectively. The corresponding mean activities are $1.274$ and $0.112$.}\label{fig:2} \end{figure} \section{Interevent time distribution for the whole population} In Fig.~\ref{fig:1}, we report the interevent time distribution based on the aggregated data of all users. The distribution follows a power law, $p(\tau)\sim \tau^{-\gamma}$, for more than two orders of magnitude. The power-law exponent, $\gamma\approx 2.08$, is obtained by maximum likelihood estimation~\cite{Goldstein2004}. All the power-law exponents reported in this Letter are obtained by this method. To avoid bias from the weekly oscillations mentioned above, at the whole-population level we only include the data points separated by one week. That is to say, in the calculation of the power-law exponent, only the data points $F(7), F(14), F(21), \cdots$ are considered, where $F(\tau)$ denotes the frequency of interevent time $\tau$. A proposed mechanism for the emergence of power-law distributions with $\gamma\approx 2.0$ is aggregation of Poissonian distributions with different, uniformly distributed, characteristic times~\cite{Hidalgo2006}. However, as we will see later, the empirical statistics and analysis at the group and individual levels demonstrate that this scaling law cannot be caused by a combination of Poissonian agents.
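For concreteness, the maximum-likelihood step can be sketched as follows. This is a minimal illustration on synthetic data using the continuous-variable (Hill) estimator; the sample size, random seed, and lower cutoff are arbitrary choices for the sketch, and the Kolmogorov--Smirnov validation prescribed in~\cite{Goldstein2004} is not reproduced here.

```python
import math
import random

def ml_powerlaw_exponent(taus, tau_min):
    """Continuous maximum-likelihood (Hill) estimator for p(tau) ~ tau^-gamma,
    using only the samples with tau >= tau_min."""
    logs = [math.log(t / tau_min) for t in taus if t >= tau_min]
    return 1.0 + len(logs) / sum(logs)

# Synthetic interevent times drawn from a pure power law by inverse transform:
# P(T > t) = (t/tau_min)^(1-gamma)  =>  t = tau_min * u^(-1/(gamma-1)), u in (0,1].
random.seed(0)
gamma_true, tau_min = 2.08, 7.0
samples = [tau_min * (1.0 - random.random()) ** (-1.0 / (gamma_true - 1.0))
           for _ in range(100_000)]

gamma_hat = ml_powerlaw_exponent(samples, tau_min)
print(round(gamma_hat, 2))  # close to gamma_true for large samples
```

On real data one would additionally restrict the fit to the weekly points $\tau=7,14,21,\cdots$ as described above.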
\begin{figure}\center \scalebox{0.7}[0.7]{\includegraphics{graph3}} \caption{The relation between the power-law exponent $\gamma$ of the interevent time distribution and the mean activity of each group. Each point corresponds to one group. All the exponents are obtained by maximum likelihood estimation and pass the Kolmogorov--Smirnov test with threshold quantile $0.9$~\cite{Goldstein2004}. }\label{fig:3} \end{figure} \section{Interevent time distribution for groups} The HPF protocol~\cite{Barabasi2005} explains heavy tails in response times of human communication. Nevertheless, we lack an in-depth understanding of the interevent time distribution in data sets such as ours. The aggregated distribution can probably not be explained by identical individual behavior. A heavy smoker, consuming fifty cigarettes per day, would not make a long pause. Events separated by longer times would (assuming smoking patterns follow the same statistics) come from other people---occasional party-smokers, mischievous adolescents, or similar. Similarly, the other end of the spectrum in Fig.~\ref{fig:1} probably corresponds to other persons. To probe this, we measure the \emph{activity} $A_i$~\cite{Ghoshal2006}---the frequency of events of an individual: $A_i=n_i/T_i$, where $n_i$ is the total number of records of $i$, and $T_i$ is the time between the first and the last event of $i$. In other words, $A_i$ is the frequency of movie ratings of $i$. As shown in Fig.~\ref{fig:1}, the mean activity, averaged over all users, is $\langle A \rangle=0.812$.
}\label{fig:4} \end{figure} To investigate the role of activity, we sort the users by activity in descending order, and then divide this list into twenty groups, each of which has almost the same number of users. Accordingly, the mean activity of each group obeys the inequality $\langle A \rangle_1>\langle A \rangle_2>\cdots>\langle A \rangle_{20}$. In Fig.~\ref{fig:2}, we report two typical distributions of interevent time at a group level. Both these distributions follow power-laws. Note that the group with lower activity has a smaller power-law exponent, corresponding to a longer average interevent time. The corresponding distributions for the other groups follow power-law forms as well, but with different exponents. In Figure~\ref{fig:3} we diagram the exponent as a function of activity. There is a non-trivial, monotonous increase of the exponent with the activity. This relation, in accordance with our smoker example above, indicates the significant role of activity for the observed, aggregate behavior. Note that, for a mathematically ideal power-law distribution $p(\tau)\sim \tau^{-\gamma}$, the exponent $\gamma$ has a one-to-one correspondence with $A$ from the relation \begin{equation} \gamma(A)=1+\frac{1}{1-A} ~ , ~ 0<A<1~. \end{equation} For $A>1$, there is no corresponding normalized power-law probability distribution of $\tau$. However, the situation in the real data is very different. As shown in Figs.~\ref{fig:1} and \ref{fig:2}, the activity is mainly determined by the drooping head of $p(\tau)$, not the tail used to calculate $\gamma$ (we consider $\tau=7,14,21,\cdots$ only). A similar case can be found in~\cite{Barabasi2005} and its supplementary material, where a peak at $p(\tau=1)$, which was ignored in the calculation of $\gamma$, mainly describes the individual activity.
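As a consistency check, this relation follows from identifying the activity with the inverse mean interevent time of an idealized continuous power law on $\tau\ge 1$ (a sketch; the real data are discrete with a one-day resolution):

```latex
% For p(\tau)=(\gamma-1)\,\tau^{-\gamma} on \tau\in[1,\infty) with \gamma>2:
\langle \tau \rangle
  = (\gamma-1)\int_1^{\infty}\tau^{\,1-\gamma}\,d\tau
  = \frac{\gamma-1}{\gamma-2},
\qquad
A = \frac{1}{\langle \tau \rangle} = \frac{\gamma-2}{\gamma-1},
% and solving for \gamma recovers the stated relation:
\qquad
\gamma = \frac{A-2}{A-1} = 1+\frac{1}{1-A}.
```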
\begin{figure}\center \scalebox{0.42}[0.42]{\includegraphics{graph5a}} \scalebox{0.41}[0.41]{\includegraphics{graph5b}} \scalebox{0.39}[0.39]{\includegraphics{graph5c}} \scalebox{0.39}[0.39]{\includegraphics{graph5d}} \caption{(Color online) The interevent time distributions for (a)--(b) two consecutive movie ratings by two Netflix users, and (c)--(d) two consecutive text messages sent by two mobile telephone users. The time unit for (a) and (b) is one day, and for (c) and (d) one hour. Under the threshold quantile $0.9$, the distributions in (a) and (b) do not pass the Kolmogorov--Smirnov test, while those in (c) and (d) do pass it. }\label{fig:5} \end{figure} If every monitored individual generates events in a Poisson process with an individual rate $A$, then the distribution of interevent time should be~\cite{Hidalgo2006} \begin{equation} p(\tau)\sim f(A)\tau^{-2}~, \end{equation} where $f(A)$ is the activity distribution of individuals. Since the power-law exponent at the population level is close to 2, if it results from an aggregation of Poissonian individuals, the activity distribution should follow a uniform pattern. However, as shown in the main plot of Fig.~\ref{fig:4}, the activity distribution at the population level is not uniform. In contrast, as reported in the insets of Fig.~\ref{fig:4}, the cumulative distribution $F(A)$ for group 4 and group 17 can be well fitted by a straight line, suggesting a uniform distribution $f(A)$, while the exponents $\gamma_4$ and $\gamma_{17}$ are far from each other, and both different from 2. Therefore, the heavy-tailed nature at the group level cannot originate from homogeneous Poissonian individuals. To our knowledge, this is the first observation of a monotonous relation between the power-law exponent of the interevent time distribution and a specific individual measure (i.e.\ activity). We believe this analysis illustrates the important role of the individual activity in the aggregate pattern of human behavior.
\begin{figure}\center \scalebox{0.7}[0.7]{\includegraphics{graph6}} \caption{(Color online) Scatter plot showing the second moment $\langle \tau^2\rangle$ and activity, indicating a negative correlation. The red curve shows the average value of $\langle \tau^2\rangle$ for a given activity, and the blue curve represents the case of a Poisson distribution whose expected value is the inverse of the activity. }\label{fig:6} \end{figure} \section{Interevent time distribution for individuals} To continue tying together micro- and macro-level phenomena, we look more closely at the behavior of individual agents. In particular, we investigate whether or not the monotonous relation between activity and power-law exponent also holds at an individual level. Figs.~\ref{fig:5}(a) and (b) report the interevent time distribution $p(\tau)$ of two individual users. We observe a relation similar to that for the group-level statistics. That is to say, the less active agent has a broader distribution and a smaller power-law exponent. Although the distributions shown in Figs.~\ref{fig:5}(a) and (b) show heavy-tailed forms, they do not pass the Kolmogorov--Smirnov test with threshold quantile 0.9~\cite{Goldstein2004}. We believe this can be explained by the relatively short sampling times of the individual records. (The typical duration of individual records, in our case, ranges from a few months to a few years. This range is not as impressive as, e.g.\ Refs.~\cite{Oliveira2005,Vazquez2007b}, where surface mail is studied for a period of more than half a century with a resolution in days.) It may be the case that a credible power-law scaling will emerge over a sufficiently long observation period; however, so far, we cannot claim that typical $\tau$-distributions follow power-law forms. Nevertheless, almost every user has a heavy-tailed distribution (that is, much broader than a Poisson distribution with the same average interevent time $\langle \tau\rangle$).
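The phrase ``much broader than a Poisson distribution'' can be made concrete with a small numerical sketch: for a Poisson process the interevent times are exponential, with $\langle\tau^2\rangle=2\langle\tau\rangle^2$, whereas a truncated power law of equal mean has a far larger second moment. The exponent and cutoffs below are illustrative assumptions, not fitted values from the data.

```python
def powerlaw_moments(gamma, tau_min=1.0, tau_max=1e3):
    """Mean and second moment of p(tau) ~ tau^-gamma on [tau_min, tau_max]."""
    def integral(p):  # integral of tau^p over [tau_min, tau_max]
        return (tau_max ** (p + 1) - tau_min ** (p + 1)) / (p + 1)
    norm = integral(-gamma)
    return integral(1 - gamma) / norm, integral(2 - gamma) / norm

mean, m2_powerlaw = powerlaw_moments(gamma=2.2)
m2_exponential = 2.0 * mean ** 2  # exponential (Poisson-process) value of <tau^2>
print(m2_powerlaw / m2_exponential)  # the power law is several times broader
```

With these assumed parameters the ratio is roughly an order of magnitude, which is the sense in which the empirical distributions are ``much broader.''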
We use the second moment, $\langle \tau^2\rangle=\int \tau^2p(\tau)\,\mathrm{d}\tau$, to measure the width of $p(\tau)$. As seen in Fig.~\ref{fig:6}, all individual distributions have much larger $\langle \tau^2\rangle$ than the Poisson distributions with the same $\langle \tau\rangle$. Moreover, we observe a negative correlation between $\langle \tau^2\rangle$ and $A$, which can be seen as an individual-level variant of the relation in Fig.~\ref{fig:3}. Although the negative correlation can also be detected in Poisson distributions, this finding is interesting since it highlights the activity, as opposed to universality classes, as a signifier of human dynamics. To check the generality of our observations of the relation between activity and interevent time patterns, we investigate another empirical data set of mobile phone text-message communication. The data set comprises all messages sent and received by 20 users over half a year. Figure~\ref{fig:5}(c) and (d) report two typical interevent time distributions. These show yet more credible power-laws than those in the Netflix data (Fig.~\ref{fig:5}(a) and (b)). Actually, in this data set, all users show a power-law distribution passing the Kolmogorov--Smirnov test. (Note that the time resolution of the text-message data is seconds. Thus, half a year is long compared to the Netflix data.) The activities and exponents belong to the intervals $A\in [6.09, 60.72]$ and $\gamma \in [1.41,2.25]$. Even at the individual level (which is sensitive to fluctuations in personal habits), an almost monotonous relation between $A$ and $\gamma$ is observed (with the exception of two users that show a slight deviation). A similar relation can also be found in data from online Go (duiyi.sports.tom.com), where the individual records span years and the resolution is hours.
However, for commercial reasons, the aggregated data cannot be freely downloaded. Therefore, for the text-message and online Go data we cannot analyze the aggregate-level statistics. \section{Conclusions} In previous works, the heavy-tailed interevent time distribution has been explained by a queuing mechanism in the decision making of agents. This is a relevant scenario for task-driven situations (such as e-mail~\cite{Barabasi2005} or surface mail~\cite{Oliveira2005} communication). However, similar, heavy-tailed distributions also exist in many interest-driven systems (e.g.\ web browsing~\cite{Dezso2006}, networked computer games~\cite{Henderson2001}, online chat~\cite{Dewes2003}; or, as our examples, text-message sending, and movie rating), where no tasks are waiting to be executed. As opposed to focusing on universality classes (as for task-driven systems), we highlight a common characteristic of interest-driven systems: the power-law exponents are variable in a wide range with a strongly positive correlation to the individual's activity. This finding is helpful for further understanding the underlying origins of heavy tails in interest-driven systems. A power-law distribution of activity might also be a factor in the dynamics of task-driven systems. This is reminiscent of the power-law distribution of extinction events (that can be explained by both the internal dynamics of evolution, and a power-law distribution of the magnitudes of natural disasters~\cite{soc}). \acknowledgments We thank Mr.\ Wei Hong for providing us with the text-message data. B.J.K. acknowledges support from the Korea Science and Engineering Foundation by the grant No.\ R01--2007--000--20084--0, and H.A.T.K. acknowledges support from the Korea Research Foundation with grant No.\ KRF--2006--211--C00010. B.H.W. acknowledges the 973 Project 2006CB705500, T.Z. was supported by NNSFC--10635040, P.H. acknowledges support from The Swedish Foundation for Strategic Research.
\section{Introduction} Gruzinov has proposed that force-free electrodynamics is inadequate for describing some relativistic magnetospheres, because of insufficient pair supply to short out the induced electric fields. In particular he argued that this is the case for ``weak pulsars'' \cite{2013arXiv1303.4094G}. In this case, parts of the magnetosphere would fail to be force-free. When the fields are very strong and ${\bf E}\cdot{\bf B}$ is nonzero, charges will be accelerated to near the speed of light, reaching a steady state where the work done on them by the electric field is transferred to their radiation fields. In this situation, all charges move approximately at the speed of light, and the current is completely determined by the supply of charge. This regime was called {\it Aristotelian electrodynamics} (AE), because it is the velocity rather than the acceleration of the charges that is determined by the field. The name is quite fitting, since a central focus of Aristotelian mechanics was the regime of terminal velocity, in which the transients can be neglected \cite{2013arXiv1312.4057R}. This regime was discussed earlier in the context of pulsar electrodynamics in \cite{1985MitAG..63..174H,1989A&A...225..479F}. In the degenerate case when ${\bf E}\cdot{\bf B}=0$, if the field is magnetic ($B^2>E^2$), there exist Lorentz frames with vanishing electric field. The charges therefore have no secular acceleration, so it can be a good approximation in strong, magnetic degenerate fields to neglect the energy and momentum of the charges altogether. In this {\it force-free} approximation, the current is entirely determined by the fields at one time (see e.g.\ \cite{2014MNRAS.445.2500G} for a review and references). The Aristotelian and force-free approximations may coexist in one physical system that includes both degenerate and non-degenerate regions. Both approximations result from neglecting the mass of the charges in certain quantities.
Together they comprise what Gruzinov called the {\it electrodynamics of massless charges} \cite{2012arXiv1205.3367G}. Besides pulsars, another astrophysical setting where one would expect AE conditions to prevail is the magnetosphere of a compact binary system near coalescence. Even with copious pair creation, because of the motion of the binary, the fields change too rapidly for the charges to short out the electric field. One would therefore expect very strong charge acceleration to take place \cite{Sobacchi:2015yya}. AE may also be relevant in a laboratory setting, for plasmas in Petawatt laser fields \cite{2014arXiv1404.4615G}. In this paper I reformulate AE in a relativistically covariant form, and aim to elucidate the structure of the theory and its interface with force-free electrodynamics (FFE). The key observation is the central role played by the principal null directions of the electromagnetic field. Beyond that there is nothing new here. The covariant form also applies in curved spacetime, so would be particularly useful in systems containing a black hole. The spacetime signature is $({+}{-}{-}{-})$ and the speed of light is sometimes set to $c=1$. \section{Structure of electromagnetic fields} A non-null electromagnetic field $F_{ab}$ at a point has two null eigenvectors $k_\pm$, with opposite eigenvalues, \begin{equation}\label{eigen} F^a{}_bk^b_\pm = \pm E_0\, k_\pm^a. \end{equation} The directions of these null eigenvectors are called the {\it principal null directions} (PNDs) of the field \cite{1986ssv..book.....P}. The eigenvalue $\pm E_0$ is the electric field in a frame in which the electric and magnetic fields are parallel or one of them vanishes. In AE, $E_0\ne0$, and positive charges travel along one PND while negative charges travel along the other. In FFE, $E_0=0$, and the total current 4-vector also lies in the timelike plane spanned by the two PNDs. For null fields there is only one null eigenvector, with vanishing eigenvalue. 
A simple way to understand this structure is to observe that, at a given spacetime point, any electromagnetic field 2-form $F$ can be presented in one of two canonical ways in terms of an adapted orthonormal set of 1-forms $\{dt, dx, dy, dz\}$: \begin{eqnarray} F^{\rm generic} &=& E_0 \, dz\wedge dt + B_0\, dx\wedge dy,\label{generic}\\ F^{\rm null} &=& F_0 \, dz\wedge(dt-dx).\label{null} \end{eqnarray} The field in \eqref{generic} has two null eigenvectors, $k_\pm = \partial_t\pm \partial_z$, with eigenvalues $\pm E_0$. The field in \eqref{null} has one null eigenvector, $\partial_t+ \partial_x$, with vanishing eigenvalue. The electric and magnetic fields in \eqref{generic} are parallel in the Lorentz frame defined by $\partial_t$. The general statement is that they are parallel in any frame lying in the timelike plane spanned by $k_+$ and $k_-$. If the field is degenerate, i.e.\ if $F\wedge F=0$ (${\bf E}\cdot{\bf B}=0$), then either $E_0=0$ or $B_0=0$; that is, either ${\bf E}$ or ${\bf B}$ will vanish in these ``field eigenframes". If the field is null then ${\bf E}$ and ${\bf B}$ are perpendicular and have the same magnitude in all frames. The quantities $E_0$ and $B_0$ are related to Lorentz scalars by \begin{eqnarray} E_0^2 - B_0^2 &=& E^2 - B^2 = -{\textstyle{\frac{1}{2}}} F_{ab}F^{ab} \label{F2}\\ E_0B_0&=& {\bf E}\cdot{\bf B} = \tfrac18 \epsilon^{abcd}F_{ab}F_{cd}. \label{E0B0} \end{eqnarray} To fix signs uniquely, we take $E_0\ge0$. $E_0^2$ and $B_0^2$ are given in terms of $a=E^2-B^2$ and $b=2{\bf E}\cdot{\bf B}$ by $(\sqrt{a^2 + b^2}\,\pm\, a)/2$, respectively. The PNDs can be specified by a pair of spatial unit vectors ${\bf v}_\pm$ in a given frame via $k_\pm^a\leftrightarrow (1,{\bf v}_\pm)$. Gruzinov gives an explicit expression for ${\bf v}_\pm$ in terms of the electric and magnetic fields: \begin{equation}\label{vpm} {\bf v}_\pm = \frac{{\bf E}\times{\bf B} \pm(E_0{\bf E} + B_0{\bf B})}{E_0^2 + B^2}. 
\end{equation} [The denominator can also be written more symmetrically as ${\textstyle{\frac{1}{2}}}(E_0^2 +B_0^2 +E^2+ B^2)$.] The PNDs can be constructed as quadratic expressions from the spinor factors (or eigenspinors) of the electromagnetic spinor \cite{1986ssv..book.....P}.\footnote{An electromagnetic field can be represented by the trace-free $2\times2$ matrix $\Phi=({\bf E} + i{\bf B})\cdot\boldsymbol{\sigma}$. The Lorentz group acts by similarity transformation on $\Phi$, so the eigenvalues of $\Phi$ are Lorentz invariant. The properties of the Pauli matrices imply $\Phi^2 = ({\bf E} + i{\bf B})\cdot({\bf E} + i{\bf B})\equiv(E_0+iB_0)^2I$, so the eigenvalues are $\pm(E_0+iB_0)$. The eigenspinors $\lambda_\pm$ are the square roots of the PNDs of $F_{ab}$, in the sense that $k_\pm^a \leftrightarrow (1, \lambda_\pm^\dagger\boldsymbol{\sigma}\lambda_\pm)$, with the normalization $\lambda_\pm^\dagger\lambda_\pm=1$.} The Newman-Penrose formalism or other spinor techniques might therefore be useful for analytical and/or numerical studies of AE. The set of fields modulo Lorentz transformations can be represented as the upper half plane with vertical axis $E_0$ and horizontal axis $B_0$ (see Fig.~\ref{fieldplane}). Degenerate fields have either $E_0=0$ or $B_0=0$, while null fields have $E_0=B_0=0$. When $E_0=0$, the two values $\pm B_0$ label the same set of fields. The moduli space of fields is thus the cone obtained by identifying the positive-$B_0$ half of the dashed line in Fig.~\ref{fieldplane} with the negative half; the null fields lie at the vertex, and the magnetic and electric degenerate fields correspond to a pair of opposite rays on the cone. \begin{figure} \centering \includegraphics[scale=.6]{fieldplane.pdf} \caption{The space of electromagnetic fields modulo Lorentz transformations. The coordinates $(B_0,E_0)$ are the invariants defined in (\ref{F2},\ref{E0B0}). Degenerate fields are represented by the two axes. 
$(\pm B_0,0)$ are identified on the dashed lower boundary, so the space is actually a cone. AE applies above the grey strip of height $R^{-2/3}$, where $R$ is the curvature radius of the particle paths, reckoned in an instantaneous field eigenframe, and units $e=mc^2=1$ are used. The dashed $B_0$ axis corresponds to magnetic or null ($B_0=0$) degenerate fields; FFE applies for these if the energy density of the field is much greater than that of the charges.} \label{fieldplane} \end{figure} The current 4-vector lies in the plane spanned by the PNDs in both the AE and FFE regimes, though it is determined by different conditions. The distinguishing factor between these regimes is whether the PND eigenvalue is nonzero or zero. In the following we characterize these two regimes. The PNDs define a pair of ``field curvature scalars'', \begin{equation}\label{Rpm} R_\pm^{-1} = \frac{|(k_\pm\cdot \nabla) k_\mp|}{|k_+\cdot k_-|/2}. \end{equation} These quantities have dimensions of inverse length and are invariant under arbitrary rescalings of $k_\pm$.\footnote{Since $k_-^2=0$, $(k_+\cdot\nabla)k_-$ is orthogonal to $k_-$, so $(k_+\cdot\nabla)k_-=Ak_- + v$ for some spacelike vector $v$ orthogonal to both $k_-$ and $k_+$, and $|(k_+\cdot\nabla)k_-|=|v|$. Under a rescaling $k_\pm \rightarrow \alpha_\pm k_\pm$ the coefficient $A$ is modified, and $v\rightarrow \alpha_+\alpha_-v$, so $|v|$ is simply multiplied by $|\alpha_+\alpha_-|$, which cancels against a similar factor from the denominator of \eqref{Rpm}.} The two radii $R_\pm$ coincide if the PNDs are surface forming. For static electric or magnetic fields they coincide and are equal to the curvature radius of the field lines. More generally, they measure a rate at which one eigendirection bends away in the orthogonal spacelike direction when moving along the other PND.
They are equal to twice the magnitudes of the spin coefficients $\pi$ and $\tau$ associated with a null tetrad constructed from the PNDs \cite{1986ssv..book.....P}. They may serve generally to define the relevant length scales determining the applicability of the AE approximation and the ``guiding center approximation'' (see e.g.\ \cite{1987PhDT.......197B} and references therein).\footnote{I am grateful to Antony Speranza for suggesting that the invariants \eqref{Rpm} might supply the relevant notion of curvature length scale in this context.} \section{Force-free electrodynamics} If $E_0=0$, then the field is magnetic or null degenerate. If the energy density of the field is much greater than that of the charges, then the energy and momentum of the charges can be neglected, which means that the 4-force on the current can be set to zero, \begin{equation}\label{FF} F_{ab}j^b=0. \end{equation} This condition in turn implies that the field is degenerate. In the magnetic case, the FF current must be a linear combination of the vectors $k_\pm^a$, since the latter span the kernel (i.e.\ the null eigenspace) of $F_{ab}$. In the null case the PNDs coincide, the kernel of $F_{ab}$ is spanned by the unique PND and an orthogonal spacelike direction, and the current must be a linear combination of vectors in those directions. If there is only one sign of charge present, the current cannot be spacelike, so in the case of null fields it must be null. In either case, somewhat surprisingly, the current is determined by the fields at one time when \eqref{FF} and Maxwell's equations are imposed. One therefore has stand-alone evolution equations for magnetically dominated or null force-free fields, without reference to the charge or current densities. In the magnetic case the initial value problem for these equations is well-posed \cite{komissarov2002,palenzuela-etal2011,2013arXiv1307.7782P}.
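The eigenvector structure used in both regimes can be verified numerically. The sketch below is illustrative (the field components are arbitrary, and the matrix realization of $F^a{}_b$ assumes the convention in which $F^a{}_b(1,{\bf v})^b$ has time component ${\bf E}\cdot{\bf v}$ and spatial part ${\bf E}+{\bf v}\times{\bf B}$, consistent with the $({+}{-}{-}{-})$ signature used here): it builds $E_0$ and $B_0$ from the invariants, forms ${\bf v}_\pm$ from Gruzinov's expression \eqref{vpm}, and checks the eigenvalue relation \eqref{eigen}.

```python
import numpy as np

# Arbitrary illustrative field components (generic case: neither degenerate nor null).
E = np.array([0.3, -1.2, 0.7])
B = np.array([1.1, 0.4, -0.5])

# Invariants a = E^2 - B^2 and b = 2 E.B give E0^2, B0^2 = (sqrt(a^2+b^2) +/- a)/2,
# with E0 >= 0 and the sign of B0 fixed by E0*B0 = E.B.
a = E @ E - B @ B
b = 2.0 * (E @ B)
E0 = np.sqrt((np.hypot(a, b) + a) / 2.0)
B0 = (E @ B) / E0

# Spatial unit vectors of the principal null directions.
denom = E0**2 + B @ B
v_plus = (np.cross(E, B) + (E0 * E + B0 * B)) / denom
v_minus = (np.cross(E, B) - (E0 * E + B0 * B)) / denom

# Mixed-index field matrix F^a_b, chosen so that F^a_b (1, v)^b = (E.v, E + v x B).
F = np.zeros((4, 4))
F[0, 1:] = F[1:, 0] = E
F[1, 2], F[1, 3] = B[2], -B[1]
F[2, 1], F[2, 3] = -B[2], B[0]
F[3, 1], F[3, 2] = B[1], -B[0]

k_plus = np.concatenate(([1.0], v_plus))
k_minus = np.concatenate(([1.0], v_minus))
print(np.allclose(F @ k_plus, E0 * k_plus),
      np.allclose(F @ k_minus, -E0 * k_minus))  # True True
```

The same check confirms that $k_\pm$ are null, i.e.\ $|{\bf v}_\pm|=1$.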
\section{Aristotelian electrodynamics} For any field with $E_0\ne0$, i.e.\ which is not magnetic or null degenerate, the electric field can accelerate charges without deflection along the spatial directions of the PNDs. In a strong field the charges accelerate until they radiate energy at the same rate as the Lorentz force supplies it to them---unless collisions with something slow them down before that happens.\footnote{The charge acceleration could be halted, for example, by inverse Compton collisions with ambient photons, as in the stagnation surface scenario for black hole jets discussed in \cite{Broderick:2015swa}.} The power input is $\sim eE_0c$, reckoned in a field eigenframe in which the charge has speed $\approx c$, while a charge moving with large Lorentz $\gamma$ factor on a path following a trajectory with radius of curvature $R$ emits curvature radiation with power $\frac23 e^2\gamma^4c/R^2$ \cite{1975ctf..book.....L}. Equating the input and output power yields $\gamma =(3E_0 R^2/2e)^{1/4}$.\footnote{The characteristic frequency of the radiation is $\omega_c \sim \gamma^3c/R \sim (E_0/e)^{3/4} cR^{1/2}$.} The 4-velocity of the charge is nearly parallel to the principal null vector $k_+^a$ if the charge is positive, and to $k_-^a$ if the charge is negative. When does AE apply? The electric field should be strong enough that it is a good approximation to treat the current of the charges as running parallel to the principal null directions. If the field is approximately static, it is presumably necessary that the 4-velocity is ``nearly lightlike'' as reckoned in the approximately static frame, and that the conditions for reaching terminal velocity have been met. Let us use units with $e=mc^2=1$ for a moment. The conditions just stated are that the terminal gamma factor must be relativistic, say $(E_0 R^2)^{1/4}>2$, and that the particles reach that terminal velocity.
This latter condition requires that the voltage drop over a distance $\sim R$, reckoned in an instantaneous field eigenframe, be greater than the terminal gamma factor times the rest energy of the electron, $E_0 R > (E_0 R^2)^{1/4}$, i.e. $E_0 R^{2/3}>1$. The former condition is then automatically met in any relevant situation, since then $R\gg R_e=1$ (the classical electron radius $e^2/mc^2$ in these units). This reasoning suggests that the condition of AE applicability is \begin{equation}\label{AEapp} E_0> R^{-2/3}. \end{equation} Note that the range of applicability grows as $R$ grows (see Fig.~\ref{fieldplane}). We conjecture that the field curvature scalars \eqref{Rpm} provide the correct general interpretation of the length scale $R$ in \eqref{AEapp}. Given a unit timelike vector $u^a$, we can normalize the principal null vectors by the condition $k^a_\pm u_a = 1$. Then, because the charge 4-velocities are parallel to $k_\pm^a$ in the AE regime, the current in a region where $E_0\ne 0$ takes the form \begin{equation}\label{j} j^a = \rho_+ k_+^a - \rho_-k_-^a, \end{equation} where $\rho_+$ and $-\rho_-$ are the densities of positive and negative charge in the frame $u^a$. In terms of the velocities \eqref{vpm} relative to a fixed Lorentz frame, the 4-current density is given by \begin{equation} (\rho, {\bf j})=( \rho_+ - \rho_-, \rho_+ {\bf v}_+ - \rho_- {\bf v}_-). \end{equation} Using the eigenvector property \eqref{eigen}, the Lorentz 4-force density acting on the current \eqref{j} is \begin{equation} F^a{}_{b}j^b = E_0(\rho_+ k_+^a + \rho_-k_-^a). \end{equation} The power deposited into the charges in the frame $u^a$ is $E_0(\rho_+ + \rho_-)$. The charge density is determined by $F_{ab}$ without time derivatives via Gauss' law, $\rho=\nabla\cdot{\bf E}$. Since the current 4-vector $j^a$ \eqref{j} lies in the plane spanned by the two null eigenvectors, the remaining freedom in $j^a$ is only one function.
If all the charges in a given region have the same sign, then the current is null and thus fully determined. If instead both signs of charge are present (nonzero pair multiplicity), the charge densities are determined by pair production and subsequent propagation subject to the continuity equations, \begin{equation}\label{cons} \nabla_a (\rho_+ k_+^a) = \nabla_a (\rho_- k_-^a) =\Gamma. \end{equation} The pair creation rate $\G$ depends on $E_0$ and the photon density. Once the form of $\G$ is specified, Maxwell's equations, together with Eqs.\ \eqref{j} and \eqref{cons} and initial values for $\rho_\pm$, determine the time derivatives in terms of the field and charge densities at a given time. This system of equations is thus naively deterministic. (Whether it defines a well-posed initial value problem has not been examined, although Gruzinov has evolved them numerically and the solutions seem to behave well \cite{2013arXiv1303.4094G,2015arXiv150305158G}.) \subsection{Example: Gruzinov's device} To illustrate the workings of AE/FFE, Gruzinov considered an arrangement in two spatial dimensions where FFE and AE regimes coexist side by side. He called this the ``Device'' \cite{2014arXiv1402.1520G} (see Fig.\ 2 in this reference for an illustration). In the Device, the $y=0$ plane is a conductor, and opposing FFE Poynting fluxes propagating in the $\pm x$ directions collide in an AE zone where the energy is converted to curvature radiation. The charges are electrically attracted toward the image charges and repelled by the image currents below the conducting plane. In the FFE zone the charges move rectilinearly at the speed of light, so the electric and magnetic forces balance. This zone transitions to an AE radiation zone, in analogy with the transition outside a weak pulsar, without any discontinuity. In the AE zone, the magnetic field is weaker than the electric one, so the net force attracts the charges to the conductor.
Here I use the stationary Device to illustrate the formulation given above. The stationary field can be expressed as \begin{equation} F = E(y)dy\wedge[dt+ \beta(x) dx], \end{equation} corresponding to an electric field $E$ in the $y$ direction and a magnetic field $-\beta E$ in the $z$ direction. $F$ is a simple 2-form (as is any electromagnetic field in 2+1 dimensions), so it is degenerate. That is, one or both of $E_0$ and $B_0$ vanish. For $\beta^2 <1$ the field is electric, so it is $B_0$ that vanishes. There are two PNDs, \begin{equation}\label{pnd1} k_\pm = \partial_t -\beta\, \partial_x \pm \sqrt{1-\beta^2}\, \partial_y, \end{equation} with eigenvalues \begin{equation}\label{E0} E_0 = \pm E\sqrt{1-\beta^2}. \end{equation} The frames in which the magnetic field vanishes are those in the $k_+k_-$ plane. From \eqref{pnd1} we see that these have $x$-coordinate velocity $-\beta$ and any $y$-coordinate velocity with magnitude less than or equal to $\sqrt{1-\beta^2}$. A positive charge moving in the direction of the null vector $k_+$ feels a Lorentz force proportional to $k_+$. If $\beta=0$ the spatial direction of such motion is just that of the electric field, $\partial_y$, while if $\beta$ is nonzero there is also a $\partial_x$ component. For $\beta^2=1$ the field is null, and the two PNDs coalesce into a single PND with vanishing eigenvalue. In this case the field takes the form $F=E(y) dy\wedge(dt\pm dx)$, which is a solution of the force-free equations. In fact it has the form of the simple class of null Poynting flux solutions discussed in \cite{2014MNRAS.445.2500G}. For $\beta^2 >1$ the field is magnetic, but it is not a force-free solution for any nontrivial choice of the functions $E$ and $\beta$, so that case plays no role here. Let us now consider the AE field equations in the electric case $\beta^2<1$. First, the field satisfies Faraday's law, $dF=0$, by inspection. 
The remaining AE equations require that (i) the current computed from $F$ according to Maxwell's equations is equal to \eqref{j} and (ii) the continuity equation \eqref{cons} holds. Assuming there is no pair creation ($\G=0$), and that only electrons are present, the continuity equation follows from the Maxwell equation, $\partial_b F^{ab}= -\rho_-k_-^a$. This equation implies $\rho_-=-E_{,y}$, and $E_{,y}/E = -\beta_{,x}/\sqrt{1-\beta^2}$. Since $E$ depends only on $y$, and $\beta$ depends only on $x$, the general solution is given by \begin{equation} E=E(0)\exp(-y/\ell),\qquad \beta=\sin (x/\ell), \end{equation} where $\ell$ is a constant length. This solution matches smoothly to the force-free Poynting flux solution with $\beta=\pm1$ at $x/\ell=\pm \pi/2$. The charges and Poynting flux are thus flowing inwards toward $x=0$ from both sides. What about the validity of the approximations for the Device? In the FF zone the charges move in straight lines at the speed of light, which can presumably be understood as an ultrarelativistic approximation. Where the radiation zone begins, $E_0$ starts out at zero \eqref{E0}. According to \eqref{AEapp}, AE therefore applies at the transition only if $R=\infty$, where $R$ is the radius of curvature in an instantaneous field eigenframe. We proposed that the invariant $R_-$ defined in \eqref{Rpm} may capture the relevant curvature radius. The vector fields \eqref{pnd1} are surface forming, so $R_+=R_-$, and we find $R_\pm^{-1}=\partial_x(1-\beta^2)^{-1/2}$. This diverges at $x/\ell=\pi/2$, so $R_\pm\rightarrow 0$ there, hence for any $E(0)$ there is a region close to $x/\ell=\pi/2$ where the applicability of AE is questionable. Specifically, using \eqref{E0}, the condition \eqref{AEapp} becomes $E(0)e^{-y/\ell}> \ell^{-2/3}[\sin(x/\ell)/\cos^2(x/\ell)]^{2/3}$. 
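As a quick numerical sanity check (not part of the original analysis), one can verify that the separated profiles $E=E(0)e^{-y/\ell}$ and $\beta=\sin(x/\ell)$ satisfy the constraint $E_{,y}/E = -\beta_{,x}/\sqrt{1-\beta^2}$ throughout the electric zone $|x/\ell|<\pi/2$; the length scale and field amplitude below are arbitrary:

```python
import numpy as np

ell, E_amp = 1.0, 3.0  # arbitrary length scale ell and amplitude E(0)

def E(y):
    # Electric field profile E(y) = E(0) exp(-y / ell)
    return E_amp * np.exp(-y / ell)

def beta(x):
    # Drift profile beta(x) = sin(x / ell)
    return np.sin(x / ell)

def deriv(fn, t, h=1e-6):
    # Central finite difference
    return (fn(t + h) - fn(t - h)) / (2.0 * h)

# Check E_{,y}/E = -beta_{,x}/sqrt(1 - beta^2) on a grid in the electric zone
for x in np.linspace(-1.4, 1.4, 9):          # |x/ell| < pi/2
    rhs = -deriv(beta, x) / np.sqrt(1.0 - beta(x) ** 2)
    for y in np.linspace(0.0, 3.0, 9):
        lhs = deriv(E, y) / E(y)
        assert abs(lhs - rhs) < 1e-6          # both sides equal -1/ell
```

Both sides reduce to the constant $-1/\ell$, which is why the exponential-in-$y$ and sine-in-$x$ profiles together form the general separated solution.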
\section{Summary} Aristotelian Electrodynamics \cite{2013arXiv1303.4094G} is the theory of a plasma with a very strong electric field, whose charges move ultrarelativistically, essentially at the speed of light in the direction of the local electric field. In this paper we have formulated AE in a way that brings out the central role played by the principal null directions $k^a_\pm$ of the electromagnetic field tensor \eqref{eigen}. These determine the current \eqref{j} up to the densities of positive and negative charge, and can be used to construct the field curvature scalars \eqref{Rpm} relevant to the applicability of the AE approximation. The total charge density is determined by the divergence of the electric field, so just one function in the 4-current, say the positive charge density, remains undetermined by the field. This is determined by the pair production rate $\Gamma$ and the continuity equation \eqref{cons}. The rate of 4-momentum density deposited into the charges is proportional to the eigenvalue $E_0$ \eqref{eigen}, and this eigenvalue can also be used to parametrize $\Gamma$. \begin{acknowledgments} I am grateful to Sam Gralla and Antony Speranza for helpful comments on a draft, to Antony for extensive discussions about the field curvature scalars, and to Carlo Rovelli for instruction on Aristotelian physics. This research was supported in part by the National Science Foundation under grants No.\ PHY-1407744 and PHY11-25915. \end{acknowledgments}
\section{Introduction} \begin{figure} \includegraphics[width=0.9\columnwidth]{images/rotation-crop.pdf} \vspace{-5pt} \caption{The non-dominated front of behavior domination can be viewed as a rotation of the Pareto front. Guarantees from multiobjective optimization can then be applied.} \label{FigRotation} \end{figure} The ability to discover and exploit stepping stones is a hallmark of evolutionary systems. Evolutionary algorithms driven by a single fitness objective are often victims of \emph{deception}: they converge to small areas of the search space, missing available stepping stones. Novelty search \cite{lehman08, lehman11a} is an increasingly popular paradigm that overcomes deception by ranking solutions based on how different they are from others. Novelty is computed in the space of \emph{behaviors}, i.e., vectors containing semantic information about \emph{how} a solution achieves its performance when it is evaluated. In a collection of solutions with sufficiently diverse behaviors, some solutions will be useful stepping stones. However, with a large space of possible behaviors, novelty search can become increasingly unfocused, spending most of its resources in regions that will never lead to promising solutions. Recently, several approaches have been proposed to combine novelty with a more traditional fitness objective \cite{mouret15a, mouret09, gomes15, gomez09, pugh15} to reorient search towards fitness as it explores the behavior space. These approaches have helped scale novelty search to more complex environments, including an array of control \cite{mouret15, mouret12, bowren16} and content generation \cite{lehman11b, liapis13, preuss14, lehman12, nguyen15, nguyen16, lehman16} domains. This paper shows that, aside from focusing search overall, the addition of fitness can also be used to focus search on discovering useful stepping stones. 
The assumption is that the most likely stepping stones occur at local optima along some dimensions of the behavior space. Competition in several existing algorithms inhibits the discovery and maintenance of such stepping stones, resulting in ``spooky action at a distance'', when a small search step in one part of the space causes a novel solution to be lost in another part. Based on the notion of \emph{behavior domination}, a class of algorithms is defined in this paper as a framework for understanding the dynamics of behavior-driven search and developing algorithms that avoid such problems. Intuitively, behavior domination means that a solution exerts a negative effect on the ranking of every weaker solution, and this effect increases as their difference in fitness increases \emph{and} as the distance between their behaviors decreases. Behavior domination algorithms include several existing algorithms, and the definition makes it possible to transfer theoretical guarantees from multiobjective optimization; the non-dominated front induced by behavior domination can be viewed (Figure~\ref{FigRotation}) as a rotation of a Pareto front. Within this framework, a new algorithm is developed that uses fast non-dominated sorting \cite{deb02}. Experimental results show that this algorithm outperforms existing approaches in domains that contain useful stepping stones, and its advantage is sustained with scale. The conclusion is that behavior domination can help illuminate the complex dynamics of behavior-driven search, and can thus lead to the design of more scalable and robust algorithms. \section{Behavior-driven Ranking} \label{SecBehaviorDriven} Behavior-driven algorithms are a class of evolutionary algorithms that are guided by information about \emph{how} a solution achieves its performance during evaluation. The core defining component of such an algorithm is the ranking procedure it uses to order solutions for selection or replacement. 
This section reviews background for behavior-driven search, first defining some useful terms, and then describing examples of popular behavior-driven algorithms. \subsection{Behavior and Behavior Characterization} \label{SubSecBehavior} Behavior-driven algorithms use a notion of solution behavior to induce a meaningful distance metric between solutions and to facilitate the drive towards novelty and diversity. For example, in a robot control domain, a solution's behavior may be some function of the robot's trajectory \cite{gomez09, gomes13, mouret12}, whereas in an image generation domain, it may be the result of applying some deep features to the image \cite{liapis13, nguyen15, nguyen16, lehman16}. The following definitions of behavior, behavior characterization, behavior space, and behavior distance are fairly universal in the literature, though often not explicitly defined. \begin{definition} A \emph{behavior} of solution $x$ in environment $E$ is a vector $b_x$ resulting from the evaluation of $x$ in $E$. \end{definition} \begin{definition} A \emph{behavior characterization} $b(x)$ for an environment $E$ is a (possibly stochastic) function mapping any solution $x$ to its behavior $b_x$, given the evaluation of $x$ in $E$. \end{definition} By definition, the behavior characterization can be any function mapping solutions to vectors. In practice, the behavior characterization is usually designed to \emph{align} with a fitness measure or notion of interestingness in the evaluation environment \cite{pugh15}. For example, in a maze navigation task, the final position of a robot aligns more with solving the task than its final orientation. In other words, the behavior characterization is designed to capture a space whose exploration is expected to have practical benefits. \begin{definition} The \emph{behavior space} of a behavior characterization $b$ is the co-domain of $b$. 
\end{definition} The exploration of the behavior space by a search algorithm is facilitated by a function giving the distance between two solutions as a function of their behavior. \begin{definition} A \emph{behavior distance} is a metric $d(b(x), b(y))$. \end{definition} In pure novelty search, the behavior of a solution is the only information returned from evaluation that is used in the ranking system. This is in contrast to traditional evolutionary algorithms, which use only a single scalar fitness value $f_x$ computed from a scalar fitness function $f(x)$. In general, a behavior-driven algorithm can take advantage of both behavior and fitness when ranking solutions. \subsection{Existing Behavior-driven Algorithms} \label{SubSecExistingBDAs} The following are some of the most popular schemes for behavior-driven algorithms. As extensions to the pure novelty search paradigm, several recent algorithms use both behavior and fitness information in ranking, trying to navigate the trade-off between the pressures towards novelty and diversity, and the pressure to maximize fitness. Although more algorithms exist than are covered here, the ones below should give a sense of the behavior-driven algorithm design space. (See \cite{mouret12, pugh15,gomes15} for previous reviews of these algorithms.) \subsubsection{Novelty search (NS) \cite{lehman08, lehman11a}} Each solution is ranked based on a single \emph{novelty} function $n$, giving the average distance of its behavior to the $k$ nearest behaviors of other solutions in the population and an archive of past solutions accumulated throughout search. More specifically, $$ n(x) = \frac{1}{k}\sum_{i = 1}^{k} d(b(x), b(y_i))$$ where $y_i$ is the $i^{th}$ nearest neighbor of $x$ in the behavior space.
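As a concrete illustration (a minimal sketch, not the authors' implementation), the novelty score can be computed directly from the definition; the 1-D behavior values here are hypothetical:

```python
import numpy as np

def novelty(x, others, k):
    """Average behavior distance from x to its k nearest neighbors
    among the rest of the population plus the archive (`others`)."""
    dists = sorted(np.linalg.norm(np.asarray(x) - np.asarray(y)) for y in others)
    return sum(dists[:k]) / k

# Hypothetical 1-D behaviors for a population of four solutions
behaviors = [np.array([0.0]), np.array([10.0]), np.array([11.0]), np.array([21.0])]
scores = [novelty(bx, behaviors[:i] + behaviors[i + 1:], k=2)
          for i, bx in enumerate(behaviors)]
# Isolated solutions at the edges receive higher novelty than the
# crowded pair in the middle.
```

The isolated endpoints score highest, which is exactly the pressure that pushes the population away from already-visited regions of the behavior space.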
The prevalent method of building the archive, and the method used in this paper, is to add each solution to the archive with a fixed probability $p_{add}$ \cite{lehman10b, gomes15}, in which case the archive represents a sampling from the distribution of areas visited so far. Novelty search captures the idea that more complex and interesting solutions lie away from the visited areas of the behavior space. \subsubsection{Linear scalarization of novelty and fitness (LSNF) \cite{cuccu11,gomes15}} An intuitive method of combining novelty and fitness is to rank a solution based on a linear scalarization of its fitness and novelty: $$ \text{score}(x) = (1 - p) \cdot \frac{f(x) - f_{min}}{f_{max} - f_{min}} + p \cdot \frac{n(x) - n_{min}}{n_{max} - n_{min}}. $$ The fitness and novelty scores here are normalized to compensate for differences in scale at every iteration. $f_{min}$, $f_{max}$, $n_{min}$, and $n_{max}$ are the minimum and maximum fitness and novelty scores in the current population. The parameter $p$ controls the trade-off of fitness vs. novelty. LSNF with $p = 0.5$ has been shown to be robust across domains \cite{gomes15}, and that is the version considered here. \subsubsection{NSGA-II with novelty and fitness objectives (NSGA-NF) \cite{mouret09, mouret12}} Another approach is to use novelty and fitness as two objectives within NSGA-II \cite{deb02}, the popular multiobjective framework. Often the novelty score in this approach is \emph{behavioral diversity}, which is a special case of novelty, where $k$ is the population size and there is no archive. This approach has been shown to improve performance on many tasks, especially those in evolutionary robotics, where some constant diversity is useful to avoid local optima.
\subsubsection{Novelty search with local competition (NSLC) \cite{lehman11b, pugh15}} Novelty search with local competition also uses an NSGA-II ranking system, but instead of using a raw fitness objective alongside the novelty objective, it uses a relative fitness score: a solution's rank in fitness among its $k$ nearest neighbors. This enables the suitable exploration of diverse niches in the behavior space with different orders of magnitude of fitness. Lower-fitness niches are not outpaced and forgotten by having too much of the search's resources committed to the globally most fit regions. NSLC has yielded particularly promising results in content generation domains, such as generating virtual creatures and images \cite{lehman11b, nguyen15}. \subsubsection{MAP-elites \cite{mouret15, mouret15a}} In MAP-elites, the behavior space is broken up into a set of bins, such that each behavior is mapped to a bin. For each bin, the solution with the highest fitness whose behavior falls into that bin is kept. The population at any point thus consists of the most fit (elite) solution from each bin for which a behavior has been found. Because MAP-elites keeps an elite from all visited bins in the behavior space, at any point the population displays a map of the levels of fitness achievable throughout the space. So, along with being a method for generating high-quality diverse solutions, MAP-elites is a useful tool for visualizing how the behavior space and fitness landscape relate. \subsubsection{Fitness-based search} It is worth including fitness-based search, the standard approach to evolutionary search, as the trivial example. In fitness-based search, solutions are ranked based on a single fitness value. Any additionally available behavior information is ignored.\\ \noindent The proliferation of recently introduced behavior-driven methods gives a strong indication that novelty alone is not generally sufficient for tackling complex domains.
The methods reviewed above each have intriguing definitions that suggest they would be a good option for particular kinds of problems. However, unforeseen dynamics can emerge from the interaction between novelty and fitness, which can be difficult to disentangle. The next section sheds some light on these issues, resulting in the characterization of these existing algorithms, and the development of a new approach. \section{Behavior Domination Algorithms} \label{SecBDAs} The goal is to maintain the power of novelty search to discover stepping stones, while adding a fitness drive to focus search. Novelty search has demonstrated that a sufficiently diverse collection of solutions most likely contains useful stepping stones for solving the problem at hand. When adding fitness to focus search, the presumption is that the most useful stepping stones will be local optima along some dimensions of the behavior space. As pure fitness-based search maintains the most fit solutions, and pure novelty search maintains the most novel solutions, a method that combines the two should maintain the most promising set of stepping stones discovered so far, and the quality of this set should improve over time. Section~\ref{SubSecSpooky} discusses the presence of ``spooky action at a distance'' in several existing algorithms, which inhibits their ability to preserve useful stepping stones. Section~\ref{SubSecBDF} presents a formalization of behavior domination, which defines a sub-class of behavior-driven algorithms that can avoid this pitfall and guarantee monotonic improvement of collected stepping stones. Section~\ref{SubSecExistingBDMAs} shows that several existing behavior-driven algorithms are in this sub-class. Section~\ref{SecNewBDMA} uses behavior domination to develop a new algorithm based on fast non-dominated sorting. 
\subsection{``Spooky Action at a Distance'' for Behavior-driven Search} \label{SubSecSpooky} When novelty and fitness are combined, the interaction between these two drives can have unintended consequences. The stepping stone discovery ability of novelty search may not necessarily be preserved. For example, if a small change in behavior of one solution has a fatal effect on a distant isolated solution on the other edge of the explored behavior space, then a valuable stepping stone may be lost. The algorithm has taken one small step forward, but one large step back. This unsettling effect is an instance of ``spooky action at a distance'' for behavior-driven search. More specifically, spooky action at a distance occurs when a ranking decision based on a \emph{local} increase in novelty results in a \emph{global} decrease of novelty. Here, global novelty is defined by two measures: GNP, the maximum behavior distance between any pair of solutions in the population; and GNT, the total behavior distance between all pairs of solutions. It turns out several existing behavior-driven algorithms exhibit spooky action at a distance. The following example is for a one-dimensional behavior space. Consider a population $P = \{x_0, x_1, x_2, x_3\}$, and an empty archive, where $b(x_0) = b_o$, $f(x_0) = f_o$, $b(x_1) = b_o + 10$, $f(x_1) = f_o + 11$, $b(x_2) = b_o + 11$, $f(x_2) = f_o + 10$, $b(x_3) = b_o + 21$, and $f(x_3) = f_o$. Now, consider an identical setup but with $P' = \{x_0, x_1, x_2, x_4\}$, where $b(x_4) = b_o + 22$, and $f(x_4) = f_o$ (Figure~\ref{FigSpooky}). \begin{figure} \includegraphics[width=0.95\columnwidth]{images/spooky_example_labeled-crop.pdf} \caption{\emph{(spooky action at a distance)} Consider populations $P = \{x_0, x_1, x_2, x_3\}$ and $P' = \{x_0, x_1, x_2, x_4\}$, in which one solution must be selected for deletion. Suppose $k = 2$, and the archive is empty. With population $P$, LSNF, NSGA-NF, and NSLC all delete $x_2$.
However, with population $P'$, they all delete $x_0$. The small \emph{local} increase in novelty from $x_3$ to $x_4$ thus causes a \emph{global} decrease in novelty (Section~\ref{SubSecSpooky}).} \label{FigSpooky} \end{figure} Suppose an algorithm $A$ must delete one solution, and $A$ deletes $x_2$ with population $P$, but $A$ deletes $x_0$ with population $P'$. This change must be caused by the move of $x_3$ to $x_4$. $P$ with $x_2$ deleted has global novelty $\mbox{GNP}(P) = 21$ and $\mbox{GNT}(P) = 41$. However, $P'$ with $x_0$ deleted has global novelty $\mbox{GNP}(P') = 12$ and $\mbox{GNT}(P') = 24$. Thus, $A$ demonstrates spooky action at a distance. Suppose $k = 2$. Then given $P$, $n(x_0) = 21/2$, $n(x_1) = 11/2$, $n(x_2) = 11/2$, and $n(x_3) = 21/2$. Given $P'$, $n(x_0) = 21/2$, $n(x_1) = 11/2$, $n(x_2) = 12/2$, and $n(x_4) = 23/2$. The next three observations show spooky action at a distance for LSNF, NSGA-NF, and NSLC. \begin{observation}[Spookiness of LSNF] \label{ObsSpookyLSNF} With $P$, $\mbox{score}(x_0) = 0 + 1$, $\mbox{score}(x_1) = 1 + 0$, $\mbox{score}(x_2) = 10/11 + 0$, and $\mbox{score}(x_3) = 0 + 1$ $\implies$ $x_2$ is deleted. With $P'$, $\mbox{score}(x_0) = 0 + 10/12$, $\mbox{score}(x_1) = 1 + 0$, $\mbox{score}(x_2) = 10/11 + 1/12$, and $\mbox{score}(x_4) = 0 + 1$ $\implies$ $x_0$ is deleted. \end{observation} \begin{observation}[Spookiness of NSGA-NF] \label{ObsSpookyNSGANF} With $P$, $x_1$ dominates $x_2$, while all other solutions are non-dominated $\implies$ $x_2$ is deleted. With $P'$, $x_2$ is no longer dominated, but $x_4$ now dominates $x_0$ $\implies$ $x_0$ is deleted. \end{observation} \begin{observation}[Spookiness of NSLC] \label{ObsSpookyNSLC} With $P$, the local competition scores of $x_0, x_1, x_2, x_3$ are $0, 2, 1, 0$, resp. So, $x_1$ dominates $x_2$, while all other solutions are non-dominated $\implies$ $x_2$ is deleted. With $P'$, the local competition scores of $x_0, x_1, x_2, x_4$ are again $0, 2, 1, 0$, resp. 
So, as in Observation~\ref{ObsSpookyNSGANF}, $x_2$ is no longer dominated, but $x_4$ now dominates $x_0$ $\implies$ $x_0$ is deleted. \end{observation} With problems such as ``spooky action at a distance'' in mind, the next section introduces a notion of behavior domination from which algorithms can be developed that avoid these issues. \subsection{Ranking by Behavior Domination} \label{SubSecBDF} A practical unifying framework for behavior-driven methods should capture both the pure novelty maximization and pure fitness maximization extremes, as well as a trade-off space, that potentially captures some of the existing approaches and suggests new ones. Many components of existing ranking mechanisms (Section~\ref{SubSecExistingBDAs}) can be represented in terms of pair-wise relationships between solutions, based on their behaviors and fitnesses. These pairwise interactions capture the positive or negative effects solutions have on each other during ranking when they are competing for a spot in the population. Focusing on pairwise effects also helps avoid unintended global effects, such as that discussed in Section~\ref{SubSecSpooky}. To focus search on maintaining the most efficient set of stepping stones, behavior domination aims to formalize the idea that a solution should dominate solutions with similar behaviors and lower fitnesses. In particular, each solution exerts a domination effect over each weaker solution. Intuitively, the domination effect should increase (decrease) as the difference between their fitnesses increases (decreases), and increase (decrease) as the distance between their behaviors decreases (increases). The following definition of domination effect captures these requirements. \begin{definition} The \emph{domination effect} of $x$ on $y$ is a function $$e(x, y) = f(x) - f(y) - d(b(x), b(y))$$ where $f$ is a fitness function, $b$ is a behavior characterization, and $d$ is a behavior distance. 
\end{definition} The score produced by the domination effect function can be used in various ways in a ranking system. Two common methods of combining pairwise scores are (1) ranking by aggregation, and (2) ranking by domination. In ranking by aggregation, solutions are ranked by a single score based on a sum of pairwise scores, e.g., the novelty score is a normalized sum of distances between the behaviors of pairs of solutions. In ranking by domination, solutions are ranked in a partial order, by a boolean pairwise relation of whether they dominate one another. To enable ranking by domination, the following definition provides such a pairwise operator, based on the domination effect function defined above. \begin{definition} \label{DefDomination} If $e(x, y) \geq 0$, then $x \succeq y$, that is, $x$ \emph{dominates} $y$. \end{definition} It turns out that for any specification of the domination effect, i.e., any choice of $f$, $b$, and $d$, this definition of domination defines a partial order over solutions. \begin{theorem} $\succeq$ induces a partial order over solutions for any choice of $f$, $b$, and $d$. \end{theorem} \begin{proof} Transitivity: Suppose $x \succeq y$ and $y \succeq z$. Then, $0 \leq e(x, y) + e(y, z) = (f(x) - f(y) - d(x, y)) + (f(y) - f(z) - d(y, z)) = f(x) - f(z) - (d(x, y) + d(y, z)) \leq f(x) - f(z) - d(x,z) = e(x, z)$, where the last inequality follows from the triangle inequality for $d$, $\implies x \succeq z$. Reflexivity and antisymmetry are similarly straightforward to show. \end{proof} The partial order defined by behavior domination is similar to the one defined by Pareto-dominance in multiobjective optimization. Note that, even though they make use of a notion of Pareto-dominance, neither NSGA-NF nor NSLC has the property of a stable partial-ordering of solutions, because the novelty objective fluctuates as the population changes over time. On the other hand, the front induced by behavior domination can be viewed geometrically as a rotation of a Pareto front (Figure~\ref{FigRotation}).
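The pairwise nature of the domination effect can be made concrete with a short sketch (hypothetical values echoing the spooky-action example, with $b_o = f_o = 0$ and a unit-weight 1-D behavior distance assumed). Because every ranking verdict depends only on a pair of solutions, moving one solution in a distant region of the behavior space cannot change the verdicts among the others:

```python
def effect(x, y, f, b):
    # Domination effect e(x, y) = f(x) - f(y) - d(b(x), b(y)),
    # with d taken as 1-D absolute distance (weight w = 1 assumed)
    return f[x] - f[y] - abs(b[x] - b[y])

def dominated(pop, f, b):
    # Members of pop dominated by some other member of pop
    return {y for y in pop for x in pop
            if x != y and effect(x, y, f, b) >= 0}

# Hypothetical populations mirroring the spooky-action example (f_o = b_o = 0)
b = {"x0": 0, "x1": 10, "x2": 11, "x3": 21, "x4": 22}
f = {"x0": 0, "x1": 11, "x2": 10, "x3": 0, "x4": 0}
P  = ["x0", "x1", "x2", "x3"]
Pp = ["x0", "x1", "x2", "x4"]

# The move of x3 to x4 can only affect x3/x4's own status; the verdicts
# on x0, x1, and x2 are identical in both populations.
common = {"x0", "x1", "x2"}
assert dominated(P, f, b) & common == dominated(Pp, f, b) & common
```

This pairwise locality is what rules out the global ranking flips exhibited by the aggregation- and novelty-objective-based schemes.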
Algorithms based on behavior domination can then more easily inherit properties from multiobjective optimization, e.g., guarantees that the non-dominated front dominates every point ever generated and all area dominated by any point ever generated, and guarantees regarding near-optimal distribution of non-dominated solutions \cite{laumanns02, deb16, coello07}. The practical expectation is that the utility of non-dominated solutions as stepping stones in multiobjective optimization will transfer to the case of behavior domination. An algorithm based on this connection to multiobjective optimization is introduced in Section~\ref{SecNewBDMA}. Although aggregation and domination are the most prevalent approaches to ranking, the definition of a behavior domination algorithm does not preclude the existence of other schemes that use a domination effect function. \begin{definition} Every algorithm whose ranking mechanism's dependence on $f$ and $b$ can be defined in terms of a domination effect function is a \emph{behavior domination algorithm (BDMA)}. \end{definition} Behavior domination algorithms can avoid ``spooky action at a distance'' (Section~\ref{SubSecSpooky}) by using a domination-based ranking scheme. When ranking decisions are only made with respect to the operator $\succeq$, moving a solution $y$ away from a non-dominated solution $x$ cannot cause $x$ to become dominated. For example, see the representation of MAP-elites in the next section (Observation~\ref{ObsMEBDMA}). \subsection{BDMA Representation of Existing Algorithms} \label{SubSecExistingBDMAs} The next three observations demonstrate how the behavioral domination framework can be used to represent existing algorithms. Such observations are helpful in clarifying the space of BDMAs. 
\begin{observation}[Fitness-based search is a BDMA] \label{ObsFitBDMA} Since fitness-based search does not make use of behavior, this can be achieved by setting $b$ to be the trivial behavior characterization, $b(x) = 0 \ \forall x$. Then, $\succeq$ (Definition~\ref{DefDomination}) induces the same total ordering as sorting fitness scores directly. \end{observation} \begin{observation}[Novelty search is a BDMA] This is another trivial case. Since novelty search does not make use of the fitness function, this is similarly achieved by choosing $f(x) = 0 \ \forall x$, and using the usual novelty search aggregation scoring for ranking solutions. \end{observation} \begin{observation}[MAP-elites is a BDMA] \label{ObsMEBDMA} Consider an instance of MAP-elites with fitness function $f$, behavior characterization $b_o$, and binning function $\beta$ that maps each behavior to its bin. Choose $b$ such that $b(x) = \beta(b_o(x))$, and define $d$ by $$ d(b(x), b(y))= \begin{cases} 0,& \text{if } \ b(x) = b(y),\\ \infty,& \text{otherwise.} \end{cases} $$ Then, the non-dominated solutions under $\succeq$ are exactly the elites maintained by the original MAP-elites algorithm. \end{observation} The above subsumptions demonstrate the breadth of the space of BDMAs. However, each of these representations avoids the natural geometric form of the domination effect function. Section~\ref{SecNewBDMA} develops an algorithm that follows more directly from Definition~\ref{DefDomination}. \subsection{A non-dominated sorting BDMA: BDMA-2} \label{SecNewBDMA} Given a fitness function $f$ and a behavior characterization $b$, here let the domination effect function be parameterized completely by the choice of behavior distance $d$. A new algorithm, BDMA-2, is defined with a scaled L2 distance metric: $$ d(b(x), b(y))= w \cdot \lVert b(x) - b(y) \rVert_2 .$$ The inclusion of the scaling parameter $w$ is useful for flexibility in relating fitness and behavior distance numerically.
Increasing $w$ increases the emphasis on novelty; decreasing it increases the emphasis on fitness. Figure~\ref{FigFourPeaks} \begin{figure} \includegraphics[width=0.9\columnwidth]{images/maintaining_stepping_stones-crop.pdf} \hspace{5pt} \caption{A sample BDMA-2 population successfully maintaining solutions at each local maximum discovered in the four peaks domain (Section~\ref{SubSecMaintaingStones}). Dashed lines indicate the region each solution dominates for $w = 4$. The five solutions on the non-dominated front are in red, including two around the peak where $b(x) = 40$.} \label{FigFourPeaks} \end{figure} depicts an instance of a ranking step in BDMA-2, including the induced domination structure, taken from the experiments in Section~\ref{SubSecMaintaingStones}. Now that a suitable behavior distance is defined, a fast non-dominated sort (as in NSGA-II \cite{deb02}) is used to rank the solutions, based on the $\succeq$ operator induced by $d$. In contrast to the distance function used by MAP-elites (Obs.~\ref{ObsMEBDMA}), the L2 distance allows the flexible discovery of the locations of an efficient set of stepping stones, as opposed to having their locations confined to predetermined bins. The expectation is that the success of the non-dominated front in NSGA-II in providing useful stepping stones for multiobjective optimization will transfer to this case of behavior domination. Similar to a previous behavior-driven tie-breaking approach \cite{hodjat16}, ties are broken on the final front from which solutions must be kept by iteratively excluding the less fit of the two nearest solutions on that front, until the desired number of solutions remain. Specifying the number of top solutions to select via the fast non-dominated sort can be viewed as specifying the number of stepping stones to be maintained during search.
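A minimal sketch of the ranking step (not the authors' code) shows how successive non-dominated fronts are peeled off under behavior domination with the scaled L2 distance. The population, fitness values, and $w$ below are hypothetical, and solutions with identical fitness and behavior are assumed absent (such duplicates would dominate each other and need a tie-break):

```python
import numpy as np

def bdma2_fronts(pop, f, b, w):
    """Peel off successive non-dominated fronts under behavior domination
    with d(b(x), b(y)) = w * ||b(x) - b(y)||_2."""
    def dominates(x, y):
        # Domination effect e(x, y) = f(x) - f(y) - d(b(x), b(y))
        e = f[x] - f[y] - w * np.linalg.norm(np.asarray(b[x]) - np.asarray(b[y]))
        return e >= 0
    remaining, fronts = list(pop), []
    while remaining:
        front = [x for x in remaining
                 if not any(dominates(y, x) for y in remaining if y != x)]
        fronts.append(front)
        remaining = [x for x in remaining if x not in front]
    return fronts

# Hypothetical 1-D population: the behavior is the solution itself, b(x) = x
b = {"a": [0.0], "b": [10.0], "c": [11.0], "d": [21.0]}
f = {"a": 0.0, "b": 11.0, "c": 10.0, "d": 0.0}
fronts = bdma2_fronts(["a", "b", "c", "d"], f, b, w=1.0)
```

With this toy setup the single high-fitness solution forms the first front and the remaining solutions stratify behind it; in BDMA-2 proper, only the top fronts (the designated stepping stones) are retained this way, with ties on the final front broken by behavior distance.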
To preserve the efficient exploration capabilities of novelty search while maintaining useful stepping stones, it is useful to have a subset of the population selected as stepping stones, and the remainder selected by novelty alone. Specifying the number of stepping stones in the population is an intuitive parameterization that can be informed by domain knowledge as well as time and space requirements. On the other hand, it may take significant experimenter effort and domain knowledge to set an effective $w$. Conveniently, the definition of behavior domination can be used to develop a suitable scheme for automatically setting $w$ online during search. It is straightforward to encode rules so that $w$ is set to guarantee the domination or non-domination of some set of solutions considered harmful or desirable, respectively. In the experiments in this paper, an example of such an online adaptation scheme is considered, inspired by the avoidance of ``spooky action at a distance'' (Section~\ref{SubSecSpooky}). In this scheme, at every iteration $w$ is set at the maximal value such that neither of the two most distant solutions is dominated. This online adaptation scheme (BDMA-2a) is compared against setting a static $w$ in Section~\ref{SecExperiments}. Though it is an intuitive heuristic, setting $w$ online in this fashion does not necessarily preserve the guarantees of using a fixed domination effect function. Development of more grounded approaches to adapting $w$ is left to future work. \section{Experimental Investigation} \label{SecExperiments} Experiments were run in domains that extend limited capacity drift models, previously used to study novelty search \cite{lehman13, lehman15}, with fitness and a continuous solution space. Each solution is encoded by a vector with values in the range $[0, 150]$. The population is randomly initialized with all values in $[0, 1]$. 
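Before turning to the per-domain results, the quantity tracked by the BDMA-2a adaptation rule described above can be sketched. Assuming the additive relation $x \succeq y$ iff $f(x) \geq f(y) + w \cdot \lVert b(x) - b(y) \rVert_2$ (an assumption; Definition~\ref{DefDomination} is not reproduced in this excerpt), the fitter of the two most distant solutions dominates the other exactly when $w$ is at or below the ratio of their fitness gap to their behavior distance, so that ratio is the natural threshold for an online rule.

```python
import math
from itertools import combinations

def domination_threshold_w(population):
    """For the two behaviorally most distant solutions, return the
    boundary value of w at which the fitter one stops dominating the
    other, assuming x dominates y iff f(x) >= f(y) + w*||b(x)-b(y)||_2.
    How BDMA-2a positions w relative to this threshold is a detail
    assumed here, not taken from the text."""
    (fx, bx), (fy, by) = max(
        combinations(population, 2),
        key=lambda pair: math.dist(pair[0][1], pair[1][1]))
    d = math.dist(bx, by)
    return abs(fx - fy) / d if d > 0 else 0.0
```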
This abstraction captures the property of real world domains that often only a small portion of the behavior space can be reached by randomly generated solutions, e.g., robots that either spin in place or crash into the nearest wall; evolution must accumulate structure in its solutions to progress beyond this initial space. The first set of experiments tests the ability to discover and maintain available stepping stones; the second tests the ability to perform well in settings where effective use of stepping stones can accelerate evolutionary progress. The underlying evolutionary algorithm for each experimental setup is a steady-state algorithm with Gaussian mutation and uniform crossover. The only difference between setups in a domain is the method of ranking solutions. See Appendix for experimental parameter settings. In each domain, the performance measures for each algorithm were averaged over ten runs. \subsection{Discovering and Maintaining Stepping Stones} \label{SubSecMaintaingStones} The first domain has a one-dimensional solution space. The fitness landscape has four peaks of differing heights, with the rightmost peak being the highest (Figure~\ref{FigFourPeaks}). The behavior characterization is the identity function, i.e., $b(x) = x$. Each peak represents a potentially useful stepping stone, with the higher peaks having more potential. In an optimal state, a population will include solutions near the tops of each peak. This domain tests an algorithm's ability to grow its solutions to successfully discover each peak while maintaining in the active population potentially useful stepping stones encountered along the way. Consider four bins in the behavior space, each of width 10 and centered around a peak. Each algorithm is evaluated against two MAP-elites-based measures \cite{mouret15, pugh15}. The first is the sum of the top fitnesses ever achieved across the bins; this measures an algorithm's ability to discover stepping stones. 
The second is the sum of the top fitnesses of these bins in the current population; this measures an algorithm's ability to maintain stepping stones. The results are depicted in Figure~\ref{FigOneDResults}. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{images/1D-results-total_bin_score-crop.pdf} \includegraphics[width=0.9\columnwidth]{images/1D-results-current_bin_score-crop.pdf} \caption{Four peaks domain results. A total (current) bin score near 500 indicates all stepping stones are discovered (maintained). (top) Novelty search discovers all the peaks most quickly, but BDMA-2 does not take much longer; (bottom) Only BDMA-2, NSLC, and BDMA-2a consistently maintain solutions near each discovered peak across the ten trials. \vspace{-10pt} \label{FigOneDResults}} \end{figure} As expected, novelty search is able to discover the available stepping stones most quickly, since it is focused only on exploration. However, BDMA-2 is not far behind, followed by NSLC and BDMA-2a. When it comes to maintaining these stepping stones, BDMA-2 outperforms the other algorithms, again followed closely by NSLC and BDMA-2a. Note that although MAP-elites maintains the elites in each visited bin, when the bin size is large it is difficult to jump to new bins, and when it is small the chance of selecting an elite on the edge as a parent is small. So, MAP-elites explores slowly in this domain (results shown with bin size 1). Figure~\ref{FigAdaptW} shows examples of values of $w$ adapted over the course of BDMA-2a runs. \begin{figure} \includegraphics[width=0.9\columnwidth]{images/w_over_time-crop.pdf} \caption{Adapted value of $w$ over time for three independent runs of BDMA-2a (Section~\ref{SecNewBDMA}) in the four peaks domain, along with the median $w$ over all 10 runs. 
Adaptation of $w$ is marked by periods of relative stability followed by periods of relative instability.} \label{FigAdaptW} \end{figure} Future schemes for adapting $w$ may try to minimize fluctuations for better predictability (Section~\ref{SecDiscussionAndFutureWork}). \subsection{Harnessing Stepping Stones} \label{SubSecHarnessingStones} The most successful algorithms at discovering and maintaining stepping stones (NSLC, BDMA-2, and BDMA-2a), along with Novelty and Fitness as controls, were evaluated in two further domains, which test the abilities of algorithms to exploit available stepping stones by focusing on the most promising areas of the search space. \subsubsection{Exponential Focus (ETF) Domain} \label{SubSubSecETF} The ETF domain captures the notion that real world domains contain complementary stepping stones, which, if harnessed successfully, can accelerate progress in a way not possible otherwise. This domain has a two-dimensional solution space, and the fitness function contains stepping stones that can enable exponential progress if used effectively. The fitness landscape consists of a series of claw-like regions that increase in size and value as they get farther away from the origin; all other areas have fitness zero (Figure~\ref{FigETFResults} (top)). \begin{figure} \includegraphics[width=0.9\columnwidth]{images/etf-crop.pdf} \\ \vspace{10pt} \includegraphics[width=0.9\columnwidth]{images/2D_etf_100-max_fitness-crop.pdf} \caption{(top) The ETF domain contains a series of claw-like regions. Each region supports two stepping stones that can be combined to reach the next higher-valued region via crossover. This domain tests the ability to harness these stepping stones; (bottom) Results in the ETF domain with $s = 100$. BDMA-2 is the most successful, followed by BDMA-2a. \label{FigETFResults}} \end{figure} The heel of the first claw is located at $(1, 1)$ and has fitness 1. 
The $i^{th}$ claw has a heel with fitness $h$, and three toes, each of width $\epsilon = 0.2$. Fitness increases linearly along each toe. The tips of the vertical and horizontal toes have fitness $h + i$, and the tip of the diagonal toe has fitness $h + 2i$. The heel of the $(i + 1)^{st}$ claw has fitness $2(h + i)$, and can be reached by a successful crossover of the $i^{th}$ vertical and horizontal toes. Thus, an algorithm can reach the next claw by maintaining solutions on the tips of both horizontal and vertical toes, while avoiding convergence to the deceptive diagonal toe. The behavior characterization is $b([x_0, x_1]) = s \cdot x_0 + x_1$, i.e., $s$ controls how much the first dimension of the behavior space is stretched. As $s$ increases, it is more costly for an algorithm to densely explore the entire behavior space. Experiments were run with $s = 100$, $s = 1000$, and $s = 10000$. Since the purpose of this domain is to evaluate how well an algorithm can use stepping stones to discover high-performing solutions, algorithms are compared based on their maximum fitness achieved by each iteration. Results are shown in Figure~\ref{FigETFResults} (bottom) and Table~\ref{TableResults} (a). \begin{table} \vspace{25pt} {\footnotesize \begin{tabular}{| c | c | c | c | c | c |} \hline $s$ & Fitness & Novelty & NSLC & BDMA-2 & BDMA-2a \\ \hline $100$ & 2.55 (0.28) & 6.49 (0.77) & 6.02 (1.29) & \textbf{22.41} (5.32) & 11.76 (0.85) \\ \hline $1000$ & 2.55 (0.28) & 9.59 (1.74) & 6.31 (1.26) & \textbf{14.79} (2.63) & 14.16 (1.33) \\ \hline $10000$ & 2.55 (0.28) & 9.36 (1.68) & 6.13 (0.98) & 9.57 (2.04) & \textbf{15.68} (1.71) \\ \hline \end{tabular} } \\ \vspace{5pt} (a) Mean max fitness (std. err.) in the ETF domain. 
\vspace{15pt} \\ {\footnotesize \begin{tabular}{| c | c | c | c | c | c |} \hline $D$ & Fitness & Novelty & NSLC & BDMA-2 & BDMA-2a \\ \hline 10 & 2.708 (0.00) & 2.846 (0.09) & 2.823 (0.05) & \textbf{3.023} (0.09) & 3.010 (0.10) \\ \hline 20 & 2.708 (0.00) & 2.678 (0.01) & 2.748 (0.02) & \textbf{2.898} (0.05) & 2.791 (0.05) \\ \hline 30 & 2.708 (0.00) & 2.682 (0.01) & 2.705 (0.00) & \textbf{2.791} (0.02) & 2.711 (0.02) \\ \hline \end{tabular} } \\ \vspace{5pt} (b) Mean max fitness (std. err.) in the focused Ackley domain. \vspace{5pt} \\ \vspace{15pt} \caption{Max fitnesses achieved through 10,000 iterations, averaged across 10 runs. (a) Results in the ETF domain. Both BDMA-2 and BDMA-2a outperform the other approaches across all scales of $s$. BDMA-2's performance decreases with $s$, while BDMA-2a's increases, showing its ability to successfully adapt $w$ with this type of scaling; (b) Results in the focused Ackley domain. BDMA-2 and BDMA-2a outperform the other algorithms across all scales of $D$. \label{TableResults}} \end{table} BDMA-2a significantly outperforms each existing algorithm for each value of $s$ (Mann Whitney U Test, $p < 0.01$), with BDMA-2 showing dramatic improvements as well. \subsubsection{Focused Ackley Domain} The results in the ETF domain demonstrate that BDMA-2 can be successful in domains that contain natural stepping stones. To further validate this idea, experiments were run in a domain based on the popular Ackley benchmark function \cite{ackley87, back97}, which also has an inherent stepping stone structure. The search space is $D$-dimensional. If a solution $x$ falls in a bounded region, defined by $\lvert x_0 - x_1 \rvert < 2$ and $\sum_{i = 2}^D x_i < D / 2$, its fitness is the value of the Ackley function at $[x_0, x_1]$; otherwise, its fitness is drawn randomly from $[0,1]$ (Figure~\ref{FigAckley} (top)). 
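The focused Ackley fitness follows directly from this description. The standard two-dimensional Ackley form used below is an assumption: the exact variant in the original experiments (e.g., shifted or inverted for maximization) is not specified in this excerpt.

```python
import math
import random

def ackley(x0, x1):
    """Standard 2-D Ackley function (assumed form; the paper's exact
    variant is not given here). Its value at the origin is 0."""
    root = math.sqrt(0.5 * (x0 ** 2 + x1 ** 2))
    cosines = 0.5 * (math.cos(2 * math.pi * x0) + math.cos(2 * math.pi * x1))
    return -20.0 * math.exp(-0.2 * root) - math.exp(cosines) + 20.0 + math.e

def focused_ackley_fitness(x, rng=random.Random(0)):
    """Fitness is the Ackley value at [x0, x1] inside the bounded
    region |x0 - x1| < 2 and sum(x[2:]) < D/2, and uniform noise
    in [0, 1) outside it."""
    in_region = abs(x[0] - x[1]) < 2 and sum(x[2:]) < len(x) / 2
    return ackley(x[0], x[1]) if in_region else rng.random()
```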
\begin{figure} \includegraphics[width=0.9\columnwidth]{images/ackley-crop.png} \\ \vspace{10pt} \includegraphics[width=0.9\columnwidth]{images/10D-ackley-results-max_fitness-crop.pdf} \caption{ (top) The focused Ackley domain tests an algorithm's ability to focus on useful stepping stones, which here are local maxima bordering noisy regions in a high-dimensional behavior space. (bottom) Results in the focused Ackley domain with $D = 10$. BDMA-2 and BDMA-2a consistently outperform the other approaches.} \label{FigAckley} \end{figure} In this domain, $b(x) = x$, and scale is controlled by the number of dimensions $D$ of the behavior space. The noise outside of the bounded region is a challenge for algorithms that must decide which regions are worth exploring. The results in Figure~\ref{FigAckley} (bottom) and Table~\ref{TableResults} (b) show how BDMA-2 and BDMA-2a improve upon existing approaches. BDMA-2 significantly outperforms each existing algorithm for each value of $D$ (Mann Whitney U Test, $p < 0.02$), except for Fitness with $D = 30$, as each approach that makes use of the behavior characterization $b$ is negatively affected by increases in the dimensionality of $b$. Still, the success of BDMA-2 and BDMA-2a in these domains that contain useful stepping stones is encouraging evidence for the potential to scale behavior domination algorithms to more complex domains, where it is assumed that such stepping stones exist. \section{Discussion and Future Work} \label{SecDiscussionAndFutureWork} The existing algorithms classified under behavior domination (Section~\ref{SubSecExistingBDMAs}) have been validated across an array of complex domains \cite{back97, lehman10b, lehman11a, mouret15, mouret15a}. 
The experiments in Section~\ref{SecExperiments} demonstrate that the behavior domination framework can lead to progress over existing approaches on problems that contain useful stepping stones, and it will be interesting to see what new techniques will be required to scale these methods to the real world, where stepping stones abound, e.g., in domains such as robot control \cite{lehman11a, mouret15, mouret12} and automatic content generation \cite{lehman11b, nguyen16, lehman16}. Effective specification of behavior is still an issue. Experiments in the ETF domain (Section~\ref{SubSubSecETF}) showed how behavior-driven algorithms can be sensitive even to linear scaling of the behavior space. Although BDMA-2 and BDMA-2a outperformed the other approaches in this scenario, their reliance on a single parameter $w$ across all behavior dimensions makes them susceptible to such issues. From the perspective of behavior domination, solutions to these issues can be hidden in the behavior characterization, i.e., by letting $b$ be some transformation of the raw behavior characterization. Automatically specifying behavior characterizations in a robust and general way is an open problem, and some recent work has begun to make progress in this direction \cite{meyerson16, nguyen16, liapis13, gomes14}. Given a reasonable behavior characterization, one method of setting $w$ automatically was presented in Section~\ref{SecNewBDMA}, but there are many methods that could be tried, some of which may be more generally effective, and preserve stability properties of the behavior domination front. Overall, more work can be done to transfer guarantees from the theory of multiobjective optimization \cite{deb16, coello07}, which will also lead to practical algorithmic improvements. Although transferring theoretical properties can be satisfying, further work is needed to understand where theoretical focus in behavior-driven search will yield the biggest practical impact. 
The issue of ``spooky action at a distance'' (Section~\ref{SubSecSpooky}) identifies some unsettling dynamics in existing algorithms, but it is not clear whether it strikes at the heart of the matter, or is merely a shadow of something more elusive. Further work must be done to fully characterize the emergent dynamics of ranking procedures, in parallel with work to understand how careful specification of a behavior characterization and fitness function can guarantee the existence of useful stepping stones in the joint behavior-fitness space. \section{Conclusion} The goal of this study was to understand and harness the ability of evolution to discover useful stepping stones. Existing behavior-driven algorithms have properties that interfere with this goal; the behavior domination framework was introduced to reason formally about how these properties could be avoided. A new algorithm, BDMA-2, was introduced based on this framework, and shown to improve over existing behavior-driven algorithms in domains that contain useful stepping stones. The behavior domination perspective is thus a promising tool for comparing and understanding existing behavior-driven algorithms as well as for designing better ones in the future. \bibliographystyle{ACM-Reference-Format}
\section{Introduction}\label{sec:intro} Semi-continuous data are non-negative and characterized by the coexistence of high kurtosis, positive skewness and an abundance of zeros observed often enough that there are compelling substantive and statistical reasons for special treatment (\cite{belotti2015twopm}). Modeling non-negative data with clumping at zero has a long history in the statistical literature and several competing models have been proposed (see \cite{min2002modeling} and \cite{min2005random} and references therein). Such data structures arise in many fields: economics, actuarial sciences, environmental modeling and health services (see \cite{liu2009joint, belasco2012modelling, neelon2016modeling1, neelon2016modeling2} and \cite{farewell2017two}). Linear regression approaches can be considered as starting points of the analysis (see e.g. \cite{Iversen2015}). Nevertheless, parameter estimates are sensitive to extreme values and likely to be inefficient if the underlying distribution is not Gaussian (\cite{Zhou2002} and \cite{Basu2009}). Thus, other approaches have attracted researchers' attention to model semi-continuous data. Among those, two-part models, introduced by \cite{Duan1983} and \cite{Mullahy1986}, play an important role. \textcolor{black}{This class of models helps handle excess of zeros and overdispersion: they involve a mixture distribution consisting of a mixture of a discrete point mass, with all mass at zero, and a discrete or continuous random variable. In particular, they are described by two equations: a binary choice model is fitted for the probability of observing a positive-versus-zero outcome. Then, conditional on a positive outcome, an appropriate regression model is fitted for the positive outcome (see \cite{belotti2015twopm}). This statistical method introduces significant modeling flexibility by allowing the zeros and the positive values to be generated by two different processes. 
For these reasons, they have been widely applied, especially in biomedical applications and in health economics, as they naturally reflect the two-step structure of the health demand process; see e.g. \cite{diehr1999methods, deb2002structure, Alfo2010, mihaylova2011review} and \cite{Maruotti2014}.} \\ \textcolor{black}{The common structure of such models assumes that the effect of the covariates influences the mean of the conditional distribution of the response. However, in many real applications, the effect of the covariates can be different on different parts of the response distribution. In these cases it may be of interest to infer on the entire conditional distribution of the response variable using the quantile regression approach proposed in the seminal paper by \cite{koenker1978}, which allows for quantile-specific inference and is typically used for modeling non-Gaussian outcomes.} \\ \textcolor{black}{Quantile regression methods have become widely used in literature mainly because they are suitable in all those situations where skewness, fat-tails, outliers, truncation, censoring and heteroscedasticity arise. They have been implemented in a wide range of different fields, both in a frequentist paradigm and in a Bayesian setting, spanning from medicine (see \cite{cole1992smoothing, royston1994regression, alhamzawi2012bayesian} and \cite{waldmann2018quantile}), financial and economic research (see \cite{Bassett2002, bernardi2015bayesian, petrella2018cross, laporta2018selection, tian2018quantile, bernardi2018bayesian} and \cite{petrella2019joint}) and environmental modeling; see, e.g., \cite{hendricks1992hierarchical, pandey1999comparative} and \cite{reich2011bayesian} for a discussion. 
For a detailed review and list of references, \cite{koenker2005} and \cite{koenker2017handbook} provide an overview of the most used quantile regression techniques in a classical setting.} \\ \textcolor{black}{In longitudinal studies, quantile methods with random effects have been proposed in order to account for the dependence between serial observations on the same subject (see \cite{marino2015linear, alfo2017finite} and \cite{marino2018mixed}). \cite{alfo2017finite}, for example, defined a finite mixture of quantile regression models for heterogeneous data.} \textcolor{black}{Quantile regression and two-part models have also been jointly considered in several studies: see for example \cite{Grilli2016, heras2018application, sauzet2019two} and \cite{biswas2020semi}. In particular, \cite{biswas2020semi} considered a semi-parametric quantile regression approach to zero-inflated and incomplete longitudinal outcomes in a repeated measurements design study.} \\ From an inferential point of view, both classical and Bayesian approaches have been used in the literature to estimate the parameters and the quantiles of the models. In the frequentist setting, the inferential approach used to estimate the parameters relies on the minimization of the asymmetric loss function of \cite{koenker1978} while, in the Bayesian setting, the Asymmetric Laplace (AL) distribution has been introduced as a likelihood inferential tool (see \cite{yu2001bayesian}). The two approaches are well-justified by the relationship between the quantile loss function and the AL density: the minimization of the quantile loss function is equivalent, in terms of parameter estimates, to the maximization of the likelihood associated with the AL density. Therefore, the AL distribution could offer a convenient device to implement a likelihood-based inferential approach in a quantile regression analysis. 
\textcolor{black}{The main goal of the present paper is to extend the two-part quantile regression modeling framework for mixed-type outcomes to longitudinal data using a frequentist approach. In particular, we consider a mixed-effects logistic regression for modeling the probability of a zero versus a nonzero outcome, and a linear mixed quantile regression model for the continuous positive outcomes. Following \cite{alfo2017finite}, in order to prevent inconsistent parameter estimates due to misspecification of the random effects distribution, we adopt a non-parametric approach in which the random effect is left unspecified and approximated by using a discrete finite mixture. Within this scheme, our modeling framework reduces to a two-part finite mixture of quantile regressions where the components of the finite mixture represent clusters of units that share homogeneous values of model parameters.} \\ We propose to estimate model parameters through Maximum Likelihood (ML) by using the AL distribution as a working likelihood. Specifically, estimation is carried out through the Expectation-Maximization (EM) algorithm. From a computational perspective, \textcolor{black}{we generalize the work of \cite{tian2014linear} and provide an efficient version of the EM algorithm with M-step updates in closed form using the well-known location-scale mixture representation of the AL distribution; see \cite{kozumi2011gibbs}.} \\ \textcolor{black}{In statistical modeling, one of the main issues is the identification of the relevant variables to be considered in the model. It is in fact quite common, using real data, that a large number of predictors are involved in the initial stage of the analysis. In this situation, the researcher would be interested in determining a smaller subset that exhibits the strongest effects. 
Several variable selection methods have been proposed in the literature; one of them is the penalized method, which is particularly useful when dealing with high-dimensional statistical problems (see \cite{wasserman2009high} and \cite{fan2010selective}). To improve estimation, to gain in parsimony and to conduct a variable selection procedure, we consider a Penalized EM (PEM) algorithm by introducing the Least Absolute Shrinkage and Selection Operator (LASSO) $L_1$ penalty term of \cite{Tibshirani1996}.} The relevance of our approach is also shown empirically by the analysis of a sample taken from the RAND Health Insurance Experiment (RHIE). The RHIE is one of the largest social experiments ever completed in the U.S. to study cost sharing and its effect on service use, quality of care and health expenditures; see \cite{deb2002structure}. Healthcare data represent a striking example of semi-continuous data because they are non-negative, with substantial positive skewness, heavy-tailed and, often, multi-modal, e.g. they exhibit a spike-at-zero for non-users. In this paper, we adopt the proposed two-part finite mixture of quantile regressions in order to investigate whether the effect of socioeconomic and household's characteristics changes with the increase in conditional health spending. \textcolor{black}{In accordance with the results of \cite{deb2002structure}, our analysis shows that the two-part model identifies two groups of users: a group of reluctant users and a second group that often uses healthcare services. In addition, the effect of the included covariates is not uniform across quantiles but it changes sign and magnitude as the quantile level varies.}\\ The rest of the paper is organized as follows. In Section \ref{sec:meth}, we introduce the two-part finite mixture quantile regression model. Section \ref{sec:est} illustrates the EM-based maximum likelihood approach, the closed-form solutions and the PEM algorithm. 
In Section \ref{sec:app} we discuss the main empirical results, while Section \ref{sec:con} concludes. \section{Methodology}\label{sec:meth} \subsection{Two-part quantile regression model}\label{subsec:model} Let $y_{it}$, $i=1,..., N$, $t = 1,..., T_i$ be a semi-continuous variable for unit $i$ at time $t$ and let $\textbf{b}_i = (\textbf{b}_{i0} , \textbf{b}_{i1})$ be a time-constant, individual-specific, random effects vector having distribution $f_{\bf b} (\cdot)$ with support $\mathcal{B}$ where $\mathbb{E}[\textbf{b}_i] = 0$ is used for parameter identifiability. The role of the random coefficients $\textbf{b}_{i}$ is to capture unobserved heterogeneity and within-subject dependence. \textcolor{black}{In a two-part model, the probability distribution of the outcome variable $y_{it}$ can be written as}: \begin{equation}\label{eq:hurdle} f (y_{it}) = p_{it}^{d_{it}}\Big[ \left(1-p_{it}\right)g \big( h(y_{it})|y_{it}>0 \big) \Big]^{1-d_{it}} \end{equation} with $$d_{it} = \boldsymbol{1}(y_{it}=0), \quad p_{it} = \Pr(y_{it}=0) = \Pr(d_{it}=1)$$ where $d_{it}$ denotes the occurrence variable for unit $i$ at time $t$, $\boldsymbol{1}(\cdot)$ is the indicator function, $g(\cdot)$ is the density function for the positive outcome given that $y_{it}>0$ and $h(\cdot)$ is a (monotone) transformation function of $y_{it}$. The model is completed by defining the linear predictors for the binary and the positive parts of the model. 
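As a concrete illustration of \eqref{eq:hurdle}, the density of a semi-continuous outcome can be evaluated as follows. The sketch takes $h(\cdot) = \log(\cdot)$ and an arbitrary density $g$ for the transformed positive part; the $1/y$ Jacobian term is included so that the positive part is a proper density on the original scale (an addition for completeness, implicit in the text).

```python
import math

def two_part_density(y, p, g):
    """Two-part density of eq. (hurdle) with h = log: point mass p at
    zero; otherwise (1 - p) times the density g evaluated at log(y),
    with the 1/y Jacobian of the log transform."""
    if y == 0:
        return p
    return (1.0 - p) * g(math.log(y)) / y
```

For instance, with $g$ a standard Gaussian density this recovers a zero-inflated log-normal model.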
\textcolor{black}{We assume that the} spike-at-zero process is governed by a binary logistic model such that: \begin{equation}\label{eq:logit} {\rm logit}(p_{it} \mid {\bf s}_{it}, \textbf{b}_{i0}) = {\bf s}_{it}' \boldsymbol{\gamma} + {\bf c}_{it}' \textbf{b}_{i0} \end{equation} where ${\bf s}_{it}=(s_{it1},\dots, s_{itm})$ is the $m$-dimensional set of explanatory variables, $\boldsymbol{\gamma}$ is the parameter vector and ${\bf c}_{it}$ is a subset of ${\bf s}_{it}$.\\ \textcolor{black}{As mentioned in the Introduction, in order to determine the effect of explanatory variables on the tails of the distribution of the outcome and make inference at an arbitrary quantile level, we model the positive outcomes using the quantile regression approach.} In the quantile regression literature, it is well established that the \textcolor{black}{likelihood approach} is based on the AL distribution of \cite{yu2001bayesian}, which gives equivalent estimates to the minimization of the loss function in \cite{koenker1978}. The functional form of the AL distribution \textcolor{black}{for our model} is the following: \begin{equation}\label{eq:positive} g(y_{it}; \mu_{it} (\tau), \sigma (\tau)) = \frac{\tau(1- \tau)}{\sigma (\tau)} \exp \Bigg\{ - \rho_\tau \left( \frac{y_{it} - \mu_{it} (\tau)}{\sigma (\tau)}\right) \Bigg\}, \end{equation} where $\mu_{it}(\tau)$ represents the $\tau$-th quantile, with $\tau \in (0,1)$, of $y_{it}$, $\sigma (\tau) > 0$ is the scale parameter and \textcolor{black}{$\rho_\tau (\cdot)$ denotes the quantile loss function of \cite{koenker1978}: \begin{equation} \rho_\tau (u) = u (\tau - \boldsymbol{1}(u < 0)). \end{equation}} Because we are modeling positive values, to match the support of the AL density we consider the logarithmic transformation of the positive values of $y_{it}$. 
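The check function and the AL working likelihood in \eqref{eq:positive} translate directly into code; the equivalence between loss minimization and likelihood maximization holds because the negative log-density is, up to constants not involving $\mu$, the check loss scaled by $\sigma$.

```python
import math

def check_loss(u, tau):
    """Koenker-Bassett loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def al_density(y, mu, sigma, tau):
    """Asymmetric Laplace density with location mu (the tau-th
    quantile), scale sigma and skewness tau, as in eq. (positive)."""
    return (tau * (1.0 - tau) / sigma) * \
        math.exp(-check_loss((y - mu) / sigma, tau))
```

Maximizing `al_density` over `mu` for fixed `sigma` and `tau` gives the same estimate as minimizing the summed check loss.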
In particular, we assume that for a given $\tau$, conditionally on $\textbf{b}_{i1}$ and after log-transforming the outcome variable, $\tilde y_{it} = \log (y_{it})$, the conditional density $g( \tilde y_{it} |y_{it}>0, {\bf x}_{it}, \textbf{b}_{i1})$ in \eqref{eq:hurdle} is an AL distribution as given in \eqref{eq:positive} whose location parameter is defined by the linear model: \begin{equation}\label{eq:lm} \mu_{it} (\tau) = {\bf x}_{it}' \boldsymbol{\beta}(\tau) + {\bf z}_{it}' \textbf{b}_{i1}(\tau) \end{equation} where ${\bf z}_{it}$ is a subset of covariates of ${\bf x}_{it}$.\\ For a fixed quantile level $\tau$, \textcolor{black}{responses are assumed to be independent conditional on the random vector $\textbf{b}_i (\tau)$ and} parameter estimates can be obtained by maximizing the likelihood function of the model defined in \eqref{eq:hurdle}-\eqref{eq:lm}: \begin{equation}\label{eq:llk1} L({\bf \Phi_\tau}) = \prod_{i=1}^N \Bigg\{ \int_{\mathcal{B}} \prod_{t=1}^{T_i} \Big( p_{it}^{d_{it}}\left[\left(1-p_{it}\right) g_{it} \right]^{1-d_{it}} \Big) f_{\bf b} ({\bf b}_i) \textnormal{d} {\bf b}_i \Bigg\} \end{equation} where \textcolor{black}{${\bf \Phi_\tau} = \{ \boldsymbol{\gamma} (\tau), \boldsymbol{\beta} (\tau), \sigma (\tau) \}$ denotes the global set of model parameters.} \textcolor{black}{The likelihood in \eqref{eq:llk1} involves a multidimensional integral over the random coefficients whose corresponding distribution $f_{\bf b} (\cdot)$ allows one to explain differences in the response quantiles across individuals. Hence, the choice of an appropriate distribution should be data-driven and resistant to misspecification \citep{marino2015linear}. 
In the next Section, we will discuss how we may avoid evaluating the integral in \eqref{eq:llk1} for ML estimation.} \subsection{Finite mixture of quantile regressions}\label{sub:mix} \textcolor{black}{In the literature, the Gaussian distribution is typically a convenient choice for $f_{\bf b} (\cdot)$ from a computational point of view. In this case, we may approximate the integral in \eqref{eq:llk1} using Gaussian quadrature or adaptive Gaussian quadrature schemes (see \cite{winkelmann2004health} and \cite{rabe2005maximum}). A disadvantage of such approaches lies in the required computational effort, which is exponentially increasing with the dimension of the random parameter vector. For these reasons, potential alternatives rely on simulation methods such as Monte Carlo and simulated ML approaches. However, for samples of finite size and short individual sequences, these methods may not provide a good approximation of the true mixing distribution \citep{alfo2017finite}. As a robust alternative to the Gaussian choice, a Symmetric Laplace or a multivariate Student $t$ random variable has been considered by \cite{Geraci2014} and \cite{farcomeni2015longitudinal}. However, a parametric assumption on the distribution of the random coefficients could be rather restrictive and misspecification of the mixing distribution could lead to biased parameter estimates (see \cite{Alfo2010}). In view of these considerations, in this work we exploit the approach based on the nonparametric maximum likelihood (NPML) estimation of \cite{laird1978nonparametric}.} \textcolor{black}{Instead of parametrically specifying the distribution $f_{\bf b} (\cdot)$, we approximate it by using a discrete distribution on $G < N$ locations $\textbf{b}_{k} (\tau) = (\textbf{b}_{0k}(\tau), \textbf{b}_{1k}(\tau))$, i.e. 
${\bf b}_i(\tau) \sim \sum_{k=1}^G \pi_k(\tau) \delta_{{\bf b}_k(\tau)}$, where the probability $\pi_k(\tau)$ is defined by $\pi_k(\tau) = \textnormal{Pr}({\bf b}_i (\tau) = {\bf b}_k (\tau) )$, $i = 1, \dots, N$, for $k = 1, \dots, G$, and $\delta_{{\bf b}_k(\tau)}$ is a one-point distribution putting unit mass at ${\bf b}_k (\tau)$. } \textcolor{black}{The proposed approach can be thought of as an approximation of a fully parametric framework, as the discrete support approximates a possibly continuous distribution for the random coefficients.} \textcolor{black}{To ease the notation, hereinafter we omit the quantile level $\tau$, but all parameters are allowed to depend on it.} In this setting, the likelihood in \eqref{eq:llk1} reduces to: \begin{equation}\label{eq:llk2} L({\bf \Phi_\tau}) = \prod_{i=1}^N \Bigg\{ \sum_{k=1}^G \prod_{t=1}^{T_i} \Big( p_{itk}^{d_{it}}\left[\left(1-p_{itk}\right) g_{itk} \right]^{1-d_{it}} \Big) \pi_k \Bigg\}, \end{equation} \textcolor{black}{where ${\bf \Phi_\tau} = \{ \boldsymbol{\gamma}, \boldsymbol{\beta}, \sigma, {\bf b}_1, \dots, {\bf b}_G, \pi_1, \dots, \pi_G \}$ is the parameter vector.\\ The likelihood in \eqref{eq:llk2} is similar to that of a finite mixture of quantile regressions with $G$ clusters.
More specifically, in the $k$-th cluster the spike-at-zero process is governed by the binary logistic model ${\rm logit}(p_{itk} \mid {\bf s}_{it}, \textbf{b}_{0k}) = {\bf s}_{it}' \boldsymbol{\gamma} + {\bf c}_{it}' \textbf{b}_{0k}$, while the positive-outcome process follows the linear mixed quantile model with AL density in \eqref{eq:positive}, with location parameter $\mu_{itk} = {\bf x}_{it}' \boldsymbol{\beta} + {\bf z}_{it}' \textbf{b}_{1k}$.}\\ Therefore, our modeling framework reduces to a finite bivariate mixture model at each quantile level, where the heterogeneity sources that influence the binary decision process are also assumed to influence the distribution of the positive outcomes, through the latent structure defined by the discrete multivariate random effects. \section{Estimation}\label{sec:est} In this Section, we propose a maximum likelihood approach based on the EM algorithm to estimate the parameters of the methodology illustrated in Section \ref{sec:meth}. Given the finite mixture representation in \eqref{eq:llk2}, each unit $i$ can be conceptualized as drawn from one of $G$ distinct groups: we denote by $w_{ik}$ the indicator variable that is equal to $1$ if the $i$-th unit belongs to the $k$-th component of the finite mixture, and $0$ otherwise. The EM algorithm treats the component memberships $w_{ik}$ as missing data. Thus, the log-likelihood for the complete data has the following form: \begin{fleqn}[\parindent] \begin{align} \ell_c ( {\bf \Phi_\tau}) & = \sum_{i=1}^N \sum_{k=1}^G w_{ik} \Bigg\{ \sum_{t=1}^{T_i} \log \Bigg( p_{itk}^{d_{it}}\left[\left(1-p_{itk}\right) g_{itk} \right]^{1-d_{it}} \Bigg) + \log (\pi_k) \Bigg\}.
\label{eq:cdl} \end{align} \end{fleqn} In the E-step, the presence of the unobserved group-indicator $w_{ik}$ is handled by taking the conditional expectation of $w_{ik}$ given the observed data and the parameter estimates at the $r$-th iteration $\hat{{\bf \Phi}}^{(r)}_\tau = \{ \hat{\boldsymbol{\gamma}}^{(r)}, \hat{\boldsymbol{\beta}}^{(r)}, \hat{\sigma}^{(r)}, \hat{{\bf b}}_1^{(r)}, . . . , \hat{{\bf b}}_G^{(r)}, \hat{\pi}_1^{(r)}, . . . , \hat{\pi}_G^{(r)} \}$. At the $(r + 1)$-th iteration of the algorithm, we replace $w_{ik}$ by its conditional expectation $\hat{w}^{(r+1)}_{ik}$ using the following update equation: \begin{equation}\label{eq:weights} \hat{w}^{(r+1)}_{ik} = \mathbb{E} [ w_{ik} | y_{it}, {\bf s}_{it}, {\bf x}_{it}, {\bf \hat{\Phi}}^{(r)}_\tau] = \frac{\prod_{t=1}^{T_i} f^{(r)}_{itk} \hat{\pi}_k^{(r)}}{\sum_{l=1}^G \prod_{t=1}^{T_i} f^{(r)}_{itl} \hat{\pi}_l^{(r)}}, \end{equation} where $f^{(r)}_{itk} = p_{itk}(\hat{{\bf \Phi}}^{(r)}_\tau) ^{d_{it}} \left[\left(1-p_{itk} (\hat{{\bf \Phi}}^{(r)}_\tau) \right) g_{itk} (\hat{{\bf \Phi}}^{(r)}_\tau) \right]^{1-d_{it}}$. Conditionally on the posterior probabilities $\hat{w}^{(r+1)}_{ik}$ in \eqref{eq:weights}, the M-step solutions are generally updated by maximizing $\mathbb{E}[ \ell_c ( {\bf \Phi_\tau}) \mid y_{it}, {\bf s}_{it}, {\bf x}_{it}, \hat{{\bf \Phi}}^{(r)}_\tau]$ with respect to ${\bf \Phi_\tau}$ using numerical optimization techniques. The E- and M-steps are alternated until convergence, \textcolor{black}{that is when the difference between the likelihood function evaluated at two consecutive iterations is smaller than a predetermined threshold. 
In this paper, we set this convergence criterion equal to $10^{-5}$.} To avoid convergence to local maxima, for each value of $G$ we initialize the model parameters using a multi-start strategy: we consider 20 different starting points and retain the solution corresponding to the maximum likelihood value.\\ \textcolor{black}{A considerable drawback of this procedure concerns the M-step, which may require a high computational effort and become very time-consuming, especially when the set of explanatory variables is large. Therefore, in the next Section we develop a more efficient approach for estimating two-part quantile regression models and obtaining closed-form expressions for the ML estimator.} \subsection{Closed-form EM algorithm solutions}\label{sub:closed} \textcolor{black}{This Section develops an efficient estimation method for the two-part quantile regression problem based on the EM algorithm presented in Section \ref{sec:est}. We reduce the computational burden of the algorithm compared to direct maximization of the likelihood and extend the work of \cite{tian2014linear} to two-part mixture models. In particular, we iteratively obtain closed-form expressions for the unknown parameter vector $\boldsymbol{\beta}$ and the coefficient vectors ${\bf b}_{1k}$, $k=1, \dots, G$.}\\ We use the location-scale mixture representation of the AL density considered in \cite{kozumi2011gibbs} to specify the positive-outcome process of the model in \eqref{eq:lm} as the following hierarchical model: \begin{equation}\label{eq:hier} \tilde y_{it} \mid ( y_{it}>0, {\bf x}_{it}, \textbf{b}_{1k}, v_{it}) \sim N(\mu_{itk} + \theta v_{it}, \rho^2 \sigma v_{it}), \quad v_{it} \sim \textnormal{Exp} \Big(\frac{1}{\sigma}\Big), \end{equation} where $\mu_{itk} = {\bf x}_{it}' \boldsymbol{\beta} + {\bf z}_{it}' \textbf{b}_{1k}$, $\theta = \frac{1-2\tau}{\tau (1-\tau)}$ and $\rho^2 = \frac{2}{\tau (1-\tau)}$.
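As a quick numerical sanity check of the representation in \eqref{eq:hier}, the following Python sketch (with hypothetical parameter values, not taken from the application) draws from the normal-exponential mixture and verifies that the $\tau$-th sample quantile of the resulting AL draws sits at the location $\mu$:

```python
import numpy as np

def simulate_al_mixture(mu, sigma, tau, n, seed=0):
    """Draw from AL(mu, sigma, tau) via its normal-exponential
    location-scale mixture: y | v ~ N(mu + theta*v, rho^2*sigma*v),
    with v ~ Exp(rate = 1/sigma), i.e. mean sigma."""
    rng = np.random.default_rng(seed)
    theta = (1.0 - 2.0 * tau) / (tau * (1.0 - tau))
    rho2 = 2.0 / (tau * (1.0 - tau))
    v = rng.exponential(scale=sigma, size=n)   # scale = 1/rate = sigma
    eps = rng.standard_normal(n)
    return mu + theta * v + np.sqrt(rho2 * sigma * v) * eps

# hypothetical values: the tau-quantile of the draws should sit at mu
mu, sigma, tau = 1.0, 0.5, 0.25
y = simulate_al_mixture(mu, sigma, tau, n=200_000)
print(np.quantile(y, tau))   # close to mu = 1.0, up to Monte Carlo error
```

With $2 \times 10^5$ draws, the sample $\tau$-quantile agrees with $\mu$ up to Monte Carlo error of a few thousandths.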
These choices of $\theta$ and $\rho^2$ guarantee that $\mu_{itk}$ coincides with the $\tau$-th conditional quantile of $\tilde y_{it} \mid (y_{it}>0, {\bf x}_{it}, \textbf{b}_{1k})$.\\ Due to the independence of the $v_{it}$'s, one can show that the conditional distribution of $v_{it}$ is a Generalized Inverse Gaussian (GIG) distribution (see \citealt{tian2014linear}, Section 2.3), namely: \begin{equation}\label{eq:GIG} f(v_{it} | \tilde y_{it}, y_{it}>0, {\bf x}_{it}, \textbf{b}_{1k}) \sim \textnormal{GIG} \Bigg( \frac{1}{2}, \frac{(\tilde y_{it} - \mu_{itk})^2}{\rho^2 \sigma} , \frac{2\rho^2 + \theta^2}{\rho^2 \sigma} \Bigg). \end{equation} In addition, from \eqref{eq:hier} the joint density of $\tilde y_{it}$ and $v_{it}$ is: \begin{equation}\label{eq:ytildev} f(\tilde y_{it}, v_{it} | y_{it}>0, {\bf x}_{it}, \textbf{b}_{1k}) = \frac{1}{ \sqrt{2 \pi \sigma v_{it}} \sigma \rho} \exp \bigg( - \frac{(\tilde y_{it} - \mu_{itk} - \theta v_{it})^2}{2 \rho^2 \sigma v_{it}} -\frac{v_{it}}{\sigma} \bigg). \end{equation} The underlying idea to obtain the updated parameter estimates of $\boldsymbol{\beta}$ and ${\bf b}_{1k}$ for $k=1, \dots, G$ is to treat $v_{it}$ as an additional latent variable. According to \eqref{eq:ytildev}, after omitting terms that do not depend on $\boldsymbol{\beta}$ and ${\bf b}_{1k}$, the complete data log-likelihood function is proportional to: \begin{fleqn} \begin{align} & \ell_c ( \boldsymbol{\beta}, {\bf b}_{11}, ..., {\bf b}_{1G}) \propto \frac{1}{2} \sum_{i=1}^N \sum_{k=1}^G \sum_{t=1}^{T_i} w_{ik} (1-d_{it}) v_{it}^{-1} (\tilde y_{it} - {\bf x}_{it}' \boldsymbol{\beta} - {\bf z}_{it}' \textbf{b}_{1k})^2 \label{eq:cdl2} \\ & - \theta \sum_{i=1}^N \sum_{k=1}^G \sum_{t=1}^{T_i} w_{ik} (1-d_{it}) (\tilde y_{it} - {\bf x}_{it}' \boldsymbol{\beta} - {\bf z}_{it}' \textbf{b}_{1k}).
\end{align} \end{fleqn} In the E-step, the conditional expectation of the complete log-likelihood function, given the observed data and the current parameter estimates at the $r$-th iteration, $\hat{{\bf \Phi}}^{(r)}_\tau$, is given by: \begin{fleqn} \begin{align} \mathbb{E}[ \ell_c ( \boldsymbol{\beta}, {\bf b}_{11}, ..., {\bf b}_{1G}) \mid y_{it}, {\bf s}_{it}, {\bf x}_{it}, \hat{{\bf \Phi}}^{(r)}_\tau] \end{align} \end{fleqn} \begin{fleqn} \begin{align} & \propto \frac{1}{2} \sum_{i=1}^N \sum_{k=1}^G \sum_{t=1}^{T_i} \hat{w}_{ik}^{(r+1)} (1-d_{it}) \hat{v}_{it}^{(r+1)} (\tilde y_{it} - {\bf x}_{it}' \boldsymbol{\beta} - {\bf z}_{it}' \textbf{b}_{1k})^2 \label{eq:ecdl21} \\ & - \theta \sum_{i=1}^N \sum_{k=1}^G \sum_{t=1}^{T_i} \hat{w}_{ik}^{(r+1)} (1-d_{it}) (\tilde y_{it} - {\bf x}_{it}' \boldsymbol{\beta} - {\bf z}_{it}' {\bf b}_{1k}), \label{eq:ecdl22} \end{align} \end{fleqn} where $\hat{w}_{ik}^{(r+1)}$ has been defined in \eqref{eq:weights} and $\hat{v}_{it}^{(r+1)} = \mathbb{E}[v_{it}^{-1} \mid y_{it}, {\bf s}_{it}, {\bf x}_{it}, \hat{{\bf \Phi}}^{(r)}_\tau]$. To compute $\hat{v}_{it}^{(r+1)}$, we can exploit the moment properties of the GIG distribution in \eqref{eq:GIG}. Hence, we have that: \begin{equation}\label{eq:mominv} \hat{v}_{it}^{(r+1)} = \mathbb{E}[v_{it}^{-1} \mid y_{it}, {\bf s}_{it}, {\bf x}_{it}, \hat{{\bf \Phi}}^{(r)}_\tau] = \frac{\sqrt{\theta^2 + 2\rho^2}}{\mid \tilde y_{it} - {\bf x}_{it}' \boldsymbol{\hat{\beta}}^{(r)} - {\bf z}_{it}' {\bf \hat{b}}_{1k}^{(r)} \mid}. \end{equation} In the M-step, we determine the update expressions by setting to zero the derivative of \eqref{eq:ecdl21}-\eqref{eq:ecdl22} with respect to $\boldsymbol{\beta}$ and ${\bf b}_{11}, ..., {\bf b}_{1G}$ and solve the corresponding M-step equations. 
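With shape parameter $1/2$, the ratio of Bessel functions in the GIG inverse moment equals one, which yields the simple closed form in \eqref{eq:mominv}. The following Python sketch (with hypothetical values for the residual, $\tau$ and $\sigma$) checks that closed form against numerical integration of the unnormalized GIG density in \eqref{eq:GIG}:

```python
import numpy as np

def einv_closed_form(resid, theta, rho2):
    """Closed form of E[1/v_it | .] in eq. (mominv):
    sqrt(theta^2 + 2*rho^2) / |residual|."""
    return np.sqrt(theta**2 + 2.0 * rho2) / abs(resid)

def einv_numeric(resid, theta, rho2, sigma):
    """E[1/v] under GIG(1/2, chi, psi) with chi = resid^2/(rho^2*sigma) and
    psi = (2*rho^2 + theta^2)/(rho^2*sigma), via trapezoidal integration of
    the unnormalized density v^(1/2 - 1) * exp(-(chi/v + psi*v)/2)."""
    chi = resid**2 / (rho2 * sigma)
    psi = (2.0 * rho2 + theta**2) / (rho2 * sigma)
    v = np.linspace(1e-6, 60.0, 2_000_000)
    dens = v**(-0.5) * np.exp(-0.5 * (chi / v + psi * v))
    trap = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(v)))
    return trap(dens / v) / trap(dens)

# hypothetical values
tau, sigma, resid = 0.25, 0.5, 0.7
theta = (1.0 - 2.0 * tau) / (tau * (1.0 - tau))
rho2 = 2.0 / (tau * (1.0 - tau))
print(einv_closed_form(resid, theta, rho2), einv_numeric(resid, theta, rho2, sigma))
```

The two values agree up to the integration error; note also that $\theta^2 + 2\rho^2 = 1/[\tau(1-\tau)]^2$, so the numerator of the closed form is simply $1/[\tau(1-\tau)]$.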
In conclusion, we obtain the following update expressions: \begin{equation}\label{eq:beta} \scriptsize \boldsymbol{\hat{\beta}}^{(r+1)} = \bigg( \sum_{i=1}^N \sum_{k=1}^G \sum_{t=1}^{T_i} \hat{w}_{ik}^{(r+1)} (1-d_{it}) \hat{v}_{it}^{(r+1)} {\bf x}_{it} {\bf x}_{it}' \bigg)^{-1} \bigg( \sum_{i=1}^N \sum_{k=1}^G \sum_{t=1}^{T_i} \hat{w}_{ik}^{(r+1)} (1-d_{it}) \big( \hat{v}_{it}^{(r+1)} {\bf x}_{it} (\tilde y_{it} - {\bf z}_{it}' {\bf \hat{b}}_{1k}^{(r)}) - \theta {\bf x}_{it} \big) \bigg) \end{equation} and \begin{equation}\label{eq:bpos} \scriptsize \hat{{\bf b}}_{1k}^{(r+1)} = \bigg( \sum_{i=1}^N \sum_{t=1}^{T_i} \hat{w}_{ik}^{(r+1)} (1-d_{it}) \hat{v}_{it}^{(r+1)} {\bf z}_{it} {\bf z}_{it}' \bigg)^{-1} \bigg( \sum_{i=1}^N \sum_{t=1}^{T_i} \hat{w}_{ik}^{(r+1)} (1-d_{it}) \big( \hat{v}_{it}^{(r+1)} {\bf z}_{it} (\tilde y_{it} - {\bf x}_{it}' \boldsymbol{\hat{\beta}}^{(r)}) - \theta {\bf z}_{it} \big) \bigg). \end{equation} Equations \eqref{eq:beta} and \eqref{eq:bpos} are essentially modified weighted least-squares estimators, where the quantile level $\tau$ modifies both the weights, $\hat{w}_{ik}^{(r+1)} \hat{v}_{it}^{(r+1)}$, and the response variable through $\hat{v}_{it}^{(r+1)}$ and $\theta$. \subsection{Variable selection and the Penalized EM algorithm}\label{sub:penest} \textcolor{black}{When dealing} with high-dimensional problems, \textcolor{black}{it may be of interest to reduce the set of explanatory variables using penalized regression methods, which allow for sparse modeling and enhance model interpretability.} In this Section, we introduce a penalized version of the EM algorithm described in Section \ref{sec:est}. In particular, we use the PEM algorithm, originally proposed by \cite{green1990use}, which leaves the E-step unchanged while it modifies the M-step by introducing a penalty function to achieve simultaneous shrinkage and/or variable selection.
Here we consider the LASSO penalty term put forward by \cite{Tibshirani1996} to shrink the coefficients of the positive outcomes, i.e. the vector $\boldsymbol{\beta}$, and simultaneously select a smaller subset of variables that exhibits the strongest effects. For a chosen quantile level $\tau$ and number of latent classes $G$, the penalized log-likelihood for the complete data has the following form: \begin{equation}\label{eq:pencdl} \ell_{pen} ({\bf \Phi_\tau} | \lambda_\tau) = \ell_c({\bf \Phi_\tau}) - \lambda_\tau J(\boldsymbol{\beta}), \end{equation} where $\ell_c({\bf \Phi_\tau})$ has been defined in \eqref{eq:cdl}, $J(\boldsymbol{\beta}) = \| \boldsymbol{\beta} \|_1$ is the LASSO penalty function and $\lambda_\tau$ is a tuning parameter that regulates the strength of the penalization assigned to the model coefficients. The optimal value of $\lambda_\tau$ is selected via 10-fold cross-validation, which allows us to treat $\lambda_\tau$ as a data-driven parameter. \section{Application}\label{sec:app} \subsection{Data description} As stated in the Introduction, in this Section we present the application of the proposed methodology to the well-known RHIE dataset. \textcolor{black}{These data have already been discussed by \cite{deb2002structure, Duan1983} and \cite{manning1987health}.} The RHIE is the most important health insurance study ever conducted to assess how medical care costs affect patients' use of health services and the quality of care, and it is widely regarded as the basis of the most reliable estimates of the price sensitivity of demand for medical services. The experiment, conducted by the RAND Corporation from 1974 to 1982, collected data from about 8000 enrollees in 2823 families from six sites across the US. Each family was enrolled in one of fourteen different insurance plans for either 3 or 5 years.\\ Our aim is to understand how the available covariates influence the healthcare spending decisions of U.S.
families at different quantile levels of interest. We consider one measure of utilization: the total spending on health services (MED), defined as the sum of outpatient, inpatient, drugs, supplies and psychotherapy expenses, expressed in U.S. dollars. The covariates are the same as in \cite{deb2002structure}: they include personal characteristics such as sex (FEMALE), age (XAGE), race (BLACK) and education level (EDUCDEC). There are also socio-economic variables of the household, such as income (LINC) and family size (LFAM), and an indicator for individuals aged less than 18 (CHILD). An interaction term between gender and the child indicator (FEMCHILD) is also included. Quantitative indicators of health condition are measured via an index of chronic conditions (DISEA) and through the existence of a physical limitation (PHYSLM), while the binary coding of self-rated health status (HLTHG, HLTHF, HLTHP) controls for variations in perceived healthcare conditions. The summary statistics are reported in Tables \ref{tab:y} and \ref{tab:X}. \textcolor{black}{By looking at Table \ref{tab:y}, the proportion of zero expenditures is substantial: 22\% of individuals did not incur any medical expenditure. Meanwhile, the right tail of the distribution is very long, with the maximum expense being 39182.02 U.S. dollars, which indicates the presence of overdispersion and potential outliers in the data. In addition, the dependent variable is severely skewed and exhibits high kurtosis. From a graphical point of view, Figure \ref{fig:data} shows that the data are characterized by a substantial mass at zero (left panel): zero expenditure indicates no utilization, and it may reflect the population's reluctance to spend on health care treatments. The right panel shows that even the logarithmic transformation of the positive values does not follow a Gaussian distribution. Therefore, a two-part quantile regression model is appropriate for these data.
In particular, the binary decision process estimates the association between the covariates and the probability of having any health care expenditure. Through the positive part of the model, instead, our goal is to assess whether the covariates have a different impact on health care expenditures at different quantiles. Indeed, by doing so, we can single out the impact of health determinants across extreme and non-extreme quantiles, reflecting the association with low-intensity primary health care services at the 10-th and 25-th percentiles, and with high-intensity, expensive technology-based care at the 75-th and 90-th percentiles.} \begin{table}[h] \centering \smallskip \resizebox{1.0\columnwidth}{!}{% \begin{tabular}{lccccccccccc} \hline Variable & Mean & S.D. & Skewness & Kurtosis & No Exp. (\%) & Maximum &\multicolumn{5}{c}{$\tau$-th quantile} \\\cmidrule(r){8-12} & & & & & & & 0.1 & 0.25 & 0.5 & 0.75 & 0.90\\ \hline MED & 171.568 & 698.201 & 20.189 & 734.007 & 22.055 & 39182.016 & 0.000 & 5.503 & 35.378 & 104.541 & 341.201 \\ \hline \end{tabular}} \caption{Summary statistics of healthcare expenditures.}\label{tab:y} \end{table} \begin{figure}[h] \center \includegraphics[width=1\linewidth, height=6.5cm, keepaspectratio]{datanew} \includegraphics[width=1\linewidth, height=6.5cm, keepaspectratio]{datalog} \caption{Medical expenditure distribution: marginal distribution (left) and distribution of the logarithm of positive values (right).}\label{fig:data} \end{figure} \begin{table}[h] \centering \smallskip \resizebox{0.8\columnwidth}{!}{% \begin{tabular}{llrr} \hline Variable & Description & Mean & S.D.
\\ \hline LOGC & $\log$(coinsurance + 1), $0 \leq$ coinsurance $\leq 100$ & 2.384 & 2.042 \\ LFAM & $\log$(family size) & 1.248 & 0.539 \\ LINC & $\log$(family income) & 8.708 & 1.228 \\ XAGE & Age in years & 25.718 & 16.768 \\ FEMALE & If person is female: 1 & 0.517 & 0.500 \\ CHILD & If age is less than 18: 1 & 0.401 & 0.490 \\ FEMCHILD & FEMALE $\times$ CHILD & 0.194 & 0.395 \\ BLACK & If race of household head is black: 1 & 0.184 & 0.387 \\ EDUCDEC & Education of the household head in years & 11.966 & 2.806 \\ PHYSLM & If the person has a physical limitation: 1 & 0.118 & 0.323 \\ DISEA & Index of chronic diseases & 11.244 & 6.742 \\ HLTHG & If self-rated health is good: 1$^\dagger$ & 0.362 & 0.481 \\ HLTHF & If self-rated health is fair: 1$^\dagger$ & 0.077 & 0.267 \\ HLTHP & If self-rated health is poor: 1$^\dagger$ & 0.015 & 0.121 \\ MHI & Mental Health Index & 76.554 & 12.503 \\ \hline \end{tabular}} \caption{Covariates description and summary statistics. $^\dagger$ indicates that the baseline individual is in excellent self-rated health.}\label{tab:X} \end{table} \subsection{Marginal inferences from the two-part mixture quantile regression model} In this Section we present the results of the application to the RHIE data. We estimated the proposed \textcolor{black}{LASSO penalized} two-part finite mixture quantile regression at different quantile levels of interest $ \tau = (0.1, 0.25, 0.5, 0.75, 0.9)$, for a varying number of mixture components $G = (1,\dots, 6)$. We used the same covariates for both the binary process and the positive part of the model, and we assumed that random intercepts are sufficient to capture heterogeneity between subjects, even though the generalization to multiple random effects is computationally inexpensive and straightforward thanks to the discrete mixture structure.
\textcolor{black}{Because the number of components $G$ is unknown a priori, we select it according to the Bayesian Information Criterion (BIC)\footnote{$\textnormal{BIC} = -2 \log (L) + \nu \log(N)$, where $\nu$ denotes the number of free model parameters and $N$ is the number of individuals.} \citep{schwarz1978estimating}. Standard errors are obtained using a parametric bootstrap approach: we refitted the model to 250 bootstrap samples simulated from the estimated model parameters, and approximated the standard error of each parameter by the standard deviation of its bootstrap estimates.} All computations have been implemented in the \verb!R! software environment (version 3.5.2) with \verb!C++! object-oriented programming.\\ Table \ref{tab:pen} reports the estimated penalized model coefficients and standard errors (in parentheses) for the binary (Panel A) and positive (Panel B) parts of the model. Parameter estimates are displayed in boldface when significant at the standard 5\% level. Panels C and D show the estimated masses of the bivariate discrete distribution of the random effects and the model fit summary at each quantile level, respectively.\\ \textcolor{black}{Consistent with the findings of \cite{deb2002structure, Duan1983} and \cite{bago2006latent}}, the selected number of mixture components $G$ is two for all investigated quantiles. Essentially, the two-part model demarcates two distinct subpopulations: a group of reluctant users who rarely access health services and a group of users who access them frequently. This claim is confirmed by the estimated mixing probabilities $(\pi_1, \pi_2)$ and locations $\textbf{b}_{1}, \textbf{b}_{2}$, which can be interpreted as high- and low-use groups.
Group separation becomes increasingly apparent as the quantile level increases, with estimated masses $(0.328, 0.672)$ at the 90-th quantile.\\ By looking at the binary process estimates in Table \ref{tab:pen} (Panel A), one can see that all coefficients are statistically different from zero at the 5\% significance level, except LFAM and MHI, and HLTHG and HLTHF at low and high quantile levels, respectively. The marginal inferences from the binary part of the two-part model suggest that the most important determinants of people's willingness to spend on health services are related to socio-cultural and economic factors: income, age, sex, race/ethnicity and education level have the expected signs and significance. The presence of a coinsurance health plan acts as a protection factor, reducing the probability of consuming health services. Ageing, presumably, involves complications such as heart disease, cancer, diabetes and many others that require hospitalization. The analysis also points out profound economic and ethnic inequalities.\\ Firstly, the household's economic situation has a significant impact on spending choices: a low income level considerably restricts access to medical treatment. Indeed, empirical analyses show that families with a low socioeconomic status, as measured by income, remain at a disadvantage, and non-financial barriers to care must also be addressed to reduce inequities; see \cite{lasser2006access, braveman2010socioeconomic}. Secondly, the probability of spending varies greatly between white and black people: blacks are less likely than whites to spend on medical care. It is worth noting that the coefficient estimates vary with the quantile level, generally showing an upward or downward trend as the quantile increases.
This may be due to the presence of correlation between the binary and the positive processes, which is channeled through the latent structure defined by the discrete bivariate random effects.\\ Moving on to the analysis of the positive outcomes (Table \ref{tab:pen}, Panel B), our results are generally consistent with those of \cite{deb2002structure}. A first observation is that not all variables have the same effect on healthcare consumption: coefficients change in sign and magnitude, and some of them are set to zero at specific quantile levels. \textcolor{black}{This highlights the importance of a quantile regression approach, because such effects could not be detected by classical mean regression.} As previously described, the presence of health insurance seems to act as a safeguard: insured people spend less. This may reflect the fact that health care is largely provided by private hospitals and clinics. The results also indicate that individuals in larger families are more likely to be in the low-use group, especially up to the 75-th quantile. As expected, the economic situation appears to be one of the main barriers to healthcare access for low-income households. With respect to gender, females exhibit a higher spending pattern up to the 75-th percentile, which may be driven by hospitalizations related to childbirth and its complications.\\ Other covariates also appear to have some explanatory power, notably XAGE, CHILD, FEMCHILD and BLACK. The elderly undergo frequent check-ups and require more health services due to the ageing phenomena to which they may be exposed. The interaction term, FEMCHILD, and BLACK are negatively associated with the response, indicating that, in the latter case, racial differences in the amount spent on medical care persist, confirming that, among individuals who spend on care, blacks have lower expenditures than whites.
Moreover, highly literate and educated individuals tend to follow a healthy lifestyle, be less reluctant to visit a physician, and thus spend more to preserve and improve their health. Variables controlling for the health status of the individual, such as PHYSLM and DISEA, are positively associated with healthcare expenses: supposedly, disabilities, physical limitations and diagnoses of chronic illnesses require highly intensive, technology-based care or newer, more costly treatments. Also, self-perceived health condition indicators are all positively associated with health expenses, with a much steeper slope at the lower percentiles; meanwhile, their impact seems to be negligible above the 75-th quantile.\\ \textcolor{black}{Moreover, it is possible to see that the impact of several variables varies across quantiles: the effect of LOGC and BLACK is not uniform as the quantile increases, but is more pronounced in the left tail of the distribution of health expenditures.} Also, variables such as LINC, FEMALE, CHILD and FEMCHILD exhibit nonlinear effects: in particular, CHILD and FEMCHILD have a U-shaped and an inverted U-shaped effect, respectively. EDUCDEC changes sign and magnitude across the quantile levels as well. As regards the estimated random intercepts, $\textbf{b}_{1k}$, we notice that the estimates increase with $\tau$, which is consistent with increasing values of healthcare spending.\\ \textcolor{black}{We conclude the analysis by comparing our methodology with the Linear Quantile Mixed Model (LQMM) of \cite{Geraci2014}. In particular, we fit an LQMM only on the positive values of the dependent variable under the assumption that the distribution of the random intercepts is a bivariate Gaussian, while ignoring both the correlation between the binary and positive processes and the sparsity induced through the LASSO regularization.
By looking at the parameter estimates in Table \ref{tab:unpen}, we notice slight differences with respect to the results obtained using our methodology at low quantiles, while, as we move from the left to the right tail of the response distribution, there are considerable differences. Such discrepancies between the two models might be traced back to the fact that the distribution of the random effects is not Gaussian. Indeed, the semiparametric mixture approach and the LQMM perform equivalently only if the random coefficients are Gaussian; otherwise, the semiparametric mixture performs better, being more flexible and able to accommodate departures from the Gaussianity assumption (see \cite{alfo2017finite}).} \begin{table}[h] \centering \smallskip \resizebox{0.85\columnwidth}{!}{ \setlength\tabcolsep{17pt} \begin{tabular}{l c c c c c } \hline Covariate &\multicolumn{5}{c}{$\tau$-th quantile} \\\cmidrule(r){2-6} & 10 & 25 & 50 & 75 & 90 \\ \hline Panel A: Binary process ($\mathbb{P}\textnormal{r}(y_{it} = 0)$)\\ LOGC & $\mathbf{0.413} \; (0.022)$ & $\mathbf{0.414} \; (0.025)$ & $\mathbf{0.406} \; (0.026)$ & $\mathbf{0.379} \; (0.022)$ & $\mathbf{0.377} \; (0.019)$ \\ LFAM & $-0.039 \; (0.025)$ & $-0.016 \; (0.026)$ & $-0.021 \; (0.027)$ & $0.014 \; (0.024)$ & $-0.017 \; (0.022)$ \\ LINC & $\mathbf{-0.190} \; (0.022)$ & $\mathbf{-0.196} \; (0.024)$ & $\mathbf{-0.186} \; (0.025)$ & $\mathbf{-0.175} \; (0.022)$ & $\mathbf{-0.097} \; (0.019)$ \\ XAGE & $\mathbf{-0.101} \; (0.037)$ & $-0.066 \; (0.039)$ & $-0.059 \; (0.041)$ & $\mathbf{-0.082} \; (0.038)$ & $\mathbf{-0.120} \; (0.034)$ \\ FEMALE & $\mathbf{-0.841} \; (0.061)$ & $\mathbf{-0.974} \; (0.067)$ & $\mathbf{-0.972} \; (0.067)$ & $\mathbf{-0.860} \; (0.059)$ & $\mathbf{-0.689} \; (0.054)$ \\ CHILD & $\mathbf{-0.450} \; (0.088)$ & $\mathbf{-0.465} \; (0.094)$ & $\mathbf{-0.426} \; (0.096)$ & $\mathbf{-0.511} \; (0.087)$ & $\mathbf{-0.432} \; (0.080)$ \\ FEMCHILD & $\mathbf{0.882} \; (0.087)$ & $\mathbf{0.919} \;
(0.095)$ & $\mathbf{0.921} \; (0.094)$ & $\mathbf{0.873} \; (0.088)$ & $\mathbf{0.695} \; (0.074)$ \\ BLACK & $\mathbf{1.422} \; (0.058)$ & $\mathbf{1.351} \; (0.059)$ & $\mathbf{1.443} \; (0.066)$ & $\mathbf{1.229} \; (0.049)$ & $\mathbf{1.134} \; (0.045)$ \\ EDUCDEC & $\mathbf{-0.160} \; (0.023)$ & $\mathbf{-0.168} \; (0.024)$ & $\mathbf{-0.192} \; (0.025)$ & $\mathbf{-0.151} \; (0.021)$ & $\mathbf{-0.163} \; (0.020)$ \\ PHYSLM & $\mathbf{-0.543} \; (0.078)$ & $\mathbf{-0.725} \; (0.082)$ & $\mathbf{-0.720} \; (0.088)$ & $\mathbf{-0.590} \; (0.078)$ & $\mathbf{-0.500} \; (0.073)$ \\ DISEA & $\mathbf{-0.287} \; (0.028)$ & $\mathbf{-0.278} \; (0.028)$ & $\mathbf{-0.276} \; (0.029)$ & $\mathbf{-0.237} \; (0.027)$ & $\mathbf{-0.246} \; (0.024)$ \\ HLTHG & $\mathbf{0.123} \; (0.048)$ & $0.066 \; (0.053)$ & $0.020 \; (0.054)$ & $\mathbf{0.124} \; (0.047)$ & $\mathbf{0.117} \; (0.044)$ \\ HLTHF & $\mathbf{-0.207} \; (0.091)$ & $-0.178 \; (0.098)$ & $\mathbf{-0.272} \; (0.108)$ & $\mathbf{-0.271} \; (0.089)$ & $-0.131 \; (0.083)$ \\ HLTHP & $\mathbf{-1.026} \; (0.246)$ & $\mathbf{-0.996} \; (0.258)$ & $\mathbf{-1.011} \; (0.259)$ & $\mathbf{-0.903} \; (0.231)$ & $\mathbf{-0.635} \; (0.209)$ \\ MHI & $-0.016 \; (0.023)$ & $-0.037 \; (0.025)$ & $\mathbf{-0.057} \; (0.026)$ & $-0.026 \; (0.022)$ & $-0.030 \; (0.021)$ \\ $\textbf{b}_{01}$ & $\mathbf{-2.671} \; (0.072)$ & $\mathbf{-2.781} \; (0.081)$ & $\mathbf{-2.897} \; (0.086)$ & $\mathbf{-2.803} \; (0.084)$ & $\mathbf{-2.193} \; (0.066)$ \\ $\textbf{b}_{02}$ & $\mathbf{-0.476} \; (0.052)$ & $\mathbf{-0.326} \; (0.056)$ & $\mathbf{-0.299} \; (0.057)$ & $\mathbf{-0.686} \; (0.053)$ & $\mathbf{-0.904} \; (0.049)$ \\ \hline Panel B: Positive process\\ LOGC & $-0.196$ & $-0.192$ & $-0.142$ & $-0.091$ & $-0.081$ \\ LFAM & $-0.075$ & $-0.098$ & $-0.057$ & $-0.106$ & $-$ \\ LINC & $0.103$ & $0.088$ & $0.068$ & $0.122$ & $-$ \\ XAGE & $0.219$ & $0.194$ & $0.165$ & $0.154$ & $0.359$ \\ FEMALE & $0.287$ & $0.351$ & $0.270$ & $0.400$ 
& $-$ \\ CHILD & $0.096$ & $-$ & $-0.204$ & $-0.097$ & $-0.001$ \\ FEMCHILD & $-0.299$ & $-0.259$ & $-0.230$ & $-0.461$ & $-0.006$ \\ BLACK & $-0.547$ & $-0.379$ & $-0.436$ & $-0.243$ & $-$ \\ EDUCDEC & $0.054$ & $0.039$ & $0.044$ & $-0.017$ & $0.012$ \\ PHYSLM & $0.189$ & $0.404$ & $0.371$ & $0.437$ & $0.281$ \\ DISEA & $0.186$ & $0.158$ & $0.139$ & $0.073$ & $0.072$ \\ HLTHG & $0.002$ & $0.059$ & $0.100$ & $0.011$ & $-$ \\ HLTHF & $0.294$ & $0.209$ & $0.358$ & $0.411$ & $-$ \\ HLTHP & $0.801$ & $0.588$ & $0.613$ & $0.700$ & $-$ \\ MHI & $-0.039$ & $-0.043$ & $-0.029$ & $-0.060$ & $-0.053$ \\ $\textbf{b}_{11}$ & $\mathbf{3.107} \; (0.015)$ & $\mathbf{3.590} \; (0.020)$ & $\mathbf{4.353} \; (0.025)$ & $\mathbf{5.557} \; (0.020)$ & $\mathbf{6.771} \; (0.008)$ \\ $\textbf{b}_{12}$ & $\mathbf{1.700} \; (0.016)$ & $\mathbf{2.362} \; (0.020)$ & $\mathbf{3.285} \; (0.026)$ & $\mathbf{4.044} \; (0.018)$ & $\mathbf{4.705} \; (0.007)$ \\ $\sigma_\tau$ & $\mathbf{0.187} \; (0.001)$ & $\mathbf{0.355} \; (0.003)$ & $\mathbf{0.471} \; (0.004)$ & $\mathbf{0.372} \; (0.003)$ & $\mathbf{0.187} \; (0.001)$ \\ $\lambda_\tau$ & $0.013$ & $0.018$ & $0.018$ & $0.032$ & $0.566$ \\ \hline Panel C: \\ $\pi_1$ & $0.495 \; (0.006)$ & $0.484 \; (0.008)$ & $0.487 \; (0.009)$ & $\mathbf{0.337} \; (0.008)$ & $\mathbf{0.328} \; (0.007)$ \\ $\pi_2$ & $0.505 \; (0.006)$ & $0.516 \; (0.008)$ & $0.513 \; (0.009)$ & $\mathbf{0.663} \; (0.008)$ & $\mathbf{0.672} \; (0.007)$ \\ \hline Panel D: \\ $\log(L)$ & -10349.09 & -7446.91 & -6742.39 & -9394.92 & -12392.29 \\ $\#$ par & 36 & 35 & 36 & 36 & 29 \\ AIC & 20770.18 & 14963.82 & 13556.77 & 18861.84 & 24844.59 \\ BIC & 21055.04 & 15240.77 & 13841.63 & 19146.70 & 25081.97 \\ \hline \end{tabular}} \caption{\footnotesize Penalized two-part finite mixture of quantile regressions coefficient estimates. Panel A refers to the binary part while Panel B refers to the positive part of the model for the investigated quantile levels. 
Panel C illustrates the estimated mixing probabilities. Panel D reports the log-likelihood, number of nonzero model parameters and penalized likelihood criteria (AIC, BIC).} \label{tab:pen} \end{table} \begin{table}[h] \centering \smallskip \resizebox{1.0\columnwidth}{!}{ \setlength\tabcolsep{13pt} \begin{tabular}{l c c c c c } \hline Covariate &\multicolumn{5}{c}{$\tau$-th quantile} \\\cmidrule(r){2-6} & 10 & 25 & 50 & 75 & 90 \\ \hline LOGC & $\mathbf{-0.202} \; (0.026)$ & $\mathbf{-0.159} \; (0.017)$ & $\mathbf{-0.147} \; (0.017)$ & $\mathbf{-0.100} \; (0.025)$ & $\mathbf{-0.102} \; (0.026)$ \\ LFAM & $\mathbf{-0.130} \; (0.023)$ & $\mathbf{-0.108} \; (0.018)$ & $\mathbf{-0.116} \; (0.019)$ & $-0.039 \; (0.030)$ & $\mathbf{-0.061} \; (0.025)$ \\ LINC & $\mathbf{0.062} \; (0.028)$ & $\mathbf{0.066} \; (0.022)$ & $\mathbf{0.055} \; (0.018)$ & $\mathbf{0.079} \; (0.035)$ & $0.055 \; (0.034)$ \\ XAGE & $\mathbf{0.232} \; (0.028)$ & $\mathbf{0.169} \; (0.023)$ & $\mathbf{0.173} \; (0.024)$ & $\mathbf{0.115} \; (0.029)$ & $\mathbf{0.110} \; (0.035)$ \\ FEMALE & $\mathbf{0.328} \; (0.036)$ & $\mathbf{0.244} \; (0.029)$ & $\mathbf{0.280} \; (0.031)$ & $\mathbf{0.377} \; (0.041)$ & $\mathbf{0.368} \; (0.039)$ \\ CHILD & $\mathbf{0.064} \; (0.033)$ & $-0.020 \; (0.030)$ & $\mathbf{-0.083} \; (0.036)$ & $\mathbf{-0.279} \; (0.040)$ & $\mathbf{-0.242} \; (0.042)$ \\ FEMCHILD & $\mathbf{-0.254} \; (0.035)$ & $\mathbf{-0.281} \; (0.033)$ & $\mathbf{-0.266} \; (0.037)$ & $\mathbf{-0.464} \; (0.042)$ & $\mathbf{-0.478} \; (0.044)$ \\ BLACK & $\mathbf{-0.192} \; (0.035)$ & $\mathbf{-0.359} \; (0.039)$ & $\mathbf{-0.315} \; (0.042)$ & $\mathbf{-0.290} \; (0.050)$ & $\mathbf{-0.305} \; (0.055)$ \\ EDUCDEC & $0.056 \; (0.046)$ & $\mathbf{0.051} \; (0.016)$ & $0.023 \; (0.017)$ & $-0.016 \; (0.028)$ & $-0.011 \; (0.027)$ \\ PHYSLM & $\mathbf{0.326} \; (0.037)$ & $\mathbf{0.274} \; (0.042)$ & $\mathbf{0.342} \; (0.046)$ & $\mathbf{0.428} \; (0.049)$ & $\mathbf{0.450} \; (0.045)$ \\ 
DISEA & $\mathbf{0.103} \; (0.030)$ & $\mathbf{0.150} \; (0.022)$ & $\mathbf{0.115} \; (0.020)$ & $\mathbf{0.057} \; (0.023)$ & $\mathbf{0.074} \; (0.024)$ \\ HLTHG & $0.024 \; (0.044)$ & $\mathbf{0.095} \; (0.030)$ & $\mathbf{0.073} \; (0.033)$ & $\mathbf{0.087} \; (0.042)$ & $\mathbf{0.148} \; (0.044)$ \\ HLTHF & $\mathbf{0.268} \; (0.038)$ & $\mathbf{0.357} \; (0.042)$ & $\mathbf{0.327} \; (0.049)$ & $\mathbf{0.332} \; (0.043)$ & $\mathbf{0.272} \; (0.047)$ \\ HLTHP & $\mathbf{0.573} \; (0.026)$ & $\mathbf{0.631} \; (0.046)$ & $\mathbf{0.657} \; (0.040)$ & $\mathbf{0.866} \; (0.038)$ & $\mathbf{0.698} \; (0.037)$ \\ MHI & $-0.013 \; (0.024)$ & $\mathbf{-0.072} \; (0.018)$ & $\mathbf{-0.037} \; (0.015)$ & $\mathbf{-0.080} \; (0.025)$ & $\mathbf{-0.077} \; (0.023)$ \\ Intercept & $\mathbf{2.497} \; (0.036)$ & $\mathbf{3.153} \; (0.024)$ & $\mathbf{3.845} \; (0.027)$ & $\mathbf{4.637} \; (0.034)$ & $\mathbf{4.833} \; (0.030)$ \\ $\sigma_\tau$ & $\mathbf{0.155} \; (0.001)$ & $\mathbf{0.321} \; (0.003)$ & $\mathbf{0.430} \; (0.004)$ & $\mathbf{0.340} \; (0.003)$ & $\mathbf{0.291} \; (0.003)$ \\ \hline \end{tabular}} \caption{\footnotesize LQMM coefficient estimates for the positive outcomes at the investigated quantile levels. Standard errors are in parentheses. Parameter estimates are displayed in boldface when significant at the standard 5\% level.} \label{tab:unpen} \end{table} \section{Conclusions}\label{sec:con} This paper introduces a two-part finite mixture of quantile regressions for mixed-type outcomes in a longitudinal setting. Random effect coefficients are added to both the binary and the positive decision mechanisms to account for zero inflation and unobserved heterogeneity. Rather than assuming a parametric distribution for the random coefficients, we approximate their distribution using a multivariate discrete variable defined on a finite number of support points. Estimation of the model parameters is based on a suitable likelihood-based EM algorithm. 
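To make the quantile machinery concrete, the following minimal sketch (ours, not code from the paper; function names are illustrative) implements the check (pinball) loss $\rho_\tau(u) = u\,(\tau - \mathbb{1}\{u < 0\})$ on which quantile regression rests: minimizing the summed loss over a constant recovers a sample $\tau$-th quantile, and in likelihood-based EM schemes of this kind the loss enters through an asymmetric Laplace working likelihood.

```python
def check_loss(u, tau):
    """Check (pinball) loss: rho_tau(u) = u * (tau - I(u < 0))."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def sample_quantile(ys, tau):
    """Brute-force minimizer of the summed check loss over the observed
    values; the minimizer coincides with a sample tau-quantile."""
    return min(ys, key=lambda q: sum(check_loss(y - q, tau) for y in ys))

ys = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
assert sample_quantile(ys, 0.5) in (5.0, 6.0)    # a sample median
assert sample_quantile(ys, 0.9) in (9.0, 10.0)   # an upper-tail quantile
```

The asymmetry of the loss (overshooting costs $\tau$ per unit, undershooting $1-\tau$) is what shifts the fitted line from the conditional mean to the conditional $\tau$-quantile.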
In addition, a LASSO-penalized version of the algorithm is proposed as an automatic, data-driven procedure for variable selection. The application of the proposed method to the RHIE data on health behaviors and attitudes yields results consistent with existing studies.\\ The proposed approach can be extended in several directions. First, time-varying sources of unobserved heterogeneity could be accommodated via individual-specific coefficients evolving according to a hidden Markov chain. Second, the univariate quantile framework may be extended to a multivariate quantile regression setting that accounts for the correlation among the marginals of a multivariate response variable (\cite{petrella2019joint}).
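As a small consistency check (a sketch added here, not part of the original analysis), the AIC values in Panel D of the penalized-model table can be recomputed from the reported log-likelihoods and nonzero-parameter counts via $\mathrm{AIC} = 2k - 2\log L$; the first four quantile columns match up to rounding, while the $\tau = 0.90$ column differs by about 2, consistent with one additional parameter entering the count.

```python
# Reported (log-likelihood, #nonzero parameters, AIC) per quantile level,
# copied from Panel D of the penalized-model table.
panel_d = {
    0.10: (-10349.09, 36, 20770.18),
    0.25: (-7446.91, 35, 14963.82),
    0.50: (-6742.39, 36, 13556.77),
    0.75: (-9394.92, 36, 18861.84),
    0.90: (-12392.29, 29, 24844.59),
}

def aic(loglik, k):
    """Akaike information criterion: AIC = 2k - 2*log(L)."""
    return 2 * k - 2 * loglik

# The first four columns reproduce the reported AIC up to rounding.
for tau in (0.10, 0.25, 0.50, 0.75):
    loglik, k, reported = panel_d[tau]
    assert abs(aic(loglik, k) - reported) < 0.02

# The 0.90 column is off by ~2, i.e. consistent with k + 1 parameters.
loglik, k, reported = panel_d[0.90]
assert abs(aic(loglik, k + 1) - reported) < 0.02
```

The same arithmetic with $\mathrm{BIC} = k \log(n) - 2 \log L$ would additionally require the sample size $n$, which the table does not report, so BIC is left unchecked here.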